LLMConfiguration

This functionality requires LLM access.

LLMConfiguration[]

represents a configuration for an LLM.

LLMConfiguration[prop->val]

creates a configuration based on $LLMEvaluator with the property prop set to val.

LLMConfiguration[<|prop1->val1,prop2->val2,...|>]

specifies several properties and values.

LLMConfiguration[LLMConfiguration[...],propspec]

creates a configuration based on an existing configuration.

Details

  • LLMConfiguration objects can be used with functions such as LLMSynthesize, ChatObject, and ChatEvaluate through the LLMEvaluator option.
  • The default configuration is given by $LLMEvaluator, which is itself an LLMConfiguration object.
  • Supported properties of LLMConfiguration objects include:
  • "MaxTokens"maximum amount of tokens to generate
    "Model"base model
    "PromptDelimiter"string to insert between prompts
    "Prompts"initial prompts or LLMPromptGenerator objects
    "StopTokens"tokens on which to stop generation
    "Temperature"sampling temperature
    "ToolMethod"method to use for tool calling
    "Tools"list of LLMTool objects to make available
    "TopProbabilities"sampling classes cutoff
    "TotalProbabilityCutoff"sampling probability cutoff (nucleus sampling)
  • Valid settings for "Model" include:
  • name    named model
    {service,name}    named model from service
    <|"Service"->service,"Name"->name|>    fully specified model
  • Text generated by an LLM is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMConfiguration:
  • "Temperature"tAutomaticsample using a positive temperature t
    "TopProbabilities"kAutomaticsample only among the k highest-probability classes
    "TotalProbabilityCutoff"pAutomaticsample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
  • The Automatic value of these parameters uses the default for the specified "Model".
  • Valid settings for "Prompts" include:
  • "string"static text
    LLMPrompt["name"]a repository prompt
    LLMPromptGenerator[]an LLMPromptGenerator object
    {prompt1,}a list of prompts
  • The setting for "PromptDelimiter" determines how multiple prompts are joined.
  • Valid settings for "ToolMethod" include:
    Automatic    use tools when supported by service
    "Service"    rely on the tool mechanism of service
    "Textual"    use prompt-based tool calling
    assoc    specific textual prompting and parsing
  • Valid keys in assoc include (see the sketch at the end of these details):
  • "ToolPrompt"prompt specifying tool format
    "ToolRequestParser"function for parsing tool requests
    "ToolResponseInsertionFunction"function for serializing tool responses
  • The prompt specified by "ToolPrompt" is only used if at least one tool is specified.
  • "ToolPrompt" can be a template, and is applied to an association containing all properties of the LLMConfiguration.
  • "ToolRequestParser" specifies a function that takes the most recent completion from the LLM and returns one of the following forms:
  • None    no tool request
    {{start,end},LLMToolRequest[...]}    tool request
    {{start,end},Failure[...]}    invalid tool request
  • The pair of integers {start,end} indicates the character range within the completion string where the tool request appears.
  • Not all LLM services support every parameter that can be specified in the LLMConfiguration.
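
A hedged sketch of a custom textual tool method follows; the tool definition, the template text, and the parser body are illustrative assumptions, not part of this specification:

  (* a simple no-argument tool; see LLMTool for the full argument specification *)
  tool = LLMTool[{"CurrentDate", "get the current date"}, {}, Function[DateString[]]];

  LLMConfiguration[<|
    "Tools" -> {tool},
    "ToolMethod" -> <|
      (* a template applied to an association of all configuration properties *)
      "ToolPrompt" -> StringTemplate["You may request the following tools: `Tools`"],
      (* placeholder parser: None means the completion contains no tool request;
         a real parser would locate a request and return {{start, end}, LLMToolRequest[...]} *)
      "ToolRequestParser" -> Function[completion, None]
    |>
  |>]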

Examples

Basic Examples  (3)

Create a configuration that includes a prompt:
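
A sketch, with an illustrative prompt text:

  config = LLMConfiguration["Prompts" -> "You answer in the style of a pirate."]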

Use the configuration in an LLM evaluation:
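
For example, assuming access to an LLM service is configured (actual output will vary):

  LLMSynthesize["Why is the sky blue?", LLMEvaluator -> config]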

Specify multiple properties of a configuration:
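
A sketch; the model name is an illustrative assumption:

  LLMConfiguration[<|"Model" -> "gpt-4o", "Temperature" -> 0.7, "MaxTokens" -> 100|>]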

Modify an existing configuration:
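
For example, overriding a single property of the configuration created above:

  LLMConfiguration[config, "Temperature" -> 0]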

Scope  (9)

Specify a token limit to the LLM-generated text:
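
For example, with an illustrative limit of 30 tokens:

  LLMConfiguration["MaxTokens" -> 30]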

Specify the service and the model to use for the generation:
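
A sketch; the service and model names are illustrative:

  LLMConfiguration["Model" -> {"OpenAI", "gpt-4o"}]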

Specify several prompts and how to join them together before submitting them to the LLM:
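
A sketch, with illustrative prompts joined by a newline:

  LLMConfiguration[<|
    "Prompts" -> {"Answer concisely.", "Use metric units."},
    "PromptDelimiter" -> "\n"
  |>]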

Specify that the sampling should be performed at zero temperature:
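
For example:

  LLMConfiguration["Temperature" -> 0]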

Specify the maximum cumulative probability before cutting off the distribution (aka nucleus sampling):
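
A sketch; the cutoff value of 0.15 is illustrative:

  LLMConfiguration["TotalProbabilityCutoff" -> 0.15]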

Specify the number of top-probability tokens to sample from:
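
A sketch; the value of 5 is illustrative:

  LLMConfiguration["TopProbabilities" -> 5]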

Specify one or more alternative strings that will stop the LLM generation process:
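
A sketch; the stop strings are illustrative:

  LLMConfiguration["StopTokens" -> {"\n\n", "THE END"}]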

Specify a tool that the LLM can call if needed:
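
A sketch of a simple no-argument tool (see LLMTool for the full argument specification):

  tool = LLMTool[{"RandomNumber", "generate a random real number between 0 and 1"}, {}, RandomReal[] &];
  LLMConfiguration["Tools" -> {tool}]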

Specify that tool calls should attempt to use the native API mechanism:
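
For example, reusing the tool defined above:

  LLMConfiguration[<|"Tools" -> {tool}, "ToolMethod" -> "Service"|>]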

Compare with the alternative text-based method (which performs a single call at a time):
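
For example:

  LLMConfiguration[<|"Tools" -> {tool}, "ToolMethod" -> "Textual"|>]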

Text

Wolfram Research (2023), LLMConfiguration, Wolfram Language function, https://reference.wolfram.com/language/ref/LLMConfiguration.html.

CMS

Wolfram Language. 2023. "LLMConfiguration." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/LLMConfiguration.html.

APA

Wolfram Language. (2023). LLMConfiguration. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LLMConfiguration.html

BibTeX

@misc{reference.wolfram_2024_llmconfiguration, author="Wolfram Research", title="{LLMConfiguration}", year="2023", howpublished="\url{https://reference.wolfram.com/language/ref/LLMConfiguration.html}", note="Accessed: 21-December-2024"}

BibLaTeX

@online{reference.wolfram_2024_llmconfiguration, organization={Wolfram Research}, title={LLMConfiguration}, year={2023}, url={https://reference.wolfram.com/language/ref/LLMConfiguration.html}, note={Accessed: 21-December-2024}}