LLMSynthesizeSubmit
Details and Options




- LLMSynthesizeSubmit[prompt] generates text asynchronously according to the instructions in prompt using a large language model (LLM). It can create content, complete sentences, extract information and more.
- LLMSynthesizeSubmit requires external service authentication, billing and internet connectivity.
- Possible values for prompt include:
    "text"              static text
    LLMPrompt["name"]   a repository prompt
    StringTemplate[…]   templated text
    TemplateObject[…]   template for creating a prompt
    Image[…]            a static image (not supported by all LLMs)
    {prompt1,…}         a list of prompts
- Static content in prompt can be disambiguated using an explicit association syntax:
    <|"Type"->"Text","Data"->data|>    an explicit text part
    <|"Type"->"Image","Data"->data|>   an explicit image part (supports File[…] objects)
- Template objects are automatically converted to strings via TemplateObject[…][].
- Prompts created with TemplateObject can contain text and images.
- LLMSynthesizeSubmit returns a TaskObject[…].
- The following options can be specified:
    Authentication         Inherited   explicit user ID and API key
    HandlerFunctions                   how to handle generated events
    HandlerFunctionsKeys   Automatic   parameters to supply to handler functions
    LLMEvaluator           Inherited   LLM configuration to use
- During the asynchronous execution of LLMSynthesizeSubmit, events can be generated.
- Events triggered by the LLM:
    "ContentChunkReceived"       incremental message content received
    "StoppingReasonReceived"     stopping reason for the generation received
    "MetadataReceived"           other metadata received
    "ToolRequestReceived"        LLMToolRequest[…] received
    "UsageInformationReceived"   incremental usage information received
- Events triggered by local processing:
    "CompletionGenerated"     the completion is generated
    "ToolResponseGenerated"   an LLMToolResponse[…] is generated
- Events triggered by the task framework:
    "FailureOccurred"     failure is generated during the computation
    "TaskFinished"        task is completely finished
    "TaskRemoved"         task is being removed
    "TaskStarted"         task is started
    "TaskStatusChanged"   task status changed
- HandlerFunctions->f uses f for all the events.
- With the specification HandlerFunctions-><|…,"eventi"->fi,…|>, fi[assoc] is evaluated whenever eventi is generated. The elements of assoc have keys specified by the setting for HandlerFunctionsKeys.
- Possible keys specified by HandlerFunctionsKeys include:
    "CompletionText"        textual answer by the LLM
    "CompletionToolsText"   textual answer including tool interactions
    "ContentChunk"          a message part
    "EventName"             the name of the event being handled
    "Failure"               failure object generated if task failed
    "FullText"              string representation of "History"
    "History"               complete history including prompt and completion
    "Model"                 model used to generate the message
    "Prompt"                content submitted to the LLM
    "PromptText"            string representation of "Prompt"
    "StoppingReason"        why the generation has stopped
    "Task"                  the task object generated by LLMSynthesizeSubmit
    "TaskStatus"            the status of the task
    "Timestamp"             timestamp of the message
    "ToolRequest"           last generated LLMToolRequest[…]
    "ToolRequests"          list of LLMToolRequest objects
    "ToolResponse"          last generated LLMToolResponse[…]
    "ToolResponses"         list of LLMToolResponse objects
    "Usage"                 token usage
    "UsageIncrement"        token usage update
    {key1,…}                a list of keys
    All                     all keys
    Automatic               figures out the keys from HandlerFunctions
- Values that have not yet been received are given as Missing["NotAvailable"].
- LLMEvaluator can be set to an LLMConfiguration object or an association with any of the following keys:
    "MaxTokens"                maximum number of tokens to generate
    "Model"                    base model
    "PromptDelimiter"          string to insert between prompts
    "Prompts"                  initial prompts or LLMPromptGenerator objects
    "StopTokens"               tokens on which to stop generation
    "Temperature"              sampling temperature
    "ToolMethod"               method to use for tool calling
    "Tools"                    list of LLMTool objects to make available
    "TopProbabilities"         sampling classes cutoff
    "TotalProbabilityCutoff"   sampling probability cutoff (nucleus sampling)
- Valid forms of "Model" include:
    name                                  named model
    {service,name}                        named model from service
    <|"Service"->service,"Name"->name|>   fully specified model
- Multiple prompts are separated by the "PromptDelimiter" property.
- The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of the LLMEvaluator:
    "Temperature"->t              Automatic   sample using a positive temperature t
    "TopProbabilities"->k         Automatic   sample only among the k highest-probability classes
    "TotalProbabilityCutoff"->p   Automatic   sample among the most probable choices with an accumulated probability of at least p (nucleus sampling)
- The Automatic value of these parameters uses the default for the specified "Model".
- Possible values for "ToolMethod" include:
    "Service"   rely on the tool mechanism of service
    "Textual"   use prompt-based tool calling
- Possible values for Authentication are:
    Automatic          choose the authentication scheme automatically
    Environment        check for a key in the environment variables
    SystemCredential   check for a key in the system keychain
    ServiceObject[…]   inherit the authentication from a service object
    assoc              provide explicit key and user ID
- With Authentication->Automatic, the function checks the variable ToUpperCase[service]<>"_API_KEY" in Environment and SystemCredential; otherwise, it uses ServiceConnect[service].
- When using Authentication->assoc, assoc can contain the following keys:
    "ID"       user identity
    "APIKey"   API key used to authenticate
- LLMSynthesizeSubmit uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.
Examples
Basic Examples (2): Summary of the most common use cases
Start an asynchronous text generation task:
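A minimal sketch, assuming a configured LLM service (the prompt text is illustrative):

    task = LLMSynthesizeSubmit["Write a haiku about dogs.", (* illustrative prompt *)
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]

Wait for the task; the handler prints the completion once it is generated:

    TaskWait[task]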


Dynamically retrieve text generated from a simple prompt:
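One possible sketch: accumulate streamed chunks into a variable and display it with Dynamic (this assumes each chunk arrives as a string):

    text = "";
    LLMSynthesizeSubmit["Tell a very short story about a lighthouse.", (* illustrative prompt *)
      HandlerFunctions -> <|"ContentChunkReceived" ->
        (If[StringQ[#ContentChunk], text = text <> #ContentChunk] &)|>,
      HandlerFunctionsKeys -> {"ContentChunk"}];
    Dynamic[text]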

Show all the generation steps:
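One way to do this is a single handler for all events, printing each event name as it arrives:

    LLMSynthesizeSubmit["Say hello.", (* illustrative prompt *)
      HandlerFunctions -> (Print[#EventName] &),
      HandlerFunctionsKeys -> {"EventName"}]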


Scope (3): Survey of the scope of standard use cases
Synthesize text based on a prompt:
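For example, collect the completion in a variable (the prompt is illustrative):

    result = None;
    task = LLMSynthesizeSubmit["Write a short poem about the Moon.",
      HandlerFunctions -> <|"CompletionGenerated" -> ((result = #CompletionText) &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}];

Wait for the task to finish, then read the result:

    TaskWait[task]; result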


Use a prompt with both text and images:
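A sketch using a built-in test image; this assumes the configured model accepts image input:

    img = ExampleData[{"TestImage", "House"}];
    LLMSynthesizeSubmit[{"Describe this image in one sentence:", img},
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]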


Specify a different property to return:
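For instance, ask the handler for "FullText" instead of "CompletionText":

    LLMSynthesizeSubmit["What is the capital of France?", (* illustrative prompt *)
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#FullText] &)|>,
      HandlerFunctionsKeys -> {"FullText"}]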


Inspect the prompt together with the completion:
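One possibility is the "History" key, which includes both the prompt and the completion:

    LLMSynthesizeSubmit["Name three primary colors.", (* illustrative prompt *)
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#History] &)|>,
      HandlerFunctionsKeys -> {"History"}]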


Options (8): Common values & functionality for each option
Authentication (4)
Provide an authentication key for the API:
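For example (placeholder key shown):

    LLMSynthesizeSubmit["Hello!",
      Authentication -> <|"APIKey" -> "sk-XXXXXXXX"|>] (* placeholder key *)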


Provide both a user ID and the API key:
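For example (placeholder credentials):

    LLMSynthesizeSubmit["Hello!",
      Authentication -> <|"ID" -> "my-user-id", "APIKey" -> "sk-XXXXXXXX"|>]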


Store the API key using the operating system's keychain:
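For instance, following the ToUpperCase[service]<>"_API_KEY" naming convention, here for OpenAI:

    SystemCredential["OPENAI_API_KEY"] = "sk-XXXXXXXX"; (* placeholder key *)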

Look for the key in the system keychain:
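The submission can then pick the key up from the keychain:

    LLMSynthesizeSubmit["Hello!", Authentication -> SystemCredential]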


Store the API key in an environment variable:
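For example (placeholder key; same naming convention as above):

    SetEnvironment["OPENAI_API_KEY" -> "sk-XXXXXXXX"];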

Look for the key in the system environment:
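The submission can then pick the key up from the environment:

    LLMSynthesizeSubmit["Hello!", Authentication -> Environment]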


Authenticate via a service object:
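A sketch using an OpenAI connection (the first ServiceConnect call may prompt for a key):

    service = ServiceConnect["OpenAI"];

    LLMSynthesizeSubmit["Hello!", Authentication -> service]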


LLMEvaluator (4)
By default, the text generation continues until a termination token is generated:
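For instance, with no explicit limit the model stops at its own termination token:

    LLMSynthesizeSubmit["Write one sentence about the sea.", (* illustrative prompt *)
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]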


Limit the number of generated samples (tokens):
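For example, cap the generation at ten tokens:

    LLMSynthesizeSubmit["Tell me about the history of the bicycle.",
      LLMEvaluator -> <|"MaxTokens" -> 10|>,
      HandlerFunctions -> <|"CompletionGenerated" -> (Print[#CompletionText] &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}]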


Specify that the sampling should be performed at zero temperature:
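For example:

    LLMSynthesizeSubmit["Complete this sentence: Once upon a time", (* illustrative prompt *)
      LLMEvaluator -> <|"Temperature" -> 0|>]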


Specify a high temperature to get more variation in the generation:
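For example (the temperature value is illustrative):

    LLMSynthesizeSubmit["Complete this sentence: Once upon a time",
      LLMEvaluator -> <|"Temperature" -> 1.5|>]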


Specify the maximum cumulative probability before cutting off the distribution:
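For example (the cutoff value is illustrative):

    LLMSynthesizeSubmit["Suggest a name for a pet goldfish.",
      LLMEvaluator -> <|"TotalProbabilityCutoff" -> 0.2|>]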


Specify the service and the model to use for the generation:
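For example (service and model names are illustrative and subject to availability):

    LLMSynthesizeSubmit["Hello!",
      LLMEvaluator -> <|"Model" -> {"Anthropic", "claude-3-5-sonnet-20240620"}|>]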


Possible Issues (2): Common pitfalls and unexpected behavior
The text generation is not guaranteed to follow instructions to the letter:
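For instance, a large multiplication may come back subtly wrong; the exact reply varies between runs (the prompt is illustrative):

    result = None;
    task = LLMSynthesizeSubmit["What is 123456789^2? Reply with the number only.",
      HandlerFunctions -> <|"CompletionGenerated" -> ((result = #CompletionText) &)|>,
      HandlerFunctionsKeys -> {"CompletionText"}];
    TaskWait[task]; result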


Use exact arithmetic for precise computations:
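The same computation done directly in the Wolfram Language is exact:

    123456789^2
    (* 15241578750190521 *)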


The failures are silent if not caught:
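For example, a submission with an invalid key finishes with no visible output (placeholder key):

    task = LLMSynthesizeSubmit["Hello!",
      Authentication -> <|"APIKey" -> "invalid-key"|>];
    TaskWait[task]

Catching the failure requires a "FailureOccurred" handler:

    LLMSynthesizeSubmit["Hello!",
      Authentication -> <|"APIKey" -> "invalid-key"|>,
      HandlerFunctions -> <|"FailureOccurred" -> (Print[#Failure] &)|>,
      HandlerFunctionsKeys -> {"Failure"}]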


Text
Wolfram Research (2025), LLMSynthesizeSubmit, Wolfram Language function, https://reference.wolfram.com/language/ref/LLMSynthesizeSubmit.html.
CMS
Wolfram Language. 2025. "LLMSynthesizeSubmit." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/LLMSynthesizeSubmit.html.
APA
Wolfram Language. (2025). LLMSynthesizeSubmit. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LLMSynthesizeSubmit.html
BibTeX
@misc{reference.wolfram_2025_llmsynthesizesubmit, author="Wolfram Research", title="{LLMSynthesizeSubmit}", year="2025", howpublished="\url{https://reference.wolfram.com/language/ref/LLMSynthesizeSubmit.html}", note="Accessed: 30-April-2025"}
BibLaTeX
@online{reference.wolfram_2025_llmsynthesizesubmit, organization={Wolfram Research}, title={LLMSynthesizeSubmit}, year={2025}, url={https://reference.wolfram.com/language/ref/LLMSynthesizeSubmit.html}, note={Accessed: 30-April-2025}}