"OpenAI" (Service Connection)
Connecting & Authenticating
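A connection is created with ServiceConnect. As a minimal sketch (assuming a saved key, or one entered in the authentication dialog):

    openai = ServiceConnect["OpenAI"]  (* a ServiceObject used by the ServiceExecute calls below *)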
Requests
"TestConnection" — returns Success for working connection, Failure otherwise
"Completion" — create text completion for a given prompt
"Prompt" | (required) | the prompt for which to generate completions | |
"BestOf" | Automatic | number of completions to generate before selecting the "best" | |
"Echo" | Automatic | include the prompt in the completion | |
"FrequencyPenalty" | Automatic | penalize tokens based on their existing frequency in the text so far (between -2 and 2) | |
"LogProbs" | Automatic | include the log probabilities on the most likely tokens, as well as the chosen tokens (between 0 and 5) | |
"MaxTokens" | Automatic | maximum number of tokens to generate | |
"Model" | Automatic | name of the model to use | |
"N" | Automatic | number of completions to return | |
"PresencePenalty" | Automatic | penalize new tokens based on whether they appear in the text so far (between -2 and 2) | |
"StopTokens" | None | up to four strings where the API will stop generating further tokens | |
"Stream" | Automatic | return the result as server-sent events | |
"Suffix" | Automatic | suffix that comes after a completion | |
"Temperature" | Automatic | sampling temperature (between 0 and 2) | |
"ToolChoice" | Automatic | which (if any) tool is called by the model | |
"Tools" | Automatic | one or more LLMTool objects available to the model | |
"TotalProbabilityCutoff" | None | an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass | |
"User" | Automatic | unique identifier representing the end user |
"Chat" — create a response for the given chat conversation
"Messages" | (required) | a list of messages in the conversation, each given as an association with "Role" and "Content" keys | |
"FrequencyPenalty" | Automatic | penalize tokens based on their existing frequency in the text so far (between -2 and 2) | |
"LogProbs" | Automatic | include the log probabilities on the most likely tokens, as well as the chosen tokens (between 0 and 5) | |
"MaxTokens" | Automatic | maximum number of tokens to generate | |
"Model" | Automatic | name of the model to use | |
"N" | Automatic | number of chat completions to return | |
"PresencePenalty" | Automatic | penalize new tokens based on whether they appear in the text so far (between -2 and 2) | |
"StopTokens" | None | up to four strings where the API will stop generating further tokens | |
"Stream" | Automatic | return the result as server-sent events | |
"Suffix" | Automatic | suffix that comes after a completion | |
"Temperature" | Automatic | sampling temperature (between 0 and 2) | |
"ToolChoice" | Automatic | which (if any) tool is called by the model | |
"Tools" | Automatic | one or more LLMTool objects available to the model | |
"TotalProbabilityCutoff" | None | an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with the requested probability mass | |
"User" | Automatic | unique identifier representing the end user |
"Embedding" — create an embedding vector representing the input text
"Input" | (required) | one or a list of texts to get embeddings for | |
"EncodingFormat" | Automatic | format to return the embeddings | |
"EncodingLength" | Automatic | number of dimensions of the result | |
"Model" | Automatic | name of the model to use | |
"User" | Automatic | unique identifier representing the end user |
"ImageCreate" — create a square image given a prompt
"Prompt" | (required) | text description of the desired image | |
"Model" | Automatic | name of the model to use | |
"N" | Automatic | number of images to generate | |
"Quality" | Automatic | control the quality of the result; possible values include "hd" | |
"Size" | Automatic | size of the generated image | |
"Style" | Automatic | style of generated images; possible values include "vivid" or "natural" | |
"User" | Automatic | unique identifier representing the end user |
"ImageVariation" — create a variation of a given image
"Image" | (required) | image to use as the basis for the variation | |
"N" | Automatic | number of images to generate | |
"Size" | Automatic | size of the generated image | |
"User" | Automatic | unique identifier representing the end user |
"ImageEdit" — create an edited image given an original image and a prompt
"Image" | (required) | image to edit; requires an alpha channel if a mask is not provided | |
"Mask" | None | additional image whose fully transparent areas indicate where the input should be edited | |
"N" | Automatic | number of images to generate | |
"Prompt" | None | text description of the desired image edit | |
"Size" | Automatic | size of the generated image | |
"User" | Automatic | unique identifier representing the end user |
"AudioTranscription" — transcribe an audio recording into the input language
"Audio" | (required) | the Audio object to transcribe | |
"Language" | Automatic | language of the input audio | |
"Model" | Automatic | name of the model to use | |
"Prompt" | None | optional text to guide the model's style or continue a previous audio segment | |
"Temperature" | Automatic | sampling temperature (between 0 and 1) | |
"TimestampGranularities" | Automatic | the timestamp granularity of transcription (either "word" or "segment") |
"AudioTranslation" — translate an audio recording into English
"Audio" | (required) | the Audio object to translate | |
"Model" | Automatic | name of the model to use | |
"Prompt" | None | optional text to guide the model's style or continue a previous audio segment | |
"Temperature" | Automatic | sampling temperature (between 0 and 1) |
"SpeechSynthesize" — synthesize speech from text
"Input" | (required) | the text to synthesize | |
"Model" | Automatic | name of the model to use | |
"Speed" | Automatic | the speed of the produced speech | |
"Voice" | Automatic | the voice to use for the synthesis |
"ChatModelList" — list models available for the "Chat" request
"CompletionModelList" — list models available for the "Completion" request
"EmbeddingModelList" — list models available for the "Embedding" request
"ModerationModelList" — list models available for the "Moderation" request
"ImageModelList" — list models available for the image-related requests
"SpeechSynthesizeModelList" — list models available for the "SpeechSynthesize" request
"AudioModelList" — list models available for the "AudioTranscribe" request
"Moderation" — classify if text violates OpenAI's Content Policy
"Input" | (required) | the text to classify | |
"Model" | Automatic | name of the model to use |
Examples
Basic Examples (1)
Scope (10)
Text (4)
Completion (1)
Chat (2)
Respond to a chat containing multiple messages:
Change the sampling temperature:
Increase the maximum number of tokens returned:
Allow the model to use an LLMTool:
Send a chat request asynchronously using ServiceSubmit and collect the response using the HandlerFunctions and HandlerFunctionsKeys options:
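A sketch of such an asynchronous request, assuming openai is the connection from ServiceConnect["OpenAI"] and that only the "EvaluationResult" handler key is needed:

    task = ServiceSubmit[
       ServiceRequest[openai, "Chat",
         {"Messages" -> {<|"Role" -> "user", "Content" -> "Write a haiku about spring."|>}}],
       HandlerFunctions -> <|"TaskFinished" -> (Print[#EvaluationResult] &)|>,
       HandlerFunctionsKeys -> {"EvaluationResult"}]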
Image (3)
ImageCreate (1)
ImageVariation (1)
Audio (3)
AudioTranscription (1)
Transcribe an Audio object:
Use a prompt to provide context for the transcription:
Transcribe a recording made in a different language:
AudioTranslation (1)
Translate an Audio object into English:
Authentication (4)
If no connections exist, ServiceConnect will open a dialog where an API key can be entered:
The API key can also be specified using the Authentication option:
Use credentials stored in SystemCredential:
The credentials are stored directly by the framework, since SystemCredential["key"] evaluates to a string:
Store only the SystemCredential key, rather than its value, by using RuleDelayed:
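A sketch of that pattern, assuming the credential is stored under the hypothetical name "OpenAIAPIKey" and that the connection's credential field is named "APIKey":

    SystemCredential["OpenAIAPIKey"] = "sk-...";  (* store the key once in the system credential store *)
    ServiceConnect["OpenAI",
      Authentication -> <|"APIKey" :> SystemCredential["OpenAIAPIKey"]|>]  (* RuleDelayed stores only the credential name *)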
Retrieve the value of the authentication credentials used in a specific service object:
Overwrite the authentication credentials of an existing service object:
See Also
ServiceExecute ▪ ServiceConnect ▪ LLMFunction ▪ LLMSynthesize ▪ ChatEvaluate ▪ LLMConfiguration ▪ ImageSynthesize ▪ SpeechRecognize
Service Connections: AlephAlpha ▪ Anthropic ▪ Cohere ▪ DeepSeek ▪ GoogleGemini ▪ Groq ▪ MistralAI ▪ TogetherAI ▪ GoogleSpeech