"Anthropic" (Service Connection)
New in 14 [Experimental]
This service connection requires LLM access »
Connecting & Authenticating
ServiceConnect["Anthropic"] creates a connection to the Anthropic API. If a previously saved connection can be found, it will be used; otherwise, a new authentication request will be launched.
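For example, the following opens a connection (reusing a saved one when available); on first use, the authentication dialog typically asks for an Anthropic API key, which can be stored for later sessions:

conn = ServiceConnect["Anthropic"]

The resulting ServiceObject can be passed to ServiceExecute in place of the connection name.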
Requests
ServiceExecute["Anthropic","request",params] sends a request to the Anthropic API using parameters params. The following gives possible requests.
Text
"Completion" — create text completion for a given prompt
"Prompt" | (required) | the prompt for which to generate completions | |
"MaxTokens" | Automatic | maximum number of tokens to generate | |
"Metadata" | Automatic | metadata about the request | |
"Model" | Automatic | name of the model to use | |
"StopTokens" | None | up to four strings where the API will stop generating further tokens | |
"Stream" | False | return the result as server-sent events | |
"Temperature" | Automatic | sampling temperature (between 0 and 1) | |
"TopProbabilities" | Automatic | sample only among the k highest-probability classes | |
"TotalProbabilityCutoff" | None | sample among the most probable classes with an accumulated probability of at least p (nucleus sampling) |
"Chat" — create a response for the given chat conversation
"Messages" | (required) | a list of messages in the conversation, each given as an association with "Role" and "Content" keys | |
"MaxTokens" | Automatic | maximum number of tokens to generate | |
"Metadata" | Automatic | metadata about the request | |
"Model" | Automatic | name of the model to use | |
"StopTokens" | None | up to four strings where the API will stop generating further tokens | |
"Stream" | False | return the result as server-sent events | |
"Temperature" | Automatic | sampling temperature (between 0 and 1) | |
"TopProbabilities" | Automatic | sample only among the k highest-probability classes | |
"TotalProbabilityCutoff" | None | sample among the most probable classes with an accumulated probability of at least p (nucleus sampling) |
Examples
Basic Examples (1): Summary of the most common use cases
Scope (2): Survey of the scope of standard use cases
Completion (1)
Chat (1)
Respond to a chat containing multiple messages:
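The following is a sketch of such a request; the message content is illustrative, and the reply will vary between runs:

In[1]:= ServiceExecute["Anthropic", "Chat",
  {"Messages" -> {  (* each message is an association with "Role" and "Content" keys *)
     <|"Role" -> "user", "Content" -> "Hi, my name is Jane."|>,
     <|"Role" -> "assistant", "Content" -> "Hello Jane, nice to meet you!"|>,
     <|"Role" -> "user", "Content" -> "What is my name?"|>}}]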

Change the sampling temperature:
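A sketch with an illustrative prompt; values near 0 make the sampling nearly deterministic, while values near 1 give more varied output:

In[2]:= ServiceExecute["Anthropic", "Chat",
  {"Messages" -> {<|"Role" -> "user", "Content" -> "Write a haiku about the sea."|>},
   "Temperature" -> 0.9}]  (* higher temperature, more varied output *)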

Increase the number of characters returned:
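A sketch using the "Completion" request; "MaxTokens" bounds the number of tokens generated (tokens are word fragments, so a higher limit yields longer text). First with the default limit, then with a larger one; the prompt is illustrative:

In[3]:= ServiceExecute["Anthropic", "Completion",
  {"Prompt" -> "Tell me about the history of the telescope."}]

In[4]:= ServiceExecute["Anthropic", "Completion",
  {"Prompt" -> "Tell me about the history of the telescope.",
   "MaxTokens" -> 1024}]  (* allow a longer completion *)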

Allow the model to use an LLMTool:
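A sketch, assuming tools are passed via a "Tools" parameter taking LLMTool objects; that parameter name is an assumption here, since it does not appear in the tables above. First define a simple zero-argument tool, then make it available to the chat:

In[5]:= tool = LLMTool[{"current_time", "get the current date and time"}, {},
   DateString[] &]  (* a tool the model can call for the current time *)

In[6]:= ServiceExecute["Anthropic", "Chat",
  {"Messages" -> {<|"Role" -> "user", "Content" -> "What time is it right now?"|>},
   "Tools" -> {tool}}]  (* "Tools" parameter name is an assumption *)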
