---
title: "ChatEvaluate"
language: "en"
type: "Symbol"
summary: "ChatEvaluate[chat, prompt] appends prompt and the LLM's response to the ChatObject chat. ChatEvaluate[prompt] represents an operator form of ChatEvaluate that can be applied to a ChatObject."
keywords: 
- Advance Conversation
- Chat Function
- Chat Interaction
- LLM conversation
- LLM interaction
- AI Prompt Response
- AI Response Generator
- Manage AI Conversation
- Conversation Progress Function
- AI Messaging Function
- Update Chat State
- LLM tool usage
- chat tools
canonical_url: "https://reference.wolfram.com/language/ref/ChatEvaluate.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Notebook Document Generation"
    link: "https://reference.wolfram.com/language/guide/DocumentGeneration.en.md"
  - 
    title: "LLM-Related Functionality"
    link: "https://reference.wolfram.com/language/guide/LLMFunctions.en.md"
  - 
    title: "Text Generation"
    link: "https://reference.wolfram.com/language/guide/TextConstruction.en.md"
  - 
    title: "Natural Language Processing"
    link: "https://reference.wolfram.com/language/guide/NaturalLanguageProcessing.en.md"
---
[EXPERIMENTAL]

# ChatEvaluate

This functionality requires [LLM access](https://www.wolfram.com/notebook-assistant-llm-kit/)

ChatEvaluate[chat, prompt] appends prompt and the LLM's response to the ChatObject chat.

ChatEvaluate[prompt] represents an operator form of ChatEvaluate that can be applied to a ChatObject.

## Details and Options

* ``ChatEvaluate`` is used to continue the conversation in a ``ChatObject``.

* ``ChatEvaluate`` requires external service authentication, billing and internet connectivity.

* Possible values for ``prompt`` include:

|                   |                                |
| ----------------- | ------------------------------ |
| "text"            | static text                    |
| LLMPrompt["name"] | a repository prompt            |
| StringTemplate[…] | templated text                 |
| TemplateObject[…] | template for creating a prompt |
| Image[…]          | an image                       |
| {prompt1, …}      | a list of prompts              |
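
  The forms above can be sketched as follows (the question, template, and `img` are illustrative placeholders, not from this page):

  ```wl
  chat = ChatObject[];

  ChatEvaluate[chat, "What is the capital of France?"]        (* static text *)
  ChatEvaluate[chat, LLMPrompt["ELI5"]]                       (* a repository prompt *)
  ChatEvaluate[chat, StringTemplate["Translate `` to French."]["hello"]]  (* templated text *)
  ChatEvaluate[chat, {"Describe this image:", img}]           (* a list of prompts; img is an Image *)
  ```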

* Template objects are automatically converted to message content via ``TemplateObject[…][]``.

* Prompts created with ``TemplateObject`` can contain both text and images. Not every LLM supports image input.

* The following options can be specified:

|                   |                     |                                               |
| ----------------- | ------------------- | --------------------------------------------- |
| Authentication    | Inherited           | explicit user ID and API key                  |
| LLMEvaluator      | Inherited           | LLM configuration to use                      |
| ProgressReporting | \$ProgressReporting | how to report the progress of the computation |

* If ``LLMEvaluator`` is set to ``Inherited``, the LLM configuration specified in ``chat`` is used.

* ``LLMEvaluator`` can be set to an ``LLMConfiguration`` object or an association with any of the following keys:

|                          |                                                |
| ------------------------ | ---------------------------------------------- |
| "MaxTokens"              | maximum number of tokens to generate           |
| "Model"                  | base model                                     |
| "PromptDelimiter"        | string to insert between prompts               |
| "Prompts"                | initial prompts or LLMPromptGenerator objects  |
| "StopTokens"             | tokens on which to stop generation             |
| "Temperature"            | sampling temperature                           |
| "ToolMethod"             | method to use for tool calling                 |
| "Tools"                  | list of LLMTool objects to make available      |
| "TopProbabilities"       | sampling classes cutoff                        |
| "TotalProbabilityCutoff" | sampling probability cutoff (nucleus sampling) |

* Valid forms of ``"Model"`` include:

|                                          |                          |
| ---------------------------------------- | ------------------------ |
| name                                     | named model              |
| {service, name}                          | named model from service |
| <\|"Service" -> service, "Name" -> name\|> | fully specified model    |
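
  For example, the following three specifications refer to the same model ("gpt-4o" is used here only as an illustrative name):

  ```wl
  LLMEvaluator -> <|"Model" -> "gpt-4o"|>
  LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4o"}|>
  LLMEvaluator -> <|"Model" -> <|"Service" -> "OpenAI", "Name" -> "gpt-4o"|>|>
  ```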

* Prompts specified in ``"Prompts"`` are prepended to the messages in ``chat`` with role set as ``"System"``.

* Multiple prompts are separated by the ``"PromptDelimiter"`` property.
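
  A minimal sketch of combining several prompts with a custom delimiter (the prompt strings are illustrative):

  ```wl
  ChatEvaluate[ChatObject[], "Hello",
   LLMEvaluator -> <|
     "Prompts" -> {"Be concise.", "Answer in French."},  (* prepended with role "System" *)
     "PromptDelimiter" -> "\n---\n"|>]                   (* inserted between the two prompts *)
  ```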

* The generated text is sampled from a distribution. Details of the sampling can be specified using the following properties of the ``LLMEvaluator``:

|     |     |     |
| --- | --- | --- |
| "Temperature" -> t | Automatic | sample using a positive temperature t |
| "TopProbabilities" -> k | Automatic | sample only among the k highest-probability classes |
| "TotalProbabilityCutoff" -> p | Automatic | sample among the most probable choices with an accumulated probability of at least p (nucleus sampling) |

* The ``Automatic`` value of these parameters uses the default for the specified ``"Model"``.
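
  Sampling settings can be combined in a single ``LLMEvaluator`` association; the values here are illustrative, not defaults:

  ```wl
  ChatEvaluate[ChatObject[], "Name three colors",
   LLMEvaluator -> <|"Temperature" -> 0.7, "TopProbabilities" -> 5|>]
  ```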

* Possible values for ``"ToolMethod"`` include:

|           |                                       |
| --------- | ------------------------------------- |
| "Service" | rely on the tool mechanism of service |
| "Textual" | use prompt-based tool calling         |

* Possible values for ``Authentication`` are:

|                  |                                                  |
| ---------------- | ------------------------------------------------ |
| Automatic        | choose the authentication scheme automatically   |
| Inherited        | inherit settings from chat                       |
| Environment      | check for a key in the environment variables     |
| SystemCredential | check for a key in the system keychain           |
| ServiceObject[…] | inherit the authentication from a service object |
| assoc            | provide explicit key and user ID                 |

* With ``Authentication -> Automatic``, the function checks the variable ``ToUpperCase[service] <> "_API_KEY"`` in ``Environment`` and ``SystemCredential``; otherwise, it uses ``ServiceConnect[service]``.
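
  For instance, for the service "OpenAI" the key name built by the rule above is:

  ```wl
  ToUpperCase["OpenAI"] <> "_API_KEY"   (* "OPENAI_API_KEY" *)

  Environment["OPENAI_API_KEY"]         (* checked in the environment variables *)
  SystemCredential["OPENAI_API_KEY"]    (* checked in the system keychain *)
  ```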

* When using ``Authentication -> assoc``, ``assoc`` can contain the following keys:

|          |                              |
| -------- | ---------------------------- |
| "ID"     | user identity                |
| "APIKey" | API key used to authenticate |

* ``ChatEvaluate`` uses machine learning. Its methods, training sets and biases included therein may change and yield varied results in different versions of the Wolfram Language.

---

## Examples (18)

### Basic Examples (3)

Create a new chat:

```wl
In[1]:= chat = ChatObject[]

Out[1]=
ChatObject[Association["Authentication" -> Automatic,
  "LLMEvaluator" -> LLMConfiguration[Association[
     "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o"], …]],
  "LLMPacletVersion" -> "1.2.10"]]
```

Add a message and a response to the conversation:

```wl
In[2]:= ChatEvaluate[chat, "What's the tallest mountain?"]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Create chat specifying a multimodal model:

```wl
In[1]:= multichat = ChatObject[LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4-vision-preview"}|>]

Out[1]=
ChatObject[Association["Authentication" -> Automatic,
  "LLMEvaluator" -> LLMConfiguration[Association[
     "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4-vision-preview"], …]], …]]
```

Now both text and images can be used in the conversation:

```wl
In[2]:= ChatEvaluate[multichat, {"what is this picture?", Entity["TaxonomicSpecies", "FelisCatus::ddvt3"][EntityProperty["TaxonomicSpecies", "Image"]]}]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4-vision-preview"], …]], …]]
```

---

Create a chat object with a tool:

```wl
In[1]:= toolchat = ChatObject[LLMEvaluator -> <|"Tools" -> LLMTool["countcharacter", "string", StringLength]|>]

Out[1]=
ChatObject[Association["Authentication" -> Automatic,
  "LLMEvaluator" -> LLMConfiguration[Association[
     "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o"], …]],
  "LLMPacletVersion" -> "1.2.10"]]
```

Show the LLM answer together with the tool-calling steps:

```wl
In[2]:= ChatEvaluate[toolchat, "How many letters in the word \"characters\" (use the tool) ?"]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4o"], …]],
  "LLMPacletVersion" -> "1.2.10"]]
```

### Scope (3)

Start a new conversation:

```wl
In[1]:= ChatEvaluate[ChatObject[], "Tell me a joke"]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Continue an existing conversation:

```wl
In[1]:=
ChatEvaluate[ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
     "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]],
 "And what about doctors?"]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Use the function as an operator:

```wl
In[1]:=
ChatEvaluate["Hi there!"][ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
     "Model" -> Automatic, "PromptDelimiter" -> "\n\n", …]], …]]]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

### Options (10)

#### Authentication (4)

Provide an authentication key for the API:

```wl
In[1]:= ChatEvaluate[ChatObject[], "which element has atomic number 2?", Authentication -> <|"APIKey" -> "1234abcd"|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Look for the key in the system keychain:

```wl
In[1]:= ChatEvaluate[ChatObject[], "which element has atomic number 2?", Authentication -> SystemCredential]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

Specify the name of the key:

```wl
In[2]:= ChatEvaluate[ChatObject[], "which element has atomic number 2?", Authentication -> {SystemCredential, SystemCredentialKey -> "OPENAI_API_KEY"}]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Look for the key in the system environment:

```wl
In[1]:= ChatEvaluate[ChatObject[], "which element has atomic number 2?", Authentication -> Environment]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Authenticate via a service object:

```wl
In[1]:= so = ServiceConnect["OpenAI"]

Out[1]= ServiceObject["OpenAI", "ID" -> "connection-93c2bf186e91951f69eb6c0e80c06c42"]

In[2]:= ChatEvaluate[ChatObject[], "which element has atomic number 2?", Authentication -> so]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

#### LLMEvaluator (6)

Specify the service used to generate the answer:

```wl
In[1]:= ChatEvaluate[ChatObject[], "the first 20 digits of Pi", LLMEvaluator -> <|"Model" -> {"Anthropic", Automatic}|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "Anthropic", "Name" -> Automatic], …]], …]]
```

Specify both the service and the model:

```wl
In[2]:= ChatEvaluate[ChatObject[], "the first 20 digits of Pi", LLMEvaluator -> <|"Model" -> {"Anthropic", "claude-2.1"}|>]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "Anthropic", "Name" -> "claude-2.1"], …]],
  "LLMPacletVersion" -> "1.6.3"]]
```

---

By default, the text generation continues until a termination token is generated:

```wl
In[1]:= ChatEvaluate[ChatObject[], "the first 20 digits of Pi"]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-3.5-turbo"], …]], …]]
```

Limit the number of generated tokens:

```wl
In[2]:= ChatEvaluate[ChatObject[], "the first 20 digits of Pi", LLMEvaluator -> <|"MaxTokens" -> 3|>]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-3.5-turbo"],
    "MaxTokens" -> 3, …]], …]]
```

---

Specify that the sampling should be performed at zero temperature:

```wl
In[1]:= ChatEvaluate[ChatObject[], "Tell me three colors", LLMEvaluator -> <|"Temperature" -> 0|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"],
    "Temperature" -> 0, …]], …]]
```

Specify a high temperature to get more variation in the generation:

```wl
In[2]:= ChatEvaluate[ChatObject[], "Tell me three colors", LLMEvaluator -> <|"Temperature" -> 2|>]

Out[2]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"],
    "Temperature" -> 2, …]], …]]
```

---

Specify the maximum cumulative probability before cutting off the distribution:

```wl
In[1]:= ChatEvaluate[ChatObject[], "What's the plural of mouse?", LLMEvaluator -> <|"TotalProbabilityCutoff" -> .5|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Specify the service and the model to use for the generation:

```wl
In[1]:= ChatEvaluate[ChatObject[], "What's the plural of mouse?", LLMEvaluator -> <|"Model" -> {"OpenAI", "gpt-4"}|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

---

Specify a prompt to be automatically inserted:

```wl
In[1]:= ChatEvaluate[ChatObject[], "What's the plural of mouse?", LLMEvaluator -> <|"Prompts" -> LLMPrompt["ELI5"]|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

### Applications (1)

#### Tool Calling (1)

Define a tool that can be called by the LLM:

```wl
In[1]:= altimeter = LLMTool[{"altimeter", "gives the altitude at a location"}, {"where" -> "Location"}, GeoElevationData[#where]&]

Out[1]=
LLMTool[Association["Name" -> "altimeter", "Description" -> "gives the altitude at a location", 
  "Parameters" -> {"where" -> Association["Interpreter" -> "Location", 
      "Help" -> Missing["NotSpecified"], "Required" -> True]}, 
  "Function" -> (GeoElevationData[#where] & )], {}]
```

Instantiate a chat object with the tool:

```wl
In[2]:= chat = ChatObject[LLMEvaluator -> <|"Tools" -> altimeter|>]

Out[2]=
ChatObject[Association["Authentication" -> Automatic,
  "LLMEvaluator" -> LLMConfiguration[Association[
     "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

Ask a question that can get a precise answer using the tool:

```wl
In[3]:= ChatEvaluate[chat, "what's the altitude of mount Kilimanjaro?"]

Out[3]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> "gpt-4"], …]], …]]
```

### Possible Issues (1)

Evaluating a chat session with a specific service embeds the authentication information:

```wl
In[1]:= chat = ChatEvaluate[ChatObject[], "Who are you?", LLMEvaluator -> <|"Model" -> {"Anthropic", Automatic}|>]

Out[1]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "Anthropic", "Name" -> Automatic], …]], …]]
```

With the default setting ``Authentication -> Inherited``, the authentication will not work on a different service:

```wl
In[2]:= ChatEvaluate[chat, "Who are you?", LLMEvaluator -> <|"Model" -> {"OpenAI", Automatic}|>]

Out[2]=
Failure["APIError", Association["MessageTemplate" -> 
   "Cannot connect to `1` with a ServiceObject for `2`", 
  "MessageParameters" -> {"OpenAI", "Anthropic"}]]
```

Use ``Authentication -> Automatic`` or provide explicit authentication for the new service to reconnect:

```wl
In[3]:= ChatEvaluate[chat, "Who are you?", LLMEvaluator -> <|"Model" -> {"OpenAI", Automatic}|>, Authentication -> Automatic]

Out[3]=
ChatObject[Association["LLMEvaluator" -> LLMConfiguration[Association[
    "Model" -> Association["Service" -> "OpenAI", "Name" -> Automatic], …]], …]]
```

## See Also

* [`ChatObject`](https://reference.wolfram.com/language/ref/ChatObject.en.md)
* [`ChatSubmit`](https://reference.wolfram.com/language/ref/ChatSubmit.en.md)
* [`LLMSynthesize`](https://reference.wolfram.com/language/ref/LLMSynthesize.en.md)
* [`LLMConfiguration`](https://reference.wolfram.com/language/ref/LLMConfiguration.en.md)
* [`LLMPrompt`](https://reference.wolfram.com/language/ref/LLMPrompt.en.md)
* [`LLMPromptGenerator`](https://reference.wolfram.com/language/ref/LLMPromptGenerator.en.md)
* [`LLMTool`](https://reference.wolfram.com/language/ref/LLMTool.en.md)
* [`LLMResourceTool`](https://reference.wolfram.com/language/ref/LLMResourceTool.en.md)
* [`OpenAI`](https://reference.wolfram.com/language/ref/service/OpenAI.en.md)
* [`Anthropic`](https://reference.wolfram.com/language/ref/service/Anthropic.en.md)
* [`GoogleGemini`](https://reference.wolfram.com/language/ref/service/GoogleGemini.en.md)
* [`AlephAlpha`](https://reference.wolfram.com/language/ref/service/AlephAlpha.en.md)
* [`Cohere`](https://reference.wolfram.com/language/ref/service/Cohere.en.md)
* [`DeepSeek`](https://reference.wolfram.com/language/ref/service/DeepSeek.en.md)
* [`Groq`](https://reference.wolfram.com/language/ref/service/Groq.en.md)
* [`MistralAI`](https://reference.wolfram.com/language/ref/service/MistralAI.en.md)
* [`TogetherAI`](https://reference.wolfram.com/language/ref/service/TogetherAI.en.md)

## Related Guides

* [Notebook Document Generation](https://reference.wolfram.com/language/guide/DocumentGeneration.en.md)
* [LLM-Related Functionality](https://reference.wolfram.com/language/guide/LLMFunctions.en.md)
* [Text Generation](https://reference.wolfram.com/language/guide/TextConstruction.en.md)
* [Natural Language Processing](https://reference.wolfram.com/language/guide/NaturalLanguageProcessing.en.md)

## History

* [Introduced in 2023 (13.3)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn133.en.md)