# Add Insights to Agent Context

Agent Context Insights captures conversations between your users and your AI agent and classifies them with an LLM. Once set up, the Insights dashboard appears automatically in Studio, showing where the agent succeeds, where it struggles, and what content is missing. Use this data to improve the agent over time.

## How it works

Insights has two parts that work together:

- **Telemetry**: saves conversations from your chat application to Sanity.
- **Classification**: a scheduled function that analyzes saved conversations with AI, extracting success scores, sentiment, and content gaps.

Telemetry alone stores raw conversations. Classification populates the dashboard. You need both.

## Classification metrics

| Metric | Type | Description |
| --- | --- | --- |
| `successScore` | number (1–10) | How well the agent resolved the user's needs |
| `sentiment` | `positive` / `neutral` / `negative` | Overall user tone |
| `contentGaps` | `string[]` | Topics where the agent lacked information |
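As a rough TypeScript sketch, here is what those three metrics look like as a type, with a guard for reading them back. The field names mirror the table; the exact stored document shape is internal to the library and may carry additional fields.

```typescript
// Sketch of the three classification metrics from the table above.
// Treat this as an illustration, not the stored document schema.
interface ClassificationMetrics {
  successScore: number // 1–10: how well the agent resolved the user's needs
  sentiment: 'positive' | 'neutral' | 'negative' // overall user tone
  contentGaps: string[] // topics where the agent lacked information
}

// Narrowing guard, useful when querying documents that may not be
// classified yet.
function isClassified(value: unknown): value is ClassificationMetrics {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  return (
    typeof v.successScore === 'number' &&
    v.successScore >= 1 &&
    v.successScore <= 10 &&
    (v.sentiment === 'positive' ||
      v.sentiment === 'neutral' ||
      v.sentiment === 'negative') &&
    Array.isArray(v.contentGaps) &&
    v.contentGaps.every((gap) => typeof gap === 'string')
  )
}
```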

## Prerequisites

- **Code running Agent Context**: [Follow the setup instructions](https://www.sanity.io/docs/ai/agent-context). The code examples below will add to your existing implementation.
- Sanity **project ID** and **dataset name**: Check your `sanity.config.ts` file, or visit [Manage](https://www.sanity.io/manage).
- A **write token** with the Editor role or similar: used to save conversations from your app to Sanity. Note that this differs from the read token used in the setup guide, which you’ll still use to configure Agent Context. If you’re using more [granular permissions](https://www.sanity.io/docs/user-guides/roles), you can limit writes to documents of the `sanity.agentContextConversation` `_type`.
- **LLM API key**: For classifying conversations, you’ll need an API key from an LLM provider (Anthropic, OpenAI, etc.).
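The code examples below reference a `writeClient`. A minimal sketch, assuming you keep the write token in environment variables (the variable names here are placeholders, so substitute your own):

```typescript
import {createClient} from '@sanity/client'

// Placeholder env var names: use whatever your deployment provides.
export const writeClient = createClient({
  projectId: process.env.SANITY_PROJECT_ID,
  dataset: process.env.SANITY_DATASET,
  apiVersion: '2026-02-27',
  token: process.env.SANITY_WRITE_TOKEN, // write token with the Editor role
  useCdn: false, // writes must bypass the CDN cache
})
```

Keep this token on the server only; never expose it to the browser.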

## Setup

### Step 1: Enable telemetry integration

#### AI SDK

Add `sanityInsightsIntegration` to your existing `streamText` calls:

**chat/route.ts**

```typescript
import {sanityInsightsIntegration} from '@sanity/agent-context/ai-sdk'
import {streamText} from 'ai'

const result = streamText({
  model: yourModel,
  messages,
  experimental_telemetry: {
    isEnabled: true,
    integrations: [
      sanityInsightsIntegration({
        client: writeClient,  // Sanity client with Editor permissions
        agentId: 'my-agent',  // Groups conversations by agent
        threadId: chatId,     // Unique ID per conversation
      }),
    ],
  },
})
```



#### Custom integration

If you’re not using Vercel’s AI SDK, use `saveConversation` directly. Call it after each turn with the full conversation history. Repeated calls update the same document, with the ID derived from `agentId` and `threadId`.

**chat/route.ts**

```typescript
import {saveConversation} from '@sanity/agent-context/insights'

await saveConversation({
  client: writeClient,
  agentId: 'my-agent',
  threadId: chatId,
  messages: [
    {role: 'user', content: 'How do I return an item?'},
    {role: 'assistant', content: 'You can return items within 30 days...'},
  ],
})
```
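The upsert behavior comes from determinism: one `(agentId, threadId)` pair always maps to one document, so each per-turn save overwrites the previous snapshot. A hypothetical illustration of that property (the real ID scheme is internal to `saveConversation`):

```typescript
// Hypothetical ID derivation, for illustration only: the real scheme is
// internal to saveConversation. The point is that the same inputs always
// produce the same document _id, so repeated saves update one document.
function conversationDocId(agentId: string, threadId: string): string {
  return `agentContextConversation.${agentId}.${threadId}`
}
```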



### Step 2: Deploy the classification function

The classification function is a scheduled job that runs outside your app using [Sanity Functions](https://www.sanity.io/docs/functions/scheduled-function-quickstart). It finds unclassified conversations and analyzes them with an LLM of your choice. The classification interval (how often the function runs) is up to you: once a day works for most agents, but you can schedule it more frequently if yours receives heavier traffic.

Here’s an example function:

**functions/classify-conversations/index.ts**

```typescript
import {createClient} from '@sanity/client'
import {
  classifyConversation,
  getConversationsToClassify,
  getPreviousContentGaps,
} from '@sanity/agent-context/insights'
import {scheduledEventHandler} from '@sanity/functions'
import {anthropic} from '@ai-sdk/anthropic'
import {env} from 'node:process'

export const handler = scheduledEventHandler(async ({context}) => {
  if (!context.clientOptions?.token) {
    console.error('[classify-conversations] No robot token available')
    return
  }

  const client = createClient({
    projectId: env.SANITY_PROJECT_ID,
    dataset: env.SANITY_DATASET,
    apiVersion: '2026-02-27',
    token: context.clientOptions.token,
    useCdn: false,
  })

  const [conversations, previousContentGaps] = await Promise.all([
    getConversationsToClassify({client}),
    getPreviousContentGaps({client}),
  ])

  await Promise.allSettled(
    conversations.map((conv) =>
      classifyConversation({
        client,
        conversationId: conv._id,
        model: anthropic('claude-sonnet-4-5'),
        messages: conv.messages,
        previousContentGaps,
      })
    )
  )
})
```

For blueprint configuration, deployment, and token setup, see the [Sanity Functions documentation](https://www.sanity.io/docs/functions/scheduled-function-quickstart).

## Primitives reference

| Primitive | Import | Purpose |
| --- | --- | --- |
| `sanityInsightsIntegration` | `@sanity/agent-context/ai-sdk` | AI SDK telemetry integration |
| `saveConversation` | `@sanity/agent-context/insights` | Save conversations directly |
| `getConversationsToClassify` | `@sanity/agent-context/insights` | Fetch conversations ready for classification |
| `getPreviousContentGaps` | `@sanity/agent-context/insights` | Fetch known content gaps to avoid duplicates |
| `classifyConversation` | `@sanity/agent-context/insights` | Classify a conversation and write results back |

## Opt out

The Insights studio integration is enabled by default with `agentContextPlugin()`. To disable it:

**sanity.config.ts**

```typescript
agentContextPlugin({insights: {enabled: false}})
```

This removes the conversation schema and dashboard from your Studio.

## Telemetry sharing

When using conversation classification, you can opt in to share classification data with Sanity. There are two levels of sharing, both off by default:

- `shareMetrics`: shares classification metrics (scores, sentiment, content gap counts), message shapes, model info, and token usage. No conversation content is included.
- `shareConversations`: also shares actual message contents. Provide a contact so the team can reach out and help dial in your agent.

**functions/classify-conversations/index.ts**

```typescript
await classifyConversation({
  client,
  conversationId: conv._id,
  model: anthropic('claude-sonnet-4-5'),
  messages: conv.messages,
  modelProvider: conv.modelProvider,
  modelId: conv.modelId,
  tokenUsage: conv.tokenUsage,
  telemetry: {
    shareMetrics: true,
    shareConversations: true,
    contact: 'you@company.com',
  },
})
```

