Add Insights to Agent Context
Agent Context Insights tracks and analyzes your agent conversations
Agent Context Insights captures conversations between your users and your AI agent and classifies them with an LLM. Once set up, the Insights dashboard appears automatically in Studio, showing where the agent succeeds, where it struggles, and what content is missing. Use this data to improve the agent over time.
How it works
Insights has two parts that work together:
- Telemetry: saves conversations from your chat application to Sanity.
- Classification: a scheduled function that analyzes saved conversations with AI, extracting success scores, sentiment, and content gaps.
Telemetry alone stores raw conversations. Classification populates the dashboard. You need both.
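Once telemetry is wired up, you can confirm conversations are being saved by querying the dataset directly. A minimal sketch, assuming documents use the sanity.agentContextConversation type named under Prerequisites; the projected fields mirror the saveConversation inputs shown later and are otherwise assumptions:
import {createClient} from '@sanity/client'

const client = createClient({
  projectId: '<project-id>',
  dataset: '<dataset>',
  apiVersion: '2026-02-27',
  useCdn: false,
})

// List a few recent conversations; agentId and threadId are assumed
// field names based on the saveConversation inputs shown below.
const recent = await client.fetch(
  `*[_type == "sanity.agentContextConversation"] | order(_createdAt desc) [0...5]{_id, agentId, threadId}`,
)
console.log(recent)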
Classification metrics
| Metric | Values | Description |
|---|---|---|
| successScore | 1–10 | How well the agent resolved the user's needs |
| sentiment | positive / neutral / negative | Overall user tone |
| contentGaps | string[] | Topics where the agent lacked information |
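As a mental model, the classified fields map to a shape like this. An illustrative TypeScript type, not the package's actual schema:
// Illustrative only; field names follow the table above, not the
// package's actual document schema.
interface ConversationClassification {
  successScore: number // 1-10: how well the agent resolved the user's needs
  sentiment: 'positive' | 'neutral' | 'negative'
  contentGaps: string[] // topics where the agent lacked information
}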
Prerequisites
- Code running Agent Context: Follow the setup instructions. The code examples below build on your existing implementation.
- Sanity project ID and dataset name: Check your sanity.config.ts file, or visit Manage.
- A write token with the Editor role or similar: This is used to save conversations from your app to your dataset. Note that this differs from the read token used in the setup guide; you'll still use that to configure Agent Context. If you're using more granular permissions, you can limit writes to the sanity.agentContextConversation document type. A sample write client follows this list.
- An LLM API key: For classifying conversations, you'll need an API key from an LLM provider (Anthropic, OpenAI, etc.).
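The examples below assume a writeClient built with this write token. A minimal sketch, assuming the token and project details are exposed as environment variables (the variable names are illustrative):
import {createClient} from '@sanity/client'
import {env} from 'node:process'

// Server-side client used to save conversations; never expose the
// write token to the browser. Variable names are illustrative.
const writeClient = createClient({
  projectId: env.SANITY_PROJECT_ID,
  dataset: env.SANITY_DATASET,
  apiVersion: '2026-02-27',
  token: env.SANITY_WRITE_TOKEN,
  useCdn: false,
})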
Setup
Step 1: Enable telemetry integration
Add sanityInsightsIntegration to your existing streamText calls:
import {sanityInsightsIntegration} from '@sanity/agent-context/ai-sdk'
import {streamText} from 'ai'
const result = streamText({
model: yourModel,
messages,
experimental_telemetry: {
isEnabled: true,
integrations: [
sanityInsightsIntegration({
client: writeClient, // Sanity client with Editor permissions
agentId: 'my-agent', // Groups conversations by agent
threadId: chatId, // Unique ID per conversation
}),
],
},
})
If you're not using Vercel's AI SDK, use saveConversation directly. Call it after each turn with the full conversation history. Repeated calls update the same document, with the ID derived from agentId and threadId.
import {saveConversation} from '@sanity/agent-context/insights'
await saveConversation({
client: writeClient,
agentId: 'my-agent',
threadId: chatId,
messages: [
{role: 'user', content: 'How do I return an item?'},
{role: 'assistant', content: 'You can return items within 30 days...'},
],
})
Step 2: Deploy the classification function
The classification function is a scheduled job that runs outside your app using Sanity Functions. It finds unclassified conversations and analyzes them with an LLM of your choice. The classification interval (how often the function runs) is up to you: running once a day suits most agents, while higher-traffic agents may call for more frequent runs.
Here’s an example function:
import {createClient} from '@sanity/client'
import {
classifyConversation,
getConversationsToClassify,
getPreviousContentGaps,
} from '@sanity/agent-context/insights'
import {scheduledEventHandler} from '@sanity/functions'
import {anthropic} from '@ai-sdk/anthropic'
import {env} from 'node:process'
export const handler = scheduledEventHandler(async ({context}) => {
if (!context.clientOptions?.token) {
console.error('[classify-conversations] No robot token available')
return
}
const client = createClient({
projectId: env.SANITY_PROJECT_ID,
dataset: env.SANITY_DATASET,
apiVersion: '2026-02-27',
token: context.clientOptions.token,
useCdn: false,
})
  // Fetch unclassified conversations and previously found content gaps in parallel
  const [conversations, previousContentGaps] = await Promise.all([
    getConversationsToClassify({client}),
    getPreviousContentGaps({client}),
  ])
  // Classify each conversation; allSettled keeps one failure from aborting the batch
  await Promise.allSettled(
    conversations.map((conv) =>
      classifyConversation({
        client,
        conversationId: conv._id,
        model: anthropic('claude-sonnet-4-5'),
        messages: conv.messages,
        previousContentGaps,
      }),
    ),
  )
})
For blueprint configuration, deployment, and token setup, see the Sanity Functions documentation.
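Before deploying, you may want to dry-run the same flow from a local script, using a personal token instead of the robot token Sanity Functions provides. A sketch under that assumption (the environment variable names are illustrative):
import {createClient} from '@sanity/client'
import {
  classifyConversation,
  getConversationsToClassify,
  getPreviousContentGaps,
} from '@sanity/agent-context/insights'
import {anthropic} from '@ai-sdk/anthropic'
import {env} from 'node:process'

// Local dry run: classify a single conversation to verify dataset
// access and the LLM API key before deploying the function.
const client = createClient({
  projectId: env.SANITY_PROJECT_ID,
  dataset: env.SANITY_DATASET,
  apiVersion: '2026-02-27',
  token: env.SANITY_WRITE_TOKEN, // illustrative: a token that can read and write conversations
  useCdn: false,
})

const [conversations, previousContentGaps] = await Promise.all([
  getConversationsToClassify({client}),
  getPreviousContentGaps({client}),
])

const [conv] = conversations
if (conv) {
  await classifyConversation({
    client,
    conversationId: conv._id,
    model: anthropic('claude-sonnet-4-5'),
    messages: conv.messages,
    previousContentGaps,
  })
}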
Primitives reference
| Primitive | Import | Purpose |
|---|---|---|
| sanityInsightsIntegration | @sanity/agent-context/ai-sdk | AI SDK telemetry integration |
| saveConversation | @sanity/agent-context/insights | Save conversations directly |
| getConversationsToClassify | @sanity/agent-context/insights | Fetch conversations ready for classification |
| getPreviousContentGaps | @sanity/agent-context/insights | Fetch known content gaps to avoid duplicates |
| classifyConversation | @sanity/agent-context/insights | Classify a conversation and write results back |
Opt out
The Insights studio integration is enabled by default with agentContextPlugin(). To disable it:
agentContextPlugin({insights: {enabled: false}})
This removes the conversation schema and dashboard from your Studio.
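For context, the option sits in your Studio configuration. A sketch assuming a standard sanity.config.ts; the plugin's import path is assumed here, so verify it against the setup guide:
import {defineConfig} from 'sanity'
// Import path assumed; check the Agent Context setup guide.
import {agentContextPlugin} from '@sanity/agent-context'

export default defineConfig({
  projectId: '<project-id>',
  dataset: 'production',
  plugins: [
    // Keep Agent Context, but remove the Insights schema and dashboard
    agentContextPlugin({insights: {enabled: false}}),
  ],
})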
Telemetry sharing
When using conversation classification, you can opt in to share classification data with Sanity. Both levels are off by default.
- shareMetrics: shares classification metrics (scores, sentiment, content gap counts), message shapes, model info, and token usage. No conversation content is included.
- shareConversations: also shares actual message contents. Provide a contact so the team can reach out and help dial in your agent.
await classifyConversation({
client,
conversationId: conv._id,
model: anthropic('claude-sonnet-4-5'),
messages: conv.messages,
modelProvider: conv.modelProvider,
modelId: conv.modelId,
tokenUsage: conv.tokenUsage,
telemetry: {
shareMetrics: true,
shareConversations: true,
contact: 'you@company.com',
},
})