Add Insights to Agent Context

Agent Context Insights tracks and analyzes your agent conversations

Agent Context Insights captures conversations between your users and your AI agent and classifies them with an LLM. Once set up, the Insights dashboard appears automatically in Studio, showing where the agent succeeds, where it struggles, and what content is missing. Use this data to improve the agent over time.

How it works

Insights has two parts that work together:

  • Telemetry: saves conversations from your chat application to Sanity.
  • Classification: a scheduled function that analyzes saved conversations with AI, extracting success scores, sentiment, and content gaps.

Telemetry alone stores raw conversations. Classification populates the dashboard. You need both.

Classification metrics

| Metric | Type | Description |
| --- | --- | --- |
| `successScore` | 1–10 | How well the agent resolved the user's needs |
| `sentiment` | `positive` / `neutral` / `negative` | Overall user tone |
| `contentGaps` | `string[]` | Topics where the agent lacked information |
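Taken together, the metrics above describe a result shape along these lines. This is a sketch derived from the table; the field names on the stored conversation document may differ:

```typescript
// Sketch of a classification result, derived from the metrics table above.
// The actual document shape stored by Insights may differ.
interface ConversationClassification {
  successScore: number // 1–10: how well the agent resolved the user's needs
  sentiment: 'positive' | 'neutral' | 'negative' // overall user tone
  contentGaps: string[] // topics where the agent lacked information
}
```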

Prerequisites

  • Code running Agent Context: Follow the setup instructions first. The code examples below build on that existing implementation.
  • Sanity project ID and dataset name: Check your sanity.config.ts file, or visit Manage.
  • A write token with the Editor role or similar: Your app uses this to save conversations to your dataset. Note that this differs from the read token used in the setup guide; you’ll still use that token to configure Agent Context. If you use more granular permissions, you can limit writes to the sanity.agentContextConversation document _type.
  • LLM API key: For classifying conversations, you’ll need an API key from an LLM provider (Anthropic, OpenAI, etc.).

Setup

Step 1: Enable telemetry integration

Add sanityInsightsIntegration to your existing streamText calls:
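A sketch of what this can look like in a chat route. The exact signature of sanityInsightsIntegration, the wiring through the AI SDK's experimental_telemetry option, and the model name are assumptions here; check the package's types for the real shape:

```typescript
import {createClient} from '@sanity/client'
import {sanityInsightsIntegration} from '@sanity/agent-context/ai-sdk'
import {streamText} from 'ai'
import {anthropic} from '@ai-sdk/anthropic'

// Client configured with the write token from the prerequisites
// (not the read token used for Agent Context itself).
const client = createClient({
  projectId: process.env.SANITY_PROJECT_ID,
  dataset: process.env.SANITY_DATASET,
  token: process.env.SANITY_WRITE_TOKEN,
  apiVersion: '2025-01-01',
  useCdn: false,
})

export async function POST(req: Request) {
  const {messages} = await req.json()

  const result = streamText({
    model: anthropic('claude-sonnet-4-5'),
    messages,
    // Assumed wiring: the integration observes the call and saves the
    // exchange to your dataset as a sanity.agentContextConversation document.
    experimental_telemetry: sanityInsightsIntegration({client}),
  })

  return result.toUIMessageStreamResponse()
}
```

Once this is deployed, conversations accumulate in your dataset; they won't show up on the dashboard until the classification function in step 2 processes them.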

Step 2: Deploy the classification function

The classification function is a scheduled job that runs outside your app using Sanity Functions. It finds unclassified conversations and analyzes them with an LLM of your choice. The classification interval (how often the function runs) is up to you: once a day is enough for most agents, but you can run it more frequently if your agent receives heavier traffic.

Here’s an example function:
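The sketch below uses the primitives from the reference table further down. The handler signature and the exact argument shapes of the primitives are assumptions; adapt them to your Sanity Functions setup and LLM provider:

```typescript
import {createClient} from '@sanity/client'
import {anthropic} from '@ai-sdk/anthropic'
import {
  getConversationsToClassify,
  getPreviousContentGaps,
  classifyConversation,
} from '@sanity/agent-context/insights'

// Runs on a schedule (e.g. daily) via Sanity Functions.
export async function handler() {
  const client = createClient({
    projectId: process.env.SANITY_PROJECT_ID,
    dataset: process.env.SANITY_DATASET,
    token: process.env.SANITY_WRITE_TOKEN, // needs write access to conversations
    apiVersion: '2025-01-01',
    useCdn: false,
  })

  // Conversations saved by telemetry that have no classification yet.
  const conversations = await getConversationsToClassify(client)

  // Known gaps, passed along so the LLM reuses existing labels
  // instead of inventing near-duplicates.
  const previousContentGaps = await getPreviousContentGaps(client)

  for (const conversation of conversations) {
    // Extracts successScore, sentiment, and contentGaps,
    // then writes the results back to the conversation document.
    await classifyConversation({
      client,
      conversation,
      previousContentGaps,
      model: anthropic('claude-sonnet-4-5'), // any AI SDK model should work
    })
  }
}
```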

For blueprint configuration, deployment, and token setup, see the Sanity Functions documentation.

Primitives reference

| Primitive | Import | Purpose |
| --- | --- | --- |
| `sanityInsightsIntegration` | `@sanity/agent-context/ai-sdk` | AI SDK telemetry integration |
| `saveConversation` | `@sanity/agent-context/insights` | Save conversations directly |
| `getConversationsToClassify` | `@sanity/agent-context/insights` | Fetch conversations ready for classification |
| `getPreviousContentGaps` | `@sanity/agent-context/insights` | Fetch known content gaps to avoid duplicates |
| `classifyConversation` | `@sanity/agent-context/insights` | Classify a conversation and write results back |
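If your chat app doesn't use the AI SDK, saveConversation offers a direct path to telemetry. The argument shape below is an assumption; check the package's types:

```typescript
import {createClient} from '@sanity/client'
import {saveConversation} from '@sanity/agent-context/insights'

const client = createClient({
  projectId: process.env.SANITY_PROJECT_ID,
  dataset: process.env.SANITY_DATASET,
  token: process.env.SANITY_WRITE_TOKEN,
  apiVersion: '2025-01-01',
  useCdn: false,
})

// Hypothetical shape: persist a finished exchange so the
// classification function can pick it up later.
await saveConversation({
  client,
  messages: [
    {role: 'user', content: 'How do I deploy a function?'},
    {role: 'assistant', content: 'Use the Sanity CLI to deploy your blueprint.'},
  ],
})
```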

Opt out

The Insights studio integration is enabled by default with agentContextPlugin(). To disable it:
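A sketch of the opt-out in your Studio config. The import path and the option name are assumptions; consult the plugin's types for the exact flag:

```typescript
// sanity.config.ts
import {defineConfig} from 'sanity'
import {agentContextPlugin} from '@sanity/agent-context/studio' // import path assumed

export default defineConfig({
  // ...your project config
  plugins: [
    // Option name assumed; check agentContextPlugin's options.
    agentContextPlugin({insights: false}),
  ],
})
```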

This removes the conversation schema and dashboard from your Studio.

Telemetry sharing

When using conversation classification, you can opt in to share classification data with Sanity. Both levels are off by default.

  • shareMetrics: shares classification metrics (scores, sentiment, content gap counts), message shapes, model info, and token usage. No conversation content is included.
  • shareConversations: also shares the actual message contents. Provide a contact so the Sanity team can reach out and help dial in your agent.
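Opting in might look like passing these flags where classification runs. Only the shareMetrics and shareConversations names come from this page; their placement on classifyConversation and the contact field are assumptions:

```typescript
// Hypothetical placement: sharing options passed to classifyConversation
// inside the scheduled classification function.
await classifyConversation({
  client,
  conversation,
  previousContentGaps,
  model,
  shareMetrics: true, // metrics, message shapes, model info, token usage
  shareConversations: false, // set true to also share message contents
  contact: 'you@example.com', // so the team can reach out if you share conversations
})
```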