Build a conference concierge with Agent Context and Anthropic
Agent Context gives you MCP tools for your content. Wire them into `streamText`. And now you have a chatbot.

Knut Melvær
Principal Developer Marketing Manager
You're at a conference. You want to see the AI talks, but the schedule is a 47-row spreadsheet organized by time slot, not by topic. You open the conference app and scroll. And scroll. You find one that looks right, but it's in "Room C" and you have no idea where Room C is. You check the venue map (a PDF, naturally). By the time you've figured it out, the talk has started.
Conference attendees have questions that don't fit neatly into a search box. "What talks about AI are on Thursday afternoon?" requires joining sessions, tracks, schedule slots, and rooms. "Is there anything for people new to design systems?" requires understanding topic relationships across the content model. A schedule page can list sessions. It can't understand intent.
For the fictional ContentOps Conf, I built a Telegram bot that attendees can message with exactly these kinds of questions. "What's happening in Room B right now?" "Tell me about the keynote speaker." "Are there any workshops I can still sign up for?" The bot has read access to the full conference content model and answers in natural language. Telegram because that's where conference attendees already are (the group chat, the hallway conversations, the "where are we meeting for dinner?" threads).
The whole thing is about 60 lines of application code across two files. You can repurpose it to surface the agent on the website, or on any other channel Chat SDK supports. Agent Context also ships with a skill that makes it even easier to add to your codebase.
This is the attendee side of a two-bot setup. The same Telegram app also has an ops bot for organizers that can read and write content using Sanity's Content Agent API. That bot is covered in a separate post. The two bots share a codebase but use different integration patterns, and the routing between them is a single if statement.
Two patterns, one decision
Before getting into the code, it's worth explaining why the attendee bot uses a different pattern than the organizer bot.
The organizer bot uses Content Agent API. That's a language model provider that bundles content access directly into the model. You call it like any Vercel AI SDK model, but it queries your Content Lake (Sanity's hosted content API) as part of inference. You don't choose the underlying model. You don't manage tools. It's opaque by design, and that's fine for an internal tool where you want simplicity and write access.
The attendee bot uses Agent Context. Agent Context is a different approach: it exposes your Sanity content as MCP (Model Context Protocol, a standard for connecting AI models to external tools) tools, and you bring your own LLM. You create an Agent Context document in Studio, get an MCP endpoint URL, and connect any model to those tools via @ai-sdk/mcp. The model can query your content through the tools but cannot write anything. The access boundary is structural, not prompt-based.
I chose Agent Context for the attendee bot for three reasons:
- Attendees only need read access. There's no reason to use a pattern that supports writes.
- I wanted to use Claude Sonnet 4.6 specifically. Content Agent API doesn't give you model choice.
- I wanted explicit control over the conversation flow, history management, and streaming behavior.
The routing in bot.ts makes the split visible:
```typescript
bot.onNewMention(async (thread, message) => {
  if (await isAllowedOrganizer(userId)) {
    await handleOpsMessage(thread, message) // Content Agent — read/write
  } else {
    await handleAttendeeMessage(thread, message) // Agent Context — read-only
  }
})
```

Same bot, same Telegram app, two completely different integration patterns depending on who's asking.
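The snippet above assumes a `userId` extracted from the incoming message, and `isAllowedOrganizer` isn't shown in this post. A minimal version might check the Telegram user ID against an allowlist, here assumed to live in a comma-separated `ORGANIZER_IDS` environment variable. The starter repo's actual check may differ (for example, a lookup against a document in the Content Lake):

```typescript
// Sketch of isAllowedOrganizer: checks the user ID against an allowlist.
// ORGANIZER_IDS is an assumed env var like "12345,67890"; the real
// implementation in the repo may resolve organizers some other way.
export async function isAllowedOrganizer(
  userId: string,
  allowlist: string = process.env.ORGANIZER_IDS ?? '',
): Promise<boolean> {
  const allowed = new Set(
    allowlist.split(',').map((id) => id.trim()).filter(Boolean),
  )
  return allowed.has(userId)
}
```

Keeping the function async matches the call site in the routing code, and leaves room to swap the allowlist for a content query later without touching the router.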
Setting up Agent Context
Using the skill
If you're using Claude Code, Cursor, or a similar agent-capable tool, there's a faster path for existing projects. Install the Sanity Agent Context skills:
npx skills add sanity-io/agent-context --all
Then prompt at the root of your repo: "Use the create-agent-with-sanity-context skill to help me build an agent." It handles Studio setup, MCP connection, and scaffolding for your stack. Two companion skills, dial-your-context and shape-your-agent, help you refine the Instructions field on your Agent Context document and craft the system prompt respectively. The manual steps below cover the same ground if you'd rather wire it up yourself.
The manual old-school way
Three things to do before writing any application code.
1. Add the plugin to Studio
Install @sanity/agent-context and add agentContextPlugin() to your sanity.config.ts:
```typescript
import {defineConfig} from 'sanity'
import {agentContextPlugin} from '@sanity/agent-context'

export default defineConfig({
  // ...
  plugins: [
    agentContextPlugin(),
  ],
})
```

2. Create an Agent Context document in Studio
Once the plugin is installed, you'll see an "Agent Context" section in your Studio. Create a new document there. Give it a name and configure which document types the model can access. When you save, Studio generates an MCP endpoint URL in the format:
https://api.sanity.io/vX/agent-context/{projectId}/{dataset}/{slug}

That URL is your SANITY_CONTEXT_MCP_URL environment variable.
3. Create a read-only API token
In your Sanity project settings, create a new API token with the Viewer role. This is your SANITY_API_READ_TOKEN. The MCP endpoint requires this token in the Authorization header on every request.
That's three environment variables total for the attendee bot:
- ANTHROPIC_API_KEY — your Anthropic API key
- SANITY_CONTEXT_MCP_URL — the MCP endpoint from the Agent Context document
- SANITY_API_READ_TOKEN — the Viewer role token
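These can be centralized in a small config module that fails fast when a variable is missing. A sketch, assuming a `loadConfig` helper and property names of my own choosing (the repo's `config.ts` may differ):

```typescript
// config.ts — minimal sketch for loading the three required variables.
// Throwing at startup beats a cryptic 401 from the MCP endpoint later.
export interface BotConfig {
  anthropicApiKey: string
  mcpUrl: string
  readToken: string
}

export function loadConfig(
  env: Record<string, string | undefined> = process.env,
): BotConfig {
  const required = (name: string): string => {
    const value = env[name]
    if (!value) throw new Error(`Missing required environment variable: ${name}`)
    return value
  }
  return {
    anthropicApiKey: required('ANTHROPIC_API_KEY'),
    mcpUrl: required('SANITY_CONTEXT_MCP_URL'),
    readToken: required('SANITY_API_READ_TOKEN'),
  }
}
```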
The implementation
The application code splits into two files. The first creates the MCP client. The second handles the full message lifecycle.
```typescript
import {createMCPClient, type MCPClient} from '@ai-sdk/mcp'
import type {ToolSet} from 'ai'

export async function createAgentContextClient(agentContextConfig: {
  mcpUrl: string
  readToken: string
}): Promise<{mcpClient: MCPClient; tools: ToolSet}> {
  const mcpClient = await createMCPClient({
    transport: {
      type: 'http',
      url: agentContextConfig.mcpUrl,
      headers: {Authorization: `Bearer ${agentContextConfig.readToken}`},
    },
  })
  const tools = await mcpClient.tools()
  return {mcpClient, tools}
}
```

createMCPClient from @ai-sdk/mcp connects to the Agent Context endpoint over HTTP. mcpClient.tools() fetches the available content tools. These are generated from your Agent Context document configuration and give the model the ability to query your content using GROQ (Sanity's query language). The function returns both the client (needed for cleanup) and the tools (passed to the model).
```typescript
import {stepCountIs, streamText} from 'ai'
import {createAnthropic} from '@ai-sdk/anthropic'
import {createAgentContextClient} from './ai/agent-context.js'
import {fetchSystemPrompt} from './ai/prompts.js'
import {saveConversation} from './conversation/save.js'
import {loadConversationHistory} from './conversation/history.js'
import {cleanMarkdownStream, stripMarkdown} from './format-telegram.js'
import {sanitizeDocumentId} from './utils/sanitize.js'
import {config} from './config.js'

const MAX_HISTORY_MESSAGES = 10

export async function handleAttendeeMessage(
  thread: {id: string; post: (text: string | AsyncIterable<string>) => Promise<unknown>},
  message: {text: string},
) {
  const systemPrompt = await fetchSystemPrompt('prompt.botAttendee')
  const chatId = `agent.conversation.attendee-telegram-${sanitizeDocumentId(thread.id)}`
  const history = await loadConversationHistory(chatId, MAX_HISTORY_MESSAGES)

  const messages = [
    ...history.map((m) => ({role: m.role as 'user' | 'assistant', content: m.content})),
    {role: 'user' as const, content: message.text},
  ]

  const anthropic = createAnthropic({apiKey: config.anthropicApiKey})
  const {mcpClient, tools} = await createAgentContextClient({
    mcpUrl: config.mcpUrl,
    readToken: config.readToken,
  })

  try {
    const result = streamText({
      model: anthropic('claude-sonnet-4-6'),
      system: systemPrompt,
      messages,
      tools,
      stopWhen: stepCountIs(10),
    })

    await thread.post(cleanMarkdownStream(result.textStream))

    const finalText = stripMarkdown(await result.text)
    const allMessages = [
      ...history,
      {role: 'user', content: message.text},
      {role: 'assistant', content: finalText},
    ]
    saveConversation({chatId, messages: allMessages}).catch(console.error)
  } finally {
    await mcpClient.close()
  }
}
```

A few things worth noting here.
fetchSystemPrompt('prompt.botAttendee') fetches the system prompt from the Content Lake at runtime, not from a hardcoded string. This means conference organizers can adjust how the bot behaves without a code change or deploy. That pattern is covered in detail in a separate post on editable AI prompts.
loadConversationHistory(chatId, MAX_HISTORY_MESSAGES) loads the last 10 message pairs for this Telegram thread. The chatId is derived from the Telegram thread ID, so each conversation has its own history. The bot remembers context within a session.
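A sketch of how loadConversationHistory might work, assuming each conversation is one Content Lake document whose `messages` array holds `{role, content}` pairs (that document shape is my assumption, not the repo's confirmed schema). Injecting the GROQ fetcher keeps the helper decoupled from a specific Sanity client:

```typescript
// Assumed document shape: {_id: chatId, messages: [{role, content}, ...]}
interface StoredMessage {
  role: 'user' | 'assistant'
  content: string
}

// e.g. (query, params) => sanityClient.fetch(query, params)
type GroqFetcher = (query: string, params: Record<string, string>) => Promise<unknown>

// Pull the whole messages array for one conversation document.
export const HISTORY_QUERY = `*[_id == $chatId][0].messages[]{role, content}`

// Keep only the most recent `limit` messages.
export function trimHistory(messages: StoredMessage[], limit: number): StoredMessage[] {
  return messages.slice(-limit)
}

export async function loadConversationHistory(
  chatId: string,
  limit: number,
  fetchGroq: GroqFetcher,
): Promise<StoredMessage[]> {
  const messages = (await fetchGroq(HISTORY_QUERY, {chatId})) as StoredMessage[] | null
  return trimHistory(messages ?? [], limit)
}
```

Trimming in application code rather than in GROQ keeps the query simple; at 10 messages per thread the over-fetch is negligible.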
streamText from Vercel AI SDK takes the model, system prompt, message history, and tools together. The stopWhen: stepCountIs(10) guard prevents runaway tool-calling loops. The model gets at most 10 steps to use tools and generate a response. In practice, answering "What talks about AI are on Thursday afternoon?" takes two or three tool calls: one to fetch sessions, one to filter by track or time slot.
The finally block calls mcpClient.close(). This is important. The MCP client holds an open connection to the Agent Context endpoint, and you need to close it explicitly when the request is done. Forgetting this will leak connections.
cleanMarkdownStream and stripMarkdown are Telegram-specific formatting utilities. Telegram's message format doesn't accept raw Markdown, so the stream gets cleaned before posting and the saved version gets stripped for storage.
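To make the idea concrete, here's a rough sketch of what a stripMarkdown utility can look like. The real one in the repo is likely more thorough (code fences, nested emphasis, Telegram entity escaping), so treat this as illustrative only:

```typescript
// Reduce common Markdown syntax to plain text before saving.
// A simplified sketch, not the repo's actual implementation.
export function stripMarkdown(text: string): string {
  return text
    .replace(/^#{1,6}\s+/gm, '') // headings
    .replace(/\*\*(.+?)\*\*/g, '$1') // bold
    .replace(/\*(.+?)\*/g, '$1') // italic
    .replace(/_(.+?)_/g, '$1') // underscore emphasis
    .replace(/`([^`]+)`/g, '$1') // inline code
    .replace(/\[([^\]]+)\]\([^)]+\)/g, '$1') // links become link text
}
```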
System prompt from the Content Lake
The fetchSystemPrompt('prompt.botAttendee') call deserves a bit more explanation. Rather than hardcoding the system prompt in the application, it's stored as a prompt document in the Content Lake with the ID prompt.botAttendee. The function fetches it at runtime on every request.
This means the system prompt is editable content. A conference organizer can open Studio, find the "Bot Attendee" prompt document, and change how the bot introduces itself, what topics it declines to answer, or what tone it uses. No PR, no deploy. The change takes effect on the next message.
The full pattern for this, including the schema and the fetchSystemPrompt function, is in Store AI prompts in the CMS, not the codebase.
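As a rough sketch of the idea (the field name `content`, the injected fetcher, and the fallback prompt are all my assumptions; the linked post has the real schema and function):

```typescript
// Sketch of fetchSystemPrompt: read a prompt document's content field at
// runtime, fall back to a safe default if the document is missing.
export const DEFAULT_PROMPT =
  'You are a helpful conference assistant. Answer using the available content tools.'

// e.g. (query, params) => sanityClient.fetch(query, params)
type GroqFetcher = (query: string, params: Record<string, string>) => Promise<unknown>

export async function fetchSystemPrompt(
  promptId: string,
  fetchGroq: GroqFetcher,
): Promise<string> {
  const content = (await fetchGroq(`*[_id == $id][0].content`, {id: promptId})) as
    | string
    | null
  return content ?? DEFAULT_PROMPT
}
```

The fallback matters in practice: if an editor deletes or unpublishes the prompt document, the bot degrades to a generic persona instead of crashing on every message.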
When to use Agent Context vs Content Agent API
Yes, Agent Context requires more setup than Content Agent API. Three environment variables instead of two, an explicit MCP client, manual lifecycle management. That's a real tradeoff. But the control you get in return is meaningful for public-facing interfaces.
Content Agent API bundles the LLM and content access together. You don't choose the model. It has read and write permissions, and setup is simpler (two environment variables). Best for internal tools and write operations.
Agent Context separates them. You bring your own LLM and get content access as MCP tools. Read-only permissions, full model control, but more setup (three environment variables plus an MCP client). Best for public-facing, read-only interfaces.
The read-only constraint is the most important difference for public-facing bots. With Agent Context, the model literally cannot write to your content. That's not a system prompt instruction that a clever user might talk the model out of. It's a structural boundary enforced by the MCP endpoint. For an attendee-facing bot, that's exactly what you want.
The model choice matters too. Different models have different strengths for conversational tasks, and you may have existing API agreements or cost structures that make one provider preferable. Agent Context doesn't lock you in.
Get started
The full implementation is in the conference-starter repo. The attendee bot code lives in apps/bot/src/handler-attendee.ts and apps/bot/src/ai/agent-context.ts.
To use Agent Context in your own project:
- Install @sanity/agent-context and add the plugin to your Studio config
- Create an Agent Context document in Studio and copy the MCP URL
- Create a Viewer role API token
- Install @ai-sdk/mcp and @ai-sdk/anthropic (or your preferred provider)
- Use createMCPClient to connect, mcpClient.tools() to get the tools, and pass them to streamText
The Agent Context documentation covers the Studio configuration in more detail, including how to scope which document types the model can access and how to configure the GROQ filters that control what content is visible.
If you need write access or want the simplest possible setup, the Content Agent API post covers that pattern. The two approaches complement each other, and the conference-starter repo shows both running side by side in the same application.
Don't have a Sanity project yet? Create one free and try Agent Context with the conference-starter repo.