# [Architecture & DevOps](/learn/course/architecture-and-devops)

Whether you're first getting your project off the ground or developing new features in an existing one, a well-defined end-to-end development workflow is crucial for shipping your work efficiently and reliably without impacting the day-to-day content operations of your editors.

## [Introduction to Development Workflow](/learn/course/architecture-and-devops/introduction-to-development-workflow)

Sanity's code-first approach makes it uniquely suited for automation and naturally aligns with CI/CD to support safe, continuous iteration without disrupting content teams.

## Sanity's Mental Model

Sanity provides a unique architecture that decouples the content editing experience in the Studio from the underlying content storage in the Content Lake. The Studio, from your schema definitions and configuration, defines the structure of your content, enforces the rules and validation of that content, and allows you to customize the editorial interface for how your editors interact with and manage that content. You can think of the Studio as a customizable "window" through which your editors interact with the Content Lake.

![Diagram of the Sanity architecture, with the Studio as an editing layer on top of the Content Lake](https://cdn.sanity.io/images/3do82whm/next/83ef968df598f301495a0a6341e5fd0b6da20be5-12288x4754.png)

One of the key benefits of this architecture is that the Studio's schema and configuration live entirely in code, which means you can manage them in source control and test any changes as part of your regular development and QA process.
In other words, a solid development workflow allows you to:

* Develop and test new features in an isolated environment separate from production
* Promote changes from development to production environments in a controlled manner
* Allow content editors to continue their work uninterrupted in the production environment while development is ongoing

By setting up separate development and production environments, along with processes to migrate code and content between them, you can establish a smooth flow from development to release. This ensures that new features are properly tested before reaching production, and that content editors always have a stable production environment to work in that is insulated from development activities.

## What is DevOps?

According to [Atlassian](https://www.atlassian.com/devops),

> DevOps is a set of [practices](https://www.atlassian.com/devops/what-is-devops/devops-best-practices), [tools](https://www.atlassian.com/devops/devops-tools/choose-devops-tools), and a [cultural philosophy](https://www.atlassian.com/devops/what-is-devops/devops-culture) that automate and integrate the processes between software development and IT teams. It emphasizes team empowerment, cross-team communication and collaboration, and technology automation.

Continuous integration and continuous delivery (CI/CD) automates the development workflow above and allows developers to iterate quickly, catch issues early, and deliver new features seamlessly. Integrating the development and content workflows through CI/CD empowers developers and content editors to collaborate effectively on delivering new experiences. Developers can focus on building and shipping features, while content editors can create and manage content without disruption. This setup provides a robust foundation for ongoing development and content operations to occur in parallel, all in service of enabling your organization to realize its business goals.
In the upcoming lessons, we'll walk through the specific steps to configure your Sanity project with multiple environments and datasets to support this development workflow. You'll learn how to structure your project, manage datasets, and deploy Studios. By the end, you'll have the foundations of a setup to confidently develop and ship ongoing improvements to your project.

## [Setting Up Your Environments](/learn/course/architecture-and-devops/setting-up-your-environments)

Separate development and production environments ensure isolated testing, stable workflows, and safe content migrations without disrupting editors.

1. This course assumes you have already initialized a Sanity Studio as described in [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio).

## Create Development Dataset

When developing new features, the code changes developers make to schemas and other Studio configuration shouldn't impact content editors working in the production environment. That's why it's a best practice to have separate datasets and Studio deployments for development and production environments.

By provisioning a dedicated development dataset, developers can freely iterate and test code changes without worrying about interrupting the day-to-day content operations. This clean separation allows both content and development workflows to proceed in parallel, while keeping the production environment stable. As new features are validated in the development environment, the code changes can be promoted to production, and any necessary content migrations can be performed in a controlled manner. Developing in a separate environment also helps ensure that any schema changes are accompanied by the necessary migration scripts. Meanwhile, content editors can continue their work in the production dataset, insulated from any development activities.
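When you do need to perform such a migration, or seed a development dataset with realistic content, the Sanity CLI can export and import whole datasets. A minimal sketch, assuming an authenticated CLI session (the snapshot filename is arbitrary):

```sh
# Export a snapshot of the production dataset to a local tarball
npx sanity dataset export production production-snapshot.tar.gz

# Import that snapshot into a development dataset,
# replacing any documents already there
npx sanity dataset import production-snapshot.tar.gz development --replace
```

This gives developers production-shaped content to test schema changes against, without ever touching the production dataset itself.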
First, to complement the `production` dataset you should already have, create a `development` dataset using the CLI:

```sh
npx sanity dataset create development --visibility private
```

You should now see two datasets when you run `npx sanity dataset list` and in Manage (`npx sanity manage`).

## Using Environment Variables

Environment variables allow your Studio configuration to adapt to each deployment without modifying the codebase, making them essential for managing different environments. Rather than hardcoding the project ID and dataset, for example, you can instead use environment variables to statically replace them at build time.

First, initialize a new environment file by running:

```sh
npx sanity init --env --project [your-project-id] --dataset production
```

You'll see a new `.env` file in your workspace:

```text:.env
# Warning: Do not add secrets (API keys and similar) to this file, as it is source controlled!
# Use `.env.local` for any secrets, and ensure it is not added to source control
SANITY_STUDIO_PROJECT_ID="[your-project-id]"
SANITY_STUDIO_DATASET="production"
```

1. `.env` by default won't be ignored by Git; however, these two environment variables aren't considered sensitive. If you'd rather they weren't checked into source control, you can use `--env .env.local`.

Now duplicate `.env`, name it `.env.development`, and remove the project ID so you just have the following:

```text:.env.development
# Warning: Do not add secrets (API keys and similar) to this file, as it is source controlled!
# Use `.env.local` for any secrets, and ensure it is not added to source control
SANITY_STUDIO_DATASET="development"
```

1. In this example, we'll be checking these into source control. If you need to override a variable on your local machine, you can add a `.env[.mode].local` file with your override(s).
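For example, a hypothetical local-only override for development mode might look like the following (the dataset name is made up for illustration, and `.local` env files should be listed in `.gitignore`):

```text:.env.development.local
# Local-only overrides, not checked into source control
SANITY_STUDIO_DATASET="my-scratch-dataset"
```

Because `.local` files load last, this value wins over `.env.development` on your machine without affecting anyone else on the team.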
Finally, let's update our configuration files to read from environment variables:

```typescript:sanity.config.ts
import {defineConfig} from 'sanity'

export default defineConfig({
  // ...
  projectId: process.env.SANITY_STUDIO_PROJECT_ID!,
  dataset: process.env.SANITY_STUDIO_DATASET!,
  // ...
})
```

```typescript:sanity.cli.ts
import {defineCliConfig} from 'sanity/cli'

export default defineCliConfig({
  api: {
    projectId: process.env.SANITY_STUDIO_PROJECT_ID!,
    dataset: process.env.SANITY_STUDIO_DATASET!,
  },
  // ...
})
```

## Conclusion

In this lesson, we covered how to set up separate development and production datasets in your Sanity project. By creating dedicated datasets and configuring environment variables, you can establish a clean separation between ongoing development work and the stable production environment used by content editors.

With this foundation in place, you're ready to deploy separate Sanity Studios for each environment. In the next lesson, we'll walk through the process of deploying a development Studio and how to set CORS origins for each environment.

## [Deploying Environment-Specific Studios](/learn/course/architecture-and-devops/deploying-environment-specific-studios)

Deploying separate Studios ensures clean environment separation, safer iteration, and uninterrupted content editing.

With our datasets and environment files in place, let's now walk through the process of deploying separate Sanity Studios for each environment and setting up the proper CORS origins.

## Designate a Studio Host

First, we'll need to configure the subdomain for our Studio deployments. Sanity CLI will either use the `studioHost` option in `sanity.cli.ts`, if it's provided, or prompt for a hostname in the terminal. Like our project ID and dataset, we can use an environment variable to configure the hostname for our Studio deployment.
So let's add a new environment variable, `SANITY_STUDIO_HOSTNAME`, to our `.env` file:

```text:.env
# Warning: Do not add secrets (API keys and similar) to this file, as it is source controlled!
# Use `.env.local` for any secrets, and ensure it is not added to source control
SANITY_STUDIO_PROJECT_ID="[your-project-id]"
SANITY_STUDIO_DATASET="production"

# [hostname].sanity.studio
HOSTNAME="[your-hostname]"
SANITY_STUDIO_HOSTNAME="$HOSTNAME"
```

Then in your `.env.development` file add:

```text:.env.development
# https://www.sanity.io/docs/environment-variables
# Warning: Do not add secrets (API keys and similar) to this file, as it is source controlled!
# Use `.env.local` for any secrets, and ensure it is not added to source control
SANITY_STUDIO_DATASET="development"

# [hostname]-development.sanity.studio
SANITY_STUDIO_HOSTNAME="${HOSTNAME}-development"
```

1. Here we're using a variable `HOSTNAME` as the base, then using the `dotenv-expand` syntax to reference it and add a suffix.

Now let's add a `studioHost` option and set it to the value of our new environment variable:

```typescript:sanity.cli.ts
import {defineCliConfig} from 'sanity/cli'

export default defineCliConfig({
  api: {
    projectId: process.env.SANITY_STUDIO_PROJECT_ID!,
    dataset: process.env.SANITY_STUDIO_DATASET!,
  },
  studioHost: process.env.SANITY_STUDIO_HOSTNAME!,
  // ...
})
```

## Targeting Environments with Modes

Now that we've set up our environment variables and configured our CLI and Studio configuration files to read from them, we need a way to target a specific environment. Sanity CLI loads your environment variables in a predictable order, which we have leveraged to set environment variables for our different modes (`production` vs. `development`). `.env` will be loaded in all modes, so we'll use it as our fallback and add `development`-specific overrides. For example, we've overridden `SANITY_STUDIO_DATASET`, and we've suffixed `HOSTNAME` to set `SANITY_STUDIO_HOSTNAME`.
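Assuming `.env` keeps the production values and `.env.development` holds the overrides, the effective values resolve roughly as follows in each mode (a sketch for illustration, using the placeholder values from the files above):

```text
# production mode (default): loads .env only
SANITY_STUDIO_PROJECT_ID = [your-project-id]
SANITY_STUDIO_DATASET    = production
SANITY_STUDIO_HOSTNAME   = [your-hostname]

# development mode: loads .env, then .env.development overrides it
SANITY_STUDIO_PROJECT_ID = [your-project-id]           (from .env)
SANITY_STUDIO_DATASET    = development                 (overridden)
SANITY_STUDIO_HOSTNAME   = [your-hostname]-development (expanded from HOSTNAME)
```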
When running Sanity CLI, you can specify the intended mode for your commands. Commands like `build` and `deploy` run in `production` mode by default. To target a different environment, set the mode by specifying `SANITY_ACTIVE_ENV` in your terminal:

```sh
# builds Sanity Studio in `development` mode, loading `.env` and then `.env.development`
SANITY_ACTIVE_ENV=development npm run build
```

1. You can learn more about modes and environment variable loading order in [Environment Variables](https://www.sanity.io/learn/studio/environment-variables).

Now let's deploy our two Studio environments:

```sh
# Deploy the development Studio
SANITY_ACTIVE_ENV=development npm run deploy

# Deploy the production Studio
npm run deploy
```

The last step is to add your Studio environments to your CORS origins. Navigate to the Studio URLs that you've just created. If they haven't yet been added as CORS origins, you'll be prompted to add them in Manage. You can also either run `npx sanity manage` from your terminal or open Manage in your browser directly. Navigate to the 'API' tab and add your Studio URLs as origins with credentials allowed.

## Conclusion

Congratulations—you now have separate, environment-specific Studios configured and deployed! This setup gives your team the freedom to iterate safely in development while keeping production stable for content editors.

Up until now, you've been running CLI commands manually, carefully passing the right environment mode. With everything now structured and standardized, you're ready to take the next step: automating your deployment. In the next lesson, we'll connect these pieces into a CI/CD pipeline that streamlines your workflow and eliminates manual steps.

## [Automating Development Workflow](/learn/course/architecture-and-devops/automating-development-workflow)

Automate Sanity Studio deployments and CI checks that validate schemas and content, ensuring every code change is rigorously reviewed and production-ready.
Now that your environments and Studios are fully configured, it's time to automate the workflow. In this lesson, we will explore how automating the deployment of your Sanity Studio streamlines your development process and helps you achieve faster, more reliable releases. By transitioning from manual deployments to an automated workflow, you not only ensure that your production code is built and deployed consistently, but you also gain immediate feedback on changes with minimal human intervention.

## Development Workflow

![Flowchart showing the process of authoring a feature into production](https://cdn.sanity.io/images/3do82whm/next/02f5e9c440d506e7b23f11b9414a4029390f29ba-1564x5222.png)

When developing new features, for example adding a new schema definition or creating a custom input component, developers should follow a consistent process:

1. Start by checking out a new feature branch
2. Make code changes while running the Studio locally
3. Once the changes are ready for code review, push the branch to the remote and open a pull request
4. Once the code has been reviewed and validated, merge the pull request into the main branch

## Automate Deployment

Imagine you have just committed changes to a feature branch and opened a pull request. Instead of manually building and deploying your Sanity Studio, like we did in the previous lesson, an automated process springs into action.

The workflow is triggered by push or pull request events. First, it checks out the latest code from your branch, sets up the Node.js environment, and installs the dependencies. It then builds your Studio and deploys it to a PR-numbered hostname. As an added benefit, the workflow also automatically posts a comment with a link to the preview environment where reviewers can see your changes. When your code is merged into the main branch, the workflow builds and deploys the Studio to the production environment.
Once a pull request is closed, a separate job is triggered to clean up the associated preview deployment.

Here is a sample GitHub workflow that demonstrates this automated deployment process for a Sanity Studio.

1. Though written here for GitHub, these steps can be ported to any CI/CD provider and can be adapted to your preferred solution.

```yaml:deploy.yml
name: Deploy Sanity Studio

on:
  push:
    branches:
      - main
      - development
  pull_request:
    types: [opened, synchronize, reopened, closed]

permissions:
  contents: read
  pull-requests: write

env:
  SANITY_AUTH_TOKEN: ${{ secrets.SANITY_AUTH_TOKEN }}

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.ref_name }}
  cancel-in-progress: true

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    environment:
      name: ${{ github.ref == 'refs/heads/main' && 'Production' || github.ref == 'refs/heads/development' && 'Development' || 'Preview' }}
      url: ${{ steps.deploy.outputs.STUDIO_URL }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: lts/*
          cache: npm
      - run: npm ci
      - name: Set Studio hostname
        run: |
          if [ "${{ github.event_name }}" == "pull_request" ]; then
            echo "SANITY_STUDIO_HOSTNAME=${HOSTNAME}-pr-${{ github.event.pull_request.number }}" >> $GITHUB_ENV
          else
            echo "SANITY_STUDIO_HOSTNAME=${HOSTNAME}" >> $GITHUB_ENV
          fi
      - name: Build and deploy Sanity Studio
        id: deploy
        run: |
          if [ -z "${SANITY_STUDIO_HOSTNAME}" ]; then
            echo "Error: SANITY_STUDIO_HOSTNAME is not set" >&2
            exit 1
          fi
          if [[ "$SANITY_ACTIVE_ENV" == "development" ]]; then
            npm run deploy -- --yes --source-maps
          else
            npm run deploy -- --yes
          fi
          echo "STUDIO_URL=https://${SANITY_STUDIO_HOSTNAME}.sanity.studio" >> $GITHUB_OUTPUT
      - name: Post preview link
        if: github.event_name == 'pull_request' && github.event.action == 'opened'
        uses: actions/github-script@v7
        with:
          script: |
            const body = [
              '**🚀 Preview environment has been deployed!**',
              `Visit [${process.env.STUDIO_URL}](${process.env.STUDIO_URL}) to see your changes.`,
              "*This is a temporary environment that will be undeployed when this PR is merged or closed.*"
            ].join('\n\n')
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body,
            })
        env:
          STUDIO_URL: ${{ steps.deploy.outputs.STUDIO_URL }}

  teardown:
    name: Teardown
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    environment:
      name: Preview
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: lts/*
          cache: npm
      - name: Install dependencies
        run: npm ci
      - name: Cleanup PR preview
        run: npx sanity undeploy --yes
        env:
          SANITY_STUDIO_HOSTNAME: ${HOSTNAME}-pr-${{ github.event.pull_request.number }}
```

## Adding Pull Request Checks

Now that your Sanity Studio is deployed automatically, it's crucial that every change merged into the main branch has been thoroughly reviewed and validated. When a pull request is opened or updated, your CI pipeline not only runs the typical linting and type-checking jobs but also includes Sanity-specific checks to catch errors early. If any of these jobs fail, detailed reports are automatically posted to the pull request, providing instant feedback for your team. In this way, before any merge occurs, your code is guaranteed to have passed all the necessary automated checks.

Within your CI pipeline, the commands `sanity schema validate` and `sanity documents validate` play critical roles in ensuring that your code does not introduce breaking changes. These validation steps create a robust safety net that goes beyond simply automating deployments.

The command `sanity schema validate` is used to verify that your schema definitions are error-free.
When you run this command, it checks your schema files for syntax errors, misconfigurations, or other issues that might cause runtime errors.

In contrast, the command `sanity documents validate` verifies that the content stored in your Sanity dataset conforms to the constraints defined in your schema. This command inspects each document to ensure that required fields are present, data types match the expected formats, and any additional validation rules you have implemented are adhered to. This step is essential for maintaining data integrity, and any discrepancies—such as missing values or incorrect data formats—are flagged to prevent problematic changes from being merged into production.

1. Changes to your content model often require migration scripts to ensure data integrity. You can learn more about migrating data in [Handling schema changes confidently](https://www.sanity.io/learn/course/handling-schema-changes-confidently).

```yaml:ci.yml
name: CI

on:
  pull_request:
    types: [opened, synchronize, reopened]
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read
  pull-requests: write

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

env:
  SCHEMA_VALIDATION_REPORT: schema-report.txt
  DATASET_VALIDATION_REPORT: dataset-report.txt

jobs:
  typecheck:
    name: Typecheck
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: npm
          node-version: lts/*
      - run: npm ci
      - name: Typecheck
        run: npm run typecheck

  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: npm
          node-version: lts/*
      - run: npm ci
      - name: Lint
        run: npm run lint -- --max-warnings 0

  validate-schema:
    name: Validate Studio schema
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: npm
          node-version: lts/*
      - run: npm ci
      - name: Validate Studio schema
        id: validate
        run: |
          exit_code=0
          npx sanity schema validate >> ${{ env.SCHEMA_VALIDATION_REPORT }} || exit_code=$?
          {
            echo "## Schema Validation Results"
            echo "\`\`\`"
            cat ${{ env.SCHEMA_VALIDATION_REPORT }}
            echo "\`\`\`"
          } >> $GITHUB_STEP_SUMMARY
          exit $exit_code
      - name: Post schema validation report
        uses: actions/github-script@v6
        if: failure() && steps.validate.outcome == 'failure'
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('${{ env.SCHEMA_VALIDATION_REPORT }}', 'utf8');
            const body = [
              '### ❌ Schema validation failed',
              '',
              `\`\`\`${report}\`\`\``,
            ].join('\n');
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body,
            });

  validate-dataset:
    name: Validate dataset
    runs-on: ubuntu-latest
    if: (github.event_name == 'pull_request' && github.base_ref == 'main') || (github.ref == 'refs/heads/main')
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: npm
          node-version: lts/*
      - run: npm ci
      - name: Validate dataset
        id: validate
        run: |
          exit_code=0
          npx sanity documents validate --yes --level info >> ${{ env.DATASET_VALIDATION_REPORT }} || exit_code=$?
          {
            echo "## Dataset Validation Results"
            echo "\`\`\`"
            cat ${{ env.DATASET_VALIDATION_REPORT }}
            echo "\`\`\`"
          } >> $GITHUB_STEP_SUMMARY
          exit $exit_code
        env:
          SANITY_ACTIVE_ENV: production
          SANITY_AUTH_TOKEN: ${{ secrets.SANITY_AUTH_TOKEN }}
          SANITY_STUDIO_PROJECT_ID: ${{ vars.SANITY_PROJECT_ID }}
      - name: Post dataset validation report
        if: failure() && steps.validate.outcome == 'failure'
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('${{ env.DATASET_VALIDATION_REPORT }}', 'utf8');
            const body = [
              '### ❌ Dataset validation failed',
              '',
              `\`\`\`${report}\`\`\``,
            ].join('\n');
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body
            });
```

By incorporating these validation steps into your GitHub workflow, you ensure that changes undergo rigorous review before they trigger the automated deployment process. This CI process not only enhances the quality and reliability of your Sanity Studio but also builds confidence that both its structure and underlying data are sound when updates are pushed to production.

You'll now have a robust DevOps process that enables continuous development while maintaining a stable production environment for your content team. This approach balances the needs of both developers and content editors, ensuring smooth operations and reliable deployments.

# [AI-powered Sanity development](/learn/course/code-with-ai)

With its all-code configuration, Sanity is a perfect choice as a content backend for AI tooling, and developers of varying levels of experience—including those with none at all—greatly benefit from AI assistance.
Done the right way, AI-assisted development delivers the best results: you can take on more ambitious projects, lower the barrier to entry, and create happier content authors.

## [The present future of Sanity development](/learn/course/code-with-ai/the-present-future-of-sanity-development)

Coding is no longer just for developers, but AI won't "do it all." Level-set your expectations on what AI tooling can and can't do for the pace and quality of development.

Thanks to Sanity's all-code configuration, AI-assisted development lets you configure a new Sanity Studio or integrate Sanity Client into your applications faster than low-code or no-code tools, which require hundreds of clicks in a browser to complete.

1. The videos in this course are also available as a full-length walkthrough; watch [Build web apps with your voice and Cursor](https://youtu.be/j6zrfJ56KYE) on YouTube.

However, without guidance, AI tools will typically write the most average code possible. Among other problems, this can lead to a Sanity Studio configured only with the defaults, which will not provide the best possible experience for your content creators. While most AI tools understand the APIs Sanity makes available, they are unlikely to follow our opinionated best practices, which we have published over the years in guides and courses on Sanity Learn.

In this course, you'll be onboarded with several of the tools we use at Sanity, which we know will help you ship ambitious projects faster and better.
## What you'll learn

This course covers how to:

* Write prompts with clear expectations for the best results
* Set up an AI-assisted code editor, Cursor
* Apply best-practice "rules" to guide the responses to your prompts
* Write prompts more efficiently with your voice
* Create and import placeholder content
* Rapidly build a front end for your content

### What you should already know

This course expects you to have some understanding of development and some appreciation for Sanity, the Content Operating System. In short, and among other things, Sanity offers a hosted backend—the Content Lake—and an all-code administration dashboard called Sanity Studio. If you have any other questions, ask your favorite AI tool.

## You don't need AI, but it'll help

You don't even need to take this course. You _could_ just open up any AI tool and ask it to do everything in one shot. This is most likely to lead to an impossible-to-maintain project that you don't fully understand and that your authors will not appreciate. Frustration will follow.

In this course, you'll learn how to write prompts that will keep your project focused and achieve the best outcomes, both for you as a developer maintaining your project and for your authors using your Sanity Studio.

Getting from 0-80% has never been easier. Going beyond that still requires finesse. As amazing as AI tooling is, it won't do the whole job for you: not in terms of technical completeness, nor even in knowing what the "whole job" is, what its goals are, or who it benefits. It can't—at this moment—understand human needs and translate them into technical solutions. That's your job. That's why we still need a human in the loop.

## Why I wrote this course

Whether you're a seasoned programmer, a rookie developer, or a complete novice, you may have formed some opinion on AI tools. I'm a 10+ year web developer and recently reformed AI cynic who has seen the light: our futures involve AI tooling, and there's no going back.
If you're new to programming, welcome! There's never been a better time to dip your toe in the water. AI tools will do the work of writing things you haven't learned yet. You'll get started faster than ever before and get to focus purely on the outcomes that you want, and less so on the code.

Note that this course assumes you already have some degree of developer knowledge in terms of writing code or using code editing tools. The course shouldn't be _technically_ challenging, because AI will be writing the majority of the code. But as a general rule, we advise you to ask the AI to explain the work it has done if anything is unclear.

On the other hand, if you have programming experience, I hope that this course will show you how to work faster and better with modern tools, especially when constrained by opinionated rules.

You may also benefit from taking the [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio) course to understand the Sanity platform conceptually. There's no better way to understand something than to get hands-on with it yourself.

## More reading

Beyond the material presented in this course, I recommend reading the following valuable, hype-free content:

1. [Best practices for AI-powered Sanity development](https://www.sanity.io/learn/developer-guides/ai-best-practices)
2. [How I use LLMs to help me write code](https://simonw.substack.com/p/how-i-use-llms-to-help-me-write-code) by Simon Willison

## [Introduction to prompting](/learn/course/code-with-ai/introduction-to-prompting)

Get better results from AI tools by crafting effective prompts, setting realistic expectations, and using them for interactive brainstorming sessions.

Much of working with AI tooling requires "prompting," and writing good prompts is referred to as "prompt engineering": essentially, writing instructions into a text box that the AI will execute.
For the entire history of computing, programming relied on getting consistent outputs from consistent inputs. This is not true when working with AI tools, so knowing how to write good inputs to get _predictably good_ outputs becomes critically important. Many factors will define the results that you get: the model that you are working with (for example, Claude Sonnet by Anthropic, ChatGPT by OpenAI, etc.), the context that it has, and above all the quality of your prompt.

## How to write good prompts

Through a lot of hype and the need for attention, AI tools have been largely oversold in terms of the scale of what they can do with short prompts and large codebases. If you have only seen a few tweets and demos, you may expect to be able to write a short description of what you need, watch the computer magically do the rest, and put your feet up. This is not the case. Here are a few tips on writing better prompts.

### Fill in the gaps

You should never consider any AI tool or model to be a flawless oracle, an all-knowing entity that can understand your deepest thoughts and desires and make exactly what you want. A **bad** prompt that's missing all your context looks like this, and because AI tools are hard-coded to always respond with something, you'll get _something_, but it's extremely unlikely to be what you want.

```text:Prompt
Create a blog
```

While incredibly capable, AI tools don't know what they don't know. They have gaps in their knowledge: technical gaps, like not knowing about a new framework that was released yesterday, and context gaps, like not knowing what you don't tell them. Novice prompt engineers leave out details, and every AI model will do its best to fill the void. Without contextual, specific prompting, AI tools will at best produce the most average (read: uninspiring) result or at worst make up something (also known as a "hallucination") that doesn't actually work. Don't let AI fill the gaps in your prompts with its own ideas.
### Ask, don't tell

One way to approach writing good prompts is to ask instead of tell. Depending on your level of expertise in writing software, you may not fully understand the best course of action, especially at the very beginning of a project. You are therefore more likely to get great results by asking the AI what _should_ be done. A **good** prompt asks context-rich questions:

```text:Prompt
I run a local bakery and would like to start blogging. What's a suitable technology stack that you would choose to build this for me?
```

Describe the problem that you have and that you need its help to solve. Describe who you are and what you are expecting. Describe to the computer the human problems that you are being paid to solve. In the early stages of a project, AI tools are most valuable as a sparring partner for your ideas or a brainstorming session.

### Set your expectations accordingly

Despite having access to almost all knowledge in human history, you should consider your AI tool to be no smarter than a fresh, out-of-the-box intern. An intern that has only been awoken at the moment you pressed "send" in the prompt text box. An intern that will confidently lie at any moment. An intern that you can simply put away and replace the moment you are unhappy with the results.

Okay, maybe "intern" isn't a good simile, and it's important not to consider your AI tools "human." But it's kind of the best framing we have for now, and you get the point: frame your expectations. For all that AI tools can do, they can quickly lead to frustration if you come to them with mismatched expectations.

### KISS

As it is in the physical world, so it is true in the digital: _keep it simple, stupid_. Keep your prompts short, but with all required context. Unless you are prepared to spend a lot of time pre-planning your project into the perfect prompt, don't expect to write one prompt and get the result you want.
Break down your required outcome into bite-sized pieces—whether the outcome of each prompt is to write code or to keep ideating. ## Summary In short, when writing prompts: * Early in a project, ask, don’t tell * Ideate and brainstorm with the AI; you don’t need to write code with every prompt * Level-set your expectations about how much AI can actually do * Make sure the AI has all the context you had when asking the question, so that it can answer it In the following lesson, we'll start writing some prompts inside of a code editing tool.## [Writing code with AI assistance](/learn/course/code-with-ai/writing-code-with-ai-assistance) An introduction to Cursor, the AI-powered code editor you'll use in this course. Get to know the Chat window and the difference between an Ask and an Agent. In 2021, [VS Code](https://code.visualstudio.com/) popularized AI coding assistance with GitHub Copilot. This coding assistant helps modify or complete single lines of code, and it has since been expanded with a chat interface and multi-line editing. While it continues to receive updates, in my opinion it has not kept pace with alternative options. This space is evolving so rapidly that this course will likely be updated in the future with alternate recommendations. In this course, we'll use Cursor, a fork of VS Code. At the time of writing, it is emerging as the most popular IDE for authoring code with AI assistance. [Windsurf](https://codeium.com/windsurf) and [Zed](https://zed.dev/) are popular alternatives, but we will not use them in this course. You may also have experience writing code through copy-and-paste sessions with in-browser AI tools like OpenAI's ChatGPT or Anthropic's Claude. However, in this course, you'll use AI tools closer to the codebase. ## Installation Cursor is a free app which can be [downloaded from their website](https://www.cursor.com/). While the paid plan provides access to better models and features, the free version is still feature-rich. 
You should be able to complete this course without upgrading. 1. **Download** and install Cursor ## Create a new Sanity project Open Cursor, then open the Terminal by clicking View -> Terminal or pressing `Cmd+T` 1. We could have "prompted" a new Sanity project into existence, but I wanted to make sure you start with this specific experience. Run the following command to create a new Sanity project. If you are not logged into Sanity in the terminal, you will be asked to do so. If you do not yet have a Sanity account, you can create one for free. 1. **Run** the following from the command line to create a new Sanity project ```sh:Terminal npm create sanity@latest -- --template blog --create-project "AI-powered Sanity" --dataset production --typescript --output-path ai-powered-sanity cd ai-powered-sanity ``` The install command above has several options preselected so that you won’t need to weigh up each choice. Install the dependencies as instructed and enter the `ai-powered-sanity` directory. 1. **Run** the following from the command line to start the development server ```sh:Terminal npm run dev ``` You can now open the Studio at [http://localhost:3333](http://localhost:3333) and log in. ![A blank Sanity Studio with the blog template schema running in local development](https://cdn.sanity.io/images/3do82whm/next/c1db705d3d1d39480543c8ef88a6fdb43d6561c9-2240x1480.png) You now have a configured Sanity project (a cloud-hosted, real-time content database) and a local Sanity Studio development server (an admin interface for authoring content). In the script that created this project, we chose the “blog” template to create a new Sanity Studio. This is why you can currently see Post, Author, and Category document schema types in your studio. 1. This course **won't** cover [Hosting and deployment](https://www.sanity.io/learn/studio/deployment)—see our documentation and other courses on Sanity Learn for more details. It’s time to finally do some AI’ing. 
### Your mileage may vary Before continuing, consider that many factors affect the results you'll get from a prompt: which model you choose, its capacity at the time, and so on. During this course you'll be given prompts to enter, and I'll detail in broad terms the responses you are likely to receive. Just know that your specific results may differ in some small way. This is part of the nature of AI-assisted coding. ## Prompt some new content types 1. **Open** the `ai-powered-sanity` directory in Cursor. Go to File -> Open or press `Cmd+O` and select the folder. Alternatively, install a command to open the current directory in Cursor from the terminal. Open the Command Palette `Cmd+Shift+P` and type "shell install cursor." ![Cursor editor with command palette open](https://cdn.sanity.io/images/3do82whm/next/358fa1bf9654b7914dde5533b6586a9f03c14ea4-2240x1480.png) You can now type the below into your Terminal to open the current directory in a new Cursor window. ```sh:Terminal cursor . ``` You should now see a view similar to the image below, with your project folder open. ![The Cursor code editor with file browser open](https://cdn.sanity.io/images/3do82whm/next/9fc8632eb6f35a2dbf0759617e2d64065e7e6267-2240x1480.png) ### Chat: Ask vs Agent Press `Cmd+L` to open the chat in “Ask” mode. Prompts written here will return responses that you will need to manually **apply** to the files in the project. This is a similar experience to ChatGPT or Claude. Press `Cmd+I` to open the chat in “Agent” mode. Prompts written here will be automatically written to files in the project. It may also ask you to run commands. ![Cursor's chat panel with the choices of Agent and Ask](https://cdn.sanity.io/images/3do82whm/next/361549c8c53ab9277fe3a5c53e48c2002488bf70-2240x1480.png) If you'd like to introduce a little chaos, open Cursor settings (`Cmd+Shift+J`) and enable "YOLO mode" to have all commands run automatically. ### Ask about the project 1. 
**Open** the Chat in “Ask” mode and enter the following prompt ```text:Prompt Look through this codebase and tell me what you know about this project. ``` The response should scan through the codebase of the current directory and make some determinations about the project. It should conclude something like: > This project contains a Sanity content management system designed to manage blog posts. 1. If your answer is something very different, try adding the codebase to the context of your prompt by typing or clicking the `@` symbol and selecting "Codebase." ![Cursor code editor with "codebase" being added to the context](https://cdn.sanity.io/images/3do82whm/next/bd06da75b2fc7bffe1704eed69f9ddf2756a0833-2240x1480.png) ### Extend the project Imagine we wanted to add to the currently available document types. Before proceeding, we could ask for some guidance. 1. **Enter** a second prompt to ask for help adding more schema types ```text:Prompt You are mostly right. This is a content management system for a blog. However, my business also includes store locations. How do you think we could represent that in the content model? ``` The result is an example schema type for Sanity Studio to represent a store location. It's a reasonably good summation. However, in my results it had some inconsistencies that I don't love. For one, it says that the geopoint schema type requires a plugin, which is not true. And `storeLocation` as a content type feels too specific. We can help guide the AI tool to produce a better result. 1. **Enter** one more prompt to perfect the new schema type ```text:Prompt I think we should just call them locations, not store locations in case we want to reuse this type for other physical locations in future. Also, geopoints do not require a plugin. ``` If you are happy with the new files, you can now apply them to the project by clicking the play button at the top right of both code examples. 1. 
**Apply** the new `location.ts` and updated `schemaTypes/index.ts` files You should now see "Locations" as a document type in the Studio. ![Sanity Studio structure tool showing Post, Author, Category and Location document types](https://cdn.sanity.io/images/3do82whm/next/bfb2b12de29764f7b3e56a2cafa2702c3a1d100a-2240x1480.png) Now that our content model is extended, it seems too specific to call a person an author. AI tools can write new code and refactor existing code. ### Agentic workflows Let's give "Agent" mode a try and trust it to write code directly to the project. 1. **Open** a new Chat in “Agent” mode and enter the following prompt ```text:Prompt This project contains an @author.ts document type. However, this is too specific for our use case. I want to make it generic to represent a person. Update all the files necessary that refer to this document. ``` Note that we are referencing the `author.ts` file directly by adding it to the prompt’s context. This helps keep the chat focused on the specific problem we’re solving without it having to do its own investigation work. This should now: * Create a new document for the `person` document type * Update the reference field in `post` to `person` instead of `author` * Update the schema imports in `schemaTypes/index.ts` * Delete the original `author.ts` file Accept all these changes and see the Studio update. And with that, you have now both extended and refactored an existing content model. The codebase is simple for now, but so are our asks. If you continue to work this way with prompts, you will continue to have success. Let's take a quick detour to learn how to make prompting even faster.## [Voice dictated prompts](/learn/course/code-with-ai/voice-dictated-prompts) You may find it much faster and more natural to write prompts with your voice rather than your hands. Here's how I like to do it. I can type over 100 words per minute. 
I've been proud of that for a long time, to the point of—for better or worse—considering it part of my personality. But there's just something about writing prompts that I find so laborious. It feels like micromanagement. Having to write out a task in full instead of just _doing_ it. Fortunately, I have found a way to reduce the friction of writing prompts: I don’t write them at all; I say them. In fact, I barely “wrote” any of this course. The words you are reading right now, I spoke into my microphone and cleaned up with AI. What a time to be alive. ## Install Superwhisper While there are many voice dictation tools available, and your system likely comes with one built in, I have found Superwhisper to be an excellent choice. It's free to try. 1. **Download** and install [Superwhisper](https://superwhisper.com/). ## Speak a prompt 1. **Open** a new Chat in “Agent” mode and speak something like the following prompt ```text:Prompt The title of this project is AI Powered, but it's just a blog for my local bakery. Can you please update any references to AI in this project to My Bakery? ``` The agent should successfully review the project’s configuration files, including `package.json` and `sanity.config.ts`, and update the existing “AI-powered” value in both. ### Speech-to-text alternatives If for some reason you're not happy with Superwhisper, there are alternatives that will perform similarly. * [VS Code Speech](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-speech) is an extension by Microsoft for use in VS Code * [Wispr Flow](https://wisprflow.ai/) is a similar OS application with some additional features Now that we’re more comfortable with prompting, let’s add some guardrails to the results.## [Agent rules](/learn/course/code-with-ai/ai-rules) By default, AI tools will write the most average code; with a little extra guidance, they can be much more expressive—to the benefit of your authors. 
So far, Cursor has just used the defaults found within its corpus of understanding, writing the most average implementation of the Sanity Studio configuration API. 1. See the documentation for more guidance on [Best practices for AI-powered Sanity development](https://www.sanity.io/learn/developer-guides/ai-best-practices) While functional, it's far from the most feature-complete, best-practice example of a Sanity Studio. Fortunately, a number of years ago, I wrote a guide called [The Opinionated Guide to Sanity Studio](https://www.sanity.io/guides/an-opinionated-guide-to-sanity-studio). Before now, a developer would have to keep these opinionated code styles in mind and effectively editorialize the way they write configuration code. Now, we can automate it. There is an emerging standard called [AGENTS.md](https://agents.md/) for guiding the implementations of AI tools. We have made a subset of the “opinionated guide” available in the AGENTS.md format to add to any Sanity project. 1. **Add** this [`AGENTS.md`](https://github.com/sanity-io/ai-rules/blob/main/AGENTS.md) file to your project root The schema type files that came with the Studio, and those that we have created since, don't follow the opinionated guidance. They don't express the full potential of the Sanity Studio document authoring experience. Let's get the agent to refactor the schema type files using the rules we've added to the project. 1. **Open** a new Chat in “Agent” mode and speak something like the following prompt ```text:Prompt Use @AGENTS.md to refactor the schema type files imported into @index.ts ``` Once complete, you should note significant changes to the way the schema types are configured. Two notable examples are: * all document types and objects have icons and previews in lists * fields are now arranged in field groups 
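For a rough idea of what this refactoring produces, a schema type following the opinionated guidance might look something like the sketch below. This is an illustration only, not the exact output you'll get: the icon, field names, and groups here are made up for the example.

```ts:schemaTypes/location.ts
// Illustrative sketch only — your agent's output will differ
import {PinIcon} from '@sanity/icons'
import {defineField, defineType} from 'sanity'

export const locationType = defineType({
  name: 'location',
  title: 'Location',
  type: 'document',
  // An icon makes the type easier to spot in document lists
  icon: PinIcon,
  groups: [
    {name: 'details', title: 'Details'},
    {name: 'geo', title: 'Geolocation'},
  ],
  fields: [
    defineField({name: 'name', type: 'string', group: 'details'}),
    defineField({name: 'address', type: 'text', group: 'details'}),
    // The built-in geopoint type needs no plugin
    defineField({name: 'position', type: 'geopoint', group: 'geo'}),
  ],
  // A preview shows richer information in lists than the default
  preview: {
    select: {title: 'name', subtitle: 'address'},
  },
})
```

Compare this to the bare-bones schema types the agent wrote earlier, which typically include only `name`, `type`, and `fields`.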
![Sanity Studio showing four document types with icons and a new Post document being created](https://cdn.sanity.io/images/3do82whm/next/cb4a08a20e0e2c2652c418b4bb15861959561cb3-2240x1480.png) You can now imagine how, by entering prompts, you could continue to expand and refactor the content model—while still maintaining best practices throughout. ## Referencing documentation The same `@` syntax that references files and rules in chat can also reference URLs, such as the Sanity documentation. If you're not getting satisfactory results from a model when trying to interact with some part of Sanity's API, you may benefit from pointing it at accurate, up-to-date information. Unsatisfactory results are often because models have a "cutoff date," which may be well behind the current APIs. A common example of this causing confusion with Sanity is that the "Desk" tool was renamed "Structure" in May 2024, but your AI-generated code probably still tries to import functions and types from `sanity/desk`. So if you were creating configuration for the Structure tool, you may wish to reference a documentation page like the [Structure Builder cheat sheet](https://www.sanity.io/learn/studio/structure-builder-cheat-sheet). ![Code editor with a chat window open showing a prompt to setup structure builder](https://cdn.sanity.io/images/3do82whm/next/22054f9582efe9026c1e5515410c71bad0b9ea01-2240x1480.png) 1. You may also wish to add the entire Sanity documentation to Cursor's context by using the @Docs command. [Learn more on Cursor's documentation](https://docs.cursor.com/context/@-symbols/@-docs). ## Creating content These rules also contain guidance for generating placeholder content, which is helpful during development. Let's make some in the next lesson.## [Rapidly generating placeholder content](/learn/course/code-with-ai/rapidly-generating-placeholder-content) You may think the only use for AI tools is to write code, but we can also use them to write content and run commands to import it. 
Our content model is defined entirely in code, and content can be imported into a Sanity Studio dataset in bulk by creating an NDJSON file. This is the file format that datasets are exported to, where every document in a dataset is represented as JSON on a single line. Writing scripts to create files like these has previously been a laborious process, especially when it comes to richer data structures such as [Portable Text](https://www.sanity.io/learn/developer-guides/presenting-block-text). This is the sort of complex busywork that AI excels at and can make simple. The opinionated rules that you added in the last step contain a section of guidance on creating these sorts of files, with helpful inclusions like how to include images. 1. **Open** a new chat in "Agent" mode and enter something like the following prompt. ```text:Prompt I need to create some placeholder content to be able to validate my content model. Looking at all of the content types in the @schemaTypes directory, create an NDJSON file and import it into the production dataset. When running the import, overwrite existing document IDs. Use the guidance from @sanity-opinionated.mdc ``` You may be prompted to select a dataset when running the import, but once completed, you should have some posts and other document types in your Sanity Studio. ![Sanity Studio showing a blog post](https://cdn.sanity.io/images/3do82whm/next/b526f73bec432ed2908a0b31ee7c32d1fb3d7f5e-2240x1480.png) The prompt above could have been extended to be more creative or to ask for a specific number of documents to be created. Try re-running it with a few different takes. Now that all of the content is represented as a single file, you could also write prompts to make edits directly to that file and re-import it. 
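As a quick illustration of the NDJSON format, a hypothetical two-document file might look like this, where every line is one complete JSON document. The IDs, names, and field values here are made up; your generated file will match your own schema types:

```text:example.ndjson
{"_id":"location-1","_type":"location","name":"Main Street Bakery","address":"1 Main Street"}
{"_id":"post-1","_type":"post","title":"Our first post","slug":{"_type":"slug","current":"our-first-post"}}
```

Because each document sits on its own line, the file can be streamed and edited line by line, which is what makes it such a convenient target for prompts.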
Now that we have content, we can render it in a front end.## [Adding Sanity content to any front end](/learn/course/code-with-ai/adding-sanity-content-to-any-front-end) You can pick whatever framework you'd like to complete this lesson. It's up to AI—and your prompting skills—to make it work. Our project currently contains a Sanity Studio at the root, but if we're going to also create a frontend to render Sanity data, it should probably live in its own directory. Usually this would mean busywork for us with the file system, but Cursor Agent can take care of it instead. 1. It's always good practice to commit your work regularly, especially before making large project changes such as reorganizing files. This lesson won't prompt you to do this, but if you are familiar with git, you are advised to. 1. **Open** a new Chat in "Agent" mode and prompt it to rearrange your project files ```text:Prompt This project currently has a Sanity Studio in the root, but I would also like to add a frontend later. Move all of the project files to their own folder called "studio". Add a new README to the root. Add a package.json and .gitignore to the root. Do not create any frontend code. ``` Once this completes, it should have successfully moved all of the working files into a directory called `/studio`, left the Cursor rules in the root, and created an updated `README` that details the project. It may have also created an updated `package.json` file with scripts to run the studio from the root directory. In my example, it left the `node_modules` directory in the root, so you may need to install your dependencies in the studio directory again. ## Pick a front end, any front end In every course that I have written before this one, I've had to be very specific about which steps you are going to take. However, I can take great liberties in an AI-powered course because I am relying on the AI's ability to perform most of the technical tasks. 
All we need to do is describe _what_ we want, not recall _how_ it should be done. And so instead of telling you which framework to use, I'm going to trust that the AI has a reasonable understanding of the best way to do things. Admittedly, this is not the most reliable method of getting bulletproof code. If you do want to start with a particular framework, we have many templates available, which will give you a handcrafted starting point. 1. [Visit the Templates page](https://www.sanity.io/templates) for hand-written starter kits ## Prompt a front end I'm not an expert in Astro, so I'm going to choose it for my example. 1. **Open** a new Chat in "Agent" mode and prompt it to create a new project in the front end of your choice ```text:Prompt Create a new, blank Astro website within its own folder in this project. ``` Once the prompt finishes, you should have a new directory with some instructions on how to continue to configure your build. It may have even started the development process for you. It's now time to integrate your Sanity content within it. Before prompting any code changes, first check how confident your AI assistant is about performing the task. 1. **Open** a new Chat in "Agent" mode and prompt it to clarify what it knows about Sanity and the framework of your choice ```text:Prompt Without writing any code, tell me how well you understand importing content from Sanity into an Astro frontend. ``` Hopefully, it's very confident! However, we should never be too trusting of AI's confidence or give it too much to do in one go. Let's keep our asks small for now and just get it to successfully query all of the blog posts in our project. Because we have all of our Sanity Studio configuration files within the same project, it should be able to read through them to get a good understanding of the project details and the GROQ queries it will need to write to complete this task. 1. 
**Open** a new Chat in "Agent" mode and prompt it to look at the Studio directory for clues on how to integrate Sanity and fetch the latest blog posts ```text:Prompt Excellent. In that case, I would like you to look at - @sanity.config.ts to find project details about my Sanity Studio - @schemaTypes to understand the document types that we have available - @sanity-opinionated.mdc on how to write GROQ queries Then update the Astro front end to query for the latest blog posts. Render them on the home page as a simple list. Do not create individual routes for blog posts. We just want to confirm this first small task before taking any further action. ``` The Agent should then work through several steps: researching the project, installing dependencies, configuring the Sanity Client, writing a GROQ query, and eventually querying for and rendering blog posts from your Sanity project. What you do next is entirely up to you. You may like to add styling, render individual blog posts, author pages, or store locations. Whatever you need, just ask. ## Back-to-front coding One of the major benefits of having both your back and front end in code is that you can now make changes to both at once. Say you wanted to rename a field or add a new field to a document type. You could write a prompt that will update both the Sanity Studio configuration and the front end layout.## [AI-unassisted quiz](/learn/course/code-with-ai/ai-unassisted-quiz) AI can't help you here. Let's reflect on what you've learned. We're in a wonderful future where AI can do most of the grunt work for you, and you can focus on solving problems for you and your authors. But without intentionally committing your new understanding to memory, you may find yourself tripping up time and time again. Here's a quick quiz to reinforce what you've learned in this course. You're on your own now. **Question:** What will an "agentic" AI tool do? 1. Brainstorm more deeply 2. Take actions independently 3. Give better responses 4. 
Work faster **Question:** What will improve the response from a prompt? 1. Shorter words in your prompt 2. A faster internet connection 3. More context about what you're doing 4. Writing TypeScript **Question:** What is the purpose of Cursor rules? 1. To get the same result from prompt responses 2. To add guardrails to prompt responses 3. To get faster prompt responses 4. To teach the AI new skills **Question:** Before having an AI write code, you should 1. Take a good, hard look at yourself 2. Memorize the documentation 3. Ask it about the best approach forward 4. Ask it to rewrite your app in Rust **Question:** The major benefit of AI tooling for development is: 1. Building applications with code you don't understand 2. Forcing authors to configure their own apps 3. Rapidly prototyping things no one will ever use 4. A way to do better work, faster# [Content-driven web application foundations](/learn/course/content-driven-web-application-foundations) Combine Sanity and Next.js and deploy to Vercel via GitHub to get the fundamentals right, powering a fast and collaborative development and content editing experience. ## [Building content-editable websites](/learn/course/content-driven-web-application-foundations/building-a-content-editable-website) Sanity powers content operations beyond a single website or application, while Next.js focuses on best-in-class content delivery. Combine them into a powerful modern stack to build content-driven experiences. 1. The videos in this course are, in parts, out of step with the written lessons. Follow the lesson text and code examples for the latest implementation best practices. There are no shortcuts to achieving outstanding results. Time spent learning the fundamentals of website development in a modern context will set you up for future success. ## About this course There are [ready-made templates](https://www.sanity.io/templates) to create websites. There are "One-click Deploy" buttons to rapidly get something online. 
You'll get _something_ faster with those but learn very little. This course will teach you how developer teams build production-ready web applications from the ground up, giving you an appreciation of Sanity and Next.js from first principles. To complete this course, you will copy and paste commands, create and modify local files, set up your repository, and deploy from your Vercel account. ### Building "Layer Caker" ![An index of blog posts for a website about cakes](https://cdn.sanity.io/images/3do82whm/next/bf8ad9ca2dc4c162305171cc1d5e8973d3d0c3a7-2240x1480.png) Throughout the courses in this track, you'll play the role of a developer tasked with beginning the construction of a web application for a cake-manufacturing superstore, Layer Caker. By the end of this first course, you will have created and deployed a blog on Next.js using Tailwind CSS for styling and an embedded, configurable content management dashboard called Sanity Studio. Future courses within this track will continue to expand on this with interactive live previews for Visual Editing and website specifics like page building and SEO. There will also be demonstrations of moving away from presentational thinking and towards structured content. ### About the author My name is Simeon Griggs, and I've been building, deploying, and selling content-editable websites for over a decade. I wrote this course to help you make great websites for your end-users, collaborate confidently, and power the best content operations for creators. Throughout this course, you'll work through lessons with the least friction possible to accelerate your momentum. I've worked with, on, and at Sanity to understand how it is best used. I have also done the research with Next.js to give you best-practice choices, not decision fatigue or burdensome homework. I wrote this course to do things quickly and correctly. 
That means a little setup work on your first project, but once you've built a solid foundation, you'll fly through future projects. You'll learn plenty. ### Why build a content-driven website? As a developer, you should not be a bottleneck to the availability of accurate and valid content for end-users. Your content creators deserve the tools to perform content operations rapidly without developer intervention. Content Management Systems (CMSes) have come a long way since monolithic platforms with click-and-play website builders. Sanity Studio—the configurable dashboard you will embed in your Next.js application—is just the CMS part of the Sanity platform, which also includes features like a content delivery CDN, asset management, and webhooks. User expectations for both consuming and creating content are higher than ever. Thankfully, the technology for powering great experiences from content is also more sophisticated. ## Getting started The first course in this track focuses on the **basics** of developing a Next.js web application. If you're more experienced and seeking concise guidance on topics like TypeScript and caching, the [`next-sanity` readme](https://github.com/sanity-io/next-sanity) might be a better place to start. ### Prerequisites To complete this course, you will need the following: * A free Sanity account to create new projects and initialize a new Sanity Studio. If you do not yet have an account, you'll be prompted later in this course to create one. * Some familiarity with running commands from the terminal. Wes Bos' [Command Line Power User](https://commandlinepoweruser.com/) video course is free and can get you up to speed with the basics. 
* [Node and npm installed](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) (or [an npm-compatible JavaScript runtime](https://developer.mozilla.org/en-US/docs/Learn/Tools_and_testing/Understanding_client-side_tools/Package_management#what_exactly_is_a_package_manager)) to install and run the Next.js development server locally. * [`pnpm` installed](https://pnpm.io), though you could swap the commands for `npm` * Some familiarity with JavaScript and React. The code examples in this course can all be copied and pasted and are written in TypeScript, but you will not need advanced knowledge of TypeScript to proceed. If you're stuck or have feedback on the lessons here on Sanity Learn, [join the Community Slack](https://slack.sanity.io/) or use the feedback form at the bottom of every lesson. Ready? Let's start by creating a new Next.js application.## [Create a new Next.js 16 application](/learn/course/content-driven-web-application-foundations/create-a-new-next-js-application) Create a new, clean Next.js application with a few opinionated choices for TypeScript and Tailwind CSS. There are many technology choices available for making a web application, so why was Next.js chosen for this course? * JavaScript is the most popular programming language for writing server and client web applications. * React is the most popular library for writing JavaScript-powered applications. * By a large margin, Next.js is the most popular meta-framework for React. * Next.js also has a large community following for extra support and useful utilities. * It also has an excellent deployment developer experience with Vercel. * Best of all, Next.js has a tight integration with Sanity. In short, if your day job involves building web applications on a developer team, there's a good chance you're doing it with Next.js. Next.js is not without its challenges. It typically operates at the leading edge of React, so you may interact with React features not yet considered stable. 
Some architectural decisions, such as caching, can cause confusion. However, this course aims to demystify some of these challenges. ## Create a new Next.js application 1. **Run** the following command to create a new Next.js application: ```sh pnpm dlx create-next-app@16 layer-caker --typescript --tailwind --eslint --app --src-dir --import-alias="@/*" --turbopack --react-compiler ``` The options in the command above configure your app to use: * TypeScript * [Tailwind CSS](https://tailwindcss.com/) * [eslint](https://eslint.org/) * The [App router](https://nextjs.org/docs/app) * A `src` directory for your application's files * The default import alias for your application's files * Turbopack * React Compiler These are all the default settings for a new Next.js application. The flags in the command above save you from having to select these options. You may modify the command above to make different choices, but the following lessons contain code snippets that assume these are the settings you used. 1. **Run** the development server ```sh pnpm run dev ``` Your app should start up in the terminal in development mode: ```text > layer-caker@0.1.0 dev > next dev ▲ Next.js 16.0.1 (Turbopack) - Local: http://localhost:3000 - Network: http://192.168.4.154:3000 ✓ Starting... ✓ Ready in 591ms ``` Open [http://localhost:3000](http://localhost:3000). You should see the default home page for a new Next.js application like the one below: ![A new Next.js 16 application](https://cdn.sanity.io/images/3do82whm/next/8b1bdbc2d8bae8a9a9ed4adeda120338aee712fe-2240x1488.png) As recommended, you can edit the `src/app/page.tsx` file and see updates instantly. In the following lessons, you'll be given code examples to update this home page route and create new pages. ## Update Tailwind CSS implementation 1. The video for this lesson shows Tailwind 3 configuration, but you now have Tailwind 4 installed. Follow the code examples below. 
The Next.js starter has fonts and styles you don't need for this course, so you'll remove them for simplicity.

1. **Update** `layout.tsx` to remove custom fonts

```tsx:src/app/layout.tsx
import type { Metadata } from "next";
import "./globals.css";

export const metadata: Metadata = {
  title: "Create Next App",
  description: "Generated by create next app",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  );
}
```

2. **Update** `globals.css` to remove anything other than Tailwind's import

```css:src/app/globals.css
@import "tailwindcss";
```

The app in development should still look mostly the same. You'll add more content and styling in the following lessons. You now have a Next.js application with Tailwind CSS for styling. However, it lacks content management, so the next step is to set up a Sanity account and initialize Sanity Studio inside your Next.js project.

## [Create a new Sanity project](/learn/course/content-driven-web-application-foundations/create-a-new-sanity-project)

Create a new free Sanity project from the command line and automatically install Sanity Studio configuration files into your Next.js project.

For your Next.js application, Sanity will play the role of content storage for documents and assets such as images. That content is cloud-hosted in what we call the Sanity [Content Lake](https://www.sanity.io/learn/content-lake). In this lesson, you'll create a new project at Sanity and embed an editing interface—[Sanity Studio](https://www.sanity.io/learn/sanity-studio)—inside the Next.js application. An embedded Studio allows you to create, edit, and publish content hosted in the Content Lake from your Next.js application's development environment or wherever it is deployed.
The Sanity Content Lake also powers content operations workflows, such as firing fine-grained [GROQ-powered webhooks](https://www.sanity.io/learn/compute-and-ai/webhooks) so your business can react to content changes as they happen. In time, your Next.js application may also _write_ content – such as comments and likes – into the Content Lake from the front end. While this course focuses on building a web application, Sanity is more than a website-focused CMS (content management system). In a nutshell, Sanity is a _Content Operating System_, with a configurable, React-based administration panel, cloud-hosted data storage, and a worldwide CDN for content delivery.

## Create a new project

The Sanity CLI can initialize a new Sanity project within a Next.js application. It detects the framework during the process and prompts you to make appropriate choices. If you do not yet have a Sanity account, follow the prompts to create one.

1. You can create new free Sanity projects at any time.

1. **Run** the following command inside your Next.js application to create a new free project from the command line:

```sh
pnpm dlx sanity@latest init
```

When prompted, make the following selections. If you accidentally select the wrong option, you can cancel and re-run the command.

1. **Create** a new project, call it what you like, for example `layer-caker`
2. **Create** a dataset with the default settings: public and named `production`
3. **Add** configuration files to the Next.js folder
4. **Use** TypeScript
5. **Embed** Sanity Studio at `/studio`
6. **Select** the `blog` template
7. **Add** your project details to an `.env.local` file

### What just happened?

This command:

1. Created a new Sanity **project** and **dataset**, which are remotely configured and hosted on the Content Lake
   1. A **dataset** is a collection of content (text and assets) within a project hosted in the Sanity [Content Lake](https://www.sanity.io/learn/content-lake).
   2.
A **project** can have many datasets and is also where you'd configure other project-level settings like members, webhooks, and API tokens.

2. Added relevant files to your local Next.js application and installed some dependencies that you'll need to get started.

Your Sanity Studio code in the Next.js application is like a "window" into the remotely hosted content. Your Studio configuration code determines which document types are available to create, update, and delete. All the content you author is hosted in the Content Lake. In short, with Sanity:

* **Studio configuration** is performed locally with code.
* **Content** (text and assets) is hosted remotely.
* **Project configuration** is handled at [sanity.io/manage](https://www.sanity.io/manage).

### New project files

**In addition to** your Next.js files, you should have the following files in your project. These files configure:

* Sanity Studio for creating content
* Sanity Client for querying content
* A helper file to display images on the front end, `src/sanity/lib/image.ts`

```text
.
├── .env.local
├── sanity.cli.ts
├── sanity.config.ts
├── (...and all your Next.js files)
└── src
    ├── app
    │   └── studio
    │       └── [[...tool]]
    │           └── page.tsx
    └── sanity
        ├── lib
        │   ├── client.ts
        │   ├── image.ts
        │   └── live.ts
        ├── schemaTypes
        │   ├── authorType.ts
        │   ├── blockContentType.ts
        │   ├── categoryType.ts
        │   └── postType.ts
        ├── env.ts
        └── schema.ts
```

### Hello, Sanity Studio

Browse your embedded Sanity Studio route at [http://localhost:3000/studio](http://localhost:3000/studio) to see your built-in content management system. Make sure you log in with the same credentials you used to log in to the Sanity CLI in your terminal.

1. If you see the Studio but not these three document types (posts, categories, authors) on the left-hand side, you may have chosen the "clean" template instead. Re-run the `sanity init` command above to change templates.
![A new Sanity Studio with the blog schema types installed](https://cdn.sanity.io/images/3do82whm/next/25830b878dbeb7bc279ba11bc5d3efa1d7c57544-2144x1388.png)

You're embedding the Sanity Studio within the Next.js application for the convenience of managing everything in one repository. It's also convenient for authors to only need to know one URL for their front end and content administration. However, it can promote website-specific thinking.

1. Remember, content representing your business goes far beyond a few web pages. For now you only have blog content schema types in your Sanity Studio, but you can expand it to much more!

Fortunately, if you ever decide to separate your Sanity Studio into its own repository—or both applications into a monorepo—it should be a straightforward process of moving the configuration files around. The data storage of your text and assets would remain unchanged in the Content Lake.

The `blog` template gave you three website-specific schema types: `post`, `category` and `author`. You can now create content of these types within your embedded Sanity Studio.

## Create and publish posts

Soon, you'll be querying for content on the front end. For this to work, you'll need to create some.

1. **Create** and **Publish** at least one `post` document type

![Sanity Studio showing a published blog post](https://cdn.sanity.io/images/3do82whm/next/d56d2d90395a3c4c041c1d7ef8a8b17ed23d67e9-2144x1388.png)

### Or use our seed data

We have prepared a dataset for you to speed up the process. You can optionally download and import it into your project.

1. Download `production.tar.gz` – a pre-prepared dataset backup with assets, posts, categories, and authors. Place this file in the root of your project and import it using the CLI.

```sh:Terminal
pnpm dlx sanity dataset import production.tar.gz production
```

Delete the backup file once the import successfully completes.
```sh:Terminal
rm production.tar.gz
```

You have content in your Studio, but your front end is not yet configured to display it. In the next lesson, let's unpack the bridge between your Sanity content and front end.

## [The next-sanity toolkit](/learn/course/content-driven-web-application-foundations/the-next-sanity-toolkit)

Unpack `next-sanity`, the all-in-one Sanity toolkit for "live by default," production-grade content-driven Next.js applications.

One of the dependencies automatically installed during `sanity init` in the last lesson was [`next-sanity`](https://github.com/sanity-io/next-sanity), a collection of utilities and conventions for data fetching, live updates, Visual Editing, and more. You could look through the readme for full details on what it provides. For now, let's examine some of the files that were automatically created in the previous lesson and explain their purpose.

## Environment variables

A `.env.local` file should have been created with your Sanity project ID and dataset name. These are not considered sensitive, and so are prefixed with `NEXT_PUBLIC_`.

1. See the Next.js documentation about [public and private environment variables](https://nextjs.org/docs/app/building-your-application/configuring/environment-variables).

In future lessons, you'll add secrets and tokens to this file. It is important that you **do not** check this file into your Git repository. Also, remember that the values in this file will need to be recreated when deploying the application to hosting. We'll remind you of this when we get there.

1. **Confirm** you have an `.env.local` file at the root of your application.

```text:.env.local
NEXT_PUBLIC_SANITY_PROJECT_ID="your-project-id"
NEXT_PUBLIC_SANITY_DATASET="production"
```

Additionally, a file that retrieves, exports, and confirms these values exist has been written to `src/sanity/env.ts`

1.
You can use the Sanity CLI to update these values with a new or existing Sanity project by running `sanity init` again with the `--env` flag

```sh
pnpm dlx sanity@latest init --env
```

## Sanity Client

The file `client.ts` contains a lightly configured instance of Sanity Client.

```typescript:src/sanity/lib/client.ts
import { createClient } from 'next-sanity'

import { apiVersion, dataset, projectId } from '../env'

export const client = createClient({
  projectId,
  dataset,
  apiVersion,
  useCdn: true,
})
```

Sanity Client is a JavaScript library commonly used to interact with Sanity projects. Its most basic function is querying content, but once authenticated with a token, it can interact with almost every part of a Sanity project.

1. See more about what [Sanity Client](https://www.sanity.io/docs/js-client) can do

You won't need to change the Sanity Client configuration now, but it is good to know where to make modifications later.

### sanityFetch and SanityLive

In the file `live.ts`, the preconfigured client is used to export a function `sanityFetch`, and the component `SanityLive`.

```typescript:src/sanity/lib/live.ts
import { defineLive } from "next-sanity/live";

import { client } from "@/sanity/lib/client";

export const { sanityFetch, SanityLive } = defineLive({ client });
```

* `sanityFetch` is a helper function to perform queries; under the hood it handles the integration with Next.js tag-based caching and revalidation, as well as Draft Mode.
* `SanityLive` is a component which creates a subscription to the [Live Content API](https://www.sanity.io/learn/content-lake/live-content-api) and will automatically revalidate content as it changes.

These two exports are the foundation of "live by default" experiences in Next.js applications. In future lessons you'll implement these and learn how they work.
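Stepping back to `src/sanity/env.ts` for a moment: its job is to fail fast when a required environment variable is missing, rather than letting an undefined value cause a confusing error deeper in the app. A minimal sketch of that guard pattern — the helper name `assertValue` and the error message are illustrative, not necessarily the exact generated code:

```typescript
// Sketch of an environment-variable guard like src/sanity/env.ts.
// Throws at startup if a required value is missing.
export function assertValue<T>(value: T | undefined, errorMessage: string): T {
  if (value === undefined) {
    throw new Error(errorMessage);
  }
  return value;
}

// Usage (hypothetical, mirroring the generated file):
// export const projectId = assertValue(
//   process.env.NEXT_PUBLIC_SANITY_PROJECT_ID,
//   "Missing environment variable: NEXT_PUBLIC_SANITY_PROJECT_ID"
// );
```

The benefit is that a misconfigured deployment fails immediately with a message naming the missing variable.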
## Sanity Config and CLI

The two root files `sanity.cli.ts` and `sanity.config.ts` are important for interacting with your project:

* `sanity.cli.ts` allows you to run CLI commands (like `dataset import` from the previous lesson) that affect the project while targeting the correct project ID and dataset
* `sanity.config.ts` is used to configure the Sanity Studio, including schema types, plugins, and more.

1. Run the following command to show project details:

```sh
pnpm dlx sanity@latest debug
```

## Schema Types

In the `src/sanity/schemaTypes` folder are files for the three document types and one custom type which you can see in the Studio. You're able to create `category`, `post` and `author` type documents because these have been registered in the Studio configuration. Datasets are schemaless, so data of any shape could be _written_ into a dataset. But these are the only schema types currently configured in the _Studio_. In future lessons, you'll change and add to these schema types, but they give us enough to work with now.

1. See [Improving the editorial experience](https://www.sanity.io/learn/course/studio-excellence/improving-the-editorial-experience) in [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio) to see how basic schema type configurations can be dramatically enhanced.

You now have a Next.js application with an embedded Sanity Studio for creating and publishing content. It's time to start integrating them. Writing GROQ queries is the most common method of querying content from Sanity. In the next lesson, we'll set up conventions for this.

## [Query content with GROQ](/learn/course/content-driven-web-application-foundations/writing-groq-queries)

Organize and author queries for your content with best-practice conventions.

If you're new to Sanity, you're probably new to GROQ. It's an incredibly powerful way to query content, and thankfully, it's quick to get started with.
You'll only need to know the basics of writing queries for now. However, it is beneficial to learn GROQ when working with Sanity, as it powers queries, [GROQ-powered webhooks](https://www.sanity.io/learn/compute-and-ai/webhooks) and content permissions when configuring [Roles](https://www.sanity.io/learn/user-guides/roles). This lesson is focused on writing basic GROQ queries to serve our Next.js application. Future lessons will expand on these queries.

1. See [Between GROQ and a hard place](https://www.sanity.io/learn/course/between-groq-and-a-hard-place) for more thorough lessons on how to write expressive queries with GROQ.
2. The [Query Cheat Sheet - GROQ](https://www.sanity.io/learn/content-lake/query-cheat-sheet) is the most popular resource for quickly finding useful query examples.

## What about GraphQL?

Sanity content is typically queried with GROQ queries from a configured Sanity Client. [Sanity also supports GraphQL](https://www.sanity.io/docs/graphql). You may prefer to use GraphQL in your application, but these courses will focus on querying with Sanity Client and GROQ.

## GROQ basics

You can break up most GROQ queries into three key parts. Consider this query:

```groq
*[_type == "post"]{title}
```

* `*` returns **all documents** in a dataset as an array
* `[_type == "post"]` represents a **filter**, where you narrow down the preceding array
* `{ title }` represents a **projection**, where you define which **attributes** in those array items you want to return in the response

## Organizing GROQ queries

`next-sanity` exports the `defineQuery` function, which will give you syntax highlighting in VS Code with the Sanity extension installed.

1. **Install** the [Sanity VS Code extension](https://marketplace.visualstudio.com/items?itemName=sanity-io.vscode-sanity) if this is the IDE you are using.
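If the GROQ anatomy above is new, one way to build a mental model is to treat the dataset as a plain array of JSON documents: the filter behaves like `.filter()` and the projection like `.map()`. This is only an analogy — GROQ actually runs server-side in the Content Lake — and the sample documents below are made up:

```typescript
// Mental model only: dataset = array, filter = .filter(), projection = .map().
type SanityDoc = { _id: string; _type: string; title?: string };

const dataset: SanityDoc[] = [
  { _id: "a", _type: "post", title: "Hello world" },
  { _id: "b", _type: "author", title: "Ada" },
  { _id: "c", _type: "post", title: "Second post" },
];

// Roughly equivalent to: *[_type == "post"]{title}
const result = dataset
  .filter((doc) => doc._type === "post") // [_type == "post"]
  .map((doc) => ({ title: doc.title })); // {title}

console.log(JSON.stringify(result));
// → [{"title":"Hello world"},{"title":"Second post"}]
```

The queries you'll write next add two more pieces that fit the same model: a slice like `[0...12]` behaves like `.slice(0, 12)`, and a `$slug` parameter behaves like a function argument used inside the filter.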
The `defineQuery` function also has another important role: [Sanity TypeGen](https://www.sanity.io/learn/apis-and-sdks/sanity-typegen) searches for variables that use it to generate types for query results. For convenience and organization, you'll write all queries inside a dedicated file in your project.

1. **Create** a file to store two basic GROQ queries:

```typescript:src/sanity/lib/queries.ts
import {defineQuery} from 'next-sanity'

export const POSTS_QUERY = defineQuery(`*[_type == "post" && defined(slug.current)][0...12]{
  _id, title, slug
}`)

export const POST_QUERY = defineQuery(`*[_type == "post" && slug.current == $slug][0]{
  title, body, mainImage
}`)
```

* `POSTS_QUERY` will return an array of up to 12 published documents of the type `post` that have a slug. From each document, it will return the `_id`, `title` and `slug` attributes.
  * This can be used on a "posts index" page to show the latest posts.
* `POST_QUERY` filters down to documents of the `post` type where the value of `slug.current` matches a passed-in variable, `$slug`. Only one document is returned because of the `[0]`. From this one document, it will return the `title`, `body` and `mainImage` attributes.

### Testing GROQ queries

Before using these queries in your front end, it's possible to test them at any time from within your Sanity Studio using the Vision tool.

1. **Open** [http://localhost:3000/studio/vision](http://localhost:3000/studio/vision), paste the `POSTS_QUERY` GROQ query string and click **Fetch**

```groq
*[_type == "post" && defined(slug.current)][0...12]{
  _id, title, slug
}
```

You should see up to 12 items in the "result" panel.

![Vision tool in Sanity Studio showing a GROQ query and a response](https://cdn.sanity.io/images/3do82whm/next/3110d3eae0cfeac9e75f6f91aeb7c256ea56f88d-2144x1388.png)

Queries fetched in Vision use the same user authentication that the Studio does, so they will return private documents when using the default perspective – `raw`.

1.
In a **public** dataset, a document is private if it has a period "`.`" in the `_id`, such as `{ _id: "drafts.asdf-1234" }`, and can only be queried by an authenticated request. In a **private** dataset, all documents are private. The Sanity Client for your front end is not authenticated (unless you give it a `token`), so it will only return publicly visible documents in a public dataset.

1. See [Datasets](https://www.sanity.io/learn/content-lake/datasets) for more information about public and private datasets.
2. [Perspectives for Content Lake](https://www.sanity.io/learn/content-lake/perspectives) determine whether published or draft documents are returned in the response.

Now that you've proven that your GROQ queries get results, let's automatically generate TypeScript types for these responses.

## [Generate TypeScript Types](/learn/course/content-driven-web-application-foundations/generate-typescript-types)

Add type-safety to your project and reduce the likelihood that you will write code that produces errors.

[Sanity TypeGen](https://www.sanity.io/learn/apis-and-sdks/sanity-typegen) can create types for Sanity Studio schema types and GROQ query results, so that as you build out your front end, you only access values within documents that exist, and can code defensively against values that could be `null`.

1. The [Generating types](https://www.sanity.io/learn/course/day-one-with-sanity-studio/generating-types) lesson has a more in-depth exploration of the `sanity typegen` command.

Sanity TypeGen will [create types for queries](https://www.sanity.io/docs/sanity-typegen#c3ef15d8ad39) that are assigned to a variable and use the `defineQuery` function.

## Extracting schema

You're able to use the Sanity CLI from inside the Next.js application because of the `sanity.cli.ts` file at the root of your project.

1. **Run** the following command in your terminal

```sh
pnpm dlx sanity@latest schema extract --path=./src/sanity/extract.json
```

1.
Re-run this every time you modify your schema types.

The `--path` argument is provided so the schema file is written to the same folder as all our other Sanity utilities. You should see a response like the one below, and a newly generated `extract.json` file in your `src/sanity` directory:

```sh
✅ Extracted schema
```

This file contains all the details about your Sanity Studio schema types, which TypeGen will need to create types from.

## Generating types

By default, TypeGen will create a file for types at the project's root. Again, to keep Sanity-specific files colocated, the following configuration will keep the project root tidy.

1. Without this step, TypeGen will look for your schema in the default `schema.json` file instead of the `extract.json` file we have created.

1. **Create** a new file at the root of your project

```json:sanity-typegen.json
{
  "path": "./src/**/*.{ts,tsx,js,jsx}",
  "schema": "./src/sanity/extract.json",
  "generates": "./src/sanity/types.ts"
}
```

The configuration here will:

1. Scan the `src` directory for GROQ queries to create types from.
2. Use the `extract.json` file created during the previous task.
3. Write a new `types.ts` file alongside our other Sanity utilities.

1. **Run** the following command in your terminal

```sh
pnpm dlx sanity@latest typegen generate
```

1. Re-run this every time you modify your schema types or GROQ queries.

You should see a response like the one below and a newly created `src/sanity/types.ts` file in your project.

```sh
✅ Generated TypeScript types for 15 schema types and 2 GROQ queries in 1 files into: ./src/sanity/types.ts
```

Success! You now have types for your Sanity Studio schema types and GROQ queries.

## Automating TypeGen

The `extract.json` file needs to be regenerated every time you update your Sanity Studio schema types, and TypeGen needs to be re-run every time you do so or update your GROQ queries.
Instead of running these steps separately, you can include scripts in your `package.json` file to make running them automatic and more convenient.

1. Update `package.json` scripts

```json:package.json
"scripts": {
  // ...all your other scripts
  "predev": "pnpm run typegen",
  "prebuild": "pnpm run typegen",
  "typegen": "sanity schema extract --enforce-required-fields --path=./src/sanity/extract.json && sanity typegen generate"
},
```

You can now run both the schema extraction and TypeGen commands with one line:

```sh
pnpm run typegen
```

1. Sanity TypeGen is currently in beta and may not always produce perfect results. The following lessons will highlight any meaningful issues.

You now have all the tools and configurations to author and query Sanity content with a type-safe, excellent developer experience. Now it's finally time to query and display Sanity content.

## Automatic type inference

Sanity TypeGen contains a feature to map GROQ queries against their types automatically. However, this is done by extending the Sanity Client package, as you will see at the bottom of the automatically generated types file.

```typescript:src/sanity/types.ts
// Query TypeMap
import "@sanity/client";
declare module "@sanity/client" {
```

Since we are using the `next-sanity` package and have not installed Sanity Client directly, this automatic type inference may not work. **Install** Sanity Client as a dependency to solve this before the next lesson.

```sh
pnpm add @sanity/client
```

## [Fetch Sanity Content](/learn/course/content-driven-web-application-foundations/fetch-sanity-content)

Query for your content using Sanity Client, a library compatible with the Next.js cache and React Server Components for modern, integrated data fetching.

Sanity content is typically queried with GROQ queries from a configured [Sanity Client](https://www.sanity.io/docs/js-client). Fortunately, one has already been created for you.

1. **Open** `src/sanity/lib/client.ts` to confirm it exists in your project.
Sanity Client is built to run in any JavaScript runtime and in any framework. It is also compatible with Next.js caching features, React Server Components, and the App Router. It also provides ways to interact with Sanity projects and even write content back to the Content Lake with mutations. You'll use some of these features in later lessons.

It's time to put everything we've set up to work. In this lesson, you'll create a route to serve as a post index page and a dynamic route to display an individual post.

## Next.js App Router

For now, you'll focus on data fetching at the top of each route. React Server Components allow you to perform fetches from inside individual components. Future lessons may address where this is beneficial. For now, our queries are simple enough – and GROQ is expressive enough – to get everything we need at the top of the tree.

1. See the [Next.js App Router](https://nextjs.org/docs/app/building-your-application/routing) documentation for more details about file-based routing and how file and folder names impact URLs

The most significant change we'll make first is creating a separate "Route Group" for the entire application front end. This route group will separate the front-end layout code from the Studio without affecting the URL. It is also useful when integrating Visual Editing and displaying the front end _inside_ the Studio.

1. **Create** a new `(frontend)` directory and **duplicate** `layout.tsx` into it

```sh
mkdir -p "src/app/(frontend)" && cp "src/app/layout.tsx" "src/app/(frontend)/"
```

You should now have **two** `layout.tsx` files inside the app folder at these locations:

```text
src
└── app
    ├── // all other files
    ├── layout.tsx
    └── (frontend)
        └── layout.tsx
```

The `(frontend)/layout.tsx` file has duplicated `html` and `body` tags, but you'll update that file later in the lesson.

1.
**Update** the root `layout.tsx` file to remove `globals.css`

## Update the home page

Later in this track, the home page will become fully featured. For now, it just needs a link to the posts index.

1. **Move** `page.tsx` into the `(frontend)` folder
2. **Update** your home page route to add basic navigation to the posts index.

```tsx:src/app/(frontend)/page.tsx
import Link from "next/link";

export default async function Page() {
  return (

    <main>
      <h1>Home</h1>
      <hr />
      <Link href="/posts">Posts index →</Link>
    </main>
  );
}
```

1. Next.js provides the [`<Link>` component](https://nextjs.org/docs/pages/api-reference/components/link) as an enhancement to the [HTML anchor](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a) (`<a>`) element.

You should now have a basic home page like this:

![Basic home page](https://cdn.sanity.io/images/3do82whm/next/6f35be43157c8edf9d8c478c5e9b8f39743f0a6a-2144x1388.png)

## Create a post index page

This page will list up to 12 of the latest post documents. Inside this route:

* The configured Sanity Client is imported as `client`
* The GROQ query `POSTS_QUERY` is used by `client.fetch`
* Thanks to automatic type inference, the response will be typed `POSTS_QUERYResult`

1. **Create** a new directory for a post index page to fetch all `post` type documents

```tsx:src/app/(frontend)/posts/page.tsx
import Link from "next/link";
import { client } from "@/sanity/lib/client";
import { POSTS_QUERY } from "@/sanity/lib/queries";

const options = { next: { revalidate: 60 } };

export default async function Page() {
  const posts = await client.fetch(POSTS_QUERY, {}, options);

  return (

    <main>
      <h1>Post index</h1>
      <ul>
        {posts.map((post) => (
          <li key={post._id}>
            <Link href={`/posts/${post?.slug?.current}`}>{post?.title}</Link>
          </li>
        ))}
      </ul>
      <hr />
      <Link href="/">← Return home</Link>
    </main>
  );
}
```

1. Next.js supports [React Server Components](https://nextjs.org/docs/app/building-your-application/rendering/server-components), which allow you to fetch and `await` data within the component. Read more on the Next.js documentation.

### Viewing the post index

You should now have a post index page at [http://localhost:3000/posts](http://localhost:3000/posts) like this:

![Blog posts index web page](https://cdn.sanity.io/images/3do82whm/next/f7951171b80f0ed0024178315cb49450ca6c1e75-2144x1388.png)

### Current cache configuration

The `options` variable passed into the Sanity Client is a light configuration for Next.js caching. You should know that with these settings, the cache has been configured to only update pages at most every 60 seconds. Finding the right balance between fresh and stale content is a complex topic, and there are ways to mitigate the concerns of your content creators and end users to find a solution for everyone. If you'd like to learn more on the topic and continue to configure caching manually, see [Controlling cached content in Next.js](https://www.sanity.io/learn/course/controlling-cached-content-in-next-js).

What's better than manually configuring the cache? **Never doing it.**

## Live by default

The `next-sanity` package contains helper functions to perform fetches that take advantage of the [Live Content API](https://www.sanity.io/learn/content-lake/live-content-api), so every fetch for data is automatically cached and revalidated using the built-in tag-based revalidation.

1. **Update** the frontend `layout.tsx` file to include `SanityLive`

```tsx:src/app/(frontend)/layout.tsx
import { SanityLive } from '@/sanity/lib/live'

export default function FrontendLayout({
  children,
}: Readonly<{ children: React.ReactNode }>) {
  return (
    <>
      {children}
      <SanityLive />
    </>
  )
}
```

1.
**Update** the post index page's fetch from `client` to `sanityFetch`

```tsx:src/app/(frontend)/posts/page.tsx
import Link from "next/link";
import { sanityFetch } from "@/sanity/lib/live";
import { POSTS_QUERY } from "@/sanity/lib/queries";

export default async function Page() {
  const { data: posts } = await sanityFetch({ query: POSTS_QUERY });

  return (

    <main>
      <h1>Post index</h1>
      <ul>
        {posts.map((post) => (
          <li key={post._id}>
            <Link href={`/posts/${post?.slug?.current}`}>{post?.title}</Link>
          </li>
        ))}
      </ul>
      <hr />
      <Link href="/">← Return home</Link>
    </main>
  );
}
```

Now when you publish changes in Sanity Studio, you should see those updates take place live. No more caching. No more hammering the refresh button.

### Sanity TypeGen in Beta

The GROQ query included a filter to ensure only documents with `slug.current` defined are returned – but TypeGen generated a type where `slug.current` could be `null`. This is a known limitation of TypeGen while it is in beta.

## Create an individual post page

The GROQ query `POST_QUERY` uses a variable, `$slug`, to match a route with a `post` in the dataset. For this, you can use a "Dynamic Route," where a segment in the URL is made available to the route's server component as a prop.

1. Read more about [Next.js Dynamic Routes](https://nextjs.org/docs/app/building-your-application/routing/dynamic-routes) in their documentation

So, for example, because you're creating a route at:

```text
src/app/(frontend)/posts/[slug]/page.tsx
```

If you visited the URL:

```text
http://localhost:3000/posts/hello-world
```

The route would have this `params` object in its `props`:

```json
{ "slug": "hello-world" }
```

This can then be passed into Sanity Client to match the value of `$slug` to a value in a document.

1. **Create** a new route for an individual post

```tsx:src/app/(frontend)/posts/[slug]/page.tsx
import { sanityFetch } from "@/sanity/lib/live";
import { POST_QUERY } from "@/sanity/lib/queries";
import { notFound } from "next/navigation";
import Link from "next/link";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: post } = await sanityFetch({
    query: POST_QUERY,
    params: await params,
  });

  if (!post) {
    notFound();
  }

  return (

    <main>
      <h1>{post?.title}</h1>
      <hr />
      <Link href="/posts">← Return to index</Link>
    </main>
  );
}
```

You should now be able to click any of the links on the posts index page and see the title of a blog post with a link back to the index:

![Individual post page showing just the title](https://cdn.sanity.io/images/3do82whm/next/14545936d26cac969adafd662cda50621ddd1ded-2144x1388.png)

You now have a basic – but functional – web application. It's currently trapped in your local development environment. And while it isn't much, it's good to deploy early and often so you can get into a habit of continuous improvement. You'll deploy your web application to the world in the following lessons.

## [Git-based workflows](/learn/course/content-driven-web-application-foundations/git-based-workflows)

Version control, collaborate on, and deploy your Next.js application by storing it in a Git repository.

If you've built modern web applications with a developer team, you may already be familiar with Git and GitHub. This lesson explains the basics for anyone new to Git-based version control or unfamiliar with branch-based workflows for collaborating with other developers on a project. A strategy for safely iterating on a project is the key to working confidently on updates, new features, and improvements.

1. If you're entirely new to Git, Epic Web has a [free Git Fundamentals tutorial](https://www.epicweb.dev/tutorials/git-fundamentals), which will get you up to speed with the basics.

## Create a remote repository

For this lesson, you will need an account on GitHub. You could use other Git providers if you choose, but you would need to adapt the tasks in this lesson accordingly.

Currently, your Next.js application is only available on your machine. It must be deployed on Vercel's hosting to be shared with the world. To do that, there is an intermediary step of uploading your files to a Git repository.

1. **Create or log in** to your [GitHub account](https://github.com/)
2. From the GitHub dashboard, click "New" to create a new repository.
![GitHub dashboard for creating a new repository](https://cdn.sanity.io/images/3do82whm/next/473b6851f9d2f67b32341802df97be05a5529b49-2144x1388.png)

You can give your repository any name, and choose to make it Public or Private. On the next screen, you should see "quick setup" instructions with commands to run for either a new or an existing repository. When you ran `create-next-app`, it initialized a Git repository automatically, so you can follow the instructions to "... or push an existing repository from the command line."

![GitHub instructions for pushing an existing repository](https://cdn.sanity.io/images/3do82whm/next/7c68715dc8e1aba960da361ba0891f46998033a0-2144x1388.png)

The commands for my repository are shown above; the first line will differ for you, as it contains your account and repository name.

1. **Run** the command to push an existing repository
2. **Refresh** the page, and you should now see _most_ of your local files in your remote GitHub repository

![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/2483919ccd3687615dc0ae9ee4ac739317a4632d-2144x1388.png)

## Updating remote

Currently, the remote repository only contains the **original** files created when you ran `create-next-app`, not any Sanity-related files you created or changed after that. You must "commit" those files locally and push them to the `main` branch.

1. **Run** the following from your terminal to add the remaining local files to `main`

```sh
git add .
git commit -m "add sanity files"
git push origin main
```

Refresh your repository on GitHub, and you should see the additional files.

1. Pushing directly to the `main` branch like this is _okay_ this once, but it's not great for tracking the history of changes and is _terrible_ for working collaboratively with others. Continuing to do this will likely result in problems.
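Before adopting a branching strategy, it can help to see the whole branch → commit → merge cycle in miniature. The sketch below runs against a throwaway repository in a temp directory (the branch name `update-docs` and the file contents are hypothetical), so it's safe to try anywhere:

```sh
# Throwaway sandbox: a fresh repo in a temp directory (nothing touches your project)
cd "$(mktemp -d)"
git init -q -b main
git config user.email "you@example.com"
git config user.name "You"

# First commit on main
echo "# notes" > README.md
git add . && git commit -qm "init"

# Branch, change, commit — main stays untouched while you work
git checkout -qb update-docs
echo "more" >> README.md
git add . && git commit -qm "update docs"

# Back on main, merge the finished branch
git checkout -q main
git merge -q update-docs
git log --oneline
```

On GitHub, that final `git merge` step is performed for you when a pull request is merged — which is the workflow you'll set up next.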
Before connecting your repository to Vercel to host your application, it's good to have a strategy **now** for working locally on new features and updating remotely.

## Workflow for making changes

Your local files are now stored remotely on GitHub as a point-in-time snapshot. At this moment, your local and remote files are in sync. Each time you work locally, you will need to update your remote Git repository to update your hosted Next.js application. Most commonly, this is done by:

1. creating a "branch" off of the `main` branch
2. committing changes to local files
3. pushing that branch to remote
4. creating a "pull request" from the branch to `main`
5. merging those changes into the `main` branch remotely
6. updating your `main` branch locally

Let's try this now.

1. **Run** the following command to create a new branch named `update-readme`

```sh
git checkout -b update-readme
```

1. **Update** the `README.md` with the following:

````markdown
# Sanity and Next.js

This is a [Sanity.io](https://sanity.io) and [Next.js](https://nextjs.org) project created following a Course on [Sanity Learn](https://sanity.io/learn).

## Getting Started

First, run the development server:

```bash
npm run dev
```

- Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
- Open [http://localhost:3000/studio](http://localhost:3000/studio) to edit content.
````

1. Keeping the readme of a project up to date with helpful notes for new developers joining the project in the future is good practice.

1. **Run** this command to check the current local status of all files:

```sh
git status
```

You should see the following, which shows that the `README.md` file has been modified, but nothing is yet staged for a commit:

```text
On branch update-readme
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..."
        to discard changes in working directory)
        modified:   README.md

no changes added to commit (use "git add" and/or "git commit -a")
```

1. **Run** the following commands to add all changed files and create a commit message:

```sh
git add .
git commit -m "update readme"
```

1. **Run** this command to push the local state of this branch to the remote:

```sh
git push -u origin update-readme
```

You should get confirmation in the terminal that the push completed successfully, along with a URL to visit to create a pull request.

1. **Create** a pull request: On GitHub, in your repository, go to the "Pull requests" tab and create a new one to merge `update-readme` into `main`. You should then see a page just like this:

![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/c0935c0bba2a40b969b327c0d4be1bfe290e5327-2144x1388.png)

Creating a "PR" is an essential step in your future workflow, for both your fellow developers and your content creators. On a developer team, you may have a colleague review and approve your changes before they are merged. For your authors, once you have connected this repository to Vercel, each PR will create a "Preview build" of your Next.js application so that you can see the results of your latest round of changes in the hosted environment _before_ merging. Preview builds are great for collaboration: you can share updated versions of the application without affecting the production environment. This is useful for testing new front-end features as well as showing authors any updated Sanity Studio configuration.

From this page, you can also see an overview of the changes being made. In this PR, it is only one file with a few lines changed.

1. Scroll to the bottom and click "Merge pull request" and "Confirm merge."

Now, the `main` branch has the latest version of your code, but only remotely. In your local development environment, the `main` branch is out of date.

1.
**Run** the following in the terminal to switch back to the main branch:

```sh
git checkout main
```

If you open `README.md` now, you'll notice it has reverted to the previous version, because this is what it looked like when you were last working on `main` **locally**.

1. **Run** the following in the terminal to pull the latest version of main from remote:

```sh
git pull origin main
```

Now `README.md` is as you had it before, and both your local and remote versions of the `main` branch are up to date. You now have a _basic_ version of a Git workflow to follow. Once other developers are working on the same repository, they'll all have their own branches and pull requests, which you may be involved in reviewing. You'll also need to update your local version of the `main` branch before creating any new branches.

1. Larger teams may benefit from even more structure around Git workflows. [Conventional commits](https://www.conventionalcommits.org/) is one commonly implemented pattern.

Your code is now hosted remotely, but your web application is not. With this setup in place, it's time to go live. Let's connect Vercel to your repository in the next lesson.

## [Go live on Vercel](/learn/course/content-driven-web-application-foundations/deploy-to-vercel)

Publish your web application to the world.

Vercel's hosting and Next.js are made for one another, so it makes sense to put them together for this project. For this lesson, you'll need a free account at Vercel. If you don't already have one, you'll be prompted to create one.

Vercel is an app deployment platform, cloud hosting provider, and much more. Not only can you host the production application there, but its tight integration with Git will create preview builds when developing new features. In this lesson you'll connect Vercel to your GitHub account in a new project to automatically and continuously deploy your Next.js application to its hosting.

## Create a new Vercel project

1.
**Create** a new Vercel project at [vercel.com/new](https://vercel.com/new) and connect it to the repository you made in the last lesson.

1. Ensure the **Framework Preset** has been set to **Next.js**
2. Populate all of the **Environment Variables** with values from your local `.env.local` file.

![Vercel new project settings page](https://cdn.sanity.io/images/3do82whm/next/3b243f446c2476c2cd6b604210abfe2a162f7390-2144x1388.png)

1. Click **Deploy**

1. **Getting a deploy error?** You may need to remove `--turbopack` from the build script in `package.json`; this is likely a temporary issue.

You should now be able to watch the **Build Logs** update as your repository is cloned, its dependencies are installed, the site is built, and your production Next.js application is deployed to hosting. The same site you were working on locally should now be deployed online for everyone to see.

![Blog post index page hosted on Vercel](https://cdn.sanity.io/images/3do82whm/next/637b757ab1096e237c9dfe0336d607e1dbe87904-2144x1388.png)

### The Vercel CLI

It's worth noting that Vercel also has a CLI tool for interacting with projects locally.

1. Read about the [Vercel CLI](https://vercel.com/docs/cli) in their documentation.

### Hosted Sanity Studio

On the hosted web application, visit `/studio` to open your Sanity Studio. The first time you do, you'll be prompted to add the current URL as a CORS origin. This is required for every unique URL that wishes to interact with your Sanity content client-side. It is okay to click "Continue."

1. For more detail on CORS and Sanity, see the documentation: [Access Your Data (CORS)](https://www.sanity.io/learn/content-lake/cors)

### Datasets as environments

You can make changes to content in your hosted Studio, and those changes will also be mirrored locally. This is because wherever your Studio is used, it is always writing to and reading from the Content Lake.
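This "one Content Lake, many Studios" behavior falls out of how queries are addressed. As a rough sketch (this reflects the documented shape of the Query API endpoint, not the internals of the Sanity client, and the project ID below is a placeholder):

```typescript
// Sketch: a Content Lake query endpoint is derived only from project ID,
// dataset, and API version — so a Studio on localhost and one deployed to
// Vercel with the same configuration read and write the same content.
export function queryUrl(projectId: string, dataset: string, groq: string): string {
  const base = `https://${projectId}.api.sanity.io/v2024-01-01/data/query/${dataset}`;
  return `${base}?query=${encodeURIComponent(groq)}`;
}

// e.g. queryUrl("myProjectId", "production", `*[_type == "post"]`)
```

Nothing in that address identifies *where* the Studio is running — only which project and dataset it points at.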
For this reason, developer teams will often use Sanity datasets as a proxy for "environments." Creating and migrating content between datasets is made simple with the Sanity CLI. The commands below cover creating a new dataset named `development` and updating it to the current state of the `production` dataset.

1. **Run** the following to create a new dataset named `development`

```sh
pnpm dlx sanity@latest dataset create development
```

Choose a **public** dataset again for this project. For your mission-critical future projects, you may prefer **private**.

1. **Run** this to export the current documents and assets from the `production` dataset

```sh
pnpm dlx sanity@latest dataset export production
```

1. **Run** this to import the `production.tar.gz` dataset backup into the `development` dataset

```sh
pnpm dlx sanity@latest dataset import production.tar.gz development
```

1. **Update** the value of your local `.env.local` file to use the `development` dataset

```text:.env.local
NEXT_PUBLIC_SANITY_DATASET="development"
```

Now your content updates during development and testing won't impact the production site. You may also choose to configure Vercel to target a different dataset for preview and production builds. You can now delete the `production.tar.gz` backup file.

1. [Cross-Dataset Duplicator](https://www.sanity.io/plugins/cross-dataset-duplicator) is a popular plugin for allowing authors to move content between datasets from inside the Studio.

## Quick review

You now have the foundational skills to build a content-editable web application:

* An account at Sanity for storing content
* A basic Next.js application for displaying that content
* A lightly-configured Sanity Studio for content editing
* An account at GitHub for version controlling your files
* A Git workflow for branching, committing changes, and merging
* An account at Vercel for hosting preview and production builds of your web application

This is all _functional_, but it's far from _finished_.
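One loose end from above: if you do configure Vercel to target a different dataset for preview and production builds, the selection logic can be tiny. A sketch (the mapping itself is a choice, not something the course prescribes; `VERCEL_ENV` is Vercel's documented environment variable):

```typescript
// VERCEL_ENV is "production", "preview", or "development" on Vercel builds,
// and undefined locally. Send production builds to the production dataset
// and everything else to the development dataset created above.
export function datasetFor(vercelEnv: string | undefined): "production" | "development" {
  return vercelEnv === "production" ? "production" : "development";
}
```

You'd feed the result into the same place `NEXT_PUBLIC_SANITY_DATASET` is read today; setting that variable per-environment in the Vercel dashboard achieves the same thing with no code.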
In the next lessons you'll look at doing more interesting things on the front end with your content.

## [Displaying images](/learn/course/content-driven-web-application-foundations/displaying-images)

Sanity stores your image assets; learn how both the Sanity CDN and Next.js's `Image` component help optimize rendering them.

## Why optimized assets matter

For most web applications, the majority of data sent over the network will be for assets – such as images and videos. Your end users want your application to load as fast as possible, and it's well-known that faster-loading sites directly improve conversion rates. Optimizing web applications for performance is a deep topic; this lesson aims to give essential guidance for serving images using utilities provided by Sanity and Next.js.

1. See [Optimising Largest Contentful Paint](https://csswizardry.com/2022/03/optimising-largest-contentful-paint/) on CSS Wizardry for an example of how much further this topic goes.

## Git workflow reminder

This is the last time we'll remind you to create a branch when working on new features. We trust you'll get in the habit from now on.

1. **Create** a new local branch before continuing.

```sh
git checkout -b add-images
```

## Uploading and querying images

Assets uploaded to the Content Lake are available on the Sanity CDN to render on your front end. Parameters can be added to an image URL to determine its size, cropping, file type, and more.

1. See [Presenting Images](https://www.sanity.io/learn/apis-and-sdks/presenting-images) in the documentation for more details

When you upload an image to the Content Lake, an additional document is created to represent that asset, along with details of its metadata and more. Uploading an image from within a document creates a reference to that asset document.

1. **Upload** an image to the "Main image" field of a `post` document and publish

Now you can query for a single post-type document with an image and return just the image field.

1.
**Run** the query below in Vision in your Sanity Studio

```groq
*[_type == "post" && defined(mainImage)][0]{
  mainImage
}
```

The response should contain a `_ref` inside the `asset` attribute. There may also be crop information and an `alt` string field.

```json
{
  "mainImage": {
    "_type": "image",
    "asset": {
      "_ref": "image-a9302e7a5555e209623897eeec703c39499db23e-5785x3857-jpg",
      "_type": "reference"
    }
  }
}
```

### Additional fields and alt text

The `image` field schema type is similar to the `object` type in that it can have additional fields. One is already configured for you for "alternative text." "Alt" text is used as a fallback when the image has not yet loaded and helps describe the image for screen readers. It is an essential addition for the accessibility of your web application.

1. [See the MDN documentation](https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/alt) for more information about the `alt` attribute.
2. Your current schema type setup stores the `alt` text in **this** document. The popular [Media browser plugin](https://www.sanity.io/plugins/sanity-plugin-media) writes alt text to the **asset** document.

Including alt text for images is important enough that your Studio schema should enforce it as a requirement. Use a custom validation rule to require the `alt` field when the asset field has a value.

1. **Update** the post-type schema to add a validation rule to the `alt` text field

```typescript:src/sanity/schemaTypes/postType.ts
defineField({
  name: 'alt',
  type: 'string',
  title: 'Alternative text',
  validation: (rule) =>
    rule.custom((value, context) => {
      const parent = context?.parent as { asset?: { _ref?: string } }

      return !value && parent?.asset?._ref
        ? 'Alt text is required when an image is present'
        : true
    }),
})
```

### Resolving the asset reference

Using the GROQ operator to resolve a reference (`->`), you can return everything from the `asset` attribute.

1.
**Run** the query below in Vision to return the referenced asset document

```groq
*[_type == "post" && defined(mainImage)][0]{
  mainImage {
    ...,
    asset->
  }
}
```

You should now have a much larger response, and within it, a `url` attribute with the full path to the original image. This is useful, however:

* It would be slow for end users and wasteful of bandwidth to serve a full-size image for every request.
* Because Sanity image URLs follow a strict convention, the [`@sanity/image-url`](https://www.sanity.io/docs/image-url) package allows you to create image URLs _without_ resolving references – using just the project ID, dataset, and asset ID.

1. [`@sanity/asset-utils`](https://www.npmjs.com/package/@sanity/asset-utils) is another handy library for working with Sanity assets using just their ID

Next, you'll update the front end to display the "main image" with a dynamically generated URL.

## On-demand transformations

Images served from Sanity's CDN can be resized and delivered in different qualities and formats, all by appending specific parameters to the URL. Serving images closer to the size at which they are viewed, and in the most efficient format, is the best way to reduce bandwidth and loading times.

When you ran `sanity init` with the Next.js template, a file was created for you with the image builder preconfigured with your project ID and dataset name:

```typescript:src/sanity/lib/image.ts
import createImageUrlBuilder from '@sanity/image-url'
import { SanityImageSource } from '@sanity/image-url/lib/types/types'

import { dataset, projectId } from '../env'

// https://www.sanity.io/docs/image-url
const builder = createImageUrlBuilder({ projectId, dataset })

export const urlFor = (source: SanityImageSource) => {
  return builder.image(source)
}
```

This `urlFor` function will accept a Sanity image – as a full asset document, or even just the ID as a string – and return a method for you to generate a full URL.

1.
**Update** your individual post route to render an image if it exists:

```tsx:src/app/posts/[slug]/page.tsx
import { notFound } from "next/navigation";
import Link from "next/link";

import { sanityFetch } from "@/sanity/lib/live";
import { POST_QUERY } from "@/sanity/lib/queries";
import { urlFor } from "@/sanity/lib/image";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: post } = await sanityFetch({
    query: POST_QUERY,
    params: await params,
  });

  if (!post) {
    notFound();
  }

  return (
    <main>
      {post?.mainImage ? (
        <img
          src={urlFor(post.mainImage)
            .width(800)
            .height(300)
            .quality(80)
            .auto("format")
            .url()}
          alt={post?.mainImage?.alt || ""}
          width={800}
          height={300}
        />
      ) : null}
      <h1>{post?.title}</h1>
      <hr />
      <Link href="/posts">← Return to index</Link>
    </main>
  );
}
```

The post to which you uploaded an image and published the changes should now render the image from Sanity.

![Blog post web page showing an image and title](https://cdn.sanity.io/images/3do82whm/next/cb8c7a29637e54a732c50101281c18d0034ba0b6-2144x1388.png)

Take note of the methods passed along to `urlFor`, which created a unique URL for the image: cropped to `800x300` pixels, at a quality of 80%, in the most efficient file format the current browser can display, and finally returned as a complete URL string.

1. See [`@sanity/image-url`](https://www.sanity.io/docs/image-url) for the full list of available methods and their uses.

If you inspect the URL of the image, you should see a result like this:

```text
https://cdn.sanity.io/images/mml9n8hq/production/a9302e7a5555e209623897eeec703c39499db23e-5785x3857.jpg?rect=0,845,5785,2169&w=800&h=300&q=80&auto=format
```

### Crop and hotspot

While the front end is set to determine the size of the image, your content creators may want to draw focus to a specific region. In other CMSes, this typically means uploading several versions of the same image at different crop sizes. With Sanity, you can store the crop and focal intentions as data.

Within your Sanity Studio `post` schema type, the "main image" field contains an option of `hotspot` set to `true`.

```typescript
defineField({
  name: 'mainImage',
  type: 'image',
  options: {
    hotspot: true,
  },
  // ... other settings
}),
```

This enables the crop and hotspot tool inside Sanity Studio, allowing creators to set the bounds of the image that should be displayed and, when cropped, which area it should focus on.

![Crop and hotspot tool on an image in Sanity Studio](https://cdn.sanity.io/images/3do82whm/next/a10d0d6b80db2e3d67c8293097b5dd5dbf366412-2144x1388.png)

Because the crop and hotspot values were returned from the GROQ query for the asset, they were sent along when creating the URL.

1.
**Update** the image in the document with a crop area and focal point, and publish the document.

Now your image should look somewhat different on the front end, with its best effort made to utilize the crop area and focal point.

![A blog post web page with an image of a cake in focus](https://cdn.sanity.io/images/3do82whm/next/4724a08b1324ac081458f2feac34446011f8913f-2144x1388.png)

You've now successfully uploaded, editorialized, and displayed an image from Sanity on the front end that is performant and accessible. However, your IDE may be showing a warning that the use of the `<img>` element may produce sub-par performance. There is some logic to this: the rendering of an image can be optimized further than what you currently have. Next.js prefers you use its `Image` component, so let's switch to that now.

## Next.js Image component

Fast image rendering is crucial to fast web applications, so any improvements that can be made in this area are beneficial. Next.js ships an `Image` component for this reason.

1. See Vercel's documentation for more about the [Next.js Image component](https://nextjs.org/docs/app/building-your-application/optimizing/images) and optimization

### Update Next.js config

The Next.js documentation mentions that you'll need to update the Next.js config to accept the Sanity CDN URL to use the `Image` component with remote images.

1. **Update** `next.config.ts` to include the Sanity CDN URL

```typescript:next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // ...all other settings
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "cdn.sanity.io",
      },
    ],
  },
};

export default nextConfig;
```

Now update your individual post route to use the imported Next.js `<Image>` component instead of the HTML `<img>` element. It is important to note that this component requires a specified height and width.

1.
**Update** the post route to use `Image` from `next/image`:

```tsx:src/app/(frontend)/posts/[slug]/page.tsx
import { notFound } from "next/navigation";
import Image from "next/image";
import Link from "next/link";

import { sanityFetch } from "@/sanity/lib/live";
import { POST_QUERY } from "@/sanity/lib/queries";
import { urlFor } from "@/sanity/lib/image";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: post } = await sanityFetch({
    query: POST_QUERY,
    params: await params,
  });

  if (!post) {
    notFound();
  }

  return (
    <main>
      {post?.mainImage ? (
        <Image
          src={urlFor(post.mainImage)
            .width(800)
            .height(300)
            .quality(80)
            .auto("format")
            .url()}
          alt={post?.mainImage?.alt || ""}
          width={800}
          height={300}
        />
      ) : null}
      <h1>{post?.title}</h1>
      <hr />
      <Link href="/posts">← Return to index</Link>
    </main>
  );
}
```

Once the page reloads, open the web inspector and look at the generated `<img>` element. You will notice it has several more attributes than before. The `loading` and `decoding` attributes, in particular, are subtle performance improvements.

```html
<img
  alt="Chocolate layer cake"
  loading="lazy"
  decoding="async"
  ...
/>
```

You're now retrieving images from Sanity's CDN with the best possible front-end performance thanks to the Next.js `Image` component. The next building block of web applications to render is rich text and block content. Let's get acquainted with Portable Text in the next lesson.

## [Block content and rich text](/learn/course/content-driven-web-application-foundations/block-content-and-rich-text)

Put the power of Portable Text to work for rendering everything from simple formatted text up to complex block objects.

You may be familiar with "rich text" from almost every text editing interface you've used. Any text with formatting applied — like bold, italic, etc — is considered rich text. Displaying rich text is one of the fundamental building blocks of the web. "Block content" is a more modern concept in which rich media — like video and images — or complex objects are editable as "blocks" within paragraphs of rich text.

Editing block content and rich text typically takes one of two forms. Visual editors like Notion or WordPress' "Gutenberg" block editor allow you to author block content and rich text with a focus on visuals and a locked-down interface; extracting the blocks and text as data is not simple and typically does not follow a published standard. The alternative is authoring the formatting markup of text inline. Examples include Markdown and MDX, where there are many competing "standards," authoring the styles inline requires deep knowledge, and rendering the content requires complex parsers.

Sanity created, standardized, and maintains tooling for Portable Text to address these challenges.

## What is Portable Text?
Portable Text is a published standard for storing block content and rich text as an array of JSON-compatible objects.

1. If you're interested, the [specification for Portable Text](https://www.portabletext.org/) is published on GitHub.

Portable Text is not intended to be human-readable or human-authored. Tooling is provided for both of these purposes.

### Authoring Portable Text

The Portable Text editor — which you see as the `body` field inside `post`-type documents — is maintained by Sanity for the authoring of Portable Text. It has many options as part of the Studio configuration API. Your content creators style rich text and insert blocks, and the Portable Text editor creates the correct data structures.

### Querying Portable Text

One of the significant benefits of authoring in this standard is that the data structures it writes are queryable with GROQ. This makes queries like _"find every post document with a link"_ or _"extract all the headings from this text to generate a table of contents"_ possible.

### Rendering Portable Text

Converting this array of objects into HTML (or any other output) is a matter of mapping over it and serializing each item into the desired output. Fortunately, Sanity also provides tooling for this. Before doing this in your project, let's get some styling in place first.

## Tailwind Typography

Tailwind CSS provides a plugin to add "good defaults" to blocks of rich text – and an option to revert to default styles for blocks – with the [Typography Plugin](https://github.com/tailwindlabs/tailwindcss-typography). Install it into the project now.

1. **Run** the following to install the Tailwind Typography plugin

```sh
pnpm add -D @tailwindcss/typography
```

1.
**Update** the `globals.css` file to use it

```css:src/app/globals.css
@import "tailwindcss";
@plugin "@tailwindcss/typography";
```

## @portabletext/react

Sanity provides a `PortableText` component from [`@portabletext/react`](https://www.npmjs.com/package/@portabletext/react) to render Portable Text blocks into React components. It was installed as part of `next-sanity`, so you can now import this component.

1. **Update** the individual post page route

```tsx:src/app/(frontend)/posts/[slug]/page.tsx
import { notFound } from "next/navigation";
import Image from "next/image";
import Link from "next/link";
import { PortableText } from "next-sanity";

import { sanityFetch } from "@/sanity/lib/live";
import { POST_QUERY } from "@/sanity/lib/queries";
import { urlFor } from "@/sanity/lib/image";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: post } = await sanityFetch({
    query: POST_QUERY,
    params: await params,
  });

  if (!post) {
    notFound();
  }

  return (
    <main>
      {post?.mainImage ? (
        <Image
          src={urlFor(post.mainImage)
            .width(800)
            .height(300)
            .quality(80)
            .auto("format")
            .url()}
          alt={post?.mainImage?.alt || ""}
          width={800}
          height={300}
        />
      ) : null}
      <h1>{post?.title}</h1>
      <hr />
      {post?.body ? (
        <div className="prose">
          <PortableText value={post.body} />
        </div>
      ) : null}
      <Link href="/posts">← Return to index</Link>
    </main>
  );
}
```

![A blog post web page showing paragraphs of text](https://cdn.sanity.io/images/3do82whm/next/1c7dff07f120b31a98f4e22fbe2826b7dcc6980e-2144x1388.png)

You should now see the rich text of any published document rendered with sensible default styling. The base configuration of the `PortableText` component includes rendering headings (`h1`, `h2`, etc) and lists (`li`, `ol`, etc). We don't yet have any block content, so let's look at that in the Studio first, then render it on the front end.

## Adding blocks to the editor

Open `blockContentType.ts` and you will see the current configuration of the Portable Text editor. It contains styles, lists, marks, and annotations you could add or remove. You can load additional blocks into the editor at the end of the array. An `image` field is already configured.

1. **Add** an image to any Portable Text field and publish the document.

![Sanity Studio with an image block inside the Portable Text editor](https://cdn.sanity.io/images/3do82whm/next/fa165f876fd17e9d91e8564d2315e7a986bcef20-2144x1388.png)

If you refresh your front end for this post now, you won't see the image rendered on the page. This is because the `PortableText` component does not know what to do with it. To solve this, you can create an object of components that can replace or extend the defaults.

1. **Create** a new file for Portable Text components

```tsx:src/sanity/portableTextComponents.tsx
import Image from "next/image";
import { PortableTextComponents } from "next-sanity";

import { urlFor } from "@/sanity/lib/image";

export const components: PortableTextComponents = {
  types: {
    image: (props) =>
      props.value ? (
        <Image
          src={urlFor(props.value).width(600).url()}
          alt={props?.value?.alt || ""}
          width={600}
          height={400}
        />
      ) : null,
  },
};
```

1. **Update** your post route to import this components configuration

```tsx:src/app/(frontend)/posts/[slug]/page.tsx
import { components } from "@/sanity/portableTextComponents";
```

1.
**Update** the `PortableText` component to accept it as a prop

```tsx:src/app/(frontend)/posts/[slug]/page.tsx
<PortableText value={post.body} components={components} />
```

You should now have the image from your Portable Text field rendered inline. You could adjust the styling or size to give it a different treatment.

![Blog post web page showing image of a stand mixer](https://cdn.sanity.io/images/3do82whm/next/b844e4a546669375d902708c439c8b6d9ba0caf7-2144x1388.png)

You might also extend the `image` field inside the `body` field to give your authors some additional options for presenting the image. You've now finished your final fundamental. Your functional blog lacks some polish, so let's spruce it up in the next lesson.

## [Build up the blog](/learn/course/content-driven-web-application-foundations/build-up-the-blog)

With all the basics in place, let's blow out our blog front end into something more visually impressive.

![A nicely designed post index page](https://cdn.sanity.io/images/3do82whm/next/456fd8b7757f0c0a8a279a154b0fb75f5b5ba02c-2144x1388.png)

For the remaining courses in this track, a much richer front end that requests and renders more content from your three schema types will be helpful. In this lesson, you'll build the blog into something more interesting.

## Install new dependency

To format date strings returned from Sanity documents, install [Day.js](https://www.npmjs.com/package/dayjs).

1. **Run** the following to install Day.js

```sh
pnpm add dayjs
```

## Update queries and types

1. **Update** your queries to request more content, including resolving `category` and `author` references.
```typescript:src/sanity/lib/queries.ts
import { defineQuery } from 'next-sanity'

export const POSTS_QUERY = defineQuery(`*[_type == "post" && defined(slug.current)]|order(publishedAt desc)[0...12]{
  _id,
  title,
  slug,
  body,
  mainImage,
  publishedAt,
  "categories": coalesce(
    categories[]->{
      _id,
      slug,
      title
    },
    []
  ),
  author->{
    name,
    image
  }
}`)

export const POSTS_SLUGS_QUERY = defineQuery(`*[_type == "post" && defined(slug.current)]{
  "slug": slug.current
}`)

export const POST_QUERY = defineQuery(`*[_type == "post" && slug.current == $slug][0]{
  _id,
  title,
  body,
  mainImage,
  publishedAt,
  "categories": coalesce(
    categories[]->{
      _id,
      slug,
      title
    },
    []
  ),
  author->{
    name,
    image
  }
}`)
```

1. **Run** Typegen to update your query types

```sh
pnpm run typegen
```

1. This command was set up during [Generate TypeScript Types](https://www.sanity.io/learn/course/content-driven-web-application-foundations/generate-typescript-types)

## Create new components

As some content will be rendered on both the post index and the individual post routes, abstracting these elements into components helps keep code somewhat DRY (don't repeat yourself). You can adapt the Tailwind CSS class names to your liking.

1. Create a new directory — `/src/components` — in your Next.js application for storing components. These aren't stored in `/app` since that directory is primarily for generating routes.

1. **Create** an `Author` component

```tsx:src/components/author.tsx
import { POST_QUERYResult } from '@/sanity/types'
import { urlFor } from '@/sanity/lib/image'
import Image from 'next/image'

type AuthorProps = {
  author: NonNullable<POST_QUERYResult>['author']
}

export function Author({ author }: AuthorProps) {
  return author?.image || author?.name ? (
{author?.image ? ( {author.name ) : null} {author?.name ? (

{author.name}

) : null}
) : null } ``` 1. **Create** a `Categories` component ```tsx:src/components/categories.tsx import { POST_QUERYResult } from '@/sanity/types' type CategoriesProps = { categories: NonNullable['categories'] } export function Categories({ categories }: CategoriesProps) { return categories.map((category) => ( {category.title} )) } ``` 1. **Create** a `PublishedAt` component ```tsx:src/components/published-at.tsx import { POST_QUERYResult } from '@/sanity/types' import dayjs from 'dayjs' type PublishedAtProps = { publishedAt: NonNullable['publishedAt'] } export function PublishedAt({ publishedAt }: PublishedAtProps) { return publishedAt ? (

{dayjs(publishedAt).format('D MMMM YYYY')}

) : null } ``` 1. **Create** a `Title` component for rendering a page title in a `

` ```tsx:src/components/title.tsx import { PropsWithChildren } from 'react' export function Title(props: PropsWithChildren) { return (

{props.children}

) } ``` 1. **Create** a `Post` component for rendering the above components on a single post page ```tsx:src/components/post.tsx import { PortableText } from 'next-sanity' import Image from 'next/image' import { Author } from '@/components/author' import { Categories } from '@/components/categories' import { components } from '@/sanity/portableTextComponents' import { POST_QUERYResult } from '@/sanity/types' import { PublishedAt } from '@/components/published-at' import { Title } from '@/components/title' import { urlFor } from '@/sanity/lib/image' export function Post(props: NonNullable) { const { title, author, mainImage, body, publishedAt, categories } = props; return (
{title}
{mainImage ? (
) : null} {body ? (
) : null}
); } ``` 1. **Create** a `PostCard` component for rendering the above components on the post index page ```tsx:src/components/post-card.tsx import Link from 'next/link' import Image from 'next/image' import { Author } from '@/components/author' import { Categories } from '@/components/categories' import { POSTS_QUERYResult } from '@/sanity/types' import { PublishedAt } from '@/components/published-at' import { urlFor } from '@/sanity/lib/image' export function PostCard(props: POSTS_QUERYResult[0]) { const { title, author, mainImage, publishedAt, categories } = props return (

{title}

{mainImage ? ( {mainImage.alt ) : null}
) } ``` 1. **Create** a `Header` component for the top nav of the site ```tsx:src/components/header.tsx import Link from 'next/link' export function Header() { return (
Layer Caker
  • Posts
  • Sanity Studio
) } ``` ## Update your routes Now that you have many small components, it's time to import them into your routes to complete the design. 1. **Update** the root layout to display the site-wide navigation ```tsx:src/app/(frontend)/layout.tsx import { Header } from '@/components/header' import { SanityLive } from '@/sanity/lib/live' export default function FrontendLayout({ children, }: Readonly<{ children: React.ReactNode }>) { return (
{children}
) } ``` 1. **Update** the root page to use the `Title` component ```tsx:src/app/(frontend)/page.tsx import Link from 'next/link' import { Title } from '@/components/title' export default async function Page() { return (
Layer Caker Home Page
Posts index →
) } ``` 1. **Update** the post index page to use the `PostCard` component ```tsx:src/app/(frontend)/posts/page.tsx import Link from 'next/link' import { sanityFetch } from '@/sanity/lib/live' import { POSTS_QUERY } from '@/sanity/lib/queries' export default async function Page() { const { data: posts } = await sanityFetch({ query: POSTS_QUERY }) return (

Post index

    {posts.map((post) => (
  • {post?.title}
  • ))}

← Return home
) } ``` 1. **Update** the individual post route to use the `Post` component ```tsx:src/app/(frontend)/posts/[slug]/page.tsx import { notFound } from 'next/navigation' import { sanityFetch } from '@/sanity/lib/live' import { POST_QUERY } from '@/sanity/lib/queries' import { Post } from '@/components/post' export default async function Page({ params, }: { params: Promise<{ slug: string }> }) { const { data: post } = await sanityFetch({ query: POST_QUERY, params: await params, }) if (!post) { notFound() } return (
  )
}
```

## All done!

Click around the site now. You should have a richer site-wide header, post index, and individual post pages.

![A blog post web page with a nice design](https://cdn.sanity.io/images/3do82whm/next/bf2cc448c4f1ce6b4b05a74c29218c92da511323-2144x1388.png)

You're also in a much better position for the remaining lessons in this track. Let's test what you've learned in the final lesson.

## [Fundamentals quiz](/learn/course/content-driven-web-application-foundations/fundamentals-quiz)

A quick test of everything you've learned through this course. With the skills learned in this course, you can now build and deploy content-editable applications that serve the three user groups impacted by the work we do:

* **Developers** can now collaboratively code, deploy, and repeat.
* **Authors** can now collaboratively write, publish, and repeat.
* **End users** can now consume content and act on it.

Here are a few quiz questions designed to reinforce what you've learned.

**Question:** What is the purpose of using Sanity and Next.js?

1. To create a content-editable application for end users and authors
2. To ensure we're on the cutting edge
3. To generate TypeScript types
4. To make a website

**Question:** Which command creates a new Sanity project inside a Next.js application?

1. npm install sanity
2. next init
3. sanity init
4. yarn add sanity

**Question:** What enables route-level fetching in async components?

1. React Query
2. React Server Components
3. Sanity Client
4. Promises

**Question:** What is the purpose of using Git in a development workflow?

1. Deploying the application to Vercel
2. Collaboration with developer team members
3. Creating preview builds from branches
4. All of the above

**Question:** Why use the Next.js Image component?

1. To silence eslint warnings
2. Improved performance and optimization
3. You can't display images without it
4. GIF support

**Question:** What format does the Portable Text editor write?

1. Markdown
2. HTML
3. Portable Text
4. Textile

**Question:** What is the purpose of the PortableText React component?

1. To convert Portable Text to MDX
2. To author Portable Text
3. To serialize Portable Text into components
4. To query Portable Text

**Question:** How are TypeScript types generated from schema and queries?

1. Sanity Studio
2. Sanity TypeGen
3. Sanity Manage
4. GROQ

## What's next?

With the fundamentals finished, it's time to make the rendering and editing experience even more robust by completing the following courses in this track.

# [SEO optimized content with Next.js](/learn/course/seo-optimization)

SEO doesn't have to be complicated. It's a matter of taking content you've already responsibly structured with Sanity and rendering it in the format and places that search engines expect. Complete this course to improve how robots and humans interact with your content with Sanity and Next.js.

## [An introduction to SEO and structured content](/learn/course/seo-optimization/an-introduction-to-seo-and-structured-content)

A few core principles, applied consistently, can form a solid foundation that benefits both search engines and editorial workflows.

## About this course

This course will guide you through the best practices of building SEO-optimized content in Next.js with Sanity. Rather than getting bogged down in complex SEO configurations, the focus is on creating simple but effective schema types and queries that give content editors flexibility while maintaining SEO best practices. This approach emphasizes pragmatic solutions that address essential SEO needs without adding unnecessary complexity. It focuses on structuring content for both search engines and editorial teams, offering smart defaults along with optional granular controls. The aim is to simplify SEO-friendly content creation while adhering to Next.js best practices.

## About the author

I'm Jono, the founder of [Roboto Studio](https://robotostudio.com/?utm-source=sanity-learn).
We specialize in building the best editorial experiences on the web with Sanity and Next.js. I'm excited to share our opinionated but battle-tested approach to SEO with you. This isn't just theory: these are the same patterns we use successfully with our clients every day. Also, a special thanks to Sne and Hrithik for their help structuring this course.

## Simplifying SEO with structured content

SEO is often presented as a complex endeavor, but it is more straightforward than commonly assumed. A few core principles, applied consistently, can form a solid foundation that benefits both search engines and editorial workflows. A well-structured content model handles most of the heavy lifting, removing the need for overly complex schemas or endless metadata fields. This approach also keeps SEO incremental: you can adopt best practices over time without having to re-enter content from scratch. Next.js includes opinionated APIs that streamline SEO optimization. Aligning Sanity schema types and queries with these conventions creates an effective framework for building SEO-ready websites. In the following lessons we will take a closer look at how to leverage structured content and Next.js to help search engines understand and rank your content.

## [SEO schema types and metadata](/learn/course/seo-optimization/seo-schema-types-and-metadata)

Prepare useful, optional, and reusable schema types specifically for SEO content, and render them into page metadata the Next.js way. For the benefit of content authors, fields relevant to SEO should not always be required. Instead, they should override existing content when provided.

## Create an SEO schema type

No matter which document type you're applying SEO content to, the same data will be required: for example, a title, description, image, and more. To easily re-use these fields, you'll register a custom schema type in the Studio.

1.
**Create** a new schema type for SEO fields

```typescript:src/sanity/schemaTypes/seoType.ts
import { defineField, defineType } from "sanity";

export const seoType = defineType({
  name: "seo",
  title: "SEO",
  type: "object",
  fields: [
    defineField({
      name: "title",
      description: "If provided, this will override the title field",
      type: "string",
    }),
  ],
});
```

1. **Update** your registered schema types to include `seoType`

```typescript:src/sanity/schemaTypes/index.ts
// ...all your other imports
import { seoType } from "./seoType";

export const schema: { types: SchemaTypeDefinition[] } = {
  types: [
    // ...all your other types
    seoType,
  ],
};
```

1. **Update** your `page` and `post` document types to include the SEO fields

```typescript:src/sanity/schemaTypes/pageType.ts
export const pageType = defineType({
  // ...all other configuration
  fields: [
    // ...all other fields
    defineField({
      name: "seo",
      type: "seo",
    }),
  ],
});
```

1. Throughout the rest of this course you'll be expected to keep both the `page` and `post` document schema types, GROQ queries, and Next.js routes updated—but code examples may only be shown for the `page` type.

You should now see the SEO object field at the bottom of `page` and `post` document types in the Studio.

![Sanity Studio with a document open showing an SEO field](https://cdn.sanity.io/images/3do82whm/next/d5199c6f4e33fff47a2b01e75ad39bf990ece0ef-2240x1480.png)

## Queries with fallbacks

In the description of the SEO title field, we've informed the author that the title is not required, but that it will override the title field if provided. The title field is likely to be sufficient for SEO the majority of the time, but if for some reason it needs to be different, the author now has an avenue to override it. There are a few ways for the front end to respect this. You _could_ choose which field to render with logic like this:

```tsx:Example only
{seo?.title ?? title}
```

But then we'd need to duplicate that logic _everywhere_ we optionally render the correct value. It's also annoying because the `seo` attribute may or may not exist. Because we have GROQ, we can move all this logic into the query instead.

1. **Update** the `PAGE_QUERY` to include an `seo` attribute with values and fallbacks

```typescript:src/sanity/lib/queries.ts
export const PAGE_QUERY = defineQuery(`*[_type == "page" && slug.current == $slug][0]{
  ...,
  "seo": {
    "title": coalesce(seo.title, title, ""),
  },
  content[]{
    ...,
    _type == "faqs" => {
      ...,
      faqs[]->
    }
  }
}`);
```

Don't forget to update your `POST_QUERY` to include the same projection.

1. `coalesce()` is a [GROQ function](https://www.sanity.io/learn/specifications/groq-functions) that returns the first value that is not null

Now `seo.title` will never be `null`; it will contain either the optionally provided SEO title, the page title, or an empty string.

1. **Run** the following command to update your Types now that you've made schema and query changes

```sh
npm run typegen
```

1. This command was set up in the [Generate TypeScript Types](https://www.sanity.io/learn/course/content-driven-web-application-foundations/generate-typescript-types) lesson of the [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) course.

Just to prove this works, update the dynamic route that renders your pages to include a `<title>` tag. Moving meta tags into the `<head>` is a [feature of React 19](https://react.dev/blog/2024/12/05/react-19#support-for-metadata-tags). (This isn't the approach Next.js 15 recommends; you'll do that later.)

1.
**Update** the dynamic page route to include the `<title>` tag

```tsx:src/app/(frontend)/[slug]/page.tsx
import { PageBuilder } from "@/components/PageBuilder";
import { sanityFetch } from "@/sanity/lib/live";
import { PAGE_QUERY } from "@/sanity/lib/queries";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: page } = await sanityFetch({
    query: PAGE_QUERY,
    params: await params,
  });

  return (
    <>
      <title>{page?.seo.title}</title>
      {page?.content ? (
        <PageBuilder
          documentId={page._id}
          documentType={page._type}
          content={page.content}
        />
      ) : null}
    </>
  );
}
```

Your front end should now render either the page title or the SEO title field value into a `<title>` tag inside the `<head>`.

## Metadata, the Next.js way

The problem with relying on the previous method is deduplication. React will render multiple `<title>` tags when it finds them, and your Next.js application may eventually use nested layouts where this is a possibility. Instead, Next.js provides an API: export a uniquely named function from a route that takes the same dynamic data rendered into the page and generates those meta tags as required. In the example below, we have extracted the `sanityFetch` call into its own function because it will now be re-used multiple times (which Next.js should cache and only run once). Inside the `generateMetadata` function, the same `seo.title` value is used to generate a `<title>` tag in the final markup.

1. **Update** your `page` route to generate metadata and the rendered on-page content in separate functions.
```tsx:src/app/(frontend)/[slug]/page.tsx
import type { Metadata } from "next";
import { PageBuilder } from "@/components/PageBuilder";
import { sanityFetch } from "@/sanity/lib/live";
import { PAGE_QUERY } from "@/sanity/lib/queries";

type RouteProps = {
  params: Promise<{ slug: string }>;
};

const getPage = async (params: RouteProps["params"]) =>
  sanityFetch({
    query: PAGE_QUERY,
    params: await params,
  });

export async function generateMetadata({
  params,
}: RouteProps): Promise<Metadata> {
  const { data: page } = await getPage(params);

  return {
    title: page.seo.title,
  };
}

export default async function Page({ params }: RouteProps) {
  const { data: page } = await getPage(params);

  return page?.content ? (
    <PageBuilder
      documentId={page._id}
      documentType={page._type}
      content={page.content}
    />
  ) : null;
}
```

Following the same pattern, you can add SEO overrides for other important metadata tags, such as a `<meta name="description" />` tag. This is what Google uses to display a description of your page in the search results. You can also add an override using the SEO image field, which will be used to populate the `<meta property="og:image" />` tag. The most important takeaway is that you always want both an override and a fallback. It keeps your content consistent and standardizes the way you query your SEO fields. Don't forget to update your individual post route to use the same conventions.

1. Take a look at the Next.js [metadata documentation](https://nextjs.org/docs/app/building-your-application/optimizing/metadata) for more information.

## Just getting started

At this point, you can see how the pattern works and how easy it is to extend to other SEO fields.
In the next lesson, you will enhance your SEO functionality by adding more fields, including those for Open Graph data and one to control search engine indexing.

## [Extending the SEO schema types](/learn/course/seo-optimization/adding-seo-fields-to-your-project)

Now that you're set up for success, extend the fields made available to your authors. In the first lesson, you learned how to add some basic SEO fields to your schema. Now you're going to kick it up a notch with Open Graph fields and more granular controls over displaying documents in lists.

## Add more SEO fields

1. **Update** your `seoType` schema type to include `description`, `image` and a `noIndex` field

```typescript
import { defineField, defineType } from "sanity";

export const seoType = defineType({
  name: "seo",
  title: "SEO",
  type: "object",
  fields: [
    defineField({
      name: "title",
      description: "If provided, this will override the title field",
      type: "string",
    }),
    defineField({
      name: "description",
      type: "text",
    }),
    defineField({
      name: "image",
      type: "image",
      options: { hotspot: true },
    }),
    defineField({
      name: "noIndex",
      type: "boolean",
    }),
  ],
});
```

You may wish to have separate title and description fields for Open Graph properties, but in this course you'll re-use these values.

1. **Update** `PAGE_QUERY` and `POST_QUERY` to include these new attributes, along with default values

```typescript:src/sanity/lib/queries.ts
export const PAGE_QUERY = defineQuery(`*[_type == "page" && slug.current == $slug][0]{
  ...,
  "seo": {
    "title": coalesce(seo.title, title, ""),
    "description": coalesce(seo.description, ""),
    "image": seo.image,
    "noIndex": seo.noIndex == true
  },
  content[]{
    ...,
    _type == "faqs" => {
      ...,
      faqs[]->
    }
  }
}`);
```

1. **Run** the following to regenerate Types now that you've made schema and query changes

```sh
npm run typegen
```

## Render more metadata

With these fields now present in your schema types and queries, you can render even more metadata in your route.
Note in the code below how the Open Graph image re-uses the `urlFor` helper function to generate an image at the correct width and height, while also respecting crop and hotspot data. The `noIndex` value is only included in the metadata when it is set to true.

```typescript:src/app/(frontend)/[slug]/page.tsx
// ...the rest of your route

export async function generateMetadata({
  params,
}: RouteProps): Promise<Metadata> {
  const { data: page } = await getPage(params);

  if (!page) {
    return {};
  }

  const metadata: Metadata = {
    title: page.seo.title,
    description: page.seo.description,
  };

  if (page.seo.image) {
    metadata.openGraph = {
      images: {
        url: urlFor(page.seo.image).width(1200).height(630).url(),
        width: 1200,
        height: 630,
      },
    };
  }

  if (page.seo.noIndex) {
    metadata.robots = "noindex";
  }

  return metadata;
}
```

Don't forget to update your individual post route as well.

### A note on `noIndex`

Setting a page to `noIndex` typically means that you want the published document to exist as a route in your application—but you don't want it included in search results, whether on search engines or within your own website. Nothing needs to change now with your `page` type documents, but if you were to include these fields in your `post` type documents, you'd likely want to update any query that looks up and renders many posts to exclude results where `noIndex` is true. For example:

```groq:Example only
*[_type == "post" && seo.noIndex != true]
```

You'll see an example of this later in the lesson [Build a dynamic sitemap](https://www.sanity.io/learn/course/seo-optimization/building-a-dynamic-sitemap).

Now your Sanity Studio and application are capable of authoring, querying, and rendering complex metadata for the most common SEO needs. You can continue to extend these fields for any other metadata requirements.
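If you find yourself repeating this conditional metadata logic across routes, it can help to extract it into a small pure function that is easy to unit test. Below is a minimal, dependency-free sketch: the `PageSeo` shape and the `buildMetadata` name are invented here to mirror the `seo` projection, and are not part of the course code.

```typescript
// Hypothetical helper (names invented): given the seo projection from the
// query, build a plain metadata object using the same conditional rules.
type PageSeo = {
  title: string
  description: string
  image: { url: string } | null
  noIndex: boolean
}

type MetadataSketch = {
  title: string
  description: string
  openGraph?: { images: { url: string; width: number; height: number } }
  robots?: string
}

export function buildMetadata(seo: PageSeo): MetadataSketch {
  const metadata: MetadataSketch = {
    title: seo.title,
    description: seo.description,
  }

  // Only emit an Open Graph image when the author provided one
  if (seo.image) {
    metadata.openGraph = {
      images: { url: seo.image.url, width: 1200, height: 630 },
    }
  }

  // Only emit a robots directive when the page is explicitly excluded
  if (seo.noIndex) {
    metadata.robots = "noindex"
  }

  return metadata
}
```

Because the function is pure, it can be exercised in a plain unit test without booting Next.js, and each route's `generateMetadata` can stay a thin wrapper around it.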
In the following lesson you'll take on another major SEO concern: redirects.

## [Implementing redirects](/learn/course/seo-optimization/implementing-redirects)

Redirects are a critical component of SEO and site maintenance. While they may appear straightforward at first, improper implementation can lead to complex redirect chains and degraded site performance. Let's go through best practices for implementing redirects with Next.js and Sanity.

## Learning objectives

You will create a redirect system that:

* Is configured with documents in Sanity Studio
* Can be managed by your content team
* Won't create a maintenance headache later

## Creating the schema

Let's start with your redirect schema type. You want to make this as editor-friendly as possible: the goal is a non-technical solution that can be managed by your content team and output to your Next.js configuration.

1. See the Next.js [documentation about creating redirects](https://nextjs.org/docs/app/building-your-application/routing/redirecting#redirects-in-nextconfigjs)

Below is a simplified document schema, which you'll make much smarter with validation rules later in the lesson.

1.
**Create** a new document schema type for redirects

```typescript:src/sanity/schemaTypes/redirectType.ts
import { defineField, defineType } from "sanity";
import { LinkIcon } from "@sanity/icons";

export const redirectType = defineType({
  name: "redirect",
  title: "Redirect",
  type: "document",
  icon: LinkIcon,
  fields: [
    defineField({
      name: "source",
      type: "string",
    }),
    defineField({
      name: "destination",
      type: "string",
    }),
    defineField({
      name: "permanent",
      type: "boolean",
      initialValue: true,
    }),
    defineField({
      name: "isEnabled",
      description: "Toggle this redirect on or off",
      type: "boolean",
      initialValue: true,
    }),
  ],
});
```

Don't forget to register it to your Studio schema types

```typescript:src/sanity/schemaTypes/index.ts
// ...all other imports
import { redirectType } from "./redirectType";

export const schema: { types: SchemaTypeDefinition[] } = {
  types: [
    // ...all other schema types
    redirectType,
  ],
};
```

1. **Update** your structure builder configuration to add the redirect document type:

```typescript:src/sanity/structure.ts
// add this line
S.documentTypeListItem('redirect').title('Redirects')
```

## Fetching the redirects

The redirect documents created in Sanity Studio will need to be queried into your Next.js config file.

1. **Update** `queries.ts` to include a GROQ query for redirect documents

```typescript:src/sanity/lib/queries.ts
// ...all other queries

export const REDIRECTS_QUERY = defineQuery(`
  *[_type == "redirect" && isEnabled == true] {
    source,
    destination,
    permanent
  }
`);
```

1. **Create** a new utility to fetch all redirect documents

```typescript:src/sanity/lib/fetchRedirects.ts
import { client } from "./client";
import { REDIRECTS_QUERY } from "./queries";

export async function fetchRedirects() {
  return client.fetch(REDIRECTS_QUERY);
}
```

Since you've added schema types and a new query to the application, don't forget to generate Types.
```sh:Terminal
pnpm run typegen
```

### Things to note

* Vercel has a limit of 1,024 redirects in the Next.js config.
* For large numbers of redirects (1,000+), use a custom middleware solution instead. See the Next.js documentation on [managing redirects at scale](https://nextjs.org/docs/app/building-your-application/routing/redirecting#managing-redirects-at-scale-advanced) for more details.

## Add redirects in your Next.js config

Now we can use Next.js's built-in redirects configuration in `next.config.ts`. This allows us to define redirects that will be applied at build time. Note that redirects defined in `next.config.ts` run **before** any middleware, should you use it in the future.

1. **Update** your `next.config.ts` file to include redirects

```typescript:next.config.ts
// ...other imports
import { fetchRedirects } from "@/sanity/lib/fetchRedirects";

const nextConfig: NextConfig = {
  // ...other config
  async redirects() {
    return await fetchRedirects();
  },
};

export default nextConfig;
```

## Validation rules

Validation is critical, as invalid redirects can break future builds. Without validation, authors could publish a redirect that prevents your application from deploying—or create a deployment with circular redirects.

* Source paths must start with `/`
* Never create circular redirects like `A` -> `B` -> `A`

Here's the validation logic. Yes, it's a bit complex, but it's worth it to avoid hours of debugging when your build breaks because of a missing slash.

1.
**Update** the `source` field in the `redirectType` schema

```typescript:src/sanity/schemaTypes/redirectType.ts
import { defineField, defineType, SanityDocumentLike } from "sanity";
import { LinkIcon } from "@sanity/icons";

function isValidInternalPath(value: string | undefined) {
  if (!value) {
    return "Value is required";
  } else if (!value.startsWith("/")) {
    return "Internal paths must start with /";
  } else if (/[^a-zA-Z0-9\-_/:]/.test(value)) {
    return "Source path contains invalid characters";
  } else if (/:[^/]+:/.test(value)) {
    return "Parameters can only contain one : directly after /";
  } else if (
    value.split("/").some((part) => part.includes(":") && !part.startsWith(":"))
  ) {
    return "The : character can only appear directly after /";
  }

  return true;
}

function isValidUrl(value: string | undefined) {
  try {
    new URL(value || "");
    return true;
  } catch {
    return "Invalid URL";
  }
}

export const redirectType = defineType({
  name: "redirect",
  title: "Redirect",
  type: "document",
  icon: LinkIcon,
  validation: (Rule) =>
    Rule.custom((doc: SanityDocumentLike | undefined) => {
      if (doc && doc.source === doc.destination) {
        return ["source", "destination"].map((field) => ({
          message: "Source and destination cannot be the same",
          path: [field],
        }));
      }

      return true;
    }),
  fields: [
    defineField({
      name: "source",
      type: "string",
      validation: (Rule) => Rule.required().custom(isValidInternalPath),
    }),
    defineField({
      name: "destination",
      type: "string",
      validation: (Rule) =>
        Rule.required().custom((value: string | undefined) => {
          const urlValidation = isValidUrl(value);
          const pathValidation = isValidInternalPath(value);

          if (urlValidation === true || pathValidation === true) {
            return true;
          }

          return typeof urlValidation === "boolean"
            ? urlValidation
            : pathValidation;
        }),
    }),
    defineField({
      name: "permanent",
      description: "Should the redirect be permanent (301) or temporary (302)",
      type: "boolean",
      initialValue: true,
    }),
    defineField({
      name: "isEnabled",
      description: "Toggle this redirect on or off",
      type: "boolean",
      initialValue: true,
    }),
  ],
});
```

The additional validation logic now thoroughly checks:

* If the `source` is a valid internal path
* If the `destination` is a valid URL or a valid internal path
* If the `source` and `destination` values are different

## Pro tips from experience

* Keep an eye on redirect chains; they can cause "too many redirects" errors
* Clean up old redirects periodically
* Consider logging redirects if you need to track usage
* Adjust the cache duration based on how often you update redirects
* You may need to redeploy your site as new redirects are added or existing redirects are modified

Next up, you'll learn how to generate Open Graph images using Tailwind CSS and Vercel edge functions.

## [Creating dynamic Open Graph images](/learn/course/seo-optimization/creating-dynamic-open-graph-images-with-vercel-og)

Generate dynamic Open Graph images that pull your data directly from Sanity, saving you hours of design work and ensuring your social previews are always up to date with your content. Open Graph images (or social cards) are the preview images that appear when your content is shared on social media platforms. Including these images with your social shares is proven to increase click-through rates. Depending on which platform you're sharing to, you may want to create a range of different aspect ratios. For this tutorial, you'll create the most common size: `1200x630` pixels. As always, you'll set this up so that if you upload a bespoke image to the `seo.image` field, it will override the automatically generated one.
## Learning objectives

By the end of this lesson, you'll be able to:

* Generate dynamic Open Graph images using the Next.js Edge Runtime
* Extract and use dominant colors from featured images
* Create professional, branded social previews

### Setting up the edge route

Let's create a new API route using the Next.js Edge Runtime. This route will:

* Accept a parameter to dynamically fetch data
* Return an image response using Next.js `ImageResponse`

Before you proceed any further, make sure you have read the [limitations](https://vercel.com/docs/functions/og-image-generation#limitations) of Open Graph image generation on Vercel.

1. **Create** a new route in your Next.js application

```tsx:src/app/api/og/route.tsx
import { ImageResponse } from "next/og";

export const runtime = "edge";

const dimensions = {
  width: 1200,
  height: 630,
};

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const title = searchParams.get("title");

  return new ImageResponse(
    (
      <div tw="flex w-full h-full bg-blue-500 text-white p-10">
        <h1 tw="text-6xl font-bold">{title || "Missing title parameter"}</h1>
      </div>
    ),
    dimensions
  );
}
```

Visit [http://localhost:3000/api/og?title=hello](http://localhost:3000/api/og?title=hello) and you should see an image rendered of a blue rectangle with the word "hello".

![A blue box with the word "hello"](https://cdn.sanity.io/images/3do82whm/next/0eb9c72c8ed88e118d399c9b6417d5e58873822f-2240x1480.png)

This creates a route that generates an image rendering whatever was passed into the `title` parameter. We're using Tailwind CSS utility classes in a `tw` prop for styling; if you use `className` you will get an error. This is all part of how the `ImageResponse` function works. This works, but isn't much to look at. And at present any user could enter _any_ value for the `title` parameter and have it render a custom image. Not safe! If we're going to render content, it's better to do so from a single source of truth: your Content Lake.
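Fetching by document ID (as the next section does) is the main fix, but you can also harden the route by rejecting malformed `id` parameters before they reach a query. A hypothetical guard, shown as a sketch (the function name and the exact ID pattern are assumptions, not part of the course code):

```typescript
// Hypothetical guard: Sanity document IDs are typically made of letters,
// numbers, dots, dashes, and underscores (e.g. "drafts.<uuid>"), starting
// with an alphanumeric character. Reject anything else up front.
export function isLikelyDocumentId(id: string): boolean {
  return /^[a-zA-Z0-9][a-zA-Z0-9._-]*$/.test(id)
}
```

In the route handler, you could call this before `client.fetch` and respond with `notFound()` when it returns `false`, so arbitrary strings never reach your query.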
If we're going to render content, it's better to do so from a single source of truth: your Content Lake.

### Creating the Sanity query

We'll need to fetch specific data for our OG images:

* Page title
* Featured image URL
* Color palette information

[There's lots of neat metadata you can pull from Sanity images](https://www.sanity.io/docs/image-metadata), such as the dominant colors within an image. We'll use these as part of the design.

1. **Update** queries with a GROQ query for the data needed to generate an image

```typescript:src/sanity/lib/queries.ts
// ...all other queries

export const OG_IMAGE_QUERY = defineQuery(`
  *[_id == $id][0]{
    title,
    "image": mainImage{
      ...,
      asset->{
        url,
        metadata {
          palette
        }
      }
    }
  }
`);
```

1. **Update** the route that generates the OG image to fetch data based on the search parameter `id`

```tsx:src/app/api/og/route.tsx
import { client } from "@/sanity/lib/client";
import { urlFor } from "@/sanity/lib/image";
import { OG_IMAGE_QUERY } from "@/sanity/lib/queries";
import { notFound } from "next/navigation";
import { ImageResponse } from "next/og";

export const runtime = "edge";

async function loadGoogleFont(font: string, text: string) {
  const url = `https://fonts.googleapis.com/css2?family=${font}&text=${encodeURIComponent(text)}`;
  const css = await (await fetch(url)).text();
  const resource = css.match(
    /src: url\((.+)\) format\('(opentype|truetype)'\)/
  );

  if (resource) {
    const response = await fetch(resource[1]);
    if (response.status === 200) {
      return await response.arrayBuffer();
    }
  }

  throw new Error("failed to load font data");
}

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const id = searchParams.get("id");

  if (!id) {
    notFound();
  }

  const data = await client.fetch(OG_IMAGE_QUERY, { id });

  if (!data) {
    notFound();
  }

  const vibrantBackground =
    data?.image?.asset?.metadata?.palette?.vibrant?.background ?? "#3B82F6";
  const darkVibrantBackground =
    data?.image?.asset?.metadata?.palette?.darkVibrant?.background ?? "#3B82F6";
  const text = data.title || "";

  return new ImageResponse(
    (
      <div
        tw="flex w-full h-full relative"
        style={{
          background: `linear-gradient(135deg, ${vibrantBackground} 0%, ${darkVibrantBackground} 100%)`,
        }}
      >
        {/* Content container */}
        <div tw="flex flex-row w-full h-full relative">
          {/* Text content */}
          <div tw="flex-1 flex items-center px-10">
            <h1 tw="text-7xl tracking-tight text-white leading-tight">
              {text}
            </h1>
          </div>

          {/* Image container */}
          {data.image && (
            <div tw="flex w-[500px] h-[630px] overflow-hidden">
              {/* eslint-disable-next-line @next/next/no-img-element */}
              <img
                src={urlFor(data.image).width(500).height(630).url()}
                alt=""
                tw="w-full h-full object-cover"
              />
            </div>
          )}
        </div>
      </div>
    ),
    {
      width: 1200,
      height: 630,
      fonts: [
        {
          name: "Inter",
          data: await loadGoogleFont("Inter", text),
          weight: 400,
          style: "normal",
        },
      ],
    }
  );
}
```

This query fetches the page title, image, and its color palette information—all based on the value of an ID passed to the route. It uses this data to create a dynamic background color based on the image. The route now also uses the font Inter, [fetched from Google Fonts](https://fonts.google.com/specimen/Inter).

You can test this route by visiting `/api/og?id=your-document-id` in your browser, replacing `your-document-id` with an actual Sanity document ID.

![A red card with a cake and the words "vegan cake recipes that taste great"](https://cdn.sanity.io/images/3do82whm/next/15dc7bd210ae9c813edc9c3331d57f6b4e431f4f-2240x1480.png)

The image template includes:

* A dynamic background color based on the featured image
* The page title
* The featured image, respecting its crop and hotspot settings

What we have now is a basic—but working—prototype for the future. You could extend this design or even explore creating different layouts depending on the value of the document's `_type`.
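The fallback logic for the gradient colors can also be isolated into a small, pure helper, which makes it easier to test and to extend with other palette swatches later. This is a sketch under assumptions: `pickGradient` is a hypothetical name, the types cover only the fields the query selects, and `#3B82F6` mirrors the route's default blue:

```typescript
// Partial shape of the palette metadata selected by the OG image query.
type Swatch = { background?: string | null };
type Palette = {
  vibrant?: Swatch | null;
  darkVibrant?: Swatch | null;
};

// Same default blue as the route's fallback
const FALLBACK = "#3B82F6";

// Hypothetical helper mirroring the route's `??` fallback expressions.
export function pickGradient(palette?: Palette | null) {
  const from = palette?.vibrant?.background ?? FALLBACK;
  const to = palette?.darkVibrant?.background ?? FALLBACK;
  return {
    from,
    to,
    css: `linear-gradient(135deg, ${from} 0%, ${to} 100%)`,
  };
}
```

In the route this would replace the two `??` expressions with a single call such as `pickGradient(data?.image?.asset?.metadata?.palette)`.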
### Implementing metadata

Now that you have your Open Graph image generation set up, it will need to be added to each route's metadata so that it renders when that URL is shared.

1. **Update** the `generateMetadata` function in your `page` and `post` routes to use the dynamically generated Open Graph image, if an image is not specified in the document

```typescript:src/app/(frontend)/[slug]/page.tsx
// ...all your imports

export async function generateMetadata({
  params,
}: RouteProps): Promise<Metadata> {
  const { data: page } = await getPage(params);

  if (!page) {
    return {};
  }

  const metadata: Metadata = {
    metadataBase: new URL('https://acme.com'),
    title: page.seo.title,
    description: page.seo.description,
  };

  metadata.openGraph = {
    images: {
      url: page.seo.image
        ? urlFor(page.seo.image).width(1200).height(630).url()
        : `/api/og?id=${page._id}`,
      width: 1200,
      height: 630,
    },
  };

  if (page.seo.noIndex) {
    metadata.robots = "noindex";
  }

  return metadata;
}
```

Be sure to copy this logic over to your individual post route as well.

This setup generates metadata dynamically for each page, uses the page's Sanity ID to generate the correct Open Graph image, and maintains consistent dimensions across platforms.

## Testing your implementation

There are a few ways to test your implementation. If you have a service like [ngrok](https://ngrok.com/) set up locally, you can pipe your local development environment to an external URL, and then run that URL through an Open Graph previewing service.

1. [opengraph.ing](https://opengraph.ing/) is a simple service for validating your social previews in multiple applications and services

![social sharing previews of a blog post](https://cdn.sanity.io/images/3do82whm/next/9709794e46c9ded8725e6191add390891f47bfd1-2240x1480.png)

Once you're ready, you can check the implementation from your preview environment. After deploying, you can use the Vercel toolbar to preview your site and see the Open Graph image.
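You can also script a quick spot check by fetching a page's HTML and pulling out its `og:image` meta tag. The extractor below is a naive regex sketch (an assumed helper, not part of the course code); it expects the `property` attribute to appear before `content`, and real crawlers use proper HTML parsers instead:

```typescript
// Naive og:image extractor — fine for a smoke test, not for production parsing.
// Assumes the property attribute comes before the content attribute.
export function extractOgImage(html: string): string | null {
  const match = html.match(
    /<meta[^>]+property=["']og:image["'][^>]+content=["']([^"']+)["']/i
  );
  return match ? match[1] : null;
}
```

Running this over `await (await fetch("http://localhost:3000/some-page")).text()` would confirm whether the fallback `/api/og?id=…` URL or the document's own SEO image ended up in the markup.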
![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/57ee3d7022f136136194c209d64ae80695dc54c0-1284x1151.webp)

Other alternatives include using platform-specific debugging tools:

* [Facebook Sharing Debugger](https://developers.facebook.com/tools/debug/)
* [Twitter Card Validator](https://cards-dev.twitter.com/validator)
* [LinkedIn Post Inspector](https://www.linkedin.com/post-inspector/)

These tools let you test your Open Graph image on the platform you are sharing to and see the preview image. Some of them also force the platform's cache to be invalidated, which means you can see the latest version of your Open Graph image across that social platform.

The next lesson will cover remixing content for social platforms using Sanity AI Assist.

## [Generate social posts from your content](/learn/course/seo-optimization/ai-generate-social-posts-from-your-content)

Speed up ideation of social media posts. And as a result, boost your SEO from sharing your content to a wider audience across different social platforms.

1. This lesson uses features only available in paid plans. If you started a new project for this course, you can test these features during the free trial period. You can also start a new free project at any time.

Summarizing existing content and making it more useful in different forms is one of the best features of AI tooling. Sanity AI Assist makes it possible for authors to automatically generate new content, using existing fields along with prompts that they can save and share.

In this lesson you'll use Sanity AI Assist to generate text to post on social networks while sharing a link to the content.

1. Read more about [Sanity AI Assist](https://www.sanity.io/docs/install-and-configure-sanity-ai-assist) in the documentation

## Create new schema types

First you'll need new fields to write content to. Just like you made a new custom object schema type for SEO fields, create another for social networks.
In the example below we've chosen only LinkedIn and X (formerly known as Twitter) for now, feel free to include any of the countless others. 1. **Create** a new `social` object schema type ```typescript:src/sanity/schemaTypes/socialType.ts import { defineField, defineType } from "sanity"; export const socialType = defineType({ name: "social", title: "Social", type: "object", fields: [ defineField({ name: "linkedIn", title: "LinkedIn", type: "text", rows: 3, }), defineField({ name: "x", description: "Formerly known as Twitter", type: "text", rows: 2, }), ], }); ``` Don't forget to register this type to your Sanity Studio schema types ```typescript:src/sanity/schemaTypes/index.ts // ...all other imports import { socialType } from "./socialType"; export const schema: { types: SchemaTypeDefinition[] } = { types: [ // ...all other types socialType, ], }; ``` 1. **Update** your `page` and `post` schema types to include the `social` field. ```typescript:src/sanity/schemaTypes/pageType.ts export const pageType = defineType({ // ...all other settings fields: [ // ...all other fields defineField({ name: "social", type: "social", }), ], }); ``` ## Install Sanity AI Assist To automatically generate content for these fields, you'll now install and configure the Sanity AI assistant. 1. **Run** the following in your terminal ```sh:Terminal npm install @sanity/assist ``` 1. **Update** your Sanity Studio config file to include the `assist` plugin ```typescript:sanity.config.ts // ...all other imports import { assist } from "@sanity/assist"; export default defineConfig({ // ...all other config plugins: [ // ...all other plugins assist(), ], }); ``` With this installed you can create a prompt to help Sanity AI Assist generate content from your existing fields. Sanity AI Assist works at both field level and document level, however, for this example, you will be using the document level. Look at the top right of the Studio with any document open and you should see a sparkly new icon. 
![Sanity Studio document editor with the AI Assist pane open](https://cdn.sanity.io/images/3do82whm/next/dd14f510b2c470a2f0b8889136805128c7efe1bc-2784x1628.png)

The first time you click this button you may be asked to Enable AI Assist.

1. Click **Enable AI Assist**

You can see there are currently no instructions.

1. Click **Add item** to create your first instruction.

What we want AI Assist to do is to summarize the main body of the document. You're going to create a new prompt; use the example below for guidance. Where you see the boxes like `[Title]`, replace these with references to the fields in your document.

Make sure **Allowed fields** is set to only write to the **Social** field object.

1. **Create** your new Instruction and run it

```text
Take the content from [title] and [body] to generate text that will encourage people to click the link and find out more when this content is shared on social networks.
```

![Sanity Studio showing an AI Assist prompt being run](https://cdn.sanity.io/images/3do82whm/next/494aca1c1959bac7585f64998c585465cfd3ac5e-2784x1628.png)

1. Writing prompts for AI is a bit of an art form! Take a look at the [instructions cheat sheet](https://www.sanity.io/docs/ai-assist-cheat-sheet) in the documentation for inspiration.

### Posting to social networks

In this lesson you're only using Sanity to **generate** text for posting to social networks. So for now your authors would need to copy and paste the text from Sanity. There are a variety of third-party tools available to automate this process.

Please use the feedback form below to let us know if you have preferred ways to automate posting to social networks.

### Adapt to your tone of voice

By default, AI-generated copy can be generic. Consider adding some **AI context** documents (now visible in your Studio structure) to inform your preferred writing style and tone of voice. You can then add this context to your instruction, so that copy generated in future will be consistently informed.
In the following lesson, you'll create a dynamic sitemap that automatically updates when content changes, helping search engines discover and index your content more effectively.

## [Build a dynamic sitemap](/learn/course/seo-optimization/building-a-dynamic-sitemap)

A sitemap helps search engines understand and index your website more effectively.

Generate a dynamic sitemap to guide search crawlers through your content, showing them what pages exist and how often they change. A well-structured sitemap gives search engines clear guidance about your content hierarchy and update frequency.

## Why this approach?

Search engines like Google use sitemaps as a primary method to discover and understand your content. While they can crawl your site naturally through links, a sitemap:

1. Ensures all your content is discoverable, even pages that might be deep in your site structure
2. Helps search engines understand when content was last updated
3. Allows you to indicate content priority
4. Speeds up the indexing process for new content

This is especially important for dynamic content managed through Sanity, where pages might be added or updated frequently.

## Learning objectives

By the end of this lesson, you will:

* Create a dynamic sitemap from Sanity content
* Implement graceful validation error handling

### Understanding sitemaps

Before diving into the code, let's understand what makes a good sitemap from a technical perspective:

* **XML Format**: Search engines expect a specific XML format
* **Last Modified Dates**: Helps search engines know when content was updated
* **Change Frequency**: Indicates how often content changes
* **Priority**: Suggests the importance of pages

### Building the sitemap

Let's start with a GROQ query to fetch all `page` and `post` type documents.

1.
**Update** `queries.ts` to include `SITEMAP_QUERY`

```typescript:src/sanity/lib/queries.ts
// ...all other queries

export const SITEMAP_QUERY = defineQuery(`
  *[_type in ["page", "post"] && defined(slug.current)] {
    "href": select(
      _type == "page" => "/" + slug.current,
      _type == "post" => "/posts/" + slug.current,
      slug.current
    ),
    _updatedAt
  }
`)
```

This query:

* Gets all documents of type `page` and `post`
* Dynamically creates a complete path depending on the value of `_type`
* Returns that path as `href`, and the last updated date of the document

You've created a new query, so you'll need to create new types.

```sh:Terminal
pnpm run typegen
```

The Next.js app router has a special, reserved route for generating an XML sitemap response from an array of objects in JavaScript.

1. See the Next.js documentation for more details on the [sitemap route](https://nextjs.org/docs/app/api-reference/file-conventions/metadata/sitemap)

The route below fetches content from Sanity using the query above, and generates the shape of content response that Next.js requires.

1. **Create** a new route to generate the sitemap

```typescript:src/app/sitemap.ts
import { MetadataRoute } from "next";
import { client } from "@/sanity/lib/client";
import { SITEMAP_QUERY } from "@/sanity/lib/queries";

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  try {
    const paths = await client.fetch(SITEMAP_QUERY);
    if (!paths) return [];

    const baseUrl = process.env.VERCEL
      ? `https://${process.env.VERCEL_URL}`
      : "http://localhost:3000";

    return paths.map((path) => ({
      url: new URL(path.href!, baseUrl).toString(),
      lastModified: new Date(path._updatedAt),
      changeFrequency: "weekly",
      priority: 1,
    }));
  } catch (error) {
    console.error("Failed to generate sitemap:", error);
    return [];
  }
}
```

### Testing your sitemap

You can test your sitemap locally by visiting [http://localhost:3000/sitemap.xml](http://localhost:3000/sitemap.xml); once deployed, the same route is available on your production domain.
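On top of eyeballing the XML in the browser, you can script a small smoke test that counts the `<loc>` entries in the fetched sitemap. This is a hedged sketch: `countSitemapUrls` is an assumed helper name, the regex is a naive counter rather than an XML parser, and the fetch assumes your dev server runs on port 3000:

```typescript
// Naive <loc> counter for a sitemap string — a smoke test, not a validator.
export function countSitemapUrls(xml: string): number {
  return (xml.match(/<loc>[^<]+<\/loc>/g) ?? []).length;
}

// Example usage against a local dev server (assumed port 3000):
export async function checkLocalSitemap() {
  const res = await fetch("http://localhost:3000/sitemap.xml");
  const xml = await res.text();
  console.log(`sitemap contains ${countSitemapUrls(xml)} URLs`);
}
```

If the count is zero, the query or the route's error handling is the first place to look.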
You should see something like this:

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://localhost:3000/welcome-to-layer-caker</loc>
    <lastmod>2025-01-10T14:13:34.000Z</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1</priority>
  </url>
  <!-- // all other URLs... -->
</urlset>
```

Even if your sitemap looks correct, checking it with a sitemap validator tool is recommended, especially as your website grows, since it's very easy to miss validation errors. A solid option is [XML Sitemaps](https://www.xml-sitemaps.com/validate-xml-sitemap.html) for a free and quick check.

### Best practices

To ensure your sitemap is doing what it's meant to, keep these points in mind:

* **Regular Updates**: Your sitemap should update when content changes
* **Size Limits**: Keep under 50,000 URLs **per sitemap file**
* **Valid URLs**: Ensure all URLs are properly formatted

At this stage, your sitemap will now automatically update whenever you publish new content in Sanity, helping search engines discover and index your content. As you continue to enhance your sitemap implementation and expand out through other document types, you may want to consider adding different priorities for different page types to help search engines understand the relative importance of your content.

Next, you'll explore structured data and JSON-LD, a clever way of reusing your documents for set-and-forget SEO benefits.

## [Generating JSON-LD dynamically](/learn/course/seo-optimization/generating-json-ld-dynamically)

JSON-LD is a powerful way to provide structured data to search engines—fortunately structured data is what Sanity does best.

JSON-LD data follows structured conventions for many different types of content. All you'll need to do is take content already authored in your documents, and render it into the DOM in the expected format. We already have FAQs as a document type, so it makes sense to start there.
When it comes to FAQs, proper JSON-LD implementation can help your content appear in rich snippets and potentially even surface you near the top of search results just by providing useful information. ## Learning objectives By the end of this lesson, you will: * Generate JSON-LD for FAQs programmatically * Implement type-safe JSON-LD using Google's `schema-dts` package * Improve your FAQ block from the page builder course ### Understanding JSON-LD generation JSON-LD generation can be challenging to get right as it follows a strict structure. Fortunately, [Google provides a TypeScript package](https://github.com/google/schema-dts) called `schema-dts` that gives you type safety for your structured content. Let's start by creating a function that transforms your FAQ data into a JSON-LD friendly structure. Back in [Create page builder schema types](https://www.sanity.io/learn/course/page-building/create-page-builder-schema-types) you created a document type schema for FAQs. 1. **Run** the following to install the `schema-dts` package ```sh:Terminal pnpm add schema-dts ``` Currently in the GROQ query for pages the FAQ block is returning the full document for every reference. Let's update this to only extract specific fields from the document, as well as the Portable Text in the `body` field as a string using the GROQ function `pt::text()` 1. **Update** your GROQ query for pages, to return the answer in plain text ```groq:src/sanity/lib/queries.ts // replace this faqs[]-> // with this faqs[]->{ _id, title, body, "text": pt::text(body) } ``` You've updated your queries, so update your types ```sh:Terminal pnpm run typegen ``` ### Implementing FAQ JSON-LD in components The JSON-LD markup can be rendered anywhere in the page—it doesn't need to be stored inside the `<head>`. So you can add it directly into components where you already have access to the correct data. 
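A small serializer can make that pattern reusable across components. This is a sketch under assumptions: `toJsonLdString` is a hypothetical helper name, and the local `JsonLdObject` alias stands in for `schema-dts`'s `WithContext<Thing>` type so the example stays self-contained. Escaping `<` guards against a `</script>` sequence inside content closing the tag early:

```typescript
// Local stand-in for schema-dts's WithContext<Thing>, to keep this sketch
// self-contained. In real code you'd use the schema-dts types instead.
type JsonLdObject = { "@context": string; "@type": string } & Record<string, unknown>;

// Serialize structured data for a <script type="application/ld+json"> tag.
// Replacing "<" with its unicode escape keeps the output valid JSON while
// preventing "</script>" in content from breaking out of the tag.
export function toJsonLdString(data: JsonLdObject): string {
  return JSON.stringify(data).replace(/</g, "\\u003c");
}
```

In a component this would be rendered as `<script type="application/ld+json" dangerouslySetInnerHTML={{ __html: toJsonLdString(faqData) }} />`.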
In this instance, you're rendering the FAQ content into an accordion in this block, so you can also have it process that same content into the JSON-LD format and add it to the component output. 1. **Update** your `FAQs` block to render JSON-LD content in a script tag ```tsx:src/components/blocks/faqs.tsx // ...all your imports and types import { FAQPage, WithContext } from "schema-dts"; const generateFaqData = (faqs: FAQsProps["faqs"]): WithContext<FAQPage> => ({ "@context": "https://schema.org", "@type": "FAQPage", mainEntity: faqs?.map((faq) => ({ "@type": "Question", name: faq.title!, acceptedAnswer: { "@type": "Answer", text: faq.text!, }, })), }); export function FAQs({ _key, title, faqs }: FAQsProps) { const faqData = generateFaqData(faqs); return ( <section className="container mx-auto flex flex-col gap-8 py-16"> <script type="application/ld+json" dangerouslySetInnerHTML={{ __html: JSON.stringify(faqData) }} /> {/* ...the rest of the component */} </section> ); } ``` Notice the most important part of this block, the `<script>` tag. This is where you're adding the JSON-LD to the page. This should get you thinking about how you can reuse this pattern across your site. The FAQ schema covered here is just the tip of the JSON-LD iceberg. The beauty of structured data is that once implemented, it works silently in the background to enhance your search presence. Some other examples you may want to implement: * [Product information](https://schema.org/Product) * [Event information](https://schema.org/Event) * [Blog post information](https://schema.org/BlogPosting) ## Conclusion The more you lean into structured data, the more benefits you'll experience. If there's one thing to take away from this whole course, it's that SEO doesn't need to be a complex process. It's about taking your existing data and repurposing it to make it more digestible for your audience and, by extension, for search crawlers. 
The future of the web won't be about configuring hundreds of different fields to get the perfect SEO score. Instead, it will focus on finding ways to accelerate content creation and automating the tasks you don't want to do.

It's time to revise everything you've learned in the final lesson!

## [Sanity SEO quiz](/learn/course/seo-optimization/sanity-seo-quiz)

Let's test what you've learned in the prior lessons!

**Question:** What is the benefit of using dynamic metadata in Next.js?

1. It reduces the website's loading time
2. It improves the server's performance
3. It automatically creates backlinks
4. It allows for page-specific SEO optimization based on content

**Question:** When would you use a Sanity AI Assist context?

1. When you need to provide more information about your business within the prompt
2. When you need to add your specific writing style
3. When you need to reference important information that is regularly reused
4. All of the above

**Question:** What is the purpose of the coalesce query in GROQ?

1. To override the value if another value has been provided
2. To combine the two values together
3. To type check the value inside of a Sanity field
4. To assess which is the bigger of two fields

**Question:** Why might we specify an image, title and description, as well as an SEO title, SEO description, SEO image?

1. To create more work for the content editors
2. Because you cannot reuse fields in multiple locations
3. To be more granular when entering SEO information
4. To provide different content for different social media platforms

**Question:** What is the recommended way to structure SEO fields in your Sanity schema?

1. Create unique schema types for each document type's SEO fields
2. Add SEO fields directly to the document type without any structure
3. Create a reusable SEO object type that can be referenced across different document types
4.
Store all SEO fields in a single global configuration document

**Question:** Why is there such a significant amount of validation within Next.js redirects?

1. To make the development process more complex
2. To slow down the build process
3. Because Next.js is overly cautious
4. Because providing incorrect redirect data can break your deployment pipeline

**Question:** What is the key advantage of implementing a dynamic sitemap in a Sanity + Next.js project?

1. It makes the website load faster
2. It automatically updates when content changes
3. It improves the website's visual design
4. It automatically generates meta descriptions

**Question:** When implementing schema markup with Next.js and Sanity, what is the recommended data format?

1. XML
2. JSON-LD
3. JavaScript
4. HTML microdata

**Question:** What is considered best practice when implementing SEO features in a Sanity + Next.js project?

1. Hardcoding all SEO values in the Next.js files
2. Creating reusable schemas and components that can be managed by content creators
3. Managing everything through external SEO tools
4. Letting search engines handle everything automatically

**Question:** What is the main purpose of implementing on-page schema in your website?

1. To make your website look better on social media
2. To increase your chance of search engines displaying rich results
3. To improve website loading speed
4. To create better URLs

# [Build content apps with Sanity App SDK](/learn/course/build-content-apps-with-sanity-app-sdk)

Building fast, real-time content authoring applications has never been simpler. Create a feedback processing application with user assignment, AI analysis and more.

## [Building content apps](/learn/course/build-content-apps-with-sanity-app-sdk/building-content-apps)

A true content operating system provides more than one way to author content. Build powerful, fit-for-purpose applications faster than ever before.
The Sanity App SDK is a collection of utilities for building content applications backed by the Content Lake. It is headless by design, so you can use whatever front-end framework you like. In the React package, almost all of its functionality is provided by React Hooks. It is the fastest way to build task-specific applications for authors and editorial teams to perform content operations.

## Why build content apps?

Sanity Studio is a powerful and nearly infinitely customizable admin panel for creating and editing content. However, that same flexibility can make it complex enough that authors with a specific, repetitive task in mind struggle to perform it efficiently.

So, while Sanity Studio may be the default experience for all of your content operations, the Sanity App SDK provides a way to build novel applications with a specific job in mind.

> When all you have is a hammer, everything looks like a nail

Sanity Studio is a hammer, but not all content operations are nails. Sanity App SDK is a scalpel. And this is a tortured metaphor.

For example, imagine a content author at a media publication who needs to process feedback. They could click through a Studio to find what they need, or you could build an application to do what they need faster and better. And that's what you'll build in this course.

## Prerequisites

This course expects that you have a reasonable understanding of the Command Line, PNPM, React, TypeScript, and Sanity. You will not need to be an expert in any of these, but this course is written with the expectation that this is not your first time encountering these words.

If you would prefer to get a top-to-bottom look at how to work with _Sanity, the platform_, you may be better served by the Day One with Sanity course.

1.
Take [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio)

## What you'll build

![An application with a list of feedback items on the left and a document editor on the right.](https://cdn.sanity.io/images/3do82whm/next/40e32fb51ae5fa89b6ed1ad41a822947bd83e523-2240x1480.png)

You'll be building a single-purpose application for processing feedback. You might imagine this feedback is received from a comments form or email account.

Your content authors need a more efficient way to see which feedback:

* is pending approval to proceed in the workflow
* should be marked as spam and dismissed
* should be deleted
* and to mark the sentiment of any feedback as positive, neutral or negative.

Let's begin in the next lesson by first creating a new Sanity project and Studio.

## [Create a new Project and Studio](/learn/course/build-content-apps-with-sanity-app-sdk/create-a-new-project-and-studio)

Get set up with a fresh hosted backend for your content, and the traditional administration panel for Sanity.

Content written to Sanity is stored within the Content Lake, not in Sanity Studio. The schema types you configure are made available in Sanity Studio, and it writes content of that shape to the Content Lake. You can think of the Sanity Studio as a _"window into the Content Lake."_

## So I don't need a Studio to build an SDK App?

Nope! You can write data of any shape (that conforms to JSON) to the Content Lake. But having a Studio configured makes it easier to reason about our _complete_ universe of content. It's also where you'd configure TypeScript types.

So you'll start this course by creating a new Sanity project and Sanity Studio—you just won't need to edit any content there.

## Create a new project

You can create a new free project and initialize a new Sanity Studio from one command using the Sanity CLI. If you do not yet have a Sanity account, you can create one during this process.
You may like to create a parent folder to contain the projects you work on in this course, as your Sanity Studio and App SDK app will live side-by-side. Once you have a Studio and App set up, your folder structure should look like this:

```text
feedback-course
├── studio
└── app-feedback
```

1. **Run** the following command in your terminal to create a new Sanity project and Sanity Studio.

```sh:Terminal
pnpm dlx create-sanity@latest --template blog --create-project "Feedback Processor" --dataset production --typescript --output-path studio
```

1. **Run** the following from inside of the `/studio` directory

```sh:Terminal
pnpm run dev
```

Open [http://localhost:3333](http://localhost:3333) in your browser and log in. You should now see the Sanity Studio dashboard interface with Post, Category and Author schema types already configured.

## Add "Feedback" schema types

The schema types that appear in the Structure tool of Sanity Studio are defined in the project's configuration files. You'll need to create a new file to add "Feedback" type documents as an option in the Studio and a TypeScript type for your App.

1.
**Create** a new file for Feedback type documents

```typescript:studio/schemaTypes/feedbackType.ts
import {defineField, defineType} from 'sanity'

export const feedbackType = defineType({
  name: 'feedback',
  title: 'Feedback',
  type: 'document',
  fields: [
    defineField({
      name: 'content',
      type: 'text',
    }),
    defineField({
      name: 'author',
      type: 'string',
    }),
    defineField({
      name: 'email',
      type: 'string',
    }),
    defineField({
      name: 'sentiment',
      type: 'string',
      options: {list: ['positive', 'neutral', 'negative'], layout: 'radio'},
    }),
    defineField({
      name: 'status',
      type: 'string',
      options: {list: ['pending', 'approved', 'spam'], layout: 'radio'},
    }),
    defineField({
      name: 'assignee',
      type: 'string',
    }),
    defineField({
      name: 'notes',
      type: 'text',
    }),
  ],
  preview: {
    select: {
      title: 'content',
      subtitle: 'author',
    },
  },
})
```

**Update** the schema types index file to include the feedback schema type.

```typescript:studio/schemaTypes/index.ts
import blockContent from './blockContent'
import category from './category'
import post from './post'
import author from './author'
import {feedbackType} from './feedbackType'

export const schemaTypes = [post, author, category, blockContent, feedbackType]
```

You should now see an option in your Structure tool to list and create Feedback type documents.

## Import seed data

It is easier to build front ends when you have content, and so some has been prepared for you already.

1. **Download** `feedback-seed.ndjson`
2. **Run** the following from the terminal, inside the `studio` folder to import 20 example Feedback type documents.

```sh:Terminal
pnpm dlx sanity dataset import feedback-seed.ndjson production
```

Open your Studio to confirm these documents are now visible. There's a mix of spam comments, and none of them have been marked to indicate their "sentiment": that's the job of our new app!
Let's start building it next.

## [Quickstart a new App SDK app](/learn/course/build-content-apps-with-sanity-app-sdk/quickstart-a-new-app-sdk-app)

Start a new App SDK app in seconds from the command line using the Sanity UI template.

You've now got content to work with and a Sanity Studio, so it's time to start building your app.

1. **Run** the following from the terminal in the root directory (above the `studio` directory)

```sh:Terminal
# in the parent directory
pnpm dlx sanity@latest init --template app-sanity-ui --typescript --output-path app-feedback
```

You may be prompted to select an Organization. Select your personal organization, as that's where the project from the previous lesson was created.

You should now have these two adjacent folders.

```text
feedback-course
├── app-feedback
└── studio
```

## Why Sanity UI?

The command you just ran uses the Sanity UI app template. This includes the required context and packages for the same front-end library used in other Sanity applications like Sanity Studio, Media Library and Dashboard.

You're free to use the front-end library of your choice in Sanity App SDK applications. For this course, however, you'll use Sanity UI so that the application you build shares visual harmony with the rest of the dashboard experience.

1. [Sanity UI](https://www.sanity.io/ui) has its own documentation site with implementation details
2. See the App SDK docs for examples on how to install other styling libraries

## Running two apps

By default, SDK Apps use the same port number (`3333`) as the Studio. To run the Studio and your applications simultaneously, you can update `sanity.cli.ts` of either one. Let's change the default port of the Studio.

1.
**Update** the Sanity CLI config of the Sanity Studio

```typescript:studio/sanity.cli.ts
import {defineCliConfig} from 'sanity/cli'

export default defineCliConfig({
  server: {
    port: 3334,
  },
  // ...all other settings
})
```

Restart your Studio's development server and you'll get a new development URL. Open the Studio in your browser and you'll be asked to create a new CORS origin. You can follow the instructions in the browser, or create a new origin using Sanity CLI with the following command run from inside your `studio` folder.

```sh:Terminal
# in /studio
pnpm dlx sanity@latest cors add http://localhost:3334 --allow
```

1. **Run** the following inside the `app-feedback` folder to start the app's development server.

```sh:Terminal
# in /app-feedback
pnpm run dev
```

1. If you get an error about a mismatched Organization ID, you may have selected a different Organization from the one in which the project was created. Update `app-feedback/sanity.cli.ts` to use the correct Organization ID.

You'll see a URL in the terminal to open the App running from within the Sanity Dashboard. Dashboard is the default "home screen" where authors can move between deployed Studios and other applications—such as the one you're building right now. The Dashboard also provides authentication to your app.

In a large enough organization, you may have many teams of authors working between or across multiple projects all served by deployed Sanity Studio instances or Apps of all shapes and sizes. Content operations are not the job of a one-size-fits-nobody CMS!

## Targeting your project(s)

While this app will only target one project, an App SDK app can target multiple (worth knowing: Sanity Studio can't do this). The entry point for your application is `App.tsx`, you can see this defined in `sanity.cli.ts`.

1. **Update** the main `App.tsx` file with the details found in your `studio/sanity.config.ts` file.
```typescript:app-feedback/src/App.tsx
const sanityConfigs: SanityConfig[] = [
  {
    projectId: 'REPLACE-WITH-YOUR-PROJECT-ID',
    dataset: 'production',
  },
]
```

There is an `ExampleComponent` already loaded as the main child of the application. Let's replace this with our first use of the App SDK's React hooks to query for a list of documents.

## [useDocuments](/learn/course/build-content-apps-with-sanity-app-sdk/use-documents)

Performant querying for a live-updating list of documents has never been simpler. Maybe the most basic thing we can do when creating an application is query and render a list of documents. In this lesson, you'll do it using the `useDocuments` hook. Before we do, let's interrogate this decision.

## Why not use client.fetch?

[Sanity Client](https://github.com/sanity-io/client) is the primary way JavaScript applications are built to interact with Sanity's APIs. The App SDK is a wrapper around Sanity Client for building apps, and it solves much of the UI complexity that comes with working with a lower-level client. For example, when you query documents with `client.fetch`, the list of documents will not update in real time as documents change. You might also unknowingly fetch 10,000 documents. Edits made to these documents will not automatically create optimistic updates in the UI. These complexity and performance concerns are all handled by the App SDK's React hooks, making it simpler and faster for us to build better content applications.

## Why not useQuery?

The App SDK provides the `useQuery` hook, which takes a GROQ query. We could use this hook for all our data fetching. However, many other hooks in the App SDK for React require "document handles" to be passed in as parameters. These can be created ad hoc from values in a document. Still, it's simpler to retrieve document handles in a parent component and pass them down to child components, which perform fetches or actions using those handles.
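The handle-passing pattern can be sketched in plain TypeScript. The types and the `toHandle` helper below are illustrative stand-ins (the real `DocumentHandle` type is exported by `@sanity/sdk-react`), but they show the idea: a parent component keeps only lightweight handles, and children use a handle to fetch exactly what they need.

```typescript
// Illustrative stand-in for the SDK's DocumentHandle type
type DocumentHandle = {
  documentId: string
  documentType: string
  projectId?: string
  dataset?: string
}

// A full document, as a larger query might return it
type FeedbackDoc = {
  _id: string
  _type: string
  author?: string
  content?: string
}

// Hypothetical helper: reduce a full document to the minimal handle
// that child components need to fetch, edit, or act on it
function toHandle(doc: FeedbackDoc): DocumentHandle {
  return { documentId: doc._id, documentType: doc._type }
}

const docs: FeedbackDoc[] = [
  { _id: '116d2c7a', _type: 'feedback', author: 'Ada', content: 'Great update!' },
]

// The parent keeps handles in state and passes them down;
// no document fields travel with them
const handles = docs.map(toHandle)
```

In practice, `useDocuments` already returns handles for you, so you rarely build them by hand; the sketch only illustrates why handles stay small.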
### When you might want useQuery

There are some patterns where it makes more sense to use `useQuery` instead of `useDocuments`, such as when a parent component needs to know specific values of each document. For example, a component may need to know which day an event is on to render it into a calendar, or the geolocation of an event to plot it on a map. In these instances, you may be better off fetching documents and their values with `useQuery` in the parent component. Just be aware that this can lead to overfetching.

## What are document handles?

A document handle contains at least two and up to four useful pieces of information about a document, identifying its type and origin. By keeping the data returned by document fetches small, we maintain a focus on performance in our applications.

```json
{
  "dataset": "production",
  "documentId": "116d2c7a-d1de-4d00-9a88-8ac65ceaad10",
  "documentType": "feedback",
  "projectId": "xe385msc"
}
```

## tl;dr it's about more, smaller fetches

In server-side rendered (SSR) web application frontends, you have likely formed a habit of fetching **everything** your web page needs in one huge GROQ query using `client.fetch`. This isn't necessary in a single-page application (SPA). The happy path for App SDK apps is to filter for specific documents using `useDocuments` and pass down the returned document handles to components, which individually fetch, edit, and take actions on documents. Any concerns about caching, real-time updates, and optimistic updates are all taken care of by the App SDK.

## Let's finally fetch something

1. **Create** a new component called `Feedback`, which will be the parent component of all our UI.
```tsx:app-feedback/src/Feedback.tsx
import { Suspense, useState } from "react"
import { DocumentHandle } from "@sanity/sdk-react"
import { Card, Flex, Grid, Spinner } from "@sanity/ui"
import { styled } from "styled-components"

import { FeedbackList } from "./FeedbackList"

const ScreenHeightCard = styled(Card)`
  height: 100vh;
  overflow: scroll;
`

export function Feedback() {
  const [selectedFeedback, setSelectedFeedback] = useState<DocumentHandle | null>(null)

  return (
    <Grid columns={5}>
      <ScreenHeightCard columnStart={1} columnEnd={3}>
        <Suspense fallback={<Loading />}>
          <FeedbackList
            setSelectedFeedback={setSelectedFeedback}
            selectedFeedback={selectedFeedback}
          />
        </Suspense>
      </ScreenHeightCard>
      <ScreenHeightCard borderLeft columnStart={3} columnEnd={6}>
        {/* TODO: Add <FeedbackEdit /> form */}
      </ScreenHeightCard>
    </Grid>
  )
}

function Loading() {
  return (
    <Flex justify="center" align="center" width="fill" height="fill">
      <Spinner />
    </Flex>
  )
}
```

1. Don't forget your `Suspense` boundaries. The App SDK React hooks use Suspense for data fetching, which means any component that uses one of these hooks can suspend rendering further up the component tree. Since this `Feedback` component will render both the `FeedbackList` and `FeedbackEdit` form components, without separate `Suspense` boundaries a data fetch in one component would block rendering of both.
1. **Read more** about [`Suspense` in the React documentation](https://react.dev/reference/react/Suspense)

1. **Create** another component to query for the feedback documents.
```tsx:app-feedback/src/FeedbackList.tsx
import { type DocumentHandle, useDocuments } from "@sanity/sdk-react"
import { Stack, Button } from "@sanity/ui"

type FeedbackListProps = {
  selectedFeedback: DocumentHandle | null
  setSelectedFeedback: (feedback: DocumentHandle | null) => void
}

export function FeedbackList({
  selectedFeedback,
  setSelectedFeedback,
}: FeedbackListProps) {
  const { data, hasMore, loadMore } = useDocuments({
    documentType: "feedback",
  })

  return (
    <Stack space={2} padding={5}>
      {data?.map((feedback) => (
        <pre key={feedback.documentId}>{JSON.stringify(feedback, null, 2)}</pre>
      ))}
      {hasMore && <Button onClick={loadMore} text="Load more" />}
    </Stack>
  )
}
```

Lastly, you'll need to load the `Feedback` component into the main App.

1. **Update** `App.tsx` to replace `ExampleComponent` with `Feedback`

```tsx:app-feedback/src/App.tsx
import {Feedback} from "./Feedback"

export default function App() {
  // ...sanityConfigs, Loading

  return (
    <SanityUI>
      <SanityApp config={sanityConfigs} fallback={<Loading />}>
        <Feedback />
      </SanityApp>
    </SanityUI>
  )
}
```

In your application you should now see a list of document handles rendered into the UI.

![Application showing a list of JSON objects](https://cdn.sanity.io/images/3do82whm/next/349b181d054a399467f27797733cbf9d451b2b89-2240x1480.png)

You now have an application that can query documents, but it doesn't yet show any useful information about them. Let's fix that in the next lesson.

## [useDocumentProjection](/learn/course/build-content-apps-with-sanity-app-sdk/use-document-projection)

Pick just the content you need from individual documents, and only when a component is rendered in view. We have a list of document handles, but we need more information about each document. Let's create a component that uses these handles to fetch more values from each document.

1. **Create** a new component to visualize the value of the `status` field in a document throughout our application.
```tsx:app-feedback/src/StatusBadge.tsx
import { Badge } from "@sanity/ui"

type StatusBadgeProps = {
  status?: string
  fontSize?: number
}

export function StatusBadge({
  status = "PENDING",
  fontSize = 2,
}: StatusBadgeProps) {
  return (
    <Badge
      tone={
        status === "approved"
          ? "positive"
          : status === "spam"
            ? "caution"
            : "default"
      }
      padding={2}
      fontSize={fontSize}
    >
      {status.toUpperCase()}
    </Badge>
  )
}
```

1. **Create** a component to retrieve and display values from a document by its handle

```tsx:app-feedback/src/FeedbackPreview.tsx
import { useRef } from "react"
import { DocumentHandle, useDocumentProjection } from "@sanity/sdk-react"
import { Box, Stack, Text } from "@sanity/ui"

import { StatusBadge } from "./StatusBadge"

type FeedbackPreviewData = {
  _createdAt: string
  content: string | null
  author: string | null
  email: string | null
  status: string
}

export function FeedbackPreview(props: DocumentHandle) {
  const previewRef = useRef<HTMLDivElement>(null)

  const { data, isPending } = useDocumentProjection<FeedbackPreviewData>({
    ...props,
    ref: previewRef,
    projection: `{
      _createdAt,
      content,
      author,
      email,
      "status": coalesce(status, "PENDING")
    }`,
  })

  const showPlaceholder = isPending && !data

  return (
    <Stack ref={previewRef} space={3}>
      <Text size={2} weight="semibold" textOverflow="ellipsis">
        {showPlaceholder ? "..." : data.author}
      </Text>
      <Text muted size={1} textOverflow="ellipsis">
        {showPlaceholder ? "..." : data.email + " " + data._createdAt.split("T")[0]}
      </Text>
      <Text size={2} textOverflow="ellipsis">
        {showPlaceholder ? "..." : data.content}
      </Text>
      <Box>
        <StatusBadge status={data.status} fontSize={1} />
      </Box>
    </Stack>
  )
}
```

There are a few key things to look at in this component.

* `useDocumentProjection` receives the passed-in document handle as props, and then declares a GROQ "projection" which retrieves values from the document.
* The `ref` being passed into the hook is attached to the outermost `Stack` component.
This will ensure that the content returned by this projection is only queried when the component is rendered and visible on the page. Another small but important performance win!

1. Throughout this course we're manually creating types. This is because TypeGen support for the App SDK currently uses experimental packages and may change in future. See the documentation for the most current implementation method.
1. See [App SDK and TypeGen](https://www.sanity.io/learn/app-sdk/sdk-typegen) in the documentation

## Update the feedback list

Now that you have a component to fetch individual documents, let's update the feedback list component to use it.

1. **Update** the `FeedbackList` component to render the `FeedbackPreview` component

```tsx:app-feedback/src/FeedbackList.tsx
import { Suspense } from "react"
import { type DocumentHandle, useDocuments } from "@sanity/sdk-react"
import { Stack, Button, Spinner } from "@sanity/ui"

import { FeedbackPreview } from "./FeedbackPreview"

type FeedbackListProps = {
  selectedFeedback: DocumentHandle | null
  setSelectedFeedback: (feedback: DocumentHandle | null) => void
}

export function FeedbackList({
  selectedFeedback,
  setSelectedFeedback,
}: FeedbackListProps) {
  const { data, hasMore, loadMore } = useDocuments({
    documentType: "feedback",
  })

  return (
    <Stack space={2} padding={5}>
      {data?.map((feedback) => {
        const isSelected = selectedFeedback?.documentId === feedback.documentId

        return (
          <Button
            key={feedback.documentId}
            onClick={() => setSelectedFeedback(feedback)}
            mode={isSelected ? "ghost" : "bleed"}
            tone={isSelected ? "primary" : undefined}
          >
            <Suspense fallback={<Spinner />}>
              <FeedbackPreview {...feedback} />
            </Suspense>
          </Button>
        )
      })}
      {hasMore && <Button onClick={loadMore} text="Load more" />}
    </Stack>
  )
}
```

You should now have the feedback items rendered as a list of buttons. Most importantly, you'll see values from each document in each button. And if any other author makes changes to these documents, you'll see those values update live!
![Application rendering a list of documents](https://cdn.sanity.io/images/3do82whm/next/e2132df769d4f4b14d78073cd5e21d63e5f52dbc-2240x1480.png)

You can click to select them, but they won't do anything yet. In the next lesson you can start building a form to edit each feedback document.

## [useDocument](/learn/course/build-content-apps-with-sanity-app-sdk/use-document)

Fetch content with real-time and optimistic updates when edits are made—locally or remotely. This hook is similar to `useDocumentProjection` in the previous lesson. However, `useDocument` will sync with both local and remote changes to the document. Because it can be more memory intensive, this hook should be used sparingly. That is why in this course you've used `useDocumentProjection` for the document list—which could eventually render hundreds of documents—while only using `useDocument` for the one document rendered in the editing form.

1. **Create** a new component to query for the entire document from its handle.

```tsx:app-feedback/src/FeedbackEdit.tsx
import { DocumentHandle, useDocument } from "@sanity/sdk-react"
import { Card, Flex, Stack, Text, Container } from "@sanity/ui"

import { StatusBadge } from "./StatusBadge"

type FeedbackEditProps = {
  selectedFeedback: DocumentHandle
}

export function FeedbackEdit({ selectedFeedback }: FeedbackEditProps) {
  const { data } = useDocument({ ...selectedFeedback })

  if (!data) {
    return null
  }

  // Ensure type safety for all fields
  const author = typeof data.author === "string" ? data.author : ""
  const email = typeof data.email === "string" ? data.email : ""
  const content = typeof data.content === "string" ? data.content : ""
  const createdAt = typeof data._createdAt === "string" ? data._createdAt.split("T")[0] : ""
  const status = typeof data.status === "string" ? data.status : "pending"
  const sentiment = typeof data.sentiment === "string" ? data.sentiment : ""
  const notes = typeof data.notes === "string" ?
data.notes : ""
  const assignee = typeof data.assignee === "string" ? data.assignee : ""

  return (
    <Container width={1}>
      <Card padding={[0, 0, 4, 5]}>
        <Card padding={[0, 0, 4, 5]} radius={3} shadow={[0, 0, 2]}>
          <Stack space={5}>
            <Flex align="center" justify="space-between">
              <Stack space={3}>
                <Text size={3} weight="semibold">
                  {author}
                </Text>
                <Text size={1} muted>
                  {email} {createdAt}
                </Text>
              </Stack>
              <StatusBadge status={status} fontSize={2} />
            </Flex>
            <Stack space={3}>
              <Card padding={4} radius={2} tone="transparent">
                <Text size={3}>{content}</Text>
              </Card>
            </Stack>
            {/* In the next lessons... */}
            {/* Sentiment, Notes, Assignee, Actions */}
          </Stack>
        </Card>
      </Card>
    </Container>
  )
}
```

1. **Update** the parent `Feedback` component to render the editing form

```tsx:app-feedback/src/Feedback.tsx
import { Suspense, useState } from "react"
import { DocumentHandle } from "@sanity/sdk-react"
import { Card, Flex, Grid, Spinner } from "@sanity/ui"
import { styled } from "styled-components"

import { FeedbackList } from "./FeedbackList"
import { FeedbackEdit } from "./FeedbackEdit"

const ScreenHeightCard = styled(Card)`
  height: 100vh;
  overflow: scroll;
`

export function Feedback() {
  const [selectedFeedback, setSelectedFeedback] = useState<DocumentHandle | null>(null)

  return (
    <Grid columns={5}>
      <ScreenHeightCard columnStart={1} columnEnd={3}>
        <Suspense fallback={<Loading />}>
          <FeedbackList
            setSelectedFeedback={setSelectedFeedback}
            selectedFeedback={selectedFeedback}
          />
        </Suspense>
      </ScreenHeightCard>
      <ScreenHeightCard borderLeft columnStart={3} columnEnd={6}>
        <Suspense fallback={<Loading />}>
          {selectedFeedback ? (
            <FeedbackEdit selectedFeedback={selectedFeedback} />
          ) : null}
        </Suspense>
      </ScreenHeightCard>
    </Grid>
  )
}

function Loading() {
  return (
    <Flex justify="center" align="center" width="fill" height="fill">
      <Spinner />
    </Flex>
  )
}
```

You should now be able to click each document in the list and open the editing form on the right.
The values in this form have a real-time subscription to changes in the Content Lake, as well as an in-memory optimistic cache to render any edits as they happen. It's time to start editing documents with the App SDK.

## [useEditDocument](/learn/course/build-content-apps-with-sanity-app-sdk/use-edit-document)

Edit values in documents with all user interface and versioning complexity abstracted away. You've seen how simple it is to create performant, real-time lists of documents. Prepare to be amazed at how simple it is to edit them.

While the API in this lesson may look simple, what it's doing under the hood is anything but. Edits are optimistically written to an in-browser cache. When editing a published document, a new draft is immediately created, behavior which the Sanity Studio performs but which has been difficult to replicate with Sanity Client alone. The fetch from `useDocument` in the previous lesson provided us with real-time document values; now you can create form components which will update those values.

## Editing with a radio input

First, let's create a control to update the value of the `sentiment` field with a selection of radio buttons.

1.
**Create** a new component with a list of available values

```tsx:app-feedback/src/Sentiment.tsx
import { DocumentHandle, useEditDocument } from "@sanity/sdk-react"
import { Radio, Text, Inline, Stack } from "@sanity/ui"

type SentimentProps = {
  value: string
  handle: DocumentHandle
}

const SENTIMENTS = ["Positive", "Neutral", "Negative"]

export function Sentiment({ value, handle }: SentimentProps) {
  const editSentiment = useEditDocument({ ...handle, path: "sentiment" })

  return (
    <Stack space={3}>
      <Text weight="medium">Sentiment</Text>
      <Inline space={3}>
        {SENTIMENTS.map((sentiment) => (
          <Inline key={sentiment} as="label" space={1} htmlFor={sentiment}>
            <Radio
              id={sentiment}
              checked={value === sentiment.toLowerCase()}
              onChange={(e) => editSentiment(e.currentTarget.value)}
              name="sentiment"
              value={sentiment.toLowerCase()}
            />
            <Text>{sentiment}</Text>
          </Inline>
        ))}
      </Inline>
    </Stack>
  )
}
```

Notice how this component uses `useEditDocument` to modify only a specific path in the document. This means any value passed into the `editSentiment` function will be automatically written to that path in the document.

1. In many React applications you are encouraged to write and track changes to local state, perhaps with a `useState` hook. The real-time nature of Sanity Studio and the App SDK encourages you to always write directly to—and render responses from—the Content Lake. The App SDK is doing work under the hood to make this optimistic and fast.

1. **Update** the `FeedbackEdit` component to render the `Sentiment` component

```tsx:app-feedback/src/FeedbackEdit.tsx
{/* In the next lessons... */}
<Sentiment value={sentiment} handle={selectedFeedback} />
```

You can now click the radio buttons on any selected document, and the edits will take place in real time. Open the same document side by side in your app and Sanity Studio to see how the changes are reflected in both, and note how drafts are automatically created when you edit published documents.
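Conceptually, a path-scoped edit like `editSentiment(value)` writes one field of the stored document and leaves everything else alone. Here's a minimal plain-TypeScript sketch of that semantics; the `setAtPath` helper is a hypothetical stand-in for illustration, not how the SDK is implemented.

```typescript
type FeedbackDoc = {
  _id: string
  sentiment?: string
  notes?: string
}

// Hypothetical helper: return a new document with a single
// top-level field changed, leaving the original untouched
function setAtPath(
  doc: FeedbackDoc,
  path: 'sentiment' | 'notes',
  value: string,
): FeedbackDoc {
  return { ...doc, [path]: value }
}

const before: FeedbackDoc = { _id: 'abc', notes: 'Check this one' }
const after = setAtPath(before, 'sentiment', 'positive')
// Only `sentiment` changed; `notes` and `_id` are untouched
```

With Sanity Client directly, the rough equivalent would be `client.patch(id).set({sentiment: 'positive'}).commit()`, plus all the draft-handling and optimistic-update work the App SDK does for you.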
## Editing with a text input

The `notes` field in our feedback schema is for authors to add some helpful details for other team members to read.

1. **Create** a new component to edit `notes`

```tsx:app-feedback/src/Notes.tsx
import { type DocumentHandle, useEditDocument } from "@sanity/sdk-react"
import { Stack, Text, TextArea } from "@sanity/ui"

type NotesProps = {
  value: string
  handle: DocumentHandle
}

export function Notes({ value, handle }: NotesProps) {
  const editNotes = useEditDocument({ ...handle, path: "notes" })

  return (
    <Stack space={3}>
      <Text weight="medium">Reviewer Notes</Text>
      <TextArea
        value={value}
        onChange={(e) => editNotes(e.currentTarget.value)}
        placeholder="Add your notes about this feedback..."
        rows={3}
      />
    </Stack>
  )
}
```

This component is quite similar to the `Sentiment` component before: `useEditDocument` is configured with a pre-set path, and the `TextArea` input writes changes to it.

1. **Update** `FeedbackEdit` to include the `Notes` component

```tsx:app-feedback/src/FeedbackEdit.tsx
<Notes value={notes} handle={selectedFeedback} />
```

Once again, try writing into the text field, and watch the same document in Sanity Studio update almost immediately after.

![Application with a document editing form](https://cdn.sanity.io/images/3do82whm/next/bc0277250ac384bbac6f3604338cb714d535a4e9-2240x1480.png)

You're now able to make edits to documents, but they're all left in a draft state, so our authors can't commit their changes. You'll set up document actions in the next lesson to finish the work.

## [useApplyDocumentActions](/learn/course/build-content-apps-with-sanity-app-sdk/use-apply-document-actions)

Perform actions on documents to end—or begin—the content lifecycle. Document actions are primarily used to modify the "version" of an entire document—to publish a draft document, to discard the current draft, or to delete the document.
In our app it may be useful to delete feedback that we don't want to keep (this is different from spam, which we may want to keep as a record of a sender whose future submissions should be blocked).

## Delete document action

1. **Create** a new component to hold all our document actions.

```tsx:app-feedback/src/Actions.tsx
import {
  deleteDocument,
  type DocumentHandle,
  useApplyDocumentActions,
} from "@sanity/sdk-react"
import { Button, Flex } from "@sanity/ui"

type ActionsProps = {
  handle: DocumentHandle
}

export function Actions({ handle }: ActionsProps) {
  const apply = useApplyDocumentActions()

  const handleDelete = () => apply(deleteDocument(handle))

  return (
    <Flex gap={1} direction={["column", "column", "column", "row"]}>
      <Button mode="ghost" tone="critical" text="Delete" onClick={handleDelete} />
    </Flex>
  )
}
```

1. **Update** the `FeedbackEdit` component to render it

```tsx:app-feedback/src/FeedbackEdit.tsx
<Flex
  justify="flex-end"
  direction={["column-reverse", "column-reverse", "row"]}
  gap={2}
>
  <Actions handle={selectedFeedback} />
</Flex>
```

You'll add some more buttons to this component in later lessons; that's why we're wrapping it with a `Flex` component now.

You can now click the "Delete" button to delete a document. This action is immediate and the document is removed from the Content Lake. Remember, you can re-import the seed data if you want to re-populate the feedback list!

You may notice a _small_ bug in the UI—the currently selected document no longer exists, but it remains displayed for a few moments until the list updates. We'll improve this in a future lesson.

## Edit and publish in one function

The intention of these buttons is to perform a final action before moving on to the next piece of feedback. If we only edited the `status` field to set the value as "approved" or "spam," the resulting document would still be in a draft version. What we need is a function that will both edit the field value of the document and publish it.

1.
**Update** the `Actions` component to include buttons to mark as spam and approve

```tsx:app-feedback/src/Actions.tsx
import {
  deleteDocument,
  type DocumentHandle,
  publishDocument,
  useApplyDocumentActions,
  useEditDocument,
} from "@sanity/sdk-react"
import { Button, Flex } from "@sanity/ui"

type ActionsProps = {
  handle: DocumentHandle
}

export function Actions({ handle }: ActionsProps) {
  const apply = useApplyDocumentActions()
  const editStatus = useEditDocument({ ...handle, path: "status" })

  const handleDelete = () => apply(deleteDocument(handle))

  const handleMarkAsSpam = () => {
    editStatus("spam")
    apply(publishDocument(handle))
  }

  const handleApprove = () => {
    editStatus("approved")
    apply(publishDocument(handle))
  }

  return (
    <Flex gap={1} direction={["column", "column", "row"]}>
      <Button mode="ghost" tone="critical" text="Delete" onClick={handleDelete} />
      <Button mode="ghost" tone="caution" text="Mark as Spam" onClick={handleMarkAsSpam} />
      <Button mode="ghost" tone="positive" text="Approve" onClick={handleApprove} />
    </Flex>
  )
}
```

1. Invoking edit functions programmatically like this is also how you can perform bulk editing operations across several documents at the same time.

You'll see that these two new actions both call `editStatus` to set the new `status` value and then apply the `publishDocument` action.

![Application with document editing form and three buttons](https://cdn.sanity.io/images/3do82whm/next/c673546cb61d49f6173a8bcce2efecfbf7b8f20d-2240x1480.png)

If you click one of these buttons with the Studio open to the same document, you may only briefly see a draft document with the updated `status` value before it is immediately published. Now that your app is performing significant actions on documents, some additional feedback in the UI will help users. Let's add notifications in the next lesson.

## [useDocumentEvent](/learn/course/build-content-apps-with-sanity-app-sdk/use-document-event)

Listen to changes to content in your application and trigger events in the user interface.
User feedback is extremely important in our applications, especially when users are taking actions as significant as publishing or deleting documents. Sanity UI comes with hooks to pop up toast notifications; in this lesson, we'll configure them to fire when document events happen.

Sanity UI exports a `ToastProvider`, which should already be included inside the `SanityUI` component.

1. **Confirm** `SanityUI.tsx` includes the `ToastProvider`

```tsx:app-feedback/src/SanityUI.tsx
import {ThemeProvider, ToastProvider} from '@sanity/ui'
import {buildTheme} from '@sanity/ui/theme'
import {createGlobalStyle} from 'styled-components'

const theme = buildTheme()

const GlobalStyle = createGlobalStyle`
  html,
  body {
    margin: 0;
    padding: 0;
  }
`

export function SanityUI({children}: {children: React.ReactNode}) {
  return (
    <>
      <GlobalStyle />
      <ThemeProvider theme={theme}>
        <ToastProvider>{children}</ToastProvider>
      </ThemeProvider>
    </>
  )
}
```

Now you can create a new component to listen to document events as they happen and, when required, pop up toast notifications in the UI.

1. **Create** a new `FeedbackEvents` component to fire toast notifications as events happen

```tsx:app-feedback/src/FeedbackEvents.tsx
import { DocumentEvent, useDocumentEvent } from "@sanity/sdk-react"
import { useToast } from "@sanity/ui"

export function FeedbackEvents() {
  const toast = useToast()

  const onEvent = (documentEvent: DocumentEvent) => {
    if (documentEvent.type === "published") {
      toast.push({
        title: "Feedback processed",
        status: "success",
      })
    } else if (documentEvent.type === "deleted") {
      toast.push({
        title: "Feedback deleted",
        status: "error",
      })
    }
  }

  useDocumentEvent({ onEvent })

  return null
}
```

This component doesn't render any UI and so can be rendered anywhere inside the `SanityApp` provider.

1.
**Update** `App.tsx` to include the `FeedbackEvents` component

```tsx:app-feedback/src/App.tsx
import { FeedbackEvents } from "./FeedbackEvents"

export function App() {
  // ...sanityConfigs, Loading

  return (
    <SanityUI>
      <SanityApp config={sanityConfigs} fallback={<Loading />}>
        <Feedback />
        <FeedbackEvents />
      </SanityApp>
    </SanityUI>
  )
}

export default App
```

Now you can click **Approve**, **Mark as Spam**, or **Delete** on any document and see a toast notification to show you've completed an action.

![Application showing a toast notification](https://cdn.sanity.io/images/3do82whm/next/6e39291af02cdcdd47592a7a5688ae8e63795284-2240x1480.png)

At this point, you could consider our app to be "feature complete." Authors are able to set the sentiment of a piece of feedback, add some notes, and take a final action on the document. But we can go deeper!

## [useUsers](/learn/course/build-content-apps-with-sanity-app-sdk/use-users)

Render an interactive list of Sanity project users to assign to documents. Storing user data against documents can be useful for features such as user "assignment." You can add another document editing control that displays all users in a project as a clickable list, setting a user ID as a value in the document.

1. **Create** a new component `Assignee` to query for project users and render their avatars.

```tsx:app-feedback/src/Assignee.tsx
import { DocumentHandle, useEditDocument, useUsers } from "@sanity/sdk-react"
import { Inline, Avatar, Stack, Text, Button } from "@sanity/ui"

type AssigneeProps = {
  value: string
  handle: DocumentHandle
}

export function Assignee({ value, handle }: AssigneeProps) {
  const { data: users } = useUsers()
  const editAssignee = useEditDocument({ ...handle, path: "assignee" })

  return (
    <Stack space={3}>
      <Text weight="medium">Assignee</Text>
      <Inline space={1}>
        {users?.map((user) => (
          <Button
            key={user.sanityUserId}
            onClick={() => editAssignee(user.sanityUserId)}
            padding={0}
            mode="bleed"
          >
            <Avatar
              status={value === user.sanityUserId ?
"online" : "inactive"}
              size={2}
              src={user.profile?.imageUrl}
            />
          </Button>
        ))}
      </Inline>
    </Stack>
  )
}
```

1. **Update** the `FeedbackEdit` component to include it

```tsx:app-feedback/src/FeedbackEdit.tsx
<Assignee value={assignee} handle={selectedFeedback} />
```

There is another hook to quickly retrieve the details of the currently logged-in user. We can use it to filter the documents returned by `useDocuments`. Let's put it to work in the next lesson.

## [useUser](/learn/course/build-content-apps-with-sanity-app-sdk/use-user)

Filter the queried list of documents based on the current user and other selections. Now that feedback documents can be marked as assigned to specific users, it would be useful to filter the feedback list of documents to just those the current user is responsible for.

The `useDocuments` hook you set up initially in `FeedbackList` only has a `documentType` option set:

```groq
documentType: 'feedback'
```

However, this hook can also take `filter` and `params` options, which may be dynamically updated by the application. Let's add some UI elements which will dynamically filter the list of returned documents.

1. **Create** a new component to dynamically filter documents by `status`

```tsx:app-feedback/src/StatusSelector.tsx
import { Button, Grid } from "@sanity/ui"

type StatusSelectorProps = {
  status: string
  setStatus: (nextStatus: string) => void
}

const STATUSES = ["All", "Pending", "Spam", "Approved"]

export function StatusSelector({ status, setStatus }: StatusSelectorProps) {
  return (
    <Grid columns={[2, 2, 2, 4]} gap={1}>
      {STATUSES.map((statusOption) => (
        <Button
          key={statusOption}
          mode={statusOption.toLowerCase() === status ? "default" : "ghost"}
          onClick={() => setStatus(statusOption.toLowerCase())}
          text={statusOption}
        />
      ))}
    </Grid>
  )
}
```

1. **Create** another component to toggle an additional filter for the `assignee` field.
```tsx:app-feedback/src/OnlyMine.tsx
import { Switch, Inline, Text, Card } from "@sanity/ui"
import { useCurrentUser } from "@sanity/sdk-react"
import { Dispatch, SetStateAction } from "react"

type OnlyMineProps = {
  userId: string | null
  setUserId: Dispatch<SetStateAction<string | null>>
}

export function OnlyMine({ userId, setUserId }: OnlyMineProps) {
  const currentUser = useCurrentUser()

  return (
    <Card border padding={2}>
      <Inline space={2}>
        <Text size={1} as="label" htmlFor="only-mine">
          Only mine
        </Text>
        <Switch
          id="only-mine"
          disabled={!currentUser}
          checked={userId === currentUser?.id}
          onClick={() => {
            if (currentUser) {
              setUserId((currentId) =>
                currentId === currentUser.id ? null : currentUser.id
              )
            }
          }}
        />
      </Inline>
    </Card>
  )
}
```

Now you'll need to import these into the `FeedbackList` and set a `filter` that will conditionally use the `params`.

1. **Update** the `FeedbackList` component

```tsx:app-feedback/src/FeedbackList.tsx
import { Suspense, useState } from "react"
import { type DocumentHandle, useDocuments } from "@sanity/sdk-react"
import { Stack, Button, Spinner } from "@sanity/ui"

import { FeedbackPreview } from "./FeedbackPreview"
import { StatusSelector } from "./StatusSelector"
import { OnlyMine } from "./OnlyMine"

type FeedbackListProps = {
  selectedFeedback: DocumentHandle | null
  setSelectedFeedback: (feedback: DocumentHandle | null) => void
}

export function FeedbackList({
  selectedFeedback,
  setSelectedFeedback,
}: FeedbackListProps) {
  const [userId, setUserId] = useState<string | null>(null)
  const [status, setStatus] = useState("all")

  const { data, hasMore, loadMore } = useDocuments({
    documentType: "feedback",
    filter: `
      select(defined($userId) => assignee == $userId, true) &&
      select(
        $status == "pending" => !defined(status) || status == "pending",
        $status == "spam" => status == $status,
        $status == "approved" => status == $status,
        true
      )
    `,
    params: { userId, status },
    orderings: [{ field: "_createdAt", direction: "desc" }],
    batchSize: 10,
  })
  return (
    <Stack space={2} padding={5}>
      <StatusSelector status={status} setStatus={setStatus} />
      <OnlyMine userId={userId} setUserId={setUserId} />
      {data?.map((feedback) => {
        const isSelected = selectedFeedback?.documentId === feedback.documentId

        return (
          <Button
            key={feedback.documentId}
            onClick={() => setSelectedFeedback(feedback)}
            mode={isSelected ? "ghost" : "bleed"}
            tone={isSelected ? "primary" : undefined}
          >
            <Suspense fallback={<Spinner />}>
              <FeedbackPreview {...feedback} />
            </Suspense>
          </Button>
        )
      })}
      {hasMore && <Button onClick={loadMore} text="Load more" />}
    </Stack>
  )
}
```

You should now be able to click the buttons to filter based on user assignment or document status. Our app's really useful now!

![Application with filter controls above the feedback list](https://cdn.sanity.io/images/3do82whm/next/d47f968c0190e7ae48962caeedf3414256b80884-2240x1480.png)

## Conditional GROQ params

The GROQ filter we wrote is a bit gnarly! The `select()` function is used here to only filter by a param value if it is not `null`. First it uses `defined()` to check whether `$userId` is set. If it is, the filter only matches documents where the `assignee` field equals `$userId`; if it is `null`, the `assignee` field is not used as part of the filter.

It also applies selective filtering on the value of the `status` field—first checking for documents without that value (or with the value "pending"), then only showing "spam" or "approved" documents if that's what the current filter matches. Lastly, the fallback case matches everything regardless of the `status` field.

We can go _further_. Let's link your app and the Studio more closely together.

## [useNavigateToStudioDocument](/learn/course/build-content-apps-with-sanity-app-sdk/use-navigate-to-studio-document)

Bridge the gap between your application and Sanity Studio with an automatic link. It may benefit your application to include links from any document to the Studio, as your Studio is probably still the source of truth for all your content.
Fortunately, the App SDK provides a hook to automatically generate a link from a document to the correct Studio.

## Deploy the Studio

You will need to deploy your Studio first to make this work, as links from your app's development server won't open in your local running Studio.

1. **Run** the following command in the `/studio` folder to deploy your Studio and schema

```sh
# in the /studio folder
npx sanity@latest deploy
```

Follow the prompts; once deployed, you should see your Studio as an option on the left-hand side of the Dashboard. Let's proceed!

## Composing suspenseful components

In the example below, the Suspense boundary is exported from the component file itself. This is a useful pattern to unify the `fallback` and child components to remove any layout shift. Instead of a loading spinner, the fallback prop is a disabled version of the same button rendered by the child component when loading is complete.

You may like to consider implementing other suspenseful components in the same way, so that your logic of what renders before and after loading is colocated in a single file.

## Navigate to Studio

1. **Create** a new component for a button which will open documents in the Studio.

```tsx:app-feedback/src/OpenInStudio.tsx
import { Suspense } from "react"
import {
  type DocumentHandle,
  useNavigateToStudioDocument,
} from "@sanity/sdk-react"
import { Button } from "@sanity/ui"

const BUTTON_TEXT = "Open in Studio"

type OpenInStudioProps = {
  handle: DocumentHandle
}

export function OpenInStudio({ handle }: OpenInStudioProps) {
  return (
    <Suspense fallback={<OpenInStudioFallback />}>
      <OpenInStudioButton handle={handle} />
    </Suspense>
  )
}

function OpenInStudioFallback() {
  return <Button text={BUTTON_TEXT} disabled />
}

function OpenInStudioButton({ handle }: OpenInStudioProps) {
  const { navigateToStudioDocument } = useNavigateToStudioDocument(handle)

  return <Button onClick={navigateToStudioDocument} text={BUTTON_TEXT} />
}
```

1.
**Update** the `FeedbackEdit` component to add this new button alongside your actions

```tsx:app-feedback/src/FeedbackEdit.tsx
<Flex
  justify="space-between"
  direction={['column-reverse', 'column-reverse', 'row']}
  gap={2}
>
  <OpenInStudio handle={selectedFeedback} />
  <Actions handle={selectedFeedback} />
</Flex>
```

You should now be able to go directly from any selected feedback document in your app to that same document in your Sanity Studio.

Your app currently uses the most common hooks in the App SDK for React, but there's one more, do-anything hook we can put to work.

## [useClient](/learn/course/build-content-apps-with-sanity-app-sdk/use-client)

"Break glass in case of emergency" access to the all-powerful Sanity Client.

If you've done Sanity development before the App SDK, it's remarkable to think how complex an application you've used without ever needing to access the Sanity Client directly. Thankfully, if we have needs that the App SDK doesn't meet, we can always reach out and grab the Sanity Client to take control of our own destiny.

Currently, the sentiment editor is not very intelligent. It requires a human to read the feedback and spend mental cycles processing whether it is positive, neutral, or negative. This is no longer a job for humans; this is a job for AI. Sanity Client includes AI Agent Actions, which are just the tools we need.

## Deploying schema

In the previous lesson you deployed the Studio. This should have also deployed your Studio's schema types—which are required by Agent Actions. You can list your deployed schemas from the command line:

```sh
# inside the /studio folder
npx sanity@latest schema list
```

You should see at least one schema deployment. If not, deploy the schema now:

```sh
# inside the /studio folder
npx sanity@latest schema deploy
```

## Calling the agent action

You'll now update the `Sentiment` component almost entirely.
Instead of the value being edited by a user selection, clicking a button will hand the work off to the Agent Action. Note: In reality this might be better performed in a Sanity Function so it is automated, instead of waiting for user action. 1. **Update** the `Sentiment` component to call an Agent Action. ```tsx:app-feedback/src/Sentiment.tsx import { DocumentHandle, useClient } from "@sanity/sdk-react" import { Text, Inline, Stack, Button } from "@sanity/ui" import { useToast } from "@sanity/ui" type SentimentProps = { feedback: string value: string handle: DocumentHandle } function titleCase(str: string) { return str.replace( /\w\S*/g, (txt) => txt.charAt(0).toUpperCase() + txt.slice(1) ) } const SCHEMA_ID = "_.schemas.default" export function Sentiment({ feedback, value, handle }: SentimentProps) { const client = useClient({ apiVersion: "vX" }) const toast = useToast() function assessSentiment() { client.agent.action .generate({ targetDocument: { operation: "edit", _id: handle.documentId, }, instruction: ` You are a helpful assistant that analyzes customer feedback and determines the sentiment of the feedback. The sentiment can be one of the following: "positive", "neutral", "negative", Analyze the following feedback and determine the sentiment: $feedback `, instructionParams: { feedback: { type: "constant", value: feedback, }, }, target: { path: "sentiment", }, schemaId: SCHEMA_ID, }) .then((result) => { toast.push({ title: "Sentiment assessed", description: result.text, status: "success", }) }) .catch((error) => { toast.push({ title: "Error assessing sentiment", description: error.message, status: "error", }) }) } return ( <Stack space={3}> <Text weight="medium">Sentiment</Text> <Inline space={3}> <Button mode="ghost" onClick={assessSentiment} text="Assess" /> <Text>{value ? 
titleCase(value) : ""}</Text> </Inline> </Stack> ) } ``` You'll also need to pass down the content of the feedback into this component, so it can be used as a parameter in the Agent Action. 1. **Update** the `FeedbackEdit` component to pass feedback down to `Sentiment` as a prop ```tsx:app-feedback/src/FeedbackEdit.tsx <Sentiment value={sentiment} handle={selectedFeedback} feedback={content} /> ``` Now with any feedback document open you can click the "Assess" button and save yourself the decision making fatigue of determining user sentiment. Your app is now feature complete! Let's deploy it to the world.## [Deployment and finishing touches](/learn/course/build-content-apps-with-sanity-app-sdk/deployment-and-finishing-touches) You have a working app. It's time to share it with your authoring team and tidy up some rough edges. ## Deploy your app You can deploy your custom application at any time. Just like your Sanity Studio, it can be deployed from the command line. ```sh # in /app-feedback pnpm dlx sanity deploy ``` The first time you deploy your application, you’ll be prompted for a title. Once deployed, your app receives a unique ID which should be added to your app's `sanity.cli.ts` file for smoother future deployments. 1. **Update** your `sanity.cli.ts` file once your app is deployed with its `ID` ```typescript:app-feedback/sanity.cli.ts import { defineCliConfig } from "sanity/cli" export default defineCliConfig({ // ...all other settings deployment: { appId: "YOUR_APP_ID", }, }) ``` You should now see your Feedback application in the left hand column of your Sanity dashboard. ## Avoid repetitive loading spinners Using Suspense throughout the application has given us a way to render loading spinners while data is fetched. This is great at first, but you may notice some annoying behavior when clicking through multiple documents in the feedback list. The spinner appears almost every time you change documents. 
Even though the responses for these documents are cached, this occasional disappearance and re-rendering of the editing form is visual noise that we could do without.

Fortunately, React gives us a function called `startTransition`, which we can use to prevent more than one loading spinner when we change the selected document.

1. **Update** the `Feedback` component to use `startTransition`

```tsx:app-feedback/src/Feedback.tsx
import { startTransition, Suspense, useState } from "react"
import { DocumentHandle } from "@sanity/sdk-react"
import { Card, Flex, Grid, Spinner } from "@sanity/ui"
import { styled } from "styled-components"
import { FeedbackList } from "./FeedbackList"
import { FeedbackEdit } from "./FeedbackEdit"

const ScreenHeightCard = styled(Card)`
  height: 100vh;
  overflow: scroll;
`

export function Feedback() {
  const [selectedFeedback, setSelectedFeedback] =
    useState<DocumentHandle | null>(null)

  const updateSelectedFeedback = (handle: DocumentHandle | null) =>
    startTransition(() => setSelectedFeedback(handle))

  return (
    <Grid columns={5}>
      <ScreenHeightCard columnStart={1} columnEnd={3}>
        <Suspense fallback={<Loading />}>
          <FeedbackList
            setSelectedFeedback={updateSelectedFeedback}
            selectedFeedback={selectedFeedback}
          />
        </Suspense>
      </ScreenHeightCard>
      <ScreenHeightCard borderLeft columnStart={3} columnEnd={6}>
        <Suspense fallback={<Loading />}>
          {selectedFeedback ? (
            <FeedbackEdit selectedFeedback={selectedFeedback} />
          ) : null}
        </Suspense>
      </ScreenHeightCard>
    </Grid>
  )
}

function Loading() {
  return (
    <Flex justify="center" align="center" width="fill" height="fill">
      <Spinner />
    </Flex>
  )
}
```

Now, when we click through documents in the list, the previously selected document should remain visible until the next document has finished loading.

It is possible to get a pending state from the `useTransition` hook. [Take a look in the React documentation](https://react.dev/reference/react/Suspense#indicating-that-a-transition-is-happening) for more details.
## Show optimistic updates in the document list

For performance reasons, we have used `useDocumentProjection` in the document list and `useDocument` in the editing form. You may notice this creates a small discrepancy when changing the status of a document. It happens immediately in the editing form but takes a second to update in the document list.

Since we only need to see optimistic updates on the currently selected document—as that is the one being edited—we could create an optimistic preview component to use in the document list only for the currently selected document.

1. **Create** a new optimistic preview component for the document list

```tsx:app-feedback/src/FeedbackPreviewSelected.tsx
import { useRef } from "react"
import { DocumentHandle, useDocument } from "@sanity/sdk-react"
import { Box, Stack, Text } from "@sanity/ui"
import { StatusBadge } from "./StatusBadge"

type FeedbackPreviewData = {
  _createdAt: string
  content: string | null
  author: string | null
  email: string | null
  status: string | null
}

export function FeedbackPreviewSelected(props: DocumentHandle) {
  const previewRef = useRef<HTMLDivElement>(null)
  const { data } = useDocument<FeedbackPreviewData>({ ...props })

  const author = typeof data?.author === "string" ? data.author : "..."
  const email = typeof data?.email === "string" ? data.email : "..."
  const content = typeof data?.content === "string" ? data.content : "..."
  const createdAt =
    typeof data?._createdAt === "string"
      ? data._createdAt.split("T")[0]
      : "..."
  const status = typeof data?.status === "string" ? data.status : "PENDING"

  return (
    <Stack ref={previewRef} space={3}>
      <Text size={2} weight="semibold" textOverflow="ellipsis">
        {author}
      </Text>
      <Text muted size={1} textOverflow="ellipsis">
        {email} {createdAt}
      </Text>
      <Text size={2} textOverflow="ellipsis">
        {content}
      </Text>
      <Box>
        <StatusBadge status={status} fontSize={1} />
      </Box>
    </Stack>
  )
}
```

1.
**Update** the `FeedbackList` component to selectively render the correct component

```tsx:app-feedback/src/FeedbackList.tsx
{isSelected ? (
  <FeedbackPreviewSelected {...feedback} />
) : (
  <FeedbackPreview {...feedback} />
)}
```

If you change a Feedback item from "Approved" to "Spam" now, it should be reflected immediately in the selected document preview. Much better!

Let's test what you've learned in the final lesson.

## [SDK Quiz](/learn/course/build-content-apps-with-sanity-app-sdk/sdk-quiz)

Let's put everything you've learned to the test!

Now you've built a fully featured content application, let's see what you've learned about some of the hooks you've put to work.

**Question:** useDocuments is preferable to client.fetch because...

1. It fetches faster
2. Built-in batching and real-time updates
3. Hooks are the only way to fetch data in React
4. You can't be trusted with client.fetch

**Question:** useDocumentProjection is a hook for

1. Fetching multiple documents
2. Fetching document values
3. Fetching future values
4. Fetching user data

**Question:** useDocument should be used sparingly because

1. It costs extra
2. It resolves both local and remote states of the document
3. It's slower
4. It's not always correct

**Question:** useEditDocument creates better UI than client.patch because

1. It's faster
2. It handles versions
3. It handles webhooks
4. It's cheaper

**Question:** useApplyDocumentActions provides a way to perform actions, like

1. Trigger a webhook
2. Publish a document
3. Update your billing
4. Delete a user

**Question:** useDocumentEvent listens to

1. Mutations to documents
2. Webhooks firing
3. User log-ins
4. Your computer's microphone

**Question:** useNavigateToStudioDocument

1. Is an incredibly specific name for a hook
2. Is a mysterious name for a hook

# [Controlling cached content in Next.js](/learn/course/controlling-cached-content-in-next-js)

Creating a high-performance, fast-loading web application depends on caching.
Learn how to implement a caching strategy you can understand, debug and depend on.

## [Caching Fundamentals](/learn/course/controlling-cached-content-in-next-js/introduction)

Next.js has prioritized performance with its caching methods and expects you to configure them. Learn how to integrate the Next.js cache and Sanity CDN for high performance.

## Prefer live by default

You might not need this course. The [Live Content API](https://www.sanity.io/learn/content-lake/live-content-api), and its simplified implementation with [`next-sanity`](https://www.sanity.io/learn/course/visual-editing-with-next-js/enhanced-visual-editing-with-react-loader), handles all aspects of fetching, rendering, caching and invalidating queries in a few lines of code.

1. This course remains online to explain finer details of working with Sanity and the Next.js cache. The code examples in it may not follow from the previous [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) course.

Our **strong recommendation** is to use live fetches by default.

1. Skip to the [Integrated Visual Editing with Next.js](https://www.sanity.io/learn/course/visual-editing-with-next-js) course next.

## Welcome to caching

Caching is not unique to Next.js or Vercel; it's a common strategy across all programming and comes in many forms. For example, in-memory caching is one approach that stores data in the application's memory for quick access.

Discussions of caching for web applications typically refer to network requests. When a user makes a request from a web server, the response may be cached in their browser, so subsequent requests for the same page do not need to perform yet another round-trip for the same response.
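To make the idea of in-memory caching concrete, here is a minimal sketch using a plain `Map` with a time-to-live. The `TtlCache` name and API are illustrative, not from any library:

```typescript
// Illustrative only: a tiny in-memory cache with a time-to-live (TTL).
// Real applications would also bound the cache's size (e.g. LRU eviction).
export class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>()

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expiresAt) {
      // Entry is stale: drop it and report a miss.
      this.store.delete(key)
      return undefined
    }
    return entry.value
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}
```

A caller would try `get` first and only recompute (or re-fetch) on a miss — the same hit/miss trade-off discussed throughout this course, just at the smallest possible scale.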
Similarly, when a web server computes and returns a response, it may be cached on the server so that subsequent requests from many other clients can be fulfilled from its cache – faster than recomputing the same response.

This is where things get tricky. How long should your web server cache that response? If it's too long, your users may be frustrated by being served stale content. If it's too short, too many users may have to wait for the web server to compute responses – and your web server may use too many resources doing so.

## Next.js specific caching

In typical web applications, caching is controlled by modifying the headers sent with a response.

1. See the MDN documentation on [Cache-Control headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control)

However, Next.js has framework-specific configuration options to scope and simplify setup. This course will primarily focus on these. The following resources may be valuable additional reading:

1. [Next.js data fetching](https://nextjs.org/docs/app/building-your-application/data-fetching/fetching-caching-and-revalidating) documentation
2. [Next.js caching](https://nextjs.org/docs/app/building-your-application/caching) documentation

## Goals of this course

Once you have completed this course, you will:

* Understand why caching matters based on who is most impacted and how.
* Integrate requests for Sanity's CDN and API with the built-in Next.js cache, configured with sensible defaults.
* Observe the impact of – and debug changes to – cache configuration.
* Revalidate cached requests based on time, path, and tag.
* Set up [GROQ-powered webhooks](https://www.sanity.io/learn/compute-and-ai/webhooks) to perform cache revalidation automatically when documents change.

## Who is impacted by caching?

There's no one-size-fits-all strategy for caching, so a development team is responsible for fine-tuning their application's caches.
Let's consider how different user groups are impacted by the types of caching that can be implemented.

### Content authors

In content-driven web applications, content authors typically want to see the effect of their changes happen immediately. The most reliable way to do this would be to remove all caching from the front end so that every response is freshly created. You could also retrieve content from Sanity's API instead of the CDN to ensure the freshest content is used.

However, this strategy also creates the slowest-loading, most expensive-to-operate web applications. Not ideal.

1. Content authors that would prefer to see fresh content before – or immediately after – publishing are better served by configuring Visual Editing, rather than modifying cache settings in production. Take the [Integrated Visual Editing with Next.js](https://www.sanity.io/learn/course/visual-editing-with-next-js) course to find out how.

### Business stakeholders

Stakeholders in your business would like to keep the running costs of your web application low and conversions high, so you might think the most aggressive caching strategy would suit them. The fewer requests directed to an API instead of a CDN, the better. The less bandwidth a web server spends computing and fulfilling requests, the better.

1. See Cloudflare's documentation on how [website speed affects conversion rates](https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates)

However, overly aggressive caching is bound to frustrate your content authors and end-user groups.

### End users

The stakeholders mentioned above would also like to see improved conversions from end users – who expect a mix of fast-loading pages and up-to-date, reliable content. For example, it's no good if a product page loads quickly but the stock level or price information is invalid.

1. Split requests for long-lived and dynamic parts of the same page.
[Partial pre-rendering](https://nextjs.org/docs/app/api-reference/next-config-js/partial-prerendering) is one solution for the above problem.

As you can see, each group that is majorly impacted by your web application's cache brings a unique point of view. This makes knowing how caching works—and reacting to the changing realities of how your application is used—so important.

Now that you understand the problem space and who is impacted, it's best to equip yourself with the tools required to configure and debug your web application's caching configuration.

## [Demystifying caching in development](/learn/course/controlling-cached-content-in-next-js/debugging-caching-in-development)

Set up Next.js so that as you make changes and navigate through the application, you can observe the impact of your cache configuration.

In recent years, the popularity of the "Jamstack" and Static Site Generators reduced the importance of caching when serving web applications. However, as the limitations of those approaches became more apparent, dynamic, server-rendered responses regained popularity, putting caching back in the spotlight.

Next.js 14 not only provided aggressive caching for an application's `fetch` requests, it also made it the default. This led to faster response times at the expense of increased developer frustration. Every `fetch` request was instantly cached, whether in development or production. Further, this cache is stored in a separate data layer from your site code, so redeploys did not reset the site's state as you may have expected in the Jamstack years.

Next.js 15 has reversed this decision, and caching is opt-in once again. This was likely a difficult decision because there are pitfalls either way. In this writer's opinion, this decision is not strictly _better_; it is just _different_. It's more important to understand what has been cached and when than whether a request was cached by default.
In short, you will want to specify the caching configuration and be able to observe its results.

## Logging fetch requests

Fortunately, a Next.js configuration setting logs [the full URL of any fetch request](https://nextjs.org/docs/app/api-reference/next-config-js/logging), along with information about whether it was a cache `HIT` or `MISS` – and why.

1. A cache `HIT` occurs when the requested data is found in the cache, allowing it to be served quickly without fetching from the source.
2. A cache `MISS` is the opposite, requiring the data to be fetched from the source, which is slower than serving from the cache.

Sanity Client uses `fetch` under the hood, so once you have enabled this debugging mode below, every query you perform with it will appear in the console.

1. **Update** your `next.config.ts` with the following configuration

```typescript:next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  logging: {
    fetches: {
      fullUrl: true,
    },
  },
  // ...all other settings
};

export default nextConfig;
```

Now refresh any page that fetches data from Sanity – like the posts index or an individual post page – and you should see something like the following in your console:

```text
GET /posts 200 in 39ms
 │ GET https://q1a918nb.apicdn.sanity.io/v2024-07-24/data/query/production?query=*%5B_type+%3D%3D+%22post%22+%26%26+defined%28slug.current%29%5D%5B0...12%5D%7B%0A++_id%2C+title%2C+slug%0A%7D&returnQuery=false 200 in 5ms (cache hit)
```

From this, you can observe:

* The `client.fetch()` request was for `apicdn.sanity.io`, which means the request was performed with Sanity Client's `useCdn` set to `true`.
* As a **cache hit**, the response was fulfilled by the Next.js cache, so this request for `/posts` may not have been sent to Sanity's CDN.
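As an aside, the query string in a logged URL like the one above is ordinary URL-encoded GROQ, so you can recover the readable query with the standard `URL` API. A small sketch, using a shortened version of such a URL; the helper name `groqFromLoggedUrl` is ours, not part of Next.js or Sanity:

```typescript
// Recover the GROQ query from a logged Sanity CDN request URL.
// URLSearchParams decodes both percent-encoding and `+`-as-space for us.
export function groqFromLoggedUrl(loggedUrl: string): string | null {
  const url = new URL(loggedUrl)
  return url.searchParams.get("query")
}

// A shortened example URL in the same shape as the log output above:
const logged =
  "https://q1a918nb.apicdn.sanity.io/v2024-07-24/data/query/production?query=*%5B_type+%3D%3D+%22post%22%5D&returnQuery=false"

console.log(groqFromLoggedUrl(logged)) // the readable GROQ filter
```

This can be handy when a logged `MISS` makes you wonder exactly which query was sent.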
In the previous course – [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) – these fetches were configured to update at most once every 60 seconds.

* If a cache hit has already been served within 60 seconds, the response will be fast.
* If that time has elapsed, the request will still be served stale, expired data – but in the background, the cache will be repopulated so that the next request receives fresh content.

1. This is similar to the [stale-while-revalidate](https://web.dev/articles/stale-while-revalidate) pattern of caching responses

## Purging the cache

Seeing what is cached is helpful, but it's even better to be able to completely reset the cache during development.

In the following lessons, you'll look at setting up surgical control for revalidating fetches based on time, path, and tag. This is the preferred option for your production web application. But sometimes, in development, you need a _hammer_.

1. **Create** a new API route in your application:

```typescript:src/app/api/revalidate/all/route.ts
import { revalidatePath } from 'next/cache'

export async function GET() {
  if (process.env.NODE_ENV === 'development') {
    revalidatePath('/', 'layout')

    return Response.json({ message: 'Layout revalidated' })
  }

  return Response.json({
    message: 'This route is configured to only revalidate the layout in development',
  })
}
```

1. **Visit** [http://localhost:3000/api/revalidate/all](http://localhost:3000/api/revalidate/all) and you should see the same message above in your browser.
2. **Visit** [http://localhost:3000/posts](http://localhost:3000/posts) to check that it has worked.
You should see a different log in the terminal that finishes with `cache skip`:

```text
GET /posts 200 in 893ms
 │ GET https://q1a918nb.apicdn.sanity.io/v2024-07-24/data/query/production?query=*%5B_type+%3D%3D+%22post%22+%26%26+defined%28slug.current%29%5D%5B0...12%5D%7B%0A++_id%2C+title%2C+slug%0A%7D&returnQuery=false 200 in 743ms (cache skip)
 │ │ Cache skipped reason: (cache-control: no-cache (hard refresh))
```

Refresh the page again, and the request should once again be a `cache hit`.

Now you can purge the entire Next.js cache on demand and observe the caching behavior of every `fetch` request made in the application.

The two uses of `client.fetch` in your application currently have the same configuration. This presents an opportunity to make our code more DRY (don't repeat yourself) and set some sensible defaults. In the next lesson, let's do this and better understand how Sanity and Next.js caching work together.

## [Combining Sanity CDN with the Next.js Cache](/learn/course/controlling-cached-content-in-next-js/combining-sanity-cdn-with-the-next-js-cache)

Implement Sanity Client in a way that complements and leverages the Next.js cache with sensible defaults.

Even if Next.js had no affordances for caching – or you used a framework with no built-in caching options – the Sanity CDN provides a performant way to query content. This lesson briefly summarizes how querying the correct endpoint in your Next.js application is situation-dependent.

1. See the documentation on the [API CDN](https://www.sanity.io/learn/content-lake/api-cdn) for more details

## Querying Sanity's API or CDN

Your Sanity Client's `useCdn` setting determines whether your fetch request uses the CDN or the API. Depending on the context of your fetch, you may choose to query either one.

### Querying the Sanity API

Querying the API is slower but guaranteed to be fresh. Your project's plan will have a lower allocation of API requests, so you should factor that into your usage.
For these reasons you should only query the API (`useCdn: false`) when requests are infrequent and fast responses are not required. Examples include statically building pages ahead of time and performing incremental static revalidation or tag-based revalidation.

### Querying the Sanity CDN

Querying the CDN is faster but not guaranteed to be fresh. The Sanity CDN's cache is flushed every time a publish mutation is performed on a dataset, so there may be a brief delay between the latest content being published and it becoming available from the CDN.

Your project's plan will have a far higher allocation of CDN requests. You should query the CDN (`useCdn: true`) when requests are frequent and fast responses are desired. Examples include all situations other than those outlined above where the API is preferred. This makes `useCdn: true` a sensible default.

### Overriding `useCdn` per-request

Whatever your Sanity Client configuration, it can be overridden at the time of request using the `withConfig` method. In this section, you'll configure `generateStaticParams` to build individual post pages at build time, instead of at request time.

1. Read more about [`generateStaticParams` on the Next.js documentation](https://nextjs.org/docs/app/api-reference/functions/generate-static-params)

1. **Add** an exported `generateStaticParams` function to the dynamic route

```typescript:src/app/(frontend)/posts/[slug]/page.tsx
// update your imports
import { POST_QUERY, POSTS_SLUGS_QUERY } from "@/sanity/lib/queries";

// add this export
export async function generateStaticParams() {
  const slugs = await client
    .withConfig({ useCdn: false })
    .fetch(POSTS_SLUGS_QUERY);

  return slugs
}
```

When you next deploy your Next.js application, every individual post page will be created ahead of time, with data that was fresh at deploy time, fetched directly from the Sanity API.
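The decision rules above can be condensed into a small helper. This is only a sketch of the guidance in this lesson, not part of `next-sanity`; the `FetchContext` type and `shouldUseCdn` function are hypothetical names:

```typescript
// Hypothetical helper condensing this lesson's guidance:
// query the API only for infrequent, build-time style fetches;
// query the CDN for everything served at request time.
type FetchContext =
  | "build" // e.g. generateStaticParams at deploy time
  | "revalidation" // ISR / tag-based revalidation in the background
  | "request" // a live, user-facing request

export function shouldUseCdn(context: FetchContext): boolean {
  // Build-time and background revalidation fetches are infrequent and
  // can tolerate the slower, always-fresh API endpoint.
  return context === "request"
}
```

Used with the per-request override above, `client.withConfig({ useCdn: shouldUseCdn("build") })` would hit the API during a build, matching the `generateStaticParams` example.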
## Sanity Client and Next.js cache configuration

If you completed the previous lesson, you are relying on Sanity Live to perform caching and live updates. It is possible to manually set cache configuration options when performing fetches with the Sanity Client.

### `sanityFetch` helper function

As detailed in the [next-sanity readme](https://github.com/sanity-io/next-sanity?tab=readme-ov-file#caching-and-revalidation), you may wish instead to create a helper function that wraps Sanity Client with caching configuration options. The `next` key in this configuration takes the same options found in the Next.js documentation for [controlling how fetches are cached](https://nextjs.org/docs/app/building-your-application/data-fetching/fetching-caching-and-revalidating#caching-data).

1. **Update** your `client.ts` file to include an exported helper function, `sanityFetch`

```typescript:src/sanity/lib/client.ts
import {createClient, type QueryParams} from 'next-sanity'

// other imports, client export

// 👇 add this function
export async function sanityFetch<const QueryString extends string>({
  query,
  params = {},
  revalidate = 60, // default revalidation time in seconds
  tags = [],
}: {
  query: QueryString
  params?: QueryParams
  revalidate?: number | false
  tags?: string[]
}) {
  return client.fetch(query, params, {
    next: {
      revalidate: tags.length ? false : revalidate, // for simple, time-based revalidation
      tags, // for tag-based revalidation
    },
  })
}
```

The most important lines are the `next` options passed to `client.fetch`. By default this helper uses a 60-second revalidation, but it will also accept tags for tag-based revalidation (covered in another lesson in this course).

1. **Update** your `/posts` route to use our own `sanityFetch`

```typescript:src/app/(frontend)/posts/page.tsx
// update your imports
import { sanityFetch } from "@/sanity/lib/client";

// update your fetch
const posts = await sanityFetch({query: POSTS_QUERY});
```

1.
**Update** your individual post route to do the same

```typescript:src/app/(frontend)/posts/[slug]/page.tsx
// update your imports
import { client, sanityFetch } from '@/sanity/lib/client'

// update your fetch
const post = await sanityFetch({
  query: POST_QUERY,
  params,
})
```

1. This route still uses `client` directly in the `generateStaticParams` export. It's acceptable to use the Sanity Client when the cache settings of a request are not essential.

### What about `options.cache`?

Next.js will also accept options passed to the [Web Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)'s cache, but incorrectly configuring both `next` and `cache` options can lead to errors. It's simplest to let `cache` fall back to its default setting and only focus on `next` in the following lessons.

1. See Next.js [documentation on how it handles fetch](https://nextjs.org/docs/app/api-reference/functions/fetch#fetchurl-options).

### Onward!

With those changes made, you should see no change on your front end! However, you're in a much better place to implement time- and tag-based revalidation. Let's look a little more into time-based revalidation in the next lesson.

## [Time-based cache revalidation](/learn/course/controlling-cached-content-in-next-js/time-based-cache-revalidation)

Time-based revalidation is simple to set up and predictable. It might be "enough" for your project.

1. See the Next.js documentation about [time-based revalidation](https://nextjs.org/docs/app/building-your-application/caching#time-based-revalidation)

For every `fetch` made so far, you've only been implementing time-based revalidation. This has been primarily for convenience. It's simple to set up and invalidates itself.

The default setting of 60 seconds is okay. The type of content your application displays and the volume of traffic you receive will determine whether to modify that setting.
Content that changes often, like product pages, may benefit from a lower revalidation time because accuracy is more important than raw speed. Other content, like your terms and conditions pages, likely changes so infrequently that its cache time could safely be set to hours. The blog post index and individual pages fall into the latter category. Let's update the requests on both routes to cache the responses for an hour. 1. **Update** the fetch on your post-index route ```typescript:src/app/(frontend)/posts/page.tsx const posts = await sanityFetch({ query: POSTS_QUERY, revalidate: 3600, }) ``` 1. **Update** the fetch on your post route ```typescript:src/app/(frontend)/posts/[slug]/page.tsx const post = await sanityFetch({ query: POST_QUERY, params, revalidate: 3600, }) ``` Instead of hitting Sanity's CDN at least once a minute, requests will be served by the Next.js cache for up to an hour, a significant performance and bandwidth improvement. Your business stakeholders and end-users are happy. Your content authors could be less so. ## The "typo" problem Imagine you're a content author who has just published a new post – or fixed a typo on a post – and you want the site updated immediately. Right now, you may need to wait up to an hour for the world to see those changes. You do have a route to clear the entire cache, but this can potentially impact the performance of every request to every page for every user. The _hammer_ option isn't ideal for fixing a typo. Fortunately, the passage of time isn't the only way to invalidate the cache. Next.js provides a way to invalidate a specific route on demand, and Sanity provides a way to run it automatically on content changes. Let's revalidate by path in the next lesson.## [Path-based revalidation](/learn/course/controlling-cached-content-in-next-js/path-based-revalidation) Surgically revalidate individual post pages by their path when updates are made to their document in Sanity Studio. 1.
See the Next.js [documentation on `revalidatePath`](https://nextjs.org/docs/app/api-reference/functions/revalidatePath). Next.js provides a function `revalidatePath`, which will purge the cache for fetches that took place on a specific URL. Implementing this feature is a massive win for your business stakeholders and content authors. * If post routes are revalidated individually, you can safely give them a much longer time-based revalidation setting – perhaps even infinite – significantly reducing Sanity request volume and server bandwidth. * Content authors can press "publish" on a document, which automatically revalidates the cache for just that page, and see their updates published instantly. Currently, each post document in Sanity is rendered on a unique route in Next.js. Because of this 1:1 relationship between document and route, revalidating a cached post is straightforward. The goal is to purge the cache for a post whenever that post is edited – fixing the "typo problem" highlighted in the previous lesson. Ideally, this should happen automatically, and [GROQ-powered webhooks](https://www.sanity.io/learn/compute-and-ai/webhooks) make this possible. ## Why webhooks? GROQ-powered webhooks allow you to automate side effects from the Content Lake based on any mutation to a document in a dataset. While you could automate a function from the Studio to call one of Next.js' revalidate functions, triggering this from a webhook is much safer. The call is guaranteed to come from Sanity's infrastructure – not from the browser as the result of a client interaction. Webhooks also retry automatically should the operation fail. So, for a little extra setup, you get much more reliability. 1. See more about [GROQ-powered webhooks](https://www.sanity.io/learn/compute-and-ai/webhooks) in the documentation. ## Create an API route to revalidate paths The code below is for a new API route in your web application designed to be requested by a GROQ-powered webhook. It will: 1.
Only handle a `POST` request. 2. Confirm that the request came from a Sanity GROQ-powered webhook. 3. Retrieve the `body` from the request. 4. Retrieve the `path` attribute from the request body and revalidate it. 1. **Rename** your `.env` file to `.env.local`, as it will now contain secrets 2. **Update** your `.env.local` file with a new secret to secure the route. It can be any random string ```text:.env.local SANITY_REVALIDATE_SECRET=<any-random-string> ``` You will also add this string to the GROQ-powered webhook you set up in this lesson. 1. **Create** a new route to execute `revalidatePath` ```typescript:src/app/api/revalidate/path/route.ts import { revalidatePath } from 'next/cache' import { type NextRequest, NextResponse } from 'next/server' import { parseBody } from 'next-sanity/webhook' type WebhookPayload = { path?: string } export async function POST(req: NextRequest) { try { if (!process.env.SANITY_REVALIDATE_SECRET) { return new Response( 'Missing environment variable SANITY_REVALIDATE_SECRET', { status: 500 }, ) } const { isValidSignature, body } = await parseBody<WebhookPayload>( req, process.env.SANITY_REVALIDATE_SECRET, ) if (!isValidSignature) { const message = 'Invalid signature' return new Response(JSON.stringify({ message, isValidSignature, body }), { status: 401, }) } else if (!body?.path) { const message = 'Bad Request' return new Response(JSON.stringify({ message, body }), { status: 400 }) } revalidatePath(body.path) const message = `Updated route: ${body.path}` return NextResponse.json({ body, message }) } catch (err) { console.error(err) return new Response((err as Error).message, { status: 500 }) } } ``` This route confirms the value of the `SANITY_REVALIDATE_SECRET` environment variable and handles any invalid requests to the route. Because this API route is only configured to handle requests with a `POST` method, visiting it in your browser (which performs a `GET` request) will not work. Let's configure the webhook to do that for us. 
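For reference, the webhook you'll configure shortly sends this route a JSON body with a single `path` attribute. A payload for the route above might look like this (the slug value is illustrative):

```json
{
  "path": "/posts/my-first-post"
}
```

Any request without that attribute – or without a valid signature – is rejected by the checks above.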
1. In this lesson, you'll only configure `revalidatePath` for a single static path. Still, it can also revalidate an entire dynamic path—like `/(frontend)/posts/[slug]`—see the Next.js [documentation on `revalidatePath`](https://nextjs.org/docs/app/api-reference/functions/revalidatePath) for more details. ## Create a webhook A GROQ-powered webhook allows Content Lake to perform requests on your behalf whenever a mutation on a dataset is made. This is ideal for content operations like on-demand cache invalidation and will power a workflow where even a page set to be indefinitely cached can be purged and repopulated on demand. ### Remote access for your local environment A tricky part of developing GROQ-powered webhooks is that even when making content changes in your Sanity Studio's local development environment, webhooks will fire remotely in the Content Lake – but the Content Lake cannot request API routes in your local development environment. You'll need to share your local URL with the world. Several services can help you do this. Perhaps the simplest and best-known is Ngrok. 1. **Create** a new [free account at Ngrok](https://ngrok.com/) if you do not already have one. 2. Once logged in, complete any installation instructions for your machine 3. **Run** the following command to share your local Next.js app remotely ```text ngrok http http://localhost:3000 ``` Now, in the terminal, you should see details about your account, version, and more, along with a "Forwarding" URL that looks something like this: ```text https://8067-92-26-32-42.ngrok-free.app ``` Open that URL in your browser to see your local Next.js app. You can click links – and even open `/studio` (though you will need to add a CORS origin to interact with it). Now you have a remote URL for your local development environment that a GROQ-powered webhook can request. ### Create a new webhook Fortunately, we have prepared a webhook template to add to your Project.
It has most of the settings preconfigured. You'll just need to update a few that are unique to you: 1. **Open** this [path revalidation webhook template](https://www.sanity.io/manage/webhooks/share?name=Path-based%20Revalidation%20Hook%20for%20Next.js&description=&url=https%3A%2F%2FYOUR-PRODUCTION-URL.TLD%2Fapi%2Frevalidate%2Fpath&on=create&on=update&on=delete&filter=_type%20in%20%5B%22post%22%5D&projection=%7B%0A%20%20%22path%22%3A%20select(%0A%20%20%20%20_type%20%3D%3D%20%22post%22%20%3D%3E%20%22%2Fposts%2F%22%20%2B%20slug.current%2C%0A%20%20%20%20slug.current%0A%20%20)%0A%7D&httpMethod=POST&apiVersion=v2021-03-25&includeDrafts=&headers=%7B%7D) 2. **Update** the URL to use your ngrok URL 3. **Click** "Apply Webhook" and select your project, apply to all datasets 4. **Click** "Edit Webhook" on the next screen, scroll to the bottom, and add the same "Secret" you added to your `.env.local` file 5. **Click** "Save" You're now ready to publish changes in Sanity Studio to automatically revalidate a document's cached page in your web application. 1. The "Path" being revalidated is set within the "Projection" of the webhook using the [GROQ function](https://www.sanity.io/docs/groq-functions) `select()`. As your application grows, it could be extended to include other unique paths based on the document type or any other logic. ## Test it out With all the machinery in place, you can test that what you've set up works. 1. **Visit** an individual post page in your application, and check the terminal first to ensure it was a cache `HIT`. If it is a cache `MISS`, reload the page, and you should see it cached for the following request. 1. **Open** that post within your Sanity Studio at [http://localhost:3000/studio](http://localhost:3000/studio), change the `title` and publish it. Almost instantly after publishing, in your terminal you should see a `POST` request which was automatically made from the GROQ-powered webhook: ```text POST /api/revalidate/path 200 in 3540ms ``` 1.
**Reload** the same individual post page again. You should see a cache `MISS` in the terminal and your updated title on the page. If you reload the page again, it should be cached for the next request. ### Handling stale data Are you still seeing stale data? Currently, GROQ-powered webhooks fire when the mutation is made via the Sanity API, but _before_ the Sanity CDN is updated. Your request may have been made in the brief period in between, and the Next.js cache repopulated with stale data from the Sanity CDN. If you encounter this situation, there are ways to mitigate it. Within your API route to revalidate the path, the `parseBody` function from `next-sanity` takes a third argument to add a short delay before proceeding. ```typescript:src/app/api/revalidate/path/route.ts const {isValidSignature, body} = await parseBody<WebhookPayload>( req, process.env.SANITY_REVALIDATE_SECRET, true ) ``` Alternatively, if you ensure Next.js caches **every** fetch, you could change the Sanity Client configuration to never use the CDN by setting `useCdn: false`. 1. Beware of the potential pitfalls of constantly querying the Sanity API directly. You don't want your application to go viral with uncached requests to the Sanity API. ## We can go deeper What you have set up now is pretty great! Individual post pages have long-lived cache times and are revalidated automatically on demand when changes are published. There are two new problems to solve: 1. The post index page **isn't** being revalidated when a post changes, so an update to the title does not appear. 2. The post index and individual post pages show `author` and `category` document values. If those documents are updated, we need to revalidate any route that renders them.
Fortunately, Next.js also offers "tag-based revalidation" for "update once, revalidate everywhere" content operations, and you'll set it up in the next lesson.## [Tag-based revalidation](/learn/course/controlling-cached-content-in-next-js/tag-based-revalidation) Assign tags to queries to revalidate the cache of many paths by targeting any individual tag. 1. See the Next.js [documentation about `revalidateTag`](https://nextjs.org/docs/app/api-reference/functions/revalidateTag) So far, the focus has been on revalidating individual post pages when a post changes. But with the current content model, more than just post-type document fields are queried—and there's more than one route where those responses are rendered. A `post`-type document could contain many `category` references and a single `author` reference. Ideally, a content author editing **any** of these document types should impact **every** route where that content is rendered. This is made possible with `revalidateTag`. 1. Time-based and tag-based revalidation cannot be used together. This is why your `sanityFetch` function is configured to ignore the `revalidate` parameter if `tags` are provided. In this lesson, you'll remove time-based revalidation from your existing queries in favor of tag-based revalidation. You now know how to implement time- and path-based revalidation in the future for instances where they are applicable. ## Tag your queries When performing a fetch, `tags` can be an array of strings of any value. It's a standard convention to use the Sanity document types expected to be in the response. For the posts index, add tags for the three document types that provide data for the response. If any documents of these types change, you'll want to revalidate any page that renders them. 1.
**Update** your fetch in the post-index route ```typescript:src/app/(frontend)/posts/page.tsx const posts = await sanityFetch({ query: POSTS_QUERY, tags: ['post', 'author', 'category'], }) ``` You can be more granular for an individual post page. You don't need to revalidate **every** post page because **one** post has changed. Thankfully, the dynamic route provides us with a unique identifier for this post—its slug—so you can use it for this page's cache tags. 1. **Update** your fetch in the post route ```typescript:src/app/(frontend)/posts/[slug]/page.tsx const post = await sanityFetch({ query: POST_QUERY, params, tags: [`post:${params.slug}`, 'author', 'category'], }) ``` ### Create an API route to revalidate tags Once again, you will need to create an API route to accept a request from a GROQ-powered webhook and perform the revalidation. 1. **Create** a new API route for `revalidateTag` ```typescript:src/app/api/revalidate/tag/route.ts import { revalidateTag } from 'next/cache' import { type NextRequest, NextResponse } from 'next/server' import { parseBody } from 'next-sanity/webhook' type WebhookPayload = { tags: string[] } export async function POST(req: NextRequest) { try { if (!process.env.SANITY_REVALIDATE_SECRET) { return new Response( 'Missing environment variable SANITY_REVALIDATE_SECRET', { status: 500 }, ) } const { isValidSignature, body } = await parseBody<WebhookPayload>( req, process.env.SANITY_REVALIDATE_SECRET, true, ) if (!isValidSignature) { const message = 'Invalid signature' return new Response(JSON.stringify({ message, isValidSignature, body }), { status: 401, }) } else if (!Array.isArray(body?.tags) || !body.tags.length) { const message = 'Bad Request' return new Response(JSON.stringify({ message, body }), { status: 400 }) } body.tags.forEach((tag) => { revalidateTag(tag) }) return NextResponse.json({ body }) } catch (err) { console.error(err) return new Response((err as Error).message, { status: 500 }) } } ``` 1.
If you wish to delay the revalidation due to the Sanity CDN, include the third argument in `parseBody`, highlighted above. ## Create a webhook 1. Instructions for how to test webhooks during local development are in the previous lesson: [Path-based revalidation](https://www.sanity.io/learn/course/controlling-cached-content-in-next-js/path-based-revalidation) Once again, we have prepared a webhook template for your Project. It has most of the settings preconfigured. You'll just need to update a few that are unique to you: 1. **Open** this [tag revalidation webhook template](https://www.sanity.io/manage/webhooks/share?name=Tag-based%20Revalidation%20Hook%20for%20Next.js%2015&description=&url=https%3A%2F%2FYOUR-PRODUCTION-URL.TLD%2Fapi%2Frevalidate%2Ftag&on=create&on=update&on=delete&filter=_type%20in%20%5B%22post%22%2C%20%22author%22%2C%20%22category%22%5D&projection=%7B%22tags%22%3A%20%5B_type%2C%20_type%20%2B%20%22%3A%22%20%2B%20slug.current%5D%7D&httpMethod=POST&apiVersion=v2021-03-25&includeDrafts=&headers=%7B%7D) 2. **Update** the URL to use your ngrok URL 3. **Click** "Apply Webhook" and select your project, apply to all datasets 4. **Click** "Edit Webhook" on the next screen, scroll to the bottom, and add the same "Secret" you added to your `.env.local` file 5. **Click** "Save" You're now ready to automatically revalidate your posts index and individual post pages in your web application simply by changing a `post`, `author`, or `category` document in Sanity Studio. 1. According to the Next.js documentation, `revalidateTag` only invalidates the cache when the path is _next_ visited. This means calling `revalidateTag` with a dynamic route segment will not immediately trigger many revalidations at once. ## Test it out 1. If you still have the path-based revalidation webhook enabled, disable it in Manage. With all the machinery in place, you can test that what you've set up works. 1.
**Visit** the post index page at [http://localhost:3000/posts](http://localhost:3000/posts), and check the terminal first to ensure it was a cache `HIT`. If it is a cache `MISS`, reload the page, and you should see it cached for the next request. 1. **Open** any `category` or `author`-type document within your Sanity Studio at [http://localhost:3000/studio](http://localhost:3000/studio), change any field and publish. Almost instantly after publishing, in your terminal you should see a `POST` request which was automatically made from the GROQ-powered webhook: ```text POST /api/revalidate/tag 200 in 3540ms ``` 1. **Reload** the post index page and an individual post page. You should see a cache `MISS` in the terminal and your updated content on both the post index and the individual post page. If you reload again, each page should be cached for the next request. 1. Are you still seeing stale data? The previous lesson includes instructions on how to mitigate the time between a webhook firing and the CDN being repopulated: [Path-based revalidation](https://www.sanity.io/learn/course/controlling-cached-content-in-next-js/path-based-revalidation) With that, you're all cached up with somewhere to go. Let's review it in the final lesson.## [Quiz to win cache prizes](/learn/course/controlling-cached-content-in-next-js/conclusion) Let's review what you've learned about caching and balancing the content you have with the people it serves. With all these lessons completed, your application now only has one caching strategy: tag-based revalidation. It's a good one, but as your application grows, it may not be so one-sided. Individual pages with slow-moving content and few connections – such as terms and conditions or help pages – may benefit from long revalidation times and path-based revalidation. Connected content where many document types are joined together with references will continue to benefit from tag-based revalidation.
When to query the Sanity API or the CDN is also situation-dependent. You should be able to query the CDN most of the time, but where requests are made infrequently—or demand freshness—querying the API sparingly can still be useful. To test what you've learned, let's take a brief quiz: **Question:** Caching impacts business users because 1. Time is money 2. It directly ties to bandwidth and server costs 3. Cache performance is their favorite KPI 4. Caching strategies impact tax liability **Question:** Caching impacts content authors because 1. It's their favorite thing to consider while authoring 2. They typically have strong opinions on Varnish 3. They'd like to see the impact of their published changes immediately 4. They expect manual control over caching configuration **Question:** Caching impacts end users because: 1. They have strong opinions on database caching 2. They prioritize fast and accurate content 3. They appreciate cache-control headers deeply 4. They're famously patient **Question:** Time-based revalidation is 1. Only available on paid plans 2. Impossible to set up 3. Simple and okay 4. Complex and not recommended **Question:** Path-based revalidation is 1. Able to revalidate dynamic paths 2. Able to revalidate the entire layout 3. Able to revalidate static paths 4. All the above **Question:** Tag-based revalidation is 1. Ideal for connected content 2. Ideal for single pages 3. Impossible 4. Required **Question:** GROQ-powered webhooks fire 1. When the Sanity CDN is refreshed 2. When manually triggered 3. When a mutation is performed on a dataset 4. When a page refreshes# [Testing Sanity Studio](/learn/course/testing-sanity-studio) Learn to balance test coverage with development velocity while protecting the critical operations that power your Content Operating System. Establish automated testing strategies, document business requirements in executable code, and enable confident iteration.
From validation logic to React components, develop an intentional testing strategy that ensures reliability and enables you to ship Studio changes with confidence. ## [Why Testing Matters for Studio Development](/learn/course/testing-sanity-studio/why-testing-matters-for-studio-development) Testing isn't just quality assurance—it's strategic infrastructure that protects your business logic and enables rapid iteration. Understand how tests document requirements in executable code, provide confidence before changes reach content editors, and free your team to move faster. Learn what makes Studio customizations worth testing and how to think about testing as an investment in long-term velocity. This course takes an incremental approach to writing tests, starting with validation and access control logic, then progressing to more complex React components that interact with Studio's APIs. This building-block approach helps you establish a solid foundation of test coverage that grows with your Studio's complexity. ## What you'll build By the end of this course, you'll have a complete test suite for a real Sanity Studio that manages events, artists, and venues. You'll test: * **Access control logic** - Permission checks and business rules * **Validation logic** - Custom schema validation rules * **React components** - Custom input components with Sanity hooks * **Integration workflows** - CI/CD pipelines that run tests automatically Upon completing this course, you'll be able to: * Set up Vitest in a Sanity Studio project * Write tests for validation and access control functions * Test async validation functions with mocked clients * Create reusable test fixtures for consistent mocking * Test React components that use Sanity UI and Studio APIs * Implement automated testing in GitHub Actions * Make strategic decisions about what to test and when ## The strategic case for testing Testing is an investment.
It takes time to write tests, configure testing tools, and maintain test suites. But this upfront investment pays dividends: **For solo developers:** * Catch bugs before they reach production * Refactor with confidence * Document your intent for future you **For teams:** * Enable multiple developers to work simultaneously without breaking each other's code * Onboard new team members faster with executable documentation * Review pull requests more efficiently when tests verify behavior **For AI-assisted development:** * Tests provide clear specifications of expected behavior * AI agents can understand your business logic through test descriptions * Automated refactoring becomes safer when tests guard against regressions When code becomes a commodity through AI assistance, your specifications—documented in tests—become your competitive advantage.## [Setting Up Your Testing Environment](/learn/course/testing-sanity-studio/setting-up-your-testing-environment) Configure Vitest as your testing framework and integrate it into your Studio development workflow. Set up test environments for both monorepo and single-app configurations, understand how testing fits into your build process, and write your first test. Learn the fundamentals of test structure, assertions, and organizing test files alongside the code they verify. In this lesson, you'll set up a testing environment using Vitest and understand why testing is a strategic investment for your Sanity Studio. ## Why test your Studio? With Sanity's code-first architecture, your Studio is configured through the code you write—custom inputs, validation functions, preview configuration, formatting helpers, and other business logic. Testing this code before it reaches the Studio gives you confidence that your changes won't break the editing experience. When you write tests for your Studio code, you're not checking that it runs—you're encoding your team's business requirements and design decisions.
Tests become documentation of how your Studio should work, written in code that can verify itself. Consider a concert venue booking system. If you write a validation rule that certain event types must have a venue, a test ensures this rule works correctly: * When a concert event has no venue, validation fails with a helpful message * When a livestream event has no venue, validation passes (it doesn't need one) * When someone modifies the validation logic later, the test catches breaking changes This is powerful when developing new features: write tests that describe the expected behavior first, then implement the code that makes them pass. Tests become executable specifications that document your business logic. ## Testing integrates into your workflow Testing fits into your development workflow at multiple points: * **During local development** - Watch mode provides instant feedback as you write code * **In pull requests** - Automated CI runs validate changes before code review * **Before deployment** - Tests ensure your changes won't disrupt content editors By implementing a testing strategy, you can iterate on your Studio with confidence, knowing that your custom inputs, schema helpers, and validation functions are covered by tests. ## Setting up Vitest [Vitest](https://vitest.dev) is a modern testing framework designed for TypeScript projects. It provides a fast, developer-friendly experience with instant feedback through watch mode. ### Monorepo configuration This repository is a monorepo with multiple apps (`apps/studio`, `apps/tickets`, `apps/web`). Vitest's workspace feature lets you run tests across all apps from the root, or target individual apps. First, install Vitest at the root level: ```sh pnpm add -D vitest -w ``` The `-w` flag installs to the workspace root, making Vitest available to all packages. 
Create a `vitest.config.ts` file at the repository root: ```typescript:vitest.config.ts import {defineConfig} from 'vitest/config' export default defineConfig({ test: { // Automatically discover test configs in all apps projects: ['apps/*'], }, }) ``` This tells Vitest to look for test configurations in each app directory. Each app can have its own specialized config. 1. Working with a standalone Studio app instead of in a monorepo? You can skip the workspace configuration and use `defineConfig()` directly in your Studio's `vitest.config.ts`. ### Studio app test configuration Now create `apps/studio/vitest.config.ts`: ```typescript:apps/studio/vitest.config.ts import {defineProject} from 'vitest/config' export default defineProject({ test: { name: 'studio', include: ['**/*.test.ts'], environment: 'node', }, }) ``` Notice we use `defineProject()` instead of `defineConfig()`. This provides better type checking for workspace projects. The `name: 'studio'` is required—Vitest needs unique names for each project. Add test scripts in the root `package.json`: ```json:package.json { "scripts": { "test": "vitest" } } ``` Now you can run tests from the root (all apps) or run individual app tests.
Create a test file at `apps/studio/example.test.ts`: ```typescript:apps/studio/example.test.ts import {describe, it, expect} from 'vitest' describe('Vitest setup', () => { it('runs basic assertions', () => { expect(2 + 2).toBe(4) }) it('handles string comparisons', () => { const greeting = 'Hello, Sanity' expect(greeting).toContain('Sanity') }) it('validates arrays', () => { const events = ['concert', 'livestream', 'exhibition'] expect(events).toHaveLength(3) expect(events).toContain('concert') }) }) ``` This test file demonstrates the basic structure: * **`describe`** - Groups related tests together (think of it as a container) * **`it`** - Defines an individual test case (read it as "it should...") * **`expect`** - Makes assertions about values (this is where actual testing happens) Run the tests. Since we're in a monorepo, you have multiple options: ```sh # From the repository root - runs all tests in all apps pnpm test # From the root - run only Studio tests pnpm test --project=studio ``` For now, the simplest approach is to run from the root with `pnpm test`. 1. Create the `apps/studio/example.test.ts` file with the code above and run `pnpm test` to verify your setup works. You should see output indicating all three tests passed. Vitest enters watch mode, waiting for file changes to rerun tests automatically. ### The arrange-act-assert pattern Each test follows a three-step pattern: 1. **Arrange** - Set up the data and conditions 2. **Act** - Execute the code being tested 3. **Assert** - Verify the result matches expectations 1. The arrange-act-assert pattern works great for [Test-Driven Development (TDD)](https://en.wikipedia.org/wiki/Test-driven_development). Write the test first (it fails), implement the code (it passes), then refactor with confidence. 
Here's an example with the event domain: ```typescript describe('Event type classification', () => { it('identifies concerts as venue-required events', () => { // Arrange const eventType = 'concert' const venueRequiredTypes = ['concert', 'exhibition'] // Act const requiresVenue = venueRequiredTypes.includes(eventType) // Assert expect(requiresVenue).toBe(true) }) }) ``` This pattern keeps tests readable and maintainable. When a test fails, you can quickly identify which stage failed. ## What to test (and what not to test) Not everything needs a test. Focus on code that contains business logic or could break in surprising ways: **Write tests for:** * Validation functions that enforce business rules * Helper functions that transform or format data * Custom input components with complex interactions * Preview configurations that shape how content appears **Don't test:** * Simple field definitions with no logic * Third-party library code (assume it's tested) * Trivial getters and setters with no transformation For this events Studio, high-value tests would cover: * Validation: "Concert events must have a venue, livestream events don't" * Date logic: "Doors open time calculates correctly from event date" * URL validation: "Ticket URLs must be valid HTTPS URLs" ## Watch mode: your testing companion Leave `pnpm test` running while you develop. Vitest watches your files and automatically reruns affected tests when you save changes. Try modifying your test file—change an assertion to make it fail: ```typescript expect(2 + 2).toBe(5) ``` Vitest immediately detects the change and shows you the failure. Change it back to `4` and the tests pass again. This instant feedback loop helps you catch errors early and iterate quickly. 1. With watch mode running, modify the test to make it fail, then fix it. Observe how Vitest automatically reruns tests on file changes. 
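To make the "doors open" item from the list above concrete, here's the shape such a helper might take (the name `getDoorsOpenTime` and its 60-minute default are hypothetical, not part of the course repository):

```typescript
// Hypothetical helper: doors open a fixed number of minutes before the event starts
function getDoorsOpenTime(eventStart: Date, minutesBefore = 60): Date {
  return new Date(eventStart.getTime() - minutesBefore * 60_000)
}

const start = new Date('2025-06-01T20:00:00Z')
console.log(getDoorsOpenTime(start).toISOString()) // '2025-06-01T19:00:00.000Z'
```

A test for it follows the same arrange-act-assert shape shown earlier, which is exactly why small pure helpers like this are the cheapest code to cover.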
## Next steps You now have a working test environment and understand why testing is a strategic investment for Studio development. You've set up Vitest in a monorepo configuration with workspace projects, learned the basic structure of tests with `describe`, `it`, and `expect`, and discovered how watch mode provides instant feedback as you develop. You also know what types of code are worth testing—validation functions, helper utilities, and custom components with business logic—versus what to skip, like simple field definitions with no logic. In the next lesson, you'll test validation logic and access control rules from the events Studio, building confidence with real-world examples. 1. Think about one validation function or helper in your own Studio that would benefit from testing. You'll write tests for it as you progress through this course.## [Testing Validation and Access Control](/learn/course/testing-sanity-studio/testing-validation-and-access-control) Start with the simplest testing scenario: functions with no external dependencies or side effects. Test access control logic that determines who can edit fields, validation functions that enforce business rules, and utility functions that transform data. These isolated functions are straightforward to test and often contain critical business logic that protects content quality across your organization. In this lesson, you'll test validation logic and access control rules from the events Studio—functions with no external dependencies that are straightforward to test. ## Why start with isolated functions?
A "pure" function is predictable and isolated: * **Same inputs always produce same outputs** - No randomness or hidden state * **No side effects** - Doesn't modify external state, make API calls, or change files * **Easy to test** - Pass inputs, verify outputs, done Functions with no external dependencies are the easiest place to start testing because they require no mocks, no setup, and no teardown. The function is completely self-contained. ## Testing validation and access control The Studio includes access control logic that determines who can edit certain fields. The slug field has a rule: anyone can set the initial slug, but only administrators can change it once set: ```typescript:apps/studio/helpers.ts import type {CurrentUser} from 'sanity' /** * Determines if the current user can edit a slug field * Only administrators can edit existing slugs */ export function canEditSlug(user?: Omit<CurrentUser, 'role'> | null): boolean { return user?.roles.some((role) => role.name === 'administrator') ?? false } ``` 1. Learn more about Sanity's role-based access control in [Users, roles and using roles](https://www.sanity.io/learn/course/introduction-to-users-and-roles) This function is pure: given a user object, it returns whether they have admin privileges. No API calls, no state changes, no external dependencies. To ensure this pure helper function matches our business logic, we should test the following scenarios: 1. **Admin user** - Has administrator role (should return `true`) 2. **Regular user** - Has non-admin role like editor (should return `false`) 3. **Multiple roles** - User with both editor and admin roles (should return `true`) 4. **Null user** - No user logged in (should return `false`) 5. 
**Empty roles** - User exists but has no roles assigned (should return `false`) ```typescript:apps/studio/helpers.test.ts import {describe, it, expect} from 'vitest' import type {CurrentUser} from 'sanity' import {canEditSlug} from './helpers' describe('canEditSlug', () => { it('allows administrators to edit slugs', () => { const adminUser: Omit<CurrentUser, 'role'> = { id: 'admin-user', name: 'Admin User', email: 'admin@example.com', roles: [{name: 'administrator', title: 'Administrator'}], } expect(canEditSlug(adminUser)).toBe(true) }) it('prevents non-admin users from editing slugs', () => { const regularUser: Omit<CurrentUser, 'role'> = { id: 'regular-user', name: 'Regular User', email: 'user@example.com', roles: [{name: 'editor', title: 'Editor'}], } expect(canEditSlug(regularUser)).toBe(false) }) it('handles users with multiple roles', () => { const multiRoleUser: Omit<CurrentUser, 'role'> = { id: 'multirole-user', name: 'Multi-role User', email: 'multi@example.com', roles: [ {name: 'editor', title: 'Editor'}, {name: 'administrator', title: 'Administrator'}, ], } expect(canEditSlug(multiRoleUser)).toBe(true) }) it('prevents access when user is `null`', () => { expect(canEditSlug(null)).toBe(false) }) it('prevents access when user has no roles', () => { const userWithoutRoles: Omit<CurrentUser, 'role'> = { id: 'no-roles', name: 'No Roles', email: 'noroles@example.com', roles: [], } expect(canEditSlug(userWithoutRoles)).toBe(false) }) }) ``` Run the tests from the root with `pnpm test`. All tests should pass. This approach ensures the permission check works correctly for all possible user states, protecting your content from unauthorized edits. 
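In the schema, a helper like this typically backs a field-level `readOnly` callback, so the rule "anyone can set the initial slug, only admins can change it" combines the field's current value with the permission check. A self-contained sketch (the `slugIsReadOnly` name and the simplified types are assumptions for illustration; in the Studio you'd pass the real `CurrentUser` from the callback context):

```typescript
// Simplified stand-ins for Sanity's user types (illustration only)
type Role = {name: string; title: string}
type User = {roles: Role[]} | null | undefined

// Same logic as the helper above, restated so this example is self-contained
function canEditSlug(user: User): boolean {
  return user?.roles.some((role) => role.name === 'administrator') ?? false
}

// Hypothetical readOnly rule: editable while unset, admin-only once a value exists
function slugIsReadOnly(value: string | undefined, currentUser: User): boolean {
  return Boolean(value) && !canEditSlug(currentUser)
}
```

Because `slugIsReadOnly` is also pure, it can be tested with the same pattern as `canEditSlug`: one `it()` per combination of value and role.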
## Testing language tag validation Another example is validating the IANA language tags you might use to localize your content: ```typescript:apps/studio/validation.ts import type {StringRule} from 'sanity' /** * IANA language tag pattern (BCP 47) * @see https://en.wikipedia.org/wiki/IETF_language_tag * * Supports formats like: * - en (2-letter language code) * - en-US (language + region) * - zh-Hant-TW (language + script + region) * - en-US-x-private (with private use extensions) */ export const LANGUAGE_TAG_PATTERN = /^[a-z]{2,3}(?:-[A-Z][a-z]{3})?(?:-(?:[A-Z]{2}|\d{3}))?(?:-[a-zA-Z0-9]{5,8}|-[0-9][a-zA-Z0-9]{3})*$/ /** * Validation function for IANA language tags (BCP 47 format) * * @example * ```ts * defineField({ * name: 'language', * type: 'string', * validation: validateLanguageTag * }) * ``` */ export const validateLanguageTag = (rule: StringRule): StringRule => rule.regex(LANGUAGE_TAG_PATTERN, { name: 'IANA language tag', }) ``` 1. This example uses [TSDoc](https://typedoc.org/)-style comments to annotate and provide at-a-glance, inline documentation when hovering over definitions. 
Our test suite for this validation might look something like: ```typescript:apps/studio/validation.test.ts import {describe, it, expect} from 'vitest' import {LANGUAGE_TAG_PATTERN} from './validation' describe('LANGUAGE_TAG_PATTERN', () => { it('matches valid language tags (real-world examples)', () => { // Simple language codes (most common) expect(LANGUAGE_TAG_PATTERN.test('en')).toBe(true) // English expect(LANGUAGE_TAG_PATTERN.test('fr')).toBe(true) // French expect(LANGUAGE_TAG_PATTERN.test('ja')).toBe(true) // Japanese expect(LANGUAGE_TAG_PATTERN.test('nan')).toBe(true) // Min Nan Chinese (3-letter ISO 639-3) // Language + Region (localization) expect(LANGUAGE_TAG_PATTERN.test('en-US')).toBe(true) // US English expect(LANGUAGE_TAG_PATTERN.test('en-GB')).toBe(true) // British English expect(LANGUAGE_TAG_PATTERN.test('fr-CA')).toBe(true) // Canadian French expect(LANGUAGE_TAG_PATTERN.test('es-419')).toBe(true) // Latin American Spanish (UN M.49 numeric code) }) it('rejects common mistakes', () => { // Case errors (most common mistake) expect(LANGUAGE_TAG_PATTERN.test('EN')).toBe(false) // Language must be lowercase expect(LANGUAGE_TAG_PATTERN.test('en-us')).toBe(false) // Region must be uppercase (en-US) expect(LANGUAGE_TAG_PATTERN.test('zh-hant')).toBe(false) // Script must be Title Case (zh-Hant) expect(LANGUAGE_TAG_PATTERN.test('zh-HANT')).toBe(false) // Script cannot be all caps // Wrong separator expect(LANGUAGE_TAG_PATTERN.test('en_US')).toBe(false) // Must use hyphen, not underscore expect(LANGUAGE_TAG_PATTERN.test('en.US')).toBe(false) // Must use hyphen, not period // Using full names instead of codes expect(LANGUAGE_TAG_PATTERN.test('english')).toBe(false) // Must use ISO code 'en', not full name expect(LANGUAGE_TAG_PATTERN.test('English')).toBe(false) // Wrong region code length expect(LANGUAGE_TAG_PATTERN.test('en-USA')).toBe(false) // Region must be 2 letters, not 3 expect(LANGUAGE_TAG_PATTERN.test('en-U')).toBe(false) // Region cannot be 1 letter 
// Confusing region with script expect(LANGUAGE_TAG_PATTERN.test('zh-CN')).toBe(true) // Valid but ambiguous - prefer zh-Hans expect(LANGUAGE_TAG_PATTERN.test('zh-TW')).toBe(true) // Valid but ambiguous - prefer zh-Hant }) }) ``` These functions exhibit a key trait: they're completely isolated. No API calls, no database queries, no React hooks. This makes them fast to test and easy to verify. Pure function tests require no special setup—no providers, no mocks, no configuration. This simplicity makes them an excellent starting point for your testing strategy. More importantly though, these pure functions often form the foundation of critical business rules in your Sanity Studio. Validation functions ensure data integrity, formatting utilities maintain consistency, and helper functions encapsulate important domain logic. By thoroughly testing these functions, you're safeguarding the core rules that protect your content quality. ## Next steps You've tested validation and access control logic: permission checks with edge cases like admin roles and null users, plus IANA language tag validation. Next you'll test functions that need more context—validation rules that query Sanity's Content Lake to enforce business logic. You'll learn to mock the Sanity client, create reusable fixtures, and build a testing harness for async validation logic.

## [Testing Stateful Studio Logic](/learn/course/testing-sanity-studio/testing-stateful-studio-logic) Test validation functions that query your Content Lake to verify business rules across documents. Learn to mock the Sanity client to create controlled test scenarios, build reusable test fixtures that simplify setup, and verify async validation logic that prevents invalid content states. Understand how to test functions that depend on external data without requiring a populated dataset. In this lesson, you'll test validation functions that need context—rules that query your Content Lake to check conditions across multiple documents. 
The validation logic you tested in the previous lesson worked in isolation: it took a user as input and returned a boolean to determine access control. But some business rules require checking other documents in your dataset. Consider validation that depends on dataset state: * _"Only one event can be featured at a time"_ (needs to check if others are featured) * _"Artist cannot have overlapping performances"_ (needs to check other event dates) These validations need to query Content Lake. To test them, you'll mock the Sanity client and validation context, creating reusable fixtures that keep your tests clean and focused on business logic. ## Testing stateful validation functions Event companies need to promote one event above others—the "featured" event appears on the homepage, gets social media promotion, and drives ticket sales. Only one event can be featured at a time. This business rule needs enforcement at the data layer. If two events are featured simultaneously, the homepage breaks and marketing campaigns become confused. 
```typescript:apps/studio/validation.ts import { DEFAULT_STUDIO_CLIENT_OPTIONS, getPublishedId, type BooleanRule, type ValidationBuilder, type ValidationContext, } from 'sanity' /** * Checks if setting this event as featured would result in a single featured event * Business logic function that queries the dataset for other featured events */ export async function isSingleFeaturedEvent( value: boolean | undefined, context: ValidationContext, ): Promise<boolean> { // If not setting to featured, no need to check if (!value) return true const {getClient, document} = context if (!document) { throw new Error('Document context required for validation') } const client = getClient(DEFAULT_STUDIO_CLIENT_OPTIONS) const documentId = getPublishedId(document._id) // Query for other featured events (excluding this document's versions) const existingFeatured = await client.fetch<boolean>( `defined(*[_type == "event" && featured == true && !sanity::versionOf($documentId)][0]._id)`, {documentId}, {tag: 'validation.single-featured-event', perspective: 'raw'}, ) // Return true if no other featured event exists return !existingFeatured } /** * Validation builder for the featured field * Ensures only one event can be featured at a time * * @example * ```ts * defineField({ * name: 'featured', * type: 'boolean', * validation: validateSingleFeaturedEvent * }) * ``` */ export const validateSingleFeaturedEvent: ValidationBuilder<BooleanRule, boolean> = (rule) => rule.custom(async (value, context) => { if (await isSingleFeaturedEvent(value, context)) { return true } return 'Only one event can be featured at a time' }) ``` There is a clean separation between the testable business logic function (`isSingleFeaturedEvent`), which returns a boolean indicating validity, and the validation builder that wraps it with the error message. 
## Understanding mocking When testing functions with external dependencies, you need a controlled environment where you can verify behavior without relying on external systems. **Mocking** creates this test "harness" by replacing real dependencies with controlled test doubles that you configure precisely for each test scenario. A mock is a fake implementation that mimics the behavior of a real object. You control what the mock returns, letting you simulate different scenarios without needing the real dependency. Mocks also track how they're called—which methods were invoked, with what arguments, and how many times—letting you verify your code interacts with dependencies correctly. For validation functions that query Sanity's Content Lake, you'll mock the Sanity client's `fetch()` method. Instead of running actual database queries, the mock returns predefined values you specify. This lets you test scenarios like "no featured events exist" or "another event is already featured" without populating a real dataset. The tests run in milliseconds instead of seconds, and always produce the same results regardless of what data exists in your actual Content Lake. 1. Mocks let you test business logic in isolation. You're verifying your code's behavior, not testing that Sanity's client works (we do that for you). ## Creating test fixtures Testing our various stateful functions requires setup—mock clients, mock contexts, test data. Rather than recreate this setup in every test, you'll use **fixtures**: reusable building blocks that encapsulate common test setup patterns. A fixture is a function that creates consistent test data or dependencies. Instead of writing the same mock setup repeatedly, you call a fixture function that handles the details. This keeps tests focused on what's unique (the scenario being tested) rather than boilerplate (how to create a mock client). 
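The essence of a mock, a canned return value plus call recording as described above, can be sketched in plain TypeScript before reaching for Vitest's `vi.fn()` (names here are illustrative, not part of any library):

```typescript
// A hand-rolled test double: a canned return value plus call recording
function createMockFetch<T>(result: T) {
  const calls: unknown[][] = []
  const fetch = (...args: unknown[]): Promise<T> => {
    calls.push(args) // record every invocation for later verification
    return Promise.resolve(result) // return the configured value, no network
  }
  return {fetch, calls}
}

// Simulate "another event is already featured" without touching a dataset
const mock = createMockFetch(true)
void mock.fetch('*[_type == "event" && featured == true][0]._id')
console.log(mock.calls.length) // 1
```

Vitest's `vi.fn()` provides exactly this, plus assertion helpers like `toHaveBeenCalledWith`, which is what the client fixture in this lesson builds on.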
First, let's create a client fixture that will be reused across all validation tests, ensuring consistency and reducing repetitive code: ```typescript:apps/studio/__tests__/fixtures/client.ts import {test as base, vi, type Mock} from 'vitest' import type {SanityClient} from 'sanity' type MockSanityClient = SanityClient & { fetch: Mock } /** * Helper function to create a mock Sanity client * Use this when you need a client outside of the test fixture * * @example * ```tsx * const mockClient = createMockClient() * mockClient.fetch.mockResolvedValue({...}) * vi.mocked(useClient).mockReturnValue(mockClient) * ``` */ export function createMockClient(): MockSanityClient { return { fetch: vi.fn(), } as unknown as MockSanityClient } /** * Mock Sanity client fixture * * Provides a mocked Sanity client for testing components that use useClient(). * The client has a mocked fetch() method that can be configured per-test. * * @example * ```tsx * import {test, expect} from '@/__tests__/fixtures/client' * * test('fetches data', async ({mockClient}) => { * mockClient.fetch.mockResolvedValue({_id: '123', title: 'Test'}) * * // Your test code here * }) * ``` */ export const test = base.extend<{ mockClient: MockSanityClient }>({ // eslint-disable-next-line no-empty-pattern async mockClient({}, use) { // eslint-disable-next-line react-hooks/rules-of-hooks await use(createMockClient()) }, }) ``` 1. The `mockClient` fixture will be reused across all validation tests in this course and beyond. Investing in good fixtures pays off quickly. This fixture extends Vitest's base `test` function with a `mockClient` property. Each test automatically gets a fresh mock client, preventing tests from interfering with each other. The fixture pattern keeps test setup minimal while ensuring consistency. 
Now let's test our stateful validation function using our `mockClient` fixture as well as some locally defined ones: ```typescript:apps/studio/validation.test.ts import {describe, expect} from 'vitest' import {getDraftId, getPublishedId, type ValidationContext} from 'sanity' import {isSingleFeaturedEvent} from './validation' import {test as it} from './__tests__/fixtures/client' describe('isSingleFeaturedEvent', () => { // Local helper - creates mock event document const createMockEventDocument = (id: string) => ({ _id: id, _type: 'event', _createdAt: '2025-01-01T00:00:00Z', _updatedAt: '2025-01-01T00:00:00Z', _rev: 'mock-rev', }) // Local fixture - creates validation context for featured event tests const createValidationContext = ({documentId, client}: {documentId: string; client: any}) => ({ getClient: () => client, document: createMockEventDocument(documentId), path: ['featured'], }) as unknown as ValidationContext it('returns `true` when no other featured event exists', async ({mockClient}) => { mockClient.fetch.mockResolvedValue(false) // No existing featured event const context = createValidationContext({documentId: 'event-1', client: mockClient}) expect(await isSingleFeaturedEvent(true, context)).toBe(true) }) it('returns `false` when another event is already featured', async ({ mockClient }) => { mockClient.fetch.mockResolvedValue(true) // Another event is featured const context = createValidationContext({documentId: 'event-2', client: mockClient}) expect(await isSingleFeaturedEvent(true, context)).toBe(false) }) it('returns true when unsetting featured (no query needed)', async ({ mockClient }) => { const context = createValidationContext({documentId: 'event-3', client: mockClient}) expect(await isSingleFeaturedEvent(false, context)).toBe(true) // Should not query when value is false expect(mockClient.fetch).not.toHaveBeenCalled() }) it('queries with correct parameters and excludes document versions', async ({ mockClient }) => { mockClient.fetch.mockResolvedValue(false) const documentId = 
getDraftId('event-4') const context = createValidationContext({documentId, client: mockClient}) await isSingleFeaturedEvent(true, context) expect(mockClient.fetch).toHaveBeenCalledWith( expect.any(String), expect.objectContaining({documentId: getPublishedId(documentId)}), // Published ID, not draft expect.objectContaining({tag: 'validation.single-featured-event', perspective: 'raw'}), ) }) }) ``` 1. Validation functions that query your Content Lake are async by nature. All your test functions will need to be asynchronous and use `await` when calling these validators. ### Understanding the test strategy The local helper functions (`createMockEventDocument`, `createValidationContext`) keep test setup close to the tests that use them. While `createMockClient()` is imported from fixtures (reusable across all tests), the validation context helper is specific to featured event validation—it knows about the `event` type and `featured` path. This pattern balances reusability with specificity: * **Global fixtures** - Broadly useful (mock clients) * **Local helpers** - Test-suite specific (event documents, featured field context) These four tests verify the business logic returns correct booleans: 1. **No existing featured event** → Returns `true` (can set featured) 2. **Existing featured event** → Returns `false` (cannot set featured) 3. **Unsetting featured** → Returns `true` without querying (performance) 4. **Correct query** → Verifies GROQ uses published IDs and tags By testing the business logic function, we verify the core decision-making. The validation builder just wraps this with an error message—that's simple enough to trust without testing. ## Why this validation matters Validation functions that query your dataset might have more moving parts than pure functions—async operations, client queries, document ID handling—but they're equally critical to test. 
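The final test relies on Sanity's draft ID convention: a draft document shares its published document's ID under a `drafts.` prefix. A simplified sketch of the helpers involved (the real `getDraftId` and `getPublishedId` exported from `sanity` also handle release version IDs, so treat this as conceptual):

```typescript
// Simplified versions of the ID helpers exported from `sanity`
// (the real ones also handle release version ID prefixes)
const DRAFTS_PREFIX = 'drafts.'

const getDraftId = (id: string): string =>
  id.startsWith(DRAFTS_PREFIX) ? id : DRAFTS_PREFIX + id

const getPublishedId = (id: string): string =>
  id.startsWith(DRAFTS_PREFIX) ? id.slice(DRAFTS_PREFIX.length) : id

console.log(getDraftId('event-4')) // drafts.event-4
console.log(getPublishedId('drafts.event-4')) // event-4
```

This is why the test asserts the query receives `getPublishedId(documentId)` even though the validated document is a draft: featured-event uniqueness is a property of the published content, not of any one draft.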
The complexity makes them more fragile and the business impact makes them more important: **Without this test:** * Refactor the query → accidentally allow multiple featured events * Change the document ID logic → validation blocks the wrong documents * Remove the early return → unnecessary queries slow down the editor **With this test:** * Query changes break tests immediately * Document ID handling is verified * Performance optimizations are protected This is the kind of business logic that justifies test investment. A broken featured event selector means confused marketing, broken homepage, and lost ticket sales. ## Next steps You've learned to test validation functions that query your Content Lake to enforce business rules. By creating a reusable mock client fixture and test-specific local helpers for validation contexts, you've built a testing harness that keeps tests focused on business logic rather than setup boilerplate. You now know how to test async validation with controlled mock return values, verify that queries use correct parameters, and protect performance optimizations with assertions that functions don't query unnecessarily. These patterns work for any validation rule that accesses document state or queries your Content Lake to check conditions across multiple documents. In the next lesson, you'll test custom input components that render UI, use Sanity hooks, and handle user interactions. You'll learn to set up a browser-like test environment and simulate real user behavior.

## [Testing Studio React Components](/learn/course/testing-sanity-studio/testing-studio-react-component) Test custom input components that render UI, use Studio hooks, and handle user interactions. Configure a browser-like test environment with React Testing Library, create provider fixtures that supply Studio context, and verify component behavior through simulated user actions. 
Learn what aspects of components are worth testing and how to balance thorough coverage with maintainable tests. The validation logic and access control functions you've tested so far run in Node.js with no UI. Custom input components are different—they render DOM elements, depend on Sanity UI's theming context, use Studio hooks like `useFormValue` and `useClient`, and respond to user interactions. Testing these components requires a browser-like environment, provider setup for Sanity UI, and tools to simulate user behavior like clicks and typing. Testing components verifies not just that your code runs, but that content editors can successfully interact with your custom inputs. A permission check might pass its pure function test but fail when integrated into a component's `readOnly` callback. A date calculation might work in isolation but render incorrectly when formatted for display. Component tests catch these integration issues by exercising the full user interaction flow. ## Setting up the component testing environment Install React Testing Library and `jsdom`: ```sh pnpm add -D @testing-library/react @testing-library/user-event jsdom @testing-library/jest-dom ``` These packages provide: * **`jsdom`** - A browser environment implementation for Node.js * **@testing-library/react** - Utilities for rendering and querying React components * **@testing-library/user-event** - Functions that simulate realistic user interactions * **@testing-library/jest-dom** - Additional matchers for DOM testing 1. `jsdom` simulates a browser environment in Node.js. It's not a real browser, so some browser-specific features (like layout calculations) won't work. For most Studio components, `jsdom` is sufficient. ## Configuring multiple test environments Your test suite now has different needs: pure functions run in Node, React components need a browser environment. 
Update `apps/studio/vitest.config.ts` to define **test-type projects** (not to be confused with workspace projects): ```typescript:apps/studio/vitest.config.ts import {defineProject} from 'vitest/config' export default defineProject({ test: { name: 'studio', // Define test-type projects within the Studio workspace project projects: [ { test: { name: 'unit', environment: 'node', include: ['**/*.test.ts'], }, }, { test: { name: 'component', environment: 'jsdom', include: ['**/*.test.tsx'], setupFiles: ['./__tests__/setup.ts'], }, }, ], }, }) ``` 1. **Important:** Vitest requires unique project names across your entire workspace. If you add other workspace projects, you may need to adjust the naming convention. And we'll need to create the setup file for the component test suite: ```typescript:apps/studio/__tests__/setup.ts import {cleanup} from '@testing-library/react' import {afterEach} from 'vitest' import '@testing-library/jest-dom/vitest' afterEach(() => { cleanup() }) ``` 1. Component tests are slower than unit tests because they render real DOM elements. This is why we separate them into different test projects with the `.test.tsx` extension. This configuration creates **nested projects**: 1. **Workspace project** - `studio` (defined at root `vitest.config.ts` with `projects: ['apps/*']`) 2. **Test-type projects** - `unit` and `component` (defined within Studio's config) The `component` project: * Uses `jsdom` to provide DOM APIs that React needs * Only runs files with `.test.tsx` extension (TypeScript with JSX) * Runs a setup file before each test ## Creating provider fixtures React components in Sanity Studio don't work in isolation—they expect certain contexts to exist. Sanity UI components read theme values from a `ThemeProvider` to render correctly (colors, spacing, typography). Without this provider, components throw errors or render incorrectly. 
Similarly, components that trigger toast notifications need a `ToastProvider`, and dialogs need a `LayerProvider` for z-index management. In production, your Studio wraps everything with these providers automatically. In tests, you need to recreate this harness manually. Rather than wrap each component individually in every test, create a reusable fixture that provides all required contexts: ```tsx:apps/studio/__tests__/fixtures/providers.tsx import {LayerProvider, ThemeProvider, ToastProvider} from '@sanity/ui' import {defaultTheme} from 'sanity' import {render, type RenderOptions} from '@testing-library/react' import type {ReactElement} from 'react' export function TestProviders({children}: {children: React.ReactNode}) { return ( <ThemeProvider theme={defaultTheme}> <ToastProvider> <LayerProvider>{children}</LayerProvider> </ToastProvider> </ThemeProvider> ) } export function renderWithProviders( ui: ReactElement, options?: Omit<RenderOptions, 'wrapper'> ) { return render(ui, {wrapper: TestProviders, ...options}) } ``` 1. Start with these three providers for most Sanity UI components. Add more providers (like `FormBuilderProvider`) only when your components require them. Keep provider setup minimal. The `TestProviders` component recreates the essential Studio provider hierarchy for these component tests: * **`ThemeProvider`** - Provides theme values (colors, spacing, typography) * **`ToastProvider`** - Enables toast notifications (some inputs use these) * **`LayerProvider`** - Manages z-index stacking for dialogs and popovers These are the providers the `DoorsOpenInput` component needs. Other custom components might require additional providers—like `FormBuilderProvider` for components that use form-level hooks, or custom context providers for specialized features. Add providers to `TestProviders` as your test suite requires them. 
The `renderWithProviders()` helper wraps React Testing Library's `render()` function, automatically including these providers for every component test. This fixture becomes your test harness for Sanity UI components—call it instead of `render()` and components work as they would in production. ## Testing the `DoorsOpenInput` component The `DoorsOpenInput` component displays when doors open for an event, calculated from the event date and a minutes-before value. It uses: * `useFormValue` hook to read the event date * `useClient` hook to access the Sanity client * Custom logic to calculate and display the doors open time ```tsx:apps/studio/schemaTypes/components/DoorsOpenInput.tsx import {getPublishedId, type NumberInputProps, useClient, useFormValue} from 'sanity' import {Stack, Text} from '@sanity/ui' import {useEffect} from 'react' // .. export function DoorsOpenInput(props: NumberInputProps) { const date = useFormValue(['date']) as string | undefined const client = useClient({apiVersion: '2025-05-08'}) const documentId = useFormValue(['_id']) as string | undefined // Query for document versions (demonstration purposes) useEffect(() => { if (!documentId) return // ... query logic }, [client, documentId]) return ( <Stack space={3}> {props.renderDefault(props)} {typeof props.value === 'number' && date ? ( <Text size={1}> Doors open{' '} {subtractMinutesFromDate(date, props.value).toLocaleDateString(undefined, { month: 'long', day: 'numeric', year: 'numeric', hour: 'numeric', minute: 'numeric', })} </Text> ) : null} </Stack> ) } ``` The component uses Sanity hooks (`useFormValue`, `useClient`) to access form data and the Sanity client. 
Testing it requires mocking these hooks to control what values the component receives: ```tsx:apps/studio/schemaTypes/components/DoorsOpenInput.test.tsx import {describe, it, expect, vi, beforeEach} from 'vitest' import {screen} from '@testing-library/react' import {useFormValue, useClient} from 'sanity' import {renderWithProviders} from '../../__tests__/fixtures/providers' import {DoorsOpenInput} from './DoorsOpenInput' // Mock Sanity hooks vi.mock('sanity', async () => { const actual = await vi.importActual('sanity') return { ...actual, useFormValue: vi.fn(), useClient: vi.fn(), } }) describe('DoorsOpenInput', () => { const mockProps = { value: 60, onChange: vi.fn(), renderDefault: vi.fn((props) => <input type="number" {...props} />), } as any beforeEach(() => { vi.clearAllMocks() }) it('shows doors open time when date and value exist', () => { vi.mocked(useFormValue).mockImplementation((path) => { if (path[0] === 'date') return '2025-06-15T20:00:00Z' if (path[0] === '_id') return 'event-1' return undefined }) vi.mocked(useClient).mockReturnValue({} as any) renderWithProviders(<DoorsOpenInput {...mockProps} />) expect(screen.getByText(/Doors open/i)).toBeInTheDocument() expect(screen.getByText(/June/i)).toBeInTheDocument() }) it('hides doors open time when date is missing', () => { vi.mocked(useFormValue).mockImplementation((path) => { if (path[0] === 'date') return undefined if (path[0] === '_id') return 'event-2' return undefined }) vi.mocked(useClient).mockReturnValue({} as any) renderWithProviders(<DoorsOpenInput {...mockProps} />) expect(screen.queryByText(/Doors open/i)).not.toBeInTheDocument() }) it('hides doors open time when value is missing', () => { vi.mocked(useFormValue).mockImplementation((path) => { if (path[0] === 'date') return '2025-06-15T20:00:00Z' if (path[0] === '_id') return 'event-3' return undefined }) vi.mocked(useClient).mockReturnValue({} as any) const propsWithoutValue = {...mockProps, value: undefined} 
renderWithProviders(<DoorsOpenInput {...propsWithoutValue} />) expect(screen.queryByText(/Doors open/i)).not.toBeInTheDocument() }) }) ``` 1. Module mocks apply to the entire test file. If you need different mock behavior per test, use `mockImplementation()` in each test's setup rather than at the file level. This test mocks the entire `sanity` module to control what `useFormValue` and `useClient` return: 1. **`vi.mock('sanity')`** - Intercepts all imports from the `sanity` package 2. **`vi.importActual()`** - Preserves original exports (types, utilities) 3. **`useFormValue: vi.fn()`** - Replaces hook with mock function 4. **`mockImplementation()`** - Controls what the hook returns based on the path argument 5. **`beforeEach()`** - Clears mocks between tests for isolation The `mockImplementation()` function lets you return different values based on which form field is being accessed. When the component calls `useFormValue(['date'])`, the mock checks if `path[0] === 'date'` and returns the appropriate value. This pattern works for any component that reads multiple form fields through `useFormValue`. ## What to test in components Focus on behavior that matters to content editors and enforced data integrity: **Test:** * Components render with expected content * User interactions trigger correct callbacks * Interactive components mutate documents accurately * Accessibility attributes are present **Don't test:** * Implementation details (state variable names, hooks used) * Sanity internals (we ensure test coverage for our own code) * CSS styles and exact positioning * Third-party library behavior (they should be testing it) Custom inputs that modify documents deserve thorough testing. A bug in a read-only component might confuse an editor temporarily, but a bug in a component that writes data to the published document rather than the current draft would be a headache. 
Test these rigorously to ensure they write the correct values to the intended fields in the intended document. 1. Add a test that verifies the component calls `onChange` when the user modifies the input value. Use `userEvent.type()` to simulate typing. ## Next steps You've learned to test React components in Sanity Studio: * Setting up `jsdom` for browser-like testing * Creating provider fixtures for Studio context * Rendering components with React Testing Library * Simulating user interactions with `userEvent` * Mocking Sanity hooks like `useFormValue` and `useClient` * Using accessible queries to find elements In the next lesson, you'll integrate tests into GitHub Actions, implement coverage reporting, and develop a testing strategy that scales with your Studio.

## [Continuous Integration and Test Strategy](/learn/course/testing-sanity-studio/continuous-integration-and-test-strategy) Move tests from local development into CI pipelines that verify changes before they reach production. Configure automated test runs on pull requests, report test results directly in GitHub, and develop a strategic framework for deciding what to test. Learn to prioritize test coverage based on business impact, complexity, and change frequency—balancing protection with development velocity. ## From local development to production You've been running tests locally with `pnpm test` in watch mode. This provides instant feedback while developing. But tests become more valuable when integrated into your workflow at key points: * **Pull requests** - Automated checks prevent broken code from being merged * **Before deployment** - Tests catch issues before they reach content editors * **Scheduled runs** - Detect drift from dependencies or external changes Running tests as part of your continuous integration (CI) ensures every code change is validated automatically, regardless of who wrote it or what they tested locally. 1.
[Architecture & DevOps](https://www.sanity.io/learn/course/architecture-and-devops) covers setting up schema validation, linting, and preview deployments for your pull requests. ## Reporting test output in pull requests When tests fail in CI, the failure is reported directly on your pull request. In GitHub, for example, failing pull requests will show: * ❌ Red X next to the commit * Detailed logs showing which tests failed * Line numbers and error messages * Option to re-run failed tests This prevents merging broken code and makes code review more efficient. Reviewers can focus on logic and design, trusting that tests verify correctness. ## What is important to test? Not all code is equally important to test. Prioritize based on: ### High priority - Always test **Validation functions** - Protect data integrity **Data transformation** - Shape content for display **Critical business logic** - Features that could break revenue/experience ### Medium priority - Test when complex **Custom input components** - When they have non-trivial logic **Schema structure helpers** - When they involve logic ### Low priority - Usually skip **Simple schema definitions** - No logic to test **Thin wrappers** - Just pass through to libraries **UI-only components** - Styling with no behavior 1. Testing strategy is about making intentional trade-offs. Perfect coverage isn't the goal—protecting critical business logic while maintaining development velocity is.
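To make the high-priority category concrete, here is a minimal sketch of the kind of pure validation function that is cheap to test exhaustively. The `validateTicketPrice` helper and its business rules are hypothetical, not part of the course project:

```typescript
// Hypothetical validation helper (not from the course code). It enforces a
// business rule and returns `true` when valid or an error string when not,
// mirroring the return convention of Sanity's custom validation rules.
export function validateTicketPrice(
  price: number | undefined,
  eventType: string | undefined,
): true | string {
  // Free events must not carry a ticket price
  if (eventType === 'free' && price !== undefined && price > 0) {
    return 'Free events cannot have a ticket price'
  }
  // Paid events require a positive price
  if (eventType === 'paid' && (price === undefined || price <= 0)) {
    return 'Paid events must have a price greater than zero'
  }
  return true
}
```

Because it has no Studio or client dependencies, every branch can be covered with one-line assertions, which is exactly why validation functions sit at the top of the priority list.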
## Test organization patterns As your test suite grows, maintain structure:

```
apps/studio/
├── schemaTypes/
│   ├── validation/
│   │   ├── eventValidation.ts
│   │   └── eventValidation.test.ts
│   ├── components/
│   │   ├── DoorsOpenInput.tsx
│   │   └── DoorsOpenInput.test.tsx
│   └── eventType.ts
└── __tests__/
    ├── fixtures/
    │   ├── validation.ts
    │   ├── client.ts
    │   └── providers.tsx
    └── setup.ts
```

Key principles: * **Co-locate tests** with the code they test * **Share fixtures** in a central location (`__tests__/fixtures`) * **Name tests** after the file being tested (`DoorsOpenInput.test.tsx`) ## Test maintenance best practices Tests require maintenance like production code. Follow these practices: ### Keep tests simple ```typescript // ❌ Complex test with multiple concerns it('handles everything', async () => { const result1 = await validateVenue(venue1, context1) const result2 = await validateVenue(venue2, context2) const result3 = await validateVenue(venue3, context3) expect(result1).toBe(true) expect(result2).toBe(false) expect(result3).toBe(true) }) // ✅ Focused tests, one concept each it('allows venue for in-person events', async () => { const result = await validateVenue(venue, inPersonContext) expect(result).toBe(true) }) it('rejects venue for virtual events', async () => { const result = await validateVenue(venue, virtualContext) expect(result).toBe('Only in-person events can have a venue') }) ``` ### Use descriptive test names ```typescript // ❌ Vague it('works', () => {}) it('test1', () => {}) // ✅ Clear intent it('allows venue for in-person events', () => {}) it('rejects venue for virtual events', () => {}) it('calculates doors open time 60 minutes before event', () => {}) ``` ### Extract test helpers ```typescript // ❌ Repeated setup in every test it('test 1', () => { const context = {document: {_id: '1', _type: 'event', eventType: 'in-person'}} // ... test }) it('test 2', () => { const context = {document: {_id: '2', _type: 'event', eventType: 'virtual'}} // ...
test }) // ✅ Reusable fixture function createEventContext(eventType: string) { return createMockValidationContext({ _id: `event-${eventType}`, _type: 'event', eventType, }) } it('allows venue for in-person events', () => { const context = createEventContext('in-person') // ... test }) ``` ## Start small, grow strategically Building a test suite is an investment. Start small and expand strategically: ### Phase 1: Test critical validation Begin with functions that protect data integrity: * Required field validation * Business rule enforcement * Data consistency checks **Goal**: Prevent content editors from creating invalid documents ### Phase 2: Test complex helpers Add tests for helper functions with non-trivial logic: * Date/time calculations * Formatting utilities * Data transformations **Goal**: Catch bugs in commonly-used utilities ### Phase 3: Test custom components Test custom inputs with complex behavior: * Components with conditional rendering * Components with user interactions * Components that query the dataset **Goal**: Ensure editor UI works correctly ### Phase 4: Integrate CI Add GitHub Actions to run tests automatically: * On every pull request * Before merging to main * Before deployments **Goal**: Prevent untested code from reaching production 1. Review your current Studio code. Categorize your validation functions, helpers, and components into high/medium/low priority based on business impact and complexity. ## Maintaining test quality As your suite grows, periodically review test quality: ### Red-green-refactor cycle 1. **Red** - Write a failing test 2. **Green** - Make it pass with minimal code 3. **Refactor** - Improve both test and production code 1. [Test-driven development (TDD)](https://en.wikipedia.org/wiki/Test-driven_development) naturally produces better-designed code. Writing tests first forces you to think about interfaces and edge cases before implementation. This discipline prevents over-engineering and keeps tests focused. 
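As a concrete pass through the cycle, consider extracting the doors-open calculation into a pure helper. The function below is a hedged sketch (the name `doorsOpenTime` and its signature are assumptions, not the course's actual implementation): the "red" step is a failing assertion that doors open 60 minutes before the event, the "green" step is the minimal implementation, and the "refactor" step extracts the offset into a defaulted parameter without changing behavior.

```typescript
// Hypothetical helper illustrating red-green-refactor (not the course's code).
// Red:      assert doorsOpenTime('2025-06-15T20:00:00Z') === '2025-06-15T19:00:00.000Z'
//           fails before the function exists.
// Green:    the minimal implementation below makes it pass.
// Refactor: the 60-minute offset becomes a defaulted parameter, with the
//           existing test guarding against behavior changes.
export function doorsOpenTime(eventDate: string, minutesBefore = 60): string {
  const event = new Date(eventDate)
  if (Number.isNaN(event.getTime())) {
    throw new Error(`Invalid event date: ${eventDate}`)
  }
  return new Date(event.getTime() - minutesBefore * 60_000).toISOString()
}
```

Because each step is driven by a test, the helper never grows features that no assertion demands.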
### Delete obsolete tests When you remove features or refactor code, delete tests that no longer serve a purpose. Dead code in tests is still a maintenance burden. ## Key takeaways **Strategic investment** * Tests pay dividends through confident refactoring and faster debugging * Start with high-value tests (validation, critical logic) * Grow your suite incrementally as complexity increases **Technical implementation** * Pure functions are easiest to test * Mock external dependencies (clients, contexts) * Test user-facing behavior, not implementation details **Workflow integration** * Watch mode for instant local feedback * CI runs for automated validation * Coverage reports to find gaps **Sustainable testing** * Keep tests simple and focused * Co-locate tests with code * Delete obsolete tests * Prioritize readability over cleverness

## [Tests as Content Operating System Infrastructure](/learn/course/testing-sanity-studio/tests-as-content-operating-system-infrastructure) Tests aren't overhead—they're strategic infrastructure that enables confident scaling of your Content Operating System. Comprehensive test coverage protects critical operations as your Studio grows, documents business requirements for future developers, and creates the safety net that allows your team to iterate rapidly. Learn to view testing as a foundational investment that compounds in value over time. You've built a comprehensive testing strategy, progressing from validation logic and access control to stateful functions to React components. In Sanity's Content Operating System, your Studio isn't just an editing interface—it's programmable infrastructure that encodes business rules, workflow automation, and content governance. Tests are specifications for this infrastructure.
Just as the Content Operating System treats content as structured, reusable data rather than unstructured files, testing treats business requirements as executable specifications rather than documentation that drifts from reality. Your tests document how validation should work, what helpers should return, and how custom inputs should behave—in code that verifies itself. This aligns with Sanity's philosophy of strategic preparation: when code becomes commodity through AI assistance, your specifications become your competitive advantage. Tests enable AI agents to understand your business logic, allowing them to safely refactor, extend, and maintain your Studio while respecting the rules that protect your content quality. Your test suite becomes a strategic asset that compounds in value as complexity grows, enabling you to move fast always through preparation rather than scrambling. Start with high-value tests—validation functions that protect data integrity, helper utilities with business logic, custom components that mutate documents. Grow incrementally as complexity increases. Keep tests simple and focused, co-located with the code they verify. Mock external dependencies like Sanity clients and validation contexts. Separate business logic from validation builders for reusability. Use watch mode locally for instant feedback, run comprehensive suites in CI before merge, and leverage coverage reports to find gaps in critical paths. Add tests for new features before they reach production. Test components that write data rigorously to prevent corruption. Build reusable fixtures for your domain. Review tests in code review. Delete obsolete tests as features change. Refactor tests when they become harder to maintain than production code. Your Studio is now protected by executable specifications that grow with your business needs. Content editors work confidently, knowing custom logic protects their content. Developers refactor fearlessly, knowing tests catch regressions. 
Your tests are infrastructure for the Content Operating System, ensuring reliable content operations at scale.

# [Build landing pages with Next.js](/learn/course/page-building) Give your content authors the creative freedom they need to produce landing pages by assembling individual blocks while still benefiting from structured content. ## [An introduction to page builders](/learn/course/page-building/an-introduction-to-page-builders) Set up your page builder the right way with Sanity and Next.js, understanding the process and best practices, with editing affordances your content creators will understand and appreciate. By the end of this course, you'll have a robust page builder that allows you to generate pages with a set of reusable "blocks". Not only will you learn how to implement new blocks but you'll understand every step of the process, and ensure best practices for your editors. But before we get into things, why don't we quickly discuss what a page builder _is_? Depending on your experience, you may have a range of different expectations as to what a page builder actually is. ## What is a page builder? Think of a page builder as a set of stackable LEGO blocks, where each block is a piece of content that can be used to build a page. This is an apt description because it's rare for a page to flow left and right; it almost always flows from top to bottom. 1. The drag-and-drop functionality in Sanity's Visual Editing can be used to move items that are laid out horizontally, even if the same content is represented vertically in the Sanity Studio. In simple terms, described as Sanity Schema Types, a page builder is an array of objects. With this in mind, a page builder allows you to move blocks vertically on a page, and to add and remove blocks on a page. If you build a new component for it, it can then be used on any page and added to the selection of blocks within your page builder.
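That "array of objects" mental model translates directly into a Sanity schema. Here is a simplified preview sketch (the block type names are placeholders; the course builds the full set of schema types in the following lessons):

```typescript
import { defineArrayMember, defineType } from "sanity";

// A page builder is "just" an array type whose members are object types.
// Each member is one stackable block that editors can add, remove, and reorder.
export const pageBuilderType = defineType({
  name: "pageBuilder",
  type: "array",
  of: [
    defineArrayMember({ type: "hero" }),
    defineArrayMember({ type: "splitImage" }),
  ],
});
```

Reordering blocks on a page is then simply reordering items in this array, which Sanity's array input supports out of the box.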
### Keep an open mind While this course focuses primarily on page building for web pages, the same approach of creating an ordered list of different shapes of content can also be used in other front ends, such as native applications. 1. **Consider this:** The Portable Text Editor is also "just" an array of objects. And yet its purpose can conjure a very different idea of what content is suitable to enter into it, and how it may be presented! As much as possible, your page builder should be modeling **content**, not **presentation**. While rare, consider future opportunities where other applications may consume this same content and display it in a different way, or a different context, with different meaning. ## Why is it important? With a page builder, you're allowing your content team to build pages without relying on developers. They can change the order of content, use repeatable components to create consistency, and ultimately generate new pages in a fraction of the time. What's especially useful is that when you build a new block, your whole team has access to that block throughout any of the pages that use the page builder. ## What problems does it solve? * **Speed**: Content teams can build pages in a fraction of the time. * **Consistency**: Repeatable components can be used to create consistency across pages. * **Flexibility**: Content teams can change the order of content and add and remove blocks from a page. ## What problems does it create? You have to think about how you will structure your page builder: whether you use a single page builder throughout all of your pages, or specific page builders for different pages. You shouldn't use it for something like a blog, where almost all blog posts follow the same formulaic structure of a big chunk of text with a few images.
Always remember, as soon as you add a page builder, you're handing over the reins for page layout to your content team, and a lack of rigidity can result in inconsistent user experiences. Over time your authors may request many unique blocks, and your page builder can become difficult to maintain or understand. A strict adherence to consistency can reduce the likelihood of "runaway schema." ## About the author My name is Jono, and I'm the founder of [Roboto Studio](https://robotostudio.com/?utm-source=sanity-learn). I have been building websites for many years, focusing on delivering the best editorial experiences with cutting-edge technologies. I wrote this course to simplify the process of creating a page builder with Sanity and Next.js; it's based on what I wish I had learned when I first started building. The goal is to provide a straightforward course on crafting a solid page builder that gives your end users the best editorial experience. Throughout this course, you will learn the process of building blocks and the positive and negative implications of content modeling decisions. By the end of the course you will be able to design and manage page builder blocks easily and have an understanding of the best practices. Now, let's go ahead and build the page builder blocks.

## [Create page builder schema types](/learn/course/page-building/create-page-builder-schema-types) Set up the initial "blocks" of content and set the foundation of your page builder schema types. 1. The schema types you'll add in this lesson follow on from those created in the [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) course. The choices you make at this stage will determine the efficiency of content creation. Keeping your schema simple and well-structured is the key to effortless authoring.
## Learning objectives By the end of this lesson, you'll be able to: * Structure a page builder * Implement new blocks * Understand when to use references vs objects ## Setting up a page builder The page builder is typically an [array of](https://www.sanity.io/docs/array-type) [`object`](https://www.sanity.io/docs/array-type) or [`reference`](https://www.sanity.io/docs/array-type) types that can be reordered. It's the container for all your building blocks. With Sanity, there are no pre-built blocks, but it's fast and easy to create what you need. * If you use **objects**, the content is simpler to query but trapped within the document. * If you use **references**, the content can be reused between documents, and your queries must resolve them. ### Create the block schema types Let's start by creating your first `pageBuilder` block. Note that this example uses an object type to create the block. Since you are new to page builders, the following example will start with objects, and the reasons for this choice will [be explained later](https://www.sanity.io/learn/course/page-building/scaling-page-builders-and-pitfalls#s-2e72d1b5ac82). The next step is to add a `splitImage` block, a simple layout with text on one side and an image on the other, either left or right. You've definitely seen this block on many websites. [Here's a link for what this block could look like](https://v0.dev/chat/zwZtgP2aDSB?b=b_LmbyJ1dlp4v) 1. 
**Create** the `splitImage` block ```typescript:src/sanity/schemaTypes/blocks/splitImageType.ts import { defineField, defineType } from "sanity"; export const splitImageType = defineType({ name: "splitImage", type: "object", fields: [ defineField({ name: "orientation", type: "string", options: { list: [ { value: "imageLeft", title: "Image Left" }, { value: "imageRight", title: "Image Right" }, ], }, }), defineField({ name: "title", type: "string", }), defineField({ name: "image", type: "image", }), ], preview: { select: { title: "title", media: "image", }, prepare({ title, media }) { return { title, subtitle: "Text and Image", media }; }, }, }); ``` Now, let's add a `hero` block. This is a simple block with a title text and image. Despite the schemas looking very similar, you would usually have a hero at the top of a page, so it's a good idea to have a dedicated block for it. You may have noticed that the code snippets use a block field for the text. This is known as portable text, a powerful way to render rich text within Sanity. While it's more complex than a simple string, its flexibility makes it incredibly useful. 1. See [Presenting Portable Text](https://www.sanity.io/learn/developer-guides/presenting-block-text) in the documentation for more details [Here's a link for what this block could look like](https://v0.dev/chat/MiPurXiE59K?b=b_gUmeXji6heI) 1. **Create** the `hero` block ```typescript:src/sanity/schemaTypes/blocks/heroType.ts import { defineField, defineType } from "sanity"; export const heroType = defineType({ name: "hero", type: "object", fields: [ defineField({ name: "title", type: "string", }), defineField({ name: "text", type: "blockContent", }), defineField({ name: "image", type: "image", }), ], }); ``` 1. Getting an error with `blockContent` missing? 
The schema types you'll add in this lesson follow on from those created in the [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) course. An "FAQ Block" is an ideal block for using references, allowing the same document to be reused in multiple places. If you have a list of FAQs that you want to show on multiple pages, you can create a single FAQ document and reference it from each page rather than duplicating the FAQ content. This makes it easier to maintain since you only need to update the content in one place. Let's create a FAQ document type and reference it in a block. First, let's create the FAQ document schema: In this example, the name of the block is `faqs` [Here's a link for what this block could look like](https://v0.dev/chat/zl4hrYbDzOg?b=b_lTKHW2wTTS4) 1. **Create** the `faq` document schema type ```typescript:src/sanity/schemaTypes/faqType.ts import { defineField, defineType } from "sanity"; export const faqType = defineType({ name: "faq", title: "FAQ", type: "document", fields: [ defineField({ name: "title", type: "string", }), defineField({ name: "body", type: "blockContent", }), ], }); ``` 1. **Create** the `faqs` block, which will reference the FAQ document. ```typescript:src/sanity/schemaTypes/blocks/faqsType.ts import { defineField, defineType } from "sanity"; export const faqsType = defineType({ name: "faqs", title: "FAQs", type: "object", fields: [ defineField({ name: "title", type: "string", }), defineField({ name: "faqs", title: "FAQs", type: "array", of: [{ type: "reference", to: [{ type: "faq" }] }], }), ], }); ``` Finally, one more block. This one is a features block, and this is going to get a little more complex, with an array of feature objects nested inside the block. [Here's a link for what this block could look like](https://v0.dev/chat/z0zxFeAPm1z?b=b_ZpymafJ4kCT) 1.
**Create** the `features` block ```typescript:src/sanity/schemaTypes/blocks/featuresType.ts import { defineField, defineType } from "sanity"; export const featuresType = defineType({ name: "features", type: "object", fields: [ defineField({ name: "title", type: "string", }), defineField({ name: "features", type: "array", of: [ defineField({ name: "feature", type: "object", fields: [ defineField({ name: "title", type: "string", }), defineField({ name: "text", type: "string", }), ], }), ], }), ], }); ``` ### Create the page builder schema type Okay great, you've got your blocks. Now let's put them together in your page builder schema type. The order of the types in this array is the order they'll appear in when a user adds a new block. 1. **Create** the `pageBuilder` schema type ```typescript:src/sanity/schemaTypes/pageBuilderType.ts import { defineType, defineArrayMember } from "sanity"; export const pageBuilderType = defineType({ name: "pageBuilder", type: "array", of: [ defineArrayMember({ type: "hero" }), defineArrayMember({ type: "splitImage" }), defineArrayMember({ type: "features" }), defineArrayMember({ type: "faqs" }), ], }); ``` ### Create the page document type The schema types in your Sanity Studio so far are only useful for writing blog posts. The pages being built with this schema have a different purpose, and so should be stored in a distinct schema type. 1.
**Create** a `page` document schema type ```typescript:src/sanity/schemaTypes/pageType.ts import { DocumentIcon } from "@sanity/icons"; import { defineField, defineType } from "sanity"; export const pageType = defineType({ name: "page", title: "Page", type: "document", icon: DocumentIcon, fields: [ defineField({ name: "title", type: "string", }), defineField({ name: "slug", type: "slug", options: { source: "title", }, }), defineField({ name: "content", type: "pageBuilder", }), defineField({ name: "mainImage", type: "image", options: { hotspot: true, }, }), ], preview: { select: { title: "title", subtitle: "slug.current", }, }, }); ``` ### Add your new types to the Studio schema Finally, update the schema types index file to import all of these newly created schema types. 1. **Update** your registered schema types ```typescript:src/sanity/schemaTypes/index.ts // ...all your existing imports import { pageType } from "./pageType"; import { pageBuilderType } from "./pageBuilderType"; import { faqType } from "./faqType"; import { faqsType } from "./blocks/faqsType"; import { featuresType } from "./blocks/featuresType"; import { heroType } from "./blocks/heroType"; import { splitImageType } from "./blocks/splitImageType"; export const schema: { types: SchemaTypeDefinition[] } = { types: [ // ...all your existing schema types pageType, pageBuilderType, faqType, faqsType, featuresType, heroType, splitImageType, ], }; ``` ### Update Studio Structure The desk structure will now include `page` and `faq` type documents, but won't display them nicely with plurals. 1. 
**Update** the Studio's structure configuration ```typescript:src/sanity/structure.ts import type { StructureResolver } from "sanity/structure"; // https://www.sanity.io/docs/structure-builder-cheat-sheet export const structure: StructureResolver = (S) => S.list() .title("Blog") .items([ S.documentTypeListItem("post").title("Posts"), S.documentTypeListItem("category").title("Categories"), S.documentTypeListItem("author").title("Authors"), S.divider(), S.documentTypeListItem("page").title("Pages"), S.documentTypeListItem("faq").title("FAQs"), S.divider(), ...S.documentTypeListItems().filter( (item) => item.getId() && !["post", "category", "author", "page", "faq"].includes(item.getId()!) ), ]); ``` ## Check it's working With all of these new schema types registered, you should now be able to create Page type documents with the Page Builder field allowing you to add any one of four blocks. ![Sanity Studio showing a page being edited with a block selector menu open](https://cdn.sanity.io/images/3do82whm/next/325dc7ddd53dfa0c57f833c81537e49e9497a4b9-2240x1480.png) Now that your page builder schema is set up, all the fundamental building blocks are in place. Next, you can add new blocks, reorder them, and update the array as needed.

## [Improve authoring with previews and thumbnails](/learn/course/page-building/improved-ui-with-previews-and-thumbnails) Updates to the configuration of your page builder schema types can dramatically improve the content creation experience. Next, you will enhance the user interface (UI) with previews, thumbnails, and filters. These additions will help editors quickly find and use the blocks they need to create pages. Previews are the snippets of information that appear in the list view of the page builder. They're the first thing your editors will see when they're adding a new block to the page builder.
Here's an example of what a _good_ preview looks like: ![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/13884f45681aae33d08e747c953622cad5ae7694-1346x1286.webp) ## What makes a good preview? Previews should always be consistent. Consistency creates familiarity, and familiarity improves user experience. The more consistency you have in your previews and page builders, the faster your editors will be able to create pages. `object` and `document` schema types in Sanity Studio have a `preview` property, which allows the following to be customized: * `title`: This is the title of the block, or the most important headline. Think about what a marketer would care about the most. * `subtitle`: Set this to the block name. * `media`: If the block has an image, use that, otherwise use an icon as a fallback. Let's revisit our blocks from the last lesson and improve their readability. Note: this lesson uses the [default icons from Sanity](https://icons.sanity.build/) to minimize dependencies. However, in a real-world project, [Lucide](https://lucide.dev/) may be preferred as it has a larger icon selection. ## Using prepare and preview Pay attention to the `preview` and `prepare` functions. This is where you define how the block appears in the preview. In the example below, there is a block with a title, subtitle and media. ![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/8a3ca601a1b52ef91aa885e970c2b6ccc83339fa-2864x554.png) The pink cat on the left-hand side is the `media`; this can be either an image or an icon. However, you should never leave it blank. 1.
**Update** the `splitImage` schema type to include an `icon` and `preview` ```typescript:src/sanity/schemaTypes/blocks/splitImageType.ts import { defineField, defineType } from "sanity"; import { BlockContentIcon } from "@sanity/icons"; export const splitImageType = defineType({ name: "splitImage", // ...all other settings icon: BlockContentIcon, preview: { select: { title: "title", media: "image", }, prepare({title, media}) { return { title: title, subtitle: "Split Image", media: media ?? BlockContentIcon, }; }, }, }); ``` Insert a "Split Image" block and give it some content. The preview should now look like this: ![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/169973097460feecb773977cd3a896db509387bd-2240x1488.png) This example is a simple block with a title, subtitle and media. The `prepare` function is used to set the title and media for the preview. 1. **Update** the `hero` type block ```typescript:src/sanity/schemaTypes/blocks/heroType.ts import { defineField, defineType } from "sanity"; import { TextIcon } from "@sanity/icons"; export const heroType = defineType({ name: "hero", // ...other settings icon: TextIcon, preview: { select: { title: "title", media: "image", }, prepare({ title, media }) { return { title, subtitle: "Hero", media: media ?? TextIcon, }; }, }, }); ``` 1. **Update** the faqs type block ```typescript:src/sanity/schemaTypes/blocks/faqsType.ts import { defineField, defineType } from "sanity"; import { HelpCircleIcon } from "@sanity/icons"; export const faqsType = defineType({ name: "faqs", // ...other settings icon: HelpCircleIcon, preview: { select: { title: "title", }, prepare({ title }) { return { title, subtitle: "FAQs", }; }, }, }); ``` 1. 
**Update** the features type block ```typescript:src/sanity/schemaTypes/blocks/featuresType.ts import { defineField, defineType } from "sanity"; import { StarIcon } from "@sanity/icons"; export const featuresType = defineType({ name: "features", // ...other settings icon: StarIcon, preview: { select: { title: "title", }, prepare({ title }) { return { title, subtitle: "Features", }; }, }, }); ``` ## Adding thumbnails Customized icons are good, but a visual preview of what a block can look like is even better. To implement these previews, you need to update your page builder schema. For this particular example, you'll add a `grid` view to the `options` property. This creates a grid view of the insert menu, with each block's preview image served from the `public` folder. It should look something like this: ![Sanity Studio showing the "add item" picker with thumbnails](https://cdn.sanity.io/images/3do82whm/next/a041688bb996a45ef745f89b43e74d3f9711d8d5-2240x1480.png) 1. **Update** the page builder schema type ```typescript:src/sanity/schemaTypes/pageBuilderType.ts export const pageBuilderType = defineType({ // ... previous configuration options: { insertMenu: { views: [ { name: "grid", previewImageUrl: (schemaType) => `/block-previews/${schemaType}.png`, }, ], }, }, }); ``` ### Create your own thumbnails Next.js provides a `public` folder in the root of your application for serving static images. You could place your own images in this directory, ensuring they adhere to the following specifications: * Dimensions: 600x400px (maintain consistent sizing) * Format: PNG with transparent background * Naming: Match schema type names (e.g., `hero.png`, `splitImage.png`) There is a community Figma file built for designing these; [it is available here](https://www.figma.com/community/file/1404904715260176924/arrayfield-template-for-sanity). ### Pre-designed thumbnails For the blocks in this lesson, we have prepared some example thumbnails you can use.
Download these example images and place them in your application at `/public/block-previews`.

1. **Download** the example thumbnails and extract them into a `/public/block-previews` directory

Click the "Add item" button now and you should see the preview images. It's finally time to get this content to show up in your Next.js application. First, let's create a new dynamic route for rendering page documents.

## [Render pages](/learn/course/page-building/rendering-pages)

Create a new dynamic route to render "page" documents and create links to them within Sanity Studio for an interactive live preview within Presentation. You've created your perfect schema, improved your editorial experience by adding thumbnails, and now it's time to get your page builder blocks wired up on the frontend.

## Learning objectives

Now that you've created "page" type documents in the Studio, you'll need the Next.js application to query them at a dynamic route. By the end of this chapter, you'll be able to:

* Query for and render any page by its slug

## Query a page

You are going to create a query to fetch the page and its page builder content from the Content Lake. Let's break down what's happening in this query. The `->` operator in Sanity is used for dereferencing documents. When you have a reference to another document (like our FAQs), by default you only get the reference ID. Adding `->` tells Sanity to "follow" that reference and include the full content of the referenced document in your query results. This is particularly useful when you need the actual content immediately and want to avoid making multiple separate queries.
1. **Update** the file with all your queries to include one for an individual page:

```typescript:src/sanity/lib/queries.ts
// ...all other queries

export const PAGE_QUERY = defineQuery(`*[_type == "page" && slug.current == $slug][0]{
  ...,
  content[]{
    ...,
    _type == "faqs" => {
      ...,
      faqs[]->
    }
  }
}`);
```

This query also demonstrates a shorthand syntax of the GROQ function `select()`, by which you can handle individual blocks differently by checking their `_type`. Since you've changed schema types and queries, it's time to regenerate types as well.

```sh
npm run typegen
```

1. This command was set up in the [Generate TypeScript Types](https://www.sanity.io/learn/course/content-driven-web-application-foundations/generate-typescript-types) lesson of the [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) course.

## Render a page

Before rendering individual blocks, the Next.js application needs a route to render any individual page.

1. **Create** a new route for rendering any page document by its unique slug.

```tsx:src/app/(frontend)/[slug]/page.tsx
import { sanityFetch } from "@/sanity/lib/live";
import { PAGE_QUERY } from "@/sanity/lib/queries";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: page } = await sanityFetch({
    query: PAGE_QUERY,
    params: await params,
  });

  return <div>{JSON.stringify(page)}</div>;
}
```

## Update the Presentation tool resolver

To create a link between your Sanity Studio documents and their locations in the front end, update the resolve function created for your Presentation tool to generate dynamic links to your live preview.
1. **Update** the document locations resolver

```typescript:src/sanity/presentation/resolve.ts
import {
  defineLocations,
  PresentationPluginOptions,
} from "sanity/presentation";

export const resolve: PresentationPluginOptions["resolve"] = {
  locations: {
    // ...other locations

    page: defineLocations({
      select: {
        title: "title",
        slug: "slug.current",
      },
      resolve: (doc) => ({
        locations: [
          {
            title: doc?.title || "Untitled",
            href: `/${doc?.slug}`,
          },
        ],
      }),
    }),
  },
};
```

You should now be able to open any page in Presentation by clicking the "Used on one page" link at the top of the document editor.

![Sanity Studio showing live preview of a new page document](https://cdn.sanity.io/images/3do82whm/next/4d769ae7eedca4c869b826cfdec32d215886e7d4-2240x1488.png)

Now you can create pages in Sanity Studio and preview them live in the Presentation tool. The next step is to render each block as a unique component.

## [Render page builder blocks](/learn/course/page-building/rendering-page-builder-blocks)

Set up the unique components for each individual "block" to render on the page. The example components in this lesson have been given deliberately simple designs. Feel free to redesign them with much more _flair_. You'll also notice the props for each component have been typed from the `PAGE_QUERYResult` generated by Sanity TypeGen. The type itself looks _quite_ gnarly, but it will be constantly updated as you make future changes to your schema types and queries.

## Create block components
**Create** a component to render the Hero block ```tsx:src/components/blocks/hero.tsx import { PortableText } from "next-sanity"; import Image from "next/image"; import { Title } from "@/components/title"; import { urlFor } from "@/sanity/lib/image"; import { PAGE_QUERYResult } from "@/sanity/types"; type HeroProps = Extract< NonNullable<NonNullable<PAGE_QUERYResult>["content"]>[number], { _type: "hero" } >; export function Hero({ title, text, image }: HeroProps) { return ( <section className="isolate w-full aspect-[2/1] py-16 relative overflow-hidden"> <div className="relative flex flex-col justify-center items-center gap-8 h-full z-20"> {title ? ( <h1 className="text-2xl md:text-4xl lg:text-6xl font-semibold text-white text-pretty max-w-3xl"> {title} </h1> ) : null} <div className="prose-lg lg:prose-xl prose-invert flex items-center"> {text ? <PortableText value={text} /> : null} </div> </div> <div className="absolute inset-0 bg-pink-500 opacity-50 z-10" /> {image ? ( <Image className="absolute inset-0 object-cover blur-sm" src={urlFor(image).width(1600).height(800).url()} width={1600} height={800} alt="" /> ) : null} </section> ); } ``` 1. **Create** a component to render the FAQs block ```tsx:src/components/blocks/faqs.tsx import { PAGE_QUERYResult } from "@/sanity/types"; import { PortableText } from "next-sanity"; type FAQsProps = Extract< NonNullable<NonNullable<PAGE_QUERYResult>["content"]>[number], { _type: "faqs" } >; export function FAQs({ _key, title, faqs }: FAQsProps) { return ( <section className="container mx-auto flex flex-col gap-8 py-16"> {title ? ( <h2 className="text-xl mx-auto md:text-2xl lg:text-5xl font-semibold text-slate-800 text-pretty max-w-3xl"> {title} </h2> ) : null} {Array.isArray(faqs) ? 
( <div className="max-w-2xl mx-auto border-b border-pink-200"> {faqs.map((faq) => ( <details key={faq._id} className="group [&[open]]:bg-pink-50 transition-colors duration-100 px-4 border-t border-pink-200" name={_key} > <summary className="text-xl font-semibold text-slate-800 list-none cursor-pointer py-4 flex items-center justify-between"> {faq.title} <span className="transform origin-center rotate-90 group-open:-rotate-90 transition-transform duration-200"> ← </span> </summary> <div className="pb-4"> {faq.body ? <PortableText value={faq.body} /> : null} </div> </details> ))} </div> ) : null} </section> ); } ``` 1. **Create** a component to render the Features block ```tsx:src/components/blocks/features.tsx import { PAGE_QUERYResult } from "@/sanity/types"; type FeaturesProps = Extract< NonNullable<NonNullable<PAGE_QUERYResult>["content"]>[number], { _type: "features" } >; export function Features({ features, title }: FeaturesProps) { return ( <section className="container mx-auto flex flex-col gap-8 py-16"> {title ? ( <h2 className="text-xl mx-auto md:text-2xl lg:text-5xl font-semibold text-slate-800 text-pretty max-w-3xl"> {title} </h2> ) : null} {Array.isArray(features) ? ( <div className="grid grid-cols-3 gap-8"> {features.map((feature) => ( <div key={feature._key} className="flex flex-col gap-4"> <h3 className="text-xl font-semibold text-slate-800"> {feature.title} </h3> <p className="text-lg text-slate-600">{feature.text}</p> </div> ))} </div> ) : null} </section> ); } ``` 1. 
**Create** a component to render the Split Image block

```tsx:src/components/blocks/split-image.tsx
import Image from "next/image";
import { urlFor } from "@/sanity/lib/image";
import { PAGE_QUERYResult } from "@/sanity/types";
import { stegaClean } from "next-sanity";

type SplitImageProps = Extract<
  NonNullable<NonNullable<PAGE_QUERYResult>["content"]>[number],
  { _type: "splitImage" }
>;

export function SplitImage({ title, image, orientation }: SplitImageProps) {
  return (
    <section
      className="container mx-auto flex gap-8 py-16 data-[orientation='imageRight']:flex-row-reverse"
      data-orientation={stegaClean(orientation) || "imageLeft"}
    >
      {image ? (
        <Image
          className="rounded-xl w-2/3 h-auto"
          src={urlFor(image).width(800).height(600).url()}
          width={800}
          height={600}
          alt=""
        />
      ) : null}
      <div className="w-1/3 flex items-center">
        {title ? (
          <h2 className="text-3xl mx-auto md:text-5xl lg:text-8xl font-light text-pink-500 text-pretty max-w-3xl">
            {title}
          </h2>
        ) : null}
      </div>
    </section>
  );
}
```

## Render the page builder content

Now that we have components for each block, we need to render them in order. Each array item has a distinct `_type` attribute, which you can switch over to render the correct component. Each item also contains a `_key` value that is unique within the array, which can be passed to React as a `key` prop—required by React for performant and consistent rendering of an array. We have also passed the remaining props to the block component using the spread operator.

1.
**Create** the `PageBuilder` component to render all the content of the page ```tsx:src/components/page-builder.tsx import { Hero } from "@/components/blocks/hero"; import { Features } from "@/components/blocks/features"; import { SplitImage } from "@/components/blocks/split-image"; import { FAQs } from "@/components/blocks/faqs"; import { PAGE_QUERYResult } from "@/sanity/types"; type PageBuilderProps = { content: NonNullable<PAGE_QUERYResult>["content"]; }; export function PageBuilder({ content }: PageBuilderProps) { if (!Array.isArray(content)) { return null; } return ( <main> {content.map((block) => { switch (block._type) { case "hero": return <Hero key={block._key} {...block} />; case "features": return <Features key={block._key} {...block} />; case "splitImage": return <SplitImage key={block._key} {...block} />; case "faqs": return <FAQs key={block._key} {...block} />; default: // This is a fallback for when we don't have a block type return <div key={block._key}>Block not found: {block._type}</div>; } })} </main> ); } ``` 1. **Update** the dynamic page route to use the `PageBuilder` component ```tsx:src/app/(frontend)/[slug]/page.tsx import { PageBuilder } from "@/components/page-builder"; import { sanityFetch } from "@/sanity/lib/live"; import { PAGE_QUERY } from "@/sanity/lib/queries"; export default async function Page({ params, }: { params: Promise<{ slug: string }>; }) { const { data: page } = await sanityFetch({ query: PAGE_QUERY, params: await params, }); return page?.content ? <PageBuilder content={page.content} /> : null; } ``` You should now be able to create page documents, use all of the blocks from the Page Builder array we have created, and preview changes as you author them. ![Sanity Studio Presentation tool showing a website layout](https://cdn.sanity.io/images/3do82whm/next/24b3f7de0a18f9708281c266ec86366427261ebf-2240x1480.png) For now you have click-to-edit functionality in Presentation. 
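Each block component above types its props with TypeScript's `Extract` utility applied to the TypeGen-generated `PAGE_QUERYResult` union. The pattern is easier to read on a small hand-written union; the following is a minimal sketch with made-up block shapes (these types are illustrative, not the generated ones):

```typescript
// Sketch of the Extract<...> pattern used for block props, shown on a
// hand-written discriminated union (the real one comes from Sanity TypeGen).
type Block =
  | { _type: "hero"; _key: string; title: string }
  | { _type: "faqs"; _key: string; faqs: string[] };

// Extract narrows the union to the member whose _type matches.
type HeroProps = Extract<Block, { _type: "hero" }>;
// HeroProps is { _type: "hero"; _key: string; title: string }

// The same narrowing happens at runtime with a _type check, which is
// what the switch statement in PageBuilder relies on.
function isHero(block: Block): block is HeroProps {
  return block._type === "hero";
}

const content: Block[] = [
  { _type: "hero", _key: "a", title: "Welcome" },
  { _type: "faqs", _key: "b", faqs: ["What is Sanity?"] },
];

const heroes = content.filter(isHero);
console.log(heroes[0].title); // "Welcome"
```

Because `Extract` operates on the generated union, each component's props stay in sync with your schema and queries whenever you re-run typegen.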
Before going any further, let's use everything we've built so far to create the application's home page.## [Creating a "home" page](/learn/course/page-building/creating-a-home-page) Create a "singleton" document to store distinct content that is globally relevant to the application. A quick side mission before going further. A website's "home" page is typically used to show the same sort of content that our page builder can generate. So it makes sense to reuse this content structure on the home page like we would any other page. For any editable piece of content that has global relevance to an application—like site name, navigation, footer text, etc—most often you will use a "singleton" document. That is, a document type of which there should only ever be one in a dataset and it likely has a distinct `_id` value. ## Site settings schema type We'll keep the site settings simple for now. A new document type with just a single field—a reference to a page, which will be used as the home page on the site. 1. **Create** the `siteSettings` schema type ```typescript:src/sanity/schemaTypes/siteSettingsType.ts import { defineField, defineType } from "sanity"; import { ControlsIcon } from "@sanity/icons"; export const siteSettingsType = defineType({ name: "siteSettings", title: "Site Settings", type: "document", icon: ControlsIcon, fields: [ defineField({ name: "homePage", type: "reference", to: [{ type: "page" }], }), ], preview: { prepare() { return { title: "Site Settings", }; }, }, }); ``` 1. **Register** `siteSettings` to your Studio schema types ```typescript:src/sanity/schemaTypes/index.ts // ...all other imports import { siteSettingsType } from "./siteSettingsType"; export const schema: { types: SchemaTypeDefinition[] } = { types: [ // ...all other types siteSettingsType, ], }; ``` Singleton documents can be invoked with a distinct `_id` value by configuring it in your structure builder configuration. 
**Update** the structure builder configuration to include a singleton siteSettings document. ```typescript:src/sanity/structure.ts export const structure: StructureResolver = (S) => S.list() .title("Blog") .items([ // ...all other items S.listItem() .id("siteSettings") .schemaType("siteSettings") .title("Site Settings") .child( S.editor() .id("siteSettings") .schemaType("siteSettings") .documentId("siteSettings") ), ...S.documentTypeListItems().filter( (item) => item.getId() && ![ // ...all other ignored types "siteSettings", ].includes(item.getId()!) ), ]); ``` 1. See more examples of what you can do in the [Structure Builder cheat sheet](https://www.sanity.io/learn/studio/structure-builder-cheat-sheet) You should now see the Site Settings document on the left hand side of your Structure tool. Instead of opening a list of documents, it opens a single one. 1. **Select** a "Home Page" reference, and **publish** the Site Settings ### There can only be one To prevent the creation of any more site settings documents, the type can be removed from the "Create" menu at the top left of your Studio. 1. **Update** your Studio config to remove this document type from the list ```typescript:sanity.config.ts export default defineConfig({ // ...all other settings document: { newDocumentOptions: (prev) => prev.filter((item) => item.templateId !== "siteSettings"), }, }); ``` ## Query the home page The current "page" document type query relies on a page slug, so you'll need a different query for this site settings document first, and then query that page. 1. **Update** your `queries.ts` file to include this home page query ```typescript:src/sanity/lib/queries.ts // ...all other queries export const HOME_PAGE_QUERY = defineQuery(`*[_id == "siteSettings"][0]{ homePage->{ ..., content[]{ ..., _type == "faqs" => { ..., faqs[]-> } } } }`); ``` 1. **Run** the following command to update your schema extraction and generated types ```sh:Terminal pnpm run typegen ``` 1. 
This command was set up in the [Generate TypeScript Types](https://www.sanity.io/learn/course/content-driven-web-application-foundations/generate-typescript-types) lesson of the [Content-driven web application foundations](https://www.sanity.io/learn/course/content-driven-web-application-foundations) course.

Then update your home page route file, similar to the dynamic route for individual pages, but for just this distinct home page.

1. **Update** the home page route

```tsx:src/app/(frontend)/page.tsx
import { PageBuilder } from "@/components/page-builder";
import { sanityFetch } from "@/sanity/lib/live";
import { HOME_PAGE_QUERY } from "@/sanity/lib/queries";

export default async function Page() {
  const { data: page } = await sanityFetch({
    query: HOME_PAGE_QUERY,
  });

  return page?.homePage?.content ? (
    <PageBuilder content={page?.homePage.content} />
  ) : null;
}
```

The front page of your application at [http://localhost:3000](http://localhost:3000) should now show the page selected in your Site Settings document. Excellent! You might now imagine how you would build other global content, like your header, footer, and navigation menus, into this same Site Settings document. Let's keep enhancing the editing experience in the next lesson.

## [Drag and drop in Visual Editing](/learn/course/page-building/drag-and-drop-in-visual-editing)

Allow authors to re-order blocks on the page without editing the document. The same functionality you set up in [Add drag-and-drop elements](https://www.sanity.io/learn/course/visual-editing-with-next-js/add-drag-and-drop-elements) can be used here for your Page Builder array. This way authors can reorder array items on the page without needing to use the document editor.

1. You can set up drag-and-drop for _any_ array type field. Consider adding it to the Features and FAQs blocks as well.
## Adding drag handles Drag-and-drop support in Presentation requires the outer DOM element of an array—and every DOM element for an item within the array—to contain additional `data-sanity` attributes. These attributes are created with a `createDataAttribute` function exported from `next-sanity` and require the ID and Type of the source document. Additionally, for fast on-page changes, a `useOptimistic` hook is provided by `next-sanity`. Using this hook will require changing to a client component. The `PageBuilder` component you created in a previous lesson is where we can create and set these attributes, for the `content` array and its individual blocks. 1. **Update** your `PageBuilder` component to add attributes for drag-and-drop ```tsx:src/components/page-builder.tsx "use client"; import { Hero } from "@/components/blocks/hero"; import { Features } from "@/components/blocks/features"; import { SplitImage } from "@/components/blocks/split-image"; import { FAQs } from "@/components/blocks/faqs"; import { PAGE_QUERYResult } from "@/sanity/types"; import { client } from "@/sanity/lib/client"; import { createDataAttribute } from "next-sanity"; import { useOptimistic } from "next-sanity/hooks"; type PageBuilderProps = { content: NonNullable<PAGE_QUERYResult>["content"]; documentId: string; documentType: string; }; const { projectId, dataset, stega } = client.config(); export const createDataAttributeConfig = { projectId, dataset, baseUrl: typeof stega.studioUrl === "string" ? 
stega.studioUrl : "", }; export function PageBuilder({ content, documentId, documentType, }: PageBuilderProps) { const blocks = useOptimistic< NonNullable<PAGE_QUERYResult>["content"] | undefined, NonNullable<PAGE_QUERYResult> >(content, (state, action) => { if (action.id === documentId) { return action?.document?.content?.map( (block) => state?.find((s) => s._key === block?._key) || block ); } return state; }); if (!Array.isArray(blocks)) { return null; } return ( <main data-sanity={createDataAttribute({ ...createDataAttributeConfig, id: documentId, type: documentType, path: "content", }).toString()} > {blocks.map((block) => { const DragHandle = ({ children }: { children: React.ReactNode }) => ( <div data-sanity={createDataAttribute({ ...createDataAttributeConfig, id: documentId, type: documentType, path: `content[_key=="${block._key}"]`, }).toString()} > {children} </div> ); switch (block._type) { case "hero": return ( <DragHandle key={block._key}> <Hero {...block} /> </DragHandle> ); case "features": return ( <DragHandle key={block._key}> <Features {...block} /> </DragHandle> ); case "splitImage": return ( <DragHandle key={block._key}> <SplitImage {...block} /> </DragHandle> ); case "faqs": return ( <DragHandle key={block._key}> <FAQs {...block} /> </DragHandle> ); default: // This is a fallback for when we don't have a block type return <div key={block._key}>Block not found: {block._type}</div>; } })} </main> ); } ``` The `PageBuilder` component now requires the source document ID and document type. 1. **Update** your routes that load this component to include these props. 
```tsx:src/app/(frontend)/[slug]/page.tsx
import { PageBuilder } from "@/components/page-builder";
import { sanityFetch } from "@/sanity/lib/live";
import { PAGE_QUERY } from "@/sanity/lib/queries";

export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { data: page } = await sanityFetch({
    query: PAGE_QUERY,
    params: await params,
  });

  return page?.content ? (
    <PageBuilder
      documentId={page._id}
      documentType={page._type}
      content={page.content}
    />
  ) : null;
}
```

Don't forget the home page route as well.

```tsx:src/app/(frontend)/page.tsx
import { PageBuilder } from "@/components/page-builder";
import { sanityFetch } from "@/sanity/lib/live";
import { HOME_PAGE_QUERY } from "@/sanity/lib/queries";

export default async function Page() {
  const { data: page } = await sanityFetch({
    query: HOME_PAGE_QUERY,
  });

  return page?.homePage?.content ? (
    <PageBuilder
      documentId={page?.homePage._id}
      documentType={page?.homePage._type}
      content={page?.homePage.content}
    />
  ) : null;
}
```

### Test it out

Within Presentation you should now see the "drag handle" icon (two columns of three dots) when hovering over the outer edge of each block. Click and hold to drag and drop. Additionally, hold shift while dragging to zoom the page out and see the entire array at once. You've created your page builder, wired it up to work on the frontend, and used the visual editor to rearrange block order. You have built the gold standard of editorial experience for your end users. Great job!

## Time to review

What remains is to learn about some of the pitfalls and challenges of using the visual editor at scale, which will be covered in the final lesson.

## [Scaling page builders and pitfalls](/learn/course/page-building/scaling-page-builders-and-pitfalls)

How to keep your page builder tidy as your project grows over time. Your page builder works just fine at this stage. But what happens when you have 20 more components, 100 more pages, and 10 more users?
This lesson covers those questions and the pitfalls ahead. ## The pitfalls ### Don't include too many variations If a block has too many variations, you're going to run into a lot of edge cases. It's also difficult to manage because you will have to pick a thumbnail as the `main` image for the block. If your variation is different from the `main` image it becomes confusing for the content team to differentiate between the variations. Here's a good rule of thumb: if you have more than two variations of a block, you should consider splitting them into individual blocks. ### Is it a page builder or a document? Think carefully about modeling your content as a page builder or a document type. If you want your content to be more rigid and you want to be able to reuse the same block in multiple places, then you should use a document type. Alternatively, if you want your content to be reordered, have different layouts, or have different components, then you should use a page builder. ### Paradox of choice for marketers To help your marketing team create pages efficiently, limit the number of block options available. Too many choices can confuse and overwhelm, leading to mistakes and delays in content creation. For example, if a "features" block is unnecessary for a "case study," remove it from the options for that document type. This streamlines the process and makes it easier for new team members to navigate. ![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/dc699183c3d94393b51f91fd1954093b2460ad32-1590x2078.png) ### Use references sparingly This is the number one mistake I see in the wild. Avoid making your entire page builder an array of references; it's more difficult to scale. The reason is that often, you won't need to use the same referenced block in multiple places. One of the few exceptions is a repeated call to action where the text is identical or you might have two different versions that appear on many different pages. 
This is a good use case for references. If there is not a clear need to re-use content across many pages, use an object.

### Remember to prune

As you scale your website and your page builder, you will naturally have blocks that you no longer use. You should prune them regularly to eliminate technical debt. If you do need to start removing blocks, consider the course [Handling schema changes confidently](https://www.sanity.io/learn/course/handling-schema-changes-confidently).

## Go deeper

What you've built in this course is a basic implementation using default settings. The dynamic nature of building Sanity Schema with TypeScript lends itself to opinionated abstractions. Take a look at these resources for some inspiration:

1. [Vyuh Framework's "Structure Plugin"](https://docs.vyuh.tech/guides/sanity/structure-plugin/)

## Start building!

You've made it to the end of the course. You're now fully equipped to start building your own page builders with confidence. We hope this structured approach to page building will make content management simpler and more efficient for your team. We'd love to see the page builders that you create: [tag us on X](https://x.com/sanity_io) or [join our community](https://slack.sanity.io/) to share any projects you build.

# [A/B Testing](/learn/course/a-b-testing)

Understand the what, why, and how of A/B testing within Sanity Studio. Enable data-driven decision-making that leads to an improved product.

## [Introduction to A/B Testing](/learn/course/a-b-testing/introduction-to-a-b-testing)

Why would we want to A/B test, and how do we plan for an A/B test?

In this course you will learn:

* What A/B testing is
* How to A/B test
* How to set up field-level experiments using `@sanity/personalization-plugin`
* Adding an experiment to a field
* Connecting an external service to get running experiments
* Getting data for a running experiment
* Running an experiment on a page

## About the author

Hi, I'm Jon, a Senior Solutions Architect at Sanity.
I work with our customers to enable them to get the full value out of the Sanity platform. As part of my role I have been developing the `@sanity/personalization-plugin` and working with our customers on how to implement it. Prior to joining Sanity, I was a customer and worked on implementing Sanity into new and existing projects. I made sure we were making data-driven decisions by implementing A/B testing of new features, and helped content editors to test their changes.

## What is A/B Testing

We often want to make changes to our content, and often this is because we think the change will help our website, app, or other platform perform better. This could just be a hunch, so we need to check that we are actually improving the platform. A/B testing is a method that has been used to test hypotheses for over 100 years (although it might not have had that name), in fields from medicine to farming to advertising. An A/B test is a basic kind of randomized controlled experiment, where you compare two versions of something to figure out which performs better. There are many ways to measure the performance of a version: increased conversion rate, decreased bounce rate, scroll depth, retention rate, average order value, or customer satisfaction. What you choose will depend on what you are testing, but you are essentially seeing if users who have viewed the variant have improved against your chosen metric compared with those who have viewed the control. A/B testing should help us to better understand our customers, make more effective choices, and increase conversions. A/B tests can be as simple as the choice of wording on a button (“buy now” vs “add to cart”), or they could be two completely separate versions of a page (this is split testing, which often gets lumped in with A/B testing). A/B tests don't need to be limited to two versions of content: A/B/N testing has N versions of content that are all included in the same experiment.
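One common way to assign each user to one of the N versions is to hash a stable user ID so the same user always sees the same variant without storing any state. Here is a minimal sketch; the hash scheme, function names, and variant IDs are illustrative and not part of `@sanity/personalization-plugin` or any particular service:

```typescript
// Minimal sketch of deterministic variant assignment for an A/B/N test.
// Real experimentation services handle allocation (and traffic splitting)
// for you; this only illustrates the idea.

// A tiny stable string hash (FNV-1a), so the same input always produces
// the same bucket.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Assign a user to one of N variants for a given experiment.
function assignVariant(
  userId: string,
  experimentId: string,
  variants: string[]
): string {
  const bucket = fnv1a(`${experimentId}:${userId}`) % variants.length;
  return variants[bucket];
}

// The assignment is stable: the same user/experiment pair always
// produces the same variant.
const variants = ["control", "variant"];
const first = assignVariant("user-123", "event-name", variants);
const second = assignVariant("user-123", "event-name", variants);
console.log(first === second); // true
```

Hashing on `experimentId` as well as `userId` means a given user can land in different buckets across different experiments, which keeps experiments independent of each other.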
It is now common practice for companies to run A/B testing on their digital platforms, with companies like Amazon, Facebook, and Google each conducting more than 10,000 experiments per year.

## How to A/B Test

We need to start with a clear goal of what we want to measure, and a hypothesis of how we think we can make a change to this goal. In order to work out if our test is a success, we need to collect the data we want to track: conversion rates, user engagement, or bounce rate, among others. Once we have that, we can create the versions of content we want to test (usually a control and a variant). We assign a user to a group and show each group a version of the content or a page, and then use statistical analysis of the data we have collected to determine which version performs better. Normally we would assign the user at random, but we need to be aware of other factors that may influence the results of the test. For example, you may be testing the wording of a button, but that button may only appear on the desktop version of your website. In this case you might split users by device first and then assign their group. Now that we know a bit about A/B testing, let's configure fields for experimentation using the `@sanity/personalization-plugin`.

## [Field Level Experimentation](/learn/course/a-b-testing/field-level-experimentation)

Sanity has created a plugin that allows you to add A/B/N testing experiments to individual fields. You can set the experiments and their variations as config or use an async function to return the information.

## Prerequisites

You should be able to follow this course if you have completed the Day One with Sanity Studio course. It will also build on the same studio, schema, and front end.
1. Go to the [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio) course

### Import example content to your dataset

To save you from having to fill out and publish a bunch of content to make this course interesting, we have prepared a dataset that you can import and work on. Download `production.tar.gz` below and import it into your Sanity project by running the following command in your studio folder:

1. **Download** the dataset export
2. **Run** the following command in the terminal to import the dataset into your project's `production` dataset

```sh
npx sanity@latest dataset import ~/path/to/production.tar.gz production
```

A successful import will give you a bunch of artists, venues, and events in the past and future between 2010–2030.

![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/583efc78ea24be55801fbea2b64b975a8e1782ab-2144x1388.png)

## Configure Field Level Experiments

You could create your own schema types to support A/B testing from external sources, but Sanity has created a plugin that gives an opinionated way of handling A/B testing inside Sanity Studio. We are going to add a new field to run an experiment on the name of an event. Our hypothesis is that more information in the name of an event will encourage our users to read more about it. First you need to install and add the plugin `@sanity/personalization-plugin` to your Sanity Studio configuration.

1. **Run** the following command in the terminal to install the [Personalization Plugin](https://github.com/sanity-io/sanity-plugin-personalization)

```sh
npm install @sanity/personalization-plugin
```

When configuring the plugin you will need to specify which field types to run experiments on, and the details of the experiments and their variants that we are running.
1. **Update** `sanity.config.ts` to configure the plugin's field types and experiments

```typescript:sanity.config.ts
// ...all other imports
import {fieldLevelExperiments} from '@sanity/personalization-plugin'

export default defineConfig({
  // ... all other config settings
  plugins: [
    // ...all other plugins
    fieldLevelExperiments({
      // field types that you want to be able to experiment on
      fields: ['string'],
      // hardcoded experiments and variants
      experiments: [
        {
          id: 'event-name',
          label: 'Event Name',
          variants: [
            {
              id: 'control',
              label: 'Control',
            },
            {
              id: 'variant',
              label: 'Variant',
            },
          ],
        },
      ],
    }),
  ],
})
```

Adding a field will create a new type that we can use in our schema types. This type will be prefixed with `experiment`. In this example the new type to use in the schema will be `experimentString`. With this we can add a new Name experimentation field to the schema of our event.

1. **Update** `eventType.ts` to add a `newName` field to the event schema

```typescript:schemaTypes/eventType.ts
export const eventType = defineType({
  name: 'event',
  title: 'Event',
  icon: CalendarIcon,
  type: 'document',
  groups: [
    {name: 'details', title: 'Details'},
    {name: 'editorial', title: 'Editorial'},
  ],
  fields: [
    defineField({
      name: 'name',
      type: 'string',
      group: ['editorial', 'details'],
    }),
    defineField({
      name: 'newName',
      type: 'experimentString',
      group: ['editorial', 'details'],
    }),
    // ... and the rest
```

You're adding a new field using the new `experimentString` type to ensure you don't accidentally clear the existing value. This will render the new experimentation field with a default value input.

![Sanity Studio showing new experimentation field](https://cdn.sanity.io/images/3do82whm/next/c1948ee024a5f99d719bf6352a0171e79b8242cc-3454x2164.png)

Let's add content for our new field in the Studio:

* Add a default value to be used when the experiment is not running or if the user is not included in the experiment.
* Select "Add experiment" on the field.
* Select which experiment you want to add the content for. * Add content for the control 1. **Add** content for the A/B test ![Missing alt text](https://cdn.sanity.io/images/3do82whm/next/f0abfb2b5488f2ff66846c9bed25024fb9f8737e-1080x622.gif) ## Complex Field Types When specifying the field types in the plugin config you can use Sanity Studio primitives, your own custom defined types, types from other plugins, or you can define a type inline in the plugin config. You can add validation/options in the plugin config or on the schema definition. ```typescript:sanity.config.ts // ...all other imports import {defineArrayMember, defineConfig, defineField} from 'sanity' import {fieldLevelExperiments} from '@sanity/personalization-plugin' export default defineConfig({ // ...all other config plugins: [ // ... all other plugins fieldLevelExperiments({ fields: [ // primitives 'string', 'image', 'slug', // custom defined types 'path', // internationalized Array plugin types 'internationalizedArrayString', // types defined in plugin config defineField({ name: 'featuredArtist', type: 'reference', to: [{type: 'artist'}], hidden: ({document}) => !document?.title, validation: (Rule) => Rule.required() }), defineField({ name: 'customArray', title: 'Custom Array', type: 'array', of: [defineArrayMember({type: 'object', fields: [{name: 'string', type: 'string'}]})], }), ], // experiments removed for brevity }), ], }) ``` ```typescript:schemaTypes/articleType.ts import {defineField, defineType} from 'sanity' export const path = defineType({ name: 'path', type: 'string', validation: (Rule) => Rule.required().custom((value: string | undefined, context) => { if (!value) return true if (!value.startsWith('/')) return 'Must start with "/"' return true }), }) export const article = defineType({ name: 'article', title: 'Article', type: 'document', fields: [ defineField({ name: 'title', title: 'Title', type: 'experimentString', }), defineField({ name: 'path', title: 'Path', type: 'experimentPath',
}), defineField({ name: 'image', title: 'Image', type: 'experimentImage', // validation runs against the whole experiment object validation: (Rule) => Rule.custom((value: any) => { if (!value?.default) return 'Must have a default image' if (value.active && (!value.variants || value.variants?.length === 0)) { return 'Must have at least one variant' } return true }), }), defineField({ name: 'featuredArtist', title: 'Featured Artist', type: 'experimentFeaturedArtist', }), defineField({ name: 'array', type: 'experimentCustomArray', }), ], preview: { select: { title: 'title', media: 'image', }, prepare({title, media}) { return { title: title?.default, subtitle: title?.variants ? title.variants.map((variant: any) => variant.value).join(' | ') : undefined, media: media?.default ?? undefined, } }, }, }) ``` ## Getting experiments You might consider using an external service for your A/B testing, as these services handle traffic allocation on the frontend, connect with analytics platforms to report on your experiments, and often offer other features such as feature flagging and personalization. When choosing a service, consider whether it meets your needs, what other tools you might need, and what it will cost. You can connect `@sanity/personalization-plugin` to an external service by using an async function to get your experiments and variants. This function is passed a Sanity Client so you can use information stored in the Content Lake if needed, as in the example below. ```typescript:sanity.config.ts import { getExperiments } from './utils/experiments' import type { SanityClient } from 'sanity' export default defineConfig({ // ... all other config plugins: [ // ... all other plugins fieldLevelExperiments({ // ..
field types experiments: (client: SanityClient) => getExperiments(client), }), ], }) ``` ```typescript:utils/experiments.ts import {SanityClient} from 'sanity' import {ExperimentType} from '@sanity/personalization-plugin' // illustrative shape of the external service's response type ExternalExperiment = { id: string name: string variations?: {variationId: string; name: string}[] } export const getExperiments = async (client: SanityClient) => { // secret is stored in the content lake using @sanity/studio-secrets const query = `*[_id == 'secrets.namespace'][0].secrets.key` const secret = await client.fetch(query) if (!secret) { return [] } // call to external api to fetch experiments const response = await fetch('https://example.api/experiments', { headers: { Authorization: `Bearer ${secret}`, }, }) const {experiments: externalExperiments} = await response.json() // map and transform to get experiments and their variations const experiments: ExperimentType[] = externalExperiments?.map( (experiment: ExternalExperiment) => { const experimentId = experiment.id const experimentLabel = experiment.name const variants = experiment.variations?.map((variant) => { return { id: variant.variationId, label: variant.name, } }) return { id: experimentId, label: experimentLabel, variants, } }, ) ?? [] return experiments } ``` In the following lesson you will get the data for the experiment and add it to your Next.js application. ## [Implementing an A/B test](/learn/course/a-b-testing/implementing-an-a-b-test) How to query for data and how to set up an A/B test on a front end How content is rendered will influence how you fetch the data used on the page. If your page is statically generated, its content is fixed at build time and cannot change per request. In this instance, you might want to apply a routing-based approach with middleware. For server-side rendering, pass the values for which variant to use as parameters to your query. This might require middleware to set the user group and store it in a cookie. For client-side rendering you can add the parameters to the fetch—or you can fetch all variants and then filter the results client-side.
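The routing-based approach for static pages can be sketched as a pure helper (the route layout and group names here are illustrative, not from this course): Next.js middleware would compute a variant-specific path with a function like this and pass the result to `NextResponse.rewrite()`, so each group is served a pre-built page while the URL in the browser stays the same.

```typescript
// Illustrative sketch: map a request path to a pre-built variant page.
// In middleware you would call this with request.nextUrl.pathname and
// rewrite the request to the returned path (wiring omitted for brevity).
export function variantPath(
  pathname: string,
  group: "control" | "variant"
): string {
  // The control group keeps the original, statically generated page
  if (group === "control") return pathname;
  // Other groups are rewritten to a pre-built variant route,
  // e.g. /events/summer-fest -> /events/variant/summer-fest
  return pathname.replace(/^\/events\//, `/events/${group}/`);
}
```

Combined with a cookie that stores the assigned group, this keeps every user on a consistent variant across visits.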
To ensure a consistent experience for a user it is a good idea to set a cookie to store the value of the user group they have been assigned. ## Server-side filtering, in the query Filtering in the query may look something like this: ```groq *[ _type == "event" && slug.current == $slug ][0]{ ..., "name": coalesce( newName.variants[ experimentId == $experiment && variantId == $variant ][0].value, newName.default, name ), "date": coalesce(date, now()), "doorsOpen": coalesce(doorsOpen, 0), headline->, venue-> } ``` This query uses the `coalesce` function to get the correct variant based on the experiment and the variant. If those are not present or do not match you'll get the default value, and then fall back to the existing `name` field in case the data has not been migrated across. You will need to pass along values for the `$experiment` and `$variant` query params. These should come from an external service or a cookie set on the user. ## Client-side filtering, in JavaScript If you're unable to perform filtering server-side in the query, you may instead query for all variants, and then write a function like this, which looks through the returned variants and returns the matching one. ```typescript const getExperimentContent = (field, experimentId, variantId) => { return ( field.variants?.find( (variant) => variant.experimentId === experimentId && variant.variantId === variantId )?.value ?? field.default ); }; ``` You also need to consider occasions when A/B tests are not running; some users might be excluded from an experiment, so ensure you get a fallback value in those cases. ## Implementing A/B Tests 1. The following details how to implement A/B tests in Next.js, but the same principles and implementation patterns could be applied in any framework. Let's try implementing the A/B test created in the Studio in the Next.js application. For this you'll use the hardcoded experiment that was initially added.
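To see the client-side fallback behavior from earlier in isolation, here is a typed sketch of that helper with illustrative data (the field shape mirrors what the plugin stores, but the values are made up):

```typescript
type Variant = { experimentId: string; variantId: string; value: string };
type ExperimentField = { default: string; variants?: Variant[] };

// Match on experiment and variant IDs, otherwise fall back to the default value
const getExperimentContent = (
  field: ExperimentField,
  experimentId: string,
  variantId: string
): string =>
  field.variants?.find(
    (v) => v.experimentId === experimentId && v.variantId === variantId
  )?.value ?? field.default;

// Illustrative field data, as the plugin might store it
const newName: ExperimentField = {
  default: "Summer Fest",
  variants: [
    {
      experimentId: "event-name",
      variantId: "variant",
      value: "Summer Fest: 3 stages, 40 artists",
    },
  ],
};

getExperimentContent(newName, "event-name", "variant"); // matching variant value
getExperimentContent(newName, "event-name", "other"); // falls back to "Summer Fest"
```

Because of the optional chaining and `??`, the helper also returns the default when no experiment is running or the field has no variants at all.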
In order to get a working A/B test you'll need some way of assigning an ID and group to a user—using middleware—and retrieving the variant that user should see. 1. Learn more about [Next.js middleware](https://nextjs.org/docs/app/building-your-application/routing/middleware) in their documentation. You'll also need a way to track if a user has viewed an experiment. This tracking would typically be done on an external service like Google Analytics, Segment, or another analytics platform linked to an A/B testing service. To analyze the experiment we need to know whether a user has been included in an experiment and which variant they saw. We can get the `userId` and `userGroup` and use them in a client component to send an event when the user has viewed a page with an experiment. 1. **Create** functions for getting the variant and user ID, and for setting the user group ```typescript:src/lib/experiments.ts import { v4 } from "uuid"; import { cookies } from "next/headers"; import type { NextRequest, NextResponse } from "next/server"; type Experiment = Record< string, { label: string; variants: { id: string; label: string }[] } >; const EXPERIMENTS: Experiment = { "event-name": { label: "Event Name", variants: [ { id: "control", label: "Control", }, { id: "variant", label: "Variant", }, ], }, }; const getTestCookie = async () => { const cookieStore = await cookies(); return cookieStore.get("ab-test")?.value; }; export const getUserGroup = async () => { const testCookie = await getTestCookie(); return testCookie ?
JSON.parse(testCookie)?.userGroup : "control"; }; // mocking a fetch to an external service for getting an experiment variant export const getExperimentValue = async (experimentName: string) => { const userGroup = await getUserGroup(); return { variant: EXPERIMENTS[experimentName].variants.find( (variant) => variant.id === userGroup ), }; }; export const setCookiesValue = ( request: NextRequest, response: NextResponse ) => { if (!request.cookies.has("ab-test")) { // randomly assign a user to a group const userGroup = Math.random() > 0.5 ? "control" : "variant"; // create a user ID const userId = v4(); // Setting cookies on the response using the `ResponseCookies` API response.cookies.set("ab-test", JSON.stringify({ userGroup, userId })); } return response; }; // If the user is part of any experiments, get the tracking call data // This is passed into the <Tracking> client component export const getDeferredTrackingData = async (): Promise< | { userGroup: string; userId: string; } | undefined > => { const testCookie = await getTestCookie(); const data = testCookie ? JSON.parse(testCookie) : undefined; return data; }; ``` 1.
**Create** middleware to intercept the page request and store the user group and user ID as a cookie ```typescript:src/middleware.ts import { NextResponse } from "next/server"; import type { NextRequest } from "next/server"; import { setCookiesValue } from "./lib/experiments"; export function middleware(request: NextRequest) { let response = NextResponse.next(); response = setCookiesValue(request, response); return response; } export const config = { matcher: [ /* * Match all request paths except for the ones starting with: * - api (API routes) * - _next/static (static files) * - _next/image (image optimization files) * - favicon.ico, sitemap.xml, robots.txt (metadata files) */ "/((?!api|_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)", ], }; ``` A new component is required to send back events when a user has been assigned an ID and a group and has taken part in an experiment. The example code below only logs these events to the console. You would need to replace this with the integration of your choice for measuring experiments. 1. **Create** client component for sending tracking data ```tsx:src/components/tracking.tsx "use client"; import { useEffect } from "react"; // Helper component to track experiment views from server components export function Tracking({ userGroup, userId, }: { userGroup: string; userId: string; }) { useEffect(() => { // TODO: track with Google Analytics, Segment, etc. console.log("Viewed Experiment, send tracking", { userGroup: userGroup, userId: userId, }); }, [userId, userGroup]); return null; } ``` Update the route for a single event to query for the correct variant as well as include the `Tracking` component. 1.
**Update** the single event page route ```tsx:src/app/events/[slug]/page.tsx import { getExperimentValue, getDeferredTrackingData } from "@/lib/experiments"; import { Tracking } from "@/components/tracking"; import { client } from "@/sanity/client"; import { sanityFetch } from "@/sanity/live"; import imageUrlBuilder from "@sanity/image-url"; import { SanityImageSource } from "@sanity/image-url/lib/types/types"; import { defineQuery, PortableText } from "next-sanity"; import Image from "next/image"; import Link from "next/link"; import { notFound } from "next/navigation"; const EVENT_QUERY = defineQuery(`*[ _type == "event" && slug.current == $slug ][0]{ ..., "name": coalesce(newName.variants[experimentId == $experiment && variantId == $variant][0].value, newName.default, name), "date": coalesce(date, now()), "doorsOpen": coalesce(doorsOpen, 0), headline->, venue-> }`); const { projectId, dataset } = client.config(); const urlFor = (source: SanityImageSource) => projectId && dataset ? imageUrlBuilder({ projectId, dataset }).image(source) : null; export default async function EventPage({ params, }: { params: Promise<{ slug: string }>; }) { const { slug } = await params; const { variant } = await getExperimentValue("event-name"); const trackingData = await getDeferredTrackingData(); const queryParams = { slug, experiment: "event-name", variant: variant?.id || "", }; const { data: event } = await sanityFetch({ query: EVENT_QUERY, params: queryParams, }); if (!event) { notFound(); } const { name, date, headline, image, details, eventType, doorsOpen, venue, tickets, } = event; const eventImageUrl = image ?
urlFor(image)?.width(550).height(310).url() : null; const eventDate = new Date(date).toDateString(); const eventTime = new Date(date).toLocaleTimeString(); const doorsOpenTime = new Date( new Date(date).getTime() - doorsOpen * 60000 ).toLocaleTimeString(); return ( <main className="container mx-auto grid gap-12 p-12"> <div className="mb-4"> <Link href="/">← Back to events</Link> </div> <div className="grid items-top gap-12 sm:grid-cols-2"> <Image src={eventImageUrl || "https://placehold.co/550x310/png"} alt={name || "Event"} className="mx-auto aspect-video overflow-hidden rounded-xl object-cover object-center sm:w-full" height="310" width="550" /> <div className="flex flex-col justify-center space-y-4"> <div className="space-y-4"> {eventType ? ( <div className="inline-block rounded-lg bg-gray-100 px-3 py-1 text-sm dark:bg-gray-800 capitalize"> {eventType.replace("-", " ")} </div> ) : null} {name ? ( <h1 className="text-4xl font-bold tracking-tighter mb-8"> {name} </h1> ) : null} {headline?.name ? ( <dl className="grid grid-cols-2 gap-1 text-sm font-medium sm:gap-2 lg:text-base"> <dd className="font-semibold">Artist</dd> <dt>{headline?.name}</dt> </dl> ) : null} <dl className="grid grid-cols-2 gap-1 text-sm font-medium sm:gap-2 lg:text-base"> <dd className="font-semibold">Date</dd> <div> {eventDate && <dt>{eventDate}</dt>} {eventTime && <dt>{eventTime}</dt>} </div> </dl> {doorsOpenTime ? ( <dl className="grid grid-cols-2 gap-1 text-sm font-medium sm:gap-2 lg:text-base"> <dd className="font-semibold">Doors Open</dd> <div className="grid gap-1"> <dt>{doorsOpenTime}</dt> </div> </dl> ) : null} {venue?.name ?
( <dl className="grid grid-cols-2 gap-1 text-sm font-medium sm:gap-2 lg:text-base"> <div className="flex items-start"> <dd className="font-semibold">Venue</dd> </div> <div className="grid gap-1"> <dt>{venue.name}</dt> </div> </dl> ) : null} </div> {details && details.length > 0 && ( <div className="prose max-w-none"> <PortableText value={details} /> </div> )} {tickets && ( <a className="flex items-center justify-center rounded-md bg-blue-500 p-4 text-white" href={tickets} > Buy Tickets </a> )} </div> </div> {trackingData && ( <Tracking userGroup={trackingData.userGroup} userId={trackingData.userId} /> )} </main> ); } ``` With this done you will now see that a cookie is set when you visit an event page, and that, based on that cookie and the content in your Sanity Studio, the name of the event will show one of three options (the default if no experiment matches, the control for an experiment, or the variant). ![Removing A/B Testing cookie](https://cdn.sanity.io/images/3do82whm/next/4b07bb035898101349a9f48f92c4d468a9da85be-1080x647.gif) If you remove the cookie and refresh the page you will see a new cookie is added, and there is a random chance you will be assigned a different group, thus potentially seeing a different name. Let's test everything you've learned in the final lesson with a quiz. ## [A/B Testing quiz](/learn/course/a-b-testing/a-b-testing-quiz) You have made it this far 🎉, let's have a quiz to put your A/B testing knowledge to the ... test. **Question:** What is A/B Testing? 1. A way to compare two versions of content to see which has the most page views 2. A way to compare two versions of content to see which performs better. 3. A way to convince your boss that your idea is better by showing them two versions of it and hoping they pick the one you want. 4. A way to pick at random what content you should use **Question:** What are common names for the A/B test versions? 1. Control and Comparison 2. Control and Benchmark 3. Benchmark and Variant 4.
Control and Variant 5. Comparison and Variant **Question:** How do you assign a user to a group? 1. Assign them based on what you think they will prefer 2. First x see version 1, then next y see version 2 3. At random but aware of biases like screen size 4. By location **Question:** When using the fieldLevelExperiments plugin from `@sanity/personalization-plugin`, what would be the name of the type to use for an experiment on a text field? 1. experimentText 2. testText 3. customText 4. textExperiment **Question:** When using a function to get experiments in the fieldLevelExperiments plugin, what is passed in? 1. current user 2. schema 3. sanity client **Question:** How do you ensure a consistent user experience when doing A/B testing? 1. Only run one experiment at a time 2. Only show one variant at a time 3. Add a cookie to store which variant a user sees 4. Ask the user what version they want to see # [Integrated Visual Editing with Next.js](/learn/course/visual-editing-with-next-js) The ultimate upgrade for content authors is to have absolute confidence in the impact of their work before they press publish – as well as the tools to rapidly find and update even the most minor pieces of content. ## [Understanding Visual Editing](/learn/course/visual-editing-with-next-js/understanding-visual-editing) Visual Editing is powered by a combination of Sanity features, which is helpful to understand before implementation. Content creators will find it highly beneficial to preview the impact of their work before pressing publish. An interactive live preview will give them greater confidence to do so. Visual Editing is the catch-all term for the ability for content creators to make and see the impact of content changes in real-time, even when working on draft documents. It also describes navigating the website to find and edit content instead of browsing through document lists in the Sanity Studio Structure tool.
## Goals of this course Once you have completed this course, you will: * Know how to create, store, and access Sanity project tokens so your application can query for private documents such as draft documents. * Enable Next.js "draft mode" for authenticated users to put your application into a dynamic state. * Configure the Presentation plugin to browse and edit the application within Sanity Studio. * In the Studio, configure document "locations" so content creators can move quickly between the Structure and Presentation tools. * Switch data fetching to React Loader for enhanced Visual Editing with faster previews. ### Glossary The following terms describe the functions that combine to create an interactive live preview: * [**Visual Editing**](https://www.sanity.io/docs/introduction-to-visual-editing) can be enabled on **any** hosting platform or [front end](https://www.sanity.io/glossary/front-end) framework. * [**Perspectives**](https://www.sanity.io/docs/perspectives) modify queries to return either draft or published content. These are especially useful for server-side fetching to display draft content on the initial load when previewing drafts. * [**Content Source Maps**](https://www.sanity.io/docs/content-source-maps) aren't something you'll need to interact with directly, but they are used by Stega encoding (below) when enabled. They are an extra response from the [Content Lake](https://www.sanity.io/docs/datastore) that notes the full path of every field of returned content. * [**Stega encoding**](https://www.sanity.io/docs/stega) is when the Sanity Client takes Content Source Maps and combines every field of returned content with an invisible string of characters, which contains the full path from the content to the field within its source document.
* [**Overlays**](https://www.sanity.io/docs/visual-editing-overlays) are created by a dedicated package that looks through the DOM for these Stega encoded strings and creates clickable links to edit documents. * [**Presentation**](https://www.sanity.io/docs/presentation) is a plugin included with Sanity Studio to simplify displaying a front end inside an iframe with an adjacent document editor. It can communicate directly with the front end instead of making round-trips to the Content Lake for faster live preview. * [**Draft mode**](https://nextjs.org/docs/app/building-your-application/configuring/draft-mode) is a Next.js-specific way of enabling, checking, and disabling a global variable available during requests, primarily used to make your application query draft content. * In other frameworks, you might replace this with an environment variable, cookie, or session. Let's get started. ## [Token handling and security](/learn/course/visual-editing-with-next-js/token-handling-and-security) To access draft content your application will need to be authenticated with a token. Learn how to do this securely. In a public dataset, documents are kept private in the Content Lake when they have a period (`.`) in the `_id` attribute. For example, draft document IDs begin with a `drafts.` prefix. Authentication will also be required to use the `previewDrafts` "perspective," a method of performing a GROQ query that returns the latest draft version of a document instead of an already-published document. 1. Learn more about [Perspectives for Content Lake](https://www.sanity.io/learn/content-lake/perspectives) in the documentation To view draft content, requests to the Content Lake require authentication. On the client side, the same credentials that allow authors to log in to Sanity Studio will handle this. On the server side, an API token will be required. 1.
Learn more about [Authentication](https://www.sanity.io/learn/content-lake/http-auth) in the documentation ## Creating an API token Access tokens can be created from Manage or the API. You can access Manage for your project either from the menu at the top left of your Studio: ![Sanity Studio with "Manage project" button selected](https://cdn.sanity.io/images/3do82whm/next/58a1805b2385a3677dd409e4381e7207eb9e0ecf-2240x1488.png) Or you can automatically open your browser to the Manage page of your project from the command line: ```text pnpm dlx sanity manage ``` 1. In Manage, go to the "API" tab and create a token with "Viewer" permissions ![Creating a new token in Manage](https://cdn.sanity.io/images/3do82whm/next/fb7030e01dc7102aae21a597db2b724a137596b0-2240x1488.png) 1. **Update** your `.env.local` file to include the token ```text:.env.local NEXT_PUBLIC_SANITY_PROJECT_ID="your-project-id" NEXT_PUBLIC_SANITY_DATASET="your-dataset-name" # 👇 add this line SANITY_API_READ_TOKEN="your-new-token" ``` 1. It is your responsibility to secure this token. If leaked, it could allow anyone to read any document from any dataset in your project. The way it is implemented in this course should never lead to it being included in your code bundle. You may need to restart your development environment to make the token available. The file below will throw an error if the token is not found in your environment variables. 1. **Create** a new file to store, protect, and export this token ```typescript:src/sanity/lib/token.ts export const token = process.env.SANITY_API_READ_TOKEN if (!token) { throw new Error('Missing SANITY_API_READ_TOKEN') } ``` Now the token can be exported from a reliable location. In the next lesson you'll add it to the `defineLive` function. ## [Receiving live edits to drafts](/learn/course/visual-editing-with-next-js/fetching-preview-content-in-draft-mode) Add perspectives to your Sanity data fetches to query for draft content, when Draft Mode is enabled.
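Before wiring this into the course's helpers, it can help to see that a "perspective" is simply a client configuration option. A minimal sketch with the token from above (project values are placeholders, and this is illustrative rather than the course's final setup):

```typescript
import { createClient } from "@sanity/client";

// Placeholder values; use your own project ID and dataset
const previewClient = createClient({
  projectId: "your-project-id",
  dataset: "production",
  apiVersion: "2024-01-01",
  useCdn: false, // draft content is never served from the CDN
  token: process.env.SANITY_API_READ_TOKEN, // required to read drafts
  perspective: "previewDrafts", // return the latest draft where one exists
});

// The same query with `perspective: "published"` returns only published documents
// const posts = await previewClient.fetch(`*[_type == "post"]`)
```

In this course the switching between perspectives is handled for you, but the underlying mechanism is this one option.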
For interactive live preview to be truly immersive, the same fast, cached web application your end users interact with must be put into an API-first, fully dynamic state. Thankfully, Next.js provides "Draft Mode." 1. See the Next.js documentation for more details on [Draft Mode](https://nextjs.org/docs/app/building-your-application/configuring/draft-mode). For Visual Editing to work, the entire application must act differently when Draft Mode is enabled. Queries must use a different perspective and entirely skip the cache. Additional UI will be rendered into the page for clickable overlays. Thankfully this complexity is handled inside `SanityLive` and another component you'll import called `VisualEditing`. ## Fetching in draft mode First you'll need to update the content fetching functions to apply token authentication and settings required for Visual Editing. ### Update Sanity Client The update below adds Stega encoding to the Sanity Client configuration. This will only be used when Draft Mode is enabled. The URL is used to create clickable links in the preview, which open to the correct document and field from which the content came. 1. **Update** the Sanity Client file to include Stega encoding ```typescript:src/sanity/lib/client.ts export const client = createClient({ projectId, dataset, apiVersion, useCdn: true, stega: { studioUrl: '/studio' }, }) ``` ### Update live mode helpers The token you created in the previous lesson will now need to be passed to the live mode helpers, so that live draft content will be sent to the browser. These tokens will only be used when the site is in Draft Mode, which is only enabled by users in the Presentation tool in the Studio, or by anyone you share a preview link with. The token is not stored in the production app code. 1. 
**Update** the live mode helpers to set a `browserToken` and `serverToken` ```typescript:src/sanity/lib/live.ts import { client } from "@/sanity/lib/client"; import { token } from "@/sanity/lib/token" import { defineLive } from "next-sanity/live"; export const { sanityFetch, SanityLive } = defineLive({ client, browserToken: token, serverToken: token, }); ``` ## Powering Visual Editing in Draft Mode Interactive live preview works by listening client-side to changes from your dataset and, when detected, prefetching data server-side. The machinery to do this can be configured manually if you like, but it gets a little complicated, so thankfully, it's been packaged up for us in `next-sanity`. When Draft Mode is enabled, it's helpful to have a button to disable it. 1. **Create** a component to allow a user to disable Draft Mode ```tsx:src/components/disable-draft-mode.tsx 'use client' import { useDraftModeEnvironment } from 'next-sanity/hooks' export function DisableDraftMode() { const environment = useDraftModeEnvironment() // Only show the disable draft mode button when outside of Presentation Tool if (environment !== 'live' && environment !== 'unknown') { return null } return ( <a href="/api/draft-mode/disable" className="fixed bottom-4 right-4 bg-gray-50 px-4 py-2" > Disable Draft Mode </a> ) } ``` To power Visual Editing, all you need is one import. 1. 
**Update** your root layout to import the `VisualEditing` component from `next-sanity/visual-editing` ```tsx:src/app/(frontend)/layout.tsx import { draftMode } from 'next/headers' import { VisualEditing } from 'next-sanity/visual-editing' import { DisableDraftMode } from '@/components/disable-draft-mode' import { Header } from '@/components/header' import { SanityLive } from '@/sanity/lib/live' export default async function FrontendLayout({ children, }: Readonly<{ children: React.ReactNode }>) { return ( <section className="bg-white min-h-screen"> <Header /> {children} <SanityLive /> {(await draftMode()).isEnabled && ( <> <DisableDraftMode /> <VisualEditing /> </> )} </section> ) } ``` ## Activating draft mode The Presentation tool maintains an automatically rotating secret stored in the dataset. This is so your Next.js application can confirm that same secret before proceeding with any attempt to enable draft mode. Thankfully, this entire handshake has been made into a simple helper function from `next-sanity`. 1. **Create** a new API route to enable draft mode ```typescript:src/app/api/draft-mode/enable/route.ts /** * This file is used to allow Presentation to set the app in Draft Mode, which will load Visual Editing * and query draft content and preview the content as it will appear once everything is published */ import { defineEnableDraftMode } from 'next-sanity/draft-mode' import { client } from '@/sanity/lib/client' import { token } from '@/sanity/lib/token' export const { GET } = defineEnableDraftMode({ client: client.withConfig({ token }), }) ``` ### Disabling draft mode Once your browser is authenticated to view the web application in draft mode, you will see it in all other tabs in that browser. The earlier update to the root layout included a button to disable preview mode. This is useful when content authors have finished their changes and want to see the application with the same published content that end users will see. 1. 
**Create** a new API route to disable draft mode ```typescript:src/app/api/draft-mode/disable/route.ts import { draftMode } from 'next/headers' import { NextRequest, NextResponse } from 'next/server' export async function GET(request: NextRequest) { ;(await draftMode()).disable() return NextResponse.redirect(new URL('/', request.url)) } ``` Your app is ready to start receiving draft content updates—the next step is to actually make that happen. It's easiest to do this within the Presentation plugin; let's set that up in the next lesson. ## [Configuring Presentation](/learn/course/visual-editing-with-next-js/configuring-presentation) Install and configure the Presentation plugin to enable draft preview and a web preview from within Sanity Studio Everything you have configured so far has been to prepare the site for when it is put into Draft Mode. To do this securely and automatically, you'll install and configure the [Presentation plugin](https://www.sanity.io/docs/configuring-the-presentation-tool). It will handle creating and requesting a URL to enable Draft Mode with a secret string in the URL that is checked in your API route. It is also the most convenient way to browse and edit the website in Draft Mode, with an iframe displaying an interactive preview inside the Sanity Studio. First you'll need to add the plugin to your Sanity Studio configuration. 1. **Update** your `sanity.config.ts` file to import the Presentation tool ```typescript:sanity.config.ts // ...all other imports import { presentationTool } from 'sanity/presentation' export default defineConfig({ // ... all other config settings plugins: [ // ...all other plugins presentationTool({ previewUrl: { previewMode: { enable: '/api/draft-mode/enable', }, }, }), ], }) ``` Notice how the plugin's configuration includes the "enable" API route you created in the previous lesson.
Presentation will visit this route first, confirm an automatically generated secret from the dataset, and, if successful, activate draft mode in the Next.js application.

You should now see the Presentation tool in the top toolbar of the Studio, or by visiting [http://localhost:3000/studio/presentation](http://localhost:3000/studio/presentation), where you can navigate the site and click on any Sanity content to open the document – and focus the field – from which that content came.

Because of the `previewDrafts` perspective, the post index route now displays a list of draft **and** published post documents. The latest draft content should appear on already published post documents. Make edits to any content, and you should see them reflected live on the page.

Success! You now have an interactive live preview conveniently located within your Sanity Studio. Content authors can browse the front end to find pieces of content they need to edit and instantly see the impact of their changes before pressing publish.

We can go deeper. In the next lesson, you'll make the experience of switching between the Structure and Presentation tools even better.

## [Setup document locations](/learn/course/visual-editing-with-next-js/setup-document-locations)

Showing content creators where in the application the document they're editing may be displayed can help them understand the impact of their changes.

The content of a document may be used in multiple places. For example, currently in your blog, a post's title is shown both on the individual post route and on the post index page. When viewing a document, to show where its content is used, you can list paths in your web application to generate one-click links to those pages and view them inside the Presentation tool.

1.
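If you are curious what that handshake looks like without the helper, the validation that `defineEnableDraftMode` performs is also exposed by the `@sanity/preview-url-secret` package, which `next-sanity` uses internally. Below is a hedged sketch of a hand-rolled enable route using it – the import path and return shape reflect that package's documented `validatePreviewUrl` API, but check the version you have installed before relying on the exact signature:

```typescript
// Sketch of a custom /api/draft-mode/enable route.
// Assumes @sanity/preview-url-secret (the package next-sanity wraps).
import { validatePreviewUrl } from '@sanity/preview-url-secret'
import { draftMode } from 'next/headers'
import { redirect } from 'next/navigation'

import { client } from '@/sanity/lib/client'
import { token } from '@/sanity/lib/token'

export async function GET(request: Request) {
  // Checks the rotating secret in the request URL against the dataset
  const { isValid, redirectTo = '/' } = await validatePreviewUrl(
    client.withConfig({ token }),
    request.url,
  )

  if (!isValid) {
    return new Response('Invalid secret', { status: 401 })
  }

  ;(await draftMode()).enable()
  redirect(redirectTo)
}
```

In practice `defineEnableDraftMode` does all of this for you; a custom route is only worth writing if you need extra behaviour, such as logging or a different redirect policy.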
**Create** a new file for the `resolve` option in the Presentation plugin options:

```typescript:src/sanity/presentation/resolve.ts
import { defineLocations, PresentationPluginOptions } from 'sanity/presentation'

export const resolve: PresentationPluginOptions['resolve'] = {
  locations: {
    // Add more locations for other post types
    post: defineLocations({
      select: {
        title: 'title',
        slug: 'slug.current',
      },
      resolve: (doc) => ({
        locations: [
          {
            title: doc?.title || 'Untitled',
            href: `/posts/${doc?.slug}`,
          },
          { title: 'Posts index', href: `/posts` },
        ],
      }),
    }),
  },
}
```

In the `locations` key above, `post` is the document schema type for which these locations will render. As you build out your web application and content model, you will extend this configuration to include more document types that render routes – or appear on them.

1. **Update** your `sanity.config.ts` file to import the `resolve` function into the Presentation plugin.

```typescript:sanity.config.ts
// ...all other imports
import { resolve } from '@/sanity/presentation/resolve'

export default defineConfig({
  // ... all other config settings
  plugins: [
    // ...all other plugins
    presentationTool({
      resolve,
      previewUrl: {
        previewMode: {
          enable: '/api/draft-mode/enable',
        },
      },
    }),
  ],
})
```

You should now see the locations at the top of all `post`-type documents:

![Sanity Studio with the Presentation tool showing a preview](https://cdn.sanity.io/images/3do82whm/next/dce8bfd1b6739fe4e57f09c68e998907d22448fd-2144x1388.png)

Now, your content authors can seamlessly move between the front-end-focused Presentation tool – and the structure-focused Structure tool.

Believe it or not, we can go _even deeper_. One part of the Presentation tool is disabled: the toggle to switch between Drafts and Published perspectives.
To enable that, we need to implement React Loader, which has the added benefit of even faster live previews.

## [Add drag-and-drop elements](/learn/course/visual-editing-with-next-js/add-drag-and-drop-elements)

Go beyond "click-to-edit" with additional affordances for rearranging arrays in your front end

### Add "related posts" to your posts

1. **Update** the `post` schema type fields to include an array of "related posts" to render at the bottom of your `post` type documents.

```typescript:src/sanity/schemaTypes/postType.ts
export const postType = defineType({
  // ...all other settings
  fields: [
    // ...all other fields
    defineField({
      name: "relatedPosts",
      type: "array",
      of: [{ type: "reference", to: { type: "post" } }],
    }),
  ],
});
```

1. **Update** your single post query to return the array and resolve any references.

```typescript:src/sanity/lib/queries.ts
export const POST_QUERY = defineQuery(`*[_type == "post" && slug.current == $slug][0]{
  _id,
  title,
  body,
  mainImage,
  publishedAt,
  "categories": coalesce(
    categories[]->{
      _id,
      slug,
      title
    },
    []
  ),
  author->{
    name,
    image
  },
  relatedPosts[]{
    _key, // required for drag and drop
    ...@->{_id, title, slug} // get fields from the referenced post
  }
}`);
```

1. **Update** your types now that the GROQ query has changed.

```sh
pnpm run typegen
```

1. **Create** a new component to render the related posts

```tsx:src/components/related-posts.tsx
'use client'

import Link from 'next/link'
import { createDataAttribute } from 'next-sanity'
import { useOptimistic } from 'next-sanity/hooks'

import { POST_QUERYResult } from '@/sanity/types'
import { client } from '@/sanity/lib/client'

const { projectId, dataset, stega } = client.config()

export const createDataAttributeConfig = {
  projectId,
  dataset,
  baseUrl: typeof stega.studioUrl === 'string' ? stega.studioUrl : '',
}

export function RelatedPosts({
  relatedPosts,
  documentId,
  documentType,
}: {
  relatedPosts: NonNullable<POST_QUERYResult>['relatedPosts']
  documentId: string
  documentType: string
}) {
  const posts = useOptimistic<
    NonNullable<POST_QUERYResult>['relatedPosts'] | undefined,
    NonNullable<POST_QUERYResult>
  >(relatedPosts, (state, action) => {
    if (action.id === documentId && action?.document?.relatedPosts) {
      // Optimistic document only has _ref values, not resolved references
      return action.document.relatedPosts.map(
        (post) => state?.find((p) => p._key === post._key) ?? post
      )
    }

    return state
  })

  if (!posts) {
    return null
  }

  return (
    <aside className="border-t">
      <h2>Related Posts</h2>
      <div className="not-prose text-balance">
        <ul
          className="flex flex-col sm:flex-row gap-0.5"
          data-sanity={createDataAttribute({
            ...createDataAttributeConfig,
            id: documentId,
            type: documentType,
            path: 'relatedPosts',
          }).toString()}
        >
          {posts.map((post) => (
            <li
              key={post._key}
              className="p-4 bg-blue-50 sm:w-1/3 flex-shrink-0"
              data-sanity={createDataAttribute({
                ...createDataAttributeConfig,
                id: documentId,
                type: documentType,
                path: `relatedPosts[_key=="${post._key}"]`,
              }).toString()}
            >
              <Link href={`/posts/${post?.slug?.current}`}>{post.title}</Link>
            </li>
          ))}
        </ul>
      </div>
    </aside>
  )
}
```

You will notice `data-sanity` attributes being added to the wrapping and individual tags of the list, as well as a `useOptimistic` hook to apply these changes in the UI immediately, while the mutation in the Content Lake is still happening.

1. **Update** the `Post` component to include the `RelatedPosts` component.
```tsx:src/components/post.tsx
import Image from "next/image";
import { PortableText } from "next-sanity";

import { Author } from "@/components/author";
import { Categories } from "@/components/categories";
import { components } from "@/sanity/portableTextComponents";
import { POST_QUERYResult } from "@/sanity/types";
import { PublishedAt } from "@/components/published-at";
import { Title } from "@/components/title";
import { urlFor } from "@/sanity/lib/image";
import { RelatedPosts } from "@/components/related-posts";

export function Post(props: NonNullable<POST_QUERYResult>) {
  const {
    _id,
    title,
    author,
    mainImage,
    body,
    publishedAt,
    categories,
    relatedPosts,
  } = props;

  return (
    <article className="grid lg:grid-cols-12 gap-y-12">
      <header className="lg:col-span-12 flex flex-col gap-4 items-start">
        <div className="flex gap-4 items-center">
          <Categories categories={categories} />
          <PublishedAt publishedAt={publishedAt} />
        </div>
        <Title>{title}</Title>
        <Author author={author} />
      </header>
      {/* Markup inside these conditionals was lost in extraction – a minimal reconstruction */}
      {mainImage ? (
        <Image
          src={urlFor(mainImage).width(800).height(450).url()}
          width={800}
          height={450}
          alt={title || ""}
        />
      ) : null}
      {body ? (
        <PortableText value={body} components={components} />
      ) : null}
      <RelatedPosts
        relatedPosts={relatedPosts}
        documentId={_id}
        documentType="post"
      />
    </article>
  );
}
```

Add a few related posts to any post document. Now within Presentation, you should be able to drag-and-drop to reorder their position, and see the content change in the Studio.

## [Conclusion](/learn/course/visual-editing-with-next-js/conclusions)

Let's review

With Visual Editing, your content creators have absolute confidence in pressing publish. They also have a more convenient method of browsing your content, using the application to find even the most minor text that needs editing.

Here are a few questions to clarify what you've learned in this course.

**Question:** What kind of Sanity token is required to enable draft mode?

1. Admin
2. Writer
3. Viewer
4. Guest

**Question:** "Draft Mode" is a...

1. Web standard
2. Next.js feature
3. Sanity feature
4. Networking protocol

**Question:** Visual Editing can be implemented...

1. On any framework and hosting
2. Only in Next.js but on any hosting
3. Only in Next.js and only on Vercel
4. Only by enterprise customers

**Question:** The Presentation tool...

1. Replaces the Structure tool
2. Simplifies Visual Editing setup
3. Is required for Visual Editing
4. Requires a separate package

**Question:** Why would you use React Loader?

1. It is required for Visual Editing
2. It sounds cooler
3. It enhances Visual Editing
4. It's cheaper than Sanity Client

# [Users, roles and using roles](/learn/course/introduction-to-users-and-roles)

Core concepts around setting up custom access roles and permissions. Help editors work faster by configuring your Studio to provide role-based customizations.

## [Introduction](/learn/course/introduction-to-users-and-roles/introduction)

Why – and how – to use custom roles to deliver effective workflows and tailored user experiences.

In content operations, being able to effectively handle users and roles is hugely important – this is even more apparent when you scale content operations across a large organization using a content operating system such as Sanity.

1.
This course covers the **custom roles** features available exclusively on Sanity's Enterprise plans. That said, some elements of this course, like Studio customizations, can be applied on a per-user level on all plans.

In this course, we'll introduce the core concepts of the Sanity platform for setting up custom roles, as well as getting hands-on with some example scenarios and Studio customizations. You will learn the following:

* The common reasons for setting up users and roles
* How to set up and configure custom roles and resources in Sanity to meet your unique requirements
* How you can customize your Studio with role-based customizations to give your editors a tailored experience, enabling them to work faster

1. For enterprise customers, this will typically be followed up with an onboarding workshop led by your Solution Architect to help with the practical application of concepts from this course.

## Recommended reading

This course assumes you have a baseline knowledge of Sanity. If you're new to Sanity, then the following courses are recommended before progressing:

1. [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio)
2. [Studio excellence](https://www.sanity.io/learn/course/studio-excellence)
3.
[Between GROQ and a hard place](https://www.sanity.io/learn/course/between-groq-and-a-hard-place)

## [Typical use cases](/learn/course/introduction-to-users-and-roles/typical-use-cases)

Motivations for setting up custom roles and permissions in content operations

The most common requirements for configuration of users and roles we see are:

* Security
* Content integrity and compliance
* Workflow management
* Localization
* Scalability & maintenance
* Customizing the user experience

## Security

This could cover ensuring that only certain users have access to view or update some sensitive content (such as embargoed press releases) and would typically involve defining what actions a user can perform – from full publishing control through to not being able to view the content at all.

1. If content privacy is crucial, consider [making your dataset private](https://www.sanity.io/docs/keeping-your-data-safe#5c2e941ea03c). Sanity datasets are public by default, meaning **published** content can be queried from anywhere using your public API endpoint.

## Content integrity and compliance

In some industries, such as the financial and legal sectors, accuracy of content is critical, so preventing errors from being published is of the utmost importance. Configuring roles and permissions to limit who can create, modify or publish information can reduce – or even remove entirely – the risk of inexperienced users publishing incorrect information.

1. Read more about our History Experience to see how Sanity keeps a record of all changes to content – and who made them. Combined with users and roles, you can create fully accountable and controlled content workflows.

## Workflow management

Sometimes it's necessary for users to have different responsibilities. For instance, a writer could draft content, an editor could review it, and a publisher could review and publish it. Role-specific permissions can streamline and guarantee an effective process.
## Localization

When you need to publish content in a business serving multiple international markets, it can be useful to control rights for content that belongs to a particular geographic locale. This could be for different translators who speak different languages, or different legal teams who need to ensure compliance in different jurisdictions.

1. Read [Localization](https://www.sanity.io/learn/studio/localization) in the documentation for general guidance around configuring localized content with Sanity.

## Scalability and maintenance

As organizations grow, managing users through roles becomes more efficient – new users can be quickly assigned roles that define their permissions. By controlling what access is given to users, it becomes easier to train new users and maintain the system.

## Customizing the user experience

If users have certain content that aligns with their responsibilities or interests based on their role, customizing the Sanity Studio to show only content that is relevant to them can reduce complexity and improve their overall user experience.

It might also be relevant to customize the tools, plugins and dashboards available to certain users. For example, a dashboard could show all documents awaiting approval by a publisher – but this might not be relevant for a writer.

1. The [Member-specific options](https://www.sanity.io/learn/course/studio-excellence/member-mastery) lesson in the [Studio excellence](https://www.sanity.io/learn/course/studio-excellence) course shows several examples of how the Sanity Studio experience can be modified for a user.

## [Custom roles and resources](/learn/course/introduction-to-users-and-roles/custom-roles-and-resources)

Set up content resources and roles to meet your requirements around security, compliance, workflows and user experience

## Default roles

Each plan has a number of [default roles](https://www.sanity.io/docs/roles#e2daad192df9).
These have predefined permissions which are applied to all datasets within the project. They also have default project-level permissions, such as which roles can create API tokens for the project.

Whilst these default roles cover many common role-based workflows (such as draft → review → publish), for many of the use cases above it's necessary to configure custom roles.

It's also important to note that users can have _multiple_ roles. This is particularly useful for combining custom roles to create unique combinations of permissions.

## Custom roles

_Custom roles_ are core to customizing content operations to meet your requirements around security, compliance, workflow and user experience.

1. See our documentation on [Roles](https://www.sanity.io/learn/user-guides/roles). All roles can be configured at [sanity.io/manage](https://www.sanity.io/manage).

Custom roles are made up of two key elements:

* **Management permissions:** control over the changes a role can make to project settings – like API/webhook configuration, dataset management and user access.
* **Content permissions:** control over which roles have permissions to make changes to certain content _resources_. These content permissions can be granted across _all_ datasets, a group of _tagged_ datasets, or an _individual_ dataset.

1. Tagged datasets – as with custom roles – are an Enterprise-only feature.
2. When configuring custom roles, you may need to assign certain permissions for handling generation of preview tokens. See the [Visual Editing readme](https://github.com/sanity-io/visual-editing/blob/main/packages/preview-url-secret/README.md#permissions-model) for the latest information.

### Dataset privacy

In many scenarios with custom roles, requirements may involve removing the ability for a user to _see_ content. This means that it might be necessary to [set the dataset to private](https://www.sanity.io/docs/keeping-your-data-safe#5c2e941ea03c).
In a public dataset, all documents are readable by all users regardless of authentication. That means documents you may want to hide will still show up in Studio search, as well as in public API calls (when published). If in doubt, it's safest to make your dataset private. Just remember that your front ends will then need to make authenticated calls, and you'll need to consider [securing your API token](https://www.sanity.io/docs/http-auth#504058b73b71).

1. When removing access to certain content, it's important to remember that default roles grant access to _everything_ in a dataset, rather than scoped access. Combining roles without consideration could unintentionally give incorrect access levels.

## Content resources

Whilst the default roles apply to _all_ content in a dataset, custom roles support applying permissions to a _subset_ of content in a dataset. This is done by creating content resources.

1. **Content resources** are essentially a set of documents in a dataset, defined by a GROQ filter. This provides a high level of flexibility to assign permissions not just to particular document types, but to filtered scopes, too.

Let's consider a few examples based on the common use cases we outlined earlier.

### Example - Embargoed Articles

Let's say we have a document type called "article", which our _editorial_ team want to lock down to prevent edits from users in our _merchandising_ team, who should only manage our product catalogue. A GROQ filter could create an "Article" content resource:

```groq:Article Content Resource
_type == "article"
```

Below shows how this configuration would look at [sanity.io/manage](https://www.sanity.io/manage):

![A custom content resource targeting article documents](https://cdn.sanity.io/images/3do82whm/next/0a425bbe7dbfde3081f915d25e1666db602d5e08-1609x1620.png)

Using this, we could create a role to target all article types.
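If you decide to make a dataset private, you can do so from the project settings at [sanity.io/manage](https://www.sanity.io/manage) or with the Sanity CLI. The commands below are a sketch of the CLI approach; the exact syntax can vary between CLI versions, so check `npx sanity dataset visibility --help` first:

```shell
# Check the current visibility of the production dataset
npx sanity dataset visibility get production

# Make it private so only authenticated requests can read content
npx sanity dataset visibility set production private
```

Remember that after this change, any front end reading from the dataset will need a token with at least read access.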
However, if our editorial team wanted to add further permissions to ensure that embargoed content could be seen only by _managers_ in the _editorial_ team, they could add an "embargoed" boolean field to their schema and create an "Embargoed Article" resource:

```groq:Embargoed Article Content Resource
_type == "article" && embargoed == true
```

Consider creating a "Non-Embargoed Article" content resource to explicitly _exclude_ the embargoed articles. A grant simply on "Article" would include all articles. This illustrates a key concept: roles are additive.

```groq:Non-Embargoed Article Content Resource
_type == "article" && embargoed != true
```

Using these content resources, we could create new roles to assign these content types to our users, which might look something like:

#### Role: Creator Team

* **Article:** No Access
* **Embargoed Article:** No Access
* **Non-Embargoed Article:** Publish
* **Product:** Publish

#### Role: Article Editor

* **Article:** Publish
* **Embargoed Article:** Publish
* **Product:** Publish

Technically, the "Embargoed Article" permission is not needed, as the simpler "Article" resource gives publish access to _all_ articles. However, it can be good to positively add this as a future-facing permission – this also ensures visibility of the non-embargoed articles.

1. When embargoing content, you might also want to consider assets. Whilst [asset documents](https://www.sanity.io/docs/assets#2cee91f4f62d) can be hidden from API calls with roles, the direct URL of the asset itself is not authenticated. Usually the autogenerated URL of the asset provides enough security through obscurity – but the file itself _is_ publicly accessible, even in private datasets.

### Example - Legal Policies

It might be that you have legal documents stored within your dataset, and want to prevent users outside your legal team from making changes to these documents.
In this case, you might create a "Legal Policy" content resource:

```groq:Legal Policy Content Resource
_type == "policy"
```

Let's say we want to make it so each lawyer in our team is ultimately responsible for the documents they create. In this scenario, we might have a field on our document which declares the user ID of the user that created it. We could create a "My Policies" content resource:

```groq:My Policies Content Resource
_type == "policy" && createdBy == identity()
```

Here we can see how GROQ functions can be used in the context of content resources.

1. The `createdBy` field above isn't a system field – it must be added to your Sanity Studio schema and populated with an initial value. More on this in the Studio customizations lesson of this course.

Now we could create a single role to ensure lawyers can see and edit all policies, but only publish their own:

#### Role: Legal Team

* **Legal Policy:** Update and Create
* **My Policies:** Publish

### Example - Locales

In this example, we'll cover a couple of scenarios: one in which you have a multilingual setup and want to restrict access to documents of a particular language, and another where documents belong to a particular location – let's say a particular store.

#### Languages

Let's create an "English Document" content resource which will cover all documents in English:

```groq:English Document Content Resource
language == 'en' // this could be 'en-gb' or 'en-us'
```

You might notice in this case we don't add a type, as we want this content resource to simply look at the language field and control access for _all_ English content, regardless of its `_type`.

#### Locations

For the store example, imagine a `store` document type, with each document representing a store location. Each store needs to have a number of document types associated with it – we have types for `offer`, `person` and `product`.
Offers and people belong to a single store, but a product can belong to many stores – therefore `offer` and `person` have a single `reference` field called `store`, whereas `product` has an `array` of `references` called `stores[]`.

In this scenario, you might assume that to create a "Tom's Toy Store Manager" role you can follow the reference in the content resource GROQ query… but this won't work:

```groq:Tom's Toy Store Content Resource
// This won't work...
store->name == "Tom's Toy Store" || "Tom's Toy Store" in stores[]->name
```

This won't work because content resources can only be based on values within a document, and therefore cannot resolve references. Instead, you'll need to know the ID of the document for Tom's Toy Store and use that in the query:

```groq:Tom's Toy Store Content Resource
// This will work
store._ref == "toms-toy-store-id" || "toms-toy-store-id" in stores[]._ref
```

With the latter, you could create the "Tom's Toy Store" content resource, and then apply it to your "Tom's Toy Store Manager" role as necessary.

## [Defining roles](/learn/course/introduction-to-users-and-roles/defining-roles)

Combine your resources with permission levels to define which roles can perform which actions

When it comes to creating custom roles, it's a case of combining your shiny new content resources with permission levels – basically defining the rules "_this role_ has _this level_ of access to _this resource_".

When defining roles and resources, there are a couple of key decisions to be made:

* Will your roles be wide-ranging, attributing a number of permissions to a number of content resources?
* Will your roles be very precise, with users assigned multiple roles to cover their required access levels?

These types of decisions are subject to your specific use case, and for our Enterprise customers we'd recommend workshopping these with your Solution Architect.

## Dataset Permissions

Custom roles can have permissions applied to all datasets or to specific datasets.
Often, adding permissions for all datasets will be perfectly acceptable – but if you have specific workflows or a more complex dataset configuration, it can be useful to tailor permissions for each dataset. An example is creating a custom **developer** role whereby developers can create content in a **development** dataset but not in a **production** dataset.

If you have a more complex project with many datasets – for example, a multi-brand configuration where each brand has a number of datasets – then using [dataset tags](https://www.sanity.io/docs/roles#0db30012bd04) can be very helpful. You can tag each dataset with the brand it belongs to and grant access to all those tagged datasets in a single role definition. Don't forget datasets can have multiple tags, too.

## Roles are additive

This means you can't remove a permission given to a user in one role by removing it in another role. As an example, if you were to assign the default **Editor** role to a user, this role includes the **Publish** permission for **All documents** in **All datasets**. If you were then to give this same user the **Creator Team** role from our first example above – which grants no access to articles – they would still be able to publish the **Article** and **Embargoed Article** content resources as a result of the **Editor** role.

## Roles and the API

One thing to remember is that roles – including custom roles – can be applied to the [API tokens](https://www.sanity.io/docs/http-auth#4c21d7b829fe) that you generate, too. This can be really helpful if you need to restrict the types of content that can be written to by middleware, or in particular cases where you might want to give third parties controlled access via the API.
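To make the additive behaviour concrete, here is a small, hypothetical model (not a Sanity API – the names and levels are illustrative) that computes a user's effective permission for a resource as the strongest grant across all of their roles:

```typescript
// Hypothetical model of additive role grants – not a Sanity API.
// Permission levels, ordered from weakest to strongest.
const LEVELS = ['no-access', 'read', 'update-create', 'publish'] as const
type Level = (typeof LEVELS)[number]

type Role = { name: string; grants: Record<string, Level> }

// Roles are additive: the effective level is the strongest grant
// that any assigned role gives for the resource.
function effectiveLevel(roles: Role[], resource: string): Level {
  return roles.reduce<Level>((strongest, role) => {
    const level = role.grants[resource] ?? 'no-access'
    return LEVELS.indexOf(level) > LEVELS.indexOf(strongest) ? level : strongest
  }, 'no-access')
}

const editor: Role = { name: 'editor', grants: { article: 'publish' } }
const creatorTeam: Role = { name: 'creator-team', grants: { article: 'no-access' } }

// The restrictive grant in creatorTeam cannot revoke what editor allows.
effectiveLevel([editor, creatorTeam], 'article') // 'publish'
```

The key point the model illustrates: a restrictive role can never subtract access that a more permissive role already grants.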
## Role Mapping with SAML SSO

If your organization has SAML SSO configured with Sanity to enable single sign-on – for example, with Azure AD, Okta or another Identity Provider (IdP) – then you may benefit from [role mapping](https://www.sanity.io/docs/sso-saml#647d8f0f9ee4) to sync user roles in your IdP to user roles in Sanity. Particularly for projects with many users, this can be a real time saver!

## Customized permissions

You might notice when applying permissions to roles via the user interface at [sanity.io/manage](http://sanity.io/manage) that you are restricted in the types of permissions you can create. These are baseline assumptions about the permission levels needed, based on common practices. They allow:

* **No access** - no access at all (except with public datasets, which are publicly readable)
* **Read** - read only
* **Update and Create** - create, read and edit
* **Publish** - create, read, edit, delete and publish/unpublish

What if you want to have a unique permission that grants the ability to _delete but not create_, or to _create but not read_? These are rare requirements, but in these cases specific permissions can be created via the [Roles API](https://www.sanity.io/docs/roles-reference) rather than the UI.

## [Studio customizations](/learn/course/introduction-to-users-and-roles/studio-customizations)

Change the user experience of the Sanity Studio based on roles and deliver a personalized user experience to accelerate editor workflows

When creating roles, a great next step is to customize the Sanity Studio experience of your users based on their role.
Studio customizations might include:

* Showing, hiding or filtering certain content types using the Structure Builder API
* Automatically populating initial values based on a user or role
* Making a field hidden or read-only based on a user or role
* Initializing different plugins/configuration based on a user or role
* Using a role to introduce or adjust a custom component
* Changing the available document actions or ability to create new documents based on a role
* Enabling/disabling [workspaces](https://www.sanity.io/docs/workspaces) based on a role. _There are some caveats to this, covered in the module below._

## Example Scenario

For each of the above points – with the exception of role-based workspaces – we will introduce customizations based on a [reference Github repository](https://github.com/thebiggianthead/sanity-roles-workshop-demo).

The concept for this lesson is:

> Our organization has a number of stores across a number of cities. Each store has offers which are unique to the store. Each of the stores has a manager who should only be able to view, edit and publish offers for their own store. Regional managers can be assigned to multiple stores to manage offers, and administrators can manage all offers across all stores. Additionally, we publish articles – however, only admins can see all articles. Other users should only be able to work on articles they have created themselves.

### Intended outcomes

In this exercise, we'll demonstrate a few example customizations to a Studio. The intention is to _inspire ideas_ as to how you might customize your own Sanity Studio(s) to meet your own unique requirements.

### Initial configuration

To follow along with this scenario, you can either take the schema directly from the reference repository or create your own schema.

1. Create an **offer** and **article** schema type
2.
Ensure the **offer** type has a **store** field, as illustrated below

```typescript:src/schemaTypes/offer.ts
// define your stores
type Store = {
  id: string
  name: string
}

const stores: Store[] = [
  {
    id: 'store-1',
    name: 'Store 1',
  },
  {
    id: 'store-2',
    name: 'Store 2',
  },
]

// ... rest of offer definition
defineField({
  name: 'store',
  title: 'Store',
  type: 'string',
  options: {
    list: stores.map((store) => {
      return {
        value: store.id,
        title: store.name,
      }
    }),
    layout: 'radio',
  },
})
```

1. Create the below **content resources** and user **roles**

#### Content Resource: Store 1

* **Title:** Store 1
* **Identifier:** store-1
* **GROQ filter:** `store == "store-1"`

#### Content Resource: Store 2

* **Title:** Store 2
* **Identifier:** store-2
* **GROQ filter:** `store == "store-2"`

#### Content Resource: User Articles

* **Title:** User Articles
* **Identifier:** user-articles
* **GROQ filter:** `_type == "article" && (createdBy == identity() || createdBy == $identity)`

1. Why `identity()` and `$identity`? This covers all versions of the API and Studio, making you a little more bulletproof. `$identity` may be removed entirely in a future version of the API.
#### Role: Store 1 Manager

* **Title:** Store 1 Manager
* **Identifier:** store-1-manager
* **Permissions in all datasets:**
  * Store 1 - Publish
  * Image / file assets - Update and create
  * All other resources - No access

#### Role: Store 2 Manager

* **Title:** Store 2 Manager
* **Identifier:** store-2-manager
* **Permissions in all datasets:**
  * Store 2 - Publish
  * Image / file assets - Update and create
  * All other resources - No access

#### Role: Article Editor

* **Title:** Article Editor
* **Identifier:** article-editor
* **Permissions in all datasets:**
  * User Articles - Publish
  * Image / file assets - Update and create
  * All other resources - No access

## How to test Studio customizations

The simplest way to test the role-based customizations you make is to have a number of user accounts to switch between in different browser profiles or incognito windows. This way, you can keep an admin user and easily change your secondary users' roles to test out changes to the Studio as you make them in a local development environment. It's simple and effective.

1. It's important to bear in mind that if you change your own user to remove the administrator role, you might not be able to change it back.

## Customizing with User Context

To customize the Studio based on user and role, it's necessary to know information about the current user. Thankfully, the Studio provides this context in a number of places, including – but not limited to – the Structure Builder API, the Tool API, the Document Actions API and hidden/readonly callback functions.
Where this is available, the context will provide the `currentUser` object:

```typescript:CurrentUser type definition
interface CurrentUser {
  email: string
  id: string
  name: string
  profileImage?: string
  provider?: string
  role: string // deprecated, use roles instead
  roles: Role[]
}

// And for reference, the Role:
interface Role {
  name: string
  title: string
  description?: string
}
```

Inside React components or custom hooks, you can use the `useCurrentUser()` hook to return the same data. There’s also the `userHasRole()` helper function to determine whether a particular user has a given role – it accepts a user object as its first argument and a role identifier string as its second.

1. Users can have multiple roles – it’s important to consider this in your customizations and role checks.

## Structure that makes sense

Customizing the content types a user sees – and how they see them – can shorten user journeys in a Studio, greatly improving the overall Studio experience. Let’s expand on some of the principles established in the [Structure customization](https://www.sanity.io/learn/course/studio-excellence/structure-customization) lesson in the [Day one content operations](https://www.sanity.io/learn/course/day-one-with-sanity-studio) course.

That initial lesson on the Structure Builder API focused on the `StructureBuilder` object, which the `StructureResolver` receives as its first argument. To customize based on users and roles, the second argument can be used – the `StructureResolverContext`. This `context` object provides a number of useful things in addition to the user – like the `getClient` function, which can be used to query your dataset(s). The key for customizing based on users is the `currentUser` object this context provides. Using it, it's possible to change the Structure for different users and roles.
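As a rough sketch – not the actual implementation in the `sanity` package – `userHasRole()` behaves like a check over the user’s `roles` array. The `hasRole()` function and sample user below are hypothetical, but they illustrate why multiple roles matter in your checks:

```typescript
// Simplified, hypothetical stand-in for userHasRole():
// checks the roles array by role name.
interface Role {
  name: string
  title: string
}

interface User {
  id: string
  roles: Role[]
}

function hasRole(user: User | null, roleName: string): boolean {
  return user?.roles.some((role) => role.name === roleName) ?? false
}

// A user can hold several roles at once...
const multiRoleUser: User = {
  id: 'user-a',
  roles: [
    {name: 'store-1-manager', title: 'Store 1 Manager'},
    {name: 'article-editor', title: 'Article Editor'},
  ],
}
```

For `multiRoleUser`, both `hasRole(multiRoleUser, 'store-1-manager')` and `hasRole(multiRoleUser, 'article-editor')` are true – so a customization that only checks for one role should not assume the user lacks the others.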
### User specific articles

In the scenario outlined above, one of the steps required is to hide articles from the user if they didn’t create them. If all articles are listed, users may end up seeing articles they can’t do anything with. This isn’t a great user experience:

![All articles - including those the user can't edit](https://cdn.sanity.io/images/3do82whm/next/4b41ebad300ef093accaeeb9bd83089fe6a1624c-2528x1660.png)

Instead, it's better to hide articles the user can’t edit – which declutters the Studio and displays only the articles the user is able to work with:

![Filtered articles - showing only those the user can edit](https://cdn.sanity.io/images/3do82whm/next/aaeb3a12acb16acc2d8e52bfa9762fe2cad13bae-2528x1660.png)

To achieve this, we need to do a couple of things. Firstly, the `createdBy` field in our article document needs to be populated – this isn’t a system field.

1. Add a `createdBy` field with an `initialValue` to the article schema type

```typescript:src/schemaTypes/article.ts
defineField({
  name: 'createdBy',
  title: 'Created By',
  type: 'string',
  initialValue: (params, context) => context.currentUser?.id || '',
  readOnly: (context) =>
    !context.currentUser?.roles.map((r) => r.name).includes('administrator'),
}),
```

Note that the field is also made `readOnly` for users that aren’t administrators, meaning nobody but admins can change the creator of a document. The field could also be hidden.

1. Another approach for user-scoped documents is to create an array of users – perhaps an `allowedUsers` array field. This scales to allow multiple users to access a document.
2. **Good to know** – If you prefer to choose from a list of users for the `createdBy` field, then the `` component from [`sanity-plugin-utils`](https://github.com/SimeonGriggs/sanity-plugin-utils) makes for a nice user experience.

1. **Gotcha** – initial values are only applied at the time of document creation.
If you’re adding them retrospectively, you’ll need to patch the user values onto pre-existing documents. The [Handling schema changes confidently](https://www.sanity.io/learn/course/handling-schema-changes-confidently) course covers handling data migrations / modifications like this.

Following the addition of this field, we can make use of the `StructureResolverContext` to make adjustments to the Structure of our Studio.

1. **Update** your Structure to the code below to create your filtered list of articles

```typescript:src/structure/index.tsx
import {DocumentsIcon} from '@sanity/icons'
import type {ConfigContext} from 'sanity'
import type {StructureBuilder, StructureResolver} from 'sanity/structure'

const API_VERSION = '2023-01-01'

// Helps TypeScript infer the return type of each Structure customization
function defineStructure<StructureType>(
  factory: (S: StructureBuilder, context: ConfigContext) => StructureType,
) {
  return factory
}

export const structure: StructureResolver = (S, context) =>
  S.list()
    .id('root')
    .title('Content')
    .items([
      S.listItem()
        .title('Articles')
        .icon(DocumentsIcon)
        .schemaType('article')
        .child(createArticleList(S, context)),
      // other structure items...
    ])

const createArticleList = defineStructure((S, context) => {
  const user = context?.currentUser
  const roles = user?.roles.map((r) => r.name)
  const isLimited = roles?.includes('article-editor')
  const userQuery = isLimited ? `createdBy == $userId` : ``

  return S.documentTypeList('article')
    .title(`Articles`)
    .filter([`_type == "article"`, userQuery].filter(Boolean).join(` && `))
    .params({userId: user?.id})
    .apiVersion(API_VERSION)
})
```

What's going on here? Firstly, a `defineStructure()` factory function helps TypeScript determine the types we expect for the various customizations being made. In the Structure, the list item child for articles is passed a `createArticleList()` function – this grabs the user from the context and maps their roles into an array of strings.
These roles can then be tested to see if the `article-editor` role is present. Where it is, the article list should be limited, so a filter is applied to the `documentTypeList` by joining `createdBy == $userId` to our query and passing the user ID as a parameter.

### User specific stores

Another requirement is to only show relevant store offers in the Structure. Essentially, offers should only be shown when a user is a store manager, a regional manager or an administrator. If I am none of these, I don’t want to see anything concerning stores or offers in the Structure – removing clutter and improving my experience:

![No stores shown if I am not a store manager or admin](https://cdn.sanity.io/images/3do82whm/next/0559daca7e69c61ff606e95054ade929d743d166-2528x1660.png)

If I am the manager of a single store – “Store 1” – then I’m only interested in offers for my particular store:

![A single store shown if I manage a single store](https://cdn.sanity.io/images/3do82whm/next/139cd026334dae98357acb1a9ec80041189ae9cb-2528x1660.png)

However, if I manage multiple stores, I want to be able to access offers for all of these stores:

![Multiple stores shown if I manage multiple stores](https://cdn.sanity.io/images/3do82whm/next/290a7686f6ccd0ccc29747bac3fbb477efe7495a-2528x1660.png)

Note that in the case of managing a single store, the item is at the _top-level_ of the Structure – whereas when I manage multiple, these are nested under an “Offers” list item.
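Before wiring this into Structure code, the branching logic can be sketched as a pure function. This `managedStores()` helper is hypothetical – the code in the next step counts `-manager` roles inline instead – but it shows how role names following the `store-1-manager` convention translate into a list of store IDs:

```typescript
// Hypothetical helper: derive managed store IDs from role names
// that follow the `<store-id>-manager` convention used in this lesson.
function managedStores(roleNames: string[]): string[] {
  return roleNames
    .filter((name) => name.endsWith('-manager'))
    .map((name) => name.replace(/-manager$/, ''))
}
```

For example, `managedStores(['store-1-manager', 'article-editor'])` returns `['store-1']` – a single result, so the store list would sit at the top level; two or more results would nest the stores under an “Offers” item.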
1. **Update** your Structure to the code below to add offers to your structure

```typescript:src/structure/index.tsx
import {DocumentsIcon, HomeIcon, TagIcon} from '@sanity/icons'
import type {ConfigContext} from 'sanity'
import type {
  ListItemBuilder,
  StructureBuilder,
  StructureResolver,
} from 'sanity/structure'

import {stores} from '../lib/constants'

const API_VERSION = '2023-01-01'

function defineStructure<StructureType>(
  factory: (S: StructureBuilder, context: ConfigContext) => StructureType,
) {
  return factory
}

export const structure: StructureResolver = (S, context) =>
  S.list()
    .id('root')
    .title('Content')
    .items([
      S.listItem()
        .title('Articles')
        .icon(DocumentsIcon)
        .schemaType('article')
        .child(createArticleList(S, context)),
      ...[createOffers(S, context) as ListItemBuilder].filter(Boolean),
    ])

const createArticleList = defineStructure((S, context) => {
  const user = context?.currentUser
  const roles = user?.roles.map((r) => r.name)
  const isLimited = roles?.includes('article-editor')
  const userQuery = isLimited ? `createdBy == $userId` : ``

  return S.documentTypeList('article')
    .title(`Articles`)
    .filter([`_type == "article"`, userQuery].filter(Boolean).join(` && `))
    .params({userId: user?.id})
    .apiVersion(API_VERSION)
})

const createOffers = defineStructure((S, context) => {
  const roles = context?.currentUser?.roles.map((r) => r.name)
  const storesManaged = roles?.filter((r) => r.endsWith('-manager')).length

  if ((storesManaged && storesManaged > 1) || roles?.includes('administrator')) {
    return S.listItem()
      .title('Offers')
      .icon(TagIcon)
      .child(
        S.list()
          .title('Offers')
          .items(createStoreOffers(S, context) as ListItemBuilder[]),
      )
  } else if (storesManaged && storesManaged === 1) {
    return createStoreOffers(S, context) as ListItemBuilder
  }
})

const createStoreOffers = defineStructure((S, context) => {
  const roles = context?.currentUser?.roles.map((r) => r.name)

  const userStores =
    stores
      .map((store) => {
        if (roles?.includes(`${store.id}-manager`) || roles?.includes('administrator')) {
          return S.listItem()
            .title(`${store.name} Offers`)
            .icon(HomeIcon)
            .child(
              S.documentTypeList('offer')
                .title(`${store.name} Offers`)
                .filter(`_type == "offer" && store == $storeId`)
                .params({storeId: store.id})
                .apiVersion(API_VERSION),
            )
        }
      })
      .filter((item) => !!item) || []

  return userStores?.length === 1 ? userStores[0] : userStores
})
```

This adds two new functions: `createOffers()` and `createStoreOffers()`.

The `createOffers()` function essentially checks whether the user has access to multiple stores or a single store. If they manage multiple, a `listItem` is returned to add a top-level item; in the case of single store management, the result of `createStoreOffers()` is returned directly.

At the top level, the returned value is spread into an array and a filter is applied to remove empty items:

```typescript:src/structure/index.tsx
...[createOffers(S, context) as ListItemBuilder].filter(Boolean)
```

This allows us to return `undefined` from `createOffers` where the user does not have access to a store – as the items list can’t accept `undefined`, this ensures compatibility with the expected types.

The `createStoreOffers()` function handles the output of the list item for each store, and includes a child `documentTypeList` with a filter that restricts offers based on store ID.

1. Here we’ve customized just one section of our Structure, but you could create completely unique Structures for different user roles in your business. For example, legal teams could see only legal content and merchandising teams only product information.

## Creating new documents

With this type of setup, it’s important to ensure that users can create new documents they have the relevant permissions for.
Right now, even though a user profile may have “Store 1 Manager” and “Store 2 Manager” permissions, they won’t be able to create a new document:

![User can't create a new document, even though they should be able to](https://cdn.sanity.io/images/3do82whm/next/ec3918e88600980578baba1c0631f0e1641b26b7-2528x1660.png)

The reason for this is that when a new offer document is created, the “store” field will be empty – and these users only have permission to make edits when this field matches their store.

### Initial Value Templates

To solve this, initial value templates can be implemented in the Structure. These allow parameters to be passed based on where the user is in the Structure, so users can create offers relevant to the store list they’re viewing.

1. **Update** your `sanity.config.ts` to define a new [parameterized initial value template](https://www.sanity.io/docs/initial-value-templates#66d873e2136f)

```typescript:sanity.config.ts
export default defineConfig({
  // rest of config
  schema: {
    types: schemaTypes,
    templates: (prev, context) => {
      const {currentUser} = context

      return [
        ...prev,
        {
          id: 'offer-by-store',
          title: 'Offer by store',
          description: 'Offer from a specific store',
          schemaType: 'offer',
          parameters: [
            {name: 'store', type: 'string'},
            {name: 'createdBy', type: 'string'},
          ],
          value: (params: {store: string}) => ({
            store: params.store,
            createdBy: currentUser?.id,
          }),
        },
      ]
    },
  },
})
```

Note that context can be used here, so contextual values based on the user can be populated – like the user ID for a `createdBy` field. The key, though, is that a `store` parameter is defined which can be passed from our Structure.
1. **Update** the `createStoreOffers()` function in the Structure with the below code to set an initial value template for the store list

```typescript:src/structure/index.tsx
const createStoreOffers = defineStructure((S, context) => {
  const roles = context?.currentUser?.roles.map((r) => r.name)

  const userStores =
    stores
      .map((store) => {
        if (roles?.includes(`${store.id}-manager`) || roles?.includes('administrator')) {
          return S.listItem()
            .title(`${store.name} Offers`)
            .icon(HomeIcon)
            .child(
              S.documentTypeList('offer')
                .title(`${store.name} Offers`)
                .filter(`_type == "offer" && store == $storeId`)
                .params({storeId: store.id})
                .apiVersion(API_VERSION)
                .initialValueTemplates([
                  S.initialValueTemplateItem('offer-by-store', {store: store.id}),
                ]),
            )
        }
      })
      .filter((item) => !!item) || []

  return userStores?.length === 1 ? userStores[0] : userStores
})
```

Now that this is set up, users can create new offers based on the context of the store they’re looking at, and the `store` field will be populated automatically:

![A document populated with initial values based on the structure](https://cdn.sanity.io/images/3do82whm/next/1a32e644049d5d04c6ed7b37610b5807db8bdf22-2528x1660.png)

This concept can be very useful outside the context of role-based customizations, too!

### New Document Options

In addition to adding new documents via the Structure, users might also look to the “Create +” button in the Studio navigation bar.

![The "new document options" menu](https://cdn.sanity.io/images/3do82whm/next/6ad60d34517b7b9e979af904c71755398a990493-776x442.png)

Similarly to the Structure, the default options here may be disabled due to the permissions and initial values set up – again meaning a user can’t create documents with blank fields.
1. **Update** your `sanity.config.ts` with the below code to change the available new document options

```typescript:sanity.config.ts
import {defineConfig, userHasRole, type TemplateItem} from 'sanity'

import {stores} from './src/lib/constants'

export default defineConfig({
  // rest of config
  document: {
    newDocumentOptions: (prev, {currentUser}) => {
      const removeTypes = ['media.tag', 'offer']

      const storeTemplates = stores.map((store) => {
        if (
          userHasRole(currentUser, `${store.id}-manager`) ||
          userHasRole(currentUser, 'administrator')
        ) {
          return {
            id: `${store.id}-offer`,
            templateId: 'offer-by-store',
            title: `${store.name} Offer`,
            parameters: {
              store: store.id,
            },
            type: 'template',
          }
        }
      }) as TemplateItem[]

      if (
        !userHasRole(currentUser, 'administrator') &&
        !userHasRole(currentUser, 'article-editor')
      ) {
        removeTypes.push('article')
      }

      return [...prev, ...storeTemplates.filter(Boolean)].filter(
        (templateItem) => !removeTypes.includes(templateItem.templateId),
      )
    },
  },
})
```

This customizes the available options when creating new documents to hide some options based on the user role – for example, removing the article type for users who aren’t admins or article editors, and hiding metadata documents created by `sanity-plugin-media`.

Additionally, we’re using this menu to add the parameterized initial value templates, too. This allows users to create documents for their own stores from the global menu.

## Custom components

There are some occasions where you might want to create custom components that are conditional based on user or role. This might include form components such as custom inputs, or other components that customize the Studio layout, navbar or tool menu.
1. **Create** a `StoreInput` component and add the below code to it

```typescript:src/components/StoreInput.tsx
import {Button, Grid} from '@sanity/ui'
import {useCallback, type MouseEvent} from 'react'
import {
  set,
  type StringInputProps,
  type TitledListValue,
  useCurrentUser,
  userHasRole,
} from 'sanity'

export default function StoreInput(props: StringInputProps) {
  const {value, onChange, schemaType} = props
  const user = useCurrentUser()
  const roles = user?.roles.map((r) => r.name)

  const handleClick = useCallback(
    (event: MouseEvent<HTMLButtonElement>) => {
      const nextValue = event.currentTarget.value
      onChange(set(nextValue))
    },
    [onChange],
  )

  // Only show the stores the user manages, unless they are an administrator
  const stores = (schemaType?.options?.list as Array<TitledListValue<string>>)?.filter(
    (option) => {
      return roles?.includes(`${option.value}-manager`) || userHasRole(user, 'administrator')
    },
  )

  return (
    <Grid columns={stores?.length} gap={2}>
      {stores?.map((store) => (
        <Button
          key={store.value}
          mode={value === store.value ? 'default' : 'ghost'}
          text={store.title}
          value={store.value}
          onClick={handleClick}
        />
      ))}
    </Grid>
  )
}
```

1. Add the custom input component to the `store` field of your offer document

```typescript:src/schemaTypes/offer.ts
defineField({
  name: 'store',
  title: 'Store',
  type: 'string',
  options: {
    list: stores.map((store) => {
      return {
        value: store.id,
        title: store.name,
      }
    }),
  },
  components: {
    input: StoreInput,
  },
}),
```

This input component demonstrates a few ideas:

1. Replacing the default radio input with buttons.
2. When the user is an admin, rendering the default options.
3. When the user is not an admin, adjusting the available buttons to only show stores the user has access to, rather than the full list.

The screenshot below illustrates the Studio with no custom input alongside the views of an admin and a user with a limited number of stores. The custom component simplifies the user interface based on the user’s role(s).
![Studios with and without the custom input component](https://cdn.sanity.io/images/3do82whm/next/3de0315606c260b61b6e529086b18e6f33401b4b-3530x1738.png)

This is a simple example to illustrate the point – you could implement similar principles to:

* Provide additional instructions alongside a field for users of a certain role.
* Change how a third-party API is called in an input component based on user role.
* Amend the UI for content input based on whether a user is a developer or marketer.

## Conditional plugin configuration

Unfortunately, it’s not currently possible to customize plugin initialization based on role, as there is no user context at that point. However, it is possible to selectively (de)compose elements of a plugin in order to remove parts of it based on user role. For example, the custom tool in `sanity-plugin-media` could be removed for some users by adding a custom plugin _after_ the media plugin:

```typescript:sanity.config.ts
export default defineConfig({
  // rest of config
  plugins: [
    // other plugins
    media(),
    {
      name: 'disable-media-tool',
      tools: (prev, {currentUser}) =>
        userHasRole(currentUser, 'article-editor')
          ? prev.filter((tool) => tool.name !== 'media')
          : prev,
    },
  ],
})
```

## Workspaces per role

The Sanity Studio can have multiple workspaces, and each of these can have its own configuration. It’s a great idea to enable workspaces _per role_: for example, if I’m in a certain team, I want to work on certain content, in a particular workspace.

This _is_ possible, but there is a caveat: when the Studio is initialized, the Studio configuration is handled before the user is authenticated – this means we don't know who the user is until after the workspaces are set up. Because of this, you need to _wrap your Studio_ by embedding it in another React application. This allows you to make an API call to the user endpoint prior to initialization and provide different configuration based on the result.
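A minimal sketch of the selection step inside such a wrapper, assuming the user has already been fetched before the Studio mounts. The `roles` allow-list on each workspace is an assumption for illustration – it is not a built-in Sanity workspace option – and workspaces without one are treated as visible to everyone:

```typescript
// Hypothetical shape: a workspace config plus an optional allow-list of role names.
// The `roles` field is an assumption made for this sketch, not a Sanity option.
interface WorkspaceLike {
  name: string
  basePath: string
  roles?: string[]
}

// Pick the workspaces a user should see, based on their role names.
function workspacesForUser(all: WorkspaceLike[], userRoles: string[]): WorkspaceLike[] {
  return all.filter(
    (workspace) =>
      !workspace.roles || workspace.roles.some((role) => userRoles.includes(role)),
  )
}

const allWorkspaces: WorkspaceLike[] = [
  {name: 'default', basePath: '/default'},
  {name: 'legal', basePath: '/legal', roles: ['legal-editor']},
]
```

With this shape, a user holding only `article-editor` would see just the `default` workspace, while a `legal-editor` would see both; the wrapping application would pass the filtered list into its Studio configuration.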
Alternatively, you can [amend the Vite configuration](https://www.sanity.io/docs/development#9c7158c423fb) to allow for top-level await – but this can impact some CLI commands such as GraphQL deployments, Typegen and document validation. If this is something you would like to achieve, please speak to your Solution Architect.

## [Roles quiz](/learn/course/introduction-to-users-and-roles/roles-quiz)

A short test of everything you've learned through this course.

**Question:** Custom roles are great for

1. Customizing the user experience of the Studio
2. Ensuring security, compliance and content integrity
3. Content workflows
4. Localization and market-specific content
5. All of the above

**Question:** If a user is assigned the default "editor" role and a custom role which can only edit "article" documents, what can that user edit?

1. Nothing, because the roles conflict
2. Only the "article" documents
3. All documents

**Question:** If I need to restrict the ability of a role to view certain documents, how should I configure my dataset?

1. It should be public
2. It should be private

**Question:** What are content resources?

1. A set of documents in a dataset defined by a GROQ filter
2. Custom roles created for specific users
3. An API endpoint for selecting a group of documents
4. Schemas that define document structures

**Question:** What does SAML role mapping allow you to do?

1. Control Studio access for groups of users
2. Set conditional access rules for specific content
3. Assign roles to users based on roles from a third-party identity provider
4. Navigate between two users' geographical locations

**Question:** Which functions in Sanity Studio can you use to check the current user and validate their role?

1. getUser() and hasPermission()
2. useCurrentUser() and userHasRole()
3. fetchUser() and roleChecker()
4. whoAmI() and canIHazRole()

**Question:** What value do end users get from studio customizations based on their role(s)?

1. A tailored editing experience that aligns with their responsibilities
2. Access to all hidden fields and content, regardless of their role
3. Dynamic Studio themes that match their favorite colors and personal tastes
4. An additional eight week vacation because of the efficiencies they enjoy