Schema validation and the Content Lake

Understand why schema validation only runs in Sanity Studio and what that means when writing data through the API or client libraries.

Validation rules you define in your schema run in Sanity Studio. They do not run on the server. When you create or update documents through the HTTP API, a client library like @sanity/client, content migrations, or data imports, the Content Lake accepts the write without checking your validation rules.

This is by design. The Content Lake is intentionally schemaless, and that flexibility is what makes several important features possible.

Why the Content Lake doesn't enforce schemas

A schemaless Content Lake means you can evolve your schema without running database migrations. You can add, rename, or remove fields and the existing data stays intact. Old documents don't break when the schema changes, and new schemas can coexist with old data.

It also means multiple Studios with different schemas can point at the same dataset. This is useful for teams that maintain separate Studio configurations for different roles or workflows.

The trade-off is that schema constraints, including validation rules, are the responsibility of the client application making the write. Sanity Studio handles this automatically. Other clients do not.

What this means in practice

When an editor saves a document in Sanity Studio, the Studio checks every validation rule before allowing the publish. Required fields, minimum and maximum values, custom validators, and document-level rules all run in the browser.

When your code calls the mutation API or uses @sanity/client to create or patch a document, none of those rules run. The Content Lake accepts the document as-is. This applies to all programmatic writes, including the HTTP mutations API, client library methods like create(), createOrReplace(), and patch(), content migrations, and data imports.
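As an illustration, consider a hypothetical post document type whose schema marks title as required. The following mutations API request body omits title entirely, yet the Content Lake accepts it without complaint (the document type and field names here are assumptions, not part of any real schema):

```json
{
  "mutations": [
    {
      "create": {
        "_type": "post",
        "slug": { "_type": "slug", "current": "hello-world" }
      }
    }
  ]
}
```

Posting this body to the mutations endpoint creates the document as-is; the missing required field only surfaces later, when an editor opens the document in the Studio.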

A few schema-aware tools, such as Agent Actions Patch and certain MCP tools, are exceptions, but most API tools will not validate your input against your schema.

How to validate data outside the Studio

If you're writing data programmatically and need to ensure it conforms to your schema, you have a few options.

Validate after writing with the CLI

The Sanity CLI can check all documents in a dataset against your current schema. This is useful as a post-flight check after bulk operations like migrations or imports.

The validation runs locally against your schema definition and surfaces the same errors and warnings you would see in the Studio.
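Assuming a Studio project set up with the Sanity CLI (v3 or later), the check looks roughly like this; confirm the exact command and flags against your CLI version:

```shell
# Validate every document in the dataset against the current schema.
# Run from your Studio project directory.
npx sanity documents validate

# Optionally target a specific dataset and report only errors.
npx sanity documents validate --dataset production --level error
```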

Validate in your application layer

For applications that write to the Content Lake on an ongoing basis, validate the data in your own code before making the API call. This is standard practice when writing to any data store through an API. Check required fields, value ranges, and any business rules before calling create() or patch().
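As a minimal sketch, suppose a hypothetical post schema declares title as required and constrains rating to the range 1 through 5 (the Studio equivalents would be Rule.required() and Rule.min(1).max(5)). The document shape and field names below are illustrative, not taken from any real schema:

```typescript
interface PostInput {
  _type: "post";
  title?: string;
  rating?: number;
}

// Mirror the Studio's validation rules in application code so that
// invalid documents never reach the mutation call.
function validatePost(doc: PostInput): string[] {
  const errors: string[] = [];
  if (!doc.title || doc.title.trim() === "") {
    errors.push("title is required");
  }
  if (doc.rating !== undefined && (doc.rating < 1 || doc.rating > 5)) {
    errors.push("rating must be between 1 and 5");
  }
  return errors;
}
```

Call a check like this before client.create() or client.patch(), and abort the write when the returned array is non-empty.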

Use generated types for structural safety

If you're using TypeScript, sanity typegen generates types from your schema. This catches structural issues at compile time: wrong field types, missing required fields, and incorrect document shapes. It won't catch value-level constraints like min(5) or custom validators, but it covers a significant class of errors.
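For illustration, a generated type might look roughly like the following; the actual output of sanity typegen depends on your schema, so this shape and the "post" type are assumptions:

```typescript
// Hypothetical shape resembling what sanity typegen emits for a
// "post" document type with a required title and optional rating.
type Post = {
  _type: "post";
  _id: string;
  title: string;
  rating?: number;
};

// Structural errors fail at compile time:
const ok: Omit<Post, "_id"> = { _type: "post", title: "Hello" };

// const bad: Omit<Post, "_id"> = { _type: "post" };
//   -> compile error: property 'title' is missing

// But value-level rules are not checked by the type system:
const sneaky: Omit<Post, "_id"> = { _type: "post", title: "Hi", rating: 99 };
```

The last line compiles without complaint even though a min/max rule in the schema would reject it, which is why type generation complements, rather than replaces, runtime validation.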
