How to derisk changes to your schema and plan for successful content migrations
Content and schema migrations are potentially high-stakes operations, especially for projects that are in production. At the same time, it can be hard to nail a content model on the first try and anticipate all needs and requirements ahead of time. Our aim at Sanity is to let you work with content models and content through APIs early in your projects without being penalized when they need to change.
The considerations differ depending on whether your project is still in development or already in production. Below are some overarching considerations for both scenarios.
Keep in mind that editors may be editing in the Studio while the migration is running. It's good to give them a heads-up before running a content migration on a dataset that is being worked on.
Most changes to a content model for projects in development that haven’t been put into production are additive; you add new document types and fields. Often, you will not have as much content that needs to be changed or updated either. Sometimes, editing documents manually in the Studio might be as efficient as running automated scripts to change them.
That said, there are also cases where you have a lot of content, perhaps because the content team has been working in parallel with design and implementation, or because you have imported content from another system, and you wish to take the opportunity to improve its structure.
In these cases, you should always consider the following workflow:

1. Run sanity documents validate to check which documents give errors against your schema changes.
2. Run sanity migration create to scaffold the migration file and boilerplate code.
3. Run sanity migration run <ID> and validate that the patches look correct.
4. Run sanity migration run <ID> --no-dry-run to make the changes.

If you aren’t quite ready to change the code that implements your content, you can use the coalesce() function in GROQ to “alias” the new field to the old name/shape:
"oldFieldName": coalesce(newFieldName, oldFieldName)
Non-additive changes to the content model for projects in production require more diligence, much like any database migration, especially if you aim for as little downtime as possible. Migrations like these are easier if you support PR/branch deployments in your CI/CD tooling. We recommend deploying the Studio from a git-based platform if you have more than simple needs.
To prepare a migration for projects in production:

1. Run sanity documents validate to check which documents give errors against your new schema changes.
2. Run sanity migration create to scaffold the migration file and boilerplate code.
3. Run sanity migration run <ID> and validate that the patches look correct.
4. Run sanity migration run <ID> --no-dry-run to make the changes in your staging dataset.

An idempotent migration is one that can safely be run multiple times. Typically, an idempotent migration starts by checking whether a precondition is met; if it isn’t, the migration does nothing.
import {defineMigration, at, setIfMissing, unset, append} from 'sanity/migrate'

export default defineMigration({
  title: 'Convert product category from string to array of strings',
  documentTypes: ['product'],
  // Only touch documents that still have the old field and not the new one
  filter: 'defined(category) && !defined(categories)',
  migrate: {
    document(doc) {
      return [
        at('categories', setIfMissing([])),
        at('categories', append(doc.category)),
        at('category', unset()),
      ]
    }
  }
})

Example of an idempotent operation:
at('name', set(doc.name.toUpperCase()))
This will produce the same result no matter how often you run it.
Example of a non-idempotent operation:
at('members', append({name: 'Some One'}))
This adds a new member to the array every time it’s run, so the result differs between runs.
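The difference is easy to see in plain JavaScript (a simplified sketch, not the sanity/migrate API):

```javascript
// Idempotent: uppercasing an already-uppercased name changes nothing.
const setUpperName = (doc) => ({...doc, name: doc.name.toUpperCase()})

// Non-idempotent: each run appends another copy of the member.
const addMember = (doc) => ({...doc, members: [...doc.members, {name: 'Some One'}]})

const person = {name: 'ada', members: []}

const once = setUpperName(person)
console.log(setUpperName(once).name === once.name) // true: stable after one run

console.log(addMember(addMember(person)).members.length) // 2: grows on every run
```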
If there’s no way to write your migration idempotently, you can instead write an idempotence marker to the document as part of the migration.
import {defineMigration, at, setIfMissing, append} from 'sanity/migrate'

// Should be unique for this migration, but must never change between runs
const idempotenceKey = 'xyz'

export default defineMigration({
  title: 'Convert product from reference to array of references',
  filter: 'defined(product) && !defined(products)',
  migrate: {
    document(doc) {
      if ((doc._migrations || []).includes(idempotenceKey)) {
        // Document already migrated, so we can skip it
        return
      }
      return [
        // The migration itself
        at('members', append({name: 'Some One'})),
        // ...then add the idempotence key
        at('_migrations', setIfMissing([])),
        at('_migrations', append(idempotenceKey)),
      ]
    }
  }
})
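The marker pattern above can be sketched in plain JavaScript, simulating two runs over the same document (a simplified model where the migration returns a new document instead of real patches):

```javascript
// Hypothetical simplified sketch of the idempotence-marker pattern.
// The second run sees the marker and skips the document.
const idempotenceKey = 'xyz'

function migrateDocument(doc) {
  if ((doc._migrations || []).includes(idempotenceKey)) {
    return undefined // already migrated, so skip
  }
  return {
    ...doc,
    members: [...(doc.members || []), {name: 'Some One'}],
    _migrations: [...(doc._migrations || []), idempotenceKey],
  }
}

const original = {_id: 'a', members: []}
const once = migrateDocument(original)
const twice = migrateDocument(once)
console.log(once.members.length) // 1
console.log(twice) // undefined: the second run is a no-op
```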