how are people managing staging vs production datasets? is there an easy way to push a staging dataset to production?
Managing staging vs production datasets in Sanity is a common workflow challenge, and there are several approaches depending on your plan level and needs.
Common Approaches
Most teams use separate datasets for different environments (like development, staging, and production). These datasets are completely isolated from each other within your project, which is great for testing changes without affecting live content.
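As a concrete starting point, environment datasets are created with the CLI. A small sketch (the `create_env_dataset` helper name is mine; `sanity dataset create` and `sanity dataset list` are standard Sanity CLI commands, and you need to be logged in via `sanity login`):

```shell
# Create an isolated environment dataset alongside production.
create_env_dataset() {
  sanity dataset create "$1"  # e.g. "staging"; datasets are isolated within a project
  sanity dataset list         # verify the new dataset appears
}

# Only run when the Sanity CLI is installed and authenticated.
if command -v sanity >/dev/null 2>&1; then
  create_env_dataset staging
fi
```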
Pushing Staging to Production
Unfortunately, there's no one-click "push to production" button in Sanity. Here are your options:
1. CLI Export/Import (Available on all plans)
The standard approach is using the Sanity CLI to export and import datasets:
# Export from staging
sanity dataset export staging staging.tar.gz
# Import to production (use --replace to overwrite)
sanity dataset import staging.tar.gz production --replace
You can also export/import specific document types:
sanity dataset export staging staging.tar.gz --types products,articles
This works, but it can be slow for large datasets since you're downloading and re-uploading all the data.
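Putting those commands together, a minimal promotion script might look like this sketch. The script name and the timestamped backup step are my additions; the export/import commands and the --replace flag are the ones shown above:

```shell
# Sketch: promote staging to production, with a safety backup first.
set -eu

promote_staging_to_production() {
  stamp=$(date +%Y%m%d-%H%M%S)

  # 1. Back up production before overwriting it.
  sanity dataset export production "production-backup-$stamp.tar.gz"

  # 2. Export the current staging content.
  sanity dataset export staging staging.tar.gz

  # 3. Import into production, replacing existing documents.
  sanity dataset import staging.tar.gz production --replace
}

# Only run when the Sanity CLI is installed and you are logged in.
if command -v sanity >/dev/null 2>&1; then
  promote_staging_to_production
fi
```

Keeping the dated backup around means a bad promotion can be rolled back with a plain `sanity dataset import` of the backup file.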
2. Cloud Clone (Enterprise only)
If you're on an Enterprise plan, you get access to Advanced Dataset Management which includes Cloud Clone. This lets you duplicate datasets directly in the cloud without the export/import dance. It's much faster and more efficient, especially for large datasets.
Enterprise customers also get Hot Swap functionality, which lets you use aliases to switch between datasets seamlessly - super useful for testing migrations before going live.
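If you have access to aliases, a Hot Swap flow might look roughly like the sketch below. The alias subcommand names (`create`, `link`, `unlink`) reflect my understanding of the Enterprise alias CLI; verify them with `sanity dataset alias --help` before relying on this:

```shell
# Hypothetical Hot Swap flow: point a stable alias at whichever dataset is live,
# so your front end queries the alias rather than a concrete dataset name.
hot_swap_to() {
  target="$1"  # e.g. a migrated dataset you've already tested

  sanity dataset alias unlink live
  sanity dataset alias link live "$target"
}

if command -v sanity >/dev/null 2>&1; then
  # One-time setup (assumed syntax): sanity dataset alias create live production
  hot_swap_to production
fi
```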
3. Cross Dataset Duplicator Plugin
The Cross Dataset Duplicator plugin provides a UI-based way to migrate documents and assets between datasets from within Studio. It's more selective than full dataset exports - useful when you want to copy specific documents rather than entire datasets.
npm i @sanity/cross-dataset-duplicator
How People Actually Manage This
From community discussions, here are common patterns:
Content flows production → staging/dev: Many teams regularly export production data and import it to staging/dev environments so developers work with realistic data. This is the opposite of what you asked, but it's the more common workflow.
Schema changes go staging → production: Developers test schema changes in staging first, then deploy the same schema to production. The content itself usually originates in production.
Separate workflows: Content editors work directly in production datasets, while developers work in development datasets. Schema changes are version-controlled and deployed through your normal CI/CD process.
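The production → staging refresh pattern above can be sketched with the same export/import commands from earlier (the helper name is mine):

```shell
# Sketch: refresh staging with a copy of production so devs test on realistic data.
set -eu

refresh_staging_from_production() {
  sanity dataset export production production.tar.gz
  # --replace overwrites staging's existing documents with the export.
  sanity dataset import production.tar.gz staging --replace
}

if command -v sanity >/dev/null 2>&1; then
  refresh_staging_from_production
fi
```

Running something like this on a schedule (or before starting a feature branch) keeps staging realistic without ever writing to production.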
Important Considerations
Datasets can have different schemas: Each dataset is schemaless, so your staging and production datasets can technically have different content models, though keeping them in sync is usually desired.
Cross-dataset references: If you use cross-dataset references, be aware that export/import can have issues with these.
Assets are included: When you export/import, assets (images, files) are included in the process.
The reality is that most teams don't regularly "push" staging content to production. Instead, they use staging to test schema changes and new features, while actual content is created directly in production. If you need frequent dataset synchronization, the Enterprise plan's Cloud Clone feature would be your best bet.