
Managing staging vs production datasets in Sanity is a common workflow challenge, and there are several approaches depending on your plan level and needs.
Most teams use separate datasets for different environments (like development, staging, and production). These datasets are completely isolated from each other within your project, which is great for testing changes without affecting live content.
Unfortunately, there's no one-click "push to production" button in Sanity. Here are your options:
The standard approach is using the Sanity CLI to export and import datasets:
# Export from staging
sanity dataset export staging staging.tar.gz
# Import to production (use --replace to overwrite)
sanity dataset import staging.tar.gz production --replace

You can also export/import specific document types:
sanity dataset export staging staging.tar.gz --types products,articles

This works but can be slow for large datasets since you're downloading and re-uploading all the data.
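Putting the two commands together, a staging-to-production sync might look like the sketch below. The dataset names, the backup step, and the DRY_RUN guard are all illustrative conventions, not part of the Sanity CLI — the guard defaults to printing the commands so you can review them before overwriting production.

```shell
#!/usr/bin/env sh
# Sketch of a staging -> production sync, assuming the Sanity CLI is
# installed and you are logged in. DRY_RUN=1 (the default here) prints
# the commands instead of running them.
set -eu

SRC="${SRC:-staging}"
DEST="${DEST:-production}"
BACKUP="backup-$(date +%Y%m%d-%H%M%S).tar.gz"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Back up the destination first -- --replace is destructive.
run sanity dataset export "$DEST" "$BACKUP"
# 2. Export the source dataset.
run sanity dataset export "$SRC" "$SRC.tar.gz"
# 3. Import into the destination, overwriting existing documents.
run sanity dataset import "$SRC.tar.gz" "$DEST" --replace
```

Run it once with the default dry run, then set DRY_RUN=0 when the printed plan looks right.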
If you're on an Enterprise plan, you get access to Advanced Dataset Management which includes Cloud Clone. This lets you duplicate datasets directly in the cloud without the export/import dance. It's much faster and more efficient, especially for large datasets.
Enterprise customers also get Hot Swap functionality, which lets you use aliases to switch between datasets seamlessly - super useful for testing migrations before going live.
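A hot-swap migration might follow a plan like the one printed below. The alias subcommands and dataset names here are assumptions about the Enterprise CLI — verify them with `sanity dataset alias --help` on your plan; the script itself only echoes the plan and touches nothing in your project.

```shell
#!/usr/bin/env sh
# Illustrative hot-swap plan using dataset aliases (Enterprise-only).
# The exact subcommand names are assumptions -- check
# `sanity dataset alias --help`. Nothing here runs against your project.
set -eu

plan='sanity dataset alias create live production
sanity dataset import staging.tar.gz production-v2 --replace
sanity dataset alias link live production-v2'

# Clients keep querying the alias (the Enterprise docs describe
# addressing aliases with a ~ prefix, e.g. dataset "~live"), so
# relinking the alias swaps all readers to the new dataset at once --
# and relinking back to "production" is an instant rollback.
echo "$plan"
```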
The Cross Dataset Duplicator plugin provides a UI-based way to migrate documents and assets between datasets from within Studio. It's more selective than full dataset exports - useful when you want to copy specific documents rather than entire datasets.
npm i @sanity/cross-dataset-duplicator

From community discussions, here are common patterns:
Content flows production → staging/dev: Many teams regularly export production data and import it to staging/dev environments so developers work with realistic data. This is the opposite of what you asked, but it's the more common workflow.
Schema changes go staging → production: Developers test schema changes in staging first, then deploy the same schema to production. The content itself usually originates in production.
Separate workflows: Content editors work directly in production datasets, while developers work in development datasets. Schema changes are version-controlled and deployed through your normal CI/CD process.
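The schema side of that pipeline can be sketched as a single CI step. Beyond the `sanity` commands and the SANITY_AUTH_TOKEN convention the CLI uses for non-interactive auth, everything here is an assumption about your setup; the CI_DRY_RUN guard defaults to printing the commands so the sketch is safe to run locally.

```shell
#!/usr/bin/env sh
# Sketch of a CI deploy step: schema lives in version control, so
# promoting it to production is just rebuilding and redeploying the
# Studio. In real CI the Sanity CLI authenticates via the
# SANITY_AUTH_TOKEN environment variable; CI_DRY_RUN=1 (the default
# here) prints the commands instead of running them.
set -eu

run() { if [ "${CI_DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run npm ci            # install pinned dependencies
run npx sanity build  # fail fast on schema or compile errors
run npx sanity deploy # publish the Studio with the new schema
```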
Datasets can have different schemas: Each dataset is schemaless, so your staging and production datasets can technically have different content models, though keeping them in sync is usually desired.
Cross-dataset references: If you use cross-dataset references, be aware that export/import doesn't follow them — if the referenced dataset isn't migrated alongside, references in the imported data can end up pointing at documents that don't exist in the target environment.
Assets are included: When you export/import, assets (images, files) are included in the process.
The reality is that most teams don't regularly "push" staging content to production. Instead, they use staging to test schema changes and new features, while actual content is created directly in production. If you need frequent dataset synchronization, the Enterprise plan's Cloud Clone feature would be your best bet.