How to import an array of objects into a Sanity schema?
I totally get it—the docs can be a bit overwhelming when you're just trying to get data into Sanity! Since you have 280+ objects with regular structure that almost matches your schema, you've got a few solid approaches:
The Easiest Path: CLI Migration Tool (Recommended)
The CLI migration tool is probably your best bet here. It's designed exactly for this scenario and gives you real-time feedback as you work.
Quick start:
```sh
npx sanity@latest migration create
```

This creates a migration script where you can transform your data. Here's a simple example:
```js
import {defineMigration, at, set} from 'sanity/migrate'

export default defineMigration({
  title: 'Import my 280 objects',
  documentTypes: ['yourDocumentType'],
  migrate: {
    async document(doc, context) {
      // Your transformation logic here
      return [
        at('fieldName', set(transformedValue)),
        // ... more field mappings
      ]
    },
  },
})
```

Why this is great:
- Runs in dry-run mode by default so you can see what'll happen before committing
- Handles batching automatically (no rate limit headaches)
- Validates against your Studio schema as it runs
- You can run it multiple times safely (idempotent)
- Real-time feedback in Studio as documents appear
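Once your transform logic is in place, run the migration from your Studio project folder. It runs as a dry run by default; `--no-dry-run` commits the changes. (Replace `import-my-280-objects` with whatever ID `migration create` generated for you.)

```sh
# Dry run (default): prints the planned mutations without writing anything
npx sanity migration run import-my-280-objects

# Commit once the output looks right
npx sanity migration run import-my-280-objects --no-dry-run
```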
Alternative: NDJSON Import
If your data transformation is straightforward, you could also generate an NDJSON file and use the CLI import command. NDJSON is just newline-delimited JSON—one document per line.
Your workflow would be:
- Write a Node script to transform your 280 objects into Sanity document format (see the sketch below)
- Output as `.ndjson` (one JSON object per line)
- Import with:

```sh
sanity dataset import yourfile.ndjson production
```
Important flags:
- `--replace` - Replaces existing documents with matching IDs
- `--missing` - Only imports documents that don't exist yet
- `--allow-failing-assets` - Continues if some assets fail to upload
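For example, to re-run an import and overwrite the previous versions of the same documents:

```sh
sanity dataset import yourfile.ndjson production --replace
```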
Each document needs:
{"_id": "unique-id", "_type": "yourType", "title": "Something", ...}The _id and _type fields are required. The _id should be unique and deterministic (so you can re-run imports safely).
Practical Tips
Start simple: Don't try to map everything perfectly on the first run. Get the basic structure working with just IDs and titles, then add complexity incrementally. This makes debugging way easier.
Handle missing data: Your source data might have gaps. Use optional chaining and fallbacks:
```js
at('description', set(sourceObj.description || 'No description'))
```

For arrays with objects: Remember that objects in arrays need a `_key` field (unique identifier). You can generate these with a utility like uuid:
```js
items: sourceArray.map(item => ({
  _key: uuid(),
  ...item
}))
```
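If you'd prefer not to add a dependency, Node's built-in `crypto` module works too (Node 14.17+); here `sourceArray` is the same raw array as above:

```js
import {randomUUID} from 'node:crypto'

// Same mapping as above, with a built-in UUID generator for each _key
const items = sourceArray.map((item) => ({_key: randomUUID(), ...item}))
```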
Which Should You Choose?
- Use the migration tool if you want the safety net of dry-runs and schema validation, or if you might need to run this multiple times
- Use NDJSON import if your transformation is dead simple and you just want to get data in quickly
Given that you mentioned the fields "almost match," I'd lean toward the migration tool—it'll catch any mismatches before you commit, and you can iterate on the transformation logic easily.
Hope this helps! Feel free to share more about your specific data structure if you want more targeted advice.