How to import array of objects into Sanity schema?

2 replies
Last updated: May 13, 2020
Any advice on data imports? I have an array of 280+ objects that need to be mapped into a Sanity schema. The fields almost match and the data is very regular, so transforming it to match Sanity's required terminology wouldn't be too difficult.
I saw the docs here: https://www.sanity.io/docs/importing-data, but I'm not really following them, to be honest.
AI Update

I totally get it—the docs can be a bit overwhelming when you're just trying to get data into Sanity! Since you have 280+ objects with regular structure that almost matches your schema, you've got a few solid approaches:

The CLI migration tool is probably your best bet here. It's designed exactly for this scenario and gives you real-time feedback as you work.

Quick start:

npx sanity@latest migration create

This creates a migration script where you can transform your data. Here's a simple example:

import {defineMigration, at, set} from 'sanity/migrate'

export default defineMigration({
  title: 'Import my 280 objects',
  documentTypes: ['yourDocumentType'],

  migrate: {
    async document(doc, context) {
      // Return patches describing how each document should change.
      // 'fieldName' and doc.sourceField are placeholders for your own fields.
      return [
        at('fieldName', set(doc.sourceField)),
        // ... more field mappings
      ]
    },
  },
})

Why this is great:

  • Runs in dry-run mode by default so you can see what'll happen before committing (see the run commands after this list)
  • Handles batching automatically (no rate limit headaches)
  • Validates against your Studio schema as it runs
  • You can run it multiple times safely (idempotent)
  • Real-time feedback in Studio as documents appear
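
Once the script is in place, you run it with the migration CLI. The migration ID below is a guess at what the CLI would generate from the title above; sanity migration create prints the actual ID when it scaffolds the script:

npx sanity migration run import-my-280-objects
npx sanity migration run import-my-280-objects --no-dry-run

The first command is the dry run and only prints what would change; add --no-dry-run once you're happy with the output.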

Alternative: NDJSON Import

If your data transformation is straightforward, you could also generate an NDJSON file and use the CLI import command. NDJSON is just newline-delimited JSON—one document per line.
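
For example, a two-document NDJSON file (the IDs, type, and titles here are placeholders) looks like this:

{"_id": "imported-1", "_type": "yourType", "title": "First item"}
{"_id": "imported-2", "_type": "yourType", "title": "Second item"}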

Your workflow would be:

  1. Write a Node script to transform your 280 objects into Sanity document format (a sketch follows after this list)
  2. Output as .ndjson (one JSON object per line)
  3. Import with: sanity dataset import yourfile.ndjson production
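
Steps 1 and 2 might look like this as a minimal Node (ESM) sketch; source.json, the imported- ID prefix, and the obj.id / obj.name field names are assumptions to adapt to your own data:

import {readFile, writeFile} from 'node:fs/promises'

// Load the source array (your 280+ objects)
const source = JSON.parse(await readFile('./source.json', 'utf8'))

// One JSON document per line = NDJSON
const ndjson = source
  .map((obj) =>
    JSON.stringify({
      _id: `imported-${obj.id}`, // deterministic, so re-runs update rather than duplicate
      _type: 'yourType',
      title: obj.name,
    }),
  )
  .join('\n')

await writeFile('./yourfile.ndjson', ndjson + '\n')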

Important flags:

  • --replace - Replaces existing documents with matching IDs
  • --missing - Only imports documents that don't exist yet
  • --allow-failing-assets - Continues if some assets fail to upload
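
For example, to re-run the import safely and only add documents that don't exist yet:

sanity dataset import yourfile.ndjson production --missing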

Each document needs:

{"_id": "unique-id", "_type": "yourType", "title": "Something", ...}

The _id and _type fields are required. The _id should be unique and deterministic (so you can re-run imports safely).

Practical Tips

Start simple: Don't try to map everything perfectly on the first run. Get the basic structure working with just IDs and titles, then add complexity incrementally. This makes debugging way easier.

Handle missing data: Your source data might have gaps. Use optional chaining and fallbacks:

at('description', set(sourceObj?.description || 'No description'))

For arrays of objects: Remember that objects in arrays need a _key field (a unique identifier). You can generate these with a utility like uuid from the @sanity/uuid package:

import {uuid} from '@sanity/uuid'

items: sourceArray.map(item => ({
  _key: uuid(), // every member of a Sanity array needs a unique _key
  ...item
}))

Which Should You Choose?

  • Use the migration tool if you want the safety net of dry-runs and schema validation, or if you might need to run this multiple times
  • Use NDJSON import if your transformation is dead simple and you just want to get data in quickly

Given that you mentioned the fields "almost match," I'd lean toward the migration tool—it'll catch any mismatches before you commit, and you can iterate on the transformation logic easily.

Hope this helps! Feel free to share more about your specific data structure if you want more targeted advice.

Could you elaborate what you need help with? Any particular thing you don't understand?
I actually just figured this out. I'm not 100% sure how it worked, but I just copied the format of the docs example and followed the errors until Sanity was happy.
Thank you though!
