Best practices for creating/updating multiple documents in Sanity using JS client and p-throttle

8 replies
Last updated: Apr 7, 2020
Let's say I need to create/update 100+ documents. Is there a best practice? I tried a naive approach of just looping through the data and doing a client.create() for each entry, but things quickly start to fail. Using the JS client.
AI Update

When creating or updating 100+ documents with the Sanity JS client, a naive loop with client.create() will quickly hit rate limits (max 25 requests per second), which is exactly what you're experiencing. Here are the best practices:

The Rate Limit Problem

Sanity's API has a rate limit of 25 requests per second. When you fire off 100+ simultaneous client.create() calls, you'll blow past this limit and start seeing failures.
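
For illustration, the failing pattern looks something like this (a sketch, assuming client is a configured Sanity client and documents is your array of document objects):

// Naive approach: every create() fires at once, so 100+ documents
// means 100+ concurrent requests, well past the 25 req/s limit
await Promise.all(documents.map(doc => client.create(doc)))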

Best Approaches (in order of preference)

1. Throttled, Batched Transactions

The most efficient approach combines throttling with transactions to batch documents into groups. This example uses p-throttle:

const pThrottle = require('p-throttle')

const persistBatch = pThrottle(
  // Persist each batch of documents as a single transaction
  batch => batch.reduce(
    (trx, doc) => trx.createOrReplace(doc),
    client.transaction()
  ).commit(),
  // Max 20 requests per second
  20,
  1000 // 1 second window
)

// Split your documents into batches of 10
let batch = []
for (let i = 0; i < documents.length; i++) {
  batch.push(documents[i])
  
  if (batch.length === 10 || (i === documents.length - 1 && batch.length > 0)) {
    persistBatch(batch)
    batch = []
  }
}
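
Note that the snippets in this thread use the older p-throttle call signature, pThrottle(fn, limit, interval). Newer releases (v4 and later) take an options object and return a wrapper you apply to your function; a rough equivalent, assuming a current version, would be:

const pThrottle = require('p-throttle') // v5+ is ESM-only, so you may need import instead

// p-throttle v4+: configure the throttle first, then wrap the function
const throttle = pThrottle({limit: 20, interval: 1000})

const persistBatch = throttle(batch =>
  batch
    .reduce((trx, doc) => trx.createOrReplace(doc), client.transaction())
    .commit()
)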

This approach is optimal because:

  • Transactions reduce the number of API calls (10 docs = 1 request)
  • Throttling prevents rate limit violations
  • Small batches keep each transaction payload reasonable (< 500kB is recommended)

2. Simple Throttling

If your documents are large or you prefer simpler code:

const pThrottle = require('p-throttle')

const persistDocument = pThrottle(
  doc => client.createOrReplace(doc, {visibility: 'async'}),
  20, // Max 20 requests
  1000 // Within 1 second
)

await Promise.all(documents.map(persistDocument))

Note the {visibility: 'async'} option: the request returns as soon as the change is committed, without waiting for it to be visible to queries, which gives better performance.

3. Use the CLI for Large Imports

For one-time imports or very large datasets (1000+ documents), use the Sanity CLI with NDJSON files:

# Export your data as NDJSON (newline-delimited JSON)
# Each line should be a complete document
sanity dataset import data.ndjson production

The import command batches the documents for you, avoiding the rate-limit problem, and is much faster for bulk operations.
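
If your source data is already in memory (for example, fetched from another API), a minimal sketch for producing the NDJSON file could look like this (assuming each document already carries a _type, and optionally an _id):

const fs = require('fs')

// One complete JSON document per line (newline-delimited JSON)
const ndjson = documents.map(doc => JSON.stringify(doc)).join('\n')

fs.writeFileSync('data.ndjson', ndjson + '\n')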

Key Considerations

  • Transaction size: Keep individual transactions under ~500kB
  • Visibility option: Use {visibility: 'async'} or 'deferred' for better performance when you don't need immediate query visibility
  • Error handling: Wrap operations in try/catch blocks to handle failures gracefully (see the sketch after this list)
  • Libraries: p-throttle is recommended, but throttled-queue also works
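
For the error-handling point, a minimal sketch (reusing the throttled persistBatch from the first example) could be:

// Process batches sequentially and keep going if one of them fails
async function persistAllBatches(batches) {
  for (const batch of batches) {
    try {
      await persistBatch(batch)
    } catch (err) {
      // Log the failing documents so the run can be retried later
      console.error('Batch failed:', batch.map(doc => doc._id), err.message)
    }
  }
}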

The throttling + batched transactions approach gives you the best balance of speed, reliability, and staying within API limits for most use cases involving 100+ documents.

Works with 10-20 requests at a time (hence the slice()), but starts to fail when increased to about 30 simultaneous requests.
Yeah, that'll be the rate limiting kicking in (max 25 requests per second)!
I'd suggest using a throttler such as `p-throttle`:
npm install p-throttle


const pThrottle = require('p-throttle')

const persistSpeaker = pThrottle(
  // Define the function to be called when ready
  speaker => client.createOrReplace(speaker, {visibility: 'async'}),
  // Max 20 requests
  20,
  // Within a 1 second window
  1000
)

Promise.all(
  speakers
    .map(transformSpeaker)
    .map(persistSpeaker)
)
  .then(console.log)
  .catch(console.error)
You could also use a transaction, but if you've got 100+ speakers I'm not sure I would suggest it - one should try to keep the transaction payload below a reasonable size (< 500kB perhaps?)
So kind of depends on the size and number of those documents.
// Use a transaction (not great for a large number of documents)
speakers
  .map(transformSpeaker)
  .reduce(
    (trx, speaker) => trx.createOrReplace(speaker),
    client.transaction()
  )
  .commit()
  .then(console.log)
  .catch(console.error)
Even better, combine the two approaches, batching the speakers up into groups of a certain size (10 in this case) and doing transactions for them:

const pThrottle = require('p-throttle')

const persistSpeakerBatch = pThrottle(
  // Define the function to be called when ready
  batch => batch.reduce(
    (trx, speaker) => trx.createOrReplace(speaker),
    client.transaction()
  ).commit(), // Commit the transaction so the batch is actually persisted
  // Max 20 requests
  20,
  // Within a 1 second window
  1000
)

let batch = []
for (let i = 0; i < speakers.length; i++) {
  batch.push(speakers[i])

  if (batch.length === 10 || (i === speakers.length - 1 && batch.length > 0)) {
    persistSpeakerBatch(batch)
    batch = []
  }
}
Thanks for clarifying! 😄 I did manage to get it to work with the throttled-queue library, but p-throttle looks a bit nicer and has more users, so I think I will replace it. 🙂
This is for an importer (importing from Sessionize.com), so speed is not that important. It's manually executed once in a while.
But I really like the combined example.
😄
I also looked at the transaction() API to see if I could use that, but didn't figure out how to reduce the list into it, so it's really cool to see that example. 🙂
