Best practices for creating/updating multiple documents in Sanity using JS client and p-throttle
When creating or updating 100+ documents with the Sanity JS client, a naive loop with client.create() will quickly hit rate limits (max 25 requests per second), which is exactly what you're experiencing. Here are the best practices:
The Rate Limit Problem
Sanity's API has a rate limit of 25 requests per second. When you fire off 100+ simultaneous client.create() calls, you'll blow past this limit and start seeing failures.
Best Approaches (in order of preference)
1. Use Throttling with Batched Transactions (Recommended)
The most efficient approach combines throttling with transactions to batch documents into groups. This example uses p-throttle:
```js
const pThrottle = require('p-throttle')

// Positional-argument API used by older p-throttle releases (v3 and earlier)
const persistBatch = pThrottle(
  // Combine a batch of documents into a single transaction (one API call)
  batch => batch.reduce(
    (trx, doc) => trx.createOrReplace(doc),
    client.transaction()
  ).commit(),
  20,   // max 20 calls...
  1000  // ...per 1-second window
)

// Split your documents into batches of 10
let batch = []
for (let i = 0; i < documents.length; i++) {
  batch.push(documents[i])
  if (batch.length === 10 || i === documents.length - 1) {
    persistBatch(batch)
    batch = []
  }
}
```

This approach is optimal because:
- Transactions reduce the number of API calls (10 docs = 1 request)
- Throttling prevents rate limit violations
- Batches of ~10 documents keep transaction payloads reasonable (staying under the recommended ~500kB)
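The manual batching loop can also be factored into a small helper; a plain-JavaScript sketch (the `chunk` name is just illustrative):

```js
// Split an array into batches of at most `size` elements
function chunk(items, size) {
  const batches = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// e.g. chunk(['a', 'b', 'c', 'd', 'e'], 2) → [['a', 'b'], ['c', 'd'], ['e']]
```

You can then write `chunk(documents, 10).forEach(persistBatch)` instead of managing the batch array by hand.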
2. Simple Throttling
If your documents are large or you prefer simpler code:
```js
const pThrottle = require('p-throttle')

const persistDocument = pThrottle(
  doc => client.createOrReplace(doc, {visibility: 'async'}),
  20,   // max 20 requests...
  1000  // ...within 1 second
)

await Promise.all(documents.map(persistDocument))
```

Note the `{visibility: 'async'}` option: the call returns as soon as the change is committed, rather than waiting for it to become queryable, giving you better write throughput.
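If you would rather avoid the dependency, the same rate limiting can be hand-rolled. A minimal sketch of the idea (not p-throttle's actual implementation):

```js
// Allow at most `limit` calls to start per `interval` ms;
// excess calls are queued and started as slots free up
function throttle(fn, limit, interval) {
  const queue = []
  let active = 0
  function next() {
    if (active >= limit || queue.length === 0) return
    active++
    const {args, resolve, reject} = queue.shift()
    Promise.resolve(fn(...args)).then(resolve, reject)
    // Free the slot once the interval has elapsed
    setTimeout(() => { active--; next() }, interval)
  }
  return (...args) => new Promise((resolve, reject) => {
    queue.push({args, resolve, reject})
    next()
  })
}
```

Usage would look the same as above: `const persistDocument = throttle(doc => client.createOrReplace(doc), 20, 1000)`.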
3. Use the CLI for Large Imports
For one-time imports or very large datasets (1000+ documents), use the Sanity CLI with NDJSON files:
```sh
# Export your data as NDJSON (newline-delimited JSON),
# with one complete document per line, then run:
sanity dataset import data.ndjson production
```

This bypasses rate limits and is much faster for bulk operations.
Key Considerations
- Transaction size: Keep individual transactions under ~500kB
- Visibility option: Use `{visibility: 'async'}` or `'deferred'` for better performance when you don't need immediate query visibility
- Error handling: Wrap operations in try-catch blocks to handle failures gracefully
- Libraries: `p-throttle` is recommended, but `throttled-queue` also works
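For the error-handling point, `Promise.allSettled` lets you collect failures without aborting the rest of the run; a sketch with a stand-in `persist` function (swap in your throttled client call):

```js
// Persist all documents, collecting failures instead of throwing
async function persistAll(documents, persist) {
  const results = await Promise.allSettled(documents.map(persist))
  const failed = results
    .map((r, i) => (r.status === 'rejected' ? documents[i] : null))
    .filter(Boolean)
  return {succeeded: documents.length - failed.length, failed}
}
```

Failed documents can then be logged or retried in a second pass.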
The throttling + batched transactions approach gives you the best balance of speed, reliability, and staying within API limits for most use cases involving 100+ documents.