API Rate Limit Error and Waiting Time for Batch Deletion

5 replies
Last updated: Apr 5, 2022
Hi, I’m trying to batch delete a whole bunch of documents for a fresh start on an existing dataset; unfortunately, I’m running into an API rate limit error. I’m placing each mutation call in a queue per the docs. How long do I need to wait until I can try again?

ERROR: exception caught trying to delete Student: 04639232 ClientError: API rate limit exceeded
    at handler (/Users/svu/__projects/showcase/cms/node_modules/@sanity/client/lib/http/request.js:28:13)
    at applyMiddleware (/Users/svu/__projects/showcase/cms/node_modules/@sanity/client/node_modules/get-it/src/util/middlewareReducer.js:8:15)
    at onResponse (/Users/svu/__projects/showcase/cms/node_modules/@sanity/client/node_modules/get-it/src/index.js:90:22)
    at cb (/Users/svu/__projects/showcase/cms/node_modules/@sanity/client/node_modules/get-it/src/index.js:57:55)
    at callback (/Users/svu/__projects/showcase/cms/node_modules/@sanity/client/node_modules/get-it/src/request/node-request.js:53:46)
    at /Users/svu/__projects/showcase/cms/node_modules/@sanity/client/node_modules/get-it/src/request/node-request.js:146:14
    at DestroyableTransform.<anonymous> (/Users/svu/__projects/showcase/cms/node_modules/simple-concat/index.js:8:13)
    at Object.onceWrapper (events.js:421:28)
    at DestroyableTransform.emit (events.js:327:22)
    at endReadableNT (/Users/svu/__projects/showcase/cms/node_modules/readable-stream/lib/_stream_readable.js:1010:12) {
  response: {
    body: { message: 'API rate limit exceeded' },
    url: 'https://uvdp4b76.api.sanity.io/v2022-03-26/data/mutate/showcase-spr-2022?returnIds=true&returnDocuments=true&visibility=sync',
    method: 'POST',
    headers: {
      date: 'Thu, 31 Mar 2022 20:07:00 GMT',
      'content-type': 'application/json; charset=utf-8',
      'x-ratelimit-remaining-second': '0',
      'retry-after': '1',
      'ratelimit-reset': '1',
      'x-ratelimit-limit-second': '50',
      'ratelimit-remaining': '0',
      'ratelimit-limit': '50',
      'content-length': '41',
      vary: 'Origin',
      via: '1.1 google',
      'alt-svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000',
      connection: 'close'
    },
    statusCode: 429,
    statusMessage: 'Too Many Requests'
  },
  statusCode: 429,
  responseBody: '{\n  "message": "API rate limit exceeded"\n}'
}
AI Update

I can see from your error message that the response headers contain the exact information you need! The key headers are:

  • 'retry-after': '1' - This tells you to wait 1 second before retrying
  • 'x-ratelimit-remaining-second': '0' - You've exhausted your current second's quota

So the immediate answer is: wait 1 second (as indicated by the retry-after header) before making your next request.

However, for batch deletions, you're hitting rate limits because you're likely making too many individual requests. Here are much more efficient approaches:

1. Use transactions with multiple mutations (most efficient)

Instead of one mutation per request, batch multiple mutations into a single transaction. You can include many delete operations in one request:

// `client` is a configured @sanity/client instance with a write token;
// `documents` is the array of documents (or at least their _id values) to delete.
const mutations = documents.map(doc => ({ delete: { id: doc._id } }));

// Batch them into chunks so each request carries many deletes
const chunkSize = 100;
for (let i = 0; i < mutations.length; i += chunkSize) {
  const chunk = mutations.slice(i, i + chunkSize);
  await client.mutate(chunk);
  // Add a small delay between batches if needed
  await new Promise(resolve => setTimeout(resolve, 50));
}
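
To build the `documents` array used above, you could fetch just the IDs first. A minimal sketch, assuming `client` is the same configured @sanity/client instance and "student" is the type being cleaned up:

// Fetch only the _id of every document of the type you want to remove
const documents = await client.fetch('*[_type == "student"]{_id}');
console.log(`Deleting ${documents.length} documents...`);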

2. Use the Sanity CLI

The Sanity CLI's migration tooling has built-in abstractions that automatically handle batching and rate limits, and for a one-off cleanup like this the CLI also offers direct commands:

# For deleting all documents of a type
sanity documents delete '*[_type == "student"]'

# Or for a complete fresh start
sanity dataset delete <dataset-name>
sanity dataset create <dataset-name>

If the goal really is a fresh start on the whole dataset, deleting and recreating it skips document mutations (and their rate limits) entirely.

3. Implement retry logic with exponential backoff

When you do hit rate limits, implement a retry strategy that respects the retry-after header:

// Retry a batch of mutations when the API returns 429 Too Many Requests
async function mutateWithRetry(mutations, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await client.mutate(mutations);
    } catch (error) {
      if (error.statusCode === 429) {
        // Honor the server's Retry-After hint and back off exponentially on repeated 429s
        const retryAfter = parseInt(error.response.headers['retry-after'] || '1', 10);
        const waitSeconds = Math.max(retryAfter, 2 ** attempt);
        console.log(`Rate limited. Waiting ${waitSeconds}s before retry...`);
        await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
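
Putting the two together, here is a sketch of how the chunked loop from approach 1 might call this helper (the chunk size is illustrative, not an official recommendation):

// Delete in chunks, retrying each chunk whenever it hits a 429
const chunkSize = 100;
for (let i = 0; i < mutations.length; i += chunkSize) {
  await mutateWithRetry(mutations.slice(i, i + chunkSize));
}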

By batching mutations properly (100+ deletes per request) instead of making individual mutation calls, you can delete thousands of documents efficiently without hitting rate limits. The key is reducing the number of HTTP requests by grouping multiple operations into single transactions.

Hi Steve. API limits can be found here: https://www.sanity.io/docs/technical-limits#50838b4c19db
Thanks user P … if the API rate limit is exceeded, is there a time range we have to wait before submitting the batch of requests?
We count it over a rolling window, so it depends on how many requests you trigger simultaneously.
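
As a rough illustration of staying inside a rolling window client-side (a sketch only, not official Sanity tooling; the 25-per-second pace is just an example well under the 50/s limit shown in the headers above):

// Start at most `maxPerSecond` requests, then pause a full second so that
// no rolling one-second window ever sees more than that many requests.
async function runPaced(tasks, maxPerSecond = 25) {
  const results = [];
  for (let i = 0; i < tasks.length; i++) {
    if (i > 0 && i % maxPerSecond === 0) {
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
    // Each task is a function returning a promise, e.g. () => client.mutate(chunk)
    results.push(await tasks[i]());
  }
  return results;
}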
Hi user P / user A, thanks for following up. I did after some trial and error. I added some improvements to my data cleanup script and will test again on the next new project.
