Issue with API rate limits causing errors in build script

53 replies
Last updated: Mar 9, 2022
It seems we hit one of the API rate limits per IP when one of our developers was migrating data using the API, and now he seems to be blocked: https://www.sanity.io/docs/technical-limits
How can we enable his IP again? How long does the block last?
AI Update

Based on the technical limits documentation, I can help clarify the situation with the IP rate limit block.

How Long Does the Block Last?

The good news is that IP rate limit blocks are temporary and reset automatically. According to the documentation, the rate limits are enforced per second. This means:

  • Once you exceed the rate limit (e.g., more than 25 mutations/second or 500 total API calls/second), you'll receive HTTP 429 responses
  • The block lasts only until the current one-second period ends
  • After that second passes, the counter resets and requests can resume

So your developer should be able to make requests again within seconds of stopping the migration activity that triggered the limit. There's no permanent IP blacklist - it's just a per-second throttle.

Understanding the Rate Limits

The per-IP rate limits are:

  • Maximum mutation rate: 25 req/s (POST to /data/mutate)
  • Maximum upload rate: 25 req/s (POST to /assets/)
  • Maximum global API call rate: 500 req/s

Best Practices for Data Migration

To avoid hitting these limits during migrations:

  1. Use the CLI import tool: The `sanity dataset import` command is specifically designed for bulk imports and handles rate limiting automatically with built-in retry logic and proper throttling

  2. Implement throttling: If writing a custom migration script, add delays between batches of requests to stay under the 25 req/s limit for mutations

  3. Use transactions: Batch multiple mutations into single transaction requests where possible to reduce the total number of API calls

  4. Add retry logic: Implement exponential backoff when you receive 429 responses - wait a bit longer after each retry
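Points 2 and 4 above can be sketched with plain JavaScript and no external dependencies. The helper names (`backoffDelay`, `withRetry`, `toBatches`) are illustrative, not part of any Sanity SDK, and the retry condition assumes the error object carries a `statusCode` property:

```javascript
// Exponential backoff with a cap: 500ms, 1s, 2s, 4s, ... up to 8s.
function backoffDelay(attempt, baseMs = 500, maxMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry `fn` when it throws an error carrying statusCode 429,
// waiting a little longer after each failed attempt.
async function withRetry(fn, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (err.statusCode !== 429 || attempt === maxAttempts - 1) throw err;
      await sleep(backoffDelay(attempt));
    }
  }
}

// Split mutations into transaction-sized batches to be sent one at a time,
// reducing the total number of API calls.
function toBatches(mutations, batchSize = 100) {
  const batches = [];
  for (let i = 0; i < mutations.length; i += batchSize) {
    batches.push(mutations.slice(i, i + batchSize));
  }
  return batches;
}
```

Each batch would then be submitted through `withRetry`, keeping the batch submission rate under the 25 req/s mutation limit.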

If You Need Higher Limits

If your developer continues to experience issues or if you need to perform large-scale migrations that require higher limits, you should contact Sanity support to discuss your specific needs. Enterprise customers can work with Sanity to customize limits for their use case.

For immediate relief, the simplest solution is to slow down the migration script to stay under 25 requests per second, or better yet, use the official `sanity dataset import` CLI tool which handles all of this automatically.
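For reference, the CLI import looks roughly like this; the file and dataset names here are placeholders, so substitute your own:

```shell
# Import an .ndjson file (or a .tar.gz export) into the "production" dataset,
# letting the CLI handle batching, throttling, and retries:
sanity dataset import my-data.ndjson production

# The --replace flag overwrites existing documents that share the same _id:
sanity dataset import my-data.ndjson production --replace
```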

Hi User. Sorry for not getting back to you earlier. Is your teammate still being blocked?
Hello User E, yep I'm still getting this error (changed the project ID to `foo`):
web:build: Error occurred prerendering page "/de/about". Read more: https://nextjs.org/docs/messages/prerender-error
web:build: Error: getaddrinfo ENOTFOUND foo.apicdn.sanity.io
web:build:     at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
It seems like User is being blacklisted from this slack too 😅
it's strange because it's happening randomly; the build script renders like 300 pages, then I get this error, and after that the script renders 200 more pages or so without any error.
sometimes I get like 5 of the same error for different pages
Hi both! Sorry User, did we miss a different thread that you posted? We try our best to keep on top of new messages, but sometimes it can take a little while for us to get to them.
Do you have an estimation of how many API calls you are sending? Is it just one per page?
I just posted in this thread; User mentioned the Slack blacklist because my user appears blank for some reason lol
well in this case it's one query per page (build script) and we have 469 pages
but I think the main problem is that I ran a migration script a few times, and it probably made around 1k requests each time I ran it
Thank you for providing the further details. The Slack block list is news to me!
Can you confirm your build script is issuing requests to the API CDN? It looks like it is based on your error message, but I just wanted to double check.
I think so, because of the subdomain in the error, but we are not making requests to the API CDN directly; we are using the property `useCdn: true` in our client.
I just set `useCdn` to `false` for a test, and running the build again I got this error:
web:build: Error occurred prerendering page "/fr/404". Read more: https://nextjs.org/docs/messages/prerender-error
web:build: Error: getaddrinfo ENOTFOUND foo.api.sanity.io
web:build:     at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
what is strange to me is: I'm not getting the error for all requests. For example, 200 pages are rendered (200 queries run successfully), then I get the error, and after that more queries run successfully as well
and it seems like I'm the only one with this problem lol
I just checked in with the team. I don't think this is due to rate limiting. Our API CDN doesn't have a rate limit in place, and any requests to our APIs that are rate limited will fail with a `429 - Too Many Requests` status.
What might be happening is that you're firing off too many requests simultaneously for node to handle. Node doesn't cache DNS, so it has to resolve for every request. I think this would explain why your issue is sporadic.

Would you be able to try throttling your build script? `p-throttle` is a great tool for doing this.
Let me try doing that. I'm just wondering why it's just me, lol. Even the build running on Vercel doesn't get this error
I'm using the config for the throttle package:
```js
const throttle = pThrottle({
	limit: 1,
	interval: 1000
});
```
the first time I ran it, the build succeeded, but I triggered the build again with the same throttle config and got the error again after 117 queries
increasing the interval for a test
```js
const throttle = pThrottle({
	limit: 1,
	interval: 5000
});
```
I'm still getting the error
Error: getaddrinfo ENOTFOUND zzz.apicdn.sanity.io
I'm wondering if it could maybe be an issue with the DNS in my region
I'm getting different results every time I refresh the DNS Checker page
Thank you for trying the throttling! Have you tried an alternative DNS resolver?

Cloudflare provides `1.1.1.1`, and Google provides `8.8.8.8`.
It would be interesting to run your build script while one of these alternative resolution services is active on your system.
let me try that
I'm still getting the error 😞
I think this is unlikely to be the issue, but can you try flushing your DNS cache by running this command in the terminal?

`sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder`
Same error after running the command
Could it be something with my network provider maybe?
I'm in the process of moving right now, so in a few days I will test with another provider and let you know
I'm having similar problems. It's a Nuxt site with the @nuxtjs/sanity plugin. `yarn dev` works fine, but `yarn generate` emits spurious errors like `getaddrinfo ENOTFOUND xyz.apicdn.sanity.io` or `read ECONNRESET`.
I'm not really sure when this started, possibly in the last couple of days
same here, it started some days ago
Now in my new place the error has stopped; it seems like it was something related to the network provider
Thanks for letting us know, User! Really glad this is working for you now. I'd love to get to the bottom of this one, but it's quite tricky to debug remotely.
Yeah, totally understand that, User. Thank you for your support
Having the same issue now. Why does the DNS propagation change that often?
now I got the same error again
😞
Really sorry to hear this issue is back 😔. What happens if you run a `traceroute` for the failing domain from your terminal?
❯ traceroute apicdn.sanity.io
traceroute to apicdn.sanity.io (34.102.168.221), 64 hops max, 52 byte packets
 1  192.168.3.1 (192.168.3.1)  3.825 ms  2.256 ms  2.193 ms
 2  172.31.255.255 (172.31.255.255)  4.801 ms  5.117 ms  4.388 ms
 3  201.16.64.12 (201.16.64.12)  5.185 ms  5.530 ms  5.512 ms
 4  * * *
 5  * * *
 6  * * *
 7  ae5-0.core02.spo1.commcorp.net.br (201.16.1.93)  46.940 ms  17.810 ms  15.728 ms
 8  142.250.162.184 (142.250.162.184)  17.183 ms  16.140 ms  16.512 ms
 9  108.170.227.19 (108.170.227.19)  15.305 ms  15.032 ms
    108.170.226.225 (108.170.226.225)  14.828 ms
10  142.251.67.85 (142.251.67.85)  14.919 ms
    209.85.251.5 (209.85.251.5)  15.156 ms  14.931 ms
11  221.168.102.34.bc.googleusercontent.com (34.102.168.221)  14.245 ms  14.670 ms  14.597 ms
Hmm. Thanks for trying that. Would it be convenient to add a retry mechanism to your requests? `p-retry` would probably be the easiest way to do this.
If there is an intermittent connectivity issue between Sanity and yourself, this would help make your build script more fault tolerant.
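For illustration, here is a hand-rolled version of what `p-retry` would do here, with no external dependency. The list of error codes treated as transient is my own choice, based on the errors seen in this thread (`ENOTFOUND`, `ECONNRESET`):

```javascript
// Network error codes worth retrying; anything else is rethrown immediately.
const TRANSIENT = new Set(['ENOTFOUND', 'ECONNRESET', 'ETIMEDOUT']);

// Run `doFetch`, retrying up to `retries` times on transient failures,
// waiting a bit longer before each successive attempt.
async function fetchWithRetry(doFetch, retries = 3, delayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doFetch();
    } catch (err) {
      if (!TRANSIENT.has(err.code) || attempt >= retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs * (attempt + 1)));
    }
  }
}
```

Wrapping each page query in `fetchWithRetry` would turn a sporadic DNS failure into a short pause rather than a failed build.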
Also: User and I are located on two different continents, so this can't be something specific to Sanity and just us. I think it has to do with weird DNS propagation. If you look at the screenshot I shared, many places were red.
I can try that
We found the problem. It's related to the Promise.all we were using in our project, which made 5 requests to Sanity and executed on every generated page. The quick fix was to remove the Promise.all and replace it with sequential requests using the `await request1; await request2; ...` approach. For reference:
https://github.com/vercel/next.js/issues/12494
I'm so glad you got to the bottom of this, and thank you for sharing the solution! That is certainly an interesting one.
I wonder whether allowing some concurrency would be a better solution than running the requests serially. Either way, this is incredibly useful debugging. I'm sorry for the pain you had to go through to find the answer!
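The "some concurrency" idea above can be sketched as a small concurrency-limited mapper, the hand-rolled equivalent of what packages like `p-limit` provide. The function name and the limit of 2 are illustrative choices:

```javascript
// A middle ground between Promise.all (unbounded concurrency) and fully
// serial awaits: process `items` with at most `limit` workers in flight.
async function mapWithConcurrency(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0;
  // Each runner pulls the next unclaimed index until the list is exhausted.
  async function run() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i], i);
    }
  }
  const runners = Array.from({ length: Math.min(limit, items.length) }, run);
  await Promise.all(runners);
  return results;
}
```

With `limit: 2` the build would keep two page queries in flight instead of five at once or one at a time.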
No problem at all, User. We are working on fixing this issue of firing these fetches so many times; once that's sorted, I think we can maybe use Promise.all again
