Adding Rate Limit to nuxt.config.js - Example
I totally understand your frustration! This is a common pain point for teams using static site generators like Nuxt with build services like Netlify. Let me break down what's happening and provide solutions that actually work.
What's Happening
According to the technical limits documentation, Sanity enforces rate limits per client IP address per second:
- Maximum global API call rate: 500 requests/second per IP
- Maximum mutation rate: 25 requests/second per IP
- Maximum upload rate: 25 requests/second per IP
When Nuxt generates your site statically, all requests come from Netlify's build server (single IP), and if you exceed 500 req/s, you get the 429 ClientError: API rate limit exceeded.
To answer your specific question about when the limit resets: it's per second. If you hit 500 requests in one second, you'll get rate limited for the remainder of that second. Once the next second begins, you get another 500 requests. So "until the period ends" means until the current one-second window completes.
Your team plan's monthly API allowance is separate from this per-second rate limit - both exist independently. You have plenty of monthly quota, but you need to spread requests out during builds to avoid bursting past 500 req/s.
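Because the window is only one second long, a 429 during a build is usually recoverable by simply waiting and retrying. Here is a minimal sketch of a generic retry wrapper with exponential backoff; the `err.statusCode` check matches the error shape `@sanity/client` typically throws, but verify it against your client version:

```javascript
// Hedged sketch: retry a request when it fails with HTTP 429.
// Backoff spreads retries across later one-second rate-limit windows.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function withRetry(fn, {retries = 3, baseDelayMs = 1000} = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      const isRateLimited = err && err.statusCode === 429
      // Only retry rate-limit errors, and only up to `retries` times
      if (!isRateLimited || attempt >= retries) throw err
      // Exponential backoff: 1s, 2s, 4s, ...
      await sleep(baseDelayMs * 2 ** attempt)
    }
  }
}

// Usage (illustrative):
// const page = await withRetry(() => client.fetch(query, params))
```

This is a safety net, not a fix — the solutions below reduce how often you hit the limit in the first place.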
Solutions
1. Use the API CDN (Most Important!)
The API CDN has unlimited rate for cached responses. Make sure your Sanity client is configured to use it:
```javascript
import {createClient} from '@sanity/client'

const client = createClient({
  // ... your config
  useCdn: true, // Enable CDN - unlimited cached requests!
  apiVersion: '2025-05-16'
})
```

On subsequent builds, most content will be served from the CDN without hitting rate limits. This is the biggest win for static generation.
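One common refinement is to keep the CDN on for production builds and only fall back to the live API when you genuinely need uncached content, such as draft previews. A sketch — the `NUXT_PREVIEW` variable and the project ID are placeholders, not real config; use whatever flag your setup already has:

```javascript
import {createClient} from '@sanity/client'

// Assumption: NUXT_PREVIEW is set only in preview deployments
const isPreview = process.env.NUXT_PREVIEW === 'true'

export const client = createClient({
  projectId: 'your-project-id', // placeholder - replace with your project ID
  dataset: 'production',
  apiVersion: '2025-05-16',
  useCdn: !isPreview, // CDN for builds; live API only when drafts are needed
})
```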
2. Control Parallelism in Your Data Fetching
Limit how many concurrent requests your build makes. You can use a concurrency helper:
```javascript
// Helper to run task *functions* in batches of `limit`
async function pLimitBatch(tasks, limit) {
  const results = []
  for (let i = 0; i < tasks.length; i += limit) {
    // Start only this batch; the next batch waits until it settles
    const batch = tasks.slice(i, i + limit).map((fn) => fn())
    results.push(...(await Promise.all(batch)))
  }
  return results
}

// In your prerender logic
const slugs = ['page-1', 'page-2', /* ... */]
const fetchTasks = slugs.map(
  (slug) => () => client.fetch(`*[slug.current == $slug][0]`, {slug})
)

// Fetch 10 at a time instead of all at once
const pages = await pLimitBatch(fetchTasks, 10)
```

Note that the tasks are passed as functions and only invoked inside each batch — calling them up front would start every request at once and defeat the limiting. You can also use libraries like p-limit for more sophisticated concurrency control. Start with a low limit (10-20) and increase it gradually until you find the sweet spot.
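The batch approach has one drawback: each batch waits for its slowest request before the next one starts. A p-limit-style worker pool keeps exactly `limit` requests in flight at all times, starting a new task the moment any running one finishes. A self-contained sketch:

```javascript
// Hedged sketch of a p-limit-style pool: `tasks` are functions that
// return promises; at most `limit` of them run concurrently.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length)
  let next = 0
  async function worker() {
    // Each worker pulls the next unclaimed task until the queue is empty
    while (next < tasks.length) {
      const i = next++
      results[i] = await tasks[i]()
    }
  }
  // Start `limit` workers (or fewer, if there are fewer tasks)
  await Promise.all(
    Array.from({length: Math.min(limit, tasks.length)}, worker)
  )
  return results // results stay in the same order as `tasks`
}

// Usage (illustrative):
// const pages = await runLimited(fetchTasks, 10)
```

Because JavaScript is single-threaded, `next++` between awaits is safe without locks; each index is claimed exactly once.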
3. Batch Your GROQ Queries
Instead of fetching each page individually, batch queries to reduce total request count:
```javascript
// Instead of one query per page (100 pages = 100 requests)
const pages = await Promise.all(
  slugs.map((slug) => client.fetch(`*[slug.current == $slug][0]`, {slug}))
)

// Batch into a single query (100 pages = 1 request)
const allPages = await client.fetch(
  `*[_type == "page" && slug.current in $slugs]`,
  {slugs}
)
```

Note that the batched query returns documents in arbitrary order, so key them by slug afterwards if you need a per-page lookup.

4. Use Incremental Static Regeneration
Instead of rebuilding everything on each deploy, use Nuxt's ISR capabilities:
```ts
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    '/blog/**': { isr: 3600 }, // Regenerate blog posts at most once an hour
    '/': { prerender: true }
  }
})
```

This way stale pages are regenerated on demand instead of the whole site being rebuilt on every deploy, dramatically reducing API requests per build.
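Relatedly, if you let Nitro crawl and prerender routes for you, its prerender options can throttle the build at the config level. A sketch — check that your Nitro version supports the `concurrency` and `interval` options before relying on them:

```ts
// nuxt.config.ts (assumption: Nitro exposes these prerender options)
export default defineNuxtConfig({
  nitro: {
    prerender: {
      crawlLinks: true,
      concurrency: 10, // cap simultaneous page renders (and API bursts)
      interval: 100,   // ms to wait between starting renders
    },
  },
})
```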
5. Add Delays Between Sequential Requests
If you're making many sequential calls, add a small delay:
```javascript
async function fetchWithDelay(query, params, delayMs = 50) {
  await new Promise((resolve) => setTimeout(resolve, delayMs))
  return client.fetch(query, params)
}
```

About IP Whitelisting
Unfortunately, Sanity doesn't currently offer IP whitelisting to bypass rate limits. The per-second rate limits protect the platform for all users, even on paid plans. However, the 500 req/s limit is quite generous - the issue is the burst nature of static builds hitting it all at once.
Immediate Action Plan
- Verify `useCdn: true` in your Sanity client config - this is the biggest fix
- Implement request batching where possible to reduce total requests
- Add concurrency control to your data fetching code (limit to 10-20 parallel requests)
- Consider ISR instead of full rebuilds for frequently changing content
The combination of CDN caching (unlimited rate) and spreading out uncached requests should resolve your build failures. Once you have CDN caching working properly, subsequent builds will be much faster and won't hit rate limits nearly as often.
Let me know if you need help implementing any of these solutions!