Tips for reducing API requests and bandwidth usage on a high-traffic Next.js blog

8 replies
Last updated: Mar 23, 2022
Hi, I have a Next.js blog that receives a lot of traffic. Currently, my API requests and bandwidth usage are 30x more than what's included in my plan, with very expensive overage costs 😞 Is there anything I can do to reduce this usage? Any guides I should refer to that I'm not finding? I won't be able to afford this long term. Thank you
AI Update

I understand your concern - those overage costs can definitely add up quickly! The good news is there are several proven strategies to dramatically reduce your Sanity API requests and bandwidth usage. Let me break down the most effective approaches:

1. Implement Sanity's Live Content API

The most modern solution is Sanity's Live Content API, which is designed specifically to reduce API requests while keeping content fresh. It handles fetching, rendering, caching, and cache invalidation automatically. This is now the recommended approach for Next.js applications and can significantly reduce your API call volume, since it manages caching intelligently for you without complex setup.
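As a rough sketch, the setup with the next-sanity package's defineLive helper looks like the following. The file path and the imported client are assumptions; check your installed next-sanity version's docs for the exact signature:

```javascript
// sanity/live.js - sketch of a Live Content API setup (assumes next-sanity v9+)
import { defineLive } from "next-sanity";
import { client } from "./client"; // your configured Sanity client

// defineLive returns a cache-aware fetch helper plus a component that
// keeps rendered content in sync, so you stop polling on every request
export const { sanityFetch, SanityLive } = defineLive({ client });

// Then in a server component:
//   const { data: posts } = await sanityFetch({ query: POSTS_QUERY });
// and render <SanityLive /> once in your root layout.
```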

2. Optimize Your Next.js Caching Strategy

Since you're on Next.js with high traffic, proper caching is crucial:

Use Incremental Static Regeneration (ISR):

  • Set appropriate revalidate times based on how often your content actually changes
  • For a blog, most posts probably don't change frequently after publication - consider revalidate: 3600 (1 hour) or even longer for older posts
  • Only use short revalidation times (60 seconds or less) for truly dynamic content

Example:

const posts = await sanityFetch({
  query: POSTS_QUERY,
  revalidate: 3600, // Revalidate every hour instead of on every request
})

For App Router:

  • Use force-static for pages that don't change often
  • Configure proper cache segments with appropriate revalidation periods
  • Consider using on-demand revalidation with Sanity Functions (the modern, recommended approach) or webhooks to only revalidate when content actually changes, rather than checking constantly
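For the on-demand approach, a webhook handler only needs to translate the incoming document into the cache entries to invalidate. A minimal sketch, where the helper name and the tag scheme are illustrative (the real Next.js calls are `revalidateTag`/`revalidatePath` from `next/cache`, and you should also verify the webhook signature):

```javascript
// Hypothetical helper: map a Sanity webhook payload to the Next.js cache
// tags that should be revalidated, so only the affected pages rebuild.
function tagsToRevalidate(payload) {
  const tags = [payload._type]; // e.g. invalidate every "post" listing
  if (payload.slug?.current) {
    // also invalidate the single document's own page
    tags.push(`${payload._type}:${payload.slug.current}`);
  }
  return tags;
}

// In an App Router route handler (app/api/revalidate/route.js):
//
//   import { revalidateTag } from "next/cache";
//   export async function POST(req) {
//     const payload = await req.json(); // verify signature first!
//     for (const tag of tagsToRevalidate(payload)) revalidateTag(tag);
//     return Response.json({ revalidated: true });
//   }
```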

3. Optimize Your GROQ Queries

This can have a huge impact on bandwidth:

Use projections to only fetch the fields you need:

// Bad - fetches everything
*[_type == "post"]

// Good - only fetches what you need
*[_type == "post"] {
  title,
  slug,
  publishedAt,
  "imageUrl": image.asset->url
}

Avoid deep reference resolution when not necessary - each -> traversal adds to your query cost and response size. Read more about query optimization techniques in Sanity's documentation.

4. Optimize Image Delivery

Images often account for the majority of bandwidth:

  • Make sure you're using next/image which automatically optimizes images
  • Use Sanity's Image Pipeline with appropriate transformations (resize images to the actual display size, not serving full-resolution images)
  • Add auto=format to serve WebP/AVIF to supported browsers
  • The images are cached on Sanity's global CDN, so proper sizing prevents unnecessary bandwidth usage

Example:

import imageUrlBuilder from '@sanity/image-url'

const builder = imageUrlBuilder(client)

// Request appropriately sized images
const imageUrl = builder
  .image(post.image)
  .width(800)
  .auto('format')
  .url()

5. Disable Unnecessary Prefetching

Next.js Link components prefetch by default, which generates API requests even for links users never click:

  • Consider disabling global prefetching in App Router for high-traffic sites
  • Use prefetch={false} on individual links that don't need it
  • This alone can reduce API calls by 50% or more on content-heavy sites

6. Check for Runaway Processes

Review your Sanity project's usage dashboard to identify:

  • Which queries are running most frequently
  • Whether you have preview/draft mode running more than necessary (draft queries aren't cached by the CDN)
  • If any automated processes or bots are hitting your API unnecessarily
  • Consider disabling Sanity CDN (useCdn: false) only for preview/draft contexts, not production

7. Implement Smart Pagination

If you're loading large lists of posts, implement efficient pagination strategies rather than fetching all posts at once.
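One simple pattern is a GROQ slice per page, so each request transfers only one page of results. A sketch with a hypothetical helper (field names match the projection example above):

```javascript
// Hypothetical helper that builds a paginated GROQ query using a slice.
// GROQ slices are [start...end] (end-exclusive); order first so pages are stable.
function paginatedPostsQuery(page, pageSize = 10) {
  const start = page * pageSize;
  const end = start + pageSize;
  return `*[_type == "post"] | order(publishedAt desc) [${start}...${end}] {
  title,
  slug,
  publishedAt
}`;
}

// page 0 -> [0...10], page 1 -> [10...20], and so on
```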

Quick Wins to Start With

  1. Add ISR with reasonable revalidation times (this alone can reduce requests by 80-90%)
  2. Add projections to your GROQ queries to reduce response sizes
  3. Optimize image URLs with proper sizing and format parameters
  4. Disable unnecessary prefetching

The combination of proper caching strategy + query optimization should reduce your usage by 80-95% in most cases. Start with implementing ISR or the Live Content API with appropriate revalidation times, then optimize your queries and images. Monitor your usage dashboard to see which changes have the biggest impact for your specific use case.

Are you using getStaticProps for fetching blog data or getServerSideProps?

GetStaticProps
Do you host any videos or high-resolution images on your blog? Also, DM me your project ID!
My requests and bandwidth were also quite a bit higher than expected; in my case it came down to non-human bots visiting the site, which triggered the page code just like a regular visit, meaning my queries ran.
I was in a position where I could safely keep them out without adversely affecting the site. If you have, or can create, some sort of logging mechanism, pay attention to where the visits are coming from and, if possible, the user agents. There may be an opportunity to trim the fat (just make sure you still allow responsible, moderate, and probably-necessary bots like Google to visit).

We were headed for 25/mo between bandwidth and requests and now it's probably going to be 2 to 4.
user S
Thank you for the tip. Will look into bots -- had a logger running on my Vercel instance for the past week. Doing a deep dive today. How did you engineer the bot wall?
user B
We are actually using the PHP version of the Sanity client on an old-timey Apache site, one with a weird config where we couldn't use .htaccess or robots.txt. The client was imported with the require function into the few relevant pages that needed to pull in Sanity content, so I used PHP to look at the user-agent string and wrapped the client's "fetch" command in a condition that only ran the code inside if the string didn't match one of a few different substrings (like "petal" for PetalBot, for example). The bots still visited, but they stopped triggering my Sanity code.
In the case of the particular bots we targeted, like an uptime bot, no issue arises from that content missing from the page, so it was okay to proceed with that approach.
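In a Next.js setup like the OP's, the same user-agent gate could be sketched in JavaScript. The substring list here is illustrative (only "petal" comes from the thread); keep well-behaved crawlers like Googlebot off it:

```javascript
// Illustrative blocklist of user-agent substrings for noisy bots.
// "petal" (PetalBot) is from the thread; the others are placeholders.
const BLOCKED_UA_SUBSTRINGS = ["petal", "ahrefs", "semrush"];

// Returns true when the visitor's user agent matches a blocked substring,
// so the page can skip its Sanity fetch for that request.
function isBlockedBot(userAgent = "") {
  const ua = userAgent.toLowerCase();
  return BLOCKED_UA_SUBSTRINGS.some((s) => ua.includes(s));
}

// e.g. in a page or middleware:
//   if (!isBlockedBot(req.headers["user-agent"])) {
//     posts = await client.fetch(POSTS_QUERY);
//   }
```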

If I ever do something Next-y or Jamstack-y, I might try the FingerprintJS middleware
Wow cool. Super helpful, thank you!

