Frontend vs Backend Proxying for Sanity API Requests

2 replies
Last updated: Nov 9, 2021
Hello, my team is evaluating Sanity as a possible headless CMS API provider. We have several FE applications (one where an administrator would configure content, one where the end user would see the content) and a backend API service. My question is: is it recommended that FE requests for content be proxied through our backend service to Sanity, or should we query the Sanity API directly from the FE?
AI Update

Based on Sanity's architecture and best practices, I recommend querying the Sanity API directly from your frontend for most use cases, with some important security considerations.

Sanity is designed for direct frontend queries and provides several features that make this the optimal approach:

1. API CDN for Performance: Sanity's API CDN caches query results at globally distributed edge locations, so cached content can be served to your frontend quickly and without straining rate limits. This means your frontend can query Sanity directly without latency issues.
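The difference between cached and uncached reads comes down to which host you query. A sketch of how the two endpoints are addressed, assuming the standard Sanity HTTP query URL layout (the project ID and dataset here are placeholders):

```typescript
// Build a GROQ query URL against Sanity's HTTP API.
// "myproject" / "production" are placeholders for your own project values.
function sanityQueryUrl(
  projectId: string,
  dataset: string,
  groq: string,
  useCdn: boolean
): string {
  // The apicdn host serves edge-cached results; the api host always hits the origin.
  const host = useCdn
    ? `${projectId}.apicdn.sanity.io`
    : `${projectId}.api.sanity.io`;
  const apiVersion = "v2021-10-21"; // Sanity uses date-based API versioning
  const query = encodeURIComponent(groq);
  return `https://${host}/${apiVersion}/data/query/${dataset}?query=${query}`;
}

const url = sanityQueryUrl("myproject", "production", '*[_type == "post"]{title}', true);
console.log(url);
```

Client libraries such as @sanity/client switch between these hosts for you via a `useCdn` flag, so you rarely build these URLs by hand.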

2. Public Datasets: For content that end users should see, you can configure your dataset as public, which allows anyone to query it without authentication. This is perfect for typical CMS use cases where you're serving published content to visitors.

3. CORS Support: Sanity has built-in CORS configuration that you manage through the Sanity dashboard (Settings > API > CORS Origins). You can whitelist your frontend domains to allow direct browser requests.

Security Best Practices

The key is understanding when and how to use authentication tokens:

For Read-Only Public Content:

  • Configure your dataset as public (no token needed)
  • Query directly from frontend
  • Use the CDN for optimal performance
  • No authentication required for published content
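For the read-only public case, the client configuration stays deliberately minimal. Shown here as a plain object so the shape is clear; with @sanity/client you would pass the same fields to `createClient()`. The project ID and dataset are placeholders:

```typescript
// Typical configuration for direct, read-only frontend queries against a
// public dataset. Note what's absent: no token, so nothing sensitive ships
// to the browser.
const clientConfig = {
  projectId: "myproject",  // placeholder for your project ID
  dataset: "production",
  apiVersion: "2021-10-21",
  useCdn: true,            // route reads through the API CDN for performance
};
```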

For Private/Draft Content (like your admin app):

  • Drafts and private datasets require authenticated requests; they are never served anonymously
  • Keep read tokens out of public client bundles: fetch through your backend, or rely on each user's own authenticated Sanity session (as Sanity Studio does)
  • Scope any token to the minimum permissions the app actually needs

For Write Operations:

  • Always proxy through your backend for any create/update/delete operations
  • Use Sanity Functions (serverless compute within Sanity) for handling writes without managing your own backend infrastructure
  • Validate and transform data server-side before submitting to Sanity
  • Never expose write tokens in client-side code
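The "validate server-side" point is where a write proxy earns its keep. A sketch of the validation and payload-building step such a proxy might run before POSTing to Sanity's mutation endpoint with a write token; the document type and field names here are illustrative, not from the original thread:

```typescript
// Server-side validation + mutation payload, as a backend write proxy might
// build it before forwarding to Sanity's /data/mutate endpoint.
interface CommentInput {
  title: string;
  body: string;
}

function buildCreateMutation(input: CommentInput) {
  // Validate and normalize before anything reaches Sanity.
  const title = input.title.trim();
  const body = input.body.trim();
  if (title.length === 0 || title.length > 200) {
    throw new Error("title must be 1-200 characters");
  }
  if (body.length === 0) {
    throw new Error("body is required");
  }
  // Sanity mutations are a JSON array under a "mutations" key.
  return {
    mutations: [{ create: { _type: "comment", title, body } }],
  };
}
```

The write token lives only in the backend environment; the browser only ever talks to your proxy endpoint.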

Your Architecture

For your specific setup:

  1. End user app (read-only): Query Sanity API directly using a public dataset or read-only token for the best performance
  2. Admin app (read/write): Query directly for reads, but route writes through your backend API or Sanity Functions
  3. Backend service: Use a token with appropriate write permissions, handle validation and business logic

When to Use a Proxy

You should proxy requests through your backend when:

  • You need to perform write operations
  • You need to add custom business logic or validation
  • You're accessing private data and want an extra security layer
  • You need to combine Sanity data with other backend services
  • You need to filter or transform data before sending to clients
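When a proxy is justified, a thin caching layer in front of Sanity reads can cut API usage substantially. A minimal sketch of an in-memory TTL cache wrapped around a fetch function; `Fetcher` here is a stand-in for whatever actually calls the Sanity API in your backend:

```typescript
// A minimal TTL cache a backend proxy could put in front of Sanity reads.
// The fetcher argument stands in for the real Sanity HTTP call.
type Fetcher = (query: string) => Promise<string>;

function cachedFetcher(fetcher: Fetcher, ttlMs: number): Fetcher {
  const cache = new Map<string, { value: string; expires: number }>();
  return async (query: string) => {
    const hit = cache.get(query);
    if (hit && hit.expires > Date.now()) {
      return hit.value; // fresh enough: serve from cache, skip the API call
    }
    const value = await fetcher(query);
    cache.set(query, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

A production version would also bound the cache size and invalidate entries on content changes (for example, via Sanity webhooks), but the shape is the same.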

Key Takeaway

The beauty of Sanity's architecture is that you can mix approaches: direct queries for public content (leveraging the CDN for performance) and proxied requests for sensitive operations. This gives you both performance and security. For most read operations, going direct to Sanity is the recommended pattern and will give you the best performance through their global CDN.

I guess it depends on whether all your data is public. We opted for no proxy and treat all data in Sanity as public knowledge.
Another consideration is whether you expect heavy traffic. It's not terribly expensive to go over the limits on a plan, but it may still be worth adding a proxy to cache the Sanity contents. One of my projects already had an API requirement because the data is only semi-public (paywalled), so we've also implemented some caching that cut our Sanity API usage significantly.
