Testing Stateful Studio Logic
In this lesson, you'll test validation functions that need context—rules that query your Content Lake to check conditions across multiple documents.
The validation logic you tested in the previous lesson worked in isolation: it took a user as input and returned a boolean to determine access control. But some business rules require checking other documents in your dataset.
Consider validation that depends on dataset state:
- "Only one event can be featured at a time" (needs to check if others are featured)
- "Artist cannot have overlapping performances" (needs to check other event dates)
These validations need to query Content Lake. To test them, you'll mock the Sanity client and validation context, creating reusable fixtures that keep your tests clean and focused on business logic.
Event companies need to promote one event above others—the "featured" event appears on the homepage, gets social media promotion, and drives ticket sales. Only one event can be featured at a time.
This business rule needs enforcement at the data layer. If two events are featured simultaneously, the homepage breaks and marketing campaigns become confused.
````ts
import {
  DEFAULT_STUDIO_CLIENT_OPTIONS,
  getPublishedId,
  type BooleanRule,
  type ValidationBuilder,
  type ValidationContext,
} from 'sanity'

/**
 * Checks if setting this event as featured would result in a single featured event
 * Business logic function that queries the dataset for other featured events
 */
export async function isSingleFeaturedEvent(
  value: boolean | undefined,
  context: ValidationContext,
): Promise<boolean> {
  // If not setting to featured, no need to check
  if (!value) return true

  const {getClient, document} = context

  if (!document) {
    throw new Error('Document context required for validation')
  }

  const client = getClient(DEFAULT_STUDIO_CLIENT_OPTIONS)
  const documentId = getPublishedId(document._id)

  // Query for other featured events (excluding this document's versions)
  const existingFeatured = await client.fetch<boolean>(
    `defined(*[_type == "event" && featured == true && !sanity::versionOf($documentId)][0]._id)`,
    {documentId},
    {tag: 'validation.single-featured-event', perspective: 'raw'},
  )

  // Return true if no other featured event exists
  return !existingFeatured
}

/**
 * Validation builder for the featured field
 * Ensures only one event can be featured at a time
 *
 * @example
 * ```ts
 * defineField({
 *   name: 'featured',
 *   type: 'boolean',
 *   validation: validateSingleFeaturedEvent
 * })
 * ```
 */
export const validateSingleFeaturedEvent: ValidationBuilder<BooleanRule, boolean> = (rule) =>
  rule.custom(async (value, context) => {
    if (await isSingleFeaturedEvent(value, context)) {
      return true
    }

    return 'Only one event can be featured at a time'
  })
````
There is a clean separation between the testable business logic function (`isSingleFeaturedEvent`), which returns a boolean indicating validity, and the validation builder (`validateSingleFeaturedEvent`) that wraps it with the error message.
When testing functions with external dependencies, you need a controlled environment where you can verify behavior without relying on external systems. Mocking creates this test "harness" by replacing real dependencies with controlled test doubles that you configure precisely for each test scenario.
A mock is a fake implementation that mimics the behavior of a real object. You control what the mock returns, letting you simulate different scenarios without needing the real dependency. Mocks also track how they're called—which methods were invoked, with what arguments, and how many times—letting you verify your code interacts with dependencies correctly.
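To make this concrete, a mock can be sketched by hand in a few lines. This is a conceptual illustration of what Vitest's `vi.fn()` provides (a canned return value plus call tracking), not the library's actual implementation, and the names here are illustrative:

```typescript
// Minimal hand-rolled mock: returns a canned value and records every call.
// A conceptual sketch of what vi.fn() gives you, not the real implementation.
function createMockFetch<T>(returnValue: T) {
  const calls: unknown[][] = []
  const fetch = async (...args: unknown[]): Promise<T> => {
    calls.push(args) // track how the mock was called
    return returnValue // simulate the scenario you configured
  }
  return {fetch, calls}
}

async function demo() {
  const mock = createMockFetch(false) // simulate "no featured event exists"
  const result = await mock.fetch(
    '*[_type == "event" && featured == true]',
    {documentId: 'event-1'},
  )

  console.log(result) // false (the canned value)
  console.log(mock.calls.length) // 1 (one recorded call)
  console.log(mock.calls[0][1]) // the captured arguments: {documentId: 'event-1'}
}

demo()
```

Vitest's real mocks add conveniences on top of this idea, such as `mockResolvedValue()` for configuring return values and matchers like `toHaveBeenCalledWith()` for inspecting the recorded calls.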
For validation functions that query Sanity's Content Lake, you'll mock the Sanity client's fetch() method. Instead of running actual database queries, the mock returns predefined values you specify. This lets you test scenarios like "no featured events exist" or "another event is already featured" without populating a real dataset. The tests run in milliseconds instead of seconds, and always produce the same results regardless of what data exists in your actual Content Lake.
Testing our various stateful functions requires setup—mock clients, mock contexts, test data. Rather than recreate this setup in every test, you'll use fixtures: reusable building blocks that encapsulate common test setup patterns.
A fixture is a function that creates consistent test data or dependencies. Instead of writing the same mock setup repeatedly, you call a fixture function that handles the details. This keeps tests focused on what's unique (the scenario being tested) rather than boilerplate (how to create a mock client).
First, let's create a client fixture that will be reused across all validation tests, ensuring consistency and reducing repetitive code:
````ts
import {test as base, vi, type Mock} from 'vitest'
import type {SanityClient} from 'sanity'

type MockSanityClient = SanityClient & {
  fetch: Mock
}

/**
 * Helper function to create a mock Sanity client
 * Use this when you need a client outside of the test fixture
 *
 * @example
 * ```tsx
 * const mockClient = createMockClient()
 * mockClient.fetch.mockResolvedValue({...})
 * vi.mocked(useClient).mockReturnValue(mockClient)
 * ```
 */
export function createMockClient(): MockSanityClient {
  return {
    fetch: vi.fn(),
  } as unknown as MockSanityClient
}

/**
 * Mock Sanity client fixture
 *
 * Provides a mocked Sanity client for testing components that use useClient().
 * The client has a mocked fetch() method that can be configured per-test.
 *
 * @example
 * ```tsx
 * import {test, expect} from '@/__tests__/fixtures/client'
 *
 * test('fetches data', async ({mockClient}) => {
 *   mockClient.fetch.mockResolvedValue({_id: '123', title: 'Test'})
 *
 *   // Your test code here
 * })
 * ```
 */
export const test = base.extend<{
  mockClient: MockSanityClient
}>({
  // eslint-disable-next-line no-empty-pattern
  async mockClient({}, use) {
    // eslint-disable-next-line react-hooks/rules-of-hooks
    await use(createMockClient())
  },
})
````
The `mockClient` fixture will be reused across all validation tests in this course and beyond; investing in good fixtures pays off quickly. This fixture extends Vitest's base `test` function with a `mockClient` property. Each test automatically gets a fresh mock client, preventing tests from interfering with each other. The fixture pattern keeps test setup minimal while ensuring consistency.
Now let's test our stateful validation function using our mockClient fixture as well as some locally defined ones:
```ts
import {describe, expect} from 'vitest'
import {getDraftId, getPublishedId, type ValidationContext, type ID} from 'sanity'

import {isSingleFeaturedEvent} from '../helpers'
import {test as it} from './__tests__/fixtures/client'

describe('isSingleFeaturedEvent', () => {
  // Local helper - creates mock event document
  const createMockEventDocument = (id: ID) => ({
    _id: id,
    _type: 'event',
    _createdAt: '2025-01-01T00:00:00Z',
    _updatedAt: '2025-01-01T00:00:00Z',
    _rev: 'mock-rev',
  })

  // Local fixture - creates validation context for featured event tests
  const createValidationContext = ({documentId, client}: {documentId: string; client: any}) =>
    ({
      getClient: () => client,
      document: createMockEventDocument(documentId),
      path: ['featured'],
    }) as unknown as ValidationContext

  it('returns `true` when no other featured event exists', async ({mockClient}) => {
    mockClient.fetch.mockResolvedValue(false) // No existing featured event

    const context = createValidationContext({documentId: 'event-1', client: mockClient})

    expect(await isSingleFeaturedEvent(true, context)).toBe(true)
  })

  it('returns `false` when another event is already featured', async ({mockClient}) => {
    mockClient.fetch.mockResolvedValue(true) // Another event is featured

    const context = createValidationContext({documentId: 'event-2', client: mockClient})

    expect(await isSingleFeaturedEvent(true, context)).toBe(false)
  })

  it('returns `true` when unsetting featured (no query needed)', async ({mockClient}) => {
    const context = createValidationContext({documentId: 'event-3', client: mockClient})

    expect(await isSingleFeaturedEvent(false, context)).toBe(true)
    // Should not query when value is false
    expect(mockClient.fetch).not.toHaveBeenCalled()
  })

  it('queries with correct parameters and excludes document versions', async ({mockClient}) => {
    mockClient.fetch.mockResolvedValue(false)

    const documentId = getDraftId('event-4')
    const context = createValidationContext({documentId, client: mockClient})

    await isSingleFeaturedEvent(true, context)

    expect(mockClient.fetch).toHaveBeenCalledWith(
      expect.any(String),
      expect.objectContaining({documentId: getPublishedId(documentId)}), // Published ID, not draft
      expect.objectContaining({tag: 'validation.single-featured-event', perspective: 'raw'}),
    )
  })
})
```

Because these validators are async, remember to `await` when calling them. The local helper functions (`createMockEventDocument`, `createValidationContext`) keep test setup close to the tests that use them. While `createMockClient()` is imported from fixtures (reusable across all tests), the validation context helper is specific to featured event validation—it knows about the event type and featured path.
This pattern balances reusability with specificity:
- Global fixtures - Broadly useful (mock clients)
- Local helpers - Test-suite specific (event documents, featured field context)
These four tests verify the business logic returns correct booleans:
- No existing featured event → Returns `true` (can set featured)
- Existing featured event → Returns `false` (cannot set featured)
- Unsetting featured → Returns `true` without querying (performance)
- Correct query → Verifies GROQ uses published IDs and tags
By testing the business logic function, we verify the core decision-making. The validation builder just wraps this with an error message—that's simple enough to trust without testing.
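To see why the builder is trustworthy without its own tests, here is a minimal sketch of that wrapping pattern. The helper name `wrapWithMessage` is hypothetical (not a Sanity API); it just maps a boolean check onto `true` or an error string, which is all the builder does:

```typescript
// Hypothetical sketch: a custom validator returns `true` for valid input,
// or the error message string for invalid input. Names are illustrative.
type CustomValidator = (value: boolean | undefined) => Promise<true | string>

function wrapWithMessage(
  check: (value: boolean | undefined) => Promise<boolean>,
  message: string,
): CustomValidator {
  return async (value) => ((await check(value)) ? true : message)
}

// Stand-in check for the sketch: "valid only when not featured"
const validator = wrapWithMessage(
  async (value) => !value,
  'Only one event can be featured at a time',
)

validator(true).then(console.log) // Only one event can be featured at a time
validator(false).then(console.log) // true
```

All of the interesting behavior lives in the check function, which is exactly what the four tests above cover.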
Validation functions that query your dataset might have more moving parts than pure functions—async operations, client queries, document ID handling—but they're equally critical to test. The complexity makes them more fragile and the business impact makes them more important:
Without this test:
- Refactor the query → accidentally allow multiple featured events
- Change the document ID logic → validation blocks the wrong documents
- Remove the early return → unnecessary queries slow down the editor
With this test:
- Query changes break tests immediately
- Document ID handling is verified
- Performance optimizations are protected
This is the kind of business logic that justifies test investment. A broken featured event selector means confused marketing, broken homepage, and lost ticket sales.
You've learned to test validation functions that query your Content Lake to enforce business rules. By creating a reusable mock client fixture and test-specific local helpers for validation contexts, you've built a testing harness that keeps tests focused on business logic rather than setup boilerplate. You now know how to test async validation with controlled mock return values, verify that queries use correct parameters, and protect performance optimizations with assertions that functions don't query unnecessarily. These patterns work for any validation rule that accesses document state or queries your Content Lake to check conditions across multiple documents.
In the next lesson, you'll test custom input components that render UI, use Sanity hooks, and handle user interactions. You'll learn to set up a browser-like test environment and simulate real user behavior.