# Continuous Integration and Test Strategy

https://www.sanity.io/learn/course/testing-sanity-studio/continuous-integration-and-test-strategy.md

Move tests from local development into CI pipelines that verify changes before they reach production. Configure automated test runs on pull requests, report test results directly in GitHub, and develop a strategic framework for deciding what to test. Learn to prioritize test coverage based on business impact, complexity, and change frequency, balancing protection with development velocity.

## From local development to production

You've been running tests locally with `pnpm test` in watch mode. This provides instant feedback while developing. But tests become more valuable when integrated into your workflow at key points:

* **Pull requests** - Automated checks prevent broken code from being merged
* **Before deployment** - Tests catch issues before they reach content editors
* **Scheduled runs** - Detect drift from dependencies or external changes

Running tests as part of your continuous integration (CI) pipeline ensures every code change is validated automatically, regardless of who wrote it or what they tested locally.

1. [Architecture & DevOps](https://www.sanity.io/learn/course/architecture-and-devops) covers setting up schema validation, linting, and preview deployments for your pull requests.

## Reporting test output in pull requests

When tests fail in CI, Vitest reports the failures in your pull request. In GitHub, for example, a failing pull request will show:

* ❌ Red X next to the commit
* Detailed logs showing which tests failed
* Line numbers and error messages
* Option to re-run failed tests

This prevents merging broken code and makes code review more efficient. Reviewers can focus on logic and design, trusting that tests verify correctness.

## What is important to test?

Not all code is equally important to test. Prioritize based on:

### High priority - Always test

* **Validation functions** - Protect data integrity
* **Data transformation** - Shape content for display
* **Critical business logic** - Features that could break revenue or user experience

### Medium priority - Test when complex

* **Custom input components** - When they have non-trivial logic
* **Schema structure helpers** - When they involve logic

### Low priority - Usually skip

* **Simple schema definitions** - No logic to test
* **Thin wrappers** - Just pass through to libraries
* **UI-only components** - Styling with no behavior

1. Testing strategy is about making intentional trade-offs. Perfect coverage isn't the goal; protecting critical business logic while maintaining development velocity is.

## Test organization patterns

As your test suite grows, maintain structure:

```
apps/studio/
├── schemaTypes/
│   ├── validation/
│   │   ├── eventValidation.ts
│   │   └── eventValidation.test.ts
│   ├── components/
│   │   ├── DoorsOpenInput.tsx
│   │   └── DoorsOpenInput.test.tsx
│   └── eventType.ts
└── __tests__/
    ├── fixtures/
    │   ├── validation.ts
    │   ├── client.ts
    │   └── providers.tsx
    └── setup.ts
```

Key principles:

* **Co-locate tests** with the code they test
* **Share fixtures** in a central location (`__tests__/fixtures`), as sketched below
* **Name tests** after the file being tested (`DoorsOpenInput.test.tsx`)
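For example, a shared `createMockValidationContext` helper belongs in `__tests__/fixtures/validation.ts` so every test file can import it instead of hand-building context objects. The sketch below shows one possible shape for that fixture, assuming your custom validation rules only read `document` and `getClient` from the context they receive; adjust it to match the helper you built in earlier lessons.

```typescript
// __tests__/fixtures/validation.ts
// A minimal sketch of a shared validation fixture. Assumes custom validation
// rules only read `document` and `getClient` from the context they receive.
import type {SanityDocument, ValidationContext} from 'sanity'

export function createMockValidationContext(
  document: Partial<SanityDocument>,
): ValidationContext {
  const context = {
    document: document as SanityDocument,
    // Stub the client so rules that query the dataset can run without a network.
    // Override `fetch` per test when a rule expects specific query results.
    getClient: () => ({fetch: async () => null}),
  }
  return context as unknown as ValidationContext
}
```

Because the fixture lives in one place, changing how contexts are built only touches this file, not every test that uses it.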
## Test maintenance best practices

Tests require maintenance like production code. Follow these practices:

### Keep tests simple

```typescript
// ❌ Complex test with multiple concerns
it('handles everything', async () => {
  const result1 = await validateVenue(venue1, context1)
  const result2 = await validateVenue(venue2, context2)
  const result3 = await validateVenue(venue3, context3)
  expect(result1).toBe(true)
  expect(result2).toBe(false)
  expect(result3).toBe(true)
})

// ✅ Focused tests, one concept each
it('allows venue for in-person events', async () => {
  const result = await validateVenue(venue, inPersonContext)
  expect(result).toBe(true)
})

it('rejects venue for virtual events', async () => {
  const result = await validateVenue(venue, virtualContext)
  expect(result).toBe('Only in-person events can have a venue')
})
```

### Use descriptive test names

```typescript
// ❌ Vague
it('works', () => {})
it('test1', () => {})

// ✅ Clear intent
it('allows venue for in-person events', () => {})
it('rejects venue for virtual events', () => {})
it('calculates doors open time 60 minutes before event', () => {})
```

### Extract test helpers

```typescript
// ❌ Repeated setup in every test
it('test 1', () => {
  const context = {document: {_id: '1', _type: 'event', eventType: 'in-person'}}
  // ... test
})

it('test 2', () => {
  const context = {document: {_id: '2', _type: 'event', eventType: 'virtual'}}
  // ... test
})

// ✅ Reusable fixture
function createEventContext(eventType: string) {
  return createMockValidationContext({
    _id: `event-${eventType}`,
    _type: 'event',
    eventType,
  })
}

it('allows venue for in-person events', () => {
  const context = createEventContext('in-person')
  // ... test
})
```

## Start small, grow strategically

Building a test suite is an investment. Start small and expand strategically:

### Phase 1: Test critical validation

Begin with functions that protect data integrity:

* Required field validation
* Business rule enforcement
* Data consistency checks

**Goal**: Prevent content editors from creating invalid documents

### Phase 2: Test complex helpers

Add tests for helper functions with non-trivial logic:

* Date/time calculations
* Formatting utilities
* Data transformations

**Goal**: Catch bugs in commonly-used utilities

### Phase 3: Test custom components

Test custom inputs with complex behavior:

* Components with conditional rendering
* Components with user interactions
* Components that query the dataset

**Goal**: Ensure editor UI works correctly

### Phase 4: Integrate CI

Add GitHub Actions to run tests automatically:

* On every pull request
* Before merging to main
* Before deployments

**Goal**: Prevent untested code from reaching production

1. Review your current Studio code. Categorize your validation functions, helpers, and components into high/medium/low priority based on business impact and complexity.

## Maintaining test quality

As your suite grows, periodically review test quality:

### Red-green-refactor cycle

1. **Red** - Write a failing test
2. **Green** - Make it pass with minimal code
3. **Refactor** - Improve both test and production code

1. [Test-driven development (TDD)](https://en.wikipedia.org/wiki/Test-driven_development) naturally produces better-designed code. Writing tests first forces you to think about interfaces and edge cases before implementation. This discipline prevents over-engineering and keeps tests focused.
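To make the cycle concrete, here is a minimal pass through it with Vitest. The `getDoorsOpenTime` helper is hypothetical and used only to illustrate the flow; substitute whatever function you are actually building.

```typescript
// Red: write the test first, against a helper that does not exist yet.
import {describe, expect, it} from 'vitest'
import {getDoorsOpenTime} from './getDoorsOpenTime'

describe('getDoorsOpenTime', () => {
  it('calculates doors open time 60 minutes before event', () => {
    const eventStart = new Date('2025-06-01T19:00:00.000Z')
    const doorsOpen = getDoorsOpenTime(eventStart, 60)
    expect(doorsOpen.toISOString()).toBe('2025-06-01T18:00:00.000Z')
  })
})

// Green (getDoorsOpenTime.ts): the minimal implementation that makes it pass.
export function getDoorsOpenTime(eventStart: Date, minutesBefore: number): Date {
  return new Date(eventStart.getTime() - minutesBefore * 60_000)
}

// Refactor: with the test in place, rename, extract, or optimize freely,
// re-running the suite after each change.
```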
### Delete obsolete tests

When you remove features or refactor code, delete tests that no longer serve a purpose. Dead code in tests is still a maintenance burden.

## Key takeaways

**Strategic investment**

* Tests pay dividends through confident refactoring and faster debugging
* Start with high-value tests (validation, critical logic)
* Grow your suite incrementally as complexity increases

**Technical implementation**

* Pure functions are easiest to test
* Mock external dependencies (clients, contexts)
* Test user-facing behavior, not implementation details

**Workflow integration**

* Watch mode for instant local feedback
* CI runs for automated validation
* Coverage reports to find gaps (see the configuration sketch below)

**Sustainable testing**

* Keep tests simple and focused
* Co-locate tests with code
* Delete obsolete tests
* Prioritize readability over cleverness
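If you aren't generating coverage reports yet, a small addition to your Vitest configuration enables them. This is a sketch, assuming the `@vitest/coverage-v8` package is installed and that your Studio code lives under `schemaTypes/`; merge it into your existing `vitest.config.ts` rather than replacing it.

```typescript
// vitest.config.ts (sketch): enable coverage reporting for local runs and CI.
import {defineConfig} from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8', // requires the @vitest/coverage-v8 package
      reporter: ['text', 'html'], // terminal summary plus a browsable HTML report
      include: ['schemaTypes/**'], // adjust to wherever your Studio code lives
    },
  },
})
```

Run Vitest with the `--coverage` flag to generate the report, then use it to spot untested high-priority code rather than to chase a coverage number.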