Continuous Integration and Test Strategy
You've been running tests locally with pnpm test in watch mode. This provides instant feedback while developing. But tests become more valuable when integrated into your workflow at key points:
- Pull requests - Automated checks prevent broken code from being merged
- Before deployment - Tests catch issues before they reach content editors
- Scheduled runs - Detect drift from dependencies or external changes
Running tests as part of your continuous integration (CI) ensures every code change is validated automatically, regardless of who wrote it or what they tested locally.
When tests fail in CI, Vitest reports the failures in your pull request. On GitHub, for example, a failing pull request shows:
- ❌ Red X next to the commit
- Detailed logs showing which tests failed
- Line numbers and error messages
- Option to re-run failed tests
This prevents merging broken code and makes code review more efficient. Reviewers can focus on logic and design, trusting that tests verify correctness.
Not all code is equally important to test. Prioritize based on value:

High value - test these first:
- Validation functions - Protect data integrity
- Data transformation - Shape content for display
- Critical business logic - Features that could break revenue or the editor experience
- Custom input components - When they have non-trivial logic
- Schema structure helpers - When they involve logic

Low value - usually safe to skip:
- Simple schema definitions - No logic to test
- Thin wrappers - Just pass through to libraries
- UI-only components - Styling with no behavior
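For example, a custom validation rule is pure logic with a clear contract, which puts it at the top of this list. Here is a sketch of the validateVenue rule used in the examples later in this section (the function body is illustrative, not the exact implementation; ValidationContext is the real context type Sanity passes to custom validators):

```typescript
// eventValidation.ts - sketch of a high-value validation rule
import type {ValidationContext} from 'sanity'

export async function validateVenue(
  venue: unknown,
  context: ValidationContext,
): Promise<true | string> {
  // Venues only make sense for in-person events
  if (venue && context.document?.eventType !== 'in-person') {
    return 'Only in-person events can have a venue'
  }
  return true
}
```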
As your test suite grows, maintain structure:
```
apps/studio/
├── schemaTypes/
│   ├── validation/
│   │   ├── eventValidation.ts
│   │   └── eventValidation.test.ts
│   ├── components/
│   │   ├── DoorsOpenInput.tsx
│   │   └── DoorsOpenInput.test.tsx
│   └── eventType.ts
└── __tests__/
    ├── fixtures/
    │   ├── validation.ts
    │   ├── client.ts
    │   └── providers.tsx
    └── setup.ts
```
Key principles:
- Co-locate tests with the code they test
- Share fixtures in a central location (__tests__/fixtures)
- Name tests after the file being tested (DoorsOpenInput.test.tsx)
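A shared fixture file might look like the sketch below. The createMockValidationContext helper used later in this section is assumed to live in __tests__/fixtures/validation.ts; its exact shape is illustrative:

```typescript
// __tests__/fixtures/validation.ts - minimal sketch of a shared fixture
import type {ValidationContext} from 'sanity'

export function createMockValidationContext(
  document: Record<string, unknown>,
): ValidationContext {
  const context = {
    document: {
      _rev: 'mock-rev',
      _createdAt: '2024-01-01T00:00:00.000Z',
      _updatedAt: '2024-01-01T00:00:00.000Z',
      ...document,
    },
    // Tests that query the dataset should replace this with a stub
    getClient: () => {
      throw new Error('getClient is not mocked in this fixture')
    },
  }
  // Cast: the real ValidationContext carries more fields than most tests need
  return context as unknown as ValidationContext
}
```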
Tests require maintenance like production code. Follow these practices:
Keep each test focused on one concept:

```typescript
// ❌ Complex test with multiple concerns
it('handles everything', async () => {
  const result1 = await validateVenue(venue1, context1)
  const result2 = await validateVenue(venue2, context2)
  const result3 = await validateVenue(venue3, context3)
  expect(result1).toBe(true)
  expect(result2).toBe(false)
  expect(result3).toBe(true)
})

// ✅ Focused tests, one concept each
it('allows venue for in-person events', async () => {
  const result = await validateVenue(venue, inPersonContext)
  expect(result).toBe(true)
})

it('rejects venue for virtual events', async () => {
  const result = await validateVenue(venue, virtualContext)
  expect(result).toBe('Only in-person events can have a venue')
})
```

Name tests after the behavior they verify:

```typescript
// ❌ Vague
it('works', () => {})
it('test1', () => {})

// ✅ Clear intent
it('allows venue for in-person events', () => {})
it('rejects venue for virtual events', () => {})
it('calculates doors open time 60 minutes before event', () => {})
```

Extract repeated setup into fixtures:

```typescript
// ❌ Repeated setup in every test
it('test 1', () => {
  const context = {document: {_id: '1', _type: 'event', eventType: 'in-person'}}
  // ... test
})

it('test 2', () => {
  const context = {document: {_id: '2', _type: 'event', eventType: 'virtual'}}
  // ... test
})

// ✅ Reusable fixture
function createEventContext(eventType: string) {
  return createMockValidationContext({
    _id: `event-${eventType}`,
    _type: 'event',
    eventType,
  })
}

it('allows venue for in-person events', () => {
  const context = createEventContext('in-person')
  // ... test
})
```

Building a test suite is an investment. Start small and expand strategically:
Begin with functions that protect data integrity:
- Required field validation
- Business rule enforcement
- Data consistency checks
Goal: Prevent content editors from creating invalid documents
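A first test in this phase might look like the following. The file paths and the createMockValidationContext helper are the illustrative ones from earlier in this section:

```typescript
// schemaTypes/validation/eventValidation.test.ts - sketch
import {describe, expect, it} from 'vitest'
import {createMockValidationContext} from '../../__tests__/fixtures/validation'
import {validateVenue} from './eventValidation'

describe('validateVenue', () => {
  it('rejects venue for virtual events', async () => {
    const context = createMockValidationContext({
      _id: 'event-virtual',
      _type: 'event',
      eventType: 'virtual',
    })
    expect(await validateVenue({_ref: 'venue-1'}, context)).toBe(
      'Only in-person events can have a venue',
    )
  })
})
```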
Next, add tests for helper functions with non-trivial logic:
- Date/time calculations
- Formatting utilities
- Data transformations
Goal: Catch bugs in commonly-used utilities
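For instance, the doors-open calculation is a pure function with an exact expected output, which makes it cheap to cover (the helper below is a sketch of one possible implementation):

```typescript
// doorsOpen.ts - hypothetical helper
export function getDoorsOpenTime(eventStart: string, minutesBefore = 60): string {
  const start = new Date(eventStart)
  return new Date(start.getTime() - minutesBefore * 60_000).toISOString()
}

// doorsOpen.test.ts
import {expect, it} from 'vitest'
import {getDoorsOpenTime} from './doorsOpen'

it('calculates doors open time 60 minutes before event', () => {
  expect(getDoorsOpenTime('2024-06-01T20:00:00.000Z')).toBe('2024-06-01T19:00:00.000Z')
})
```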
Then test custom inputs with complex behavior:
- Components with conditional rendering
- Components with user interactions
- Components that query the dataset
Goal: Ensure editor UI works correctly
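A component test in this phase exercises what the editor sees, not internals. The prop shape and rendered text below are assumptions about DoorsOpenInput, and the toBeInTheDocument matcher assumes @testing-library/jest-dom is registered in __tests__/setup.ts:

```tsx
// schemaTypes/components/DoorsOpenInput.test.tsx - sketch
import {render, screen} from '@testing-library/react'
import {expect, it} from 'vitest'
import {DoorsOpenInput} from './DoorsOpenInput'

it('shows when doors open relative to the event start', () => {
  // Assumed prop shape; real Sanity input components receive a larger
  // set of props from the form builder
  render(<DoorsOpenInput value={60} />)
  expect(screen.getByText(/60 minutes before/i)).toBeInTheDocument()
})
```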
Finally, add GitHub Actions to run tests automatically:
- On every pull request
- Before merging to main
- Before deployments
Goal: Prevent untested code from reaching production
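A minimal workflow sketch, assuming a pnpm workspace with a test script that runs Vitest (adjust versions, paths, and branch names to match your repo):

```yaml
# .github/workflows/test.yml - sketch
name: test
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      # --run disables watch mode; Vitest also does this automatically
      # when the CI environment variable is set
      - run: pnpm test --run
```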
As your suite grows, periodically review test quality. One discipline that keeps it healthy is the red-green-refactor cycle:
- Red - Write a failing test
- Green - Make it pass with minimal code
- Refactor - Improve both test and production code
This discipline prevents over-engineering and keeps tests focused.
When you remove features or refactor code, delete tests that no longer serve a purpose. Dead code in tests is still maintenance burden.
Strategic investment
- Tests pay dividends through confident refactoring and faster debugging
- Start with high-value tests (validation, critical logic)
- Grow your suite incrementally as complexity increases
Technical implementation
- Pure functions are easiest to test
- Mock external dependencies (clients, contexts)
- Test user-facing behavior, not implementation details
Workflow integration
- Watch mode for instant local feedback
- CI runs for automated validation
- Coverage reports to find gaps
Sustainable testing
- Keep tests simple and focused
- Co-locate tests with code
- Delete obsolete tests
- Prioritize readability over cleverness