# Deployment and DevOps Standards for Vitest
This document outlines coding standards for deploying and integrating Vitest into DevOps workflows, focusing on building robust, efficient, and maintainable testing pipelines.
## 1. Build Processes and CI/CD Integration
### 1.1. Standard: Integrate Vitest into CI/CD pipelines for automated testing.
**Do This:** Configure your CI/CD system to automatically run Vitest tests upon every code commit, pull request, or merge to the main branch.
**Don't Do This:** Rely on manual testing or delaying test execution until late in the development cycle.
**Why:** Automated testing ensures that code changes do not introduce regressions and provides rapid feedback to developers.
**Example (GitHub Actions):**
"""yaml
# .github/workflows/vitest.yml
name: Vitest Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: 20 # Use a supported Node.js version
- name: Install dependencies
run: npm install # or yarn install, pnpm install
- name: Run Vitest tests
run: npm run test:ci # Or specify your Vitest command
"""
**Anti-Pattern:** Not configuring CI/CD to fail a build when tests fail.
### 1.2. Standard: Define a specific CI test command.
**Do This:** Add a dedicated script in "package.json" for CI testing, focusing on speed and reliability in a non-interactive environment.
**Don't Do This:** Use the same command that developers use locally, which might include watch mode or other development-specific features (e.g. UI).
**Why:** A dedicated CI command ensures consistent test execution across different environments and avoids potential conflicts caused by interactive features.
**Example ("package.json"):**
"""json
{
"scripts": {
"test": "vitest",
"test:ci": "vitest run --coverage" // Run tests once with coverage
}
}
"""
**"vitest run" Explanation:**
* "vitest run" specifically executes the tests in a non-watch mode, making it suitable for CI environments.
* "--coverage" is crucial for tracking test coverage ensuring sufficient testing depth before merging code.
### 1.3. Standard: Utilize caching for dependencies and test results.
**Do This:** Enable caching for "node_modules" and Vitest's cache to reduce build times in CI/CD.
**Don't Do This:** Repeatedly reinstall dependencies or re-run tests from scratch on every build.
**Why:** Caching dramatically speeds up CI/CD pipelines by reusing previously downloaded dependencies and cached test results.
**Example (GitHub Actions):**
"""yaml
# .github/workflows/vitest.yml (continued)
name: Vitest Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: 20
- name: Cache dependencies
uses: actions/cache@v3
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install dependencies
run: npm install
- name: Run Vitest tests
run: npm run test:ci
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }} # Optional: Only if you use Codecov
flags: unittests
name: codecov-vitest
"""
**Explanation:** The configuration above caches "~/.npm" based on the hash of "package-lock.json". This avoids re-downloading dependencies if the package lockfile hasn't changed. Using tools like Codecov or Coveralls helps track the code coverage produced during testing.
### 1.4 Standard: Use environment variables for configuration.
**Do this:** Configure Vitest using environment variables, especially for CI/CD settings such as API keys, database URLs, or other deployment-specific values.
**Don't do this:** Hardcode sensitive information directly in your Vitest configuration files.
**Why:** Allows configuring different testing environments (development, staging, production) without modifying code.
**Example:**
"""javascript
// vitest.config.js
import { defineConfig } from 'vite';
import { default as viteTsconfigPaths } from 'vite-tsconfig-paths';
export default defineConfig({
test: {
globals: true,
environment: 'jsdom',
setupFiles: ['./src/test/setup.ts'],
coverage: {
reporter: ['text', 'json', 'html'],
},
api: {
port: 3003,
host: 'localhost',
},
// Environment variables
API_URL: process.env.API_URL || 'http://localhost:3000',
},
plugins: [viteTsconfigPaths({ loose: true })],
});
"""
In your test files:
"""javascript
// Example.test.ts
import { describe, it, expect } from 'vitest';
describe('Example Test', () => {
it('should use the API URL from environment variables', () => {
const apiUrl = import.meta.env.API_URL;
expect(apiUrl).toBeDefined();
// You can also use process.env directly if you prefer:
// expect(process.env.API_URL).toBeDefined(); // Alternative
});
});
"""
In your CI environment (e.g., GitHub Actions):
"""yaml
# .github/workflows/vitest
name: Vitest Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Use Node.js
uses: actions/setup-node@v3
with:
node-version: 20
- name: Install Dependencies
run: npm install
- name: Run Vitest Tests
env:
API_URL: 'https://production-api.example.com' # Set the environment variable
run: npm run test:ci
"""
### 1.5 Standard: Enforce code coverage thresholds.
**Do this:** Use "vitest"'s coverage features, coupled with CI configuration, to enforce minimum acceptable code coverage percentages.
**Don't do this:** Allow merges with insufficient test coverage, leading to potential regressions in production.
**Why:** Prevents code merges that lack thorough testing.
**Example ("vitest.config.js"):**
"""javascript
import { defineConfig } from 'vite';
export default defineConfig({
test: {
coverage: {
reporter: ['text', 'json', 'html'],
thresholds: { // Ensure you have reasonable thresholds
lines: 80,
branches: 80,
functions: 80,
statements: 80,
},
},
},
});
"""
**CI Integration (GitHub Actions):**
"""yaml
# .github/workflows/vitest.yml (continued)
name: Vitest Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: 20
- name: Install dependencies
run: npm install
- name: Run Vitest tests
run: npm run test:ci
- name: Check Coverage
run: |
if [[ $(grep -c "All files" coverage/clover.xml) -eq 0 ]]; then
echo "ERROR: Coverage thresholds not met."
exit 1
fi
"""
**Explanation:** This example uses "thresholds" inside "coverage" to ensure that test coverage remains above 80% for lines, branches, functions and statements.
### 1.6 Standard: Handle flaky tests gracefully.
**Do this:** Implement retry mechanisms for flaky tests in your CI/CD pipeline.
**Don't do this:** Ignore flaky tests or permanently disable them. Identify the root cause of the flakiness but if that's not possible immediately, implement temporary retries.
**Why:** Flaky tests can cause false negatives and disrupt the development process.
**Example ("vitest.config.js"):**
"""javascript
import { defineConfig } from 'vite';
export default defineConfig({
test: {
retry: 2, // Retry failed tests up to 2 times
},
});
"""
**Explanation:** This configuration tells Vitest to retry any failed test up to 2 times before marking it as a failure.
## 2. Production Considerations
### 2.1. Standard: Exclude test files from production builds.
**Do This:** Configure your build tools (e.g., Webpack, Rollup, esbuild, Vite) to exclude test files (e.g., ".test.ts", ".spec.js") from production bundles.
**Don't Do This:** Include test files in production builds, as they unnecessarily increase bundle size and expose testing logic.
**Why:** Reduces the size of production bundles, improving loading times and security.
**Example (Vite Configuration):**
Vite, by default, should handle this automatically as it only bundles necessary code. Make sure your "tsconfig.json" has appropriate "exclude" entries as well.
"""json
// tsconfig.json
{
"compilerOptions": {
"target": "esnext",
"module": "esnext",
"moduleResolution": "node",
"strict": true,
"jsx": "preserve",
"sourceMap": true,
"resolveJsonModule": true,
"esModuleInterop": true,
"lib": ["esnext", "dom"],
"types": ["vite/client"],
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "**/__tests__/*", "**/*.spec.ts","**/*.test.ts"]
}
"""
### 2.2. Standard: Use mock implementations wisely in production.
**Do This:** Ensure all mock implementations used during testing are removed or replaced with real implementations before deploying to production.
**Don't Do This:** Leave mock implementations in production code, as they can lead to unexpected behavior and data inconsistencies.
**Why:** Prevents incorrect data or functionality in production environments.
**Example:**
During testing:
"""javascript
// __mocks__/apiClient.ts
export const fetchData = async () => {
return Promise.resolve({ data: "Mock Data" });
};
// apiClient.test.ts
import { fetchData } from '../src/apiClient';
vi.mock('../src/apiClient', async () => {
const actual = await vi.importActual('../src/apiClient')
return {
...actual,
fetchData: vi.fn(() => Promise.resolve({ data: "Mocked Data" })),
}
})
it('should use mock data during testing', async () => {
const data = await fetchData();
expect(data).toEqual({ data: "Mocked Data" }); // Confirm mock is in place
});
"""
Before production, ensure the "__mocks__" directory is not part of the final build, or use conditional logic:
"""javascript
// apiClient.ts
export const fetchData = async () => {
if (process.env.NODE_ENV === 'test') {
// Use testing mocks or fixtures in test environments
return Promise.resolve({ data: "Mock Data" });
} else {
// Real implementation in production
const response = await fetch('your-api-endpoint');
return response.json();
}
};
"""
### 2.3. Standard: Implement logging and monitoring for test failures in production-like environments.
**Do This:** Integrate Vitest tests into staging or pre-production environments to detect issues before they reach production. Log the test results. Also integrate with monitoring tools.
**Don't Do This:** Rely solely on local testing and CI/CD without performing tests in environments closely resembling production.
**Why:** Helps identify environment-specific issues and prevent regressions in production.
**Example:**
"""javascript
// vitest.config.js
export default defineConfig({
test: {
onTestFinished: (test, result) => {
if (result.errors.length > 0) {
console.error("Test "${test.name}" failed:", result.errors);
// Integrate with a logging service like Sentry or Rollbar
// Sentry.captureException(result.errors);
}
},
},
});
"""
### 2.4 Standard: Secure environment variables
**Do this:** Use secure storage mechanisms for environment variables in CI/CD and deployment systems (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault).
**Don't do this:** Store secrets directly in code or in plain-text configuration files.
**Why:** Prevents unauthorized access to sensitive information.
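As a minimal sketch of the fail-fast side of this rule, application or test-setup code can refuse to start when a required secret was not injected by the environment. The helper and the variable name "API_TOKEN" below are hypothetical, not part of any secrets-manager API:

```javascript
// Hypothetical helper: read a required secret from the environment and fail
// fast if it is missing, rather than falling back to a hardcoded value.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: the CI/CD secret store (e.g. GitHub Actions secrets) injects the value.
// const apiToken = requireEnv('API_TOKEN');
```

Failing loudly at startup makes a missing or misconfigured secret obvious in CI, instead of surfacing later as a confusing authentication error.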
### 2.5 Standard: Automate database migrations
**Do this:** Integrate database migrations into your CI/CD pipeline to ensure that database schema changes are automatically applied during deployment. For tests, consider using temporary databases.
**Don't do this:** Manually apply database migrations, which can lead to inconsistencies and errors.
**Why:** Ensures that the database schema is always in sync with the application code.
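The core mechanic of automated migrations (apply each pending change exactly once, in order) can be sketched with an in-memory runner. Real tools such as Knex, Prisma Migrate, or Flyway do the same thing against a migrations table in the database; this toy version only illustrates the idea:

```javascript
// Toy migration runner: applies migrations that have not run yet, in order.
// In a real pipeline the `applied` set lives in the database itself.
function runMigrations(migrations, applied = new Set()) {
  const ran = [];
  for (const { name, up } of migrations) {
    if (applied.has(name)) continue; // already applied, so reruns are no-ops
    up();                            // apply the schema change
    applied.add(name);               // record it as applied
    ran.push(name);
  }
  return ran;
}

// Example against a fake schema object (names are illustrative)
const schema = { tables: [] };
const migrations = [
  { name: '001_create_users', up: () => schema.tables.push('users') },
  { name: '002_create_posts', up: () => schema.tables.push('posts') },
];
const applied = new Set();
runMigrations(migrations, applied); // applies both migrations, in order
runMigrations(migrations, applied); // rerunning applies nothing
```

Because reruns are no-ops, the same migration step can run unconditionally on every deployment, which is exactly what a CI/CD pipeline needs.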
## 3. Modern Approaches and Patterns
### 3.1. Standard: Embrace component testing with Vitest and testing libraries like Testing Library.
**Do This:** Utilize component testing strategies to verify the behavior of individual components in isolation.
**Don't Do This:** Rely solely on end-to-end tests, which can be slower and more brittle.
**Why:** Component tests are faster, more focused, and provide better isolation than end-to-end tests.
**Example (React Component Test):**
"""jsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import MyComponent from './MyComponent';
import { describe, it, expect } from 'vitest';
describe('MyComponent', () => {
it('renders with the correct text', () => {
render();
expect(screen.getByText('Hello World')).toBeInTheDocument();
});
it('calls the onClick handler when the button is clicked', async () => {
const onClick = vi.fn();
render();
await userEvent.click(screen.getByText('Click Me'));
expect(onClick).toHaveBeenCalledTimes(1);
});
});
"""
### 3.2. Standard: Parallelize Tests
**Do this:** Use Vitest's built-in concurrency options to run tests in parallel. Configure the number of workers based on the CI/CD environment resources.
**Don't do this:** Run tests sequentially when they could run in parallel, as it significantly increases test execution time.
**Why:** Reduces overall test execution time, providing faster feedback.
**Example ("vitest.config.js"):**
"""javascript
import { defineConfig } from 'vite';
export default defineConfig({
test: {
threads: true, // Enable concurrent test execution
pool: 'forks', // "forks" or "threads" - forks are generally faster
// maxConcurrency: 5 // limit concurrent tests - if omitted it will detect the number of CPUs available
},
});
"""
### 3.3 Standard: Test isolation
**Do this:** Utilize Vitest's "beforeEach" and "afterEach" hooks to reset the test environment before and after each test case, or use "beforeAll" and "afterAll" when setup cost is high.
**Don't do this:** Allow tests to depend implicitly on the state of previous tests.
**Why:** Ensures that each test is independent and repeatable.
**Example:**
"""javascript
import { beforeEach, afterEach, it, expect, vi } from 'vitest';
let mockApiCall;
beforeEach(() => {
mockApiCall = vi.fn().mockResolvedValue({ data: 'some data' });
vi.mock('./api', () => ({
fetchData: mockApiCall,
}));
});
afterEach(() => {
vi.clearAllMocks(); // Resets all mocks after each test
});
it('should call the API', async () => {
const { fetchData } = await import('./api'); // Import inside test or beforeEach
await fetchData();
expect(mockApiCall).toHaveBeenCalled();
});
it('should return the correct data', async () => {
const { fetchData } = await import('./api');
const data = await fetchData();
expect(data).toEqual({ data: 'some data' });
});
"""
### 3.4 Standard: Utilize Vitest's API mocking features.
**Do this:** Use "vi.mock", "vi.spyOn" for intercepting and mocking API calls and dependencies. Ensure that mocks are properly reset after each test with "vi.restoreAllMocks" or "vi.clearAllMocks".
**Don't do this:** Modify global state in tests, or make real network calls in unit tests.
**Why:** Prevents external dependencies from affecting test results.
**Example:**
"""javascript
import { describe, it, expect, vi } from 'vitest';
import { fetchData } from './api';
describe('fetchData', () => {
it('should return data from the API', async () => {
// Mock the fetch function to return a resolved promise with sample data
global.fetch = vi.fn(() =>
Promise.resolve({
json: () => Promise.resolve({ data: 'Mocked data' }),
})
);
const result = await fetchData();
expect(result).toEqual({ data: 'Mocked data' });
expect(global.fetch).toHaveBeenCalledTimes(1); // Assert that fetch was called only once
});
});
"""
### 3.5 Standard: Use Vitest UI for debugging
**Do this:** For debugging, use the Vitest UI. Use debugging tools provided by VSCode or other IDEs.
**Don't do this:** Rely solely on console logs or ignore the debugging tools available.
**Why:** Improves developer experience and test debugging efficiency.
**Example:**
Launch the Vitest UI by running "vitest --ui" in the terminal, or add "test:ui": "vitest --ui" to the "scripts" section of "package.json".
### 3.6: Keep Tests Isolated from External State
**Do This:** Ensure each test is entirely self-contained. Use mocking strategies for dependencies as well as external resources like the file system or network.
**Don't Do This:** Write tests that rely on global state which could affect other tests or the production environment.
**Why**: Isolated tests are more reliable, predictable, and easier to debug.
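One common pattern is to inject external state (clocks, random sources, file systems) as a dependency, so each test can substitute a deterministic fake instead of touching the real environment. The "makeGreeter" factory below is purely illustrative:

```javascript
// Accepting the clock as a parameter keeps the function deterministic under
// test: tests pass a fixed clock instead of relying on the real Date.
function makeGreeter(clock = () => new Date()) {
  return () => (clock().getHours() < 12 ? 'Good morning' : 'Good afternoon');
}

// In a test, inject a deterministic clock (9:00 local time):
const greetAt9 = makeGreeter(() => new Date(2024, 0, 1, 9, 0, 0));
greetAt9(); // always 'Good morning', regardless of when the test runs
```

The same injection technique applies to file-system and network access: pass the dependency in, and hand the test a fake.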
By adhering to these standards, you can create a robust and efficient Vitest testing pipeline that supports continuous integration, continuous delivery, and overall software quality.
danielsogl
Created Mar 6, 2025
# Performance Optimization Standards for Vitest This document outlines performance optimization standards for Vitest, focusing on improving application speed, responsiveness, and resource usage during testing. These standards aim to guide developers in writing efficient and effective tests, ensuring a fast and reliable development workflow. ## 1. Test Suite Structure and Organization ### 1.1. Grouping Related Tests **Standard:** Group related tests within "describe" blocks to improve readability and enable focused execution and reduce setup costs. **Why:** Grouping allows developers to quickly identify the scope of a test suite, facilitates targeted test execution (e.g., "vitest run --testNamePattern "MyComponent""), and can enable shared setup/teardown logic for related tests. **Do This:** """typescript // src/components/MyComponent.test.ts import { describe, it, expect, beforeEach } from 'vitest'; import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; describe('MyComponent', () => { let wrapper; beforeEach(() => { wrapper = mount(MyComponent); }); it('renders correctly', () => { expect(wrapper.exists()).toBe(true); }); it('displays the correct default message', () => { expect(wrapper.text()).toContain('Hello from MyComponent!'); }); it('updates the message when the prop changes', async () => { await wrapper.setProps({ msg: 'New Message' }); expect(wrapper.text()).toContain('New Message'); }); }); """ **Don't Do This:** """typescript // Avoid a flat structure with ungrouped tests import { it, expect } from 'vitest'; import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; it('renders correctly', () => { const wrapper = mount(MyComponent); expect(wrapper.exists()).toBe(true); }); it('displays the correct default message', () => { const wrapper = mount(MyComponent); expect(wrapper.text()).toContain('Hello from MyComponent!'); }); """ ### 1.2. 
Test File Naming Conventions **Standard:** Use consistent and meaningful naming conventions for test files. A common convention is "[componentName].test.ts/js" or "[moduleName].spec.ts/js". **Why:** Clear naming conventions make it easier to locate and understand tests, improving maintainability and collaboration. Vitest can also leverage naming patterns for targeted test runs. **Do This:** """ src/ ├── components/ │ ├── MyComponent.vue │ └── MyComponent.test.ts // Or MyComponent.spec.ts └── utils/ ├── stringUtils.ts └── stringUtils.test.ts // Or stringUtils.spec.ts """ **Don't Do This:** """ src/ ├── components/ │ ├── MyComponent.vue │ └── test.ts // Vague and unclear └── utils/ ├── stringUtils.ts └── utils.test.ts // Ambiguous, doesn't clearly specify what's being tested """ ## 2. Efficient Test Setup and Teardown ### 2.1. Minimizing Global Setup and Teardown **Standard:** Avoid unnecessary global setup and teardown operations. Use "beforeEach" and "afterEach" hooks within "describe" blocks for context-specific setup and teardown. **Why:** Global setup and teardown can significantly slow down test execution, especially in large projects. Isolating setup and teardown to specific test suites reduces overhead and makes tests more predictable. 
**Do This:** """typescript import { describe, it, expect, beforeEach, afterEach } from 'vitest'; describe('MyComponent with specific data', () => { let component; let mockData; beforeEach(() => { mockData = { name: 'Test Name', value: 123 }; component = createComponent(mockData); // Hypothetical createComponent function }); afterEach(() => { component.destroy(); // Hypothetical destroy function to clean up resources mockData = null; }); it('renders the name correctly', () => { expect(component.getName()).toBe('Test Name'); }); it('renders the value correctly', () => { expect(component.getValue()).toBe(123); }); }); """ **Don't Do This:** """typescript // Avoid using global beforeAll and afterAll unless absolutely necessary import { describe, it, expect, beforeAll, afterAll } from 'vitest'; let globalComponent; beforeAll(() => { // This will run once before *all* tests, potentially creating unnecessary overhead globalComponent = createGlobalComponent(); }); afterAll(() => { // This will run once after *all* tests, even those that don't need cleanup globalComponent.destroy(); }); describe('Test Suite 1', () => { it('test 1', () => { // ... }); }); describe('Test Suite 2', () => { it('test 2', () => { // ... }); }); """ ### 2.2. Leveraging "beforeAll" and "afterAll" Strategically **Standard:** Use "beforeAll" and "afterAll" for expensive operations that only need to be performed once per test suite, such as database connections or large data set initialization. However, carefully consider the impact on test isolation. **Why:** "beforeAll" and "afterAll" can optimize performance by avoiding redundant setup. However, global state changes within these hooks can introduce dependencies between tests, leading to flaky results. 
**Do This (with caution and clear documentation):** """typescript import { describe, it, expect, beforeAll, afterAll } from 'vitest'; import { connectToDatabase, disconnectFromDatabase } from './database'; describe('Database Interactions', () => { let dbConnection; beforeAll(async () => { dbConnection = await connectToDatabase(); }); afterAll(async () => { await disconnectFromDatabase(dbConnection); }); it('fetches data correctly', async () => { const data = await dbConnection.query('SELECT * FROM users'); expect(data).toBeDefined(); }); it('inserts data correctly', async () => { await dbConnection.query('INSERT INTO users (name) VALUES ("Test User")'); // ... }); }); """ **Don't Do This (if not necessary for performance):** """typescript // Avoid overusing beforeAll if the setup is not truly expensive. import { describe, it, expect, beforeAll } from 'vitest'; describe('Simple Operations', () => { let simpleValue; beforeAll(() => { // Unnecessary use of beforeAll for a simple assignment simpleValue = 10; }); it('adds 5 to the value', () => { expect(simpleValue + 5).toBe(15); }); }); """ ### 2.3. Lazy Initialization **Standard:** Use lazy initialization for resources that are only needed by a subset of tests. Initialize these resources only when they are first accessed. **Why:** Lazy initialization avoids unnecessary setup costs for tests that don't require specific resources. This can significantly improve test suite run time, especially when dealing with complex or large-scale applications. 
**Do This:** """typescript import { describe, it, expect } from 'vitest'; describe('Conditional Resource Initialization', () => { let expensiveResource = null; const getExpensiveResource = () => { if (!expensiveResource) { expensiveResource = createExpensiveResource(); // Only create when needed } return expensiveResource; }; it('test 1 does not need the resource', () => { expect(true).toBe(true); // Simple assertion }); it('test 2 needs the expensive resource', () => { const resource = getExpensiveResource(); expect(resource).toBeDefined(); // ... use the resource }); }); """ **Don't Do This:** """typescript import { describe, it, expect, beforeAll } from 'vitest'; describe('Unconditional Resource Initialization', () => { let expensiveResource; beforeAll(() => { // Expensive resource is created even if tests don't use it, wasting resources expensiveResource = createExpensiveResource(); }); it('test 1 does not need the resource', () => { expect(true).toBe(true); }); it('test 2 needs the expensive resource', () => { expect(expensiveResource).toBeDefined(); // ... use the resource }); }); """ ## 3. Mocking and Stubbing Strategies ### 3.1. Minimizing External Dependencies **Standard:** Mock or stub external dependencies (e.g., API calls, database queries, third-party services) to isolate units under test and avoid slow or unreliable external factors. **Why:** Mocking and stubbing allows for predictable and fast test execution by eliminating dependence on external systems that may be unavailable, slow, or change unexpectedly. Vitest provides built-in mocking capabilities for this purpose. 
**Do This:** """typescript import { describe, it, expect, vi } from 'vitest'; import { fetchData } from './api'; // Hypothetical API function import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; vi.mock('./api', () => { return { fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })), }; }); describe('MyComponent with Mocked API', () => { it('displays mocked data correctly', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); // Ensure data is fetched and rendered expect(wrapper.text()).toContain('Mocked Data'); expect(fetchData).toHaveBeenCalled(); }); }); """ **Don't Do This:** """typescript // Avoid making actual API calls during testing import { describe, it, expect } from 'vitest'; import { fetchData } from './api'; import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; describe('MyComponent without Mocking', () => { it('displays fetched data (potentially slow and unreliable)', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); // May fail if the API is down or slow expect(wrapper.text()).toContain('Expected Data from API'); }); }); """ ### 3.2. Mocking Only What's Necessary **Standard:** Only mock the specific functions or modules that are directly involved in the test. Avoid mocking entire modules or services unless absolutely necessary. **Why:** Over-mocking can lead to brittle tests that don't accurately reflect the behavior of the system. Mocking only the relevant parts allows for more focused and reliable tests. **Do This:** """typescript // Mock only the fetchData function, not the entire api module. 
import { describe, it, expect, vi } from 'vitest'; import { fetchData, processData } from './api'; // Now processData remains real import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; vi.mock('./api', async () => { const actual = await vi.importActual('./api') return { ...actual, // import all the existing exports fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })), } }) describe('MyComponent with Specific Mocking', () => { it('displays processed data correctly', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); expect(wrapper.text()).toContain('Mocked Data'); // Relies on the mocked fetchData result expect(fetchData).toHaveBeenCalled(); // processData still runs with real logic }); }); """ **Don't Do This:** """typescript // Avoid mocking the entire module if only one function needs to be mocked import { describe, it, expect, vi } from 'vitest'; import * as api from './api'; // Import the whole module import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; vi.mock('./api', () => { return { fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })), processData: vi.fn(() => 'Mocked Processed Data'), // Unnecessary mocking }; }); describe('MyComponent with Over-Mocking', () => { it('displays mocked data correctly (but over-mocked)', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); expect(wrapper.text()).toContain('Mocked Processed Data'); // Using the mocked processData, even if it's unnecessary expect(api.fetchData).toHaveBeenCalled(); }); }); """ ### 3.3. Using "vi.spyOn" for Partial Mocking **Standard:** Use "vi.spyOn" to mock specific methods on an existing object or module *without* replacing the entire object. This allows you to verify that the method was called and observe its arguments, while still executing the original implementation. 
**Why:** "vi.spyOn" provides a more granular and less disruptive way to mock functionality, especially when you need to test interactions with a method while still relying on its original behaviour. **Do This:** """typescript import { describe, it, expect, vi } from 'vitest'; import * as MyModule from './myModule'; // Hypothetical module with several functions describe('Using vi.spyOn', () => { it('should call the original function and allow assertions', () => { const spy = vi.spyOn(MyModule, 'myFunction'); // Spy on myFunction const result = MyModule.myFunction(1, 2); expect(spy).toHaveBeenCalledTimes(1); expect(spy).toHaveBeenCalledWith(1, 2); expect(result).toBe(3); // Assuming myFunction returns the sum of its arguments spy.mockRestore(); // Restore the original implementation of myFunction }); }); """ **Don't Do This:** """typescript // Avoid using vi.mock if you only need to spy on a function import { describe, it, expect, vi } from 'vitest'; import * as MyModule from './myModule'; vi.mock('./myModule', () => { return { myFunction: vi.fn((a, b) => a + b), // Replaces myFunction entirely - less ideal if you want to call the original. }; }); describe('Avoid replacing the function completely with vi.mock', () => { it('should call the original function and allow assertions', () => { // ... less flexible for observing calls and executing original code }); }); """ ## 4. Efficient Assertions and Expectations ### 4.1. Avoiding Excessive Assertions **Standard:** Focus assertions on the specific behavior being tested in each test case. Avoid including unrelated or redundant assertions. **Why:** Excessive assertions can make tests harder to understand and maintain, and can also slow down test execution. Each assertion adds overhead. 
**Do This:**

"""typescript
import { describe, it, expect } from 'vitest';

describe('Focused Assertions', () => {
  it('correctly calculates the sum', () => {
    const result = calculateSum(2, 3);
    expect(result).toBe(5); // Focus on the sum itself
  });
});
"""

**Don't Do This:**

"""typescript
// Avoid including irrelevant assertions
import { describe, it, expect } from 'vitest';

describe('Excessive Assertions', () => {
  it('calculates the sum and checks unrelated properties', () => {
    const result = calculateSum(2, 3);
    expect(result).toBe(5);
    expect(typeof result).toBe('number'); // Redundant and unnecessary
    expect(result > 0).toBe(true); // Redundant and unnecessary
  });
});
"""

### 4.2. Using Specific Matchers

**Standard:** Use the most specific and appropriate Vitest matchers for each assertion. For example, use "toBe" for primitive values, "toEqual" for objects, "toContain" for arrays, and "toHaveBeenCalled" for mocks.

**Why:** Specific matchers improve the clarity and expressiveness of tests, and can also provide better performance by avoiding unnecessary comparisons or type conversions. Where available, type-safe matchers additionally catch mistakes at compile time.

**Do This:**

"""typescript
import { describe, it, expect, vi } from 'vitest';

describe('Specific Matchers', () => {
  it('uses toBe for primitive values', () => {
    expect(1 + 1).toBe(2);
  });

  it('uses toEqual for objects', () => {
    const obj1 = { a: 1, b: 2 };
    const obj2 = { a: 1, b: 2 };
    expect(obj1).toEqual(obj2);
  });

  it('uses toContain for arrays', () => {
    const arr = [1, 2, 3];
    expect(arr).toContain(2);
  });

  it('uses toHaveBeenCalled for mocks', () => {
    const mockFn = vi.fn();
    mockFn();
    expect(mockFn).toHaveBeenCalled();
  });
});
"""

**Don't Do This:**

"""typescript
// Avoid using generic matchers when more specific ones are available
import { describe, it, expect, vi } from 'vitest';

describe('Generic Matchers', () => {
  it('incorrectly uses toEqual for primitive values', () => {
    expect(1 + 1).toEqual(2); // Inefficient; 'toBe' is better for primitives
  });

  it('incorrectly uses toContain for objects', () => {
    const obj1 = { a: 1, b: 2 };
    const obj2 = { a: 1, b: 2 };
    expect(obj1).toContain(obj2); // Incorrect and will likely fail
  });
});
"""

### 4.3. Asynchronous Assertions

**Standard:** Use "async/await" with Vitest's built-in support for asynchronous testing. Ensure you wait for asynchronous operations to complete before making assertions.

**Why:** Asynchronous tests can be prone to errors if assertions are made before asynchronous operations have finished. Using "async/await" ensures that tests wait for completion, leading to more reliable results.

**Do This:**

"""typescript
import { describe, it, expect } from 'vitest';
import { fetchData } from './api'; // Hypothetical async function

describe('Asynchronous Assertions', () => {
  it('fetches data correctly', async () => {
    const data = await fetchData();
    expect(data).toBeDefined();
    // ... more assertions on the fetched data
  });
});
"""

**Don't Do This:**

"""typescript
// Avoid making assertions before asynchronous operations complete
import { describe, it, expect } from 'vitest';
import { fetchData } from './api';

describe('Incorrect Asynchronous Assertions', () => {
  it('attempts to assert before data is fetched (likely to fail)', () => {
    let data;
    fetchData().then(result => {
      data = result;
    });
    expect(data).toBeDefined(); // Assertion made before fetchData resolves
  });
});
"""

## 5. Code Coverage Considerations

### 5.1. Balancing Coverage and Performance

**Standard:** While code coverage is important, prioritize writing meaningful tests that cover critical functionality and edge cases. Avoid striving for 100% coverage at the expense of test performance or maintainability.
**Why:** High code coverage can provide a false sense of security if tests are not well-designed or if they focus on trivial code paths. Focus on writing tests that thoroughly validate the most important aspects of the system.

**Do This:**

* Identify critical functionality and prioritize testing those areas thoroughly.
* Focus on covering boundary conditions, edge cases, and potential error scenarios.
* Use code coverage tools (e.g., the v8 provider via Vitest's "--coverage" flag) to identify uncovered areas, but don't treat 100% coverage as the primary goal.

**Don't Do This:**

* Write tests solely to increase code coverage without considering their value in validating functionality.
* Create complex or convoluted tests to cover trivial code paths that are unlikely to cause issues.
* Neglect testing important areas simply because they are difficult to cover with tests.

### 5.2. Excluding Non-Essential Files

**Standard:** Exclude non-essential files (e.g., configuration files, third-party libraries) from code coverage analysis to avoid skewing the results and wasting resources.

**Why:** Including non-essential files in code coverage analysis can make it difficult to identify areas of the codebase that truly need more attention.

**Do This:** Configure the coverage reporter in "vitest.config.ts" to exclude files.

"""typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      exclude: [
        '**/node_modules/**',
        '**/dist/**',
        '**/coverage/**',
        'src/config.ts', // Example: exclude a configuration file
        'src/external/**', // Ignore external libraries
      ],
    },
  },
});
"""

**Don't Do This:**

* Include all files in code coverage analysis without considering their relevance.
* Fail to exclude generated files or build artifacts from coverage reports.

## 6. Parallelization and Concurrency

### 6.1. Enabling Test Parallelization

**Standard:** Enable parallel test execution in Vitest to reduce overall test run time, especially for large projects.

**Why:** Vitest can run tests in parallel, leveraging multiple CPU cores to significantly speed up execution. This is especially beneficial for tests that involve I/O operations or long-running computations.

**Do This:** By default, Vitest parallelizes test execution. You can control the level of parallelism within "vitest.config.ts". Make sure your tests are properly isolated for parallelism.

"""typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    pool: 'threads', // the default worker pool; use poolOptions to tune the number of threads
  },
});
"""

**Don't Do This:**

* Disable parallelization unless there are specific reasons to do so (e.g., tests that rely on shared mutable state).
* Ignore potential concurrency issues in tests when running them in parallel (e.g., race conditions when accessing shared resources).

### 6.2. Managing Shared State in Parallel Tests

**Standard:** Avoid shared mutable state between tests, or carefully manage access to shared resources using appropriate synchronization mechanisms (e.g., locks, mutexes) to prevent race conditions.

**Why:** Parallel tests that share mutable state can lead to non-deterministic results and flaky test runs.

**Do This:**

* Ensure each test operates on its own isolated data set.
* Use database transactions or other isolation techniques to prevent interference between tests that interact with shared databases.
* If shared state is unavoidable, use appropriate locking mechanisms to protect access.

**Don't Do This:**

* Allow tests to modify global variables or shared data structures without proper synchronization.
* Assume that tests will always run in a specific order when running them in parallel.

## 7. Performance Monitoring and Analysis

### 7.1. Using Performance Measurement Tools

**Standard:** Use performance measurement tools (e.g., "console.time" and "console.timeEnd" for basic timing, profiling tools for detailed analysis) to identify performance bottlenecks in tests.

**Why:** Performance measurement tools can help pinpoint slow-running tests or inefficient code within tests, allowing developers to optimize them.

**Do This:**

"""typescript
import { describe, it, expect } from 'vitest';

describe('Performance Measurement', () => {
  it('measures the execution time of a function', () => {
    console.time('myFunction');
    myFunction(); // Hypothetical function to measure
    console.timeEnd('myFunction');
  });
});
"""

**Don't Do This:**

* Ignore performance issues in tests.
* Rely solely on intuition when identifying performance bottlenecks without using measurement tools.
* Leave performance measurement code in production code. Add it only to tests when measurements are needed, and remove it afterwards.

By adhering to these performance optimization standards, developers can create Vitest test suites that are fast, reliable, and maintainable, ensuring a smooth and efficient development process.
# Core Architecture Standards for Vitest

This document outlines the core architectural standards for developing and maintaining Vitest itself. It provides guidelines for project structure, fundamental patterns, and principles to ensure maintainability, performance, and scalability. These standards are designed to be used by Vitest developers, contributors, and AI coding assistants.

## 1. Project Structure and Organization

A well-defined project structure is crucial for navigating and understanding the Vitest codebase. It promotes discoverability, reduces cognitive load, and simplifies maintenance.

**Standard:** Adhere to a modular, decoupled architecture with clear boundaries between components.

**Do This:**

* Organize code into meaningful modules based on functionality (e.g., "runner", "reporter", "config", "api").
* Maintain a flat directory structure within modules to avoid excessive nesting, promoting discoverability.
* Use descriptive file and directory names that clearly indicate their purpose.

**Don't Do This:**

* Create tightly coupled components that are difficult to test or refactor.
* Overuse deeply nested directory structures.
* Use vague or ambiguous names for files and directories.

**Why:** A modular architecture allows for independent development and testing of components, reducing the impact of changes in one area on other parts of the system. Clean, descriptive names improve code readability and maintainability.
**Code Example (Project Structure):**

"""
vitest/
├── packages/
│   ├── vitest/              # Core Vitest package
│   │   ├── src/             # Source code
│   │   │   ├── api/         # API layer
│   │   │   ├── config/      # Configuration loading and handling
│   │   │   ├── runner/      # Test runner logic
│   │   │   ├── reporter/    # Test reporter implementations
│   │   │   ├── utils/       # Utility functions
│   │   │   ├── types/       # Type definitions
│   │   │   └── index.ts     # Entry point
│   │   ├── test/            # Unit and integration tests
│   │   ├── index.ts         # Package entry point
│   │   ├── package.json     # Package manifest
│   │   └── tsconfig.json    # TypeScript configuration
│   ├── vite-node/           # Vite Node integration package
│   │   └── ...
│   └── ...
├── playground/              # Example projects for testing Vitest
├── scripts/                 # Build and development scripts
├── tsconfig.json            # Root TypeScript configuration
├── package.json             # Root package manifest
├── README.md                # Project README
└── ...
"""

## 2. Fundamental Architectural Patterns

Vitest's architecture should leverage proven design patterns to promote code reusability, maintainability, and testability.

**Standard:** Employ established design patterns where appropriate, favoring simplicity and clarity over complex solutions.

**Do This:**

* Use the **Observer pattern** for event handling and communication between components (e.g., test lifecycle events).
* Implement the **Strategy pattern** for handling different test environments or reporters.
* Apply the **Factory pattern** for creating instances of classes, providing abstraction and flexibility.
* Favor functional programming principles where appropriate for pure functions and immutable data, increasing predictability.

**Don't Do This:**

* Over-engineer solutions by applying patterns unnecessarily.
* Create tightly coupled dependencies by avoiding interfaces and abstraction.
* Rely on global state, which can lead to unpredictable behavior and difficult debugging.
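The Strategy pattern mentioned above can be sketched as follows. This is a hypothetical illustration: the "Reporter" interface, both reporter classes, and "summarize" are made-up names, not Vitest's actual reporter API.

```typescript
// A minimal Strategy-pattern sketch for swappable reporters.
interface TestResult {
  name: string;
  passed: boolean;
}

interface Reporter {
  report(results: TestResult[]): string;
}

class DotReporter implements Reporter {
  report(results: TestResult[]): string {
    // One character per test: "." for pass, "x" for fail
    return results.map(r => (r.passed ? '.' : 'x')).join('');
  }
}

class VerboseReporter implements Reporter {
  report(results: TestResult[]): string {
    return results
      .map(r => `${r.passed ? 'PASS' : 'FAIL'} ${r.name}`)
      .join('\n');
  }
}

// The caller depends only on the Reporter interface, so the
// strategy can be swapped without touching the summarizing logic.
function summarize(results: TestResult[], reporter: Reporter): string {
  return reporter.report(results);
}

const results: TestResult[] = [
  { name: 'adds numbers', passed: true },
  { name: 'handles zero', passed: false },
];

console.log(summarize(results, new DotReporter())); // → ".x"
console.log(summarize(results, new VerboseReporter()));
```

Because each reporter implements the same interface, adding a new output format is a purely additive change.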
**Why:** Design patterns provide reusable solutions to common problems, making the codebase more understandable and maintainable. They also promote loose coupling, which enhances testability and reduces the impact of changes. Functional programming improves code clarity and reduces side effects.

**Code Example (Observer Pattern - simplified):**

"""typescript
// types/index.ts
interface EventListener<T> {
  (event: T): void;
}

interface Emitter<T> {
  on(event: string, listener: EventListener<T>): void;
  off(event: string, listener: EventListener<T>): void;
  emit(event: string, data: T): void;
}

// utils/emitter.ts
class TypedEmitter<T> implements Emitter<T> {
  private listeners: { [event: string]: EventListener<T>[] } = {};

  on(event: string, listener: EventListener<T>): void {
    if (!this.listeners[event]) {
      this.listeners[event] = [];
    }
    this.listeners[event].push(listener);
  }

  off(event: string, listener: EventListener<T>): void {
    if (this.listeners[event]) {
      this.listeners[event] = this.listeners[event].filter(l => l !== listener);
    }
  }

  emit(event: string, data: T): void {
    this.listeners[event]?.forEach(listener => listener(data));
  }
}

// runner/testRunner.ts (Example Usage)
interface TestEvent {
  testId: string;
  status: 'running' | 'passed' | 'failed';
}

const testEmitter = new TypedEmitter<TestEvent>();

testEmitter.on('test:start', (event) => {
  console.log(`Test ${event.testId} started`);
});

testEmitter.emit('test:start', { testId: 'test-1', status: 'running' });

export { testEmitter };
"""

## 3. Configuration Management

Robust configuration management is essential for adapting Vitest to different environments and use cases.

**Standard:** Centralize configuration loading and validation to ensure consistency and prevent errors.

**Do This:**

* Use a dedicated "config" module to handle configuration loading from files and command-line arguments.
* Implement schema validation to ensure that configuration values are of the correct type and within acceptable ranges.
* Provide sensible default values for configuration options.
* Support configuration files in common formats (e.g., "vitest.config.ts", "package.json").

**Don't Do This:**

* Scatter configuration logic throughout the codebase.
* Assume that configuration values are always valid.
* Hardcode configuration values.

**Why:** Centralized configuration management simplifies the process of modifying and extending Vitest's behavior. Schema validation prevents configuration errors, improving reliability.

**Code Example (Configuration Loading and Validation):**

"""typescript
// config/index.ts
import { loadConfigFromFile, mergeConfig, UserConfig } from 'vite';
import { resolve } from 'path';
import { InlineConfig, VitestConfig } from '../types';
import { defaultVitestConfig } from './defaults';

export async function resolveConfig(
  inlineConfig: InlineConfig = {},
  rootDir: string = process.cwd(),
  command: 'serve' | 'build' = 'serve',
  mode = 'development'
): Promise<VitestConfig> {
  let config = mergeConfig(defaultVitestConfig, inlineConfig) as VitestConfig;
  const resolvedRoot = resolve(rootDir);

  let configFile: { path: string; config: UserConfig } | undefined;
  try {
    configFile = await loadConfigFromFile(
      { command, mode },
      inlineConfig.configFile || resolve(resolvedRoot, 'vitest.config.ts'),
      resolvedRoot
    );
  } catch (e: any) {
    // ...error handling...
    configFile = undefined;
  }

  if (configFile) config = mergeConfig(config, configFile.config) as VitestConfig;

  // ... other config resolution and validation ...

  return config;
}
"""

## 4. Asynchronous Operations

Vitest relies heavily on asynchronous operations for test execution and reporting. Handling these operations correctly is crucial for performance and stability.

**Standard:** Use modern asynchronous patterns, such as "async/await" and Promises.

**Do This:**

* Prefer "async/await" over callbacks for asynchronous control flow.
* Use "Promise.all" or "Promise.allSettled" to execute multiple asynchronous operations concurrently.
* Implement proper error handling for asynchronous operations using "try/catch" blocks.

**Don't Do This:**

* Use callback-based asynchronous patterns.
* Block the event loop with long-running synchronous operations.
* Ignore errors from asynchronous operations without proper handling.

**Why:** "async/await" simplifies asynchronous code, making it more readable and maintainable. Concurrent execution improves performance. Proper error handling prevents unhandled exceptions and ensures stability.

**Code Example (Asynchronous Test Execution):**

"""typescript
// runner/testRunner.ts
import { testEmitter } from './emitter';

async function runTest(testFile: string) {
  try {
    // Dynamically import the test file
    const testModule = await import(testFile);

    // Check if the module has a default export, or named exports
    if (testModule.default && typeof testModule.default === 'function') {
      await testModule.default();
    } else if (testModule) {
      // Iterate over named exports and execute them if they are functions
      for (const key in testModule) {
        if (typeof testModule[key] === 'function') {
          await testModule[key]();
        }
      }
    }

    testEmitter.emit('test:passed', { testId: testFile, status: 'passed' });
  } catch (error) {
    console.error(`Test ${testFile} failed:`, error);
    testEmitter.emit('test:failed', { testId: testFile, status: 'failed' });
  }
}

export { runTest };
"""

## 5. Error Handling and Logging

Effective error handling and logging are essential for diagnosing and resolving issues in Vitest.

**Standard:** Implement comprehensive error handling and logging throughout the codebase.

**Do This:**

* Use "try/catch" blocks to handle potential exceptions.
* Log errors with sufficient context, including stack traces where appropriate.
* Provide clear and informative error messages to users.
* Use different logging levels (e.g., "debug", "info", "warn", "error") to categorize log messages.
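The logging-levels guideline can be sketched with a minimal leveled logger that drops messages below a configured threshold. The "createLogger" helper and its shape are illustrative, not an actual Vitest API; real code would write to stdout/stderr rather than collect lines in memory.

```typescript
// Minimal sketch of leveled logging with a configurable threshold.
type Level = 'debug' | 'info' | 'warn' | 'error';

const levelOrder: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

function createLogger(threshold: Level) {
  const lines: string[] = []; // captured here for demonstration only
  const log = (level: Level, message: string) => {
    // Drop messages below the configured threshold
    if (levelOrder[level] < levelOrder[threshold]) return;
    lines.push(`[${level.toUpperCase()}] ${message}`);
  };
  return {
    debug: (m: string) => log('debug', m),
    info: (m: string) => log('info', m),
    warn: (m: string) => log('warn', m),
    error: (m: string) => log('error', m),
    lines,
  };
}

const logger = createLogger('warn');
logger.debug('resolving config');      // filtered out by the threshold
logger.error('failed to load config'); // kept
console.log(logger.lines); // → [ '[ERROR] failed to load config' ]
```

Keeping the threshold configurable lets verbose "debug" output stay available for troubleshooting without cluttering normal runs.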
**Don't Do This:**

* Swallow exceptions without logging them.
* Log sensitive information (e.g., passwords, API keys).
* Use generic error messages that don't provide useful information.

**Why:** Proper error handling prevents unhandled exceptions, ensuring stability. Detailed logs allow for efficient debugging and issue resolution. Informative error messages help users understand and resolve problems.

**Code Example (Error Handling and Logging):**

"""typescript
// utils/fileSystem.ts
import fs from 'node:fs';

async function readFile(filePath: string): Promise<string | null> {
  try {
    const content = await fs.promises.readFile(filePath, 'utf-8');
    return content;
  } catch (error: any) {
    console.error(`Failed to read file ${filePath}: ${error.message}`, error);
    return null;
  }
}
"""

## 6. Extensibility and Plugins

Vitest's architecture should be designed to be extensible, allowing users to add new features and integrations through plugins.

**Standard:** Provide a well-defined plugin API that allows developers to extend Vitest's functionality without modifying the core codebase.

**Do This:**

* Define clear interfaces and extension points for plugins.
* Provide comprehensive documentation and examples for plugin development.
* Support different types of plugins (e.g., reporters, transformers, resolvers).

**Don't Do This:**

* Expose internal implementation details to plugins.
* Introduce breaking changes to the plugin API without a clear migration path.
* Limit the types of extensions that plugins can provide.

**Why:** Extensibility allows Vitest to adapt to a wide range of use cases and integrate with different tools and frameworks. A well-defined plugin API ensures that plugins are reliable and easy to develop.
**Code Example (Plugin API):**

"""typescript
// types/index.ts
export interface Plugin {
  name: string;
  config?: (
    config: VitestConfig,
    env: { mode: string; command: string }
  ) => VitestConfig | void | Promise<VitestConfig | void>;
  configureServer?: (server: any) => void | Promise<void>; // ViteDevServer
  transform?: (code: string, id: string) => any; // TransformResult | null | void
}

// config/index.ts (Apply plugins)
async function resolveConfig(
  inlineConfig: InlineConfig = {},
  rootDir: string = process.cwd(),
  command: 'serve' | 'build' = 'serve',
  mode = 'development'
): Promise<VitestConfig> {
  // ... other config resolution code ...

  if (config.plugins) {
    for (const plugin of config.plugins) {
      if (plugin.config) {
        const pluginConfig = await plugin.config(config, { mode, command });
        if (pluginConfig) {
          config = mergeConfig(config, pluginConfig) as VitestConfig;
        }
      }
    }
  }

  return config;
}
"""

## 7. Testing

Thorough testing is crucial for ensuring the reliability and stability of Vitest.

**Standard:** Implement comprehensive unit, integration, and end-to-end tests.

**Do This:**

* Write unit tests for individual functions and classes.
* Write integration tests to verify the interaction between different components.
* Write end-to-end tests to validate the overall behavior of the system.
* Use code coverage tools to identify areas of the codebase that are not adequately tested.
* Follow the Arrange-Act-Assert pattern for writing clear and maintainable tests.

**Don't Do This:**

* Skip writing tests for complex or critical code.
* Write tests that are tightly coupled to implementation details.
* Ignore code coverage metrics.

**Why:** Comprehensive testing helps to prevent bugs, improve code quality, and reduce the risk of regressions. Code coverage metrics provide valuable insights into the effectiveness of testing efforts. The Arrange-Act-Assert pattern simplifies test writing.
**Code Example (Unit Test):**

"""typescript
// utils/string.test.ts
import { describe, it, expect } from 'vitest';
import { capitalize } from './string';

describe('capitalize', () => {
  it('should capitalize the first letter of a string', () => {
    expect(capitalize('hello')).toBe('Hello');
  });

  it('should handle empty strings', () => {
    expect(capitalize('')).toBe('');
  });

  it('should handle strings with only one character', () => {
    expect(capitalize('a')).toBe('A');
  });

  it('should handle strings that already start with a capital letter', () => {
    expect(capitalize('World')).toBe('World');
  });
});
"""

## 8. Documentation

Clear and comprehensive documentation is essential for helping developers understand and use Vitest.

**Standard:** Maintain thorough documentation for all aspects of Vitest, including the core architecture, API, and plugin development.

**Do This:**

* Write clear and concise documentation that is easy to understand.
* Provide examples and tutorials to help users get started.
* Keep the documentation up-to-date with the latest changes.
* Use a consistent style and format for all documentation.

**Don't Do This:**

* Write documentation that is vague or incomplete.
* Fail to update the documentation when making changes to the codebase.
* Use inconsistent terminology or formatting.

**Why:** Documentation is essential for helping developers understand and use Vitest effectively. Up-to-date documentation reduces the burden on maintainers by providing a clear source of information.

## 9. Performance Optimization

Performance is a crucial consideration for Vitest, especially when running large test suites.

**Standard:** Optimize code for performance, minimizing overhead and maximizing throughput.

**Do This:**

* Use efficient algorithms and data structures.
* Avoid unnecessary work, such as redundant calculations or data transformations.
* Use caching to store frequently accessed data.
* Profile code to identify performance bottlenecks.
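The caching guideline above can be sketched with a simple memoization helper. "memoize" and the example function are illustrative, not part of Vitest.

```typescript
// Minimal memoization sketch: cache results of a pure, single-argument function.
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A): R => {
    if (cache.has(arg)) return cache.get(arg)!; // serve the cached result
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

let calls = 0;
const slowSquare = (n: number): number => {
  calls++; // track how often the underlying work actually runs
  return n * n;
};

const fastSquare = memoize(slowSquare);
fastSquare(4);
fastSquare(4); // the second call is served from the cache
console.log(calls); // → 1
```

Memoization only pays off for pure functions; caching a function with side effects or time-dependent results would change behavior.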
**Don't Do This:**

* Prioritize premature optimization over readability.
* Ignore performance considerations during development.
* Introduce performance regressions without proper analysis.

**Why:** Performance optimization improves the overall user experience and reduces the time it takes to run tests. Efficient code also consumes fewer resources and reduces energy consumption.

## 10. Security

Security is a critical consideration for Vitest, especially when executing user-provided code.

**Standard:** Implement security best practices to prevent vulnerabilities and protect against malicious attacks.

**Do This:**

* Sanitize user input to prevent code injection attacks.
* Limit the privileges of the test runner to prevent unauthorized access to system resources.
* Use secure coding practices, such as avoiding buffer overflows and race conditions.
* Regularly review and update dependencies to address known vulnerabilities.
* Implement permission policies to control resource access during test execution.

**Don't Do This:**

* Trust user input without validation.
* Run tests with elevated privileges.
* Ignore security warnings or vulnerabilities.

**Why:** Security best practices protect against malicious attacks and ensure the integrity of the system. Vulnerabilities can lead to data breaches, system compromise, or denial of service.

These standards will guide the development of Vitest to ensure a robust, maintainable, and scalable testing framework. Adherence to these principles will facilitate contributions, improve code quality, and ultimately provide a better experience for the entire Vitest community.
# Component Design Standards for Vitest

This document outlines the coding standards for component design within the Vitest testing framework. It aims to guide developers in creating reusable, maintainable, and performant test components. These standards are tailored to the latest features and best practices of Vitest.

## 1. Componentization Principles in Vitest

This section establishes the fundamental principles underlying effective component design within the context of Vitest.

### 1.1. Abstraction and Encapsulation

* **Standard:** Encapsulate complex testing logic within reusable components. Abstract away implementation details to simplify test setup and assertions.
* **Why:** Reduces duplication, improves readability, and makes tests more resilient to changes in the underlying system.
* **Do This:** Create helper functions or custom matchers to handle common assertion patterns.
* **Don't Do This:** Repeat the same setup or assertion logic across multiple test files.

"""typescript
// Good: Abstraction with a custom matcher
import { expect, it } from 'vitest';

expect.extend({
  toBeWithinRange(received, floor, ceiling) {
    const pass = received >= floor && received <= ceiling;
    if (pass) {
      return {
        message: () => `expected ${received} not to be within range ${floor} - ${ceiling}`,
        pass: true,
      };
    } else {
      return {
        message: () => `expected ${received} to be within range ${floor} - ${ceiling}`,
        pass: false,
      };
    }
  },
});

// Use the custom matcher
it('number should be within range', () => {
  expect(100).toBeWithinRange(90, 110);
});

// Bad: Repeating assertion logic
it('number should be within range (bad)', () => {
  const number = 100;
  const floor = 90;
  const ceiling = 110;
  expect(number >= floor && number <= ceiling).toBe(true);
});
"""

### 1.2. Single Responsibility Principle (SRP)

* **Standard:** Each test component or helper should have one, and only one, reason to change.
* **Why:** Promotes modularity, reduces complexity, and makes it easier to isolate and fix issues.
* **Do This:** Break down large, monolithic test functions into smaller, more focused components.
* **Don't Do This:** Create test functions that handle multiple aspects of a component's behavior.

"""typescript
// Good: SRP - Separate setup and assertion logic
import { describe, it, expect, beforeEach } from 'vitest';

describe('MyComponent', () => {
  let component;

  beforeEach(() => {
    component = { /* ... */ }; // Simulate component initialization
  });

  it('should render correctly', () => {
    expect(component).toBeDefined(); // Assertion logic
  });

  it('should handle user input', () => {
    // Test input handling
    expect(true).toBe(true);
  });
});

// Bad: Violating SRP - Mixing setup and assertions
it('MyComponent - should render and handle input (BAD)', () => {
  const component = { /* ... */ }; // Setup
  expect(component).toBeDefined(); // Assertion
  // Test input handling
  expect(true).toBe(true);
});
"""

### 1.3. Composition over Inheritance

* **Standard:** Prefer composing test components from smaller, reusable parts rather than relying on deep inheritance hierarchies.
* **Why:** Improves flexibility, reduces coupling, and avoids the complexities of inheritance.
* **Do This:** Create utility functions that encapsulate specific testing behaviors and compose them as needed.
* **Don't Do This:** Create deeply nested inheritance structures for your test components.

"""typescript
// Good: Composition using utility functions
import { describe, it, expect } from 'vitest';

const createMockComponent = (props = {}) => ({ ...props, isMock: true });
const assertComponentRenders = (component) => expect(component).toBeDefined();

describe('ComposableComponent', () => {
  it('should create a mock component', () => {
    const mock = createMockComponent({ name: 'Test' });
    expect(mock.name).toBe('Test');
    expect(mock.isMock).toBe(true);
  });

  it('should assert if the component renders', () => {
    const mockComponent = createMockComponent();
    assertComponentRenders(mockComponent);
  });
});
"""

## 2. Creating Reusable Components in Vitest

This section focuses on the specific techniques and best practices for creating modular, easily reusable components for your tests.

### 2.1. Custom Matchers

* **Standard:** Utilize custom matchers to encapsulate complex or domain-specific assertions.
* **Why:** Improves test readability, reduces code duplication, and provides a more expressive API for your tests.
* **Do This:** Implement custom matchers for common validation scenarios like date formatting or data structure validation.
* **Don't Do This:** Hardcode complex assertion logic directly within your tests.

"""typescript
// Custom Matcher Example
import { expect, it } from 'vitest';

expect.extend({
  toBeValidDate(received) {
    const pass = received instanceof Date && !isNaN(received.getTime());
    if (pass) {
      return {
        message: () => `expected ${received} not to be a valid date`,
        pass: true,
      };
    } else {
      return {
        message: () => `expected ${received} to be a valid date`,
        pass: false,
      };
    }
  },
});

it('should be a valid date', () => {
  expect(new Date()).toBeValidDate();
});
"""

### 2.2. Test Factories

* **Standard:** Use test factories to create consistent and configurable test data.
* **Why:** Reduces boilerplate code, makes tests more maintainable, and allows for easy customization of test scenarios.
* **Do This:** Create factory functions for generating mock data or component props.
* **Don't Do This:** Hardcode test data directly within your tests.

"""typescript
// Test Factory Example
import { describe, it, expect } from 'vitest';

const createMockUser = (overrides = {}) => ({
  id: '123',
  name: 'Test User',
  email: 'test@example.com',
  ...overrides,
});

describe('User Creation', () => {
  it('should create a default user', () => {
    const user = createMockUser();
    expect(user.id).toBe('123');
    expect(user.name).toBe('Test User');
  });

  it('should allow overrides', () => {
    const user = createMockUser({ name: 'Custom Name' });
    expect(user.name).toBe('Custom Name');
  });
});
"""

### 2.3. Test Data Builders

* **Standard:** Employ test data builders for constructing complex, nested test data objects in a readable and maintainable way.
* **Why:** Simplifies the creation of intricate test data structures, promoting clarity and reducing the likelihood of errors.
* **Do This:** Implement builder classes or functions to manage the construction of complex test data scenarios.
* **Don't Do This:** Manually construct complex test data objects within test cases, leading to verbose and error-prone tests.

"""typescript
// Test Data Builder Example
import { describe, it, expect } from 'vitest';

class UserBuilder {
  private id: string = '123';
  private name: string = 'Test User';
  private email: string = 'test@example.com';
  private addresses: string[] = [];

  withId(id: string): UserBuilder {
    this.id = id;
    return this;
  }

  withName(name: string): UserBuilder {
    this.name = name;
    return this;
  }

  withEmail(email: string): UserBuilder {
    this.email = email;
    return this;
  }

  withAddress(address: string): UserBuilder {
    this.addresses.push(address);
    return this;
  }

  build(): any {
    return {
      id: this.id,
      name: this.name,
      email: this.email,
      addresses: this.addresses,
    };
  }
}

describe('UserBuilder', () => {
  it('should build a user with default values', () => {
    const user = new UserBuilder().build();
    expect(user.id).toBe('123');
    expect(user.name).toBe('Test User');
    expect(user.email).toBe('test@example.com');
    expect(user.addresses).toEqual([]);
  });

  it('should build a user with custom values', () => {
    const user = new UserBuilder()
      .withId('456')
      .withName('Jane Doe')
      .withEmail('jane.doe@example.com')
      .withAddress('123 Main St')
      .build();
    expect(user.id).toBe('456');
    expect(user.name).toBe('Jane Doe');
    expect(user.email).toBe('jane.doe@example.com');
    expect(user.addresses).toEqual(['123 Main St']);
  });
});
"""

### 2.4. Page Objects (for UI Testing)

* **Standard:** Create page object classes to represent UI elements and interactions.
Use with libraries like Playwright or Cypress when testing UI components.
* **Why:** Isolates UI-specific logic, making tests more resilient to UI changes and improving maintainability.
* **Do This:** Define page object classes that encapsulate locators, actions, and assertions for specific UI pages or components.
* **Don't Do This:** Directly interact with UI elements within your tests.

"""typescript
// Hypothetical Page Object Example (with Playwright)
import { expect, Page } from '@playwright/test';

class LoginPage {
  private readonly page: Page;
  private readonly usernameInput = '#username';
  private readonly passwordInput = '#password';
  private readonly loginButton = '#login-button';

  constructor(page: Page) {
    this.page = page;
  }

  async goto() {
    await this.page.goto('/login');
  }

  async login(username: string, password: string) {
    await this.page.fill(this.usernameInput, username);
    await this.page.fill(this.passwordInput, password);
    await this.page.click(this.loginButton);
  }

  async assertLoginSuccess() {
    await expect(this.page.locator('#success-message')).toBeVisible();
  }
}

export { LoginPage };

// In your test:
import { test } from '@playwright/test';
import { LoginPage } from './LoginPage';

test('Login should succeed', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login('testuser', 'password');
  await loginPage.assertLoginSuccess();
});
"""

## 3. Maintaining Test Components

This section deals with strategies for keeping test components up-to-date, easy to understand, and performant, along with addressing common anti-patterns.

### 3.1. Test Component Documentation

* **Standard:** Document your test helper functions, custom matchers, and test factories using JSDoc or TSDoc.
* **Why:** Improves the understandability of your test code and makes it easier for other developers (and your future self) to use and maintain.
* **Do This:** Add comments explaining the purpose, usage, and parameters of your test components.
* **Don't Do This:** Leave your test components undocumented.

"""typescript
/**
 * Creates a mock user object.
 *
 * @param {object} overrides - Optional properties to override the default values.
 * @returns {object} A mock user object.
 */
const createMockUser = (overrides = {}) => ({
  id: '123',
  name: 'Test User',
  email: 'test@example.com',
  ...overrides,
});
"""

### 3.2. Consistent Naming Conventions

* **Standard:** Use consistent and descriptive names for your test components.
* **Why:** Makes your test code easier to read and understand.
* **Do This:** Use prefixes like "mock", "stub", or "fake" to indicate the purpose of your test components.
* **Don't Do This:** Use ambiguous or inconsistent names.

"""typescript
import { describe, it, vi } from 'vitest';

// Good: Consistent Naming
const mockApiService = () => ({
  getData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
});

describe('MyComponent', () => {
  it('should fetch data correctly', async () => {
    const api = mockApiService(); // The "mock" prefix makes the purpose clear
    // ...rest of test
  });
});

// Bad: Inconsistent Naming
const apiService = () => ({
  getData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
});

describe('MyComponent', () => {
  it('should fetch data correctly', async () => {
    const api = apiService(); // Naming does not make it clear this is a mock
    // ...rest of test
  });
});
"""

### 3.3. Avoiding Over-Abstraction

* **Standard:** Avoid creating overly complex or abstract test components that are difficult to understand and use.
* **Why:** Simplicity and clarity are key in testing. Over-abstraction can make tests harder to debug and maintain.
* **Do This:** Keep your test components focused and easy to understand. If a component becomes too complex, consider breaking it down into smaller parts.
* **Don't Do This:** Create deeply nested inheritance hierarchies or overly generic test components.

### 3.4. Component Updates with Refactoring

* **Standard:** Regularly review and refactor your test components to keep them aligned with the latest best practices and the evolving codebase.
* **Why:** Prevents test components from becoming outdated or brittle, ensuring they remain effective and maintainable.
* **Do This:** Schedule regular code reviews and refactoring sessions specifically for your test codebase. Update your components as Vitest itself releases updates.
* **Don't Do This:** Neglect your test codebase and allow it to stagnate.

## 4. Advanced Component Design Patterns in Vitest

This section covers more advanced patterns for creating test components.

### 4.1. Dependency Injection for Testability

* **Standard:** Design components to facilitate dependency injection, making it easier to mock or stub dependencies during testing.
* **Why:** Allows you to isolate the unit under test and control its dependencies, creating more reliable and focused unit tests.
* **Do This:** Pass dependencies as arguments to your component's constructor or functions. Use Vitest's mocking capabilities (e.g., "vi.mock", "vi.spyOn") to replace dependencies with mock implementations.
* **Don't Do This:** Hardcode dependencies within your components, making them difficult to test in isolation.

"""typescript
// Good: Dependency Injection
import { describe, it, expect, vi } from 'vitest';

const fetchData = async (apiClient) => {
  const response = await apiClient.getData();
  return response.data;
};

describe('fetchData', () => {
  it('should fetch data correctly', async () => {
    const mockApiClient = {
      getData: vi.fn().mockResolvedValue({ data: [{ id: 1, name: 'Item 1' }] }),
    };
    const data = await fetchData(mockApiClient);
    expect(data).toEqual([{ id: 1, name: 'Item 1' }]);
    expect(mockApiClient.getData).toHaveBeenCalled();
  });
});

// Bad: Hardcoded Dependency, making testing difficult.
const fetchDataBad = async () => {
  const apiClient = {
    // Real implementation
    getData: async () => {
      return { data: [{ id: 1, name: 'Item 1' }] };
    },
  };
  const response = await apiClient.getData();
  return response.data;
};

describe('fetchDataBad', () => {
  // Hard to isolate API calls for testing
  it.skip('should fetch data correctly (Hard to test)', async () => {
    const data = await fetchDataBad();
    expect(data).toEqual([{ id: 1, name: 'Item 1' }]);
  });
});
"""

### 4.2. State Management Patterns (e.g., Redux, Vuex, Pinia)

* **Standard:** When testing components that interact with state management libraries, create dedicated test components to manage the state and mock store interactions.
* **Why:** Simplifies testing complex stateful components and ensures that state transitions are handled correctly.
* **Do This:** Create mock store instances or specialized test reducers to isolate the component under test. Use libraries like "@vue/test-utils" or "@reduxjs/toolkit" to simplify state management testing.
* **Don't Do This:** Directly manipulate the global store within your component tests.

### 4.3. Mocking Strategies with "vi"

* **Standard:** Employ the "vi" object (from Vitest) judiciously for creating mocks, stubs, and spies to isolate units of code and control their behavior during testing.
* **Why:** Facilitates focused testing of individual components by replacing real dependencies with controlled substitutes, ensuring predictable test outcomes.
* **Do This:** Utilize "vi.fn()" to create mock functions, "vi.spyOn()" to observe method calls on existing objects, and "vi.mock()" to replace entire modules with mock implementations.
* **Don't Do This:** Overuse mocking, which can lead to brittle tests that are tightly coupled to implementation details. Strive for a balance between isolation and integration testing.

"""typescript
// Mocking Strategies with vi
import { describe, it, expect, vi } from 'vitest';

describe('MyComponent', () => {
  it('should call the API service on mount', () => {
    const apiService = {
      fetchData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
    };

    // Simulate a component using the API service.
    const component = {
      mounted: () => {
        apiService.fetchData();
      },
    };

    component.mounted();
    expect(apiService.fetchData).toHaveBeenCalled();
  });

  it('should update the component state with the fetched data', async () => {
    const apiService = {
      fetchData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
    };

    // Simulate a component using the API service.
    const component = {
      data: null,
      mounted: async () => {
        component.data = await apiService.fetchData();
      },
    };

    await component.mounted();
    expect(component.data).toEqual([{ id: 1, name: 'Item 1' }]);
  });
});
"""

By following these component design standards, you can create a robust, maintainable, and efficient test suite with Vitest. This will lead to higher-quality software and a more productive development process.
# State Management Standards for Vitest

This document outlines the coding standards for state management when writing tests with Vitest, the next-gen testing framework powered by Vite. It aims to guide developers in creating robust, maintainable, and performant tests by adopting modern best practices for managing application state, data flow, and reactivity within the testing context.

## 1. Introduction to State Management in Vitest

While Vitest primarily focuses on unit and integration testing, understanding state management principles is crucial for creating effective and reliable tests, especially when dealing with complex components or application logic. State might refer to various entities: component internal state, data fetched from external sources, or the overall application state managed by tools like Vuex, Redux, or Pinia.

### 1.1. Why State Management Matters in Testing

* **Reproducibility:** Clearly defined state makes tests reproducible, ensuring that failures always indicate real issues.
* **Isolation:** Proper state isolation prevents tests from interfering with each other, avoiding flaky test suites.
* **Maintainability:** Well-structured state management simplifies test setup and teardown, making tests easier to understand and maintain.
* **Accuracy:** Accurate state representation guarantees that tests accurately reflect the actual application behavior.

### 1.2. Scope of these Standards

These standards cover:

* Approaches for setting up and managing state within tests.
* Strategies for isolating state between tests.
* Best practices for mocking and stubbing external dependencies that influence state.
* Specific considerations for testing reactive state with frameworks like Vue, React, and Svelte.

## 2. General Principles for State Management in Vitest

### 2.1. Declarative vs. Imperative State Setup

* **Do This:** Prefer declarative state setup using "beforeEach" or "beforeAll" hooks to define the initial state for each test or test suite.
"""typescript import { beforeEach, describe, expect, it } from 'vitest'; describe('Counter component', () => { let counter: { value: number }; beforeEach(() => { counter = { value: 0 }; // Declarative state setup }); it('should increment the counter value', () => { counter.value++; expect(counter.value).toBe(1); }); }); """ * **Don't Do This:** Avoid directly modifying the state within the test body unless it's the action being tested. This makes the test harder to read and understand as the initial state becomes implicit. ### 2.2. Isolate State Between Tests * **Do This:** Use "beforeEach" to reset the state before each test, ensuring that tests do not interfere with each other. Consider using a factory function to create fresh state instances. """typescript import { beforeEach, describe, expect, it } from 'vitest'; // Factory function to create a new state object const createCounter = () => ({ value: 0 }); describe('Counter component', () => { let counter: { value: number }; beforeEach(() => { counter = createCounter(); // Creates a fresh counter object for each test }); it('should increment the counter value', () => { counter.value++; expect(counter.value).toBe(1); }); it('should not be affected by previous test', () => { expect(counter.value).toBe(0); // Reset to initial state }); }); """ * **Don't Do This:** Share mutable state directly between tests without resetting it. This can lead to unexpected test failures and makes debugging difficult. ### 2.3. Minimize Global State * **Do This:** Encapsulate the state as much as possible within the component or module being tested. Use dependency injection to provide state dependencies. * **Don't Do This:** Rely heavily on global variables or shared mutable objects to manage state. This introduces tight coupling and makes tests harder to isolate. ### 2.4. 
Use Mocks and Stubs for External Dependencies * **Do This:** Use "vi.mock" or manual mocks to isolate the component being tested from external dependencies (e.g., databases, APIs). This allows you to control the state returned by the dependencies and focus on testing the component's logic. """typescript // api.ts const fetchData = async () => { const response = await fetch('/api/data'); return await response.json(); }; export default fetchData; // component.test.ts import { describe, expect, it, vi } from 'vitest'; import fetchData from '../api'; // Import the original module import MyComponent from '../MyComponent.vue'; //Example using Vue - framework agnostic principle vi.mock('../api', () => ({ //Mock the whole module default: vi.fn(() => Promise.resolve({ data: 'mocked data' })), })); describe('MyComponent', () => { it('should display mocked data', async () => { const wrapper = mount(MyComponent); await vi.waitFor(() => { // Adjust timeout as needed expect(wrapper.text()).toContain('mocked data'); }); expect(fetchData).toHaveBeenCalled(); }); }); """ * **Don't Do This:** Make real API calls or database queries during tests. This can make tests slow, unreliable, and dependent on external factors. Also avoid tightly coupling tests to a real database or API. ### 2.5. Understanding "vi.mock" vs. "vi.spyOn" * **"vi.mock":** Replaces an entire module (or specific functions within that module) with a mock implementation. Useful when you need to completely control the behavior of a dependency. Importantly, Vitest hoists the mock to the top of the scope, meaning the mock is defined *before* the actual import. This allows you to mock even before the component is imported. * **"vi.spyOn":** Wraps an existing function (either a function on an object or a directly imported function) and allows you to track its calls, arguments, and return values *without* replacing the original implementation. 
Useful when you want to assert that a function was called with specific arguments, or a certain number of times. However, "vi.spyOn" works on existing objects/functions and can only be used *after* the object is imported and the function exists.

"""typescript
import { describe, expect, it, vi } from 'vitest';

const myModule = {
  myFunction: (x: number) => x * 2,
};

describe('myFunction', () => {
  it('should call the function with correct arguments', () => {
    const spy = vi.spyOn(myModule, 'myFunction');
    myModule.myFunction(5);
    expect(spy).toHaveBeenCalledWith(5);
  });

  it('should return the correct value', () => {
    const spy = vi.spyOn(myModule, 'myFunction');
    spy.mockReturnValue(100);
    expect(myModule.myFunction(5)).toBe(100);
  });
});
"""

### 2.6. Async State and "vi.waitFor"

* **Do This:** When dealing with asynchronous state updates (e.g., fetching data from an API), use "vi.waitFor" to ensure that the state has been updated before making assertions. This is crucial to prevent race conditions and flaky tests.

"""typescript
import { describe, expect, it, vi } from 'vitest';

describe('Async Component', () => {
  it('should update state after async operation', async () => {
    const state: { data: string | null } = { data: null };

    const fetchData = async () => {
      return new Promise((resolve) => {
        setTimeout(() => {
          state.data = 'Async Data';
          resolve(state.data);
        }, 100);
      });
    };

    await fetchData();
    await vi.waitFor(() => {
      expect(state.data).toBe('Async Data'); // Assert that the state has been updated
    });
  });
});
"""

* **Don't Do This:** Rely on fixed timeouts to wait for asynchronous operations to complete. This can lead to flaky tests if the operation takes longer than expected. When mocking async functions, resolve them explicitly with "mockResolvedValue" instead of waiting on real timers.

### 2.7. Testing Reactivity with Testing Frameworks

* **Do This:** Leverage framework-specific testing utilities to properly observe and interact with reactive state. For Vue, use "vue-test-utils"; for React, use "@testing-library/react"; and so on. These utilities provide methods for triggering state changes and waiting for updates to propagate.

**Example (Vue with vue-test-utils):**

"""typescript
import { describe, expect, it } from 'vitest';
import { mount } from '@vue/test-utils';
import { ref } from 'vue';

const MyComponent = {
  template: '<div>{{ count }}</div>',
  setup() {
    const count = ref(0);
    return { count };
  },
};

describe('MyComponent', () => {
  it('should render the correct count', async () => {
    const wrapper = mount(MyComponent);
    expect(wrapper.text()).toContain('0');

    // Simulate interaction with the component (e.g., by emitting an event)
    wrapper.vm.count = 5; // Direct state change in a simple example - typically you'd trigger an event
    await wrapper.vm.$nextTick(); // Wait for DOM update
    expect(wrapper.text()).toContain('5');
  });
});
"""

* **Don't Do This:** Directly manipulate internal component state without using the testing framework's utilities. This can bypass reactivity mechanisms and lead to incorrect test results. It also produces fragile tests that depend on the component's internal implementation.

### 2.8. Immutability Where Possible

* **Do This:** Favor immutable data structures and state management techniques where applicable to avoid unintended side effects and simplify reasoning about state changes. Libraries like Immer can be helpful for working with immutable data.
* **Don't Do This:** Mutate state directly without considering the consequences for other parts of the application or tests.

### 2.9. Testing State Transitions

* **Do This:** Explicitly test all possible state transitions in a component or module. Use "describe" blocks to group tests related to specific state transitions.
"""typescript import { beforeEach, describe, expect, it } from 'vitest'; describe('Component with State Transitions', () => { let componentState: { isLoading: boolean; data: any; error: any }; beforeEach(() => { componentState = { isLoading: false, data: null, error: null }; }); describe('Initial State', () => { it('should start in the loading state', () => { expect(componentState.isLoading).toBe(false); expect(componentState.data).toBeNull(); expect(componentState.error).toBeNull(); }); }); describe('Loading State', () => { it('should set isLoading to true when fetching data', () => { componentState.isLoading = true; expect(componentState.isLoading).toBe(true); }); }); describe('Success State', () => { it('should set data when data is successfully fetched', () => { const mockData = { name: 'Test Data' }; componentState.data = mockData; componentState.isLoading = false; expect(componentState.data).toEqual(mockData); expect(componentState.isLoading).toBe(false); }); }); describe('Error State', () => { it('should set error when fetching data fails', () => { const mockError = new Error('Failed to fetch data'); componentState.error = mockError; componentState.isLoading = false; expect(componentState.error).toEqual(mockError); expect(componentState.isLoading).toBe(false); }); }); }); """ * **Don't Do This:** Assume that state transitions will work correctly without explicit tests. Missing tests for state transitions are frequently causes of bugs. ## 3. Testing Specific State Management Patterns ### 3.1. Testing Vuex/Pinia Stores * **Do This:** Mock the store's actions, mutations, and getters to isolate the component being tested. Use "createLocalVue" (for Vuex) to create a local Vue instance with the mocked store. For Pinia, mock the store directly using "vi.mock". 
**Example (Pinia with Vitest):** """typescript import {describe, expect, it, vi} from 'vitest'; import {useMyStore} from '../src/stores/myStore'; //Replace with your actual path vi.mock('../src/stores/myStore', () => { return { useMyStore: vi.fn(() => ({ count: 10, increment: vi.fn(), doubleCount: vi.fn().mockReturnValue(20), })), }; }); describe('Component using Pinia store', () => { it('should display the count from the store', () => { const store = useMyStore(); expect(store.count).toBe(10); }); it('should call the increment action when a button is clicked', () => { const store = useMyStore(); //Simulate user interaction or similar that is expected to call store.increment() // ... expect(store.increment).toHaveBeenCalled(); }); }); """ * **Don't Do This:** Directly interact with the real store during component tests. This can make tests slow, and introduces dependencies between tests, increases complexity. Test the store in a separate test file dedicated to store logic. ### 3.2. Testing Redux Reducers and Actions * **Do This:** Test reducers in isolation by providing them with different actions and verifying that they produce the expected state changes. Test actions by dispatching them and asserting on the side effects (e.g., API calls). """typescript import { describe, expect, it } from 'vitest'; import reducer from './reducer'; import { increment, decrement } from './actions'; describe('Counter Reducer', () => { it('should return the initial state', () => { expect(reducer(undefined, {})).toEqual({ value: 0 }); }); it('should handle INCREMENT', () => { expect(reducer({ value: 0 }, increment())).toEqual({ value: 1 }); }); it('should handle DECREMENT', () => { expect(reducer({ value: 1 }, decrement())).toEqual({ value: 0 }); }); }); """ * **Don't Do This:** Test reducers and actions together in a complex integration test. This makes it harder to isolate the cause of failures. 
### 3.3. Testing React Context

* **Do This:** Create custom test providers to mock the context values and test components within a controlled context. Use "@testing-library/react" to render and interact with components.

"""typescript
import { render, screen, fireEvent } from '@testing-library/react';
import { describe, expect, it } from 'vitest';
import React, { createContext, useContext, useState } from 'react';

// Context setup
const CounterContext = createContext({
  count: 0,
  setCount: (value: number) => {},
});

const useCounter = () => useContext(CounterContext);

const CounterProvider = ({
  children,
  initialCount = 0,
}: {
  children: React.ReactNode;
  initialCount?: number;
}) => {
  const [count, setCount] = useState(initialCount);
  return (
    <CounterContext.Provider value={{ count, setCount }}>
      {children}
    </CounterContext.Provider>
  );
};

// Component
const CounterComponent = () => {
  const { count, setCount } = useCounter();
  return (
    <div>
      <span>{count}</span>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
};

describe('Counter Component with Context', () => {
  it('should display the initial count from context', () => {
    render(
      <CounterProvider initialCount={5}>
        <CounterComponent />
      </CounterProvider>
    );
    expect(screen.getByText('5')).toBeInTheDocument();
  });

  it('should increment the count when the button is clicked', () => {
    render(
      <CounterProvider initialCount={0}>
        <CounterComponent />
      </CounterProvider>
    );
    const incrementButton = screen.getByText('Increment');
    fireEvent.click(incrementButton);
    expect(screen.getByText('1')).toBeInTheDocument();
  });

  it('should use a custom context value', () => {
    // Custom Provider for testing
    const TestProvider = ({ children }: { children: React.ReactNode }) => (
      <CounterContext.Provider value={{ count: 100, setCount: () => {} }}>
        {children}
      </CounterContext.Provider>
    );

    render(
      <TestProvider>
        <CounterComponent />
      </TestProvider>
    );
    expect(screen.getByText('100')).toBeInTheDocument();
  });
});
"""

* **Don't Do This:** Rely on the default context values during tests, as they might not accurately reflect the component's behavior in different scenarios.

## 4. Performance Considerations

### 4.1. Optimize State Setup

* **Do This:** Minimize the amount of state that needs to be set up for each test. Only set up the state that is relevant to the specific test case. Use lazy initialization where possible.
* **Don't Do This:** Create large, complex state objects unnecessarily.

### 4.2. Avoid Unnecessary State Updates

* **Do This:** Only update the state when necessary. Avoid unnecessary state updates that can trigger re-renders or other performance-intensive operations.
* **Don't Do This:** Continuously update state in a loop or in response to every event.

## 5. Security Considerations

### 5.1. Secure State Storage

* **Do This:** If you are testing code that deals with sensitive data (e.g., passwords, API keys), ensure that the data is stored securely and is not exposed in test logs or reports.
* **Don't Do This:** Store sensitive data in plain text or commit it to version control. Use environment variables or dedicated secrets management tools.

### 5.2. Input Validation

* **Do This:** Test state updates with invalid or malicious input to ensure that the application handles errors gracefully and does not become vulnerable to security exploits.
* **Don't Do This:** Assume that all input will be valid.

## 6. Code Style and Formatting

* Follow established code style guidelines (e.g., Airbnb, Google) for whitespace, indentation, and naming conventions.
* Use descriptive variable names to clearly indicate the purpose of the state being managed.
* Add comments to explain complex state transitions or mocking strategies.

## 7. Conclusion

By following these coding standards, developers can write more robust, maintainable, and performant tests for state management in Vitest. Adhering to these best practices will improve the overall quality of the codebase and reduce the risk of bugs and security vulnerabilities.
Remember to always use the latest version of the stack and its libraries to maximize performance, security, and compatibility with the latest features. This document is intended as a living document, and it will be updated as new best practices and technologies emerge.
# Testing Methodologies Standards for Vitest

This document outlines the coding standards for testing methodologies using Vitest. These standards aim to ensure consistent, maintainable, performant, and secure tests across our projects. This document covers strategies for unit, integration, and end-to-end tests with Vitest.

## 1. Testing Pyramid & Levels of Testing

### 1.1. Standard: Adhere to the Testing Pyramid

* **Do This:** Prioritize unit tests. Strive for a higher number of unit tests compared to integration or end-to-end tests. Reduce the number of end-to-end tests.
* **Don't Do This:** Create a "top-heavy" testing pyramid with a large number of slow, brittle end-to-end tests.
* **Why:** Unit tests are faster and more granular, leading to quicker feedback and easier debugging. End-to-end tests are slower, less specific, and more likely to break due to UI changes or environment issues.

### 1.2. Unit Tests

* **Standard:** Unit tests should focus on testing a single unit of code in isolation (e.g., a function, a class method).
* **Do This:** Mock dependencies to isolate the unit under test.
* **Don't Do This:** Perform database operations, network requests, or file system accesses directly in unit tests.
* **Why:** Isolating units makes tests faster and more reliable. Dependencies can introduce external factors that make tests flaky or slow.

"""typescript
// Example: Unit test for a function that formats a date
import { formatDate } from '../src/utils';
import { afterEach, describe, expect, it, vi } from 'vitest';

describe('formatDate', () => {
  afterEach(() => {
    vi.useRealTimers(); // Restore the real clock after each test
  });

  it('should format a date correctly', () => {
    // Freeze the clock so "new Date()" always returns the same date
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2024-01-01T12:00:00.000Z'));
    expect(formatDate(new Date())).toBe('2024-01-01');
  });

  it('should handle different date objects', () => {
    const date = new Date('2024-02-15T08:30:00.000Z');
    expect(formatDate(date)).toBe('2024-02-15');
  });
});
"""

### 1.3. Integration Tests

* **Standard:** Integration tests should verify the interaction between different components or modules.
* **Do This:** Make API calls within your application and mock the external service responses.
* **Don't Do This:** Test intricate business logic which should go into unit tests if possible.
* **Why:** Validates that modules work together as expected, ensuring the larger system functions correctly.

"""typescript
// Example: Integration test for an API client
import { describe, expect, it, vi } from 'vitest';
import { fetchUserData } from '../src/apiClient';

global.fetch = vi.fn(() =>
  Promise.resolve({
    json: () => Promise.resolve({ id: 1, name: 'John Doe' }),
  })
) as any;

describe('fetchUserData', () => {
  it('should fetch user data from the API', async () => {
    const userData = await fetchUserData(1);
    expect(userData).toEqual({ id: 1, name: 'John Doe' });
    expect(fetch).toHaveBeenCalledWith('/api/users/1');
  });

  it('should handle API errors', async () => {
    (fetch as any).mockImplementationOnce(() => Promise.reject('API Error'));
    await expect(fetchUserData(1)).rejects.toEqual('API Error');
  });
});
"""

### 1.4. End-to-End (E2E) Tests

* **Standard:** E2E tests should simulate real user interactions to validate the entire application flow.
* **Do This:** Use tools like Playwright or Cypress for browser automation. Test critical user journeys.
* **Don't Do This:** Use slow, brittle E2E tests for verifying logic that can be easily unit-tested.
* **Why:** E2E tests provide confidence that the entire system works correctly from the user's perspective.
"""typescript // Example: Playwright E2E Test import { test, expect } from '@playwright/test'; test('Homepage has title', async ({ page }) => { await page.goto('http://localhost:3000/'); await expect(page).toHaveTitle("My App"); }); test('Navigation to about page', async ({ page }) => { await page.goto('http://localhost:3000/'); await page.getByRole('link', { name: 'About' }).click(); await expect(page).toHaveURL(/.*about/); }); """ ## 2. Test Structure and Organization ### 2.1. Standard: Arrange, Act, Assert (AAA) Pattern * **Do This:** Structure your tests into three distinct parts: Arrange (setup the test environment), Act (execute the code being tested), and Assert (verify the expected outcome). * **Don't Do This:** Mix setup, execution, and verification logic within a single block of code. * **Why:** AAA makes tests more readable, maintainable, and easier to understand. """typescript // Example: AAA pattern import { describe, expect, it } from 'vitest'; import { add } from '../src/math'; describe('add', () => { it('should add two numbers correctly', () => { // Arrange const a = 5; const b = 3; // Act const result = add(a, b); // Assert expect(result).toBe(8); }); }); """ ### 2.2. Standard: Test File Structure * **Do This:** Create a "test" directory mirroring your "src" directory for test files. Use "*.test.ts" or "*.spec.ts" naming convention. * **Don't Do This:** Place test files directly alongside source files. * **Why:** A consistent file structure makes it easier to locate and maintain tests. """ src/ components/ Button.tsx utils/ formatDate.ts test/ components/ Button.test.tsx utils/ formatDate.test.ts """ ### 2.3. Standard: Descriptive Test Names * **Do This:** Write descriptive test names that clearly explain what the test is verifying. Follow a convention like "should [verb] [expected result] when [scenario]". * **Don't Do This:** Use generic, unclear, or vague test names. 
* **Why:** Clear test names facilitate debugging and give a good overview of the tested functionality. """typescript // Good: it('should return "Hello, World!" when no name is provided', () => { /* ... */ }); // Bad: it('test', () => { /* ... */ }); """ ### 2.4. Standard: Grouping Tests with 'describe' * **Do This:** Use the "describe" block to group related tests for clarity and organization. * **Don't Do This:** Create a single, monolithic test file with no logical grouping. * **Why:** "describe" blocks improve test readability and help identify the area of the code being tested. """typescript import { describe, expect, it } from 'vitest'; import { calculateDiscount } from '../src/utils'; describe('calculateDiscount', () => { it('should apply a 10% discount for orders over $100', () => { expect(calculateDiscount(150)).toBe(15); }); it('should not apply a discount for orders under $100', () => { expect(calculateDiscount(50)).toBe(0); }); }); """ ## 3. Mocking and Stubbing ### 3.1. Standard: Minimize Mocking * **Do This:** Use mocks only when necessary to isolate the unit under test. Prefer real dependencies when possible. * **Don't Do This:** Mock everything by default. Over-mocking can blur the line between implementation change and test update. * **Why:** Reduces the risk of false positives and increases confidence in the tests' accuracy. ### 3.2. Standard: Use Vitest's Built-in Mocking * **Do This:** Use "vi.mock", "vi.spyOn", and "vi.fn" for mocking and stubbing in Vitest. * **Don't Do This:** Use external mocking libraries that may not be compatible with Vitest. * **Why:** Vitest's built-in mocking is well-integrated and performant. 
"""typescript // Example: Mocking a module function import { describe, expect, it, vi } from 'vitest'; import { fetchData } from '../src/dataService'; import { processData } from '../src/processor'; vi.mock('../src/dataService', () => ({ fetchData: vi.fn(() => Promise.resolve([{ id: 1, name: 'Test Data' }])), })); describe('processData', () => { it('should process data from the dataService', async () => { const result = await processData(); expect(result).toEqual([{ id: 1, name: 'Processed Test Data' }]); }); }); """ ### 3.3. Standard: Restore Mocks After Each Test * **Do This:** Use "vi.restoreAllMocks()" (or "afterEach(vi.restoreAllMocks())") to reset mocks after each test. * **Don't Do This:** Leave mocks active between tests, which can lead to unexpected behavior and test pollution. * **Why:** Prevents interference between tests and ensures reliable results. """typescript import { describe, expect, it, vi, afterEach } from 'vitest'; import { externalService } from '../src/externalService'; import { myModule } from '../src/myModule'; describe('myModule', () => { afterEach(() => { vi.restoreAllMocks(); }); it('should call externalService correctly', () => { const spy = vi.spyOn(externalService, 'doSomething'); myModule.run(); expect(spy).toHaveBeenCalled(); }); it('should handle errors from externalService', async () => { vi.spyOn(externalService, 'doSomething').mockRejectedValue(new Error('Service Unavailable')); await expect(myModule.run()).rejects.toThrowError('Service Unavailable'); }); }); """ ### 3.4 Standard: Mocking In-Source Testing * **Do This:** Use "import.meta.vitest" inside of the scope you want to test. Run tests directly within component or module. * **Why:** Tests share the same scope making them able to test against private states. """typescript // src/index.ts export const add = (a: number, b: number) => a + b if (import.meta.vitest) { const { it, expect } = import.meta.vitest it('add', () => { expect(add(1, 2)).eq(3) }) } """ ## 4. 
Asynchronous Testing ### 4.1. Standard: Use "async/await" for Asynchronous Operations * **Do This:** Use "async" and "await" to handle asynchronous operations in your tests. * **Don't Do This:** Rely on callbacks or Promises without "async/await", which can make tests harder to read and debug. * **Why:** "async/await" makes asynchronous code look and behave more like synchronous code, improving readability and maintainability. """typescript // Example: Testing an asynchronous function import { describe, expect, it } from 'vitest'; import { fetchData } from '../src/apiClient'; describe('fetchData', () => { it('should fetch data successfully', async () => { const data = await fetchData('https://example.com/api/data'); expect(data).toBeDefined(); }); it('should handle errors when fetching data', async () => { try { await fetchData('https://example.com/api/error'); } catch (error: any) { expect(error.message).toBe('Failed to fetch data'); } }); }); """ ### 4.2. Standard: Handle Promises with "expect.resolves" and "expect.rejects" * **Do This:** Use "expect.resolves" to assert that a Promise resolves with a specific value, and "expect.rejects" to assert that a Promise rejects with a specific error. * **Don't Do This:** Use "try/catch" for successful asynchronous calls. * **Why:** "expect.resolves" and "expect.rejects" provide a more concise and readable way to test Promises. """typescript import { describe, expect, it } from 'vitest'; import { createUser } from '../src/userService'; describe('createUser', () => { it('should create a user successfully', async () => { await expect(createUser('john.doe@example.com')).resolves.toBe('user123'); }); it('should reject with an error if the email is invalid', async () => { await expect(createUser('invalid-email')).rejects.toThrowError('Invalid email format'); }); }); """ ### 4.3. 
Standard: Use Fake Timers for Time-Dependent Tests * **Do This:** Use "vi.useFakeTimers()" with "vi.advanceTimersByTime()" to control the passage of time in your tests. * **Don't Do This:** Rely on "setTimeout" or "setInterval" with real timers, which can make tests slow and unreliable. * **Why:** Fake timers make time-dependent tests faster, more deterministic, and easier to control. """typescript import { describe, expect, it, vi, beforeEach } from 'vitest'; import { delayedFunction } from '../src/utils'; describe('delayedFunction', () => { beforeEach(() => { vi.useFakeTimers(); }); it('should execute the callback after a delay', () => { let executed = false; delayedFunction(() => { executed = true; }, 1000); expect(executed).toBe(false); vi.advanceTimersByTime(1000); expect(executed).toBe(true); }); it('should execute the callback at least after a delay of x milliseconds', () => { const callback = vi.fn(); delayedFunction(callback, 1000); vi.advanceTimersByTime(999); expect(callback).not.toHaveBeenCalled(); // not called yet vi.advanceTimersByTime(1); // advance another 1 ms expect(callback).toHaveBeenCalled(); // now is called }) }); """ ## 5. Test Data Management ### 5.1. Standard: Use Test Data Factories or Fixtures * **Do This:** Create test data factories or fixtures to generate consistent and reusable test data. * **Don't Do This:** Hardcode test data directly in your tests, leading to duplication and maintenance issues. * **Why:** Test data factories make it easier to create complex test data structures and ensure consistency across tests. 
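A fixture can be as simple as a frozen base object that each test copies before use; a minimal sketch (the "baseUser" name is assumed for illustration, not part of any module above):

"""typescript
// A minimal fixture: a frozen base object that each test clones.
// "baseUser" is an illustrative name, not a real project export.
const baseUser = Object.freeze({
  id: 1,
  email: 'base@example.com',
  name: 'Base User',
});

// Tests spread the fixture to get an independent, optionally tweaked copy.
const adminUser = { ...baseUser, name: 'Admin', isAdmin: true };

console.log(adminUser.name); // the copy carries the override: "Admin"
console.log(baseUser.name);  // the fixture itself is untouched: "Base User"
"""

Freezing the base object turns accidental mutation of shared data into an immediate error rather than a flaky test.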
"""typescript // Example: Test data factory import { faker } from '@faker-js/faker'; export const createUser = (overrides = {}) => ({ id: faker.number.int(), email: faker.internet.email(), name: faker.person.fullName(), ...overrides, }); //Use it in your tests: import { describe, expect, it } from 'vitest'; import { createUser } from './factories'; describe('User', () => { it('should create a valid user', () => { const user = createUser(); expect(user).toHaveProperty('id'); expect(user).toHaveProperty('email'); }); it('should allow overriding properties', () => { const user = createUser({ name: 'Custom Name' }); expect(user.name).toBe('Custom Name'); }); }); """ ### 5.2. Standard: Avoid Sharing Test Data * **Do This:** Create new test data for each test case to avoid interference between tests. If you must, use a method like the "beforeEach" hook. * **Don't Do This:** Mutate shared test data, which can lead to unpredictable test results and flaky tests. * **Why:** Prevents test pollution and ensures that each test case is independent and reliable. ### 5.3. Standard: Seed Your Testing Database Before Tests * **Do This:** Seed database with a known state before any tests are run. * **Don't Do This:** Allow tests to create dependencies on each other's data and state. * **Why:** Ensures tests have a consistent, predictable, isolated environment. 
"""typescript // Example: Seeding database before running tests import { describe, expect, it, beforeAll, afterAll } from 'vitest'; import { seedDatabase, clearDatabase } from './db'; // your database setup file import { User } from '../src/models/User'; describe('User Model', () => { beforeAll(async () => { await seedDatabase(); // Seed the database with consistent test data }); afterAll(async () => { await clearDatabase(); // Clear the database after all tests are complete }); it('should create a user correctly', async () => { const newUser = await User.create({ name: 'Test User', email: 'test@example.com' }); expect(newUser.name).toBe('Test User'); }); it('should find a user by email', async () => { const user = await User.findOne({ where: { email: 'test@example.com' } }); expect(user).toBeDefined(); }); }); """ ## 6. Performance Testing ### 6.1. Standard: Use "performance.mark" and "performance.measure" for Performance Measurement * **Do This:** Utilize the "performance.mark" and "performance.measure" APIs to measure the execution time of critical code sections. * **Don't Do This:** Rely on manual time tracking or inaccurate timing methods. * **Why:** Provides precise performance metrics for optimizing code execution. """typescript import { describe, expect, it } from 'vitest'; import { expensiveFunction } from '../src/utils'; describe('expensiveFunction', () => { it('should execute within a reasonable time', () => { performance.mark('start'); expensiveFunction(); performance.mark('end'); const measure = performance.measure('expensiveFunction', 'start', 'end'); expect(measure.duration).toBeLessThan(100); // Milliseconds }); }); """ ### 6.2. Standard: Threshold-Based Assertions * **Do This:** Set thresholds or performance budgets by setting "expect(measure.duration).toBeLessThan(100)". * **Don't Do This:** Test performance in an unmeasurable relative "it's fast" way. 
* **Why:** Prevents performance regressions by ensuring code executes within acceptable time limits.

## 7. Security Considerations

### 7.1. Standard: Avoid Hardcoding Secrets in Tests

* **Do This:** Use environment variables or configuration files to store sensitive information used in tests.
* **Don't Do This:** Hardcode API keys, passwords, or other secrets directly in your test code.
* **Why:** Protects sensitive information from exposure and reduces the risk of security breaches.

### 7.2. Standard: Sanitize Test Inputs

* **Do This:** Sanitize test inputs to prevent injection attacks or other security vulnerabilities.
* **Don't Do This:** Use unsanitized user inputs directly in your tests, which can introduce security risks.
* **Why:** Helps identify potential security vulnerabilities in your code and prevent real-world attacks. Additional safety can come from sanitizing the return values and arguments of "vi.fn()" mocks.

### 7.3. Standard: Mock Authentication and Authorization

* **Do This:** Mock authentication and authorization services during tests to avoid making real external calls.
* **Why:** Keeps real credentials out of tests and allows specific permission sets to be tested.

"""typescript
// Example: Mock Authenticated User
import { describe, expect, it, vi } from 'vitest';
import { getUserProfile } from '../src/authService';
import { getRestrictedData } from '../src/dataService';

vi.mock('../src/authService', () => ({
  getUserProfile: vi.fn(() => ({ id: 'mocked-user', isAdmin: true })),
}));

describe('accessControlTesting', () => {
  it('should allow access for admin users', async () => {
    const profile = await getUserProfile();
    const data = await getRestrictedData(profile.id);
    expect(data).toBeDefined();
  });
});
"""

## 8. Continuous Integration (CI)

### 8.1. Standard: Run Tests on Every Commit

* **Do This:** Configure your CI/CD pipeline to automatically run all tests on every commit or pull request.
* **Don't Do This:** Only run tests manually or on a schedule, which can delay feedback and increase the risk of regressions.
* **Why:** Provides immediate feedback on code changes and prevents regressions from making their way into production.

### 8.2. Standard: Use a Dedicated Test Environment

* **Do This:** Run tests in a dedicated environment that is isolated from other processes and has a known configuration.
* **Don't Do This:** Run tests in a shared environment or on your local machine, which can introduce inconsistencies and dependencies.
* **Why:** Ensures consistent and reliable test results, regardless of the environment.

### 8.3. Standard: Utilize Vitest CLI Flags in CI

* **Do This:** Use "--run" and "--reporter=junit" so tests execute once, non-interactively, in the CI process. The JUnit report provides a machine-readable record of results that CI systems can display and archive.

## 9. Code Coverage

### 9.1. Standard: Aim for High Code Coverage

* **Do This:** Strive for high code coverage (e.g., 80% or higher) to ensure that most of your code is being tested.
* **Don't Do This:** Focus solely on code coverage metrics without considering the quality and effectiveness of your tests.
* **Why:** Provides a measure of how much of your code is being tested and helps identify areas that may need more coverage.

### 9.2. Standard: Use the "--coverage" Flag for Coverage Reports

* **Do This:** Use the "--coverage" flag in Vitest to generate code coverage reports.
* **Don't Do This:** Rely on external coverage tools that may not be compatible with Vitest.
* **Why:** Provides detailed information about code coverage, including line, branch, and function coverage.

## 10. Test Doubles

Test doubles mimic real components for testing purposes. Common types include:

* **Stubs:** Provide predefined responses to calls.
* **Mocks:** Verify interactions and behaviors.
* **Spies:** Track how a function or method is used.
* **Fakes:** Simplified implementations of a component.
* **Dummies:** Pass placeholders when a value is needed but not used.

### 10.1. Standard: Use Doubles to Verify Proper Code Execution

* **Do This:** Mock components or functions and verify those mocks were called as expected.
* **Why:** Supports test-driven development when mocking and helps isolate code.

"""typescript
// Example: mock axios and check that it is called with the correct URL
import { expect, it, vi } from 'vitest';
import axios from 'axios';
import { fetchData } from '../src/apiFunctions';

vi.mock('axios');

it('fetches data from the correct URL', async () => {
  const mockAxios = vi.mocked(axios);
  mockAxios.get.mockResolvedValue({ data: { message: 'Success!' } });

  const result = await fetchData('test-url');

  expect(mockAxios.get).toHaveBeenCalledWith('test-url');
  expect(result).toEqual({ message: 'Success!' });
});
"""
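The stub and spy doubles from the list above can be sketched with the same built-in helpers; a minimal example (the "priceService" and "checkout" names are assumed for illustration, not part of the project above):

"""typescript
import { expect, it, vi } from 'vitest';

// Hypothetical collaborator, assumed for illustration only.
const priceService = {
  getPrice: (_sku: string): number => {
    throw new Error('network not available in tests');
  },
};

// Code under test, inlined for the sketch.
const checkout = (sku: string, qty: number) => priceService.getPrice(sku) * qty;

it('stubs the dependency and spies on the call', () => {
  // Stub: a predefined response; Spy: a record of how it was called.
  const spy = vi.spyOn(priceService, 'getPrice').mockReturnValue(5);

  expect(checkout('sku-1', 3)).toBe(15);
  expect(spy).toHaveBeenCalledWith('sku-1');

  spy.mockRestore();
});
"""

One "vi.spyOn(...).mockReturnValue(...)" call plays both roles here, which is why the terms are often used interchangeably in practice.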