# Component Design Standards for Vitest
This document outlines the coding standards for component design within the Vitest testing framework. It aims to guide developers in creating reusable, maintainable, and performant test components. These standards are tailored to the latest features and best practices of Vitest.
## 1. Componentization Principles in Vitest
This section establishes the fundamental principles underlying effective component design within the context of Vitest.
### 1.1. Abstraction and Encapsulation
* **Standard:** Encapsulate complex testing logic within reusable components. Abstract away implementation details to simplify test setup and assertions.
* **Why:** Reduces duplication, improves readability, and makes tests more resilient to changes in the underlying system.
* **Do This:** Create helper functions or custom matchers to handle common assertion patterns.
* **Don't Do This:** Repeat the same setup or assertion logic across multiple test files.
"""typescript
// Good: Abstraction with a custom matcher
import { expect, it } from 'vitest';
expect.extend({
toBeWithinRange(received, floor, ceiling) {
const pass = received >= floor && received <= ceiling;
if (pass) {
return {
message: () =>
"expected ${received} not to be within range ${floor} - ${ceiling}",
pass: true,
};
} else {
return {
message: () =>
"expected ${received} to be within range ${floor} - ${ceiling}",
pass: false,
};
}
},
});
// Use the custom matcher
it('number should be within range', () => {
expect(100).toBeWithinRange(90, 110);
});
// Bad: Repeating assertion logic
it('number should be within range (bad)', () => {
const number = 100;
const floor = 90;
const ceiling = 110;
expect(number >= floor && number <= ceiling).toBe(true);
});
"""
### 1.2. Single Responsibility Principle (SRP)
* **Standard:** Each test component or helper should have one, and only one, reason to change.
* **Why:** Promotes modularity, reduces complexity, and makes it easier to isolate and fix issues.
* **Do This:** Break down large, monolithic test functions into smaller, more focused components.
* **Don't Do This:** Create test functions that handle multiple aspects of a component's behavior.
"""typescript
// Good: SRP - Separate setup and assertion logic
import { describe, it, expect, beforeEach } from 'vitest';
describe('MyComponent', () => {
let component;
beforeEach(() => {
component = { /* ... */ }; // Simulate component initialization
});
it('should render correctly', () => {
expect(component).toBeDefined(); // Assertion logic
});
it('should handle user input', () => {
// Test input handling
expect(true).toBe(true);
});
});
// Bad: Violating SRP - Mixing setup and assertions
it('MyComponent - should render and handle input (BAD)', () => {
const component = { /* ... */ }; // Setup
expect(component).toBeDefined(); // Assertion
// Test input handling
expect(true).toBe(true);
});
"""
### 1.3. Composition over Inheritance
* **Standard:** Prefer composing test components from smaller, reusable parts rather than relying on deep inheritance hierarchies.
* **Why:** Improves flexibility, reduces coupling, and avoids the complexities of inheritance.
* **Do This:** Create utility functions that encapsulate specific testing behaviors and compose them as needed.
* **Don't Do This:** Create deeply nested inheritance structures for your test components.
"""typescript
// Good: Composition using utility functions
import { describe, it, expect } from 'vitest';
const createMockComponent = (props = {}) => ({ ...props, isMock: true });
const assertComponentRenders = (component) => expect(component).toBeDefined();
describe('ComposableComponent', () => {
it('should create a mock component', () => {
const mock = createMockComponent({ name: 'Test' });
expect(mock.name).toBe('Test');
expect(mock.isMock).toBe(true);
});
it('should assert if the component renders', () => {
const mockComponent = createMockComponent();
assertComponentRenders(mockComponent);
});
});
"""
## 2. Creating Reusable Components in Vitest
This section focuses on the specific techniques and best practices for creating modular, easily reusable components for your tests.
### 2.1. Custom Matchers
* **Standard:** Utilize custom matchers to encapsulate complex or domain-specific assertions.
* **Why:** Improves test readability, reduces code duplication, and provides a more expressive API for your tests.
* **Do This:** Implement custom matchers for common validation scenarios like date formatting or data structure validation.
* **Don't Do This:** Hardcode complex assertion logic directly within your tests.
"""typescript
// Custom Matcher Example
import { expect, it } from 'vitest';
expect.extend({
toBeValidDate(received) {
const pass = received instanceof Date && !isNaN(received.getTime());
if (pass) {
return {
message: () => `expected ${received} not to be a valid date`,
pass: true,
};
} else {
return {
message: () => `expected ${received} to be a valid date`,
pass: false,
};
}
},
});
it('should be a valid date', () => {
expect(new Date()).toBeValidDate();
});
"""
### 2.2. Test Factories
* **Standard:** Use test factories to create consistent and configurable test data.
* **Why:** Reduces boilerplate code, makes tests more maintainable, and allows for easy customization of test scenarios.
* **Do This:** Create factory functions for generating mock data or component props.
* **Don't Do This:** Hardcode test data directly within your tests.
"""typescript
// Test Factory Example
import { describe, it, expect } from 'vitest';
const createMockUser = (overrides = {}) => ({
id: '123',
name: 'Test User',
email: 'test@example.com',
...overrides,
});
describe('User Creation', () => {
it('should create a default user', () => {
const user = createMockUser();
expect(user.id).toBe('123');
expect(user.name).toBe('Test User');
});
it('should allow overrides', () => {
const user = createMockUser({ name: 'Custom Name' });
expect(user.name).toBe('Custom Name');
});
});
"""
### 2.3. Test Data Builders
* **Standard:** Employ test data builders for constructing complex, nested test data objects in a readable and maintainable way.
* **Why:** Simplifies the creation of intricate test data structures, promoting clarity and reducing the likelihood of errors.
* **Do This:** Implement builder classes or functions to manage the construction of complex test data scenarios.
* **Don't Do This:** Manually construct complex test data objects within test cases, leading to verbose and error-prone tests.
"""typescript
// Test Data Builder Example
import { describe, it, expect } from 'vitest';
class UserBuilder {
private id: string = '123';
private name: string = 'Test User';
private email: string = 'test@example.com';
private addresses: string[] = [];
withId(id: string): UserBuilder {
this.id = id;
return this;
}
withName(name: string): UserBuilder {
this.name = name;
return this;
}
withEmail(email: string): UserBuilder {
this.email = email;
return this;
}
withAddress(address: string): UserBuilder {
this.addresses.push(address);
return this;
}
build(): any {
return {
id: this.id,
name: this.name,
email: this.email,
addresses: this.addresses,
};
}
}
describe('UserBuilder', () => {
it('should build a user with default values', () => {
const user = new UserBuilder().build();
expect(user.id).toBe('123');
expect(user.name).toBe('Test User');
expect(user.email).toBe('test@example.com');
expect(user.addresses).toEqual([]);
});
it('should build a user with custom values', () => {
const user = new UserBuilder()
.withId('456')
.withName('Jane Doe')
.withEmail('jane.doe@example.com')
.withAddress('123 Main St')
.build();
expect(user.id).toBe('456');
expect(user.name).toBe('Jane Doe');
expect(user.email).toBe('jane.doe@example.com');
expect(user.addresses).toEqual(['123 Main St']);
});
});
"""
### 2.4. Page Objects (for UI Testing)
* **Standard:** Create page object classes to represent UI elements and interactions. Use them together with browser-automation libraries such as Playwright or Cypress when testing UI components.
* **Why:** Isolates UI-specific logic, making tests more resilient to UI changes and improving maintainability.
* **Do This:** Define page object classes that encapsulate locators, actions, and assertions for specific UI pages or components.
* **Don't Do This:** Directly interact with UI elements within your tests.
"""typescript
// Hypothetical Page Object Example (with Playwright)
import { expect, Page } from '@playwright/test';
class LoginPage {
private readonly page: Page;
private readonly usernameInput = '#username';
private readonly passwordInput = '#password';
private readonly loginButton = '#login-button';
constructor(page: Page) {
this.page = page;
}
async goto() {
await this.page.goto('/login');
}
async login(username: string, password: string) {
await this.page.fill(this.usernameInput, username);
await this.page.fill(this.passwordInput, password);
await this.page.click(this.loginButton);
}
async assertLoginSuccess() {
await expect(this.page.locator('#success-message')).toBeVisible();
}
}
export { LoginPage };
// In your test:
import { test } from '@playwright/test';
import { LoginPage } from './LoginPage';
test('Login should succeed', async ({ page }) => {
const loginPage = new LoginPage(page);
await loginPage.goto();
await loginPage.login('testuser', 'password');
await loginPage.assertLoginSuccess();
});
"""
## 3. Maintaining Test Components
This section deals with strategies for keeping test components up-to-date, easy-to-understand, and performant, along with addressing common anti-patterns.
### 3.1. Test Component Documentation
* **Standard:** Document your test helper functions, custom matchers, and test factories using JSDoc or TSDoc comments.
* **Why:** Improves the understandability of your test code and makes it easier for other developers (and your future self) to use and maintain.
* **Do This:** Add comments explaining the purpose, usage, and parameters of your test components.
* **Don't Do This:** Leave your test components undocumented.
"""typescript
/**
* Creates a mock user object.
*
* @param {object} overrides - Optional properties to override the default values.
* @returns {object} A mock user object.
*/
const createMockUser = (overrides = {}) => ({
id: '123',
name: 'Test User',
email: 'test@example.com',
...overrides,
});
"""
### 3.2. Consistent Naming Conventions
* **Standard:** Use consistent and descriptive names for your test components.
* **Why:** Makes your test code easier to read and understand.
* **Do This:** Use prefixes like "mock", "stub", or "fake" to indicate the purpose of your test components.
* **Don't Do This:** Use ambiguous or inconsistent names.
"""typescript
// Good: Consistent Naming
import { describe, it, vi } from 'vitest';
const mockApiService = () => ({
getData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
});
describe('MyComponent', () => {
it('should fetch data correctly', async () => {
const api = mockApiService(); // The "mock" prefix makes the purpose clear
// ...rest of test
});
});
// Bad: Inconsistent Naming
const apiService = () => ({
getData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
});
describe('MyComponent', () => {
it('should fetch data correctly', async () => {
const api = apiService(); // Name does not make it clear this is a mock
// ...rest of test
});
});
"""
### 3.3. Avoiding Over-Abstraction
* **Standard:** Avoid creating overly complex or abstract test components that are difficult to understand and use.
* **Why:** Simplicity and clarity are key in testing. Over-abstraction can make tests harder to debug and maintain.
* **Do This:** Keep your test components focused and easy to understand. If a component becomes too complex, consider breaking it down into smaller parts, as in the sketch below.
* **Don't Do This:** Create deeply nested inheritance hierarchies or overly generic test components.
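As a minimal sketch of the trade-off (the helper names "setupEverything", "createMockUser", and "renderUserCard" are hypothetical and exist only for illustration), an over-generic helper hides what a test actually exercises, while small focused helpers keep the intent visible:
"""typescript
import { describe, it, expect } from 'vitest';

// Over-abstracted (avoid): a single opaque helper driven by an options bag,
// e.g. setupEverything({ user: true, render: true, network: 'mock' }).

// Focused alternatives (prefer): each helper has exactly one job.
type User = { id: string; name: string };

const createMockUser = (overrides: Partial<User> = {}): User => ({
  id: '1',
  name: 'Test User',
  ...overrides,
});
const renderUserCard = (user: User) => ({ text: `Hello, ${user.name}` });

describe('UserCard', () => {
  it('greets the user by name', () => {
    const user = createMockUser({ name: 'Ada' });
    const card = renderUserCard(user);
    expect(card.text).toBe('Hello, Ada');
  });
});
"""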
### 3.4. Component Updates with Refactoring
* **Standard:** Regularly review and refactor your test components to keep them aligned with the latest best practices and the evolving codebase.
* **Why:** Prevents test components from becoming outdated or brittle, ensuring they remain effective and maintainable.
* **Do This:** Schedule regular code reviews and refactoring sessions specifically for your test codebase, and update your components as new Vitest versions introduce changes.
* **Don't Do This:** Neglect your test codebase and allow it to stagnate.
## 4. Advanced Component Design Patterns in Vitest
This section covers more advanced patterns for creating test components.
### 4.1. Dependency Injection for Testability
* **Standard:** Design components to facilitate dependency injection, making it easier to mock or stub dependencies during testing.
* **Why:** Allows you to isolate the unit under test and control its dependencies, creating more reliable and focused unit tests.
* **Do This:** Pass dependencies as arguments to your component's constructor or functions. Use Vitest's mocking capabilities (e.g., "vi.mock", "vi.spyOn") to replace dependencies with mock implementations.
* **Don't Do This:** Hardcode dependencies within your components, making them difficult to test in isolation.
"""typescript
// Good: Dependency Injection
import { describe, it, expect, vi } from 'vitest';
const fetchData = async (apiClient) => {
const response = await apiClient.getData();
return response.data;
};
describe('fetchData', () => {
it('should fetch data correctly', async () => {
const mockApiClient = {
getData: vi.fn().mockResolvedValue({ data: [{ id: 1, name: 'Item 1' }] }),
};
const data = await fetchData(mockApiClient);
expect(data).toEqual([{ id: 1, name: 'Item 1' }]);
expect(mockApiClient.getData).toHaveBeenCalled();
});
});
// Bad: Hardcoded Dependency, making testing difficult.
const fetchDataBad = async () => {
const apiClient = {
//Real Implementation
getData: async () => {
return { data: [{ id: 1, name: 'Item 1' }] };
}
};
const response = await apiClient.getData();
return response.data;
};
describe('fetchDataBad', () => {
//Hard to isolate API calls for testing
it.skip('should fetch data correctly (Hard to test)', async () => {
const data = await fetchDataBad();
expect(data).toEqual([{ id: 1, name: 'Item 1' }]);
});
});
"""
### 4.2. State Management Patterns (e.g., Redux, Vuex, Pinia)
* **Standard:** When testing components that interact with state management libraries, create dedicated test components to manage the state and mock store interactions.
* **Why:** Simplifies testing complex stateful components and ensures that state transitions are handled correctly.
* **Do This:** Create mock store instances or specialized test reducers to isolate the component under test. Use libraries like "@vue/test-utils" or "@reduxjs/toolkit" to simplify state management testing; a minimal store-isolation sketch follows this list.
* **Don't Do This:** Directly manipulate the global store within your component tests.
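A minimal store-isolation sketch, assuming Pinia (the "useCounterStore" store below is hypothetical and exists only for illustration):
"""typescript
import { describe, it, expect, beforeEach } from 'vitest';
import { setActivePinia, createPinia, defineStore } from 'pinia';

// Hypothetical store used only to demonstrate per-test isolation
const useCounterStore = defineStore('counter', {
  state: () => ({ count: 0 }),
  actions: {
    increment() {
      this.count++;
    },
  },
});

describe('counter store', () => {
  beforeEach(() => {
    // A fresh Pinia instance per test keeps store state isolated
    setActivePinia(createPinia());
  });

  it('increments the count', () => {
    const store = useCounterStore();
    store.increment();
    expect(store.count).toBe(1);
  });
});
"""
The same idea applies to Redux or Vuex: construct a new store (or mock store) inside "beforeEach" rather than importing and mutating the application's singleton store.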
### 4.3. Mocking Strategies with "vi"
* **Standard:** Employ the "vi" object (from Vitest) judiciously for creating mocks, stubs, and spies to isolate units of code and control their behavior during testing.
* **Why:** Facilitates focused testing of individual components by replacing real dependencies with controlled substitutes, ensuring predictable test outcomes.
* **Do This:** Utilize "vi.fn()" to create mock functions, "vi.spyOn()" to observe method calls on existing objects, and "vi.mock()" to replace entire modules with mock implementations.
* **Don't Do This:** Overuse mocking, which can lead to brittle tests that are tightly coupled to implementation details. Strive for a balance between isolation and integration testing.
"""typescript
// Mocking Strategies with vi
import { describe, it, expect, vi } from 'vitest';
describe('MyComponent', () => {
it('should call the API service on mount', () => {
const apiService = {
fetchData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
};
// Simulate component using the API Service.
const component = {
mounted: () => {
apiService.fetchData();
},
};
component.mounted();
expect(apiService.fetchData).toHaveBeenCalled();
});
it('should update the component state with the fetched data', async () => {
const apiService = {
fetchData: vi.fn().mockResolvedValue([{ id: 1, name: 'Item 1' }]),
};
// Simulate component using the API Service.
const component = {
data: null,
mounted: async () => {
component.data = await apiService.fetchData();
},
};
await component.mounted();
expect(component.data).toEqual([{ id: 1, name: 'Item 1' }]);
});
});
"""
By following these component design standards, you can create a robust, maintainable, and efficient test suite for Vitest. This will lead to higher-quality software and a more productive development process.
# Performance Optimization Standards for Vitest This document outlines performance optimization standards for Vitest, focusing on improving application speed, responsiveness, and resource usage during testing. These standards aim to guide developers in writing efficient and effective tests, ensuring a fast and reliable development workflow. ## 1. Test Suite Structure and Organization ### 1.1. Grouping Related Tests **Standard:** Group related tests within "describe" blocks to improve readability and enable focused execution and reduce setup costs. **Why:** Grouping allows developers to quickly identify the scope of a test suite, facilitates targeted test execution (e.g., "vitest run --testNamePattern "MyComponent""), and can enable shared setup/teardown logic for related tests. **Do This:** """typescript // src/components/MyComponent.test.ts import { describe, it, expect, beforeEach } from 'vitest'; import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; describe('MyComponent', () => { let wrapper; beforeEach(() => { wrapper = mount(MyComponent); }); it('renders correctly', () => { expect(wrapper.exists()).toBe(true); }); it('displays the correct default message', () => { expect(wrapper.text()).toContain('Hello from MyComponent!'); }); it('updates the message when the prop changes', async () => { await wrapper.setProps({ msg: 'New Message' }); expect(wrapper.text()).toContain('New Message'); }); }); """ **Don't Do This:** """typescript // Avoid a flat structure with ungrouped tests import { it, expect } from 'vitest'; import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; it('renders correctly', () => { const wrapper = mount(MyComponent); expect(wrapper.exists()).toBe(true); }); it('displays the correct default message', () => { const wrapper = mount(MyComponent); expect(wrapper.text()).toContain('Hello from MyComponent!'); }); """ ### 1.2. Test File Naming Conventions **Standard:** Use consistent and meaningful naming conventions for test files. A common convention is "[componentName].test.ts/js" or "[moduleName].spec.ts/js". **Why:** Clear naming conventions make it easier to locate and understand tests, improving maintainability and collaboration. Vitest can also leverage naming patterns for targeted test runs. **Do This:** """ src/ ├── components/ │ ├── MyComponent.vue │ └── MyComponent.test.ts // Or MyComponent.spec.ts └── utils/ ├── stringUtils.ts └── stringUtils.test.ts // Or stringUtils.spec.ts """ **Don't Do This:** """ src/ ├── components/ │ ├── MyComponent.vue │ └── test.ts // Vague and unclear └── utils/ ├── stringUtils.ts └── utils.test.ts // Ambiguous, doesn't clearly specify what's being tested """ ## 2. Efficient Test Setup and Teardown ### 2.1. Minimizing Global Setup and Teardown **Standard:** Avoid unnecessary global setup and teardown operations. Use "beforeEach" and "afterEach" hooks within "describe" blocks for context-specific setup and teardown. **Why:** Global setup and teardown can significantly slow down test execution, especially in large projects. Isolating setup and teardown to specific test suites reduces overhead and makes tests more predictable. 
**Do This:** """typescript import { describe, it, expect, beforeEach, afterEach } from 'vitest'; describe('MyComponent with specific data', () => { let component; let mockData; beforeEach(() => { mockData = { name: 'Test Name', value: 123 }; component = createComponent(mockData); // Hypothetical createComponent function }); afterEach(() => { component.destroy(); // Hypothetical destroy function to clean up resources mockData = null; }); it('renders the name correctly', () => { expect(component.getName()).toBe('Test Name'); }); it('renders the value correctly', () => { expect(component.getValue()).toBe(123); }); }); """ **Don't Do This:** """typescript // Avoid using global beforeAll and afterAll unless absolutely necessary import { describe, it, expect, beforeAll, afterAll } from 'vitest'; let globalComponent; beforeAll(() => { // This will run once before *all* tests, potentially creating unnecessary overhead globalComponent = createGlobalComponent(); }); afterAll(() => { // This will run once after *all* tests, even those that don't need cleanup globalComponent.destroy(); }); describe('Test Suite 1', () => { it('test 1', () => { // ... }); }); describe('Test Suite 2', () => { it('test 2', () => { // ... }); }); """ ### 2.2. Leveraging "beforeAll" and "afterAll" Strategically **Standard:** Use "beforeAll" and "afterAll" for expensive operations that only need to be performed once per test suite, such as database connections or large data set initialization. However, carefully consider the impact on test isolation. **Why:** "beforeAll" and "afterAll" can optimize performance by avoiding redundant setup. However, global state changes within these hooks can introduce dependencies between tests, leading to flaky results. **Do This (with caution and clear documentation):** """typescript import { describe, it, expect, beforeAll, afterAll } from 'vitest'; import { connectToDatabase, disconnectFromDatabase } from './database'; describe('Database Interactions', () => { let dbConnection; beforeAll(async () => { dbConnection = await connectToDatabase(); }); afterAll(async () => { await disconnectFromDatabase(dbConnection); }); it('fetches data correctly', async () => { const data = await dbConnection.query('SELECT * FROM users'); expect(data).toBeDefined(); }); it('inserts data correctly', async () => { await dbConnection.query('INSERT INTO users (name) VALUES ("Test User")'); // ... }); }); """ **Don't Do This (if not necessary for performance):** """typescript // Avoid overusing beforeAll if the setup is not truly expensive. import { describe, it, expect, beforeAll } from 'vitest'; describe('Simple Operations', () => { let simpleValue; beforeAll(() => { // Unnecessary use of beforeAll for a simple assignment simpleValue = 10; }); it('adds 5 to the value', () => { expect(simpleValue + 5).toBe(15); }); }); """ ### 2.3. Lazy Initialization **Standard:** Use lazy initialization for resources that are only needed by a subset of tests. Initialize these resources only when they are first accessed. **Why:** Lazy initialization avoids unnecessary setup costs for tests that don't require specific resources. This can significantly improve test suite run time, especially when dealing with complex or large-scale applications. 
**Do This:** """typescript import { describe, it, expect } from 'vitest'; describe('Conditional Resource Initialization', () => { let expensiveResource = null; const getExpensiveResource = () => { if (!expensiveResource) { expensiveResource = createExpensiveResource(); // Only create when needed } return expensiveResource; }; it('test 1 does not need the resource', () => { expect(true).toBe(true); // Simple assertion }); it('test 2 needs the expensive resource', () => { const resource = getExpensiveResource(); expect(resource).toBeDefined(); // ... use the resource }); }); """ **Don't Do This:** """typescript import { describe, it, expect, beforeAll } from 'vitest'; describe('Unconditional Resource Initialization', () => { let expensiveResource; beforeAll(() => { // Expensive resource is created even if tests don't use it, wasting resources expensiveResource = createExpensiveResource(); }); it('test 1 does not need the resource', () => { expect(true).toBe(true); }); it('test 2 needs the expensive resource', () => { expect(expensiveResource).toBeDefined(); // ... use the resource }); }); """ ## 3. Mocking and Stubbing Strategies ### 3.1. Minimizing External Dependencies **Standard:** Mock or stub external dependencies (e.g., API calls, database queries, third-party services) to isolate units under test and avoid slow or unreliable external factors. **Why:** Mocking and stubbing allows for predictable and fast test execution by eliminating dependence on external systems that may be unavailable, slow, or change unexpectedly. Vitest provides built-in mocking capabilities for this purpose. **Do This:** """typescript import { describe, it, expect, vi } from 'vitest'; import { fetchData } from './api'; // Hypothetical API function import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; vi.mock('./api', () => { return { fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })), }; }); describe('MyComponent with Mocked API', () => { it('displays mocked data correctly', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); // Ensure data is fetched and rendered expect(wrapper.text()).toContain('Mocked Data'); expect(fetchData).toHaveBeenCalled(); }); }); """ **Don't Do This:** """typescript // Avoid making actual API calls during testing import { describe, it, expect } from 'vitest'; import { fetchData } from './api'; import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; describe('MyComponent without Mocking', () => { it('displays fetched data (potentially slow and unreliable)', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); // May fail if the API is down or slow expect(wrapper.text()).toContain('Expected Data from API'); }); }); """ ### 3.2. Mocking Only What's Necessary **Standard:** Only mock the specific functions or modules that are directly involved in the test. Avoid mocking entire modules or services unless absolutely necessary. **Why:** Over-mocking can lead to brittle tests that don't accurately reflect the behavior of the system. Mocking only the relevant parts allows for more focused and reliable tests. **Do This:** """typescript // Mock only the fetchData function, not the entire api module. 
import { describe, it, expect, vi } from 'vitest'; import { fetchData, processData } from './api'; // Now processData remains real import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; vi.mock('./api', async () => { const actual = await vi.importActual('./api') return { ...actual, // import all the existing exports fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })), } }) describe('MyComponent with Specific Mocking', () => { it('displays processed data correctly', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); expect(wrapper.text()).toContain('Mocked Data'); // Relies on the mocked fetchData result expect(fetchData).toHaveBeenCalled(); // processData still runs with real logic }); }); """ **Don't Do This:** """typescript // Avoid mocking the entire module if only one function needs to be mocked import { describe, it, expect, vi } from 'vitest'; import * as api from './api'; // Import the whole module import MyComponent from './MyComponent.vue'; import { mount } from '@vue/test-utils'; vi.mock('./api', () => { return { fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })), processData: vi.fn(() => 'Mocked Processed Data'), // Unnecessary mocking }; }); describe('MyComponent with Over-Mocking', () => { it('displays mocked data correctly (but over-mocked)', async () => { const wrapper = mount(MyComponent); await wrapper.vm.$nextTick(); expect(wrapper.text()).toContain('Mocked Processed Data'); // Using the mocked processData, even if it's unnecessary expect(api.fetchData).toHaveBeenCalled(); }); }); """ ### 3.3. Using "vi.spyOn" for Partial Mocking **Standard:** Use "vi.spyOn" to mock specific methods on an existing object or module *without* replacing the entire object. This allows you to verify that the method was called and observe its arguments, while still executing the original implementation. **Why:** "vi.spyOn" provides a more granular and less disruptive way to mock functionality, especially when you need to test interactions with a method while still relying on its original behaviour. **Do This:** """typescript import { describe, it, expect, vi } from 'vitest'; import * as MyModule from './myModule'; // Hypothetical module with several functions describe('Using vi.spyOn', () => { it('should call the original function and allow assertions', () => { const spy = vi.spyOn(MyModule, 'myFunction'); // Spy on myFunction const result = MyModule.myFunction(1, 2); expect(spy).toHaveBeenCalledTimes(1); expect(spy).toHaveBeenCalledWith(1, 2); expect(result).toBe(3); // Assuming myFunction returns the sum of its arguments spy.mockRestore(); // Restore the original implementation of myFunction }); }); """ **Don't Do This:** """typescript // Avoid using vi.mock if you only need to spy on a function import { describe, it, expect, vi } from 'vitest'; import * as MyModule from './myModule'; vi.mock('./myModule', () => { return { myFunction: vi.fn((a, b) => a + b), // Replaces myFunction entirely - less ideal if you want to call the original. }; }); describe('Avoid replacing the function completely with vi.mock', () => { it('should call the original function and allow assertions', () => { // ... less flexible for observing calls and executing original code }); }); """ ## 4. Efficient Assertions and Expectations ### 4.1. Avoiding Excessive Assertions **Standard:** Focus assertions on the specific behavior being tested in each test case. Avoid including unrelated or redundant assertions. 
**Why:** Excessive assertions can make tests harder to understand and maintain, and can also slow down test execution. Each assertion adds overhead. **Do This:** """typescript import { describe, it, expect } from 'vitest'; describe('Focused Assertions', () => { it('correctly calculates the sum', () => { const result = calculateSum(2, 3); expect(result).toBe(5); // Focus on the sum itself }); }); """ **Don't Do This:** """typescript // Avoid including irrelevant assertions. import { describe, it, expect } from 'vitest'; describe('Excessive Assertions', () => { it('calculates the sum and checks unrelated properties', () => { const result = calculateSum(2, 3); expect(result).toBe(5); expect(typeof result).toBe('number'); // Redundant and unnecessary expect(result > 0).toBe(true); // Redundant and unnecessary }); }); """ ### 4.2. Using Specific Matchers **Standard:** Use the most specific and appropriate Vitest matchers for each assertion. For example, use "toBe" for primitive values, "toEqual" for objects, "toContain" for arrays, and "toHaveBeenCalled" for mocks. **Why:** Specific matchers improve the clarity and expressiveness of tests, and can also provide better performance by avoiding unnecessary comparisons or type conversions. Using type-safe matchers (where possible) offer type safety and performance improvements. **Do This:** """typescript import { describe, it, expect, vi } from 'vitest'; describe('Specific Matchers', () => { it('uses toBe for primitive values', () => { expect(1 + 1).toBe(2); }); it('uses toEqual for objects', () => { const obj1 = { a: 1, b: 2 }; const obj2 = { a: 1, b: 2 }; expect(obj1).toEqual(obj2); }); it('uses toContain for arrays', () => { const arr = [1, 2, 3]; expect(arr).toContain(2); }); it('uses toHaveBeenCalled for mocks', () => { const mockFn = vi.fn(); mockFn(); expect(mockFn).toHaveBeenCalled(); }); }); """ **Don't Do This:** """typescript // Avoid using generic matchers when more specific ones are available. import { describe, it, expect, vi } from 'vitest'; describe('Generic Matchers', () => { it('incorrectly uses toEqual for primitive values', () => { expect(1 + 1).toEqual(2); // Inefficient; 'toBe' is better for primitives }); it('incorrectly uses toContain for objects', () => { const obj1 = { a: 1, b: 2 }; const obj2 = { a: 1, b: 2 }; expect(obj1).toContain(obj2); // Incorrect and will likely fail }); }); """ ### 4.3. Asynchronous Assertions **Standard:** Use "async/await" with Vitest's built-in support for asynchronous testing. Ensure you wait for asynchronous operations to complete before making assertions. **Why:** Asynchronous tests can be prone to errors if assertions are made before asynchronous operations have finished. Using "async/await" ensures that tests wait for completion, leading to more reliable results. **Do This:** """typescript import { describe, it, expect } from 'vitest'; import { fetchData } from './api'; // Hypothetical async function describe('Asynchronous Assertions', () => { it('fetches data correctly', async () => { const data = await fetchData(); expect(data).toBeDefined(); // ... more assertions on the fetched data }); }); """ **Don't Do This:** """typescript // Avoid making assertions before asynchronous operations complete. 
import { describe, it, expect } from 'vitest'; import { fetchData } from './api'; describe('Incorrect Asynchronous Assertions', () => { it('attempts to assert before data is fetched (likely to fail)', () => { let data; fetchData().then(result => { data = result; }); expect(data).toBeDefined(); // Assertion made before fetchData resolves. }); }); """ ## 5. Code Coverage Considerations ### 5.1. Balancing Coverage and Performance **Standard:** While code coverage is important, prioritize writing meaningful tests that cover critical functionality and edge cases. Avoid striving for 100% coverage at the expense of test performance or maintainability. **Why:** High code coverage can provide a false sense of security if tests are not well-designed or if they focus on trivial code paths. Focus on writing tests that thoroughly validate the most important aspects of the system. **Do This:** * Identify critical functionalities and prioritize testing these areas thoroughly. * Focus on covering boundary conditions, edge cases, and potential error scenarios. * Use code coverage tools (e.g., c8 via Vitest's "--coverage" flag) to identify uncovered areas, but don't treat 100% coverage as the primary goal. **Don't Do This:** * Write tests solely to increase code coverage without considering their value in validating functionality. * Create complex or convoluted tests to cover trivial code paths that are unlikely to cause issues. * Neglect testing important areas simply because they are difficult to cover with tests. ### 5.2. Excluding Non-Essential Files **Standard:** Exclude non-essential files (e.g., configuration files, third-party libraries) from code coverage analysis to avoid skewing the results and wasting resources. **Why:** Including non-essential files in code coverage analysis can make it difficult to identify areas of the codebase that truly need more attention. **Do This:** Configure the coverage reporter in "vitest.config.ts" to exclude files. """typescript // vitest.config.ts import { defineConfig } from 'vitest/config'; export default defineConfig({ test: { coverage: { exclude: [ '**/node_modules/**', '**/dist/**', '**/coverage/**', 'src/config.ts', // Example: exclude configuration file 'src/external/**', //Ignore external libraries ], }, }, }); """ **Don't Do This:** * Include all files in code coverage analysis without considering their relevance. * Fail to exclude generated files or build artifacts from coverage reports. ## 6. Parallelization and Concurrency ### 6.1. Enabling Test Parallelization **Standard:** Enable parallel test execution in Vitest to reduce overall test run time, especially for large projects. **Why:** Vitest can run tests in parallel, leveraging multiple CPU cores to significantly speed up execution. This is especially beneficial for tests that involve I/O operations or long-running computations. **Do This:** By default, Vitest parallelizes test execution. You can control the level of parallelism within "vitest.config.ts". Make sure your tests are properly isolated for parallelism. """typescript // vitest.config.ts import { defineConfig } from 'vitest/config'; export default defineConfig({ test: { threads: true, // or number of threads }, }); """ **Don't Do This:** * Disable parallelization unless there are specific reasons to do so (e.g., tests that rely on shared mutable state). * Ignore potential concurrency issues in tests when running them in parallel (e.g., race conditions when accessing shared resources). ### 6.2. 
Managing Shared State in Parallel Tests **Standard:** Avoid shared mutable state between tests, or carefully manage access to shared resources using appropriate synchronization mechanisms (e.g., locks, mutexes) to prevent race conditions. **Why:** Parallel tests that share mutable state can lead to non-deterministic results and flaky test runs. **Do This:** * Ensure each test operates on its own isolated data set. * Use database transactions or other isolation techniques to prevent interference between tests that interact with shared databases. * If shared state is unavoidable, use appropriate locking mechanisms to protect access. **Don't Do This:** * Allow tests to modify global variables or shared data structures without proper synchronization. * Assume that tests will always run in a specific order when running them in parallel. ## 7. Performance Monitoring and Analysis ### 7.1. Using Performance Measurement Tools **Standard:** Use performance measurement tools (e.g., "console.time" and "console.timeEnd" for basic timing, profiling tools for detailed analysis) to identify performance bottlenecks in tests. **Why:** Performance measurement tools can help pinpoint slow-running tests or inefficient code within tests, allowing developers to optimize them. **Do This:** """typescript import { describe, it, expect } from 'vitest'; describe('Performance Measurement', () => { it('measures the execution time of a function', () => { console.time('myFunction'); myFunction(); // Hypothetical function to measure console.timeEnd('myFunction'); }); }); """ **Don't Do This:** * Ignore performance issues in tests. * Rely solely on intuition when identifying performance bottlenecks without using measurement tools. * Leave performance measurement code in production code. Add it only to tests when performance measurements are needed and remove it afterwards. By adhering to these performance optimization standards, developers can create Vitest test suites that are fast, reliable, and maintainable, ensuring a smooth and efficient development process.
# Core Architecture Standards for Vitest This document outlines the core architectural standards for developing and maintaining Vitest itself. It provides guidelines for project structure, fundamental patterns, and principles to ensure maintainability, performance, and scalability. These standards are designed to be used by Vitest developers, contributors, and AI coding assistants. ## 1. Project Structure and Organization A well-defined project structure is crucial for navigating and understanding the Vitest codebase. It promotes discoverability, reduces cognitive load, and simplifies maintenance. **Standard:** Adhere to a modular, decoupled architecture with clear boundaries between components. **Do This:** * Organize code into meaningful modules based on functionality (e.g., "runner", "reporter", "config", "api"). * Maintain a flat directory structure within modules to avoid excessive nesting, promoting discoverability. * Use descriptive file and directory names that clearly indicate their purpose. **Don't Do This:** * Create tightly coupled components that are difficult to test or refactor. * Overuse deeply nested directory structures. * Use vague or ambiguous names for files and directories. **Why:** A modular architecture allows for independent development and testing of components, reducing the impact of changes in one area on other parts of the system. Clean, descriptive names improve code readability and maintainability. **Code Example (Project Structure):** """ vitest/ ├── packages/ │ ├── vitest/ # Core Vitest package │ │ ├── src/ # Source code │ │ │ ├── api/ # API layer │ │ │ ├── config/ # Configuration loading and handling │ │ │ ├── runner/ # Test runner logic │ │ │ ├── reporter/ # Test reporter implementations │ │ │ ├── utils/ # Utility functions │ │ │ ├── types/ # Type definitions │ │ │ └── index.ts # Entry point │ │ ├── test/ # Unit and integration tests │ │ ├── index.ts # Package entry point │ │ ├── package.json # Package manifest │ │ └── tsconfig.json # TypeScript configuration │ ├── vite-node/ # Vite Node integration package │ │ └── ... │ └── ... ├── playground/ # Example projects for testing Vitest ├── scripts/ # Build and development scripts ├── tsconfig.json # Root TypeScript configuration ├── package.json # Root package manifest ├── README.md # Project README └── ... """ ## 2. Fundamental Architectural Patterns Vitest's architecture should leverage proven design patterns to promote code reusability, maintainability, and testability. **Standard:** Employ established design patterns where appropriate, favoring simplicity and clarity over complex solutions. **Do This:** * Use the **Observer pattern** for event handling and communication between components (e.g., test lifecycle events). * Implement the **Strategy pattern** for handling different test environments or reporters. * Apply the **Factory pattern** for creating instances of classes, providing abstraction and flexibility. * Favor functional programming principles where appropriate for pure functions and immutable data, increasing predictability. **Don't Do This:** * Over-engineer solutions by applying patterns unnecessarily. * Create tightly coupled dependencies by avoiding interfaces and abstraction. * Rely on global state, which can lead to unpredictable behavior and difficult debugging. **Why:** Design patterns provide reusable solutions to common problems, making the codebase more understandable and maintainable. They also promote loose coupling, which enhances testability and reduces the impact of changes. 
Functional programming improves code clarity and reduces side effects. **Code Example (Observer Pattern - simplified):** """typescript // types/index.ts interface EventListener<T> { (event: T): void; } interface Emitter<T> { on(event: string, listener: EventListener<T>): void; off(event: string, listener: EventListener<T>): void; emit(event: string, data: T): void; } // utils/emitter.ts class TypedEmitter<T> implements Emitter<T> { private listeners: { [event: string]: EventListener<T>[] } = {}; on(event: string, listener: EventListener<T>): void { if (!this.listeners[event]) { this.listeners[event] = []; } this.listeners[event].push(listener); } off(event: string, listener: EventListener<T>): void { if (this.listeners[event]) { this.listeners[event] = this.listeners[event].filter(l => l !== listener); } } emit(event: string, data: T): void { this.listeners[event]?.forEach(listener => listener(data)); } } // runner/testRunner.ts (Example Usage) interface TestEvent { testId: string; status: 'running' | 'passed' | 'failed'; } const testEmitter = new TypedEmitter<TestEvent>(); testEmitter.on('test:start', (event) => { console.log("Test ${event.testId} started"); }); testEmitter.emit('test:start', { testId: 'test-1', status: 'running' }); export { testEmitter }; """ ## 3. Configuration Management Robust configuration management is essential for adapting Vitest to different environments and use cases. **Standard:** Centralize configuration loading and validation to ensure consistency and prevent errors. **Do This:** * Use a dedicated "config" module to handle configuration loading from files and command-line arguments. * Implement schema validation to ensure that configuration values are of the correct type and within acceptable ranges. * Provide sensible default values for configuration options. * Support configuration files in common formats (e.g., "vitest.config.ts", "package.json"). **Don't Do This:** * Scatter configuration logic throughout the codebase. * Assume that configuration values are always valid. * Hardcode configuration values. **Why:** Centralized configuration management simplifies the process of modifying and extending Vitest's behavior. Schema validation prevents configuration errors, improving reliability. **Code Example (Configuration Loading and Validation):** """typescript // config/index.ts import { loadConfigFromFile, mergeConfig, UserConfig } from 'vite'; import { resolve } from 'path'; import { InlineConfig, VitestConfig } from '../types'; import { defaultVitestConfig } from './defaults'; import { defaults } from 'vitest/config'; export async function resolveConfig( inlineConfig: InlineConfig = {}, rootDir: string = process.cwd(), command: 'serve' | 'build' = 'serve', mode = 'development' ): Promise<VitestConfig> { let config = mergeConfig(defaultVitestConfig, inlineConfig) as VitestConfig; const resolvedRoot = resolve(rootDir); let configFile: { path: string; config: UserConfig } | undefined; try { configFile = await loadConfigFromFile( { command, mode }, inlineConfig.configFile || resolve(resolvedRoot, 'vitest.config.ts'), resolvedRoot ); } catch (e: any) { // ...error handling... configFile = undefined; } if (configFile) config = mergeConfig(config, configFile.config) as VitestConfig; // ... other config resolution and validation ... return config; } """ ## 4. Asynchronous Operations Vitest relies heavily on asynchronous operations for test execution and reporting. Handling these operations correctly is crucial for performance and stability. 
**Standard:** Use modern asynchronous patterns, such as "async/await" and Promises. **Do This:** * Prefer "async/await" over callbacks for asynchronous control flow. * Use "Promise.all" or "Promise.allSettled" to execute multiple asynchronous operations concurrently. * Implement proper error handling for asynchronous operations using "try/catch" blocks. **Don't Do This:** * Use callback-based asynchronous patterns. * Block the event loop with long-running synchronous operations. * Ignore errors from asynchronous operations without proper handling. **Why:** "async/await" simplifies asynchronous code, making it more readable and maintainable. Concurrent execution improves performance. Proper error handling prevents unhandled exceptions and ensures stability. **Code Example (Asynchronous Test Execution):** """typescript // runner/testRunner.ts import { testEmitter } from './emitter'; async function runTest(testFile: string) { try { // Dynamically import the test file const testModule = await import(testFile); // Check if the module has a default export, or named exports if (testModule.default && typeof testModule.default === 'function') { await testModule.default(); } else if (testModule) { // Iterate over named exports and execute them if they are functions for (const key in testModule) { if (typeof testModule[key] === 'function') { await testModule[key](); } } } testEmitter.emit('test:passed', { testId: testFile, status: 'passed' }); } catch (error) { console.error("Test ${testFile} failed:", error); testEmitter.emit('test:failed', { testId: testFile, status: 'failed' }); } } export { runTest }; """ ## 5. Error Handling and Logging Effective error handling and logging are essential for diagnosing and resolving issues in Vitest. **Standard:** Implement comprehensive error handling and logging throughout the codebase. **Do This:** * Use "try/catch" blocks to handle potential exceptions. * Log errors with sufficient context, including stack traces where appropriate. * Provide clear and informative error messages to users. * Use different logging levels (e.g., "debug", "info", "warn", "error") to categorize log messages. **Don't Do This:** * Swallow exceptions without logging them. * Log sensitive information (e.g., passwords, API keys). * Use generic error messages that don't provide useful information. **Why:** Proper error handling prevents unhandled exceptions, ensuring stability. Detailed logs allow for efficient debugging and issue resolution. Informative error messages help users understand and resolve problems. **Code Example (Error Handling and Logging):** """typescript // utils/fileSystem.ts async function readFile(filePath: string): Promise<string | null> { try { const content = await fs.promises.readFile(filePath, 'utf-8'); return content; } catch (error: any) { console.error("Failed to read file ${filePath}: ${error.message}", error); return null; } } """ ## 6. Extensibility and Plugins Vitest's architecture should be designed to be extensible, allowing users to add new features and integrations through plugins. **Standard:** Provide a well-defined plugin API that allows developers to extend Vitest's functionality without modifying the core codebase. **Do This:** * Define clear interfaces and extension points for plugins. * Provide comprehensive documentation and examples for plugin development. * Support different types of plugins (e.g., reporters, transformers, resolvers). **Don't Do This:** * Expose internal implementation details to plugins. 
* Introduce breaking changes to the plugin API without a clear migration path. * Limit the types of extensions that plugins can provide. **Why:** Extensibility allows Vitest to adapt to a wide range of use cases and integrate with different tools and frameworks. A well-defined plugin API ensures that plugins are reliable and easy to develop. **Code Example (Plugin API):** """typescript // types/index.ts export interface Plugin { name: string; config?: (config: VitestConfig, env: { mode: string, command: string }) => VitestConfig | void | Promise<VitestConfig | void>; configureServer?: (server: any) => void | Promise<void>; // ViteDevServer transform?: (code: string, id: string) => any; // TransformResult | null | void } // config/index.ts (Apply plugins) async function resolveConfig( inlineConfig: InlineConfig = {}, rootDir: string = process.cwd(), command: 'serve' | 'build' = 'serve', mode = 'development' ): Promise<VitestConfig> { // ... other config resolution code ... if (config.plugins) { for (const plugin of config.plugins) { if (plugin.config) { const pluginConfig = await plugin.config(config, { mode, command }); if (pluginConfig) { config = mergeConfig(config, pluginConfig) as VitestConfig; } } } } return config; } """ ## 7. Testing Thorough testing is crucial for ensuring the reliability and stability of Vitest. **Standard:** Implement comprehensive unit, integration, and end-to-end tests. **Do This:** * Write unit tests for individual functions and classes. * Write integration tests to verify the interaction between different components. * Write end-to-end tests to validate the overall behavior of the system. * Use code coverage tools to identify areas of the codebase that are not adequately tested. * Follow the Arrange-Act-Assert pattern for writing clear and maintainable tests. **Don't Do This:** * Skip writing tests for complex or critical code. * Write tests that are tightly coupled to implementation details. * Ignore code coverage metrics. **Why:** Comprehensive testing helps to prevent bugs, improve code quality, and reduce the risk of regressions. Code coverage metrics provide valuable insights into the effectiveness of testing efforts. The Arrange-Act-Assert pattern simplifies test writing. **Code Example (Unit Test):** """typescript // utils/string.test.ts import { describe, it, expect } from 'vitest'; import { capitalize } from './string'; describe('capitalize', () => { it('should capitalize the first letter of a string', () => { expect(capitalize('hello')).toBe('Hello'); }); it('should handle empty strings', () => { expect(capitalize('')).toBe(''); }); it('should handle strings with only one character', () => { expect(capitalize('a')).toBe('A'); }); it('should handle strings that already start with a capital letter', () => { expect(capitalize('World')).toBe('World'); }); }); """ ## 8. Documentation Clear and comprehensive documentation is essential for helping developers understand and use Vitest. **Standard:** Maintain thorough documentation for all aspects of Vitest, including the core architecture, API, and plugin development. **Do This:** * Write clear and concise documentation that is easy to understand. * Provide examples and tutorials to help users get started. * Keep the documentation up-to-date with the latest changes. * Use a consistent style and format for all documentation. **Don't Do This:** * Write documentation that is vague or incomplete. * Fail to update the documentation when making changes to the codebase. 
* Use inconsistent terminology or formatting. **Why:** Documentation is essential for helping developers understand and use Vitest effectively. Up-to-date documentation reduces the burden on maintainers by providing a clear source of information. ## 9. Performance Optimization Performance is a crucial consideration for Vitest, especially when running large test suites. **Standard:** Optimize code for performance, minimizing overhead and maximizing throughput. **Do This:** * Use efficient algorithms and data structures. * Avoid unnecessary work, such as redundant calculations or data transformations. * Use caching to store frequently accessed data. * Profile code to identify performance bottlenecks. **Don't Do This:** * Prioritize premature optimization over readability. * Ignore performance considerations during development. * Introduce performance regressions without proper analysis. **Why:** Performance optimization improves the overall user experience and reduces the time it takes to run tests. Efficient code also consumes less resources and reduces energy consumption. ## 10. Security Security is a critical consideration for Vitest, especially when executing user-provided code. **Standard:** Implement security best practices to prevent vulnerabilities and protect against malicious attacks. **Do This:** * Sanitize user input to prevent code injection attacks. * Limit the privileges of the test runner to prevent unauthorized access to system resources. * Use secure coding practices, such as avoiding buffer overflows and race conditions. * Regularly review and update dependencies to address known vulnerabilities. * Implement permission policies to control resource access during test execution. **Don't Do This:** * Trust user input without validation. * Run tests with elevated privileges. * Ignore security warnings or vulnerabilities. **Why:** Security best practices protect against malicious attacks and ensure the integrity of the system. Vulnerabilities can lead to data breaches, system compromise, or denial of service. These standards will guide the development of Vitest to ensure a robust, maintainable, and scalable testing framework. Adherence to these principles will facilitate contributions, improve code quality, and ultimately provide a better experience for the entire Vitest community.
# State Management Standards for Vitest This document outlines the coding standards for state management when writing tests with Vitest, the next-gen testing framework powered by Vite. It aims to guide developers in creating robust, maintainable, and performant tests by adopting modern best practices for managing application state, data flow, and reactivity within the testing context. ## 1. Introduction to State Management in Vitest While Vitest primarily focuses on unit and integration testing, understanding state management principles is crucial for creating effective and reliable tests, especially when dealing with complex components or application logic. State might refer to various entities: component internal state, data fetched from external sources, or the overall application state managed by tools like Vuex, Redux, or Pinia. ### 1.1. Why State Management Matters in Testing * **Reproducibility:** Clearly defined state makes tests reproducible, ensuring that failures always indicate real issues. * **Isolation:** Proper state isolation prevents tests from interfering with each other, avoiding flaky test suites. * **Maintainability:** Well-structured state management simplifies test setup and teardown, making tests easier to understand and maintain. * **Accuracy:** Accurate state representation guarantees that tests accurately reflect the actual application behavior. ### 1.2. Scope of these Standards These standards cover: * Approaches for setting up and managing state within tests. * Strategies for isolating state between tests. * Best practices for mocking and stubbing external dependencies that influence state. * Specific considerations for testing reactive state with frameworks like Vue, React, and Svelte. ## 2. General Principles for State Management in Vitest ### 2.1. Declarative vs. Imperative State Setup * **Do This:** Prefer declarative state setup using "beforeEach" or "beforeAll" hooks to define the initial state for each test or test suite. """typescript import { beforeEach, describe, expect, it } from 'vitest'; describe('Counter component', () => { let counter: { value: number }; beforeEach(() => { counter = { value: 0 }; // Declarative state setup }); it('should increment the counter value', () => { counter.value++; expect(counter.value).toBe(1); }); }); """ * **Don't Do This:** Avoid directly modifying the state within the test body unless it's the action being tested. This makes the test harder to read and understand as the initial state becomes implicit. ### 2.2. Isolate State Between Tests * **Do This:** Use "beforeEach" to reset the state before each test, ensuring that tests do not interfere with each other. Consider using a factory function to create fresh state instances. """typescript import { beforeEach, describe, expect, it } from 'vitest'; // Factory function to create a new state object const createCounter = () => ({ value: 0 }); describe('Counter component', () => { let counter: { value: number }; beforeEach(() => { counter = createCounter(); // Creates a fresh counter object for each test }); it('should increment the counter value', () => { counter.value++; expect(counter.value).toBe(1); }); it('should not be affected by previous test', () => { expect(counter.value).toBe(0); // Reset to initial state }); }); """ * **Don't Do This:** Share mutable state directly between tests without resetting it. This can lead to unexpected test failures and makes debugging difficult. ### 2.3. 
Minimize Global State
* **Do This:** Encapsulate the state as much as possible within the component or module being tested. Use dependency injection to provide state dependencies.
* **Don't Do This:** Rely heavily on global variables or shared mutable objects to manage state. This introduces tight coupling and makes tests harder to isolate.
### 2.4. Use Mocks and Stubs for External Dependencies
* **Do This:** Use "vi.mock" or manual mocks to isolate the component being tested from external dependencies (e.g., databases, APIs). This allows you to control the state returned by the dependencies and focus on testing the component's logic.
"""typescript
// api.ts
const fetchData = async () => {
  const response = await fetch('/api/data');
  return await response.json();
};
export default fetchData;

// component.test.ts
import { describe, expect, it, vi } from 'vitest';
import { mount } from '@vue/test-utils'; // Needed to mount the component under test
import fetchData from '../api'; // Import the original module
import MyComponent from '../MyComponent.vue'; // Example using Vue - framework-agnostic principle

vi.mock('../api', () => ({ // Mock the whole module
  default: vi.fn(() => Promise.resolve({ data: 'mocked data' })),
}));

describe('MyComponent', () => {
  it('should display mocked data', async () => {
    const wrapper = mount(MyComponent);
    await vi.waitFor(() => { // Adjust timeout as needed
      expect(wrapper.text()).toContain('mocked data');
    });
    expect(fetchData).toHaveBeenCalled();
  });
});
"""
* **Don't Do This:** Make real API calls or database queries during tests. This can make tests slow, unreliable, and dependent on external factors. Also avoid tightly coupling tests to a real database or API.
### 2.5. Understanding "vi.mock" vs. "vi.spyOn"
* **"vi.mock":** Replaces an entire module (or specific functions within that module) with a mock implementation. Useful when you need to completely control the behavior of a dependency. Importantly, Vitest hoists the mock to the top of the scope, meaning the mock is defined *before* the actual import. This allows you to mock a dependency even before the component is imported.
* **"vi.spyOn":** Wraps an existing function (either a method on an object or a directly imported function) and lets you track its calls, arguments, and return values *without* replacing the original implementation. Useful when you want to assert that a function was called with specific arguments, or a certain number of times. However, "vi.spyOn" works on existing objects/functions and can only be used *after* the object is imported and the function exists.
"""typescript
import { describe, expect, it, vi } from 'vitest';

const myModule = {
  myFunction: (x: number) => x * 2,
};

describe('myFunction', () => {
  it('should call the function with correct arguments', () => {
    const spy = vi.spyOn(myModule, 'myFunction');
    myModule.myFunction(5);
    expect(spy).toHaveBeenCalledWith(5);
  });

  it('should return the correct value', () => {
    const spy = vi.spyOn(myModule, 'myFunction');
    spy.mockReturnValue(100);
    expect(myModule.myFunction(5)).toBe(100);
  });
});
"""
### 2.6. Async State and "vi.waitFor"
* **Do This:** When dealing with asynchronous state updates (e.g., fetching data from an API), use "vi.waitFor" to ensure that the state has been updated before making assertions. This is crucial to prevent race conditions and flaky tests.
"""typescript import { describe, expect, it, vi } from 'vitest'; describe('Async Component', () => { it('should update state after async operation', async () => { let state = { data: null }; const fetchData = async () => { return new Promise((resolve) => { setTimeout(() => { state.data = 'Async Data'; resolve(state.data); }, 100); }); }; await fetchData(); await vi.waitFor(() => { expect(state.data).toBe('Async Data'); // Assert that the state has been updated }); }); }); """ * **Don't Do This:** Rely on fixed timeouts to wait for asynchronous operations to complete. This can lead to flaky tests if the operation takes longer than expected. Manually trigger the resolve using "mockResolvedValue" when mocking async functions. ### 2.7. Testing Reactivity with Testing Frameworks * **Do This:** Leverage framework-specific testing utilities to properly observe and interact with reactive state. For Vue, use "vue-test-utils", for React, use "@testing-library/react", and so on. These utilities provide methods for triggering state changes and waiting for updates to propagate. **Example (Vue with vue-test-utils):** """typescript import {describe, expect, it} from 'vitest'; import {mount} from '@vue/test-utils'; import { ref } from 'vue'; const MyComponent = { template: '<div>{{ count }}</div>', setup() { const count = ref(0); return { count }; } }; describe('MyComponent', () => { it('should render the correct count', async () => { const wrapper = mount(MyComponent); expect(wrapper.text()).toContain('0'); //Simulate interaction with the component (e.g., by emitting an event) wrapper.vm.count = 5; //Direct state change in a simple example - typically you'd trigger event await wrapper.vm.$nextTick(); //Wait for DOM update expect(wrapper.text()).toContain('5'); }); }); """ * **Don't Do This:** Directly manipulate internal component state without using the testing framework's utilities. This can bypass reactivity mechanisms and lead to incorrect test results. Also it results to fragile tests that are dependent on the internal component implementation. ### 2.8. Immutability Where Possible * **Do This:** Favor immutable data structures and state management techniques where applicable to avoid unintended side effects and simplify reasoning about state changes. Libraries like Immer can be helpful for working with immutable data. * **Don't Do This:** Mutate state directly without considering the consequences for other parts of the application or tests. ### 2.9. Testing State Transitions * **Do This:** Explicitly test all possible state transitions in a component or module. Use "describe" blocks to group tests related to specific state transitions. 
"""typescript import { beforeEach, describe, expect, it } from 'vitest'; describe('Component with State Transitions', () => { let componentState: { isLoading: boolean; data: any; error: any }; beforeEach(() => { componentState = { isLoading: false, data: null, error: null }; }); describe('Initial State', () => { it('should start in the loading state', () => { expect(componentState.isLoading).toBe(false); expect(componentState.data).toBeNull(); expect(componentState.error).toBeNull(); }); }); describe('Loading State', () => { it('should set isLoading to true when fetching data', () => { componentState.isLoading = true; expect(componentState.isLoading).toBe(true); }); }); describe('Success State', () => { it('should set data when data is successfully fetched', () => { const mockData = { name: 'Test Data' }; componentState.data = mockData; componentState.isLoading = false; expect(componentState.data).toEqual(mockData); expect(componentState.isLoading).toBe(false); }); }); describe('Error State', () => { it('should set error when fetching data fails', () => { const mockError = new Error('Failed to fetch data'); componentState.error = mockError; componentState.isLoading = false; expect(componentState.error).toEqual(mockError); expect(componentState.isLoading).toBe(false); }); }); }); """ * **Don't Do This:** Assume that state transitions will work correctly without explicit tests. Missing tests for state transitions are frequently causes of bugs. ## 3. Testing Specific State Management Patterns ### 3.1. Testing Vuex/Pinia Stores * **Do This:** Mock the store's actions, mutations, and getters to isolate the component being tested. Use "createLocalVue" (for Vuex) to create a local Vue instance with the mocked store. For Pinia, mock the store directly using "vi.mock". **Example (Pinia with Vitest):** """typescript import {describe, expect, it, vi} from 'vitest'; import {useMyStore} from '../src/stores/myStore'; //Replace with your actual path vi.mock('../src/stores/myStore', () => { return { useMyStore: vi.fn(() => ({ count: 10, increment: vi.fn(), doubleCount: vi.fn().mockReturnValue(20), })), }; }); describe('Component using Pinia store', () => { it('should display the count from the store', () => { const store = useMyStore(); expect(store.count).toBe(10); }); it('should call the increment action when a button is clicked', () => { const store = useMyStore(); //Simulate user interaction or similar that is expected to call store.increment() // ... expect(store.increment).toHaveBeenCalled(); }); }); """ * **Don't Do This:** Directly interact with the real store during component tests. This can make tests slow, and introduces dependencies between tests, increases complexity. Test the store in a separate test file dedicated to store logic. ### 3.2. Testing Redux Reducers and Actions * **Do This:** Test reducers in isolation by providing them with different actions and verifying that they produce the expected state changes. Test actions by dispatching them and asserting on the side effects (e.g., API calls). 
"""typescript import { describe, expect, it } from 'vitest'; import reducer from './reducer'; import { increment, decrement } from './actions'; describe('Counter Reducer', () => { it('should return the initial state', () => { expect(reducer(undefined, {})).toEqual({ value: 0 }); }); it('should handle INCREMENT', () => { expect(reducer({ value: 0 }, increment())).toEqual({ value: 1 }); }); it('should handle DECREMENT', () => { expect(reducer({ value: 1 }, decrement())).toEqual({ value: 0 }); }); }); """ * **Don't Do This:** Test reducers and actions together in a complex integration test. This makes it harder to isolate the cause of failures. ### 3.3 Testing React Context * **Do This:** Create custom test providers to mock the context values and test components within a controlled context. Use "@testing-library/react" to render and interact with components. """typescript import { render, screen, fireEvent } from '@testing-library/react'; import { describe, expect, it } from 'vitest'; import React, { createContext, useContext, useState } from 'react'; // Context setup const CounterContext = createContext({ count: 0, setCount: (value: number) => {}, }); const useCounter = () => useContext(CounterContext); const CounterProvider = ({ children, initialCount = 0 }) => { const [count, setCount] = useState(initialCount); return ( <CounterContext.Provider value={{ count, setCount }}> {children} </CounterContext.Provider> ); }; // Component const CounterComponent = () => { const { count, setCount } = useCounter(); return ( <div> <span>{count}</span> <button onClick={() => setCount(count + 1)}>Increment</button> </div> ); }; describe('Counter Component with Context', () => { it('should display the initial count from context', () => { render( <CounterProvider initialCount={5}> <CounterComponent /> </CounterProvider> ); expect(screen.getByText('5')).toBeInTheDocument(); }); it('should increment the count when the button is clicked', () => { render( <CounterProvider initialCount={0}> <CounterComponent /> </CounterProvider> ); const incrementButton = screen.getByText('Increment'); fireEvent.click(incrementButton); expect(screen.getByText('1')).toBeInTheDocument(); }); it('should use a custom context value', () => { // Custom Provider for testing const TestProvider = ({ children }) => ( <CounterContext.Provider value={{ count: 100, setCount: () => {} }}> {children} </CounterContext.Provider> ); render( <TestProvider> <CounterComponent /> </TestProvider> ); expect(screen.getByText('100')).toBeInTheDocument(); }); }); """ * **Don't Do This:** Rely on the default context values during tests, as they might not accurately reflect the component's behavior in different scenarios. ## 4. Performance Considerations ### 4.1. Optimize State Setup * **Do This:** Minimize the amount of state that needs to be set up for each test. Only set up the state that is relevant to the specific test case. Use lazy initialization where possible. * **Don't Do This:** Create large, complex state objects unnecessarily. ### 4.2. Avoid Unnecessary State Updates * **Do This:** Only update the state when necessary. Avoid unnecessary state updates that can trigger re-renders or other performance-intensive operations. * **Don't Do This:** Continuously update state in a loop or in response to every event. ## 5. Security Considerations ### 5.1. 
Secure State Storage * **Do This:** If you are testing code that deals with sensitive data (e.g., passwords, API keys), ensure that the data is stored securely and is not exposed in test logs or reports. * **Don't Do This:** Store sensitive data in plain text or commit it to version control. Use environment variables or dedicated secrets management tools. ### 5.2. Input Validation * **Do This:** Test state updates with invalid or malicious input to ensure that the application handles errors gracefully and does not become vulnerable to security exploits. * **Don't Do This:** Assume that all input will be valid. ## 6. Code Style and Formatting * Follow established code style guidelines (e.g., Airbnb, Google) for whitespace, indentation, and naming conventions. * Use descriptive variable names to clearly indicate the purpose of the state being managed. * Add comments to explain complex state transitions or mocking strategies. ## 7. Conclusion By following these coding standards, developers can write more robust, maintainable, and performant tests for state management in Vitest. Adhering to these best practices will improve the overall quality of the codebase and reduce the risk of bugs and security vulnerabilities. Remember to always use the latest version of the stack and its libraries to maximize performance, security, and compatibility with the latest features. This document is intended as a living document, and it will be updated as new best practices and technologies emerge.
# Testing Methodologies Standards for Vitest This document outlines the coding standards for testing methodologies using Vitest. These standards aim to ensure consistent, maintainable, performant, and secure tests across our projects. This document covers strategies for unit, integration, and end-to-end tests with Vitest. ## 1. Testing Pyramid & Levels of Testing ### 1.1. Standard: Adhere to the Testing Pyramid * **Do This:** Prioritize unit tests. Strive for a higher number of unit tests compared to integration or end-to-end tests. Reduce the number of end-to-end tests. * **Don't Do This:** Create a "top-heavy" testing pyramid with a large number of slow, brittle end-to-end tests. * **Why:** Unit tests are faster and more granular, leading to quicker feedback and easier debugging. End-to-end tests are slower, less specific, and more likely to break due to UI changes or environment issues. ### 1.2. Unit Tests * **Standard:** Unit tests should focus on testing a single unit of code in isolation (e.g., a function, a class method). * **Do This:** Mock dependencies to isolate the unit under test. * **Don't Do This:** Perform database operations, network requests, or file system accesses directly in unit tests. * **Why:** Isolating units makes tests faster and more reliable. Dependencies can introduce external factors that make tests flaky or slow. """typescript // Example: Unit test for a function that formats a date import { formatDate } from '../src/utils'; import { describe, expect, it, vi } from 'vitest'; describe('formatDate', () => { it('should format a date correctly', () => { const mockDate = new Date('2024-01-01T12:00:00.000Z'); vi.spyOn(global, 'Date').mockImplementation(() => mockDate); // mock Date object, to ensure date is always the same for testing expect(formatDate(new Date())).toBe('2024-01-01'); // now it won't be "new Date()", but mockDate. }); it('should handle different date objects', () => { const date = new Date('2024-02-15T08:30:00.000Z'); expect(formatDate(date)).toBe('2024-02-15'); }); }); """ ### 1.3. Integration Tests * **Standard:** Integration tests should verify the interaction between different components or modules. * **Do This:** Make API calls within your application and mock the external service responses. * **Don't Do This:** Test intricate business logic which should go into unit tests if possible. * **Why:** Validates that modules work together as expected, ensuring the larger system functions correctly. """typescript // Example: Integration test for an API client import { describe, expect, it, vi } from 'vitest'; import { fetchUserData } from '../src/apiClient'; global.fetch = vi.fn(() => Promise.resolve({ json: () => Promise.resolve({ id: 1, name: 'John Doe' }), }) ) as any; describe('fetchUserData', () => { it('should fetch user data from the API', async () => { const userData = await fetchUserData(1); expect(userData).toEqual({ id: 1, name: 'John Doe' }); expect(fetch).toHaveBeenCalledWith('/api/users/1'); }); it('should handle API errors', async () => { (fetch as any).mockImplementationOnce(() => Promise.reject('API Error')); await expect(fetchUserData(1)).rejects.toEqual('API Error'); }); }); """ ### 1.4. End-to-End (E2E) Tests * **Standard:** E2E tests should simulate real user interactions to validate the entire application flow. * **Do This:** Use tools like Playwright or Cypress for browser automation. Test critical user journeys. * **Don't Do This:** Use slow, brittle E2E tests for verifying logic that can be easily unit-tested. 
* **Why:** E2E tests provide confidence that the entire system works correctly from the user's perspective. """typescript // Example: Playwright E2E Test import { test, expect } from '@playwright/test'; test('Homepage has title', async ({ page }) => { await page.goto('http://localhost:3000/'); await expect(page).toHaveTitle("My App"); }); test('Navigation to about page', async ({ page }) => { await page.goto('http://localhost:3000/'); await page.getByRole('link', { name: 'About' }).click(); await expect(page).toHaveURL(/.*about/); }); """ ## 2. Test Structure and Organization ### 2.1. Standard: Arrange, Act, Assert (AAA) Pattern * **Do This:** Structure your tests into three distinct parts: Arrange (setup the test environment), Act (execute the code being tested), and Assert (verify the expected outcome). * **Don't Do This:** Mix setup, execution, and verification logic within a single block of code. * **Why:** AAA makes tests more readable, maintainable, and easier to understand. """typescript // Example: AAA pattern import { describe, expect, it } from 'vitest'; import { add } from '../src/math'; describe('add', () => { it('should add two numbers correctly', () => { // Arrange const a = 5; const b = 3; // Act const result = add(a, b); // Assert expect(result).toBe(8); }); }); """ ### 2.2. Standard: Test File Structure * **Do This:** Create a "test" directory mirroring your "src" directory for test files. Use "*.test.ts" or "*.spec.ts" naming convention. * **Don't Do This:** Place test files directly alongside source files. * **Why:** A consistent file structure makes it easier to locate and maintain tests. """ src/ components/ Button.tsx utils/ formatDate.ts test/ components/ Button.test.tsx utils/ formatDate.test.ts """ ### 2.3. Standard: Descriptive Test Names * **Do This:** Write descriptive test names that clearly explain what the test is verifying. Follow a convention like "should [verb] [expected result] when [scenario]". * **Don't Do This:** Use generic, unclear, or vague test names. * **Why:** Clear test names facilitate debugging and give a good overview of the tested functionality. """typescript // Good: it('should return "Hello, World!" when no name is provided', () => { /* ... */ }); // Bad: it('test', () => { /* ... */ }); """ ### 2.4. Standard: Grouping Tests with 'describe' * **Do This:** Use the "describe" block to group related tests for clarity and organization. * **Don't Do This:** Create a single, monolithic test file with no logical grouping. * **Why:** "describe" blocks improve test readability and help identify the area of the code being tested. """typescript import { describe, expect, it } from 'vitest'; import { calculateDiscount } from '../src/utils'; describe('calculateDiscount', () => { it('should apply a 10% discount for orders over $100', () => { expect(calculateDiscount(150)).toBe(15); }); it('should not apply a discount for orders under $100', () => { expect(calculateDiscount(50)).toBe(0); }); }); """ ## 3. Mocking and Stubbing ### 3.1. Standard: Minimize Mocking * **Do This:** Use mocks only when necessary to isolate the unit under test. Prefer real dependencies when possible. * **Don't Do This:** Mock everything by default. Over-mocking can blur the line between implementation change and test update. * **Why:** Reduces the risk of false positives and increases confidence in the tests' accuracy. ### 3.2. Standard: Use Vitest's Built-in Mocking * **Do This:** Use "vi.mock", "vi.spyOn", and "vi.fn" for mocking and stubbing in Vitest. 
* **Don't Do This:** Use external mocking libraries that may not be compatible with Vitest. * **Why:** Vitest's built-in mocking is well-integrated and performant. """typescript // Example: Mocking a module function import { describe, expect, it, vi } from 'vitest'; import { fetchData } from '../src/dataService'; import { processData } from '../src/processor'; vi.mock('../src/dataService', () => ({ fetchData: vi.fn(() => Promise.resolve([{ id: 1, name: 'Test Data' }])), })); describe('processData', () => { it('should process data from the dataService', async () => { const result = await processData(); expect(result).toEqual([{ id: 1, name: 'Processed Test Data' }]); }); }); """ ### 3.3. Standard: Restore Mocks After Each Test * **Do This:** Use "vi.restoreAllMocks()" (or "afterEach(vi.restoreAllMocks())") to reset mocks after each test. * **Don't Do This:** Leave mocks active between tests, which can lead to unexpected behavior and test pollution. * **Why:** Prevents interference between tests and ensures reliable results. """typescript import { describe, expect, it, vi, afterEach } from 'vitest'; import { externalService } from '../src/externalService'; import { myModule } from '../src/myModule'; describe('myModule', () => { afterEach(() => { vi.restoreAllMocks(); }); it('should call externalService correctly', () => { const spy = vi.spyOn(externalService, 'doSomething'); myModule.run(); expect(spy).toHaveBeenCalled(); }); it('should handle errors from externalService', async () => { vi.spyOn(externalService, 'doSomething').mockRejectedValue(new Error('Service Unavailable')); await expect(myModule.run()).rejects.toThrowError('Service Unavailable'); }); }); """ ### 3.4 Standard: Mocking In-Source Testing * **Do This:** Use "import.meta.vitest" inside of the scope you want to test. Run tests directly within component or module. * **Why:** Tests share the same scope making them able to test against private states. """typescript // src/index.ts export const add = (a: number, b: number) => a + b if (import.meta.vitest) { const { it, expect } = import.meta.vitest it('add', () => { expect(add(1, 2)).eq(3) }) } """ ## 4. Asynchronous Testing ### 4.1. Standard: Use "async/await" for Asynchronous Operations * **Do This:** Use "async" and "await" to handle asynchronous operations in your tests. * **Don't Do This:** Rely on callbacks or Promises without "async/await", which can make tests harder to read and debug. * **Why:** "async/await" makes asynchronous code look and behave more like synchronous code, improving readability and maintainability. """typescript // Example: Testing an asynchronous function import { describe, expect, it } from 'vitest'; import { fetchData } from '../src/apiClient'; describe('fetchData', () => { it('should fetch data successfully', async () => { const data = await fetchData('https://example.com/api/data'); expect(data).toBeDefined(); }); it('should handle errors when fetching data', async () => { try { await fetchData('https://example.com/api/error'); } catch (error: any) { expect(error.message).toBe('Failed to fetch data'); } }); }); """ ### 4.2. Standard: Handle Promises with "expect.resolves" and "expect.rejects" * **Do This:** Use "expect.resolves" to assert that a Promise resolves with a specific value, and "expect.rejects" to assert that a Promise rejects with a specific error. * **Don't Do This:** Use "try/catch" for successful asynchronous calls. * **Why:** "expect.resolves" and "expect.rejects" provide a more concise and readable way to test Promises. 
"""typescript import { describe, expect, it } from 'vitest'; import { createUser } from '../src/userService'; describe('createUser', () => { it('should create a user successfully', async () => { await expect(createUser('john.doe@example.com')).resolves.toBe('user123'); }); it('should reject with an error if the email is invalid', async () => { await expect(createUser('invalid-email')).rejects.toThrowError('Invalid email format'); }); }); """ ### 4.3. Standard: Use Fake Timers for Time-Dependent Tests * **Do This:** Use "vi.useFakeTimers()" with "vi.advanceTimersByTime()" to control the passage of time in your tests. * **Don't Do This:** Rely on "setTimeout" or "setInterval" with real timers, which can make tests slow and unreliable. * **Why:** Fake timers make time-dependent tests faster, more deterministic, and easier to control. """typescript import { describe, expect, it, vi, beforeEach } from 'vitest'; import { delayedFunction } from '../src/utils'; describe('delayedFunction', () => { beforeEach(() => { vi.useFakeTimers(); }); it('should execute the callback after a delay', () => { let executed = false; delayedFunction(() => { executed = true; }, 1000); expect(executed).toBe(false); vi.advanceTimersByTime(1000); expect(executed).toBe(true); }); it('should execute the callback at least after a delay of x milliseconds', () => { const callback = vi.fn(); delayedFunction(callback, 1000); vi.advanceTimersByTime(999); expect(callback).not.toHaveBeenCalled(); // not called yet vi.advanceTimersByTime(1); // advance another 1 ms expect(callback).toHaveBeenCalled(); // now is called }) }); """ ## 5. Test Data Management ### 5.1. Standard: Use Test Data Factories or Fixtures * **Do This:** Create test data factories or fixtures to generate consistent and reusable test data. * **Don't Do This:** Hardcode test data directly in your tests, leading to duplication and maintenance issues. * **Why:** Test data factories make it easier to create complex test data structures and ensure consistency across tests. """typescript // Example: Test data factory import { faker } from '@faker-js/faker'; export const createUser = (overrides = {}) => ({ id: faker.number.int(), email: faker.internet.email(), name: faker.person.fullName(), ...overrides, }); //Use it in your tests: import { describe, expect, it } from 'vitest'; import { createUser } from './factories'; describe('User', () => { it('should create a valid user', () => { const user = createUser(); expect(user).toHaveProperty('id'); expect(user).toHaveProperty('email'); }); it('should allow overriding properties', () => { const user = createUser({ name: 'Custom Name' }); expect(user.name).toBe('Custom Name'); }); }); """ ### 5.2. Standard: Avoid Sharing Test Data * **Do This:** Create new test data for each test case to avoid interference between tests. If you must, use a method like the "beforeEach" hook. * **Don't Do This:** Mutate shared test data, which can lead to unpredictable test results and flaky tests. * **Why:** Prevents test pollution and ensures that each test case is independent and reliable. ### 5.3. Standard: Seed Your Testing Database Before Tests * **Do This:** Seed database with a known state before any tests are run. * **Don't Do This:** Allow tests to create dependencies on each other's data and state. * **Why:** Ensures tests have a consistent, predictable, isolated environment. 
"""typescript // Example: Seeding database before running tests import { describe, expect, it, beforeAll, afterAll } from 'vitest'; import { seedDatabase, clearDatabase } from './db'; // your database setup file import { User } from '../src/models/User'; describe('User Model', () => { beforeAll(async () => { await seedDatabase(); // Seed the database with consistent test data }); afterAll(async () => { await clearDatabase(); // Clear the database after all tests are complete }); it('should create a user correctly', async () => { const newUser = await User.create({ name: 'Test User', email: 'test@example.com' }); expect(newUser.name).toBe('Test User'); }); it('should find a user by email', async () => { const user = await User.findOne({ where: { email: 'test@example.com' } }); expect(user).toBeDefined(); }); }); """ ## 6. Performance Testing ### 6.1. Standard: Use "performance.mark" and "performance.measure" for Performance Measurement * **Do This:** Utilize the "performance.mark" and "performance.measure" APIs to measure the execution time of critical code sections. * **Don't Do This:** Rely on manual time tracking or inaccurate timing methods. * **Why:** Provides precise performance metrics for optimizing code execution. """typescript import { describe, expect, it } from 'vitest'; import { expensiveFunction } from '../src/utils'; describe('expensiveFunction', () => { it('should execute within a reasonable time', () => { performance.mark('start'); expensiveFunction(); performance.mark('end'); const measure = performance.measure('expensiveFunction', 'start', 'end'); expect(measure.duration).toBeLessThan(100); // Milliseconds }); }); """ ### 6.2. Standard: Threshold-Based Assertions * **Do This:** Set thresholds or performance budgets by setting "expect(measure.duration).toBeLessThan(100)". * **Don't Do This:** Test performance in an unmeasurable relative "it's fast" way. * **Why:** Prevents performance regressions by ensuring code executes within acceptable time limits. ## 7. Security Considerations ### 7.1. Standard: Avoid Hardcoding Secrets in Tests * **Do This:** Use environment variables or configuration files to store sensitive information used in tests. * **Don't Do This:** Hardcode API keys, passwords, or other secrets directly in your test code. * **Why:** Protects sensitive information from exposure and reduces the risk of security breaches. ### 7.2. Standard: Sanitize Test Inputs * **Do This:** Sanitize test inputs to prevent injection attacks or other security vulnerabilities. * **Don't Do This:** Use unsanitized user inputs directly in your tests, which can introduce security risks. * **Why:** Helps identify potential security vulnerabilities in your code and prevent real-world attacks. Added security can come from "vi.fn()" mocking return or argument sanitization. ### 7.3 Standard: Mock Authentication and Authorization * **Do This:** Mock authentication and authorization services during tests to avoid making real external calls. * **Why:** Prevents sensitive credentials from use and allows for specific permissions to be tested. 
"""typescript // Example: Mock Authenticated User import { describe, expect, it, vi } from 'vitest'; import { getUserProfile } from '../src/authService'; import { getRestrictedData } from '../src/dataService'; vi.mock('../src/authService', () => ({ getUserProfile: vi.fn(() => ({ id: 'mocked-user', isAdmin: true })) }); describe('accessControlTesting', () => { it('should allow access for admin users', async () => { const profile = await getUserProfile(); const data = await getRestrictedData(profile.id); expect(data).toBeDefined(); }); }); """ ## 8. Continuous Integration (CI) ### 8.1. Standard: Run Tests on Every Commit * **Do This:** Configure your CI/CD pipeline to automatically run all tests on every commit or pull request. * **Don't Do This:** Only run tests manually or on a schedule, which can delay feedback and increase the risk of regressions. * **Why:** Provides immediate feedback on code changes and prevents regressions from making their way into production. ### 8.2. Standard: Use a Dedicated Test Environment * **Do This:** Run tests in a dedicated environment that is isolated from other processes and has a known configuration. * **Don't Do This:** Run tests in a shared environment or on your local machine, which can introduce inconsistencies and dependencies. * **Why:** Ensures consistent and reliable test results, regardless of the environment. ### 8.3 Standard: Utilize Vitest CLI Flags in CI * **Do This:** Use "--run" and "--reporter=junit" to ensure tests are running correctly on the CI process. JUnit provides a way to look back at test results. ## 9. Code Coverage ### 9.1. Standard: Aim for High Code Coverage * **Do This:** Strive for high code coverage (e.g., 80% or higher) to ensure that most of your code is being tested. * **Don't Do This:** Focus solely on code coverage metrics without considering the quality and effectiveness of your tests. * **Why:** Provides a measure of how much of your code is being tested and helps identify areas that may need more coverage. ### 9.2. Standard: Use "--coverage" Flag for Coverage Reports * **Do This:** Use the "--coverage" flag in Vitest to generate code coverage reports. * **Don't Do This:** Rely on external coverage tools that may not be compatible with Vitest. * **Why:** Provides detailed information about code coverage, including line, branch, and function coverage. ## 10. Test Doubles Test doubles mimic real components for testing purposes. Common types include: * **Stubs:** Provide predefined responses to calls. * **Mocks:** Verify interactions and behaviors. * **Spies:** Track how a function or method is used. * **Fakes:** Simplified implementations of a component. * **Dummies:** Pass placeholders when a value is needed but not used. ### 10.1 Standard: Use Doubles to Verify Proper Code Execution * **Do This:** Mock components or functions and verify those mocks got called as expected. * **Why:** Allows for test driven development when mocking. Helps isolate code. """typescript // mock axios and check if it gets called with the correct URL" import axios from 'axios'; import { fetchData } from '../src/apiFunctions'; vi.mock('axios'); it('fetches data from the correct URL', async () => { const mockAxios = vi.mocked(axios); mockAxios.get.mockResolvedValue({ data: { message: 'Success!' } }); const result = await fetchData('test-url'); expect(mockAxios.get).toHaveBeenCalledWith('test-url'); expect(result).toEqual({ message: 'Success!' }); }); """
# API Integration Standards for Vitest This document outlines coding standards and best practices for integrating Vitest tests with backend services and external APIs, ensuring maintainable, performant, and secure testing. ## 1. General Principles ### 1.1 Use Mocking and Stubbing **Do This:** - Use mocking libraries like "vi.fn()" from Vitest or "sinon" to isolate unit tests from external dependencies. - Stub API calls to return predefined responses, preventing real API calls during testing. **Don't Do This:** - Directly call real APIs within unit tests. - Rely on external services being available or predictable during testing. **Why:** Mocking and stubbing ensure test isolation and repeatable results, which are crucial for unit testing. Avoid relying on the state of external services that could be unstable or change unexpectedly. **Example:** """typescript // src/apiClient.ts export async function fetchData(id: string): Promise<any> { const response = await fetch("https://api.example.com/data/${id}"); if (!response.ok) { throw new Error("HTTP error! status: ${response.status}"); } return await response.json(); } // src/apiClient.test.ts import { fetchData } from './apiClient'; import { describe, it, expect, vi } from 'vitest'; global.fetch = vi.fn(() => Promise.resolve({ ok: true, json: () => Promise.resolve({ id: '123', value: 'testData' }), }) ) as any; describe('fetchData', () => { it('fetches data correctly', async () => { const data = await fetchData('123'); expect(data).toEqual({ id: '123', value: 'testData' }); expect(fetch).toHaveBeenCalledWith('https://api.example.com/data/123'); }); it('handles errors correctly', async () => { (global.fetch as any).mockImplementationOnce(() => Promise.resolve({ ok: false, status: 404, }) ); await expect(fetchData('456')).rejects.toThrowError('HTTP error! status: 404'); }); }); """ ### 1.2 Separate Unit and Integration Tests **Do This:** - Classify tests based on scope: unit tests (isolated), integration tests (connecting components), and end-to-end tests (entire system). - Use different Vitest configurations or file naming conventions to distinguish between test types. **Don't Do This:** - Mix unit tests with integration tests in the same files. **Why:** Separation allows for faster feedback cycles for unit tests (mocked) and more comprehensive but slower tests for integration (potentially using real or test instances of APIs). **Example:** """ // vitest.config.unit.js export default { test: { include: ['src/**/*.unit.test.ts'], environment: 'node', }, }; // vitest.config.integration.js export default { test: { include: ['src/**/*.integration.test.ts'], environment: 'node', //setupFiles: ['./test/setupIntegration.ts'], // setup for integration tests (env vars etc.) timeout: 10000, // Longer timeout for integration tests }, }; """ File structure: """ src/ - apiClient.ts - apiClient.unit.test.ts // unit tests (mocked API calls) - apiClient.integration.test.ts // integration tests (potentially real API - use with caution) """ ### 1.3 Configuration Management **Do This:** - Use environment variables or configuration files to manage API endpoints and authentication keys. - Load environment variables using libraries like "dotenv" and make them available in your Vitest environment. **Don't Do This:** - Hardcode API endpoints or secrets directly in test code. - Commit sensitive information to version control. 
**Why:** Securely configuring test environments prevents exposing sensitive data and makes it easier to switch between test and production environments. **Example:** """typescript // .env.test API_ENDPOINT=https://test.example.com/api API_KEY=your_test_api_key """ """typescript // vitest.config.ts import { defineConfig } from 'vitest/config'; import 'dotenv/config'; // Load environment variables export default defineConfig({ test: { globals: true, environment: 'node', setupFiles: ['./test/setupEnv.ts'], }, }); // test/setupEnv.ts process.env.API_ENDPOINT = process.env.API_ENDPOINT || 'https://backup.example.com/api'; // src/apiClient.test.ts import { fetchData } from './apiClient'; import { describe, it, expect, vi } from 'vitest'; describe('fetchData', () => { it('uses the configured API endpoint', async () => { global.fetch = vi.fn().mockResolvedValue({ ok: true, json: () => Promise.resolve({ id: '123', value: 'testData' }), }); await fetchData('123'); expect(fetch).toHaveBeenCalledWith("${process.env.API_ENDPOINT}/data/123"); }); }); """ ## 2. Advanced Mocking Techniques ### 2.1 Mocking Modules **Do This:** - Use "vi.mock" to replace entire modules with mock implementations. - Create a separate mock module file with the same exports as the original. **Don't Do This:** - Mutate imported modules directly in test files without using "vi.mock". **Why:** "vi.mock" provides a cleaner and more maintainable way to mock modules, particularly if you aim to replace the entire module's functionality. **Example:** """typescript // src/apiClient.ts export async function externalFunction(id: string): Promise<string> { return "Actual result for ID: ${id}"; } // src/apiClient.test.ts import { externalFunction } from './apiClient'; import { describe, it, expect, vi } from 'vitest'; vi.mock('./apiClient', async () => { return { externalFunction: vi.fn().mockResolvedValue('Mocked Response'), }; }); describe('externalFunction', () => { it('should use the mocked function', async () => { const result = await externalFunction('123'); expect(result).toBe('Mocked Response'); }); }); """ ### 2.2 Mocking with Spies **Do This:** - Use "vi.spyOn" to monitor the behavior of specific functions or methods without replacing them entirely. - Verify that mocked functions are called with the correct arguments and number of times. **Don't Do This:** - Over-mock functions unless necessary; sometimes, observing behavior is sufficient. **Why:** Spies are useful when you want to verify how a function is used without completely altering its behavior. **Example:** """typescript // src/service.ts export class MyService { callExternalAPI(id: string): string { console.log("Calling API with id: ${id}"); return "API called for id: ${id}"; } } // src/service.test.ts import { MyService } from './service'; import { describe, it, expect, vi } from 'vitest'; describe('MyService', () => { it('should call the external API', () => { const service = new MyService(); const spy = vi.spyOn(service, 'callExternalAPI'); service.callExternalAPI('123'); expect(spy).toHaveBeenCalledWith('123'); expect(spy).toHaveBeenCalledTimes(1); }); }); """ ### 2.3 Asynchronous Mocking **Do This:** - Use "mockResolvedValue" or "mockRejectedValue" to mock asynchronous function responses. - Accurately simulate success and failure scenarios. **Don't Do This:** - Use synchronous mocking techniques for asynchronous functions. **Why:** Properly mocking asynchronous functions is crucial for testing asynchronous code paths in a non-blocking and deterministic way. 
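Before the fetch-based example below, one additional pattern worth noting under this guidance: the "...Once" variants can script a sequence of responses, which is useful for retry or fallback logic. The "fetchWithRetry" helper here is an illustrative assumption, not an existing utility.
"""typescript
// Hedged sketch: scripting a sequence of async responses with ...Once variants.
import { describe, expect, it, vi } from 'vitest';

// Hypothetical helper under test: retries once if the first call rejects.
async function fetchWithRetry(load: () => Promise<string>): Promise<string> {
  try {
    return await load();
  } catch {
    return await load(); // single retry
  }
}

describe('fetchWithRetry', () => {
  it('recovers when the first call fails and the second succeeds', async () => {
    const load = vi.fn()
      .mockRejectedValueOnce(new Error('network down'))
      .mockResolvedValueOnce('ok');
    await expect(fetchWithRetry(load)).resolves.toBe('ok');
    expect(load).toHaveBeenCalledTimes(2);
  });
});
"""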
**Example:** """typescript // src/apiClient.ts export async function fetchData(id: string): Promise<any> { await new Promise(resolve => setTimeout(resolve, 10)); // Simulate async operation const response = await fetch("https://api.example.com/data/${id}"); if (!response.ok) { throw new Error("HTTP error! status: ${response.status}"); } return await response.json(); } // src/apiClient.test.ts import { fetchData } from './apiClient'; import { describe, it, expect, vi } from 'vitest'; describe('fetchData', () => { it('mocks a resolved value', async () => { global.fetch = vi.fn().mockResolvedValue({ ok: true, json: () => Promise.resolve({ data: 'mocked' }), }); const result = await fetchData('123'); expect(result).toEqual({ data: 'mocked' }); }); it('mocks a rejected value', async () => { global.fetch = vi.fn().mockRejectedValue(new Error('API error')); await expect(fetchData('123')).rejects.toThrow('API error'); }); }); """ ## 3. Integration Testing Strategies ### 3.1 Test Databases **Do This:** - Use separate databases for testing and production environments. - Use tools like Docker or in-memory databases (like SQLite) to manage test databases. - Seed the test database with known data before running tests. - Clean up the test database after each test run. **Don't Do This:** - Run tests against the production database. - Leave test data in the database after tests are complete. **Why:** Ensures tests do not affect production data and that tests are repeatable. **Example:** """typescript // test/setupIntegration.ts import { beforeAll, afterAll } from 'vitest'; import { Sequelize } from 'sequelize'; const sequelize = new Sequelize('test_db', 'user', 'password', { dialect: 'sqlite', storage: ':memory:', // Use an in-memory database logging: false, }); beforeAll(async () => { await sequelize.sync({ force: true }); // Create tables and drop existing // Seed the database (example): // await MyModel.bulkCreate([{ name: 'Test Data' }]); }); afterAll(async () => { await sequelize.close(); }); export { sequelize }; """ ### 3.2 API Contracts **Do This:** - Document API contracts using tools like OpenAPI/Swagger. - Validate API responses in integration tests against the documented contract. **Don't Do This:** - Assume API responses will always match expectations without verification. **Why:** Ensures API changes are caught during testing and prevents integration issues. **Example:** """typescript // src/apiClient.integration.test.ts import { fetchData } from './apiClient'; import { describe, it, expect } from 'vitest'; import {validate} from 'jsonschema'; const apiResponseSchema = { type: 'object', properties: { id: { type: 'string' }, value: { type: 'string' }, }, required: ['id', 'value'], }; describe('fetchData Integration', () => { it('fetches data and validates the API response', async () => { const data = await fetchData('123'); const validationResult = validate(data, apiResponseSchema); expect(validationResult.valid).toBe(true); expect(data).toHaveProperty('id'); expect(data).toHaveProperty('value'); }); }); """ ### 3.3 Test Doubles **Do This:** - Use test doubles like mocks, stubs, and spies strategically to control external dependencies during integration tests. - Use fakes and dummies when simple replacements are needed and behavior verification is not. **Don't Do This:** - Overuse test doubles, which can make tests too complex. **Why:** Effectively manages dependencies, making tests both more reliable and deterministic. ## 4. 
Security Considerations
### 4.1 Data Sanitization
**Do This:**
- Sanitize test data to prevent security vulnerabilities like SQL injection or XSS.
- Avoid using real user data in tests.
**Don't Do This:**
- Directly use unsanitized data from external sources in tests.
**Why:** Prevents accidental introduction or masking of security vulnerabilities.
**Example:**
"""typescript
import { describe, expect, it } from 'vitest';
import DOMPurify from 'dompurify'; // requires a DOM-like test environment (e.g., jsdom or happy-dom)

describe('Data Sanitization', () => {
  it('should sanitize potentially malicious input', () => {
    const maliciousInput = '<img src="x" onerror="alert(\'XSS\')">';
    const sanitizedInput = DOMPurify.sanitize(maliciousInput);
    expect(sanitizedInput).toBe('<img src="x">'); // Sanitized output
  });
});
"""
### 4.2 Authentication and Authorization
**Do This:**
- Mock authentication and authorization mechanisms to avoid using real credentials in tests.
- Verify that authorized users can access specific endpoints.
- Create separate test accounts with limited privileges.
**Don't Do This:**
- Use production credentials in test environments.
- Skip authorization checks in tests.
**Why:** Proper testing ensures authentication and authorization mechanisms work as expected without compromising security.
## 5. Performance Testing with Vitest
### 5.1 Measuring Execution Time
**Do This:**
- Use "performance.now()" or "performance.mark()" to measure the execution time of API calls during integration tests.
- Set performance thresholds and fail tests if execution time exceeds the threshold.
**Don't Do This:**
- Neglect performance testing and solely focus on functionality.
**Why:** Identifies performance bottlenecks and ensures API calls meet performance requirements.
**Example:**
"""typescript
import { describe, it, expect } from 'vitest';
import { fetchData } from './apiClient';

describe('Performance Testing', () => {
  it('should execute within a specified time limit', async () => {
    const startTime = performance.now();
    await fetchData('123');
    const endTime = performance.now();
    const executionTime = endTime - startTime;
    const performanceThreshold = 200; // Milliseconds
    expect(executionTime).toBeLessThan(performanceThreshold);
  });
});
"""
### 5.2 Load Testing Simulation
**Do This:**
- Simulate concurrent API calls to assess performance under load.
- Use libraries like "p-map" to control concurrency.
**Don't Do This:**
- Neglect load testing; it can reveal scalability issues that functional tests miss.
**Why:** Uncovers performance issues that only manifest under load.
## 6. Best Practices for Maintainability
### 6.1 Descriptive Test Names
**Do This:**
- Use clear and concise test names that describe the expected behavior.
- Include relevant information, such as input values and expected output.
**Don't Do This:**
- Use generic or vague test names that do not provide context.
**Why:** Improves readability and makes it easier to understand test failures.
**Example:**
"""typescript
it('should fetch data successfully with a valid ID', async () => {
  // ...
});

it('should throw an error when fetching data with an invalid ID', async () => {
  // ...
});
"""
### 6.2 Test Data Management
**Do This:**
- Use factories or fixtures to generate test data consistently.
- Keep test data separate from test logic.
**Don't Do This:**
- Hardcode test data directly in test cases.
**Why:** Simplifies test maintenance and prevents duplication of test data.
### 6.3 Clean Test Code
**Do This:**
- Follow the DRY (Don't Repeat Yourself) principle in test code.
- Refactor common setup logic into helper functions or reusable modules.
- Keep test cases focused and concise.
**Don't Do This:**
- Write redundant or overly complex test code.
**Why:** Simplifies test code and reduces the likelihood of errors. A short sketch of a reusable setup helper follows the closing note below.
By following these standards, you'll create a robust, maintainable, and secure Vitest testing environment for your API integrations.
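The setup-helper sketch referenced in Section 6.3: common "fetch" stubbing pulled into one reusable function. The "mockFetchJson" helper is hypothetical, and the "fetchData" import path is assumed to match the "apiClient" examples earlier in this document.
"""typescript
// Hedged sketch of the DRY guidance in Section 6.3: shared setup logic
// extracted into a helper. The mockFetchJson helper is an illustrative
// assumption, not an existing utility.
import { describe, expect, it, vi } from 'vitest';
import { fetchData } from './apiClient';

// Reusable helper: stubs global.fetch to resolve with the given JSON body.
function mockFetchJson(body: unknown, ok = true) {
  const fetchMock = vi.fn().mockResolvedValue({
    ok,
    status: ok ? 200 : 500,
    json: () => Promise.resolve(body),
  });
  global.fetch = fetchMock as any;
  return fetchMock;
}

describe('fetchData (using a shared setup helper)', () => {
  it('returns the parsed payload', async () => {
    mockFetchJson({ id: '123', value: 'testData' });
    await expect(fetchData('123')).resolves.toEqual({ id: '123', value: 'testData' });
  });

  it('rejects when the response is not ok', async () => {
    mockFetchJson({}, false);
    await expect(fetchData('123')).rejects.toThrow();
  });
});
"""
Keeping the helper's surface small (a response body plus an "ok" flag) lets each test read as Arrange-Act-Assert without repeating the response shape.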