# Performance Optimization Standards for Vitest
This document outlines performance optimization standards for Vitest, focusing on improving application speed, responsiveness, and resource usage during testing. These standards aim to guide developers in writing efficient and effective tests, ensuring a fast and reliable development workflow.
## 1. Test Suite Structure and Organization
### 1.1. Grouping Related Tests
**Standard:** Group related tests within "describe" blocks to improve readability, enable focused execution, and reduce setup costs.
**Why:** Grouping allows developers to quickly identify the scope of a test suite, facilitates targeted test execution (e.g., "vitest run --testNamePattern MyComponent"), and enables shared setup/teardown logic for related tests.
**Do This:**
"""typescript
// src/components/MyComponent.test.ts
import { describe, it, expect, beforeEach } from 'vitest';
import MyComponent from './MyComponent.vue';
import { mount } from '@vue/test-utils';
describe('MyComponent', () => {
  let wrapper;

  beforeEach(() => {
    wrapper = mount(MyComponent);
  });

  it('renders correctly', () => {
    expect(wrapper.exists()).toBe(true);
  });

  it('displays the correct default message', () => {
    expect(wrapper.text()).toContain('Hello from MyComponent!');
  });

  it('updates the message when the prop changes', async () => {
    await wrapper.setProps({ msg: 'New Message' });
    expect(wrapper.text()).toContain('New Message');
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid a flat structure with ungrouped tests
import { it, expect } from 'vitest';
import MyComponent from './MyComponent.vue';
import { mount } from '@vue/test-utils';
it('renders correctly', () => {
  const wrapper = mount(MyComponent);
  expect(wrapper.exists()).toBe(true);
});

it('displays the correct default message', () => {
  const wrapper = mount(MyComponent);
  expect(wrapper.text()).toContain('Hello from MyComponent!');
});
"""
### 1.2. Test File Naming Conventions
**Standard:** Use consistent and meaningful naming conventions for test files. A common convention is "[componentName].test.ts/js" or "[moduleName].spec.ts/js".
**Why:** Clear naming conventions make it easier to locate and understand tests, improving maintainability and collaboration. Vitest can also leverage naming patterns for targeted test runs.
**Do This:**
"""
src/
├── components/
│   ├── MyComponent.vue
│   └── MyComponent.test.ts   // Or MyComponent.spec.ts
└── utils/
    ├── stringUtils.ts
    └── stringUtils.test.ts   // Or stringUtils.spec.ts
"""
**Don't Do This:**
"""
src/
├── components/
│   ├── MyComponent.vue
│   └── test.ts               // Vague and unclear
└── utils/
    ├── stringUtils.ts
    └── utils.test.ts         // Ambiguous, doesn't clearly specify what's being tested
"""
## 2. Efficient Test Setup and Teardown
### 2.1. Minimizing Global Setup and Teardown
**Standard:** Avoid unnecessary global setup and teardown operations. Use "beforeEach" and "afterEach" hooks within "describe" blocks for context-specific setup and teardown.
**Why:** Global setup and teardown can significantly slow down test execution, especially in large projects. Isolating setup and teardown to specific test suites reduces overhead and makes tests more predictable.
**Do This:**
"""typescript
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
describe('MyComponent with specific data', () => {
  let component;
  let mockData;

  beforeEach(() => {
    mockData = { name: 'Test Name', value: 123 };
    component = createComponent(mockData); // Hypothetical createComponent function
  });

  afterEach(() => {
    component.destroy(); // Hypothetical destroy function to clean up resources
    mockData = null;
  });

  it('renders the name correctly', () => {
    expect(component.getName()).toBe('Test Name');
  });

  it('renders the value correctly', () => {
    expect(component.getValue()).toBe(123);
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid using global beforeAll and afterAll unless absolutely necessary
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
let globalComponent;

beforeAll(() => {
  // This will run once before *all* tests, potentially creating unnecessary overhead
  globalComponent = createGlobalComponent();
});

afterAll(() => {
  // This will run once after *all* tests, even those that don't need cleanup
  globalComponent.destroy();
});

describe('Test Suite 1', () => {
  it('test 1', () => {
    // ...
  });
});

describe('Test Suite 2', () => {
  it('test 2', () => {
    // ...
  });
});
"""
### 2.2. Leveraging "beforeAll" and "afterAll" Strategically
**Standard:** Use "beforeAll" and "afterAll" for expensive operations that only need to be performed once per test suite, such as database connections or large data set initialization. However, carefully consider the impact on test isolation.
**Why:** "beforeAll" and "afterAll" can optimize performance by avoiding redundant setup. However, global state changes within these hooks can introduce dependencies between tests, leading to flaky results.
**Do This (with caution and clear documentation):**
"""typescript
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { connectToDatabase, disconnectFromDatabase } from './database';
describe('Database Interactions', () => {
  let dbConnection;

  beforeAll(async () => {
    dbConnection = await connectToDatabase();
  });

  afterAll(async () => {
    await disconnectFromDatabase(dbConnection);
  });

  it('fetches data correctly', async () => {
    const data = await dbConnection.query('SELECT * FROM users');
    expect(data).toBeDefined();
  });

  it('inserts data correctly', async () => {
    // Note: SQL string literals use single quotes
    await dbConnection.query("INSERT INTO users (name) VALUES ('Test User')");
    // ...
  });
});
"""
**Don't Do This (if not necessary for performance):**
"""typescript
// Avoid overusing beforeAll if the setup is not truly expensive.
import { describe, it, expect, beforeAll } from 'vitest';
describe('Simple Operations', () => {
  let simpleValue;

  beforeAll(() => {
    // Unnecessary use of beforeAll for a simple assignment
    simpleValue = 10;
  });

  it('adds 5 to the value', () => {
    expect(simpleValue + 5).toBe(15);
  });
});
"""
### 2.3. Lazy Initialization
**Standard:** Use lazy initialization for resources that are only needed by a subset of tests. Initialize these resources only when they are first accessed.
**Why:** Lazy initialization avoids unnecessary setup costs for tests that don't require specific resources. This can significantly improve test suite run time, especially when dealing with complex or large-scale applications.
**Do This:**
"""typescript
import { describe, it, expect } from 'vitest';
describe('Conditional Resource Initialization', () => {
  let expensiveResource = null;

  const getExpensiveResource = () => {
    if (!expensiveResource) {
      expensiveResource = createExpensiveResource(); // Only create when needed
    }
    return expensiveResource;
  };

  it('test 1 does not need the resource', () => {
    expect(true).toBe(true); // Simple assertion
  });

  it('test 2 needs the expensive resource', () => {
    const resource = getExpensiveResource();
    expect(resource).toBeDefined();
    // ... use the resource
  });
});
"""
**Don't Do This:**
"""typescript
import { describe, it, expect, beforeAll } from 'vitest';
describe('Unconditional Resource Initialization', () => {
  let expensiveResource;

  beforeAll(() => {
    // Expensive resource is created even if tests don't use it, wasting resources
    expensiveResource = createExpensiveResource();
  });

  it('test 1 does not need the resource', () => {
    expect(true).toBe(true);
  });

  it('test 2 needs the expensive resource', () => {
    expect(expensiveResource).toBeDefined();
    // ... use the resource
  });
});
"""
## 3. Mocking and Stubbing Strategies
### 3.1. Minimizing External Dependencies
**Standard:** Mock or stub external dependencies (e.g., API calls, database queries, third-party services) to isolate units under test and avoid slow or unreliable external factors.
**Why:** Mocking and stubbing allows for predictable and fast test execution by eliminating dependence on external systems that may be unavailable, slow, or change unexpectedly. Vitest provides built-in mocking capabilities for this purpose.
**Do This:**
"""typescript
import { describe, it, expect, vi } from 'vitest';
import { fetchData } from './api'; // Hypothetical API function
import MyComponent from './MyComponent.vue';
import { mount } from '@vue/test-utils';
vi.mock('./api', () => {
  return {
    fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })),
  };
});

describe('MyComponent with Mocked API', () => {
  it('displays mocked data correctly', async () => {
    const wrapper = mount(MyComponent);
    await wrapper.vm.$nextTick(); // Ensure data is fetched and rendered
    expect(wrapper.text()).toContain('Mocked Data');
    expect(fetchData).toHaveBeenCalled();
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid making actual API calls during testing
import { describe, it, expect } from 'vitest';
import { fetchData } from './api';
import MyComponent from './MyComponent.vue';
import { mount } from '@vue/test-utils';
describe('MyComponent without Mocking', () => {
  it('displays fetched data (potentially slow and unreliable)', async () => {
    const wrapper = mount(MyComponent);
    await wrapper.vm.$nextTick();
    // May fail if the API is down or slow
    expect(wrapper.text()).toContain('Expected Data from API');
  });
});
"""
### 3.2. Mocking Only What's Necessary
**Standard:** Only mock the specific functions or modules that are directly involved in the test. Avoid mocking entire modules or services unless absolutely necessary.
**Why:** Over-mocking can lead to brittle tests that don't accurately reflect the behavior of the system. Mocking only the relevant parts allows for more focused and reliable tests.
**Do This:**
"""typescript
// Mock only the fetchData function, not the entire api module.
import { describe, it, expect, vi } from 'vitest';
import { fetchData, processData } from './api'; // Now processData remains real
import MyComponent from './MyComponent.vue';
import { mount } from '@vue/test-utils';
vi.mock('./api', async () => {
  const actual = await vi.importActual('./api');
  return {
    ...actual, // keep all the existing exports
    fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })),
  };
});

describe('MyComponent with Specific Mocking', () => {
  it('displays processed data correctly', async () => {
    const wrapper = mount(MyComponent);
    await wrapper.vm.$nextTick();
    expect(wrapper.text()).toContain('Mocked Data'); // Relies on the mocked fetchData result
    expect(fetchData).toHaveBeenCalled();
    // processData still runs with real logic
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid mocking the entire module if only one function needs to be mocked
import { describe, it, expect, vi } from 'vitest';
import * as api from './api'; // Import the whole module
import MyComponent from './MyComponent.vue';
import { mount } from '@vue/test-utils';
vi.mock('./api', () => {
  return {
    fetchData: vi.fn(() => Promise.resolve({ data: 'Mocked Data' })),
    processData: vi.fn(() => 'Mocked Processed Data'), // Unnecessary mocking
  };
});

describe('MyComponent with Over-Mocking', () => {
  it('displays mocked data correctly (but over-mocked)', async () => {
    const wrapper = mount(MyComponent);
    await wrapper.vm.$nextTick();
    expect(wrapper.text()).toContain('Mocked Processed Data'); // Uses the mocked processData, even though it's unnecessary
    expect(api.fetchData).toHaveBeenCalled();
  });
});
"""
### 3.3. Using "vi.spyOn" for Partial Mocking
**Standard:** Use "vi.spyOn" to mock specific methods on an existing object or module *without* replacing the entire object. This allows you to verify that the method was called and observe its arguments, while still executing the original implementation.
**Why:** "vi.spyOn" provides a more granular and less disruptive way to mock functionality, especially when you need to test interactions with a method while still relying on its original behaviour.
**Do This:**
"""typescript
import { describe, it, expect, vi } from 'vitest';
import * as MyModule from './myModule'; // Hypothetical module with several functions
describe('Using vi.spyOn', () => {
  it('should call the original function and allow assertions', () => {
    const spy = vi.spyOn(MyModule, 'myFunction'); // Spy on myFunction
    const result = MyModule.myFunction(1, 2);
    expect(spy).toHaveBeenCalledTimes(1);
    expect(spy).toHaveBeenCalledWith(1, 2);
    expect(result).toBe(3); // Assuming myFunction returns the sum of its arguments
    spy.mockRestore(); // Restore the original implementation of myFunction
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid using vi.mock if you only need to spy on a function
import { describe, it, expect, vi } from 'vitest';
import * as MyModule from './myModule';
vi.mock('./myModule', () => {
  return {
    myFunction: vi.fn((a, b) => a + b), // Replaces myFunction entirely - less ideal if you want to call the original.
  };
});

describe('Avoid replacing the function completely with vi.mock', () => {
  it('should call the original function and allow assertions', () => {
    // ... less flexible for observing calls and executing original code
  });
});
"""
## 4. Efficient Assertions and Expectations
### 4.1. Avoiding Excessive Assertions
**Standard:** Focus assertions on the specific behavior being tested in each test case. Avoid including unrelated or redundant assertions.
**Why:** Excessive assertions can make tests harder to understand and maintain, and can also slow down test execution. Each assertion adds overhead.
**Do This:**
"""typescript
import { describe, it, expect } from 'vitest';
describe('Focused Assertions', () => {
  it('correctly calculates the sum', () => {
    const result = calculateSum(2, 3);
    expect(result).toBe(5); // Focus on the sum itself
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid including irrelevant assertions.
import { describe, it, expect } from 'vitest';
describe('Excessive Assertions', () => {
  it('calculates the sum and checks unrelated properties', () => {
    const result = calculateSum(2, 3);
    expect(result).toBe(5);
    expect(typeof result).toBe('number'); // Redundant and unnecessary
    expect(result > 0).toBe(true); // Redundant and unnecessary
  });
});
"""
### 4.2. Using Specific Matchers
**Standard:** Use the most specific and appropriate Vitest matchers for each assertion. For example, use "toBe" for primitive values, "toEqual" for objects, "toContain" for arrays, and "toHaveBeenCalled" for mocks.
**Why:** Specific matchers improve the clarity and expressiveness of tests and can perform better by avoiding unnecessary comparisons or type conversions. Where available, type-aware matchers can also catch mismatched arguments at compile time.
**Do This:**
"""typescript
import { describe, it, expect, vi } from 'vitest';
describe('Specific Matchers', () => {
  it('uses toBe for primitive values', () => {
    expect(1 + 1).toBe(2);
  });

  it('uses toEqual for objects', () => {
    const obj1 = { a: 1, b: 2 };
    const obj2 = { a: 1, b: 2 };
    expect(obj1).toEqual(obj2);
  });

  it('uses toContain for arrays', () => {
    const arr = [1, 2, 3];
    expect(arr).toContain(2);
  });

  it('uses toHaveBeenCalled for mocks', () => {
    const mockFn = vi.fn();
    mockFn();
    expect(mockFn).toHaveBeenCalled();
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid using generic matchers when more specific ones are available.
import { describe, it, expect, vi } from 'vitest';
describe('Generic Matchers', () => {
  it('incorrectly uses toEqual for primitive values', () => {
    expect(1 + 1).toEqual(2); // Inefficient; 'toBe' is better for primitives
  });

  it('incorrectly uses toContain for objects', () => {
    const obj1 = { a: 1, b: 2 };
    const obj2 = { a: 1, b: 2 };
    expect(obj1).toContain(obj2); // Incorrect and will likely fail
  });
});
"""
### 4.3. Asynchronous Assertions
**Standard:** Use "async/await" with Vitest's built-in support for asynchronous testing. Ensure you wait for asynchronous operations to complete before making assertions.
**Why:** Asynchronous tests can be prone to errors if assertions are made before asynchronous operations have finished. Using "async/await" ensures that tests wait for completion, leading to more reliable results.
**Do This:**
"""typescript
import { describe, it, expect } from 'vitest';
import { fetchData } from './api'; // Hypothetical async function
describe('Asynchronous Assertions', () => {
  it('fetches data correctly', async () => {
    const data = await fetchData();
    expect(data).toBeDefined();
    // ... more assertions on the fetched data
  });
});
"""
**Don't Do This:**
"""typescript
// Avoid making assertions before asynchronous operations complete.
import { describe, it, expect } from 'vitest';
import { fetchData } from './api';
describe('Incorrect Asynchronous Assertions', () => {
  it('attempts to assert before data is fetched (likely to fail)', () => {
    let data;
    fetchData().then(result => {
      data = result;
    });
    expect(data).toBeDefined(); // Assertion made before fetchData resolves.
  });
});
"""
## 5. Code Coverage Considerations
### 5.1. Balancing Coverage and Performance
**Standard:** While code coverage is important, prioritize writing meaningful tests that cover critical functionality and edge cases. Avoid striving for 100% coverage at the expense of test performance or maintainability.
**Why:** High code coverage can provide a false sense of security if tests are not well-designed or if they focus on trivial code paths. Focus on writing tests that thoroughly validate the most important aspects of the system.
**Do This:**
* Identify critical functionalities and prioritize testing these areas thoroughly.
* Focus on covering boundary conditions, edge cases, and potential error scenarios.
* Use code coverage tools (e.g., the "v8" coverage provider enabled via Vitest's "--coverage" flag) to identify uncovered areas, but don't treat 100% coverage as the primary goal.
**Don't Do This:**
* Write tests solely to increase code coverage without considering their value in validating functionality.
* Create complex or convoluted tests to cover trivial code paths that are unlikely to cause issues.
* Neglect testing important areas simply because they are difficult to cover with tests.
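The balance above can also be enforced mechanically: instead of chasing 100%, set pragmatic coverage floors that fail the run when critical areas regress. A minimal sketch, assuming Vitest 1.0+ (where thresholds live under "coverage.thresholds"); the numbers are illustrative, not a recommendation:

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      thresholds: {
        lines: 80,     // illustrative floor, not a target to game
        branches: 70,
        functions: 80,
      },
    },
  },
});
```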
### 5.2. Excluding Non-Essential Files
**Standard:** Exclude non-essential files (e.g., configuration files, third-party libraries) from code coverage analysis to avoid skewing the results and wasting resources.
**Why:** Including non-essential files in code coverage analysis can make it difficult to identify areas of the codebase that truly need more attention.
**Do This:**
Configure the coverage reporter in "vitest.config.ts" to exclude files.
"""typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';
export default defineConfig({
  test: {
    coverage: {
      exclude: [
        '**/node_modules/**',
        '**/dist/**',
        '**/coverage/**',
        'src/config.ts',   // Example: exclude configuration file
        'src/external/**', // Ignore external libraries
      ],
    },
  },
});
"""
**Don't Do This:**
* Include all files in code coverage analysis without considering their relevance.
* Fail to exclude generated files or build artifacts from coverage reports.
## 6. Parallelization and Concurrency
### 6.1. Enabling Test Parallelization
**Standard:** Enable parallel test execution in Vitest to reduce overall test run time, especially for large projects.
**Why:** Vitest can run tests in parallel, leveraging multiple CPU cores to significantly speed up execution. This is especially beneficial for tests that involve I/O operations or long-running computations.
**Do This:**
By default, Vitest parallelizes test execution. You can control the level of parallelism within "vitest.config.ts". Make sure your tests are properly isolated for parallelism.
"""typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';
export default defineConfig({
  test: {
    // Vitest parallelizes by default. In Vitest 1.0+ the worker model is
    // configured via "pool" and "poolOptions" (the older boolean "threads"
    // option was removed).
    pool: 'threads', // or 'forks'
    poolOptions: {
      threads: {
        maxThreads: 4, // optional: cap the number of worker threads
      },
    },
  },
});
"""
**Don't Do This:**
* Disable parallelization unless there are specific reasons to do so (e.g., tests that rely on shared mutable state).
* Ignore potential concurrency issues in tests when running them in parallel (e.g., race conditions when accessing shared resources).
### 6.2. Managing Shared State in Parallel Tests
**Standard:** Avoid shared mutable state between tests, or carefully manage access to shared resources using appropriate synchronization mechanisms (e.g., locks, mutexes) to prevent race conditions.
**Why:** Parallel tests that share mutable state can lead to non-deterministic results and flaky test runs.
**Do This:**
* Ensure each test operates on its own isolated data set.
* Use database transactions or other isolation techniques to prevent interference between tests that interact with shared databases.
* If shared state is unavoidable, use appropriate locking mechanisms to protect access.
**Don't Do This:**
* Allow tests to modify global variables or shared data structures without proper synchronization.
* Assume that tests will always run in a specific order when running them in parallel.
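One low-effort way to satisfy the first bullet is a per-test fixture factory: each test derives its own unique data instead of sharing a record, so parallel workers never collide. A minimal sketch (the "makeTestUser" helper and its shape are hypothetical):

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical factory: every call returns a fresh, isolated fixture,
// so tests running in parallel never read or write the same record.
export function makeTestUser(overrides: { name?: string } = {}) {
  const id = randomUUID();
  return {
    id,
    name: overrides.name ?? `user-${id.slice(0, 8)}`,
  };
}

// In a test, each "it" block calls the factory for its own copy:
//   const user = makeTestUser({ name: 'Alice' });
//   expect(user.id).toBeDefined();
```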
## 7. Performance Monitoring and Analysis
### 7.1. Using Performance Measurement Tools
**Standard:** Use performance measurement tools (e.g., "console.time" and "console.timeEnd" for basic timing, profiling tools for detailed analysis) to identify performance bottlenecks in tests.
**Why:** Performance measurement tools can help pinpoint slow-running tests or inefficient code within tests, allowing developers to optimize them.
**Do This:**
"""typescript
import { describe, it, expect } from 'vitest';
describe('Performance Measurement', () => {
  it('measures the execution time of a function', () => {
    console.time('myFunction');
    myFunction(); // Hypothetical function to measure
    console.timeEnd('myFunction');
  });
});
"""
**Don't Do This:**
* Ignore performance issues in tests.
* Rely solely on intuition when identifying performance bottlenecks without using measurement tools.
* Leave performance measurement code in production code; add timing instrumentation to tests only while a measurement is needed, and remove it afterwards.
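When "console.time" is too coarse, a small helper around "performance.now()" gives sub-millisecond readings that can be logged from a test while investigating, then deleted. A minimal sketch (the "measure" helper is hypothetical, not a Vitest API):

```typescript
import { performance } from 'node:perf_hooks';

// Hypothetical helper: runs fn once and returns its duration in milliseconds.
function measure(fn: () => void): number {
  const start = performance.now();
  fn();
  return performance.now() - start;
}

// Example: time a candidate implementation inside a test body.
const elapsed = measure(() => {
  let sum = 0;
  for (let i = 0; i < 1_000_000; i++) sum += i;
});
console.log(`loop took ${elapsed.toFixed(2)} ms`);
```

Vitest also ships an experimental "bench" API for repeatable micro-benchmarks, which is usually a better fit than ad-hoc timing inside "it" blocks.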
By adhering to these performance optimization standards, developers can create Vitest test suites that are fast, reliable, and maintainable, ensuring a smooth and efficient development process.
danielsogl
Created Mar 6, 2025
# Security Best Practices Standards for Vitest
This document outlines security best practices for writing Vitest tests. It aims to guide developers building secure applications by ensuring the integrity, confidentiality, and availability of their systems. These standards are designed to be used in conjunction with general security best practices but focus specifically on how they apply to Vitest and testing.
## 1. Input Validation and Sanitization in Tests
### Standard 1.1: Validate and Sanitize Test Data
**Do This:**
* Always validate input data used in your tests to ensure it conforms to expected types, formats, and ranges.
* Sanitize input data used in tests to prevent injection attacks and other vulnerabilities.
**Don't Do This:**
* Assume input data is always safe or clean.
* Directly use unsanitized or unvalidated input data in tests without proper validation.
**Why:** Neglecting input validation and sanitization in tests can result in security vulnerabilities, especially when dealing with sensitive data or external inputs.
**Code Example:** """typescript import { describe, it, expect } from 'vitest'; describe('User Input Validation', () => { it('should handle valid usernames', () => { const username = 'validUser123'; const isValid = validateUsername(username); // Assume validateUsername is a utility function expect(isValid).toBe(true); }); it('should reject invalid usernames', () => { const username = '<script>alert("XSS")</script>'; // Malicious username const isValid = validateUsername(username); expect(isValid).toBe(false); }); it('should handle SQL injection attempts in username', () => { const username = "user'; DROP TABLE users;--"; const isValid = validateUsername(username); expect(isValid).toBe(false); }); // Example validateUsername function (implementation will vary) function validateUsername(username: string): boolean { // Check for null or undefined if (!username) return false; // Check length if (username.length < 3 || username.length > 20) return false; // Check for allowed characters (alphanumeric and underscores) if (!/^[a-zA-Z0-9_]+$/.test(username)) return false; return true; } }); """ ### Standard 1.2: Encoding Output Data in Tests **Do This:** * Encode output data properly to prevent cross-site scripting (XSS) attacks. * Use appropriate encoding techniques based on the context of the output. **Don't Do This:** * Directly output data without proper encoding. **Why:** Encoding output data ensures that it is displayed correctly and prevents malicious scripts from being executed in the browser when rendering dynamic content or test reports. **Code Example:** """typescript import { describe, it, expect } from 'vitest'; import { escape } from 'lodash'; // Example utility - use a trusted encoding library. 
lodash itself is not a security tool, its just being used here for demonstration purposes describe('Output Encoding', () => { it('should encode HTML entities to prevent XSS', () => { const maliciousData = '<script>alert("XSS")</script>'; const encodedData = escape(maliciousData); expect(encodedData).toBe('<script>alert("XSS")</script>'); }); }); """ ### Standard 1.3: Utilize Parameterized Tests **Do This:** * Use Parameterized tests to cover multiple input variants and edge cases to ensure comprehensive testing. * Define clear boundaries for input parameters to prevent unexpected behaviors. **Don't Do This:** * Manually duplicate tests for each input variant, which can lead to inconsistent coverage. * Perform testing without considering edge cases and boundary conditions. **Why:** Parameterized tests streamline test coverage by allowing multiple inputs to be tested using one test case, ensuring that a broader range of potential vulnerabilities is tested. **Code example:** """typescript import { describe, it, expect } from 'vitest'; describe('Parameterized Input Validation', () => { const testCases = [ { input: 'validUser', expected: true }, { input: '<script>alert("XSS")</script>', expected: false }, { input: "user'; DROP TABLE users;--", expected: false }, { input: 'a'.repeat(50), expected: false } // Too long ]; it.each(testCases)('should validate username: %s', ({ input, expected }) => { const isValid = validateUsername(input); expect(isValid).toBe(expected); }); function validateUsername(username: string): boolean { if (!username) return false; if (username.length < 3 || username.length > 20) return false; if (!/^[a-zA-Z0-9_]+$/.test(username)) return false; return true; } }); """ ## 2. Authentication and Authorization in Tests ### Standard 2.1: Simulate Authentication Flows **Do This:** * Simulate authentication flows in tests to verify that authentication mechanisms are functioning correctly. 
* Utilize mock authentication services or test-specific user identities for authentication.
**Don't Do This:**
* Use real user credentials in tests.
* Bypass authentication mechanisms during testing.
**Why:** Properly simulating authentication in tests ensures that access controls are functioning correctly and that only authenticated users can access protected resources.
**Code Example:**
"""typescript
import { describe, it, expect, vi } from 'vitest';
import { authenticateUser } from '../src/auth'; // Assumed auth module
import { AuthResult } from '../src/types';

describe('Authentication', () => {
  it('should authenticate a valid user', async () => {
    const mockUser = { username: 'testuser', password: 'password123' };
    const authResult: AuthResult = await authenticateUser(mockUser.username, mockUser.password);
    expect(authResult.success).toBe(true);
    expect(authResult.user).toBeDefined(); // Ideally, fully asserted.
  });
  it('should reject invalid user credentials', async () => {
    const mockUser = { username: 'invalidUser', password: 'wrongPassword' };
    const authResult: AuthResult = await authenticateUser(mockUser.username, mockUser.password);
    expect(authResult.success).toBe(false);
    expect(authResult.error).toBeDefined(); // Error message check
  });
});
"""
### Standard 2.2: Verify Authorization Rules
**Do This:**
* Verify that authorization rules are enforced correctly in tests.
* Test access control mechanisms to ensure that users only have access to resources they are authorized to access.
**Don't Do This:**
* Assume authorization rules are always enforced.
* Grant excessive permissions to test users.
**Why:** Properly verifying authorization rules ensures that only authorized users can access sensitive data and perform privileged operations, protecting against unauthorized access and privilege escalation attacks.
**Code Example:**
"""typescript
import { describe, it, expect, vi } from 'vitest';
import { checkPermission } from '../src/auth'; // Assumed auth module

describe('Authorization', () => {
  it('should allow access for authorized users', () => {
    const user = { id: 123, role: 'admin' };
    const resource = 'adminPanel';
    const canAccess = checkPermission(user, resource);
    expect(canAccess).toBe(true);
  });
  it('should deny access for unauthorized users', () => {
    const user = { id: 456, role: 'user' };
    const resource = 'adminPanel';
    const canAccess = checkPermission(user, resource);
    expect(canAccess).toBe(false);
  });

  // Example checkPermission function
  function checkPermission(user: any, resource: string): boolean {
    if (user.role === 'admin' && resource === 'adminPanel') {
      return true;
    }
    return false;
  }
});
"""
### Standard 2.3: Test Role-Based Access Control (RBAC)
**Do This:**
* Systematically test different user roles to verify that access permissions are assigned correctly under various scenarios.
* Create tests that confirm users in one role cannot access resources that are meant for a different role.
**Don't Do This:**
* Only test a small subset of roles, which can leave significant gaps in security coverage.
* Give all roles similar permissions during testing, thereby failing to effectively test the role-based system.
**Why:** Adequate RBAC testing ensures that the system access permissions align strictly with the intended roles, minimizing the risk of unauthorized access and enhancing security.
**Code Example:**

"""typescript
import { describe, it, expect } from 'vitest';

// Simplified local implementation for illustration; in a real project,
// import authorize from your auth module instead.
function authorize(role: string, resource: string): boolean {
  const permissions: Record<string, string[]> = {
    admin: ['admin-resource', 'user-resource'],
    user: ['user-resource'],
  };
  return permissions[role]?.includes(resource) ?? false;
}

describe('Role-Based Access Control', () => {
  it('should grant admin access to admin resources', () => {
    expect(authorize('admin', 'admin-resource')).toBe(true);
  });

  it('should deny user access to admin resources', () => {
    expect(authorize('user', 'admin-resource')).toBe(false);
  });

  it('should allow user access to user resources', () => {
    expect(authorize('user', 'user-resource')).toBe(true);
  });

  it('should deny guest access to any protected resources', () => {
    expect(authorize('guest', 'admin-resource')).toBe(false);
    expect(authorize('guest', 'user-resource')).toBe(false);
  });
});
"""

## 3. Data Protection in Tests

### Standard 3.1: Protect Sensitive Data

**Do This:**
* Handle sensitive data securely in tests, such as encryption keys, passwords, and personal information.
* Use appropriate encryption and storage mechanisms to protect sensitive data at rest and in transit.

**Don't Do This:**
* Store sensitive data in plain text in tests.
* Expose sensitive data in test logs or error messages.

**Why:** Protecting sensitive data ensures that it is not compromised or leaked during testing, preserving the confidentiality and integrity of user data.
**Code Example:**

"""typescript
import { describe, it, expect } from 'vitest';
import crypto from 'node:crypto';

const algorithm = 'aes-256-cbc'; // Use a secure algorithm

// Derive a 32-byte key from the passphrase; aes-256-cbc requires exactly 32 bytes.
// In real code, load the passphrase from an environment variable or secrets manager.
function deriveKey(passphrase: string): Buffer {
  return crypto.scryptSync(passphrase, 'test-salt', 32);
}

async function encryptData(data: string, passphrase: string): Promise<string> {
  const iv = crypto.randomBytes(16); // Generate a random initialization vector
  const cipher = crypto.createCipheriv(algorithm, deriveKey(passphrase), iv);
  const encrypted = Buffer.concat([cipher.update(data, 'utf8'), cipher.final()]);
  return iv.toString('hex') + ':' + encrypted.toString('hex'); // Prepend IV for decryption
}

async function decryptData(encryptedData: string, passphrase: string): Promise<string> {
  const [ivHex, cipherHex] = encryptedData.split(':');
  const iv = Buffer.from(ivHex, 'hex');
  const decipher = crypto.createDecipheriv(algorithm, deriveKey(passphrase), iv);
  const decrypted = Buffer.concat([decipher.update(Buffer.from(cipherHex, 'hex')), decipher.final()]);
  return decrypted.toString('utf8');
}

describe('Data Encryption', () => {
  it('should encrypt and decrypt data successfully', async () => {
    const plainText = 'Sensitive data';
    const passphrase = 'supersecretkey'; // Avoid hard-coding secrets outside of examples
    const cipherText = await encryptData(plainText, passphrase);
    expect(await decryptData(cipherText, passphrase)).toBe(plainText);
  });
});
"""

### Standard 3.2: Sanitize Data in Test Environments

**Do This:**
* Sanitize or anonymize data in test environments to remove personally identifiable information (PII) and other sensitive data.
* Use synthetic data or test data generators to create realistic but non-sensitive data for testing purposes.

**Don't Do This:**
* Use production data directly in test environments without sanitization.
* Expose sensitive data in test databases or configurations.
**Why:** Sanitizing data in test environments protects the privacy of users and prevents the accidental disclosure of sensitive information during testing.

**Code Example:**

"""typescript
import { describe, it, expect } from 'vitest';

// Masks everything except the first and last characters of the local part.
function sanitizeEmail(email: string): string {
  const [user, domain] = email.split('@');
  const sanitizedUser = user.charAt(0) + '*'.repeat(user.length - 2) + user.charAt(user.length - 1);
  return sanitizedUser + '@' + domain;
}

describe('Data Sanitization', () => {
  it('should sanitize email addresses', () => {
    const email = 'testuser@example.com';
    expect(sanitizeEmail(email)).toBe('t******r@example.com');
  });
});
"""

## 4. Vulnerability Testing

### Standard 4.1: Conduct Security-Focused Integration Tests

**Do This:**
* Incorporate security-focused integration tests, concentrating on the interaction of multiple system or application components to uncover vulnerabilities.
* Simulate various attack scenarios that can span numerous components, thereby revealing interconnected security lapses.

**Don't Do This:**
* Limit integration tests to only functional aspects, neglecting potential security exploits.
* Test components only in isolation, which overlooks the vulnerabilities that arise during component interactions.

**Why:** Security-focused integration testing is vital for identifying vulnerabilities that emerge from the interplay of different components, ensuring the robustness of the system as a whole.
**Code Example:**

"""typescript
import { describe, it, expect } from 'vitest';
import { createUser, updateUserRole, fetchUserData } from '../src/api'; // Assumed API module

describe('Security-Focused Integration Tests', () => {
  it('should prevent privilege escalation', async () => {
    // Create a standard user
    const user = await createUser({ username: 'testuser', role: 'user' });

    // Attempting to elevate the user's role to admin must fail explicitly;
    // rejects.toThrow() makes the test fail if no error is thrown at all.
    await expect(updateUserRole(user.id, 'admin')).rejects.toThrow();

    // Verify the user's role remains 'user'
    const userData = await fetchUserData(user.id);
    expect(userData.role).toBe('user');
  });
});
"""

### Standard 4.2: Perform Fuzz Testing

**Do This:**
* Use fuzz testing to provide invalid, unexpected, or random data as input into the application to identify potential crashes, errors, and vulnerabilities.
* Automate fuzz testing processes to ensure continuous testing during the development phase.

**Don't Do This:**
* Rely solely on manual input validation, which is prone to oversights and cannot cover all possible input variations.
* Neglect automated testing frameworks, which limits the coverage of fuzz testing scenarios.

**Why:** Fuzz testing plays a crucial role in exposing vulnerabilities that might not be apparent through conventional testing methods.
**Code Example:**

"""typescript
import { describe, it, expect } from 'vitest';

// Simple random string generator (enhance this for more robust fuzzing)
function generateRandomString(length: number): string {
  let result = '';
  const characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  for (let i = 0; i < length; i++) {
    result += characters.charAt(Math.floor(Math.random() * characters.length));
  }
  return result;
}

// Placeholder for the function under test; in a real project,
// import processInput from your application module instead.
function processInput(input: string): void {
  // ...input handling logic...
}

describe('Fuzz Testing', () => {
  it('should handle unexpected input without crashing', () => {
    const randomInput = generateRandomString(200);
    expect(() => processInput(randomInput)).not.toThrow();
  });

  it('should identify potential SQL injection vulnerabilities', () => {
    const sqlInjectionInput = "'; DROP TABLE users; --";
    expect(() => processInput(sqlInjectionInput)).not.toThrow(); // Or assert on specific handling for your use case
  });
});
"""

## 5. Dependency Management and Code Review

### Standard 5.1: Scan Dependencies for Vulnerabilities

**Do This:**
* Regularly scan project dependencies for known vulnerabilities using tools like "npm audit" or "yarn audit".
* Update vulnerable dependencies to the latest secure versions, or apply security patches if available.
* Utilize dependency scanning tools integrated into CI/CD pipelines to continuously monitor for vulnerabilities.

**Don't Do This:**
* Ignore security warnings or advisories related to project dependencies.
* Use outdated or unsupported dependencies with known vulnerabilities.

**Why:** Regularly scanning dependencies for vulnerabilities helps identify and mitigate security risks introduced by third-party libraries and packages.

**Action:**
* In your project, run "npm audit" or "yarn audit" to check for vulnerabilities.
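The CI/CD integration mentioned above can be sketched as a pipeline step. A minimal example as a GitHub Actions workflow; the file name, trigger, and "--audit-level" threshold are illustrative assumptions, not part of the standard:

"""yaml
# .github/workflows/dependency-audit.yml (hypothetical file name)
name: dependency-audit
on: [push, pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Fail the build if any dependency has a vulnerability of
      # moderate severity or higher.
      - run: npm audit --audit-level=moderate
"""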
### Standard 5.2: Conduct Security Code Reviews

**Do This:**
* Perform security code reviews to identify potential security flaws and vulnerabilities in test code as well as application code. Security reviews should be performed by developers with strong security expertise, or by independent security professionals.
* Pay close attention to areas such as input validation, authentication, authorization, data protection, and error handling during code reviews.
* Use automated static code analysis tools to identify common security issues and coding errors.

**Don't Do This:**
* Skip code reviews or rely solely on automated testing for security assurance.
* Ignore security recommendations or suggestions during code reviews.

**Why:** Security code reviews provide an additional layer of security assurance by identifying and addressing potential vulnerabilities that may not be detected through automated testing alone.

**Action:**
* Schedule regular security code review sessions with your team.
* Use static code analysis tools to automate the detection of security issues.

### Standard 5.3: Lock Down Dependency Versions

**Do This:**
* Pin dependency versions in "package.json" or similar files to avoid unexpected changes due to automatic updates that may introduce security issues.
* Use semantic versioning conventions to define the allowable range of updates, balancing stability and security.

**Don't Do This:**
* Use broad version ranges (e.g., ""^1.0.0"", which permits any 1.x release), as even minor and patch updates can introduce unreviewed changes and unexpected risks.
* Neglect to update dependencies periodically, which could miss important security patches.

**Why:** Pinning dependency versions ensures a controlled and predictable environment, decreasing the risk of introducing security vulnerabilities through automatic updates.

**Code Example:**

"""json
// Example package.json
{
  "dependencies": {
    "lodash": "4.17.21",
    "axios": "0.27.2"
  },
  "devDependencies": {
    "vitest": "1.6.0"
  }
}
"""

Note that "vitest" is pinned to an exact version as well; a floating tag like ""latest"" would defeat the purpose of version locking.

## 6. Error Handling and Logging

### Standard 6.1: Implement Secure Error Handling

**Do This:**
* Implement consistent and secure error handling throughout your test suites to prevent information leakage.
* Handle errors gracefully, providing informative messages where appropriate and preventing sensitive data from being exposed.

**Don't Do This:**
* Expose sensitive information (like database connection strings, API keys, or internal paths) in error messages intended for the user.
* Neglect to handle errors, which can lead to unhandled exceptions and potential security vulnerabilities.

**Why:** Proper error handling can prevent attackers from leveraging error messages to gain insights into the system.

**Code Example:**

"""typescript
import { describe, it, expect } from 'vitest';

// Placeholder for the operation under test; in a real project,
// import performSensitiveOperation from your application module instead.
function performSensitiveOperation(): Promise<void> {
  return Promise.reject(new Error('An error occurred'));
}

describe('Error Handling', () => {
  it('should handle errors gracefully without exposing sensitive information', async () => {
    let caught: Error | null = null;
    try {
      await performSensitiveOperation();
    } catch (error: any) {
      caught = error;
    }
    expect(caught).not.toBeNull(); // The operation must actually fail
    expect(caught?.message).not.toContain('Sensitive Information');
    expect(caught?.message).toContain('An error occurred');
  });
});
"""

### Standard 6.2: Secure Logging Practices

**Do This:**
* Implement comprehensive and secure logging mechanisms in your test suites to track important events and potential security incidents.
* Log relevant events, such as authentication attempts, access control decisions, and data modifications, for auditing purposes.

**Don't Do This:**
* Log sensitive information (like passwords, API keys, or personal data) in plain text.
* Neglect to monitor logs, which can lead to missed security incidents.

**Why:** Proper logging enables the detection and investigation of security incidents, supporting both proactive monitoring and reactive response.
**Code Example:**

"""typescript
import { describe, it } from 'vitest';

// Placeholder logger for illustration; a real implementation should add
// log rotation and masking of sensitive fields.
function logEvent(type: string, details: Record<string, any>): void {
  // ...write to your logging backend...
}

describe('Secure Logging', () => {
  it('should log important events without including sensitive information', () => {
    const eventType = 'authentication';
    const eventDetails = { user: 'testuser', status: 'success' }; // No passwords or tokens
    // No direct assertions here; the expectation is that the log mechanism securely handles the data
    logEvent(eventType, eventDetails);
  });
});
"""

## 7. Continuous Monitoring and Auditing

### Standard 7.1: Implement Continuous Security Monitoring

**Do This:**
* Set up automated monitoring systems to continuously track security-related metrics and events in the test and production environments.
* Monitor log files, system performance, and network traffic for suspicious activity or anomalies.

**Don't Do This:**
* Rely solely on manual monitoring or periodic security assessments.
* Ignore alerts or notifications generated by monitoring systems.

**Why:** Continuous security monitoring allows for early detection of security incidents and vulnerabilities, enabling timely response and mitigation.

### Standard 7.2: Conduct Regular Security Audits

**Do This:**
* Conduct regular security audits to assess the effectiveness of security controls and identify areas for improvement.
* Involve external security experts to provide independent assessments and recommendations.

**Don't Do This:**
* Skip security audits or treat them as one-time events.
* Ignore the recommendations or findings of security audits.

**Why:** Regular security audits help ensure that security measures are up-to-date and effective in protecting against evolving threats.

This document provides a foundation for security best practices in Vitest.
It should be reviewed and updated regularly to align with evolving security threats and best practices. The ultimate responsibility for security lies with the development team, who should apply these principles in their daily activities.
# State Management Standards for Vitest This document outlines the coding standards for state management when writing tests with Vitest, the next-gen testing framework powered by Vite. It aims to guide developers in creating robust, maintainable, and performant tests by adopting modern best practices for managing application state, data flow, and reactivity within the testing context. ## 1. Introduction to State Management in Vitest While Vitest primarily focuses on unit and integration testing, understanding state management principles is crucial for creating effective and reliable tests, especially when dealing with complex components or application logic. State might refer to various entities: component internal state, data fetched from external sources, or the overall application state managed by tools like Vuex, Redux, or Pinia. ### 1.1. Why State Management Matters in Testing * **Reproducibility:** Clearly defined state makes tests reproducible, ensuring that failures always indicate real issues. * **Isolation:** Proper state isolation prevents tests from interfering with each other, avoiding flaky test suites. * **Maintainability:** Well-structured state management simplifies test setup and teardown, making tests easier to understand and maintain. * **Accuracy:** Accurate state representation guarantees that tests accurately reflect the actual application behavior. ### 1.2. Scope of these Standards These standards cover: * Approaches for setting up and managing state within tests. * Strategies for isolating state between tests. * Best practices for mocking and stubbing external dependencies that influence state. * Specific considerations for testing reactive state with frameworks like Vue, React, and Svelte. ## 2. General Principles for State Management in Vitest ### 2.1. Declarative vs. Imperative State Setup * **Do This:** Prefer declarative state setup using "beforeEach" or "beforeAll" hooks to define the initial state for each test or test suite. 
"""typescript import { beforeEach, describe, expect, it } from 'vitest'; describe('Counter component', () => { let counter: { value: number }; beforeEach(() => { counter = { value: 0 }; // Declarative state setup }); it('should increment the counter value', () => { counter.value++; expect(counter.value).toBe(1); }); }); """ * **Don't Do This:** Avoid directly modifying the state within the test body unless it's the action being tested. This makes the test harder to read and understand as the initial state becomes implicit. ### 2.2. Isolate State Between Tests * **Do This:** Use "beforeEach" to reset the state before each test, ensuring that tests do not interfere with each other. Consider using a factory function to create fresh state instances. """typescript import { beforeEach, describe, expect, it } from 'vitest'; // Factory function to create a new state object const createCounter = () => ({ value: 0 }); describe('Counter component', () => { let counter: { value: number }; beforeEach(() => { counter = createCounter(); // Creates a fresh counter object for each test }); it('should increment the counter value', () => { counter.value++; expect(counter.value).toBe(1); }); it('should not be affected by previous test', () => { expect(counter.value).toBe(0); // Reset to initial state }); }); """ * **Don't Do This:** Share mutable state directly between tests without resetting it. This can lead to unexpected test failures and makes debugging difficult. ### 2.3. Minimize Global State * **Do This:** Encapsulate the state as much as possible within the component or module being tested. Use dependency injection to provide state dependencies. * **Don't Do This:** Rely heavily on global variables or shared mutable objects to manage state. This introduces tight coupling and makes tests harder to isolate. ### 2.4. 
Use Mocks and Stubs for External Dependencies * **Do This:** Use "vi.mock" or manual mocks to isolate the component being tested from external dependencies (e.g., databases, APIs). This allows you to control the state returned by the dependencies and focus on testing the component's logic. """typescript // api.ts const fetchData = async () => { const response = await fetch('/api/data'); return await response.json(); }; export default fetchData; // component.test.ts import { describe, expect, it, vi } from 'vitest'; import fetchData from '../api'; // Import the original module import MyComponent from '../MyComponent.vue'; //Example using Vue - framework agnostic principle vi.mock('../api', () => ({ //Mock the whole module default: vi.fn(() => Promise.resolve({ data: 'mocked data' })), })); describe('MyComponent', () => { it('should display mocked data', async () => { const wrapper = mount(MyComponent); await vi.waitFor(() => { // Adjust timeout as needed expect(wrapper.text()).toContain('mocked data'); }); expect(fetchData).toHaveBeenCalled(); }); }); """ * **Don't Do This:** Make real API calls or database queries during tests. This can make tests slow, unreliable, and dependent on external factors. Also avoid tightly coupling tests to a real database or API. ### 2.5. Understanding "vi.mock" vs. "vi.spyOn" * **"vi.mock":** Replaces an entire module (or specific functions within that module) with a mock implementation. Useful when you need to completely control the behavior of a dependency. Importantly, Vitest hoists the mock to the top of the scope, meaning the mock is defined *before* the actual import. This allows you to mock even before the component is imported. * **"vi.spyOn":** Wraps an existing function (either a function on an object or a directly imported function) and allows you to track its calls, arguments, and return values *without* replacing the original implementation. 
Useful when you want to assert that a function was called with specific arguments, or a certain number of times. However, "vi.spyOn" works on existing objects/functions and can only be used *after* the object is imported and the function exists.

"""typescript
import { describe, expect, it, vi } from 'vitest';

const myModule = {
  myFunction: (x: number) => x * 2,
};

describe('myFunction', () => {
  it('should call the function with correct arguments', () => {
    const spy = vi.spyOn(myModule, 'myFunction');
    myModule.myFunction(5);
    expect(spy).toHaveBeenCalledWith(5);
  });

  it('should return the correct value', () => {
    const spy = vi.spyOn(myModule, 'myFunction');
    spy.mockReturnValue(100); // Temporarily replace the implementation
    expect(myModule.myFunction(5)).toBe(100);
  });
});
"""

### 2.6. Async State and "vi.waitFor"

* **Do This:** When dealing with asynchronous state updates (e.g., fetching data from an API), use "vi.waitFor" to ensure that the state has been updated before making assertions. This is crucial to prevent race conditions and flaky tests.

"""typescript
import { describe, expect, it, vi } from 'vitest';

describe('Async Component', () => {
  it('should update state after async operation', async () => {
    const state: { data: string | null } = { data: null };

    const fetchData = async () => {
      return new Promise((resolve) => {
        setTimeout(() => {
          state.data = 'Async Data';
          resolve(state.data);
        }, 100);
      });
    };

    fetchData(); // Intentionally not awaited; vi.waitFor polls until the state updates
    await vi.waitFor(() => {
      expect(state.data).toBe('Async Data'); // Assert that the state has been updated
    });
  });
});
"""

* **Don't Do This:** Rely on fixed timeouts to wait for asynchronous operations to complete. This can lead to flaky tests if the operation takes longer than expected. When mocking async functions, prefer "mockResolvedValue" so the promise resolves deterministically instead of waiting on real timers.

### 2.7. Testing Reactivity with Testing Frameworks

* **Do This:** Leverage framework-specific testing utilities to properly observe and interact with reactive state.
For Vue, use "vue-test-utils", for React, use "@testing-library/react", and so on. These utilities provide methods for triggering state changes and waiting for updates to propagate.

**Example (Vue with vue-test-utils):**

"""typescript
import { describe, expect, it } from 'vitest';
import { mount } from '@vue/test-utils';
import { ref } from 'vue';

const MyComponent = {
  template: '<div>{{ count }}</div>',
  setup() {
    const count = ref(0);
    return { count };
  },
};

describe('MyComponent', () => {
  it('should render the correct count', async () => {
    const wrapper = mount(MyComponent);
    expect(wrapper.text()).toContain('0');

    // Simulate interaction with the component (e.g., by emitting an event)
    wrapper.vm.count = 5; // Direct state change in a simple example - typically you'd trigger an event
    await wrapper.vm.$nextTick(); // Wait for the DOM update
    expect(wrapper.text()).toContain('5');
  });
});
"""

* **Don't Do This:** Directly manipulate internal component state without using the testing framework's utilities. This can bypass reactivity mechanisms and lead to incorrect test results. It also produces fragile tests that depend on the component's internal implementation.

### 2.8. Immutability Where Possible

* **Do This:** Favor immutable data structures and state management techniques where applicable to avoid unintended side effects and simplify reasoning about state changes. Libraries like Immer can be helpful for working with immutable data.
* **Don't Do This:** Mutate state directly without considering the consequences for other parts of the application or tests.

### 2.9. Testing State Transitions

* **Do This:** Explicitly test all possible state transitions in a component or module. Use "describe" blocks to group tests related to specific state transitions.
"""typescript import { beforeEach, describe, expect, it } from 'vitest'; describe('Component with State Transitions', () => { let componentState: { isLoading: boolean; data: any; error: any }; beforeEach(() => { componentState = { isLoading: false, data: null, error: null }; }); describe('Initial State', () => { it('should start in the loading state', () => { expect(componentState.isLoading).toBe(false); expect(componentState.data).toBeNull(); expect(componentState.error).toBeNull(); }); }); describe('Loading State', () => { it('should set isLoading to true when fetching data', () => { componentState.isLoading = true; expect(componentState.isLoading).toBe(true); }); }); describe('Success State', () => { it('should set data when data is successfully fetched', () => { const mockData = { name: 'Test Data' }; componentState.data = mockData; componentState.isLoading = false; expect(componentState.data).toEqual(mockData); expect(componentState.isLoading).toBe(false); }); }); describe('Error State', () => { it('should set error when fetching data fails', () => { const mockError = new Error('Failed to fetch data'); componentState.error = mockError; componentState.isLoading = false; expect(componentState.error).toEqual(mockError); expect(componentState.isLoading).toBe(false); }); }); }); """ * **Don't Do This:** Assume that state transitions will work correctly without explicit tests. Missing tests for state transitions are frequently causes of bugs. ## 3. Testing Specific State Management Patterns ### 3.1. Testing Vuex/Pinia Stores * **Do This:** Mock the store's actions, mutations, and getters to isolate the component being tested. Use "createLocalVue" (for Vuex) to create a local Vue instance with the mocked store. For Pinia, mock the store directly using "vi.mock". 
**Example (Pinia with Vitest):** """typescript import {describe, expect, it, vi} from 'vitest'; import {useMyStore} from '../src/stores/myStore'; //Replace with your actual path vi.mock('../src/stores/myStore', () => { return { useMyStore: vi.fn(() => ({ count: 10, increment: vi.fn(), doubleCount: vi.fn().mockReturnValue(20), })), }; }); describe('Component using Pinia store', () => { it('should display the count from the store', () => { const store = useMyStore(); expect(store.count).toBe(10); }); it('should call the increment action when a button is clicked', () => { const store = useMyStore(); //Simulate user interaction or similar that is expected to call store.increment() // ... expect(store.increment).toHaveBeenCalled(); }); }); """ * **Don't Do This:** Directly interact with the real store during component tests. This can make tests slow, and introduces dependencies between tests, increases complexity. Test the store in a separate test file dedicated to store logic. ### 3.2. Testing Redux Reducers and Actions * **Do This:** Test reducers in isolation by providing them with different actions and verifying that they produce the expected state changes. Test actions by dispatching them and asserting on the side effects (e.g., API calls). """typescript import { describe, expect, it } from 'vitest'; import reducer from './reducer'; import { increment, decrement } from './actions'; describe('Counter Reducer', () => { it('should return the initial state', () => { expect(reducer(undefined, {})).toEqual({ value: 0 }); }); it('should handle INCREMENT', () => { expect(reducer({ value: 0 }, increment())).toEqual({ value: 1 }); }); it('should handle DECREMENT', () => { expect(reducer({ value: 1 }, decrement())).toEqual({ value: 0 }); }); }); """ * **Don't Do This:** Test reducers and actions together in a complex integration test. This makes it harder to isolate the cause of failures. 
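The reducer tests in Section 3.2 cover pure state changes; the action side can be sketched separately. A minimal, framework-free illustration follows; the "increment"/"decrement" creators and the recording store are defined inline here as stand-ins for the assumed "./actions" module and a real Redux store:

"""typescript
// Stand-in action creators, mirroring the assumed ./actions module.
type CounterAction = { type: 'INCREMENT' } | { type: 'DECREMENT' };

const increment = (): CounterAction => ({ type: 'INCREMENT' });
const decrement = (): CounterAction => ({ type: 'DECREMENT' });

// A minimal store stub that records dispatched actions so a test can
// assert on side effects without a real Redux store.
function createRecordingStore() {
  const dispatched: CounterAction[] = [];
  return {
    dispatch(action: CounterAction) {
      dispatched.push(action);
    },
    dispatched,
  };
}

const store = createRecordingStore();
store.dispatch(increment());
store.dispatch(decrement());
// store.dispatched is now [{ type: 'INCREMENT' }, { type: 'DECREMENT' }]
"""

In a Vitest spec, the final lines would become "expect(store.dispatched).toEqual([...])" inside an "it" block.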
### 3.3 Testing React Context * **Do This:** Create custom test providers to mock the context values and test components within a controlled context. Use "@testing-library/react" to render and interact with components. """typescript import { render, screen, fireEvent } from '@testing-library/react'; import { describe, expect, it } from 'vitest'; import React, { createContext, useContext, useState } from 'react'; // Context setup const CounterContext = createContext({ count: 0, setCount: (value: number) => {}, }); const useCounter = () => useContext(CounterContext); const CounterProvider = ({ children, initialCount = 0 }) => { const [count, setCount] = useState(initialCount); return ( <CounterContext.Provider value={{ count, setCount }}> {children} </CounterContext.Provider> ); }; // Component const CounterComponent = () => { const { count, setCount } = useCounter(); return ( <div> <span>{count}</span> <button onClick={() => setCount(count + 1)}>Increment</button> </div> ); }; describe('Counter Component with Context', () => { it('should display the initial count from context', () => { render( <CounterProvider initialCount={5}> <CounterComponent /> </CounterProvider> ); expect(screen.getByText('5')).toBeInTheDocument(); }); it('should increment the count when the button is clicked', () => { render( <CounterProvider initialCount={0}> <CounterComponent /> </CounterProvider> ); const incrementButton = screen.getByText('Increment'); fireEvent.click(incrementButton); expect(screen.getByText('1')).toBeInTheDocument(); }); it('should use a custom context value', () => { // Custom Provider for testing const TestProvider = ({ children }) => ( <CounterContext.Provider value={{ count: 100, setCount: () => {} }}> {children} </CounterContext.Provider> ); render( <TestProvider> <CounterComponent /> </TestProvider> ); expect(screen.getByText('100')).toBeInTheDocument(); }); }); """ * **Don't Do This:** Rely on the default context values during tests, as they might not accurately 
reflect the component's behavior in different scenarios. ## 4. Performance Considerations ### 4.1. Optimize State Setup * **Do This:** Minimize the amount of state that needs to be set up for each test. Only set up the state that is relevant to the specific test case. Use lazy initialization where possible. * **Don't Do This:** Create large, complex state objects unnecessarily. ### 4.2. Avoid Unnecessary State Updates * **Do This:** Only update the state when necessary. Avoid unnecessary state updates that can trigger re-renders or other performance-intensive operations. * **Don't Do This:** Continuously update state in a loop or in response to every event. ## 5. Security Considerations ### 5.1. Secure State Storage * **Do This:** If you are testing code that deals with sensitive data (e.g., passwords, API keys), ensure that the data is stored securely and is not exposed in test logs or reports. * **Don't Do This:** Store sensitive data in plain text or commit it to version control. Use environment variables or dedicated secrets management tools. ### 5.2. Input Validation * **Do This:** Test state updates with invalid or malicious input to ensure that the application handles errors gracefully and does not become vulnerable to security exploits. * **Don't Do This:** Assume that all input will be valid. ## 6. Code Style and Formatting * Follow established code style guidelines (e.g., Airbnb, Google) for whitespace, indentation, and naming conventions. * Use descriptive variable names to clearly indicate the purpose of the state being managed. * Add comments to explain complex state transitions or mocking strategies. ## 7. Conclusion By following these coding standards, developers can write more robust, maintainable, and performant tests for state management in Vitest. Adhering to these best practices will improve the overall quality of the codebase and reduce the risk of bugs and security vulnerabilities. 
Remember to always use the latest version of the stack and its libraries to maximize performance, security, and compatibility with the latest features. This document is intended as a living document, and it will be updated as new best practices and technologies emerge.
# Tooling and Ecosystem Standards for Vitest This document outlines coding standards specific to the Vitest testing framework, focusing on the tooling and ecosystem surrounding it. Adhering to these standards will enhance maintainability, readability, performance, and collaboration across projects utilizing Vitest. These standards are focused on the latest versions of Vitest and its related tooling. ## 1. IDE Integration and Configuration ### Standard: Use IDE integrations to enhance development workflow. * **Do This:** Configure your IDE with official Vitest extensions or plugins to provide real-time feedback, test execution, and debugging capabilities. * **Don't Do This:** Rely solely on command-line execution without leveraging IDE integrations for more efficient and interactive testing. **Why:** IDE integrations provide instant feedback, streamline debugging, and improve overall development efficiency. **Code Example (VS Code Configuration):** """json // .vscode/settings.json { "vitest.enable": true, "vitest.commandLine": "npx vitest", "vitest.include": ["**/*.{test,spec}.?(c|m)[jt]s?(x)"], "vitest.exclude": ["**/node_modules/**", "**/dist/**"] } """ **Explanation:** This configuration enables the Vitest extension in VS Code, sets the command-line execution path, and specifies the files to include and exclude from testing cycles. **Anti-Pattern:** Manually running tests via the command line for every small change. ### Standard: Configure file watching appropriately in your IDE. * **Do This:** Set up file watchers within your IDE to automatically trigger Vitest when relevant files change. * **Don't Do This:** Leave file watching disabled, forcing manual test runs after each code modification. **Why:** Automatic test execution drastically shortens the feedback loop during development. **Code Example (IDE-Specific Configuration - Generic description):** Consult your IDE's documentation on file watchers. 
Most IDEs have built-in or plugin-based systems that can detect file changes and run external tools, such as Vitest. For example, in WebStorm, use the "File Watchers" settings panel.

"""text
// Example workflow:
1. Install the relevant Vitest plugin for your IDE.
2. Configure the plugin to watch specific file types (.js, .ts, .vue, etc.).
3. Set the Vitest command to run on file change (e.g., "npx vitest run").
"""

**Anti-Pattern:** Waiting until the end of a coding session to run all tests.

## 2. Linters and Formatters

### Standard: Integrate linters and formatters with Vitest projects.

* **Do This:** Use ESLint, Prettier, or similar tools to enforce consistent code style and catch potential errors.
* **Don't Do This:** Neglect code style enforcement, resulting in inconsistent and potentially error-prone codebases.

**Why:** Linters and formatters promote code consistency, reduce errors, and improve readability.

**Code Example (ESLint Configuration):**

"""javascript
// .eslintrc.js
module.exports = {
  "env": {
    "browser": true,
    "es2021": true,
    "vitest/globals": true // Declares Vitest globals to the linter
  },
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "prettier" // eslint-config-prettier: disables formatting rules that conflict with Prettier. Make sure this is always the last configuration in the extends array.
  ],
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": "latest",
    "sourceType": "module"
  },
  "plugins": [
    "@typescript-eslint",
    "vitest"
  ],
  "rules": {
    // Custom rules here
  }
};
"""

**Explanation:** This ESLint configuration includes TypeScript support, declares Vitest's globals to the linter, and integrates Prettier for code formatting. Note that the "vitest/globals" env only tells ESLint that Vitest's global APIs ("describe", "it", "expect", and so on) exist; it is Vitest's own "globals: true" config option that actually exposes them so that you don't have to import them manually within your test files.
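The ESLint setting above only keeps the linter from flagging Vitest's globals as undefined; the runner itself must expose them via the "globals" option in its own config. A minimal fragment (the file path is illustrative):

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Expose describe/it/expect/vi as globals so test files
    // don't have to import them from 'vitest'.
    globals: true,
  },
})
```

With this in place, the ESLint env and the runtime behavior stay in sync.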
**Code Example (Prettier Configuration):**

"""javascript
// .prettierrc.js
module.exports = {
  semi: true,
  trailingComma: "all",
  singleQuote: true,
  printWidth: 120,
  tabWidth: 2,
};
"""

**Explanation:** This Prettier configuration sets basic formatting rules for semicolons, trailing commas, single quotes, line width, and tab width.

**Anti-Pattern:** Ignoring linting errors and warnings, leading to potential bugs and code style inconsistencies.

### Standard: Implement pre-commit hooks to automatically run linters, formatters, and tests.

* **Do This:** Use tools like Husky and lint-staged to ensure code quality before committing changes.
* **Don't Do This:** Allow developers to commit code without automated checks, increasing the risk of introducing errors or style violations.

**Why:** Pre-commit hooks enforce code quality, prevent broken builds, and automate repetitive tasks.

**Code Example (Husky and lint-staged Configuration):**

"""json
// package.json
{
  "devDependencies": {
    "husky": "^8.0.0",
    "lint-staged": "^13.0.0"
  },
  "scripts": {
    "prepare": "husky install",
    "test": "vitest"
  },
  "lint-staged": {
    "*.{js,ts,vue}": [
      "eslint --fix",
      "prettier --write"
    ],
    "*.{css,scss,md}": [
      "prettier --write"
    ],
    "*.test.ts": [
      "vitest run"
    ]
  }
}
"""

"""bash
# Enable husky
npm install husky --save-dev
npx husky install
npm pkg set scripts.prepare="husky install"
npm run prepare

# Add a pre-commit hook
npx husky add .husky/pre-commit "npx lint-staged"
"""

**Explanation:** This configuration installs Husky and lint-staged, configures a pre-commit hook that runs ESLint and Prettier on staged files, and runs any staged test files with Vitest. lint-staged appends the staged file paths to each command automatically, and since v10 it re-stages fixed files itself, so an explicit "git add" step is no longer needed. The "prepare" script ensures Husky is installed when the project is set up. This prevents commits with linting problems.

**Anti-Pattern:** Skipping pre-commit hooks or disabling them entirely, negating their benefits.

## 3. Code Coverage

### Standard: Use code coverage tools to identify untested areas.

* **Do This:** Integrate code coverage reporting into your Vitest workflow using Vitest's built-in coverage providers (v8 or Istanbul).
* **Don't Do This:** Disregard code coverage metrics, potentially leaving critical parts of the application untested.

**Why:** Code coverage analysis helps identify gaps in testing and ensures comprehensive test suites.

**Code Example (Vitest Configuration with Coverage):**

"""typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    environment: 'jsdom',
    coverage: {
      reporter: ['text', 'json', 'html'],
      all: true, // Consider all files, even if not explicitly imported in a test file
      include: ['src/**'], // Files to consider for coverage
      exclude: ['src/**/*.d.ts'] // Files to exclude from coverage
    },
  },
})
"""

**Explanation:** This configuration enables code coverage reporting with multiple reporters (text, JSON, and HTML) and specifies the files to include and exclude. The "all: true" setting is important to ensure full coverage reporting.

**Anti-Pattern:** Aiming for 100% code coverage without considering the quality and relevance of the tests. Focus also on branch and path coverage.

### Standard: Set reasonable code coverage thresholds.

* **Do This:** Define minimum code coverage thresholds (e.g., 80% for statements, branches, and functions) to maintain adequate test coverage.
* **Don't Do This:** Ignore code coverage thresholds, allowing untested code to slip through, or set unrealistically high targets that are difficult to achieve without sacrificing test quality.

**Why:** Coverage thresholds enforce a minimum standard of test coverage and prevent regressions.

**Code Example (Setting Coverage Thresholds):** Thresholds can be enforced in the Vitest configuration itself via the "coverage.thresholds" option, or validated in the CI/CD pipeline.
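In recent Vitest versions (1.0+), thresholds can be enforced directly in the config with no extra tooling — a minimal sketch, assuming Vitest 1.0 or later where the option lives under "coverage.thresholds":

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      reporter: ['text', 'json-summary'],
      // "vitest run --coverage" exits non-zero if any metric drops below 80%.
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 80,
        statements: 80,
      },
    },
  },
})
```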
"""bash # Example workflow (using a custom script or CI/CD pipeline): # 1. Run Vitest with coverage enabled. # 2. Parse the coverage report (e.g., using jq to extract coverage percentages from a JSON report). # 3. Fail the build if any coverage metric falls below the threshold. vitest run --coverage.enabled --coverage.reporter=json-summary # example using jq to validate in cicd: jq '.total.lines.pct > 80 and .total.statements.pct > 80' coverage-summary.json """ **Anti-Pattern:** Having no coverage thresholds, leading to low test coverage and increased risk of regressions. ## 4. Mocking and Stubbing Libraries ### Standard: Utilize mocking and stubbing libraries for isolating units of code. * **Do This:** Use "vi.mock()" in Vitest to mock dependencies and control their behavior during testing. * **Don't Do This:** Directly manipulate real dependencies or rely on global state, leading to brittle and unreliable tests. **Why:** Mocking allows for predictable and isolated testing, preventing external factors from influencing test results. **Code Example (Using "vi.mock()"):** """typescript // src/api.ts export async function fetchData() { const response = await fetch('https://example.com/data'); return response.json(); } // src/api.test.ts import { fetchData } from './api'; import { vi, describe, it, expect } from 'vitest'; describe('fetchData', () => { it('should return data from the API', async () => { const mockData = { message: 'Hello, world!' }; global.fetch = vi.fn().mockResolvedValue({ json: vi.fn().mockResolvedValue(mockData), }); const data = await fetchData(); expect(data).toEqual(mockData); expect(global.fetch).toHaveBeenCalledWith('https://example.com/data'); }); }); """ **Explanation:** This example mocks the "fetch" function and verifies that it is called with the correct URL and returns the expected data. 
**Anti-Pattern:** Over-mocking, which can lead to tests that are too tightly coupled to the implementation details and do not accurately reflect the system's behavior.

### Standard: Mock selectively and focus on external dependencies.

* **Do This:** Mock external services, databases, or complex third-party libraries to simplify testing and avoid external dependencies.
* **Don't Do This:** Mock internal implementation details or simple functions, which can make tests brittle and harder to maintain.

**Why:** Mocking external dependencies isolates the unit under test and ensures that tests are not affected by external factors.

**Code Example (Selective Mocking):** Instead of mocking the behavior of a function within the same module, focus on external dependencies.

"""typescript
// src/service.ts
import { fetchData } from './api';

export async function processData() {
  const data = await fetchData();
  return data.message.toUpperCase();
}

// src/service.test.ts
import { processData } from './service';
import { fetchData } from './api';
import { vi, describe, it, expect } from 'vitest';

vi.mock('./api', () => {
  return {
    fetchData: vi.fn().mockResolvedValue({ message: 'mocked data' }),
  };
});

describe('processData', () => {
  it('should process data from the API', async () => {
    const result = await processData();
    expect(result).toBe('MOCKED DATA');
    expect(fetchData).toHaveBeenCalled();
  });
});
"""

**Explanation:** This example mocks the "fetchData" function from the "api" module, allowing the "processData" function to be tested in isolation.

**Anti-Pattern:** Not using mocks when interacting with external services in unit tests, making the tests slow and dependent on external systems.

## 5. Test Runners and Configuration

### Standard: Configure Vitest to suit project requirements.

* **Do This:** Customize Vitest's configuration options (e.g., "test.environment", "test.globals", "test.setupFiles") to optimize the testing environment.
* **Don't Do This:** Use the default configuration without considering project-specific needs, potentially leading to suboptimal testing performance or compatibility issues.

**Why:** Configuration options allow customization of the testing environment to suit specific project requirements.

**Code Example (Vitest Configuration):**

"""typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'
import { fileURLToPath } from 'node:url'

export default defineConfig({
  resolve: {
    alias: {
      '@': fileURLToPath(new URL('./src', import.meta.url))
    }
  },
  test: {
    environment: 'jsdom', // or 'node'
    globals: true,
    setupFiles: ['./src/setupTests.ts'],
    include: ['**/*.{test,spec}.?(c|m)[jt]s?(x)'],
    exclude: ['node_modules', 'dist', '.idea', '.git', '.cache'],
    coverage: {
      reporter: ['html', 'text', 'json'],
    },
  },
})
"""

**Explanation:** This configuration sets the test environment to "jsdom", enables global variables, specifies setup files, and defines include/exclude patterns. It also configures coverage reporting and adds an "@" alias to make imports easier.

**Anti-Pattern:** Setting the "environment" incorrectly (e.g., using "jsdom" in a Node.js backend project).

### Standard: Utilize Vitest's command-line interface (CLI) effectively.

* **Do This:** Use CLI options (e.g., "--watch", "--run", "--testNamePattern", "--coverage") to control test execution and reporting.
* **Don't Do This:** Rely solely on default test execution, missing opportunities to optimize testing workflows and target specific tests.

**Why:** The CLI provides flexibility and control over test execution and reporting.
**Code Example (Vitest CLI Usage):**

"""bash
# Run all tests in watch mode
npx vitest --watch

# Run specific test files
npx vitest ./src/components/MyComponent.test.ts

# Run tests whose names match a pattern
npx vitest -t "MyComponent"

# Generate coverage report
npx vitest run --coverage
"""

**Explanation:** These commands demonstrate various CLI options for running tests in watch mode, targeting specific test files, filtering tests by name with "-t" / "--testNamePattern", and generating coverage reports.

**Anti-Pattern:** Not using watch mode during development, which slows down the testing feedback loop.

## 6. CI/CD Integration

### Standard: Integrate Vitest into the CI/CD pipeline for automated testing.

* **Do This:** Configure CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins) to run Vitest tests on every commit or pull request.
* **Don't Do This:** Manually run tests on local machines only, potentially missing regressions and delaying feedback.

**Why:** CI/CD integration automates testing, prevents regressions, and ensures code quality throughout the development lifecycle.

**Code Example (GitHub Actions Configuration):**

"""yaml
# .github/workflows/test.yml
name: Vitest Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18 # Or your project's Node version
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm run test -- --run # Ensure it runs all tests non-interactively
      - name: Upload coverage reports to Codecov
        uses: codecov/codecov-action@v3
        with:
          token: ${{ secrets.CODECOV_TOKEN }} # Required for private repositories
"""

**Explanation:** This GitHub Actions workflow checks out the code, sets up Node.js, installs dependencies, runs Vitest tests, and uploads the coverage report to Codecov.

**Anti-Pattern:** Not integrating tests into the CI/CD pipeline, which can lead to undetected regressions and lower code quality.
### Standard: Fail CI/CD builds on test failures.

* **Do This:** Configure CI/CD pipelines to fail builds if any Vitest tests fail, preventing faulty code from being deployed.
* **Don't Do This:** Allow CI/CD pipelines to pass despite test failures, creating the risk of deploying broken code to production.

**Why:** Failing builds on test failures ensures that only working code is deployed to production.

**Code Example (CI/CD Configuration - Generic example):** Most CI/CD platforms will automatically fail a build if the test command exits with a non-zero exit code, and Vitest exits non-zero by default when tests fail. Ensure your CI/CD system is configured to detect non-zero exit codes and fail the build accordingly. No specific configuration is usually necessary, provided that "npm run test" (or equivalent) is the command that runs the tests.

**Anti-Pattern:** Ignoring test failures in the CI/CD pipeline, potentially deploying broken code.

## 7. Reporting and Analytics

### Standard: Use reporting tools to visualize test results and track performance metrics.

* **Do This:** Integrate Vitest with reporting tools like SonarQube, Codecov, or other code quality platforms to track test results, coverage metrics, and performance metrics over time.
* **Don't Do This:** Rely solely on command-line output, missing opportunities to analyze trends and identify areas for improvement.

**Why:** Reporting tools provide valuable insights into test performance, coverage trends, and code quality.

**Code Example (Codecov Integration):** As shown in the GitHub Actions example above, Codecov can be easily integrated by uploading the coverage reports generated by Vitest. Ensure the "CODECOV_TOKEN" secret is properly configured.

**Anti-Pattern:** Not tracking test performance and coverage metrics over time, making it difficult to identify and address regressions or performance bottlenecks.

### Standard: Set up notifications for test failures and performance regressions.
* **Do This:** Configure CI/CD pipelines to send notifications (e.g., email, Slack) to developers when tests fail or performance regressions are detected.
* **Don't Do This:** Rely on developers to manually check test results, potentially delaying the detection and resolution of issues.

**Why:** Notifications ensure that developers are promptly informed of test failures and performance regressions, allowing them to address issues quickly.

**Code Example (CI/CD Notifications - Generic example):** Most CI/CD platforms offer built-in notification mechanisms. Configure your platform to send email or Slack notifications to the relevant team members when a build fails. For example, in GitHub Actions:

"""yaml
# .github/workflows/test.yml (continued)
      - name: Send Slack notification on failure
        if: failure()
        uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_CHANNEL: '#your-slack-channel'
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
"""

**This example requires the "rtCamp/action-slack-notify" action and a properly configured "SLACK_WEBHOOK" secret.** Similar configurations exist for other CI/CD platforms.

**Anti-Pattern:** Ignoring test failure notifications, leading to delayed issue resolution and potentially broken builds.

By adhering to these tooling and ecosystem standards, development teams can maximize the benefits of Vitest, improve code quality, and streamline the testing process. Remember to keep up-to-date with the latest features and best practices in the Vitest ecosystem.
# API Integration Standards for Vitest

This document outlines coding standards and best practices for integrating Vitest tests with backend services and external APIs, ensuring maintainable, performant, and secure testing.

## 1. General Principles

### 1.1 Use Mocking and Stubbing

**Do This:**
- Use mocking utilities like "vi.fn()" from Vitest or "sinon" to isolate unit tests from external dependencies.
- Stub API calls to return predefined responses, preventing real API calls during testing.

**Don't Do This:**
- Directly call real APIs within unit tests.
- Rely on external services being available or predictable during testing.

**Why:** Mocking and stubbing ensure test isolation and repeatable results, which are crucial for unit testing. Avoid relying on the state of external services that could be unstable or change unexpectedly.

**Example:**

"""typescript
// src/apiClient.ts
export async function fetchData(id: string): Promise<any> {
  const response = await fetch(`https://api.example.com/data/${id}`);
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  return await response.json();
}

// src/apiClient.test.ts
import { fetchData } from './apiClient';
import { describe, it, expect, vi } from 'vitest';

global.fetch = vi.fn(() =>
  Promise.resolve({
    ok: true,
    json: () => Promise.resolve({ id: '123', value: 'testData' }),
  })
) as any;

describe('fetchData', () => {
  it('fetches data correctly', async () => {
    const data = await fetchData('123');
    expect(data).toEqual({ id: '123', value: 'testData' });
    expect(fetch).toHaveBeenCalledWith('https://api.example.com/data/123');
  });

  it('handles errors correctly', async () => {
    (global.fetch as any).mockImplementationOnce(() =>
      Promise.resolve({
        ok: false,
        status: 404,
      })
    );
    await expect(fetchData('456')).rejects.toThrowError('HTTP error! status: 404');
  });
});
"""

### 1.2 Separate Unit and Integration Tests

**Do This:**
- Classify tests based on scope: unit tests (isolated), integration tests (connecting components), and end-to-end tests (entire system).
- Use different Vitest configurations or file naming conventions to distinguish between test types.

**Don't Do This:**
- Mix unit tests with integration tests in the same files.

**Why:** Separation allows for faster feedback cycles for unit tests (mocked) and more comprehensive but slower tests for integration (potentially using real or test instances of APIs).

**Example:**

"""javascript
// vitest.config.unit.js
export default {
  test: {
    include: ['src/**/*.unit.test.ts'],
    environment: 'node',
  },
};

// vitest.config.integration.js
export default {
  test: {
    include: ['src/**/*.integration.test.ts'],
    environment: 'node',
    //setupFiles: ['./test/setupIntegration.ts'], // setup for integration tests (env vars etc.)
    timeout: 10000, // Longer timeout for integration tests
  },
};
"""

File structure:

"""
src/
- apiClient.ts
- apiClient.unit.test.ts        // unit tests (mocked API calls)
- apiClient.integration.test.ts // integration tests (potentially real API - use with caution)
"""

### 1.3 Configuration Management

**Do This:**
- Use environment variables or configuration files to manage API endpoints and authentication keys.
- Load environment variables using libraries like "dotenv" and make them available in your Vitest environment.

**Don't Do This:**
- Hardcode API endpoints or secrets directly in test code.
- Commit sensitive information to version control.

**Why:** Securely configuring test environments prevents exposing sensitive data and makes it easier to switch between test and production environments.
**Example:** """typescript // .env.test API_ENDPOINT=https://test.example.com/api API_KEY=your_test_api_key """ """typescript // vitest.config.ts import { defineConfig } from 'vitest/config'; import 'dotenv/config'; // Load environment variables export default defineConfig({ test: { globals: true, environment: 'node', setupFiles: ['./test/setupEnv.ts'], }, }); // test/setupEnv.ts process.env.API_ENDPOINT = process.env.API_ENDPOINT || 'https://backup.example.com/api'; // src/apiClient.test.ts import { fetchData } from './apiClient'; import { describe, it, expect, vi } from 'vitest'; describe('fetchData', () => { it('uses the configured API endpoint', async () => { global.fetch = vi.fn().mockResolvedValue({ ok: true, json: () => Promise.resolve({ id: '123', value: 'testData' }), }); await fetchData('123'); expect(fetch).toHaveBeenCalledWith("${process.env.API_ENDPOINT}/data/123"); }); }); """ ## 2. Advanced Mocking Techniques ### 2.1 Mocking Modules **Do This:** - Use "vi.mock" to replace entire modules with mock implementations. - Create a separate mock module file with the same exports as the original. **Don't Do This:** - Mutate imported modules directly in test files without using "vi.mock". **Why:** "vi.mock" provides a cleaner and more maintainable way to mock modules, particularly if you aim to replace the entire module's functionality. 
**Example:** """typescript // src/apiClient.ts export async function externalFunction(id: string): Promise<string> { return "Actual result for ID: ${id}"; } // src/apiClient.test.ts import { externalFunction } from './apiClient'; import { describe, it, expect, vi } from 'vitest'; vi.mock('./apiClient', async () => { return { externalFunction: vi.fn().mockResolvedValue('Mocked Response'), }; }); describe('externalFunction', () => { it('should use the mocked function', async () => { const result = await externalFunction('123'); expect(result).toBe('Mocked Response'); }); }); """ ### 2.2 Mocking with Spies **Do This:** - Use "vi.spyOn" to monitor the behavior of specific functions or methods without replacing them entirely. - Verify that mocked functions are called with the correct arguments and number of times. **Don't Do This:** - Over-mock functions unless necessary; sometimes, observing behavior is sufficient. **Why:** Spies are useful when you want to verify how a function is used without completely altering its behavior. **Example:** """typescript // src/service.ts export class MyService { callExternalAPI(id: string): string { console.log("Calling API with id: ${id}"); return "API called for id: ${id}"; } } // src/service.test.ts import { MyService } from './service'; import { describe, it, expect, vi } from 'vitest'; describe('MyService', () => { it('should call the external API', () => { const service = new MyService(); const spy = vi.spyOn(service, 'callExternalAPI'); service.callExternalAPI('123'); expect(spy).toHaveBeenCalledWith('123'); expect(spy).toHaveBeenCalledTimes(1); }); }); """ ### 2.3 Asynchronous Mocking **Do This:** - Use "mockResolvedValue" or "mockRejectedValue" to mock asynchronous function responses. - Accurately simulate success and failure scenarios. **Don't Do This:** - Use synchronous mocking techniques for asynchronous functions. 
**Why:** Properly mocking asynchronous functions is crucial for testing asynchronous code paths in a non-blocking and deterministic way.

**Example:**

"""typescript
// src/apiClient.ts
export async function fetchData(id: string): Promise<any> {
  await new Promise(resolve => setTimeout(resolve, 10)); // Simulate async operation
  const response = await fetch(`https://api.example.com/data/${id}`);
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  return await response.json();
}

// src/apiClient.test.ts
import { fetchData } from './apiClient';
import { describe, it, expect, vi } from 'vitest';

describe('fetchData', () => {
  it('mocks a resolved value', async () => {
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      json: () => Promise.resolve({ data: 'mocked' }),
    });
    const result = await fetchData('123');
    expect(result).toEqual({ data: 'mocked' });
  });

  it('mocks a rejected value', async () => {
    global.fetch = vi.fn().mockRejectedValue(new Error('API error'));
    await expect(fetchData('123')).rejects.toThrow('API error');
  });
});
"""

## 3. Integration Testing Strategies

### 3.1 Test Databases

**Do This:**
- Use separate databases for testing and production environments.
- Use tools like Docker or in-memory databases (like SQLite) to manage test databases.
- Seed the test database with known data before running tests.
- Clean up the test database after each test run.

**Don't Do This:**
- Run tests against the production database.
- Leave test data in the database after tests are complete.

**Why:** Ensures tests do not affect production data and that tests are repeatable.
**Example:** """typescript // test/setupIntegration.ts import { beforeAll, afterAll } from 'vitest'; import { Sequelize } from 'sequelize'; const sequelize = new Sequelize('test_db', 'user', 'password', { dialect: 'sqlite', storage: ':memory:', // Use an in-memory database logging: false, }); beforeAll(async () => { await sequelize.sync({ force: true }); // Create tables and drop existing // Seed the database (example): // await MyModel.bulkCreate([{ name: 'Test Data' }]); }); afterAll(async () => { await sequelize.close(); }); export { sequelize }; """ ### 3.2 API Contracts **Do This:** - Document API contracts using tools like OpenAPI/Swagger. - Validate API responses in integration tests against the documented contract. **Don't Do This:** - Assume API responses will always match expectations without verification. **Why:** Ensures API changes are caught during testing and prevents integration issues. **Example:** """typescript // src/apiClient.integration.test.ts import { fetchData } from './apiClient'; import { describe, it, expect } from 'vitest'; import {validate} from 'jsonschema'; const apiResponseSchema = { type: 'object', properties: { id: { type: 'string' }, value: { type: 'string' }, }, required: ['id', 'value'], }; describe('fetchData Integration', () => { it('fetches data and validates the API response', async () => { const data = await fetchData('123'); const validationResult = validate(data, apiResponseSchema); expect(validationResult.valid).toBe(true); expect(data).toHaveProperty('id'); expect(data).toHaveProperty('value'); }); }); """ ### 3.3 Test Doubles **Do This:** - Use test doubles like mocks, stubs, and spies strategically to control external dependencies during integration tests. - Use fakes and dummies when simple replacements are needed and behavior verification is not. **Don't Do This:** - Overuse test doubles, which can make tests too complex. **Why:** Effectively manages dependencies, making tests both more reliable and deterministic. 
## 4. Security Considerations

### 4.1 Data Sanitization

**Do This:**
- Sanitize test data to prevent security vulnerabilities like SQL injection or XSS.
- Avoid using real user data in tests.

**Don't Do This:**
- Directly use unsanitized data from external sources in tests.

**Why:** Prevents accidental introduction or masking of security vulnerabilities.

**Example:**

"""typescript
import DOMPurify from 'dompurify'; // requires a DOM, e.g. Vitest's 'jsdom' environment
import { describe, it, expect } from 'vitest';

describe('Data Sanitization', () => {
  it('should sanitize potentially malicious input', () => {
    const maliciousInput = '<img src="x" onerror="alert(\'XSS\')">';
    const sanitizedInput = DOMPurify.sanitize(maliciousInput);
    expect(sanitizedInput).toBe('<img src="x">'); // Sanitized output
  });
});
"""

### 4.2 Authentication and Authorization

**Do This:**
- Mock authentication and authorization mechanisms to avoid using real credentials in tests.
- Verify that authorized users can access specific endpoints.
- Create separate test accounts with limited privileges.

**Don't Do This:**
- Use production credentials in test environments.
- Skip authorization checks in tests.

**Why:** Proper testing ensures authentication and authorization mechanisms work as expected without compromising security.

## 5. Performance Testing with Vitest

### 5.1 Measuring Execution Time

**Do This:**
- Use "performance.now()" or "performance.mark()" to measure the execution time of API calls during integration tests.
- Set performance thresholds and fail tests if execution time exceeds the threshold.

**Don't Do This:**
- Neglect performance testing and solely focus on functionality.

**Why:** Identifies performance bottlenecks and ensures API calls meet performance requirements.
**Example:** """typescript import { describe, it, expect } from 'vitest'; import { fetchData } from './apiClient'; describe('Performance Testing', () => { it('should execute within a specified time limit', async () => { const startTime = performance.now(); await fetchData('123'); const endTime = performance.now(); const executionTime = endTime - startTime; const performanceThreshold = 200; // Milliseconds expect(executionTime).toBeLessThan(performanceThreshold); }); }); """ ### 5.2 Load Testing Simulation **Do This:** - Simulate concurrent API calls to assess performance under load. - Use libraries like "p-map" to control concurrency. **Don't Do This:** - Neglect load testing, which can reveal scalability issues. **Why:** Uncovers performance issues that only manifest under load. ## 6. Best Practices for Maintainability ### 6.1 Descriptive Test Names **Do This:** - Use clear and concise test names that describe the expected behavior. - Include relevant information, such as input values and expected output. **Don't Do This:** - Use generic or vague test names that do not provide context. **Why:** Improves readability and makes it easier to understand test failures. **Example:** """typescript it('should fetch data successfully with a valid ID', async () => { // ... }); it('should throw an error when fetching data with an invalid ID', async () => { // ... }); """ ### 6.2 Test Data Management **Do This:** - Use factories or fixtures to generate test data consistently. - Keep test data separate from test logic. **Don't Do This:** - Hardcode test data directly in test cases. **Why:** Simplifies test maintenance and prevents duplication of test data. ### 6.3 Clean Test Code **Do This:** - Follow the DRY (Don't Repeat Yourself) principle in test code. - Refactor common setup logic into helper functions or reusable modules. - Keep test cases focused and concise. **Don't Do This:** - Write redundant or overly complex test code. 
**Why:** Simplifies test code and reduces the likelihood of errors.

By following these standards, you'll create a robust, maintainable, and secure Vitest testing environment for your API integrations.
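As a closing sketch for the load-testing simulation recommended in section 5.2: the concurrency control that a library like "p-map" provides is essentially a small worker pool. The helper below is illustrative only (its name and shape are not from any library) and is no substitute for a dedicated load-testing tool:

```typescript
// Run `calls` with at most `concurrency` in flight at once, timing the batch.
async function simulateLoad<T>(
  calls: Array<() => Promise<T>>,
  concurrency: number,
): Promise<{ results: T[]; elapsedMs: number }> {
  const start = Date.now();
  const results: T[] = new Array(calls.length);
  let next = 0;
  // Each worker repeatedly pulls the next index; the single-threaded
  // event loop means `next++` cannot race.
  const worker = async (): Promise<void> => {
    while (next < calls.length) {
      const i = next++;
      results[i] = await calls[i]();
    }
  };
  const workerCount = Math.max(1, Math.min(concurrency, calls.length));
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return { results, elapsedMs: Date.now() - start };
}
```

In a test, you might build an array of calls to your API client, run them at a fixed concurrency, and assert that "elapsedMs" stays under a chosen threshold.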
# Testing Methodologies Standards for Vitest

This document outlines the coding standards for testing methodologies using Vitest. These standards aim to ensure consistent, maintainable, performant, and secure tests across our projects. This document covers strategies for unit, integration, and end-to-end tests with Vitest.

## 1. Testing Pyramid & Levels of Testing

### 1.1. Standard: Adhere to the Testing Pyramid

* **Do This:** Prioritize unit tests. Strive for a higher number of unit tests compared to integration or end-to-end tests. Keep the number of end-to-end tests small.
* **Don't Do This:** Create a "top-heavy" testing pyramid with a large number of slow, brittle end-to-end tests.
* **Why:** Unit tests are faster and more granular, leading to quicker feedback and easier debugging. End-to-end tests are slower, less specific, and more likely to break due to UI changes or environment issues.

### 1.2. Unit Tests

* **Standard:** Unit tests should focus on testing a single unit of code in isolation (e.g., a function, a class method).
* **Do This:** Mock dependencies to isolate the unit under test.
* **Don't Do This:** Perform database operations, network requests, or file system accesses directly in unit tests.
* **Why:** Isolating units makes tests faster and more reliable. Dependencies can introduce external factors that make tests flaky or slow.

"""typescript
// Example: Unit test for a function that formats a date
import { formatDate } from '../src/utils';
import { describe, expect, it, vi } from 'vitest';

describe('formatDate', () => {
  it('should format a date correctly', () => {
    // Freeze the clock so "new Date()" always returns the same instant
    vi.useFakeTimers();
    vi.setSystemTime(new Date('2024-01-01T12:00:00.000Z'));
    expect(formatDate(new Date())).toBe('2024-01-01');
    vi.useRealTimers();
  });

  it('should handle different date objects', () => {
    const date = new Date('2024-02-15T08:30:00.000Z');
    expect(formatDate(date)).toBe('2024-02-15');
  });
});
"""

### 1.3. Integration Tests

* **Standard:** Integration tests should verify the interaction between different components or modules.
* **Do This:** Make API calls within your application and mock the external service responses.
* **Don't Do This:** Test intricate business logic here; that belongs in unit tests where possible.
* **Why:** Validates that modules work together as expected, ensuring the larger system functions correctly.

"""typescript
// Example: Integration test for an API client
import { describe, expect, it, vi } from 'vitest';
import { fetchUserData } from '../src/apiClient';

global.fetch = vi.fn(() =>
  Promise.resolve({
    json: () => Promise.resolve({ id: 1, name: 'John Doe' }),
  })
) as any;

describe('fetchUserData', () => {
  it('should fetch user data from the API', async () => {
    const userData = await fetchUserData(1);
    expect(userData).toEqual({ id: 1, name: 'John Doe' });
    expect(fetch).toHaveBeenCalledWith('/api/users/1');
  });

  it('should handle API errors', async () => {
    (fetch as any).mockImplementationOnce(() => Promise.reject('API Error'));
    await expect(fetchUserData(1)).rejects.toEqual('API Error');
  });
});
"""

### 1.4. End-to-End (E2E) Tests

* **Standard:** E2E tests should simulate real user interactions to validate the entire application flow.
* **Do This:** Use tools like Playwright or Cypress for browser automation. Test critical user journeys.
* **Don't Do This:** Use slow, brittle E2E tests for verifying logic that can be easily unit-tested.
* **Why:** E2E tests provide confidence that the entire system works correctly from the user's perspective.
"""typescript // Example: Playwright E2E Test import { test, expect } from '@playwright/test'; test('Homepage has title', async ({ page }) => { await page.goto('http://localhost:3000/'); await expect(page).toHaveTitle("My App"); }); test('Navigation to about page', async ({ page }) => { await page.goto('http://localhost:3000/'); await page.getByRole('link', { name: 'About' }).click(); await expect(page).toHaveURL(/.*about/); }); """ ## 2. Test Structure and Organization ### 2.1. Standard: Arrange, Act, Assert (AAA) Pattern * **Do This:** Structure your tests into three distinct parts: Arrange (setup the test environment), Act (execute the code being tested), and Assert (verify the expected outcome). * **Don't Do This:** Mix setup, execution, and verification logic within a single block of code. * **Why:** AAA makes tests more readable, maintainable, and easier to understand. """typescript // Example: AAA pattern import { describe, expect, it } from 'vitest'; import { add } from '../src/math'; describe('add', () => { it('should add two numbers correctly', () => { // Arrange const a = 5; const b = 3; // Act const result = add(a, b); // Assert expect(result).toBe(8); }); }); """ ### 2.2. Standard: Test File Structure * **Do This:** Create a "test" directory mirroring your "src" directory for test files. Use "*.test.ts" or "*.spec.ts" naming convention. * **Don't Do This:** Place test files directly alongside source files. * **Why:** A consistent file structure makes it easier to locate and maintain tests. """ src/ components/ Button.tsx utils/ formatDate.ts test/ components/ Button.test.tsx utils/ formatDate.test.ts """ ### 2.3. Standard: Descriptive Test Names * **Do This:** Write descriptive test names that clearly explain what the test is verifying. Follow a convention like "should [verb] [expected result] when [scenario]". * **Don't Do This:** Use generic, unclear, or vague test names. 
* **Why:** Clear test names facilitate debugging and give a good overview of the tested functionality.

"""typescript
// Good:
it('should return "Hello, World!" when no name is provided', () => { /* ... */ });

// Bad:
it('test', () => { /* ... */ });
"""

### 2.4. Standard: Grouping Tests with 'describe'

* **Do This:** Use the "describe" block to group related tests for clarity and organization.
* **Don't Do This:** Create a single, monolithic test file with no logical grouping.
* **Why:** "describe" blocks improve test readability and help identify the area of the code being tested.

"""typescript
import { describe, expect, it } from 'vitest';
import { calculateDiscount } from '../src/utils';

describe('calculateDiscount', () => {
  it('should apply a 10% discount for orders over $100', () => {
    expect(calculateDiscount(150)).toBe(15);
  });

  it('should not apply a discount for orders under $100', () => {
    expect(calculateDiscount(50)).toBe(0);
  });
});
"""

## 3. Mocking and Stubbing

### 3.1. Standard: Minimize Mocking

* **Do This:** Use mocks only when necessary to isolate the unit under test. Prefer real dependencies when possible.
* **Don't Do This:** Mock everything by default. Over-mocking couples tests to implementation details, so every refactor forces a test update.
* **Why:** Reduces the risk of false positives and increases confidence in the tests' accuracy.

### 3.2. Standard: Use Vitest's Built-in Mocking

* **Do This:** Use "vi.mock", "vi.spyOn", and "vi.fn" for mocking and stubbing in Vitest.
* **Don't Do This:** Use external mocking libraries that may not be compatible with Vitest.
* **Why:** Vitest's built-in mocking is well-integrated and performant.
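Before reaching for the Vitest APIs, it can help to see what a call-recording mock function such as "vi.fn" conceptually does. The sketch below is a framework-free, hypothetical "makeMockFn" helper (not Vitest's actual implementation): it records the arguments of every call and delegates to an optional stubbed implementation.

```typescript
// Hypothetical sketch of a call-recording mock function, in the
// spirit of vi.fn. Not Vitest's implementation.
type AnyFn = (...args: unknown[]) => unknown;

interface MockFn {
  (...args: unknown[]): unknown;
  calls: unknown[][]; // arguments of every invocation, in order
}

function makeMockFn(impl?: AnyFn): MockFn {
  const mock = ((...args: unknown[]) => {
    mock.calls.push(args); // record each call
    return impl?.(...args); // delegate to the stubbed implementation
  }) as MockFn;
  mock.calls = [];
  return mock;
}

// Usage: stub a dependency and inspect how it was called.
const notify = makeMockFn(() => 'sent');
notify('alice@example.com');
notify('bob@example.com');

console.log(notify.calls.length); // 2
console.log(notify.calls[0][0]); // 'alice@example.com'
```

A real "vi.fn" additionally tracks return values, thrown errors, and "this" contexts, and integrates with matchers like "toHaveBeenCalledWith"; the principle is the same.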
"""typescript // Example: Mocking a module function import { describe, expect, it, vi } from 'vitest'; import { fetchData } from '../src/dataService'; import { processData } from '../src/processor'; vi.mock('../src/dataService', () => ({ fetchData: vi.fn(() => Promise.resolve([{ id: 1, name: 'Test Data' }])), })); describe('processData', () => { it('should process data from the dataService', async () => { const result = await processData(); expect(result).toEqual([{ id: 1, name: 'Processed Test Data' }]); }); }); """ ### 3.3. Standard: Restore Mocks After Each Test * **Do This:** Use "vi.restoreAllMocks()" (or "afterEach(vi.restoreAllMocks())") to reset mocks after each test. * **Don't Do This:** Leave mocks active between tests, which can lead to unexpected behavior and test pollution. * **Why:** Prevents interference between tests and ensures reliable results. """typescript import { describe, expect, it, vi, afterEach } from 'vitest'; import { externalService } from '../src/externalService'; import { myModule } from '../src/myModule'; describe('myModule', () => { afterEach(() => { vi.restoreAllMocks(); }); it('should call externalService correctly', () => { const spy = vi.spyOn(externalService, 'doSomething'); myModule.run(); expect(spy).toHaveBeenCalled(); }); it('should handle errors from externalService', async () => { vi.spyOn(externalService, 'doSomething').mockRejectedValue(new Error('Service Unavailable')); await expect(myModule.run()).rejects.toThrowError('Service Unavailable'); }); }); """ ### 3.4 Standard: Mocking In-Source Testing * **Do This:** Use "import.meta.vitest" inside of the scope you want to test. Run tests directly within component or module. * **Why:** Tests share the same scope making them able to test against private states. """typescript // src/index.ts export const add = (a: number, b: number) => a + b if (import.meta.vitest) { const { it, expect } = import.meta.vitest it('add', () => { expect(add(1, 2)).eq(3) }) } """ ## 4. 
Asynchronous Testing ### 4.1. Standard: Use "async/await" for Asynchronous Operations * **Do This:** Use "async" and "await" to handle asynchronous operations in your tests. * **Don't Do This:** Rely on callbacks or Promises without "async/await", which can make tests harder to read and debug. * **Why:** "async/await" makes asynchronous code look and behave more like synchronous code, improving readability and maintainability. """typescript // Example: Testing an asynchronous function import { describe, expect, it } from 'vitest'; import { fetchData } from '../src/apiClient'; describe('fetchData', () => { it('should fetch data successfully', async () => { const data = await fetchData('https://example.com/api/data'); expect(data).toBeDefined(); }); it('should handle errors when fetching data', async () => { try { await fetchData('https://example.com/api/error'); } catch (error: any) { expect(error.message).toBe('Failed to fetch data'); } }); }); """ ### 4.2. Standard: Handle Promises with "expect.resolves" and "expect.rejects" * **Do This:** Use "expect.resolves" to assert that a Promise resolves with a specific value, and "expect.rejects" to assert that a Promise rejects with a specific error. * **Don't Do This:** Use "try/catch" for successful asynchronous calls. * **Why:** "expect.resolves" and "expect.rejects" provide a more concise and readable way to test Promises. """typescript import { describe, expect, it } from 'vitest'; import { createUser } from '../src/userService'; describe('createUser', () => { it('should create a user successfully', async () => { await expect(createUser('john.doe@example.com')).resolves.toBe('user123'); }); it('should reject with an error if the email is invalid', async () => { await expect(createUser('invalid-email')).rejects.toThrowError('Invalid email format'); }); }); """ ### 4.3. 
Standard: Use Fake Timers for Time-Dependent Tests * **Do This:** Use "vi.useFakeTimers()" with "vi.advanceTimersByTime()" to control the passage of time in your tests. * **Don't Do This:** Rely on "setTimeout" or "setInterval" with real timers, which can make tests slow and unreliable. * **Why:** Fake timers make time-dependent tests faster, more deterministic, and easier to control. """typescript import { describe, expect, it, vi, beforeEach } from 'vitest'; import { delayedFunction } from '../src/utils'; describe('delayedFunction', () => { beforeEach(() => { vi.useFakeTimers(); }); it('should execute the callback after a delay', () => { let executed = false; delayedFunction(() => { executed = true; }, 1000); expect(executed).toBe(false); vi.advanceTimersByTime(1000); expect(executed).toBe(true); }); it('should execute the callback at least after a delay of x milliseconds', () => { const callback = vi.fn(); delayedFunction(callback, 1000); vi.advanceTimersByTime(999); expect(callback).not.toHaveBeenCalled(); // not called yet vi.advanceTimersByTime(1); // advance another 1 ms expect(callback).toHaveBeenCalled(); // now is called }) }); """ ## 5. Test Data Management ### 5.1. Standard: Use Test Data Factories or Fixtures * **Do This:** Create test data factories or fixtures to generate consistent and reusable test data. * **Don't Do This:** Hardcode test data directly in your tests, leading to duplication and maintenance issues. * **Why:** Test data factories make it easier to create complex test data structures and ensure consistency across tests. 
"""typescript // Example: Test data factory import { faker } from '@faker-js/faker'; export const createUser = (overrides = {}) => ({ id: faker.number.int(), email: faker.internet.email(), name: faker.person.fullName(), ...overrides, }); //Use it in your tests: import { describe, expect, it } from 'vitest'; import { createUser } from './factories'; describe('User', () => { it('should create a valid user', () => { const user = createUser(); expect(user).toHaveProperty('id'); expect(user).toHaveProperty('email'); }); it('should allow overriding properties', () => { const user = createUser({ name: 'Custom Name' }); expect(user.name).toBe('Custom Name'); }); }); """ ### 5.2. Standard: Avoid Sharing Test Data * **Do This:** Create new test data for each test case to avoid interference between tests. If you must, use a method like the "beforeEach" hook. * **Don't Do This:** Mutate shared test data, which can lead to unpredictable test results and flaky tests. * **Why:** Prevents test pollution and ensures that each test case is independent and reliable. ### 5.3. Standard: Seed Your Testing Database Before Tests * **Do This:** Seed database with a known state before any tests are run. * **Don't Do This:** Allow tests to create dependencies on each other's data and state. * **Why:** Ensures tests have a consistent, predictable, isolated environment. 
"""typescript // Example: Seeding database before running tests import { describe, expect, it, beforeAll, afterAll } from 'vitest'; import { seedDatabase, clearDatabase } from './db'; // your database setup file import { User } from '../src/models/User'; describe('User Model', () => { beforeAll(async () => { await seedDatabase(); // Seed the database with consistent test data }); afterAll(async () => { await clearDatabase(); // Clear the database after all tests are complete }); it('should create a user correctly', async () => { const newUser = await User.create({ name: 'Test User', email: 'test@example.com' }); expect(newUser.name).toBe('Test User'); }); it('should find a user by email', async () => { const user = await User.findOne({ where: { email: 'test@example.com' } }); expect(user).toBeDefined(); }); }); """ ## 6. Performance Testing ### 6.1. Standard: Use "performance.mark" and "performance.measure" for Performance Measurement * **Do This:** Utilize the "performance.mark" and "performance.measure" APIs to measure the execution time of critical code sections. * **Don't Do This:** Rely on manual time tracking or inaccurate timing methods. * **Why:** Provides precise performance metrics for optimizing code execution. """typescript import { describe, expect, it } from 'vitest'; import { expensiveFunction } from '../src/utils'; describe('expensiveFunction', () => { it('should execute within a reasonable time', () => { performance.mark('start'); expensiveFunction(); performance.mark('end'); const measure = performance.measure('expensiveFunction', 'start', 'end'); expect(measure.duration).toBeLessThan(100); // Milliseconds }); }); """ ### 6.2. Standard: Threshold-Based Assertions * **Do This:** Set thresholds or performance budgets by setting "expect(measure.duration).toBeLessThan(100)". * **Don't Do This:** Test performance in an unmeasurable relative "it's fast" way. 
* **Why:** Prevents performance regressions by ensuring code executes within acceptable time limits.

## 7. Security Considerations

### 7.1. Standard: Avoid Hardcoding Secrets in Tests

* **Do This:** Use environment variables or configuration files to store sensitive information used in tests.
* **Don't Do This:** Hardcode API keys, passwords, or other secrets directly in your test code.
* **Why:** Protects sensitive information from exposure and reduces the risk of security breaches.

### 7.2. Standard: Sanitize Test Inputs

* **Do This:** Sanitize test inputs to prevent injection attacks or other security vulnerabilities.
* **Don't Do This:** Use unsanitized user inputs directly in your tests, which can introduce security risks.
* **Why:** Helps identify potential security vulnerabilities in your code and prevent real-world attacks. Sanitizing the return values and arguments of "vi.fn()" mocks adds a further layer of safety.

### 7.3. Standard: Mock Authentication and Authorization

* **Do This:** Mock authentication and authorization services during tests to avoid making real external calls.
* **Why:** Keeps sensitive credentials out of tests and allows specific permission levels to be tested.

"""typescript
// Example: Mocking an authenticated user
import { describe, expect, it, vi } from 'vitest';
import { getUserProfile } from '../src/authService';
import { getRestrictedData } from '../src/dataService';

vi.mock('../src/authService', () => ({
  getUserProfile: vi.fn(() => ({ id: 'mocked-user', isAdmin: true })),
}));

describe('accessControlTesting', () => {
  it('should allow access for admin users', async () => {
    const profile = await getUserProfile();
    const data = await getRestrictedData(profile.id);
    expect(data).toBeDefined();
  });
});
"""

## 8. Continuous Integration (CI)

### 8.1. Standard: Run Tests on Every Commit

* **Do This:** Configure your CI/CD pipeline to automatically run all tests on every commit or pull request.
* **Don't Do This:** Only run tests manually or on a schedule, which can delay feedback and increase the risk of regressions.
* **Why:** Provides immediate feedback on code changes and prevents regressions from making their way into production.

### 8.2. Standard: Use a Dedicated Test Environment

* **Do This:** Run tests in a dedicated environment that is isolated from other processes and has a known configuration.
* **Don't Do This:** Run tests in a shared environment or on your local machine, which can introduce inconsistencies and dependencies.
* **Why:** Ensures consistent and reliable test results, regardless of the environment.

### 8.3. Standard: Utilize Vitest CLI Flags in CI

* **Do This:** Use "--run" to disable watch mode and "--reporter=junit" to emit machine-readable results on CI. JUnit reports provide a way to look back at test results.

## 9. Code Coverage

### 9.1. Standard: Aim for High Code Coverage

* **Do This:** Strive for high code coverage (e.g., 80% or higher) to ensure that most of your code is being tested.
* **Don't Do This:** Focus solely on code coverage metrics without considering the quality and effectiveness of your tests.
* **Why:** Provides a measure of how much of your code is being tested and helps identify areas that may need more coverage.

### 9.2. Standard: Use the "--coverage" Flag for Coverage Reports

* **Do This:** Use the "--coverage" flag in Vitest to generate code coverage reports.
* **Don't Do This:** Rely on external coverage tools that may not be compatible with Vitest.
* **Why:** Provides detailed information about code coverage, including line, branch, and function coverage.

## 10. Test Doubles

Test doubles mimic real components for testing purposes. Common types include:

* **Stubs:** Provide predefined responses to calls.
* **Mocks:** Verify interactions and behaviors.
* **Spies:** Track how a function or method is used.
* **Fakes:** Simplified implementations of a component.
* **Dummies:** Placeholders passed when a value is required but never used.

### 10.1. Standard: Use Doubles to Verify Proper Code Execution

* **Do This:** Mock components or functions and verify that those mocks were called as expected.
* **Why:** Supports test-driven development and helps isolate the code under test.

"""typescript
// Mock axios and check that it gets called with the correct URL
import axios from 'axios';
import { describe, expect, it, vi } from 'vitest';
import { fetchData } from '../src/apiFunctions';

vi.mock('axios');

it('fetches data from the correct URL', async () => {
  const mockAxios = vi.mocked(axios);
  mockAxios.get.mockResolvedValue({ data: { message: 'Success!' } });
  const result = await fetchData('test-url');
  expect(mockAxios.get).toHaveBeenCalledWith('test-url');
  expect(result).toEqual({ message: 'Success!' });
});
"""
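The taxonomy above can also be made concrete without any framework. The sketch below hand-rolls a stub, a spy, and a fake for a hypothetical "Mailer" dependency (the "Mailer" interface, "makeSpyMailer", and "notifyAll" are illustrative names, not part of any real library); the code under test depends only on the interface, so any double can be swapped in.

```typescript
// Hand-rolled test doubles for a hypothetical Mailer dependency.
interface Mailer {
  send(to: string, body: string): boolean;
}

// Stub: returns a canned response, records nothing.
const stubMailer: Mailer = { send: () => true };

// Spy: records how it was used so a test can assert on the calls.
function makeSpyMailer() {
  const calls: Array<{ to: string; body: string }> = [];
  const mailer: Mailer = {
    send(to, body) {
      calls.push({ to, body });
      return true;
    },
  };
  return { mailer, calls };
}

// Fake: a simplified working implementation (an in-memory outbox).
class FakeMailer implements Mailer {
  outbox: string[] = [];
  send(to: string, _body: string): boolean {
    this.outbox.push(to);
    return true;
  }
}

// Code under test depends only on the Mailer interface.
function notifyAll(mailer: Mailer, users: string[]): number {
  return users.filter((u) => mailer.send(u, 'Welcome!')).length;
}

const { mailer: spy, calls } = makeSpyMailer();
console.log(notifyAll(spy, ['a@x.com', 'b@x.com'])); // 2
console.log(calls.length); // 2

const fake = new FakeMailer();
notifyAll(fake, ['c@x.com']);
console.log(fake.outbox); // [ 'c@x.com' ]
```

In Vitest you would usually reach for "vi.fn" or "vi.spyOn" instead of writing doubles by hand, but the distinction between canned responses (stubs), call recording (spies/mocks), and simplified working implementations (fakes) is the same.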