# State Management Standards for PyTest
This document outlines coding standards for managing state within PyTest test suites. Effective state management is crucial for creating reliable, maintainable, and performant tests. It covers approaches to managing application state, data flow, and reactivity as they apply to PyTest, using modern patterns from recent PyTest releases.
## 1. Introduction to State Management in PyTest
State management in testing refers to how you handle the data and dependencies that your tests rely on. This includes setting up initial conditions, modifying data during the test, and cleaning up afterwards. Proper state management is essential for:
* **Isolation:** Ensuring tests are independent and don't interfere with each other.
* **Repeatability:** Guaranteeing tests produce consistent results across different runs.
* **Maintainability:** Making tests easier to understand, modify, and debug.
* **Performance:** Optimizing setup and teardown processes to minimize test execution time.
## 2. Key Principles of State Management
* **Explicit Setup and Teardown:** Clearly define the initial state and cleanup actions for each test.
* **Minimize Shared State:** Reduce dependencies between tests to improve isolation.
* **Use Fixtures Wisely:** Leverage PyTest fixtures to manage shared resources and dependencies.
* **Data Isolation:** Avoid modifying global data or shared databases directly within tests.
* **Reactivity Awareness:** When testing reactive systems, ensure you can predictably trigger state changes and verify the resulting outputs.
## 3. Fixtures for State Management
PyTest fixtures are the primary mechanism for managing state. They provide a way to set up resources, dependencies, and data needed by tests.
### 3.1. Fixture Scope
The scope of a fixture determines how often it is created and destroyed. Choose the appropriate scope to balance isolation and performance.
* **"function" (default):** A fixture is created for each test function.
* **"class":** A fixture is created once per test class.
* **"module":** A fixture is created once per module.
* **"package":** A fixture is created once per package (introduced in PyTest 5.4). Particularly useful for setting up package-level resources.
* **"session":** A fixture is created once per test session.
"""python
import pytest
@pytest.fixture(scope="module")
def db_connection():
# Setup: Establish database connection
conn = connect_to_db()
yield conn # Provide the connection to tests
# Teardown: Close the connection
conn.close()
"""
**Do This:** Use the narrowest scope that still meets the requirements of your tests. Start with "function" scope and only broaden it when necessary.
**Don't Do This:** Use "session" scope for everything. This can lead to unintended side effects and make debugging difficult.
**Why?** Narrower scopes reduce dependencies between tests and improve isolation.
### 3.2. Fixture Autouse
The "autouse" parameter allows fixtures to be automatically applied to tests without explicitly requesting them as arguments. Use with caution!
"""python
import pytest
@pytest.fixture(autouse=True)
def setup_logging():
# Configure logging for all tests in the module
configure_logging()
yield
# Reset logging configuration
reset_logging()
"""
**Do This:** Use "autouse=True" for fixtures that perform essential setup or teardown tasks that are required by most or all tests in a module or class.
**Don't Do This:** Overuse "autouse=True". Explicitly injecting fixtures makes dependencies clear and promotes readability.
**Why?** Explicit dependencies improve code clarity and reduce the risk of unexpected behavior.
### 3.3. Fixture Finalization (Teardown)
Fixtures should include cleanup logic to release resources and reset state. PyTest provides several mechanisms for finalization:
* **"yield" statement:** The code after the "yield" statement is executed after the test completes.
* **"request.addfinalizer":** A more flexible way to register finalization functions.
"""python
import pytest
@pytest.fixture()
def temp_file(tmp_path):
# Setup: Create a temporary file
file_path = tmp_path / "test_file.txt"
with open(file_path, "w") as f:
f.write("Initial content")
yield file_path
# Teardown: Remove the temporary file
file_path.unlink()
"""
"""python
import pytest
@pytest.fixture()
def resource_a(request):
#Setup: Acquire some external resource
resource = acquire_resource()
def fin():
release_resource(resource)
request.addfinalizer(fin)
return resource
"""
**Do This:** Always include proper finalization logic to prevent resource leaks and ensure a clean state for subsequent tests.
**Don't Do This:** Rely on garbage collection to clean up resources. It is not guaranteed to run promptly and can leave resources open across tests.
**Why?** Finalization ensures tests are isolated and repeatable.
### 3.4. Parametrization
Fixtures can be parametrized to provide different values to tests. This is useful for testing the same logic with different inputs.
"""python
import pytest
@pytest.fixture(params=["value1", "value2", "value3"])
def param_fixture(request):
return request.param
def test_param_fixture(param_fixture):
assert isinstance(param_fixture, str)
print(f"Testing with value: {param_fixture}")
"""
**Do This:** Use parametrization to test a range of inputs and edge cases with the same test logic.
**Don't Do This:** Hardcode different input values within the test function itself. Parametrization keeps the test concise and maintainable.
**Why?** Parametrization improves test coverage and reduces code duplication.
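Test functions can also be parametrized directly with "pytest.mark.parametrize", which pairs well with parametrized fixtures. A minimal sketch (the "add" function is hypothetical):
```python
import pytest

def add(a, b):
    return a + b

@pytest.mark.parametrize(
    "a, b, expected",
    [(1, 2, 3), (0, 0, 0), (-1, 1, 0)],
    ids=["positives", "zeros", "mixed-signs"],
)
def test_add(a, b, expected):
    # Each parameter set becomes its own test case in the report
    assert add(a, b) == expected
```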
### 3.5. Sharing Fixtures Across Modules
Fixtures can be defined in a "conftest.py" file, which PyTest discovers automatically. This lets you share fixtures across multiple test modules. "conftest.py" files can live in multiple subdirectories of your test suite; fixtures defined in a nested "conftest.py" are available to all tests in that directory and its subdirectories.
"""python
# conftest.py
import pytest
@pytest.fixture(scope="session")
def api_client():
# Setup: Create and configure API client
client = APIClient()
client.authenticate()
yield client
# Teardown: Close the API client
client.close()
"""
**Do This:** Place commonly used fixtures in "conftest.py" to promote code reuse and reduce duplication.
**Don't Do This:** Put all fixtures in a single "conftest.py" file. Organize fixtures into logical groups and place them in separate "conftest.py" files within relevant directories.
**Why?** Using "conftest.py" promotes code reusability and avoids duplication. Organized "conftest.py" files make your test suite easier to navigate and understand.
## 4. Database State Management
Testing database interactions requires careful state management to avoid data corruption and ensure isolation.
### 4.1. Transactional Tests
Wrap each test in a database transaction that is rolled back after the test completes. This ensures that no changes are persisted to the database.
"""python
import pytest
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
@pytest.fixture(scope="session")
def engine():
# Replace with your database connection string
engine = create_engine("sqlite:///:memory:")
yield engine
@pytest.fixture(scope="session")
def tables(engine):
# Create tables (replace with your table definitions)
with engine.connect() as conn:
conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR(255))"))
yield
with engine.connect() as conn:
conn.execute(text("DROP TABLE users"))
@pytest.fixture()
def db_session(engine, tables):
connection = engine.connect()
transaction = connection.begin()
Session = sessionmaker(bind=engine)
session = Session(bind=connection)
yield session
session.close()
transaction.rollback()
connection.close()
def test_add_user(db_session):
with db_session.connection().begin():
db_session.execute(text("INSERT INTO users (name) VALUES ('Alice')"))
result = db_session.execute(text("SELECT * FROM users")).fetchall()
assert len(result) == 1
assert result[0][1] == 'Alice'
"""
**Do This:** Use transactional tests to isolate database interactions and prevent data pollution.
**Don't Do This:** Modify the database directly without using transactions.
**Why?** Transactions ensure that each test operates on a consistent and isolated database state.
### 4.2. Database Fixtures
Use fixtures to populate the database with test data and clean up after tests.
"""python
import pytest
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
@pytest.fixture()
def populated_db(db_session):
with db_session.connection().begin():
db_session.execute(text("INSERT INTO users (name) VALUES ('Bob'), ('Charlie')"))
yield
"""
**Do This:** Use fixtures to manage database state and populate the database with test data. Use SQLAlchemy Core for more direct control over inserts.
**Don't Do This:** Load data using raw SQL strings directly in the test functions.
**Why?** Fixtures centralize state management logic, making tests more readable and maintainable.
## 5. Mocking and Patching for State Control
Mocking and patching allow you to replace real dependencies with controlled substitutes, enabling you to isolate and test specific units of code. These mocks act as very specific, localized state management points.
### 5.1. "unittest.mock"
The "unittest.mock" module provides tools for creating mock objects and patching existing objects. It is the standard library's mocking tool.
"""python
from unittest.mock import patch
import my_module
def test_my_function():
with patch('my_module.external_api_call') as mock_api_call:
mock_api_call.return_value = "Mocked response"
result = my_module.my_function()
assert result == "Expected result based on mocked response"
mock_api_call.assert_called_once()
"""
### 5.2. "pytest-mock" Plugin
The "pytest-mock" plugin provides a fixture called "mocker" that simplifies mocking and patching. It's built on top of "unittest.mock", offering a more convenient API.
"""python
import pytest
import my_module
def test_my_function(mocker):
mock_api_call = mocker.patch('my_module.external_api_call')
mock_api_call.return_value = "Mocked response"
result = my_module.my_function()
assert result == "Expected result based on mocked response"
mock_api_call.assert_called_once()
"""
**Do This:** Use mocking to isolate units of code and control external dependencies. Prefer "pytest-mock" for its convenience and integration with PyTest.
**Don't Do This:** Over-mock. Only mock dependencies that are necessary to isolate the code under test.
**Why?** Mocking allows you to test code in isolation and control the behavior of external dependencies.
## 6. Testing Reactive Systems
When testing reactive systems, you need to carefully manage the asynchronous state transitions and ensure predictable behavior.
### 6.1 "pytest-asyncio" plugin
The "pytest-asyncio" plugin should be used when working with asynchronous applications. Example:
"""python
import pytest
import asyncio
from my_module import async_function
@pytest.mark.asyncio
async def test_my_async_function():
result = await async_function()
assert result == "Expected result"
"""
## 7. Managing Global State
Avoid modifying global state directly within tests. If it's unavoidable, use fixtures with appropriate scoping and cleanup logic to isolate changes.
"""python
import pytest
original_value = None
@pytest.fixture(scope="function", autouse=True)
def preserve_global_state():
global original_value
original_value = my_module.global_variable
yield
my_module.global_variable = original_value
"""
**Do This:** Minimize the use of global state. If it's necessary, carefully manage it using fixtures and cleanup logic.
**Don't Do This:** Modify global state directly without any cleanup or isolation mechanisms.
**Why?** Modifying global state can lead to unpredictable behavior and make tests difficult to debug.
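PyTest's built-in "monkeypatch" fixture provides the same save-and-restore behavior with less boilerplate. A sketch using the same hypothetical "my_module.global_variable":
```python
import my_module

def test_with_patched_global(monkeypatch):
    # monkeypatch restores the original value automatically at teardown
    monkeypatch.setattr(my_module, "global_variable", "test value")
    assert my_module.global_variable == "test value"
```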
## 8. Naming Conventions
* Use descriptive names for fixtures that clearly indicate their purpose and scope.
* Prefix fixture names with "_" if they are intended for internal use only.
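For example (the fixture names and bodies below are illustrative):
```python
import pytest

# Descriptive name: states what the fixture provides; scope is explicit
@pytest.fixture(scope="session")
def authenticated_api_client():
    client = {"token": "dummy"}  # placeholder for a real client object
    return client

# Leading underscore: an internal helper not meant to be requested directly
@pytest.fixture
def _clean_tmp_registry(tmp_path):
    registry = tmp_path / "registry.json"
    registry.write_text("{}")
    return registry
```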
## 9. Example: Testing a Class with State
Consider a simple class that manages state:
"""python
class Counter:
def __init__(self):
self.count = 0
def increment(self):
self.count += 1
def get_count(self):
return self.count
"""
Here's how you might test it using PyTest:
"""python
import pytest
from my_module import Counter
@pytest.fixture
def counter():
return Counter()
def test_increment(counter):
counter.increment()
assert counter.get_count() == 1
def test_get_count(counter):
assert counter.get_count() == 0
counter.increment()
assert counter.get_count() == 1
"""
## 10. Performance Considerations
Optimize fixture setup and teardown to minimize test execution time.
* Use the appropriate fixture scope to avoid unnecessary setup and teardown operations.
* Consider using lazy initialization for expensive resources (see the sketch after this list).
* Profile slow tests and identify bottlenecks in fixture setup and teardown.
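One way to lazily initialize an expensive resource is to have the fixture return a cached accessor, so the cost is only paid by tests that actually use it. A minimal sketch ("load_large_dataset" is a hypothetical expensive call):
```python
import pytest

@pytest.fixture(scope="session")
def dataset_loader():
    cache = {}

    def get_dataset():
        # The expensive load happens only on first use, then is reused
        if "data" not in cache:
            cache["data"] = load_large_dataset()  # hypothetical expensive call
        return cache["data"]

    return get_dataset

def test_dataset_rows(dataset_loader):
    data = dataset_loader()
    assert len(data) > 0
```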
## 11. Security Considerations
* Avoid storing sensitive data in fixtures or test data files; load secrets from the environment instead (see the sketch after this list).
* Be careful when tests can fall back to real external services; incomplete mocking can cause unintended interactions with live systems.
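For example, a fixture can read credentials from the environment at run time instead of hard-coding them (the variable name "TEST_API_TOKEN" is illustrative):
```python
import os

import pytest

@pytest.fixture(scope="session")
def api_token():
    # Skip, rather than fail, when the secret is not configured
    token = os.environ.get("TEST_API_TOKEN")
    if not token:
        pytest.skip("TEST_API_TOKEN is not set")
    return token
```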
## 12. Conclusion
Effective state management is crucial for creating reliable, maintainable, and performant PyTest test suites. By following these coding standards and best practices, you can ensure your tests are isolated, repeatable, and easy to understand. Remember to leverage PyTest fixtures effectively, minimize shared state, and always include proper cleanup logic. Keep an eye on PyTest release notes so your test code stays modern and well-performing.
# Performance Optimization Standards for PyTest This document outlines coding standards for performance optimization when using PyTest for testing Python code. Adhering to these standards will improve test suite speed, reduce resource usage, and enhance the overall efficiency of your testing process. ## 1. General Principles ### 1.1. Minimize Test Execution Time **Standard:** Write tests that execute as quickly as possible. Prioritize speed while maintaining sufficient coverage. **Why:** Faster tests provide quicker feedback during development, reducing iteration time and improving developer productivity. **Do This:** * Focus tests on specific functionalities. * Avoid unnecessary computations or I/O operations within tests. * Use mocking and patching to isolate units of code. * Profile your tests to identify bottlenecks. **Don't Do This:** * Write overly complex or integration-heavy tests when unit tests suffice. * Perform expensive operations (e.g., large data loading, extensive database queries) in every test. * Run tests that depend on external services without proper mocking. ### 1.2. Resource Efficiency **Standard:** Design tests to minimize CPU, memory, and I/O usage. **Why:** Efficient tests reduce the load on your testing environment, allowing for more parallel test runs and lower infrastructure costs. **Do This:** * Use appropriate data structures for test data. * Clean up resources after each test (e.g., delete temporary files, close database connections). * Avoid unnecessary object creation. * Leverage PyTest's built-in features for fixture management. **Don't Do This:** * Leak resources (e.g., open files, unclosed sockets). * Create large datasets or objects unnecessarily. * Perform redundant setup or teardown steps. ### 1.3. Parallelization **Standard:** Configure PyTest to run tests in parallel whenever possible. **Why:** Parallel execution dramatically reduces test suite runtime, especially for large projects. **Do This:** * Install and configure "pytest-xdist" for parallel test execution. * Ensure your tests are independent and avoid shared mutable state. * Adjust the number of worker processes based on your hardware resources. **Don't Do This:** * Run tests in parallel that have dependencies or modify shared resources. * Ignore potential race conditions or synchronization issues in parallel tests. ## 2. Code-Level Optimizations ### 2.1. Fixture Optimization **Standard:** Use fixtures effectively to manage setup and teardown, but optimize them for performance. **Why:** Fixtures are powerful, but poorly designed fixtures can become performance bottlenecks. **Do This:** * Use fixture scope appropriately ("function", "module", "session"). Avoid "session" scope unless truly necessary. Prefer 'function' when possible for isolation. * Cache fixture results when computationally expensive operations are involved. * Use "yield" fixtures for setup and teardown. * Mark computationally expensive fixtures with "autouse=False" and explicitly inject them only where they are needed. **Don't Do This:** * Perform redundant computations in fixtures. * Use fixtures with unnecessarily broad scope. * Create circular fixture dependencies. 
**Example:** """python import pytest import time @pytest.fixture(scope="function") def expensive_setup(): """Simulates an expensive setup operation.""" print("\nPerforming setup...") time.sleep(1) # Simulate a delay yield print("\nPerforming teardown...") def test_with_expensive_setup(expensive_setup): """Test that uses the expensive setup.""" print("Running test...") assert True def test_without_expensive_setup(): """Test that runs without the expensive setup.""" print("Running another test...") assert True """ ### 2.2. Mocking and Patching **Standard:** Use mocking and patching to isolate units under test and avoid external dependencies. **Why:** Mocking reduces test execution time and ensures tests are deterministic. **Do This:** * Use "unittest.mock" or "pytest-mock" for mocking and patching. * Mock only the necessary components. * Specify mocks clearly and concisely. * Use "pytest-mock"'s "mocker" fixture for easier mocking. **Don't Do This:** * Over-mock (mock everything). * Create overly complex mock setups. * Forget to restore patched objects after the test. **Example:** """python from unittest.mock import patch import pytest def function_that_calls_external_api(): # In reality, this communicates with an external API return "Real API Response" def function_to_test(): result = function_that_calls_external_api() return f"Processed: {result}" def test_function_to_test(mocker): # Use pytest-mock to mock the external API call mocker.patch('__main__.function_that_calls_external_api', return_value="Mocked API Response") assert function_to_test() == "Processed: Mocked API Response" """ ### 2.3. Database Interactions **Standard:** Optimize database interactions in tests. **Why:** Database operations are often slow and can significantly impact test performance. **Do This:** * Use in-memory databases (e.g., SQLite) for testing whenever possible. * If using a real database, use transactions and rollbacks to avoid data pollution and speed up cleanup. * Minimize the number of database queries per test. * Use fixtures to create and load test data efficiently. **Don't Do This:** * Perform database migrations as part of every test. * Insert large amounts of data directly within tests. * Leave open database connections after tests finish. **Example:** """python import pytest import sqlite3 @pytest.fixture(scope="function") def db_connection(): conn = sqlite3.connect(':memory:') # Use in-memory database cursor = conn.cursor() cursor.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)") cursor.execute("INSERT INTO users (name) VALUES ('Test User')") conn.commit() yield conn conn.close() def test_user_exists(db_connection): cursor = db_connection.cursor() cursor.execute("SELECT COUNT(*) FROM users WHERE name = 'Test User'") count = cursor.fetchone()[0] assert count == 1 @pytest.fixture(scope="function") def transactional_db(): # Example assuming you have some ORM and database setup. from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker engine = create_engine('sqlite:///:memory:') # or a test database # If you use SQLAlchemy, you will likely use declarative_base() # Also import your models here #Base.metadata.create_all(engine) # Create tables. 
Commented, as declarative base examples vary greatly Session = sessionmaker(bind=engine) session = Session() connection = engine.connect() transaction = connection.begin() session.begin_nested() yield session session.close() # Close current session transaction.rollback() connection.close() def test_something_using_transactional_db(transactional_db): # Use transactional_db to manipulate DB # ORM operations will by default be rolled back. pass # Replace with actual test logic """ ### 2.4. Conftest Optimization **Standard:** Optimize "conftest.py" files to avoid unnecessary loading and execution. **Why:** "conftest.py" is loaded automatically, so inefficient content can slow down your entire test suite. **Do This:** * Keep "conftest.py" files as small as possible. * Use lazy-loading techniques to avoid importing modules until they are needed. * Place fixtures in the most relevant "conftest.py" file (closest to where they are used) to minimize scope. Avoid polluting the global "conftest.py". * Consider using pytest's plugin system for complex configurations. **Don't Do This:** * Place computationally expensive code directly in "conftest.py". * Define unused fixtures in "conftest.py". **Example:** """python # conftest.py (Optimized) import pytest def pytest_configure(config): """ Only run this expensive setup when certain markers exist. Avoids running in situations when not needed. """ if config.getoption("--runslow") or config.getini('markers'): # Import expensive modules here only when needed from my_expensive_module import perform_expensive_setup config.expensive_resource = perform_expensive_setup() """ ### 2.5. Mark and Select Tests Efficiently **Standard**: Use markers efficiently to select and run specific subsets of tests thereby reducing wasted computation. **Why:** Efficient marker usages lets you target the areas you are actively working on. **Do This:** * Create custom markers to categorize tests (e.g., "@pytest.mark.slow", "@pytest.mark.integration"). * Use the "-m" option to select tests based on markers (e.g., "pytest -m "slow and network""). * Combine markers using boolean operators ("and", "or", "not"). * Avoid overly complex marker expressions. * Register your markers in "pytest.ini" to prevent warnings. **Don't Do This:** * Use overly generic markers. * Create a large number of markers with overlapping functionality. * Rely on marker expressions that are difficult to understand or maintain. **Example:** """python # pytest.ini [pytest] markers = slow: Marks tests as slow-running. network: Marks tests as network-dependent. # test_example.py import pytest @pytest.mark.slow def test_slow_function(): assert True @pytest.mark.network def test_network_call(): assert True def test_fast_function(): assert True """ Run only slow tests: "pytest -m slow" Run slow tests that also require a network connection: "pytest -m "slow and network"" ### 2.6. Skip Tests Conditionally **Standard**: Skip tests that are not applicable to the current environment or configuration. **Why**: Skipping irrelevant tests reduces execution time and avoids false failures. **Do This**: * Use "@pytest.mark.skip" to unconditionally skip a test. * Use "@pytest.mark.skipif" to skip tests based on a condition. * Use "pytest.skip()" within a test to skip it dynamically. * Provide clear reasons for skipping tests. **Don't Do This**: * Leave commented-out tests instead of skipping them properly. * Skip tests without a clear explanation. * Use complex conditions for skipping tests that are difficult to understand. 
**Example:** """python import pytest import sys @pytest.mark.skip(reason="This test is temporarily disabled.") def test_skipped_test(): assert False @pytest.mark.skipif(sys.platform == "win32", reason="Does not run on Windows") def test_linux_specific(): assert True def test_dynamic_skip(): if some_condition(): pytest.skip("Skipping due to some_condition") assert True """ ## 3. Tooling and Configuration ### 3.1. Profiling **Standard**: Profile your tests to identify performance bottlenecks. **Why**: Profiling helps pinpoint areas where optimization efforts will have the greatest impact. **Do This**: * Use the built-in "cProfile" module or dedicated profiling tools (e.g., "py-spy", "perf"). * Focus on the most time-consuming functions and fixtures. * Track the performance of your tests over time. **Don't Do This**: * Guess at performance bottlenecks without profiling. * Ignore profiling results. **Example:** """bash python -m cProfile -o profile.txt -m pytest test_example.py """ Then, analyze "profile.txt" using "snakeviz" or similar tools. ### 3.2. Parallel Execution ("pytest-xdist") **Standard**: Use "pytest-xdist" to run tests in parallel across multiple CPUs. **Why**: Reduces test suite runtime significantly. **Do This**: * Install "pytest-xdist": "pip install pytest-xdist" * Run tests with the "-n" option to specify the number of worker processes (e.g., "pytest -n auto"). "auto" will usually use the number of CPUs available. * Ensure your tests are thread-safe and do not rely on shared mutable state. * Consider using a distributed testing framework like "pytest-distributor" for larger projects. **Don't Do This**: * Run tests in parallel without considering thread safety. * Overload your system with too many worker processes. * Ignore errors or race conditions that may occur during parallel execution. ### 3.3. Test Selection **Standard**: Run only the necessary tests using various selection options. **Why**: Reduces wasted CPU cycles and waiting time. **Do This**: * Use "-k" to filter tests by name (e.g., "pytest -k "test_login""). * Use "--lf" (last failed) to run only tests that failed in the previous run. * Use "--ff" (failed first) to run failed tests first, accelerating debugging. **Don't Do This**: * Always run the entire test suite, even when only modifying a small part of the codebase. * Ignore the available test selection options. ### 3.4. CI/CD Integration **Standard**: Integrate performance testing into your CI/CD pipeline. **Why**: Ensures that performance regressions are detected early. **Do This**: * Track test execution time over time. * Set up alerts for significant performance regressions. * Run performance-sensitive tests as part of your build process. **Don't Do This**: * Ignore performance metrics in your CI/CD pipeline. * Allow performance regressions to go unnoticed. ## 4. Advanced Techniques ### 4.1. Database Connection Pooling **Standard**: Use database connection pooling for tests that interact with real databases (when in-memory isn't suitable). **Why**: Reduces overhead of establishing new database connections for each test **Do This**: * Configure your ORM (e.g., SQLAlchemy) or database library to use connection pooling. **Example (SQLAlchemy):** """python from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker engine = create_engine('postgresql://user:password@host:port/database', pool_size=5, max_overflow=10) Session = sessionmaker(bind=engine) # Use Session in your tests """ ### 4.2. 
Asynchronous Testing **Standard**: Handle asynchronous code using "pytest-asyncio". **Why**: Tests involving asyncio need proper handling to avoid delays or hangs. **Do This**: * Install "pytest-asyncio": "pip install pytest-asyncio" * Mark asynchronous tests with "@pytest.mark.asyncio". * Use "async" fixtures. **Example**: """python import pytest import asyncio @pytest.mark.asyncio async def test_async_function(): await asyncio.sleep(0.1) assert True @pytest.fixture async def async_fixture(): await asyncio.sleep(0.1) return "Async Result" @pytest.mark.asyncio async def test_using_async_ficture(async_fixture): assert async_fixture == "Async Result" """ ### 4.3. Test Data Strategies **Standard**: Minimize the size and complexity of test data while still ensuring adequate coverage. **Why**: Large data sets can slow down tests and consume excessive memory. **Do This**: * Use parameterized tests (e.g., "pytest.mark.parametrize") to test multiple inputs with a single test function efficiently * Create only the minimal amount of data necessary for each test case * Utilize data generation libraries (e.g., Faker) to create realistic but concise test data. ## 5. Continuous Improvement ### 5.1. Code Reviews **Standard**: Include performance considerations in code reviews. **Why**: Ensures that new code adheres to performance best practices. **Do This**: * Review test code for potential performance bottlenecks * Question inefficient algorithms or data structures * Encourage the use of mocking and other optimization techniques ### 5.2. Monitoring and Alerting **Standard**: Monitor test suite execution time and set up alerts for significant regressions. **Why**: Proactively identify and address performance problems. **Do This**: * Use CI/CD tools to track test execution time over time * Set up alerts for performance regressions * Investigate any performance alerts promptly By following these performance optimization standards, you can significantly improve the speed, efficiency, and reliability of your PyTest test suites. Remember to continuously monitor and refine your testing practices to achieve the best possible performance. Note that Pytest and its ecosystem are continuously evolving, so this document should be reviewed regularly.
# Core Architecture Standards for PyTest This document outlines the core architecture standards for developing robust, maintainable, and performant tests using PyTest. It focuses on project structure, architectural patterns, and organizational principles specific to PyTest. ## 1. Project Structure and Organization A well-defined project structure is crucial for test discovery, maintainability, and scalability. **Do This:** * **Follow a consistent directory structure:** Colocate tests with the code they test (e.g., "src/my_module.py" and "tests/test_my_module.py"). Alternatively, maintain a dedicated "tests" directory at the project root. * **Organize tests into modules:** Group related tests within individual Python files (modules). * **Use "conftest.py" strategically:** Place "conftest.py" files in the project root or subdirectories to define shared fixtures, plugins, and hooks. * **Keep test files small and focused:** Avoid excessively large test files; break them down into smaller, more manageable modules. **Don't Do This:** * **Scatter test files haphazardly:** Avoid placing test files in random locations within the project. * **Create monolithic test files:** Don't cram all tests into a single file; this reduces readability and maintainability. * **Overuse or misuse "conftest.py":** Avoid placing irrelevant code or global configurations directly in "conftest.py". * **Mix testing and production code:** Never include test code within production modules. **Why:** * **Improved test discovery:** PyTest automatically discovers tests based on naming conventions and directory structure. A consistent structure ensures that all tests are found and executed. * **Increased maintainability:** A modular structure simplifies code navigation, refactoring, and bug fixing. * **Enhanced scalability:** A well-organized project can easily accommodate new tests and features as the project grows. **Example:** """ my_project/ ├── src/ │ ├── my_module.py │ └── another_module.py ├── tests/ │ ├── __init__.py │ ├── conftest.py # Project-level fixtures and configuration │ ├── test_my_module.py # Tests for my_module.py │ └── test_another_module.py # Tests for another_module.py ├── pyproject.toml # Project metadata and dependencies └── README.md """ ## 2. Architectural Patterns Adopting architectural patterns improves test code reusability, readability, and maintainability. ### 2.1 Data-Driven Testing Data-driven testing uses parameterized tests to run the same test logic with different sets of data. **Do This:** * **Use "pytest.mark.parametrize":** Decorate test functions with "@pytest.mark.parametrize" to provide multiple sets of input data. * **Define test data clearly:** Store test data in a structured format (e.g., lists, tuples, dictionaries) to improve readability. * **Choose meaningful parameter names:** Use descriptive names for the parameters passed to the test function. **Don't Do This:** * **Hardcode test data:** Avoid embedding test data directly within the test function; this makes it difficult to modify or extend the tests. * **Use cryptic parameter names:** Don't use ambiguous or meaningless names for parameters; this reduces clarity. **Why:** * **Reduced code duplication:** Data-driven testing avoids repeating the same test logic multiple times. * **Improved test coverage:** Parameterized tests can easily cover a wide range of input values and scenarios. * **Enhanced test maintainability:** Changes to test data only need to be made in one place. 
**Example:** """python import pytest @pytest.mark.parametrize( "input_value, expected_output", [ (2, 4), (3, 9), (4, 16), ], ids=["2 squared", "3 squared", "4 squared"] # Added test IDs for clarity ) def test_square(input_value, expected_output): assert input_value * input_value == expected_output """ ### 2.2 Fixture-Based Testing Fixtures provide a mechanism for setting up and tearing down test environments. **Do This:** * **Use fixtures for setup and teardown:** Define fixtures to create test resources, configure environments, and perform cleanup actions. * **Scope fixtures appropriately:** Use fixture scopes (e.g., "function", "module", "session") to control the lifetime of fixtures. * **Use autouse fixtures sparingly:** Only use "autouse=True" for fixtures that are required by almost every test in a module or session. * **Request fixtures explicitly:** In general, request fixtures by listing them as arguments to the test function * **Leverage fixture composition:** Combine multiple fixtures to create more complex test environments. **Don't Do This:** * **Perform setup/teardown directly in tests:** Avoid embedding setup or teardown logic directly within test functions. * **Use global variables for test data:** Don't rely on global variables to store test data; use fixtures instead. * **Overuse session-scoped fixtures:** Avoid session-scoped fixtures if the resource is not shared acros tests. * **Use mutable defaults in fixtures:** Mutable defaults are evaluated only once, leading to unexpected shared state across tests. **Why:** * **Simplified test functions:** Fixtures reduce boilerplate code in test functions, making them more readable and focused. * **Consistent test environments:** Fixtures ensure that all tests run in a consistent and predictable environment. * **Improved test isolation:** Fixtures can be used to isolate tests from each other, preventing interference. * **Increased code reusability:** Fixtures can be reused across multiple test modules. **Example:** """python import pytest @pytest.fixture def temp_file(tmp_path): """Creates a temporary file for testing.""" file_path = tmp_path / "test_file.txt" file_path.write_text("This is a test file.") return file_path def test_file_content(temp_file): """Tests the content of the temporary file.""" content = temp_file.read_text() assert content == "This is a test file." @pytest.fixture(scope="module") def database_connection(): """Creates a database connection for all tests in the module.""" connection = connect_to_database() # Assume connect_to_database exists yield connection connection.close() """ ### 2.3 Plugin-Based Architecture PyTest's plugin architecture allows extending its functionality with custom hooks and features. **Do This:** * **Develop custom plugins for reusable test utilities:** Create plugins to encapsulate common test logic, fixtures, or custom markers. * **Use appropriate hook functions:** Implement the correct PyTest hook functions (e.g., "pytest_addoption", "pytest_configure", "pytest_collection_modifyitems") to extend PyTest's behavior. * **Distribute plugins as Python packages:** Package plugins as Python packages for easy installation and distribution. **Don't Do This:** * **Modify PyTest's core functionality directly:** Avoid modifying PyTest's internal code; use plugins instead. * **Create overly complex plugins:** Keep plugins focused and modular; avoid creating monolithic plugins. 
* **Ignore PyTest's plugin documentation:** Consult the official PyTest documentation for detailed information on plugin development. **Why:** * **Extensibility:** The PyTest plugin architecture facilitates extending the testing framework for specific needs without modifying the core. * **Reusability:** Plugins promote code reusability across multiple projects. * **Maintainability:** Plugins isolate custom functionality from PyTest's core, improving maintainability. **Example:** """python # contents of pytest_myplugin.py import pytest def pytest_addoption(parser): parser.addoption( "--runslow", action="store_true", help="run slow tests" ) def pytest_configure(config): # Add a marker only if the --runslow option is passed via command line. if config.getoption("--runslow"): config.addinivalue_line("markers", "slow: mark test as slow to run") def pytest_collection_modifyitems(config, items): if config.getoption("--runslow"): # --runslow given in cli: do not skip slow tests return skip_slow = pytest.mark.skip(reason="need --runslow option to run") for item in items: if "slow" in item.keywords: item.add_marker(skip_slow) """ """python # In your test file: import pytest @pytest.mark.slow def test_slow_function(): # This test will only run if --runslow is passed. assert True """ ## 3. Test Implementation Details Writing clear, concise, and effective tests is critical for ensuring code quality. ### 3.1 Test Naming Conventions Consistent naming conventions improve test discoverability and readability. **Do This:** * **Start test files with "test_" or end with "_test.py":** For example, "test_my_module.py" or "my_module_test.py". * **Start test functions with "test_":** For example, "test_add_numbers()". * **Give tests descriptive names:** Use names that clearly indicate what the test is verifying. * **Include the name of the function being tested:** For example, "test_my_function_returns_correct_value()". **Don't Do This:** * **Use obscure or ambiguous names:** Avoid names that are difficult to understand or that don't clearly indicate the test's purpose. * **Use names that don't follow the convention:** Do not forget the "test_" prefix, the test runner skips the test if the method is not prefixed correctly. **Why:** * **Automatic test discovery:** PyTest automatically discovers tests based on naming conventions; consistent naming ensures that all tests are found. * **Improved readability:** Clear and descriptive names make it easier to understand what each test is verifying. **Example:** """python def test_add_numbers_returns_correct_sum(): """Tests that the add_numbers function returns the correct sum.""" from my_module import add_numbers result = add_numbers(2, 3) assert result == 5 """ ### 3.2 Assertions and Validation Using appropriate assertion methods is crucial for verifying expected outcomes. **Do This:** * **Use "assert" statements for basic comparisons:** Use the built-in "assert" statement for simple equality and inequality checks. * **Use "pytest.raises" for exception handling:** Use "pytest.raises" to verify that a function raises the expected exception. * **Use "pytest.warns" to assert warnings are raised.** * **Use helper functions for complex validations:** Create helper functions to encapsulate complex validation logic. * **Write clear assertion messages:** Provide custom assertion messages to help diagnose test failures. 
**Don't Do This:** * **Use bare "assert" statements without messages:** Avoid using "assert" statements without providing custom messages; this makes it difficult to understand the reason for a failure. * **Catch broad exceptions without verification:** Don't catch generic exceptions (e.g., "Exception") without verifying the exception type and message. * **Perform multiple assertions in a single statement:** Avoid combining multiple assertions in a single statement; this makes it difficult to pinpoint the exact cause of a failure. **Why:** * **Clear test outcomes:** Assertions clearly define the expected outcomes of each test. * **Easy debugging:** Descriptive assertion messages simplify the debugging process. * **Robust error handling:** Proper exception handling ensures that tests handle unexpected errors gracefully. **Example:** """python import pytest def test_divide_numbers_returns_correct_quotient(): """Tests that the divide_numbers function returns the correct quotient.""" from my_module import divide_numbers result = divide_numbers(10, 2) assert result == 5, "Expected 10 / 2 to equal 5" def test_divide_numbers_raises_zero_division_error(): """Tests that the divide_numbers function raises ZeroDivisionError when dividing by zero.""" from my_module import divide_numbers with pytest.raises(ZeroDivisionError): divide_numbers(10, 0) def test_depreciation_warning(): with pytest.warns(DeprecationWarning): legacy_function() """ ### 3.3 Test Isolation and Independence Ensuring that tests are independent of each other is crucial for reliable test results. **Do This:** * **Use fixtures to create isolated test environments:** Use fixtures to set up and tear down resources for each test. * **Avoid shared mutable state:** Do not rely on shared mutable state between tests; this can lead to unexpected interactions. * **Run tests in a random order:** Use the "--random-order" option to help uncover dependencies between tests. * **Mock external dependencies:** Use mocking libraries to isolate tests from external dependencies such as databases or network services. **Don't Do This:** * **Modify global state in tests:** Avoid modifying global variables or system resources within test functions, unless it is properly isolated using cleanup. * **Rely on test execution order:** Do not assume that tests will be executed in a specific order; tests should be independent of each other. * **Skip cleanup operations:** Always ensure that test resources are properly cleaned up after each test. **Why:** * **Reliable test results:** Independent tests produce consistent and predictable results, regardless of the execution environment. * **Simplified debugging:** Isolated tests make it easier to identify the cause of test failures. * **Reduced test flakiness:** Independent tests are less likely to be flaky or unreliable. **Example:** """python import pytest from unittest.mock import patch @pytest.fixture def mock_external_api(): with patch("my_module.external_api_call") as mock_api: mock_api.return_value = "Mocked response" yield mock_api def test_function_with_external_dependency(mock_external_api): from my_module import function_that_uses_api result = function_that_uses_api() assert result == "Mocked response" mock_external_api.assert_called_once() # Verify that the API was called """ ## 4. Performance Optimization Optimizing test execution time is crucial for large projects with extensive test suites. 
**Do This:** * **Use the "-n" option for parallel test execution:** Use the "pytest-xdist" plugin and the "-n" option to run tests in parallel. * **Mark slow tests:** Mark slow-running tests with the "@pytest.mark.slow" marker and exclude them from regular test runs. * **Optimize fixture scope:** Use the smallest possible fixture scope to minimize fixture setup and teardown overhead. * **Use "--dist loadscope" to distribute tests across workers based on fixture scope.** * **Profile slow tests:** Use profiling tools to identify performance bottlenecks in slow tests. **Don't Do This:** * **Run all tests in a single process:** Avoid running tests in a single process, especially for large projects. * **Overuse session-scoped fixtures:** Avoid session-scoped fixtures if they are not required by all tests. * **Ignore slow test:** Address slow tests as they increase the feedback loop. **Why:** * **Reduced test execution time:** Parallel test execution can significantly reduce the overall test execution time. * **Faster feedback cycles:** Quick test feedback allows developers to identify and fix bugs more quickly. * **Improved resource utilization:** Optimizing fixture usage reduces unnecessary resource consumption. **Example:** """bash pytest -n auto # Run tests in parallel using all available CPU cores pytest -m "not slow" # Exclude tests marked as slow """ ## 5. Security Best Practices Security should be a primary concern, even in testing. Incorporate security checks within tests to assure application safety. **Do This:** * **Sanitize input data:** When using parameterized testing, make sure to sanitize test inputs to prevent injection attacks. * **Validate error outputs:** Check the outputs of tests for sensitive information. * **Use secure test data:** Ensure that sensitive data used in tests (e.g., passwords, API keys) is properly protected and never exposed in logs or reports. * **Limit network access:** When running tests that interact with external resources, restrict network access to only the necessary services. * **Regularly update dependencies:** Keep PyTest and all its plugins up to date with the latest security patches. **Don't Do This:** * **Use real credentials:** Never use real user credentials or production API keys in tests. * **Log sensitive data:** Avoid logging sensitive information, such as passwords or API keys, during test execution. * **Ignore security warnings:** Address any security warnings or vulnerabilities reported by PyTest or its plugins immediately. **Why:** * **Prevent test pollution:** Protect against malicious test code that could compromise the test environment or the system under test. * **Protect sensitive data:** Ensure that sensitive data used in tests is not exposed to unauthorized users. * **Reduce the attack surface:** Minimize the risk of security vulnerabilities being exploited during the testing process. **Example:** """python import pytest import os @pytest.fixture(scope="session") def api_key(): """Returns a mock API key from environment variables.""" api_key = os.environ.get("TEST_API_KEY") if not api_key: raise ValueError("TEST_API_KEY environment variable not set.") return api_key """ This comprehensive guide provides a strong foundation for developing robust, maintainable, and performant tests using PyTest. By following these coding standards, development teams can ensure the delivery of high-quality software with confidence. 
Remember to stay updated on the latest PyTest features and best practices to continually improve your testing processes.
# API Integration Standards for PyTest This document outlines the coding standards and best practices for integrating with APIs using PyTest. It provides guidance on how to write robust, maintainable, and performant API integration tests, leveraging the latest features of PyTest. ## 1. General Principles ### 1.1. Test Pyramid Adherence * **Do This:** Focus on unit tests for individual components and increase reliance on integration tests for API interactions. Aim for a balanced test pyramid. * **Don't Do This:** Over-rely on end-to-end tests that cover multiple systems at once, as they are harder to maintain and debug. **Why:** Integration tests verify the interaction between different components, which is crucial for API testing. Unit tests verify individual functions and prevent unexpected behavior. The testing pyramid suggests a higher proportion of unit tests than integration tests, which is much more than end-to-end tests, optimizing the testing efforts while maintaining software quality. ### 1.2. Test Isolation * **Do This:** Use mocking and patching to isolate the system under test from external dependencies. Consider using dependency injection for better testability. * **Don't Do This:** Directly interact with live production APIs during testing. **Why:** Prevents test flakiness due to external factors such as network issues or API downtime. Ensures tests are repeatable and independent. Increases test execution speed dramatically. ### 1.3. Idempotency & Repeatability * **Do This:** Design tests to be idempotent, meaning they can be run multiple times without changing the system's state unexpectedly *or* explicitly clean up any state changes. * **Don't Do This:** Write tests that depend on the order of execution or leave behind residual data. **Why:** Guarantees consistent test results across multiple runs. Avoids issues like data contamination and race conditions. Makes debugging easier. ### 1.4. Clear Assertions * **Do This:** Write assertions that clearly express the expected behavior. Use descriptive error messages in assertions. * **Don't Do This:** Use generic assertions (e.g., "assert True") without providing specific context. **Why:** Reduces ambiguity. Makes it easier for anyone to find the root cause in case of test failure. Improves debuggability of the test suite. ### 1.5. Configuration Management * **Do This:** Store API endpoints, credentials, and other configuration values in environment variables or configuration files. Implement a mechanism to load and manage these configurations. Use the "pytest.ini" file for project-specific settings. * **Don't Do This:** Hardcode sensitive information within test scripts. **Why:** Simplifies managing test environments. Makes switching between different environments (e.g., development, staging, production) easy. Enhances security by not exposing sensitive data. ## 2. Setting up the Test Environment ### 2.1. Virtual Environments * **Do This:** Use virtual environments to isolate project dependencies. * **Don't Do This:** Install project dependencies globally. **Why:** Avoids dependency conflicts between different projects. Creates reproducible build environments. """bash python3 -m venv .venv source .venv/bin/activate # Linux/macOS .venv\Scripts\activate # Windows pip install pytest requests pytest-mock pytest-vcr """ ### 2.2. Installing Dependencies * **Do This:** Use "pip" to manage project dependencies. Specify dependencies in a "requirements.txt" file. * **Don't Do This:** Manually install dependencies without tracking them. 
**Why:** Ensures consistency across different environments. Simplifies dependency management. """bash pip freeze > requirements.txt pip install -r requirements.txt """ ## 3. Connecting to APIs ### 3.1. Using "requests" Library * **Do This:** Use the "requests" library for making HTTP requests. * **Don't Do This:** Use lower-level libraries like "urllib" directly, unless absolutely necessary. **Why:** "requests" provides a clean and easy-to-use API. It simplifies tasks like handling sessions, authentication, and request/response processing. """python import requests import pytest @pytest.fixture def api_url(): return "https://api.example.com/v1" def test_get_request(api_url): response = requests.get(f"{api_url}/users/1") assert response.status_code == 200 data = response.json() assert data['id'] == 1 assert 'name' in data def test_post_request(api_url): payload = {"name": "John Doe", "email": "john.doe@example.com"} response = requests.post(f"{api_url}/users", json=payload) assert response.status_code == 201 data = response.json() assert data['name'] == "John Doe" """ ### 3.2. Session Management * **Do This:** Use "requests.Session" for managing persistent connections. * **Don't Do This:** Create a new "requests" instance for each request. **Why:** Improves performance by reusing connections. Handles cookies and authentication headers automatically. """python import requests import pytest @pytest.fixture(scope="session") def api_session(): session = requests.Session() session.headers.update({"Authorization": "Bearer <token>"}) # example of setting auth headers yield session session.close() # Properly close the session def test_get_request_with_session(api_session, api_url): response = api_session.get(f"{api_url}/resource") assert response.status_code == 200 """ ### 3.3. Error Handling * **Do This:** Handle potential exceptions during API calls (e.g., "requests.exceptions.RequestException"). * **Don't Do This:** Ignore exceptions or let them propagate unhandled. **Why:** Prevents test failures due to network errors or API unavailability. Allows for retries or graceful degradation. """python import requests import pytest def test_api_error_handling(api_url): try: response = requests.get(f"{api_url}/nonexistent_resource", timeout=5) response.raise_for_status() # Raises HTTPError for bad responses (4xx or 5xx) except requests.exceptions.RequestException as e: pytest.fail(f"API request failed: {e}") # Use pytest.fail to properly fail the test """ ### 3.4. Timeouts * **Do This:** Set appropriate timeouts for API requests using the "timeout" parameter in "requests". * **Don't Do This:** Use large or missing timeouts causing tests to hang indefinitely. **Why:** Prevents tests from hanging indefinitely due to slow or unresponsive APIs. Allows tests to fail quickly and provide feedback. """python import requests def test_api_timeout(api_url): try: response = requests.get(f"{api_url}/slow_endpoint", timeout=2) # 2 second timeout assert response.status_code == 200 except requests.exceptions.Timeout: pytest.fail("API request timed out") """ ## 4. Mocking and Patching ### 4.1. Using "pytest-mock" * **Do This:** Use the "pytest-mock" plugin for mocking and patching. * **Don't Do This:** Use the built-in "unittest.mock" directly, which can be less convenient. **Why:** "pytest-mock" provides a clean pytest fixture to easily mock objects. 
"""python import requests import pytest from unittest.mock import MagicMock def get_user_name(api_url, user_id): response = requests.get(f"{api_url}/users/{user_id}") response.raise_for_status() # Raise HTTPError for bad responses data = response.json() return data['name'] def test_get_user_name_mocked(mocker, api_url): mock_response = MagicMock() mock_response.status_code = 200 mock_response.json.return_value = {"id": 1, "name": "Mocked User"} mocker.patch("requests.get", return_value=mock_response) # Patch the requests.get function directly user_name = get_user_name(api_url, 1) assert user_name == "Mocked User" """ ### 4.2. Patching API Responses * **Do This:** Patch the "requests.get", "requests.post", etc., functions to return mock responses. * **Don't Do This:** Mock the entire API client unnecessarily. mock only the part of the API client that will be executed. **Why:** Provides control over the API responses. Simplifies setting up different test scenarios. ### 4.3. Side Effects * **Do This:** Use "side_effect" in mocks for complex scenarios (e.g., raising exceptions or returning different values based on input). * **Don't Do This:** Overcomplicate tests with unnecessary mocking. **Why:** Simulates different API behaviors. Handles edge cases and error conditions. """python import requests import pytest from unittest.mock import MagicMock def test_api_retry(mocker, api_url): mock_response = MagicMock() mock_response.status_code = 503 # Service Unavailable mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError("Service Unavailable") mock_get = mocker.patch("requests.get", side_effect=[mock_response, MagicMock(status_code=200, json=lambda: {"id": 1, "name": "Success"})]) def attempt_request(): try: response = requests.get(f"{api_url}/resource") response.raise_for_status() return response.json() except requests.exceptions.HTTPError: return None result = attempt_request() assert mock_get.call_count == 2 # Verify that the request was retried assert result == {"id": 1, "name": "Success"} # Check the successful result """ ## 5. API Response Validation ### 5.1. Status Code Validation * **Do This:** Always validate the HTTP status code. * **Don't Do This:** Assume the request was successful without checking the status code. **Why:** Confirms the basic success or failure of the API request. """python import requests def test_status_code(api_url): response = requests.get(f"{api_url}/users") assert response.status_code == 200 # OK """ ### 5.2. JSON Schema Validation * **Do This:** Validate the API response against a JSON schema. Use libraries like "jsonschema". * **Don't Do This:** Manually check individual fields in the response. **Why:** Ensures the response structure and data types are correct. Provides comprehensive validation. """python import requests import pytest from jsonschema import validate from jsonschema.exceptions import ValidationError def test_json_schema(api_url): response = requests.get(f"{api_url}/products/1") assert response.status_code == 200 data = response.json() schema = { "type": "object", "properties": { "id": {"type": "integer"}, "name": {"type": "string"}, "price": {"type": "number"} }, "required": ["id", "name", "price"] } try: validate(instance=data, schema=schema) except ValidationError as e: pytest.fail(f"JSON schema validation failed: {e}") """ ### 5.3. Response Time Validation * **Do This:** Measure and validate the API response time if latency is critical. 
### 4.3. Side Effects

* **Do This:** Use "side_effect" in mocks for complex scenarios (e.g., raising exceptions or returning different values based on input).
* **Don't Do This:** Overcomplicate tests with unnecessary mocking.

**Why:** Simulates different API behaviors. Handles edge cases and error conditions.

"""python
import requests
import pytest
from unittest.mock import MagicMock

def test_api_retry(mocker, api_url):
    mock_response = MagicMock()
    mock_response.status_code = 503  # Service Unavailable
    mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError("Service Unavailable")
    success_response = MagicMock(status_code=200, json=lambda: {"id": 1, "name": "Success"})
    mock_get = mocker.patch("requests.get", side_effect=[mock_response, success_response])

    def attempt_request(max_attempts=2):
        # Retry on an HTTP error until the attempts are exhausted.
        for _ in range(max_attempts):
            try:
                response = requests.get(f"{api_url}/resource")
                response.raise_for_status()
                return response.json()
            except requests.exceptions.HTTPError:
                continue
        return None

    result = attempt_request()
    assert mock_get.call_count == 2  # Verify that the request was retried
    assert result == {"id": 1, "name": "Success"}  # Check the successful result
"""

## 5. API Response Validation

### 5.1. Status Code Validation

* **Do This:** Always validate the HTTP status code.
* **Don't Do This:** Assume the request was successful without checking the status code.

**Why:** Confirms the basic success or failure of the API request.

"""python
import requests

def test_status_code(api_url):
    response = requests.get(f"{api_url}/users")
    assert response.status_code == 200  # OK
"""

### 5.2. JSON Schema Validation

* **Do This:** Validate the API response against a JSON schema. Use libraries like "jsonschema".
* **Don't Do This:** Manually check individual fields in the response.

**Why:** Ensures the response structure and data types are correct. Provides comprehensive validation.

"""python
import requests
import pytest
from jsonschema import validate
from jsonschema.exceptions import ValidationError

def test_json_schema(api_url):
    response = requests.get(f"{api_url}/products/1")
    assert response.status_code == 200
    data = response.json()
    schema = {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
            "price": {"type": "number"}
        },
        "required": ["id", "name", "price"]
    }
    try:
        validate(instance=data, schema=schema)
    except ValidationError as e:
        pytest.fail(f"JSON schema validation failed: {e}")
"""

### 5.3. Response Time Validation

* **Do This:** Measure and validate the API response time if latency is critical.
* **Don't Do This:** Ignore response time when latency is explicitly part of the requirements.

**Why:** Detects performance regressions and ensures the APIs meet performance SLAs.

"""python
import requests
import time

def test_response_time(api_url):
    start_time = time.time()
    response = requests.get(f"{api_url}/data")
    end_time = time.time()
    response_time = end_time - start_time
    assert response.status_code == 200
    assert response_time < 0.5  # Check if response time is less than 0.5 seconds
"""

## 6. Data Management

### 6.1. Test Data Generation

* **Do This:** Use libraries like "Faker" to generate realistic test data.
* **Don't Do This:** Hardcode sample data in test functions.

**Why:** Creates varied and realistic test scenarios. Reduces boilerplate code.

"""python
from faker import Faker
import requests

fake = Faker()

def test_create_user(api_url):
    payload = {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address()
    }
    response = requests.post(f"{api_url}/users", json=payload)
    assert response.status_code == 201
"""

### 6.2. Database Cleanup

* **Do This:** Implement a cleanup mechanism to remove test data after test execution, for example a fixture with "autouse=True" and "scope="session"".
* **Don't Do This:** Leave test data in the database, which can affect subsequent tests.

**Why:** Prevents data contamination and ensures tests are repeatable. Maintains data integrity.

"""python
import pytest
import requests

@pytest.fixture(scope="session", autouse=True)
def cleanup_database(api_url):
    # Setup: runs before all tests in the session
    print("\nSetting up test data...")
    # Example: creating a test user
    payload = {"name": "Test User", "email": "test@example.com"}
    response = requests.post(f"{api_url}/users", json=payload)
    assert response.status_code == 201
    user_id = response.json()["id"]
    yield  # This is where the testing happens
    # Teardown: runs after all tests in the session
    print("\nCleaning up test data...")
    # Clean up the test user that was created.
    # Accept 204 (No Content) for a successful delete and 404 (Not Found)
    # in case the resource was already removed by another test.
    delete_response = requests.delete(f"{api_url}/users/{user_id}")
    assert delete_response.status_code in (204, 404)
"""

## 7. Authentication & Authorization

### 7.1. Authentication Handling

* **Do This:** Use appropriate authentication methods (e.g., API keys, Bearer tokens, OAuth) based on the API requirements. Store secrets securely.
* **Don't Do This:** Hardcode authentication credentials in the test scripts.

**Why:** Ensures that the tests accurately simulate authenticated API requests. Protects sensitive information.

"""python
import requests
import os

def test_authentication(api_url):
    api_key = os.environ.get("API_KEY")
    headers = {"X-API-Key": api_key}  # or {"Authorization": f"Bearer {api_key}"} for a bearer token
    response = requests.get(f"{api_url}/secure_resource", headers=headers)
    assert response.status_code == 200
"""

### 7.2. Role-Based Access Control (RBAC)

* **Do This:** Test different user roles and permissions to ensure proper access control.
* **Don't Do This:** Assume all users have the same access rights.

**Why:** Verify that access to resources is correctly restricted based on user roles.
"""python import requests import pytest @pytest.mark.parametrize( "user_role, expected_status", [ ("admin", 200), ("user", 403), # Unauthorized ] ) def test_role_based_access(api_url, user_role, expected_status): # Assuming you have endpoints for logging in as different roles if user_role == "admin": login_data = {"username": "admin", "password": "admin_password"} elif user_role == "user": login_data = {"username": "user", "password": "user_password"} else: pytest.fail(f"Unknown user role: {user_role}") login_response = requests.post(f"{api_url}/login", json=login_data) assert login_response.status_code == 200 token = login_response.json().get("access_token") headers = {"Authorization": f"Bearer {token}"} # Use the obtained token resource_response = requests.get(f"{api_url}/admin_resource", headers=headers) assert resource_response.status_code == expected_status """ ## 8. Data Serialization ### 8.1 Serialization Methods * **Do This:** Utilize proper data serialization techniques like "json", "xml", and "protobuf" to interact with APIs. * **Don't Do This:** Manually format your data to interact with APIs. **Why:** Ensures effective data communication, compatibility, and robustness of test scenarios. """python import json import requests import pytest @pytest.fixture def api_url(): return "https://api.example.com/v1" def test_post_request_json(api_url): payload = {"name": "John Doe", "email": "john.doe@example.com"} headers = {"Content-Type": "application/json"} # Define the Content-Type as JSON response = requests.post(f"{api_url}/users", data=json.dumps(payload), headers=headers) # Use json.dumps to serialize the dictionary to a JSON string assert response.status_code == 201 data = response.json() assert data['name'] == "John Doe" """ ## 9. Performance Optimization and Scalability ### 9.1 Connection Pooling * **Do This** Reusing Connections: Use "requests.Session" for connection pooling, which reuses the underlying TCP connection for multiple requests to the same host. This avoids the overhead of establishing a new connection for each request. * **Don't Do This:** Creating new connections for all API calls: Avoid creating a new "requests" instance or using "requests.get" / "requests.post" etc. directly for each request. **WHY:** Creating new connections introduces significant overhead especially impacting high-frequency, short-lived requests. Connection reuse significantly reduces this overhead. TCP handshakes, SSL negotiation (if HTTPS), and other connection establishment costs are all minimized with connection pooling. ### 9.2 Parallel Test Execution * **Do This:** Utilize "pytest-xdist" for parallel test execution. Configure the number of workers based on the number of CPU cores available. For I/O bound operations consider "pytest-asyncio". * **Don't Do This:** Run tests sequentially when possible. **Why:** Reduces the total test execution time significantly. Improves efficiency and provides faster feedback. """bash pip install pytest-xdist pytest -n auto # auto detects the number of CPUs """ or """bash pytest -n 4 # sets # of workers to 4. """ ### 9.3 Data Caching Strategies * **Do This:** Implement caching mechanisms to avoid redundant API calls. Use "functools.lru_cache" for caching API responses based on input parameters. Consider using a more robust caching solution like Redis or Memcached for larger datasets or distributed environments. * **Don't Do This:** Repeatedly fetching same from the API within tests. 
### 9.2. Parallel Test Execution

* **Do This:** Utilize "pytest-xdist" for parallel test execution. Configure the number of workers based on the number of CPU cores available. For I/O-bound operations, consider "pytest-asyncio".
* **Don't Do This:** Run tests sequentially when they could safely run in parallel.

**Why:** Reduces the total test execution time significantly. Improves efficiency and provides faster feedback.

"""bash
pip install pytest-xdist
pytest -n auto  # auto-detects the number of CPUs
"""

or

"""bash
pytest -n 4  # sets the number of workers to 4
"""

### 9.3. Data Caching Strategies

* **Do This:** Implement caching mechanisms to avoid redundant API calls. Use "functools.lru_cache" for caching API responses based on input parameters. Consider a more robust caching solution like Redis or Memcached for larger datasets or distributed environments.
* **Don't Do This:** Repeatedly fetch the same data from the API within tests.

**Why:** Unnecessary API calls during testing waste resources and slow down the test run.

"""python
import functools
import requests

@functools.lru_cache(maxsize=128)
def get_data_from_api(api_url, item_id):
    response = requests.get(f"{api_url}/items/{item_id}")
    response.raise_for_status()
    return response.json()

def test_get_item_cached(api_url):
    item_1 = get_data_from_api(api_url, 1)
    item_1 = get_data_from_api(api_url, 1)  # Second call is served from the cache
    # Perform assertions using cached data
"""

This document provides a comprehensive guide to API integration testing with PyTest, with detailed examples and explanations covering performance, security, and general best practices. These guidelines also give AI coding assistants a solid basis for improving code reliability and performance and reducing security risks.
# Component Design Standards for PyTest

This document outlines component design standards for PyTest, aiming to promote reusable, maintainable, and performant test suites. These guidelines are intended for developers of all levels and can be used as a reference for AI coding assistants.

## 1. Modular Fixtures

### 1.1. Standard: Create Modular, Reusable Fixtures

**Do This:** Design fixtures to be small, focused, and reusable across multiple test functions and test classes.

**Don't Do This:** Create monolithic fixtures that are tightly coupled to specific tests or contain excessive logic.

**Why:** Modular fixtures promote code reuse, reduce redundancy, and improve the overall maintainability of the test suite. They prevent the same setup and teardown logic from being duplicated across multiple tests.

**Code Example:**

"""python
import pytest

@pytest.fixture
def temp_dir(tmp_path):
    """Creates a temporary directory for testing."""
    d = tmp_path / "test_data"
    d.mkdir()
    return d

@pytest.fixture
def sample_data(temp_dir):
    """Creates a sample data file within the temporary directory."""
    data_file = temp_dir / "sample.txt"
    data_file.write_text("Hello, PyTest!")
    return data_file

def test_read_data(sample_data):
    """Tests reading data from the sample data file."""
    content = sample_data.read_text()
    assert content == "Hello, PyTest!"

def test_file_exists(temp_dir):
    """Verifies if a file exists in the temporary directory."""
    file_path = temp_dir / "another_file.txt"
    file_path.write_text('Another test file.')
    assert file_path.exists()
"""

**Anti-Pattern:** Duplicating the "temp_dir" and file creation logic within each test function.

**Technology-Specific Detail:** Utilizing PyTest's "tmp_path" fixture for creating temporary directories and files ensures proper cleanup after tests, preventing filesystem pollution. The use of "tmp_path" is preferred over "tempfile" directly, as it integrates better with PyTest's context management.

### 1.2. Standard: Fixture Scopes

**Do This:** Choose the appropriate fixture scope ("function", "class", "module", "package", "session") based on the fixture's lifecycle requirements.

**Don't Do This:** Use overly broad scopes (e.g., "session") when a narrower scope (e.g., "function") would suffice.

**Why:** Correct scoping minimizes resource consumption and test execution time. Using the smallest scope possible prevents unnecessary setup and teardown, making tests faster and more isolated.

**Code Example:**

"""python
import pytest

@pytest.fixture(scope="function")  # Each test function gets a new instance
def db_connection():
    """Creates a database connection for each test function."""
    conn = create_connection()  # Assume create_connection is defined elsewhere
    yield conn
    conn.close()

@pytest.fixture(scope="module")  # One connection per module
def module_resource():
    """A resource that is acquired only once per module."""
    resource = acquire_resource()
    yield resource
    release_resource(resource)

def test_db_query_1(db_connection):
    """Tests a database query."""
    result = db_connection.execute("SELECT * FROM table1")
    assert result is not None

def test_db_query_2(db_connection):
    """Tests another database query."""
    result = db_connection.execute("SELECT * FROM table2")
    assert result is not None

def acquire_resource():
    """Simulates acquiring a resource."""
    print("Resource acquired.")
    return "Resource"

def release_resource(resource):
    """Simulates releasing a resource."""
    print("Resource released.")

def test_module_usage_1(module_resource):
    """Tests using the module resource."""
    assert module_resource == "Resource"
    print("Test 1 using module resource.")

def test_module_usage_2(module_resource):
    """Tests using the module resource again."""
    assert module_resource == "Resource"
    print("Test 2 using module resource.")
"""

**Anti-Pattern:** Using "scope="session"" for every fixture, leading to unnecessary resource allocation for tests that don't require it.

**Technology-Specific Detail:** PyTest's scoping mechanism allows for fine-grained control over fixture lifetimes. "function" scope provides maximum isolation, while "session" scope provides minimal overhead for resources shared across the entire test suite. The "package" scope is useful for initializing resources once per package of tests.

### 1.3. Standard: Fixture Factories

**Do This:** Use fixture factories to create different instances of a resource or object within the same test or across different tests.

**Don't Do This:** Modify a single fixture instance directly within a test, as this can lead to test contamination and unpredictable results.

**Why:** Fixture factories keep tests independent and reproducible by creating a fresh object each time they are called.

**Code Example:**

"""python
import pytest

@pytest.fixture
def user_factory():
    """Fixture factory to create user objects."""
    def _create_user(username, email):
        return User(username=username, email=email)  # Assume User class is defined
    return _create_user

def test_create_user(user_factory):
    """Tests creating different user objects."""
    user1 = user_factory("alice", "alice@example.com")
    user2 = user_factory("bob", "bob@example.com")
    assert user1.username == "alice"
    assert user2.username == "bob"

class User:
    def __init__(self, username, email):
        self.username = username
        self.email = email
"""

**Anti-Pattern:** Creating a single user fixture and then modifying its properties within different tests.

**Technology-Specific Detail:** Using a function-based fixture as a factory allows for passing parameters to the fixture, enabling the creation of customized instances for each test. A sketch of a factory that also tracks its objects for teardown follows.
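One common extension of this pattern is a factory fixture that records what it creates so the fixture can clean up after the test. The sketch below assumes the created objects need explicit cleanup; the "User" class and its "delete()" method are purely illustrative stand-ins.

"""python
import pytest

class User:
    """Illustrative stand-in for a model that needs explicit cleanup."""
    def __init__(self, username, email):
        self.username = username
        self.email = email
        self.deleted = False

    def delete(self):
        self.deleted = True  # In a real project this might remove a database row

@pytest.fixture
def user_factory():
    created = []

    def _create_user(username, email):
        user = User(username=username, email=email)
        created.append(user)  # Track every object the factory hands out
        return user

    yield _create_user
    # Teardown: clean up everything created during the test
    for user in created:
        user.delete()

def test_users_are_cleaned_up(user_factory):
    user = user_factory("alice", "alice@example.com")
    assert user.username == "alice"
"""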
## 2. Test Parameterization

### 2.1. Standard: Parameterize Test Functions

**Do This:** Use "pytest.mark.parametrize" to execute a single test function with multiple sets of input data and expected results.

**Don't Do This:** Write separate test functions for each set of input data, leading to code duplication.

**Why:** Parameterization reduces code redundancy and makes tests more concise and easier to maintain. It focuses test logic on core assertions, not on boilerplate data handling.

**Code Example:**

"""python
import pytest

@pytest.mark.parametrize(
    "input_str, expected_output",
    [
        ("hello", "HELLO"),
        ("pytest", "PYTEST"),
        ("world", "WORLD"),
    ],
)
def test_string_uppercase(input_str, expected_output):
    """Tests converting strings to uppercase."""
    assert input_str.upper() == expected_output
"""

**Anti-Pattern:** Writing three separate test functions for converting "hello", "pytest", and "world" to uppercase.

**Technology-Specific Detail:** "pytest.mark.parametrize" allows for defining multiple test cases within a single function, making it easier to test different scenarios with minimal code duplication. Test IDs are generated automatically but can also be customized.

### 2.2. Standard: Comprehensive Parameter Names

**Do This:** Use descriptive parameter names in "@pytest.mark.parametrize" to improve readability and debugging.

**Don't Do This:** Use single-letter or cryptic parameter names that make it difficult to understand the purpose of each parameter.

**Why:** Clear parameter names provide context for each test case, making it easier to understand the input data and expected results.

**Code Example:**

"""python
import pytest

@pytest.mark.parametrize(
    "input_string, expected_uppercase",
    [
        ("hello", "HELLO"),
        ("pytest", "PYTEST"),
    ],
    ids=["lowercase_hello", "lowercase_pytest"]  # Provide custom ids
)
def test_string_uppercase_named_params(input_string, expected_uppercase):
    """Tests converting strings to uppercase with descriptive parameter names."""
    assert input_string.upper() == expected_uppercase
"""

**Anti-Pattern:** Using "x", "y", "z" as parameter names without any explanation of their purpose.

**Technology-Specific Detail:** When a parameterized test fails, the parameter values appear in the test ID shown in the report, making it easier to identify the specific test case that failed. Using the optional "ids" parameter enhances the clarity even further.

## 3. Custom Markers

### 3.1. Standard: Use Custom Markers

**Do This:** Use custom markers to categorize tests and selectively run subsets of tests based on their category.

**Don't Do This:** Rely solely on filename or directory structure for test categorization.

**Why:** Custom markers provide a flexible and explicit way to organize tests, enabling you to run specific groups of tests based on functionality, priority, or other criteria.

**Code Example:**

"""python
import pytest

@pytest.mark.slow
def test_long_running_process():
    """Tests a long-running process."""
    # ... test logic ...
    assert True

@pytest.mark.api
def test_api_endpoint():
    """Tests an API endpoint."""
    # ... test logic ...
    assert True

# Run slow tests: pytest -m slow
# Run API tests:  pytest -m api
"""

**Anti-Pattern:** Using naming conventions or directory structures (e.g., "slow_tests.py") to group tests, which is less flexible and harder to maintain.

**Technology-Specific Detail:** "pytest.mark.<marker_name>" allows for adding custom markers to tests. You can then use the "-m" option to run tests with a specific marker. Markers should be registered in "pytest.ini" or "conftest.py" to avoid warnings.

### 3.2. Standard: Register Custom Markers

**Do This:** Register custom markers in the "pytest.ini" or "conftest.py" file to avoid warnings when running tests.

**Don't Do This:** Neglect to register custom markers, leading to warning messages and potential confusion.

**Why:** Registering custom markers is a best practice and eliminates noise in the test output.

**Code Example (pytest.ini):**

"""ini
[pytest]
markers =
    slow: Marks tests as slow-running.
    api: Marks tests as API tests.
    integration: Marks tests as integration tests.
"""

**Code Example (conftest.py):**

"""python
import pytest

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow: mark test as slow to run"
    )
    config.addinivalue_line(
        "markers", "integration: mark test as an integration test."
    )
"""

**Anti-Pattern:** Running tests with custom markers without registering them first, resulting in warnings.

**Technology-Specific Detail:** PyTest validates markers during test execution to ensure that they are properly defined. Registering the markers prevents these warnings and improves the overall quality of the test suite. Marker descriptions can provide helpful context during test selection and reporting.

## 4. Configuration and Hooks

### 4.1. Standard: Use "conftest.py" for Configuration

**Do This:** Use the "conftest.py" file to define fixtures, plugins, and other configuration settings that are shared across multiple test modules.

**Don't Do This:** Define global variables or configuration settings directly in test modules, as this can lead to code duplication and maintainability issues.

**Why:** "conftest.py" provides a centralized location for managing test configurations and resources, making it easier to reuse code and maintain a consistent test environment.

**Code Example (conftest.py):**

"""python
import pytest

@pytest.fixture(scope="session", autouse=True)
def setup_session():
    """Sets up the test session before any tests are run."""
    print("\nSetting up the test session...")
    # ... session setup logic ...
    yield
    print("\nTearing down the test session...")
    # ... session teardown logic ...
"""

**Anti-Pattern:** Duplicating common fixtures or configuration settings in multiple test modules.

**Technology-Specific Detail:** PyTest automatically discovers "conftest.py" files in the test directory hierarchy, making it easy to define shared resources and configurations without explicitly importing them. The "autouse" option makes fixtures run automatically.

### 4.2. Standard: Implement PyTest Hooks for Custom Behavior

**Do This:** Use PyTest hooks to customize the test execution process, such as modifying test collection, reporting, or result handling.

**Don't Do This:** Directly modify PyTest's internal code or monkeypatch it unless absolutely necessary, as this can lead to compatibility issues and unexpected behavior.

**Why:** PyTest hooks provide a well-defined extension mechanism for customizing the testing framework, enabling you to add custom functionality without modifying the core code.

**Code Example (conftest.py):**

"""python
import pytest

# Example: implement pytest_addoption to add a command-line option to pytest
def pytest_addoption(parser):
    parser.addoption(
        "--env", action="store", default="dev", help="environment: dev or prod"
    )

# Example: implement pytest_collection_modifyitems to modify the list of collected test items
def pytest_collection_modifyitems(config, items):
    if config.getoption("--env") == "prod":
        # --env=prod given on the CLI: do not collect "marks" tests
        skip_marks = pytest.mark.skip(reason="do not run marks tests in prod env")
        for item in items:
            if "marks" in item.keywords:
                item.add_marker(skip_marks)
"""

**Anti-Pattern:** Modifying PyTest's internal code or monkeypatching it heavily, making your test suite brittle and difficult to upgrade.

**Technology-Specific Detail:** PyTest offers a rich set of hooks that allow you to customize almost every aspect of the test execution process. By implementing these hooks in "conftest.py", you can add custom functionality to your test suite without modifying the core PyTest code. A sketch of how tests can consume the "--env" option added above follows.
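As a small illustrative addition (not part of the original example), a fixture or test can read the "--env" option registered by "pytest_addoption" through "request.config.getoption":

"""python
import pytest

@pytest.fixture
def env(request):
    # Read the value passed with --env (defaults to "dev" as configured above)
    return request.config.getoption("--env")

def test_uses_environment(env):
    # Tests can branch on the environment without parsing the CLI themselves.
    assert env in ("dev", "prod")
"""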
## 5. Consistent Naming and Structure

### 5.1. Standard: Test File and Function Naming

**Do This:** Adopt a consistent naming convention for test files and test functions that clearly indicates the purpose of the tests.

**Don't Do This:** Use arbitrary or ambiguous names that make it difficult to understand the relationship between tests and the code they are testing.

**Why:** Consistent naming improves code readability and maintainability, making it easier to locate and understand tests.

**Code Example:**

* Test files: "test_<module_name>.py" (e.g., "test_user.py", "test_api.py")
* Test functions: "test_<functionality>_<scenario>" (e.g., "test_create_user_success", "test_login_failure_invalid_credentials")

"""python
# test_user.py
def test_create_user_success():
    """Tests successful user creation."""
    # ... test logic ...
    assert True

def test_login_failure_invalid_credentials():
    """Tests login failure due to invalid credentials."""
    # ... test logic ...
    assert False
"""

**Anti-Pattern:** Using names like "test1", "test2", "test_stuff" without any indication of the tested functionality.

**Technology-Specific Detail:** While PyTest only requires the "test_" prefix for discovery, adopting a richer, consistent convention makes it easier to navigate and maintain large test suites.

### 5.2. Standard: Logical Test Structure

**Do This:** Organize test files and directories in a way that mirrors the structure of the application code. Keep test functions short and focused on a single assertion or related set of assertions. Use comments and docstrings to clearly explain the purpose of each test.

**Don't Do This:** Create large, monolithic test files or deeply nested directory structures that are difficult to navigate and understand. Write test functions that contain excessive logic or multiple unrelated assertions.

**Why:** A logical test structure improves code discoverability and maintainability, making it easier to locate and understand tests.

**Code Example:**

"""
my_project/
├── src/
│   ├── user.py
│   ├── api.py
│   └── ...
└── tests/
    ├── test_user.py
    ├── test_api.py
    ├── conftest.py
    └── ...
"""

"""python
# test_user.py
def test_create_user_valid_input():
    """Tests user creation with valid input data."""
    # ... setup ...
    # ... action ...
    # ... assertion ...
    assert True

def test_create_user_invalid_email():
    """Tests user creation with an invalid email address."""
    # ... setup ...
    # ... action ...
    # ... assertion ...
    assert False
"""

**Anti-Pattern:** Placing all tests in a single file or creating a deeply nested directory structure that doesn't reflect the application structure.

**Technology-Specific Detail:** PyTest's test discovery mechanism automatically finds test files and functions based on naming conventions and directory structure. Structuring tests logically makes it easier for PyTest to discover and run them.

Following these component design standards will lead to more robust, maintainable, and efficient PyTest test suites, helping ensure the quality and reliability of your Python applications.
# Tooling and Ecosystem Standards for PyTest

This document outlines the recommended tooling, libraries, and ecosystem practices for developing and maintaining PyTest test suites. Adhering to these standards will promote maintainability, readability, performance, and security of your test code.

## 1. Core Tooling and Configuration

### 1.1. PyProject.toml Configuration

**Standard:** Use "pyproject.toml" to manage dependencies, build system, and PyTest configuration.

**Why:** "pyproject.toml" is the standard configuration file for Python projects, promoting consistency and compatibility with modern build tools. It allows declaring dependencies, build backends, and tool-specific settings in a single location.

**Do This:**

* Define all project dependencies (including testing dependencies like pytest) in "pyproject.toml".
* Use "setuptools" or "poetry" as the build backend.
* Configure PyTest-specific settings using the "[tool.pytest.ini_options]" section.

**Don't Do This:**

* Use "requirements.txt" alone for dependency management (use it to pin dependency versions, but reference it from "pyproject.toml").
* Store configuration in "setup.py" or older config files.

**Example:**

"""toml
# pyproject.toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "my-project"
version = "0.1.0"
description = "A sample project"
dependencies = [
    "requests>=2.28",
]

[project.optional-dependencies]
test = [
    "pytest>=7.0",
    "pytest-cov",
    "pytest-xdist",
    "pytest-mock"
]

[tool.pytest.ini_options]
addopts = [
    "--cov=./my_package",
    "--cov-report=term-missing"
]
testpaths = [
    "tests"
]
"""

**Anti-pattern:** Storing pytest configuration in "setup.cfg" or "pytest.ini" instead of "pyproject.toml".

### 1.2. Virtual Environments

**Standard:** Always use virtual environments to isolate project dependencies.

**Why:** Virtual environments prevent dependency conflicts between projects and ensure reproducibility of test environments.

**Do This:**

* Use "venv", "virtualenv", or "conda" to create isolated environments.
* Activate the virtual environment before installing dependencies or running tests.

**Don't Do This:**

* Install project dependencies globally.
* Mix dependencies from different projects in the same environment.

**Example:**

"""bash
# Using venv
python3 -m venv .venv
source .venv/bin/activate

# Using virtualenv
virtualenv .venv
source .venv/bin/activate
"""

**Anti-pattern:** Installing packages directly into the system Python environment.

## 2. Recommended PyTest Plugins and Extensions

### 2.1. Coverage Reporting ("pytest-cov")

**Standard:** Use "pytest-cov" to measure test coverage.

**Why:** Coverage reporting helps identify untested code paths, improving the quality and reliability of the codebase.

**Do This:**

* Install "pytest-cov" as a test dependency.
* Configure coverage reporting options using "pyproject.toml" or command-line arguments.
* Set coverage thresholds to enforce minimum test coverage percentages.
* Use branch coverage measurement.

**Don't Do This:**

* Ignore coverage reports.
* Aim for 100% coverage without considering the complexity and risk associated with each code path.
* Rely on line coverage alone.

**Example:**

"""toml
# pyproject.toml
[tool.pytest.ini_options]
addopts = [
    "--cov=./my_package",
    "--cov-report=term-missing",
    "--cov-branch"  # Enables branch coverage
]
"""

**Example:** Run tests with coverage reporting from the command line.

"""bash
pytest --cov=./my_package --cov-report=term-missing --cov-branch
"""
Mocking ("pytest-mock") **Standard:** Use "pytest-mock" for mocking dependencies in unit tests. **Why:** "pytest-mock" provides a convenient fixture for creating and managing mock objects, simplifying the process of isolating units of code during testing. Provides better integration with pytest fixtures and marks, facilitating testing complex interactions. **Do This:** * Use the "mocker" fixture provided by "pytest-mock" to create mock objects. * Use "mocker.patch" to replace dependencies with mock objects. **Don't Do This:** * Use manual mocking or outdated mocking libraries like "unittest.mock". * Over-mock code. Only mock external dependencies or complex interfaces. **Example:** """python # tests/test_my_module.py from my_package import my_module def test_my_function(mocker): mock_dependency = mocker.patch("my_package.my_module.external_dependency") mock_dependency.return_value = "mocked value" result = my_module.my_function() assert result == "expected value" mock_dependency.assert_called_once() """ ### 2.3. Distributed Testing ("pytest-xdist") **Standard:** Use "pytest-xdist" to parallelize test execution. **Why:** Parallelizing tests can significantly reduce test execution time, especially for large test suites. **Do This:** * Install "pytest-xdist" as a test dependency. * Use the "-n" option to specify the number of worker processes. * Ensure tests are isolated and do not rely on shared resources. **Don't Do This:** * Run tests in parallel if they have dependencies on shared resources without proper synchronization mechanisms. * Over-allocate worker processes, which can lead to resource contention and increased overhead. **Example:** """bash pytest -n auto # Automatically determine the number of workers pytest -n 4 # Use 4 worker processes """ ### 2.4. Test Fixtures and Factories ("pytest-factoryboy") **Standard:** Use "pytest-factoryboy" to generate test data using factories. **Why:** "pytest-factoryboy" integrates Factory Boy with PyTest, providing a convenient way to generate realistic test data and avoid duplication of test data setup code. **Do This:** * Define factories for model classes using Factory Boy. * Use the factories within PyTest fixtures to generate test data. * Use lazy attributes or sequence attributes to create unique or dynamic values. **Don't Do This:** * Hardcode test data directly in test functions. * Create redundant or inconsistent test data setup logic. **Example:** """python # tests/conftest.py import factory import pytest from my_package.models import User class UserFactory(factory.Factory): class Meta: model