# Code Style and Conventions Standards for TDD
This document outlines the coding style and conventions standards for Test-Driven Development (TDD). Adhering to these guidelines promotes code consistency, readability, maintainability, and collaboration within the development team. These standards are designed to be used by developers and as context for AI coding assistants such as GitHub Copilot and Cursor.
## 1. General Principles
### 1.1. Consistency is Key
* **Do This:** Maintain a consistent style throughout the codebase. This includes naming conventions, formatting, and architectural patterns.
* **Don't Do This:** Introduce disparate styles within the same project or module. This leads to cognitive overhead and increases the risk of errors.
### 1.2. Readability Matters
* **Do This:** Write code that is easy to read and understand. Use meaningful names, clear comments, and proper formatting.
* **Don't Do This:** Write cryptic code that only you can understand. Assume that someone else (or your future self) will need to maintain your code.
### 1.3. Testability is Paramount
* **Do This:** Design your code with testability in mind from the outset. Favor dependency injection, interfaces, and loose coupling to facilitate unit testing.
* **Don't Do This:** Create tightly coupled classes with hard-coded dependencies that are difficult to isolate for testing.
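For illustration, a minimal C# sketch contrasting the two approaches (the "IClock" interface and all names here are hypothetical, not from any specific library):
"""csharp
using System;

// Hard to test: the clock dependency is created internally and cannot be replaced.
public class HardcodedReportGenerator
{
    public string Generate() => $"Report generated at {DateTime.Now}";
}

// Testable: the dependency sits behind an interface and is injected,
// so a test can supply a fixed, predictable clock.
public interface IClock
{
    DateTime Now { get; }
}

public class ReportGenerator
{
    private readonly IClock _clock;

    public ReportGenerator(IClock clock) => _clock = clock;

    public string Generate() => $"Report generated at {_clock.Now}";
}
"""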
### 1.4. KISS (Keep It Simple, Stupid)
* **Do This:** Strive for simplicity in your code. Avoid unnecessary complexity and over-engineering.
* **Don't Do This:** Introduce design patterns or abstractions prematurely. Only add complexity when it's clearly needed.
### 1.5. DRY (Don't Repeat Yourself)
* **Do This:** Avoid redundant code. Extract common logic into reusable functions or classes.
* **Don't Do This:** Copy and paste code blocks. This leads to inconsistencies and makes it harder to maintain your code.
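As a small example (names are illustrative), shared validation logic extracted into one helper rather than copied between methods:
"""csharp
using System;

public static class OrderValidation
{
    // One reusable, independently testable home for the shared rule,
    // instead of the same "if" block pasted into every caller.
    public static void EnsurePositiveQuantity(int quantity)
    {
        if (quantity <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(quantity), "Quantity must be positive.");
        }
    }
}
"""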
## 2. Naming Conventions
### 2.1. General Naming Rules
* **Do This:** Use descriptive and meaningful names for variables, functions, classes, and modules.
* **Don't Do This:** Use single-letter variable names (except for loop counters like "i", "j", "k"). Avoid abbreviations unless they are widely understood in the domain.
### 2.2. Class Names
* **Do This:** Use PascalCase (e.g., "CustomerOrderService") for class names. Class names should be nouns or noun phrases representing the entity or concept the class models.
* **Don't Do This:** Use vague or ambiguous class names like "Helper" or "Manager" without providing context. Avoid using verbs in class names.
### 2.3. Method Names
* **Do This:** Use the casing convention of your language for method names: camelCase in JavaScript, TypeScript, and Java (e.g., "calculateTotalPrice()"), and PascalCase in C# (e.g., "CalculateTotalPrice()"). Method names should be verbs or verb phrases indicating what the method does.
* **Don't Do This:** Use names that are too short or cryptic. Avoid names that don't accurately describe the method's behavior.
### 2.4. Variable Names
* **Do This:** Use camelCase (e.g., "customerName") for variable names. Variable names should reflect the type and purpose of the variable.
* **Don't Do This:** Use variable names that are too similar (e.g., "item" and "items"). Avoid shadowing variable names from outer scopes.
### 2.5. Constant Names
* **Do This:** Use UPPER_SNAKE_CASE (e.g., "MAX_RETRIES") for constant names.
* **Don't Do This:** Use magic numbers directly in your code. Define them as named constants.
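For example (note that idiomatic C# actually favors PascalCase such as "MaxRetries" for constants; UPPER_SNAKE_CASE is shown here to match the rule above and is the standard in Java, JavaScript, and Python):
"""csharp
public class RetryPolicy
{
    private const int MAX_RETRIES = 3; // Named constant instead of a magic number

    public bool ShouldRetry(int attempts)
    {
        // Compare against the named constant, not a bare literal 3.
        return attempts < MAX_RETRIES;
    }
}
"""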
### 2.6. Test Method Names
* **Do This:** Use a structure that clearly describes the scenario under test and the expected outcome. A common pattern is "[UnitOfWork]_[ScenarioUnderTest]_[ExpectedBehavior]". Examples: "CalculateDiscount_ValidCustomer_ReturnsDiscount", "ApplyCoupon_InvalidCode_ThrowsException".
* **Don't Do This:** Use vague test method names like "Test1", "Test2". Avoid names that don't reflect the expected behavior.
* **Why:** Clear and descriptive test names are *essential* in TDD. They serve as executable specifications and provide immediate feedback on whether the system behaves as expected. Poorly named tests make it difficult to understand the intent and diagnose failures.
### 2.7. Technology-Specific Naming
* **Example (C#):** Avoid Hungarian notation; modern IDEs already provide type information, so prefer descriptive names over encoding type information in the name. For collections (e.g., lists, arrays), use plural names (e.g., "customers" instead of "customerList").
* **Example (JavaScript/TypeScript):** Follow similar camelCase conventions for variables and methods. For boolean variables, consider prefixing with "is", "has", or "should" (e.g., "isActive", "hasPermission", "shouldUpdate").
## 3. Formatting
### 3.1. Indentation
* **Do This:** Use consistent indentation (e.g., 4 spaces or 2 spaces) throughout the codebase. Configure your editor or IDE to automatically handle indentation.
* **Don't Do This:** Mix tabs and spaces for indentation. This can lead to visual inconsistencies across different environments.
### 3.2. Line Length
* **Do This:** Limit line length to a reasonable maximum (e.g., 120 characters). This improves readability and reduces the need for horizontal scrolling.
* **Don't Do This:** Write excessively long lines of code. Break them up into smaller, more manageable chunks.
### 3.3. Whitespace
* **Do This:** Use whitespace to improve readability. Add blank lines to separate logical blocks of code. Use spaces around operators and after commas.
* **Don't Do This:** Cram code together without any whitespace. Avoid excessive whitespace that makes the code look sparse.
### 3.4. Braces
* **Do This:** Use a consistent brace style (e.g., Allman or K&R). Choose a style and stick to it. Allman style (braces on their own lines) often improves readability, especially in larger codebases.
* **Don't Do This:** Mix different brace styles within the same file or project.
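For comparison, the same method in both styles (a sketch; pick one per project):
"""csharp
// Allman style: braces on their own lines.
public decimal ApplyDiscount(decimal price)
{
    if (price > 100)
    {
        return price * 0.9m;
    }
    return price;
}

// K&R style: opening brace on the same line.
public decimal ApplyDiscount(decimal price) {
    if (price > 100) {
        return price * 0.9m;
    }
    return price;
}
"""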
### 3.5. Formatting Specific to TDD - Test File Structure
* **Do This:** Structure test files to be easily navigable. Common patterns include:
* Using "#region" directives (C#) or similar mechanisms to group related tests.
* Organizing tests by the specific unit of work being tested.
* Grouping arrange, act, and assert sections with comments (or helper methods, see examples below).
* **Don't Do This:** Have a single, monolithic test file with hundreds of lines of code and no clear structure.
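A minimal C# sketch of a navigable test file using "#region" grouping (test bodies elided; names are illustrative):
"""csharp
using Xunit;

public class OrderServiceTests
{
    #region CalculateTotalPrice

    [Fact]
    public void CalculateTotalPrice_EmptyOrder_ReturnsZero() { /* ... */ }

    [Fact]
    public void CalculateTotalPrice_SingleItem_ReturnsItemPrice() { /* ... */ }

    #endregion

    #region ApplyCoupon

    [Fact]
    public void ApplyCoupon_InvalidCode_ThrowsException() { /* ... */ }

    #endregion
}
"""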
## 4. Comments and Documentation
### 4.1. Code Comments
* **Do This:** Write clear and concise comments to explain complex logic, algorithms, or design decisions. Explain *why* the code is doing something, not *what* it is doing (the code already shows that).
* **Don't Do This:** Comment obvious code. Avoid writing comments that are outdated or inaccurate.
### 4.2. API Documentation
* **Do This:** Use documentation generators (e.g., JSDoc, Doxygen, DocFX) to create API documentation from your code comments.
* **Don't Do This:** Neglect to document your public APIs. This makes it difficult for others to use your code.
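For instance, C# XML documentation comments (the format consumed by tools like DocFX) might look like this sketch:
"""csharp
/// <summary>
/// Calculates the final price for an order after applying any customer discount.
/// </summary>
/// <param name="customer">The customer placing the order.</param>
/// <param name="orderItems">The items included in the order.</param>
/// <returns>The total price after discount.</returns>
public decimal CalculateTotalPrice(Customer customer, List<OrderItem> orderItems)
{
    // ...
}
"""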
### 4.3. TDD-Specific Comments
* **Do This:** Use comments sparingly *within* test methods to delineate the Arrange, Act, and Assert (AAA) sections, *especially* in complex tests. Consider using helper methods instead of comments for simple cases.
* **Don't Do This:** Comment every single line of your test code. The test code itself should be relatively self-explanatory if the test and the code are well-designed.
## 5. Code Examples
Here are some code examples demonstrating the coding style and conventions outlined above, keeping in mind the focus on TDD practices:
### 5.1. C# Example
"""csharp
// Class definition (PascalCase)
public class CustomerOrderService
{
private readonly IDiscountCalculator _discountCalculator; // Dependency Injection
// Constructor (dependency injection)
public CustomerOrderService(IDiscountCalculator discountCalculator)
{
_discountCalculator = discountCalculator;
}
// Method definition (camelCase)
public decimal CalculateTotalPrice(Customer customer, List orderItems)
{
decimal subtotal = orderItems.Sum(item => item.Price * item.Quantity);
decimal discount = _discountCalculator.CalculateDiscount(customer, subtotal); // Use injected dependency
return subtotal - discount;
}
}
public interface IDiscountCalculator
{
decimal CalculateDiscount(Customer customer, decimal subtotal);
}
public class Customer { public bool IsPreferredCustomer { get; set; } }
public class OrderItem {public decimal Price { get; set; }public int Quantity { get; set; }}
// Test Class and Methods
using Xunit;
using Moq; //Example using Moq for Mocking
public class CustomerOrderServiceTests
{
[Fact]
public void CalculateTotalPrice_PreferredCustomer_AppliesDiscount()
{
// Arrange
var mockDiscountCalculator = new Mock();
mockDiscountCalculator.Setup(x => x.CalculateDiscount(It.IsAny(), It.IsAny())).Returns(10); // Setup the mock with expected return
var customerOrderService = new CustomerOrderService(mockDiscountCalculator.Object);
var customer = new Customer { IsPreferredCustomer = true };
var orderItems = new List { new OrderItem { Price = 100, Quantity = 1 } };
// Act
decimal totalPrice = customerOrderService.CalculateTotalPrice(customer, orderItems);
// Assert
Assert.Equal(90, totalPrice);
mockDiscountCalculator.Verify(x => x.CalculateDiscount(customer, 100), Times.Once); // Verify mock interaction
}
[Fact]
public void CalculateTotalPrice_NonPreferredCustomer_NoDiscountTaken()
{
// Arrange (helper method for clarity - see below)
var (customerOrderService, mockDiscountCalculator, customer, orderItems) = ArrangeCalculateTotalPriceTest(false);
// Act
decimal totalPrice = customerOrderService.CalculateTotalPrice(customer, orderItems);
// Assert
Assert.Equal(100, totalPrice);
mockDiscountCalculator.Verify(x => x.CalculateDiscount(customer, 100), Times.Once);
}
//Helper method to keep tests DRY
private (CustomerOrderService, Mock, Customer, List) ArrangeCalculateTotalPriceTest(bool isPreferred)
{
var mockDiscountCalculator = new Mock();
mockDiscountCalculator.Setup(x => x.CalculateDiscount(It.IsAny(), It.IsAny())).Returns(isPreferred ? 10 : 0);
var customerOrderService = new CustomerOrderService(mockDiscountCalculator.Object);
var customer = new Customer { IsPreferredCustomer = isPreferred };
var orderItems = new List { new OrderItem { Price = 100, Quantity = 1 } };
return (customerOrderService, mockDiscountCalculator, customer, orderItems);
}
}
"""
### 5.2. JavaScript/TypeScript Example
"""typescript
// Class definition (PascalCase)
class CustomerOrderService {
private discountCalculator: IDiscountCalculator; //Dependency Injection
// Constructor (dependency injection)
constructor(discountCalculator: IDiscountCalculator) {
this.discountCalculator = discountCalculator;
}
// Method definition (camelCase)
public calculateTotalPrice(customer: Customer, orderItems: OrderItem[]): number {
const subtotal = orderItems.reduce((sum, item) => sum + item.price * item.quantity, 0);
const discount = this.discountCalculator.calculateDiscount(customer, subtotal);
return subtotal - discount;
}
}
interface IDiscountCalculator {
calculateDiscount(customer: Customer, subtotal: number): number;
}
class Customer { public isPreferredCustomer: boolean; constructor(isPreferredCustomer: boolean){this.isPreferredCustomer = isPreferredCustomer;}}
class OrderItem {public price: number; public quantity: number; constructor(price: number,quantity: number){this.price = price; this.quantity = quantity;}}
// Test Class and Methods
import { CustomerOrderService } from './customer-order-service'; // Adjust import as needed
import { Customer } from './customer';
import { OrderItem } from './order-item';
describe('CustomerOrderService', () => {
it('calculateTotalPrice_preferredCustomer_appliesDiscount', () => {
// Arrange
const mockDiscountCalculator = {
calculateDiscount: jest.fn((customer, subtotal) => 10),
};
const customerOrderService = new CustomerOrderService(mockDiscountCalculator);
const customer = new Customer(true);
const orderItems = [new OrderItem(100,1)];
// Act
const totalPrice = customerOrderService.calculateTotalPrice(customer, orderItems);
// Assert
expect(totalPrice).toBe(90);
expect(mockDiscountCalculator.calculateDiscount).toHaveBeenCalledWith(customer, 100); //More explicit check
});
it('calculateTotalPrice_nonPreferredCustomer_noDiscount', () => {
// Arrange
const mockDiscountCalculator = {
calculateDiscount: jest.fn((customer, subtotal) => 0),
};
const customerOrderService = new CustomerOrderService(mockDiscountCalculator);
const customer = new Customer(false);
const orderItems = [new OrderItem(100,1)];
// Act
const totalPrice = customerOrderService.calculateTotalPrice(customer, orderItems);
// Assert
expect(totalPrice).toBe(100);
expect(mockDiscountCalculator.calculateDiscount).toHaveBeenCalledWith(customer, 100);
});
});
"""
### 5.3. Python Example
"""python
# Class definition (PascalCase/CamelCase is less strict, but consistency within project is Key)
class CustomerOrderService:
def __init__(self, discount_calculator): #Dependency Injection
self.discount_calculator = discount_calculator # Dependency injection
def calculate_total_price(self, customer, order_items): #Method definition (snake_case)
subtotal = sum(item.price * item.quantity for item in order_items)
discount = self.discount_calculator.calculate_discount(customer, subtotal)
return subtotal - discount
#Interface achieved via duck-typing (Python)
# class IDiscountCalculator: # not strictly enforced
class Customer:
def __init__(self, is_preferred_customer):
self.is_preferred_customer = is_preferred_customer
class OrderItem:
def __init__(self, price, quantity):
self.price = price
self.quantity = quantity
# Test Class and method
import unittest
from unittest.mock import Mock #Use unittest.mock instead of older mock library
# from customer_order_service import CustomerOrderService, Customer, OrderItem #Adjust import as needed
class TestCustomerOrderService(unittest.TestCase):
def test_calculate_total_price_preferred_customer_applies_discount(self):
# Arrange
mock_discount_calculator = Mock()
mock_discount_calculator.calculate_discount.return_value = 10
customer_order_service = CustomerOrderService(mock_discount_calculator)
customer = Customer(True)
order_items = [OrderItem(100, 1)]
# Act
total_price = customer_order_service.calculate_total_price(customer, order_items)
# Assert
self.assertEqual(total_price, 90)
mock_discount_calculator.calculate_discount.assert_called_with(customer, 100) #Improved assertion
def test_calculate_total_price_non_preferred_customer_no_discount(self):
# Arrange
mock_discount_calculator = Mock()
mock_discount_calculator.calculate_discount.return_value = 0
customer_order_service = CustomerOrderService(mock_discount_calculator)
customer = Customer(False)
order_items = [OrderItem(100, 1)]
# Act
total_price = customer_order_service.calculate_total_price(customer, order_items)
# Assert
self.assertEqual(total_price, 100)
mock_discount_calculator.calculate_discount.assert_called_with(customer, 100)
"""
## 6. Common Anti-Patterns
* **Long Methods:** Avoid methods that are too long and complex. Break them up into smaller, more focused methods.
* **God Classes:** Avoid classes that do too much. Follow the Single Responsibility Principle.
* **Feature Envy:** Avoid classes that access the internal state of other classes too frequently. This violates encapsulation.
* **Shotgun Surgery:** Avoid making changes that require you to modify multiple unrelated classes. This indicates poor design.
* **Ignoring Warnings:** Treat compiler warnings and static analysis warnings seriously. They often indicate potential problems in your code.
* **Over-Commenting (Test Code):** As stated above, let the *test name* and the *code* within the test speak for themselves. Excessive commenting often hides poorly designed tests.
* **Under-Testing Edge Cases:** Failing to test boundary conditions, invalid inputs, and error scenarios. This can lead to unexpected behavior and failures in production.
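As a quick illustration of that last point, an xUnit test for a boundary condition (the "Calculator" helper is hypothetical, defined here only for self-containment):
"""csharp
using System;
using Xunit;

// Hypothetical helper used only for this illustration.
public static class Calculator
{
    public static int Divide(int dividend, int divisor) => dividend / divisor;
}

public class CalculatorTests
{
    [Fact]
    public void Divide_ByZero_ThrowsDivideByZeroException()
    {
        // Boundary condition: integer division by zero must fail loudly, not return garbage.
        Assert.Throws<DivideByZeroException>(() => { Calculator.Divide(10, 0); });
    }
}
"""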
## 7. Technology-Specific Considerations
Specific languages and frameworks have their own conventions and best practices. It's crucial to be aware of these and follow them consistently. For example:
* **C#:** Utilize features like LINQ, async/await, and expression-bodied members for concise and readable code (a short sketch follows this list). Embrace dependency injection frameworks like Autofac or Microsoft.Extensions.DependencyInjection. Aim for immutability where practical.
* **JavaScript/TypeScript:** Use modern ES6+ features like arrow functions, destructuring, and template literals. Employ TypeScript's type system effectively to prevent runtime errors. Consider using a linter like ESLint and a formatter like Prettier for automated code style enforcement.
* **Python:** Follow PEP 8 style guide for Python code. Use virtual environments to manage dependencies. Take advantage of Python's dynamic typing for rapid prototyping, but use type hints where appropriate for clarity and maintainability.
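Illustrating the C# bullet above, a minimal sketch combining LINQ with an expression-bodied member (the "OrderItem" record is defined inline so the snippet stands alone):
"""csharp
using System.Collections.Generic;
using System.Linq;

public record OrderItem(decimal Price, int Quantity);

public class PriceCalculator
{
    // Expression-bodied member plus LINQ: one readable expression instead of a manual loop.
    public decimal Subtotal(IEnumerable<OrderItem> items) =>
        items.Sum(item => item.Price * item.Quantity);
}
"""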
## 8. Tooling
Leverage tools to automate code style enforcement and identify potential issues. Popular options include:
* **Linters:** ESLint (JavaScript/TypeScript), Pylint (Python), StyleCop (C#)
* **Formatters:** Prettier (JavaScript/TypeScript), Black (Python), plus EditorConfig for consistent cross-language editor settings
* **Static Analyzers:** SonarQube, Coverity, NDepend (C#)
* **IDEs:** Rider, VS Code, and Visual Studio have excellent support for code formatting, linting, and refactoring.
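As one concrete starting point, a minimal ".editorconfig" enforcing the formatting rules from Section 3 (the values shown are suggestions to adjust per project; "max_line_length" is honored by some, not all, editors):
"""ini
root = true

[*]
indent_style = space
indent_size = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
max_line_length = 120
"""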
## 9. Continuous Improvement
Coding standards are not static. Regularly review and update them based on new technologies, best practices, and lessons learned from past projects. Encourage feedback from the development team and be willing to adapt the standards as needed. Hold periodic code reviews to ensure adherence to the standards and identify areas for improvement.
By adopting and consistently applying these coding style and convention standards, your TDD projects will be more maintainable, readable, and robust, leading to increased developer productivity and higher-quality software.
# Core Architecture Standards for TDD This document outlines the core architectural standards for Test-Driven Development (TDD). Adhering to these standards improves code quality, maintainability, and testability. These standards are designed to be used by developers and as context for AI coding assistants, ensuring consistency and best practices across the development lifecycle. The goal is to ensure that architectural decisions are strongly aligned with the testability demands of TDD. ## 1. Architectural Principles and Patterns ### 1.1. Fundamental Principles * **Do This:** Apply SOLID principles at the architectural level. Specifically, strive for single responsibility at multiple levels of abstraction (e.g. modules, packages, services), open/closed principle in framework design or component architecture. * **Don't Do This:** Create monolithic applications with tightly coupled components. This makes testing difficult and hinders future modifications. * **Why:** SOLID principles promote loosely coupled, modular designs that facilitate independent testing of components. * **Do This:** Prioritize separation of concerns (SoC). Architect your application by dividing it into distinct sections, each addressing a specific concern. * **Don't Do This:** Mix unrelated functionalities within a single module or class. This leads to code that is hard to understand, test, and maintain. * **Why:** Separation of concerns improves code organization and allows for isolated testing and easier modification of individual features. * **Do This:** Embrace the Dependency Inversion Principle (DIP). Abstractions should not depend on details; details should depend on abstractions. * **Don't Do This:** Hardcode concrete dependencies. This makes classes difficult to test in isolation. * **Why:** DIP enables the use of dependency injection and mocking, allowing tests to control dependencies and verify interactions. * **Do This:** Ensure the architecture facilitates testability and maintainability. Tests should be easy to write, run, and understand. * **Don't Do This:** Defer testability considerations to the later phases of development, assuming that tests can always be added later with minimal refactoring. * **Why:** This makes systems more robust, easier to maintain, and ensures comprehensive testing throughout the development process. ### 1.2. Architectural Patterns Tailored for TDD * **Do This:** Favor a layered architecture. Typically, this involves a presentation layer, an application layer, a domain layer, and an infrastructure layer each serving a specific purpose. * **Don't Do This:** Directly accessing the database from the presentation logic. Violating layers boundaries makes it extremely difficult to thoroughly test at the layer level. * **Why:** Layered Architecture allows for a clear separation of concerns. Each layer can be tested independently using mocks or stubs. 
"""python # Example of Layered Architecture in Python (simplified web application) # Infrastructure Layer (Data Access) class UserRepository: def get_user_by_id(self, user_id): # Database logic to retrieve user pass # Domain Layer (Business Logic) class UserService: def __init__(self, user_repository: UserRepository): self.user_repository = user_repository def get_user_profile(self, user_id): user = self.user_repository.get_user_by_id(user_id) # Additional business logic return user # Application Layer (API Endpoints) from flask import Flask, jsonify app = Flask(__name__) user_repository = UserRepository() user_service = UserService(user_repository) @app.route("/users/<int:user_id>", methods=['GET']) def get_user(user_id): user = user_service.get_user_profile(user_id) return jsonify(user) if __name__ == '__main__': app.run(debug=True) # Example Test (testing the Application Layer, mocking the domain layer). Note this example uses pytest. import pytest from unittest.mock import MagicMock from your_app import get_user, user_service # Adjust import @pytest.fixture def mock_user_service(monkeypatch): mock = MagicMock() monkeypatch.setattr(your_app, 'user_service', mock) # adjust import return mock def test_get_user_success(mock_user_service): mock_user_service.get_user_profile.return_value = {"id": 1, "name": "Test User"} response = get_user(1) # Assume this returns Flask's response object assert response.status_code == 200 assert response.get_json() == {"id": 1, "name": "Test User"} """ * **Do This:** Consider Hexagonal Architecture (Ports and Adapters). Place the core business logic at the center, surrounded by ports (interfaces) and adapters (implementations). * **Don't Do This:** Directly couple the core business logic to external technologies (e.g., databases or UI frameworks). * **Why:** Hexagonal Architecture separates the domain logic from the infrastructure, enabling easier testing with mock adapters and simplifies swapping out external dependencies. """java // Example of Hexagonal Architecture in Java // Port (Interface) interface UserRepository { User getUserById(String userId); } // Adapter (Implementation) class PostgresUserRepository implements UserRepository { @Override public User getUserById(String userId) { // Implementation using Postgres database return new User(); //dummy. replace with DB fetch } } // Domain (Core Business Logic) class UserService { private final UserRepository userRepository; public UserService(UserRepository userRepository) { this.userRepository = userRepository; } public User getUserProfile(String userId) { return userRepository.getUserById(userId); // Add business logic } } // Test using a mock adapter. Demonstrates flexibility of swapping implementations through ports. 
import org.junit.jupiter.api.Test; import static org.mockito.Mockito.*; import static org.junit.jupiter.api.Assertions.*; class UserServiceTest { @Test void getUserProfile_shouldReturnUser_whenUserExists() { // Arrange UserRepository mockUserRepository = mock(UserRepository.class); User expectedUser = new User(); //replace with some populated User class when(mockUserRepository.getUserById("123")).thenReturn(expectedUser); UserService userService = new UserService(mockUserRepository); // Act User actualUser = userService.getUserProfile("123"); // Assert assertEquals(expectedUser, actualUser); verify(mockUserRepository).getUserById("123"); // Verify the method was called } } """ * **Do This:** When developing microservices apply the strangler fig pattern to iteratively migrate from an older monolithic architecture to a new microservices-based architecture. * **Don't Do This:** Attempt a "big bang" rewrite by rebuilding an entire application as microservices at once. * **Why:** This allows for incremental building and rollout. Old functionality remains in place until the new microservice is thoroughly tested and ready for production. ### 1.3. Project Structure & Organization * **Do This:** Structure your project according to architectural layers or modules, keeping test code alongside the corresponding source code. A common practice is to have "src/" and "tests/" directories at the root level, mirroring package structures within each. * **Don't Do This:** Place all tests in a single, monolithic "tests/" directory. This becomes unwieldy and difficult to navigate as the project grows. * **Why:** This organization improves discoverability and helps maintain a clear relationship between code and its corresponding tests. """ project-root/ ├── src/ │ ├── main/ │ │ ├── java/ │ │ │ ├── com/ │ │ │ │ └── example/ │ │ │ │ ├── domain/ │ │ │ │ │ ├── User.java │ │ │ │ │ └── UserService.java │ │ │ │ ├── infrastructure/ │ │ │ │ │ └── UserRepository.java │ │ │ │ └── api/ │ │ │ │ └── UserController.java ├── tests/ │ ├── test/ │ │ ├── java/ │ │ │ ├── com/ │ │ │ │ └── example/ │ │ │ │ ├── domain/ │ │ │ │ │ └── UserServiceTest.java │ │ │ │ ├── infrastructure/ │ │ │ │ │ └── UserRepositoryTest.java │ │ │ │ └── api/ │ │ │ │ └── UserControllerTest.java ├── pom.xml (Maven project) """ * **Do This:** Use meaningful package and class names. Reflect the domain and functionality. * **Don't Do This:** Use generic names like "Util" or "Manager" without specific contexts. * **Why:** Improves code readability and maintainability across the project. * **Do This:** Keep modules small and cohesive. A module should have a focused responsibility and a well-defined interface. * **Don't Do This:** Create "god classes" or modules that try to do too much. * **Why:** Small modules are easier to understand, test, and reuse. This increases development speed and significantly lowers debugging costs. ## 2. TDD Workflow and Integration with Architecture ### 2.1. Red-Green-Refactor Cycle * **Do This:** Strictly adhere to the Red-Green-Refactor cycle. Write a failing test first (Red), implement the minimum amount of code to make the test pass (Green), and then refactor the code to improve its design (Refactor). * **Don't Do This:** Write code without a failing test, or skip the refactoring step. * **Why:** The Red-Green-Refactor cycle ensures that code is written to satisfy specific requirements and that it is continuously improved. ### 2.2. 
Test Pyramid * **Do This:** Follow the test pyramid: Aim for many unit tests, fewer integration tests, and even fewer end-to-end tests. Focus the majority of testing efforts on unit tests. * **Don't Do This:** Rely heavily on end-to-end tests at the expense of unit tests. This leads to slow and brittle test suites. * **Why:** Unit tests are faster to write and execute and provide more precise feedback. Integration tests verify interactions between components, while end-to-end tests ensure the application works as a whole. ### 2.3. Integrating Tests with Build and CI/CD pipelines * **Do This:** Integrate tests into the build process and CI/CD pipeline. Ensure that all tests pass before deploying any code. * **Don't Do This:** Defer running tests to manual execution or skip tests in the CI/CD pipeline to speed up deployments. * **Why:** Continuous testing ensures that any regressions are detected early and that the application remains in a working state. """yaml # Example of CI/CD Pipeline with Tests (GitHub Actions) name: CI/CD on: push: branches: [ "main" ] pull_request: branches: [ "main" ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Set up JDK 17 uses: actions/setup-java@v3 with: java-version: '17' distribution: 'temurin' - name: Grant execute permission for gradlew run: chmod +x gradlew - name: Run Tests with Gradle run: ./gradlew test - name: Build with Gradle run: ./gradlew build - name: Upload a Build Artifact uses: actions/upload-artifact@v3 with: name: Package path: build/libs/ """ ## 3. Testing Techniques and Tools ### 3.1. Unit Testing * **Do This:** Write focused unit tests that isolate and verify the behavior of individual classes or functions. * **Don't Do This:** Write overly complex unit tests that test multiple aspects of a component at once. * **Why:** Focused unit tests are easier to understand, maintain, and debug. """java // Example of Focused Unit Test (Java with JUnit) import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.*; class StringUtils { public static String reverseString(String input) { return new StringBuilder(input).reverse().toString(); } } class StringUtilsTest { @Test void reverseString_shouldReturnReversedString() { String input = "hello"; String expected = "olleh"; String actual = StringUtils.reverseString(input); assertEquals(expected, actual); } @Test void reverseString_shouldReturnEmptyString_whenInputIsEmpty() { String input = ""; String expected = ""; String actual = StringUtils.reverseString(input); assertEquals(expected, actual); } } """ ### 3.2. Mocking and Stubbing * **Do This:** Use mocking frameworks to isolate classes under test and control their dependencies. * **Don't Do This:** Mock everything. Only mock dependencies that are external or complex to set up. * **Why:** Mocking enables testing in isolation and verifying interactions between components. 
"""python # Example of Mocking in Python using pytest and unittest.mock import pytest from unittest.mock import MagicMock class EmailService: def send_email(self, recipient, message): # Implementation to send an email print(f"Sending email to {recipient}: {message}") class UserService: def __init__(self, email_service: EmailService): self.email_service = email_service def register_user(self, username, email): # Logic to register user self.email_service.send_email(email, f"Welcome, {username}!") return {"username": username, "email": email} @pytest.fixture def mock_email_service(): return MagicMock() def test_register_user_sends_email(mock_email_service): user_service = UserService(mock_email_service) user = user_service.register_user("testuser", "test@example.com") mock_email_service.send_email.assert_called_once_with("test@example.com", "Welcome, testuser!") assert user == {"username": "testuser", "email": "test@example.com"} #Another Example import unittest from unittest.mock import patch def add(x, y): return x + y class TestAdd(unittest.TestCase): @patch('__main__.add') def test_add(self, mock_add): mock_add.return_value = 5 result = add(2, 3) self.assertEqual(result, 5) # Assert that the mock was used and returned 5 mock_add.assert_called_with(2, 3) # assert that the proper arguments were utilized """ ### 3.3. Integration Testing * **Do This:** Write integration tests to verify the interaction between different components or modules. * **Don't Do This:** Test the complete system in integration tests. Focus on verifying specific interactions. * **Why:** Integration tests ensure that components work correctly together. ### 3.4. End-to-End Testing * **Do This:** Use end-to-end testing to ensure the entire application works as expected from the user's perspective. * **Don't Do This:** Rely solely on end-to-end tests. They are slow and difficult to debug. * **Why:** End-to-end tests provide confidence that the application delivers the expected functionality. ## 4. Technology Specific Details ### 4.1. Java and Spring Boot * **Do This:** Use Spring's testing support ("@SpringBootTest", "@MockBean") for integration and unit testing. These annotations and classes provide convenient ways to load application contexts and mock dependencies. * **Don't Do This:** Manually create and manage Spring application contexts in tests unless absolutely necessary. Spring's testing support simplifies this process. * **Why:** Spring's testing framework integrates seamlessly with JUnit and provides powerful features for testing Spring applications. Using these features results in cleaner, more maintainable tests. """java @SpringBootTest // Loads the full application context class MyServiceIntegrationTest { @Autowired private MyService myService; @MockBean // Replaces the real bean with a mock private Dependency dependency; @Test void testSomething() { // Configure the mock when(dependency.doSomething()).thenReturn("mockedResult"); // Call the service method String result = myService.performAction(); // Assert the result and verify interactions assertEquals("expectedResult", result); verify(dependency).doSomething(); } } """ ### 4.2 Python Testing with Pytest * **Do This:** Use pytest fixtures for setup and teardown in tests. Fixtures help manage test resources and dependencies in a clean and reusable manner. * **Don't Do This:** Directly instantiate objects or manage resources within test functions. This makes tests harder to read and maintain. 
* **Why:** Pytest fixtures promote clean test code and facilitate the creation of reusable test components. """python # Example of Test using Pytest Fixtures import pytest from unittest.mock import MagicMock class DatabaseConnection: #Dummy DB connector class def connect(self): return True def disconnect(self): return True @pytest.fixture def mock_db_connection(): connection = MagicMock(spec=DatabaseConnection) # create a "fake" Database connector class connection.connect.return_value = True yield connection connection.disconnect() def test_database_interaction(mock_db_connection): # Your test logic here, using the mocked database connection mock_db_connection.connect.assert_called_once() # assert that the db connected """ ### 4.3 Javascript testing with Jest * **Do This:** Use Jest's mocking utilities ("jest.mock()", "jest.spyOn()") to mock dependencies and verify function calls. This makes it easier to isolate units of code and ensures they behave as expected. * **Don't Do This:** Manually mock dependencies by creating mock objects and functions. Jest provides built-in utilities that simplify this process. * **Why:** Using Jest's mocking utilities results in cleaner and more maintainable tests. They also provide enhanced features for verifying function calls and interactions. """javascript // Example of Mocking in Jest // myModule.js export const fetchData = async () => { const response = await fetch('/api/data'); const data = await response.json(); return data; }; // myModule.test.js import { fetchData } from './myModule'; global.fetch = jest.fn(() => // Mock the global fetch function Promise.resolve({ json: () => Promise.resolve({ key: 'mocked value' }), }) ); test('fetchData should return mocked value', async () => { const result = await fetchData(); expect(result).toEqual({ key: 'mocked value' }); expect(fetch).toHaveBeenCalledTimes(1); }); """ ## 5. Common Anti-Patterns and Mistakes * **Long Setup:** Tests with excessive setup code become difficult to read and maintain. Simplify test setup by using helper functions or fixtures. * **Testing Implementation Details:** Tests should focus on verifying behavior, not implementation details. Avoid asserting on private methods or internal state. * **Ignoring Edge Cases:** Always test edge cases and boundary conditions to ensure code handles unexpected input correctly. * **Insufficient Assertions:** When running tests in Green after Red, tests with too few assertions may pass without sufficiently validating. Use multiple assertions to make sure all aspects have been validated to the requirements. By adhering to these core architecture standards, development teams can leverage TDD to build robust, maintainable, and testable software applications. This comprehensive guide serves as a valuable resource for developers and AI coding assistants to ensure consistency and best practices throughout the development lifecycle.
# Deployment and DevOps Standards for TDD This document outlines coding standards related to deployment and DevOps practices for Test-Driven Development (TDD). It focuses on how TDD principles integrate with build processes, CI/CD pipelines, and production environment considerations. These standards ensure maintainability, performance, security, and efficient delivery of software developed using TDD. ## 1. Build Processes and TDD ### 1.1. Standard: Automated Build Verification **Do This:** Integrate a complete test suite execution into your build process. Ensure that the build fails if any tests fail. **Don't Do This:** Skip test execution during builds or ignore test failures. **Why:** Early detection of failures is crucial for maintaining code quality and stability. Failing fast during the build process prevents defective code from propagating further down the pipeline. **Code Example (Maven):** """xml <!-- pom.xml --> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.0.0-M5</version> <configuration> <failIfNoTests>true</failIfNoTests> </configuration> </plugin> </plugins> </build> """ **Explanation:** The "maven-surefire-plugin" configured with "failIfNoTests" set to "true" will cause the build to fail if there are no tests or if any tests fail. This ensures that failing tests explicitly break the build. ### 1.2. Standard: Incremental Builds **Do This:** Design build processes to be incremental and efficient, compiling only changed or dependent modules. **Don't Do This:** Perform full builds every time, especially in large projects, as this significantly increases build times and reduces developer productivity. **Why:** Faster build times enable more frequent feedback loops, aligning with TDD's iterative nature. **Code Example (Gradle):** """gradle // build.gradle.kts plugins { java } tasks.test { useJUnitPlatform() testLogging { events("passed", "skipped", "failed") } } """ **Explanation:** Gradle's incremental build capabilities ensure that only necessary components are rebuilt and tested. Integrated testing with JUnit platform and logging events provides clear build status. ### 1.3. Standard: Dependency Management **Do This:** Use a robust dependency management system (e.g., Maven, Gradle, npm) to manage project dependencies and ensure consistency across different environments. **Don't Do This:** Manually manage dependencies or rely on system-wide installations. **Why:** Consistent dependency management prevents conflicts and ensures that the application behaves identically in development, testing, and production. **Code Example (npm):** """json // package.json { "name": "my-tdd-app", "version": "1.0.0", "dependencies": { "jest": "^29.0.0", "lodash": "^4.17.21" }, "devDependencies": { "eslint": "^8.0.0" }, "scripts": { "test": "jest", "lint": "eslint ." } } """ **Explanation:** "package.json" defines project dependencies and development dependencies. "npm install" will install the correct versions, ensuring consistency. Using "npm test" executes tests defined in the "scripts" section. ## 2. CI/CD and TDD ### 2.1. Standard: Continuous Integration Triggered by Code Changes **Do This:** Configure your CI/CD pipeline to automatically trigger a build and run all tests upon every code commit to the main branch or pull request. **Don't Do This:** Manually trigger builds or skip running tests in CI/CD. 
**Why:** Automated testing in CI/CD provides continuous feedback, preventing integration issues and ensuring that the main codebase remains stable. **Example Configuration (GitHub Actions):** """yaml # .github/workflows/ci.yml name: CI on: push: branches: [ main ] pull_request: branches: [ main ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Set up JDK 17 uses: actions/setup-java@v3 with: java-version: '17' distribution: 'adopt' - name: Cache Maven packages uses: actions/cache@v3 with: path: ~/.m2 key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }} restore-keys: ${{ runner.os }}-m2 - name: Build with Maven run: mvn -B verify --file pom.xml """ **Explanation:** This GitHub Actions workflow triggers on pushes to the "main" branch and pull requests targeting "main". It sets up Java, caches Maven packages, and runs the Maven verify goal, which includes running unit tests. Any test failures will cause the action to fail, preventing the merge. ### 2.2. Standard: Pre-Deployment Verification **Do This:** Include comprehensive automated tests in a pre-deployment verification step. These tests should include unit tests, integration tests, and end-to-end tests. **Don't Do This:** Deploy code without running all automated tests or rely solely on manual testing. **Why:** Comprehensive testing before deployment minimizes the risk of introducing defects into production. **Code Example (CI/CD Pipeline Stage):** """yaml # GitLab CI Example (.gitlab-ci.yml) stages: - build - test - deploy build: stage: build image: maven:3.8.1-jdk-17 script: - mvn compile test: stage: test image: maven:3.8.1-jdk-17 script: - mvn test deploy: stage: deploy image: docker:latest services: - docker:dind before_script: - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY script: - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA """ **Explanation:** This GitLab CI configuration defines three stages: build, test, and deploy. The "test" stage runs Maven tests, ensuring that the deployment stage is only reached if all tests pass. ### 2.3. Standard: Feature Flags **Do This:** Use feature flags to enable or disable new features in production without deploying new code. Include tests that verify the behavior of the code with feature flags enabled and disabled. **Don't Do This:** Directly deploy untested and unflagged features to production. **Why:** Feature flags allow for controlled rollout and testing of new features, reducing the risk of impacting all users simultaneously. They also support A/B testing and canary releases. 
**Code Example (Java with Togglz):** """java import org.togglz.core.Feature; import org.togglz.core.annotation.EnabledByDefault; import org.togglz.core.annotation.Label; import org.togglz.core.annotation.ActivationStrategy; import org.togglz.core.activation.PercentageBasedActivationStrategy; public enum MyFeatures implements Feature { @EnabledByDefault @Label("New Feature") NEW_FEATURE, @Label("Experimental Feature with percentage rollout") @ActivationStrategy(id = PercentageBasedActivationStrategy.ID, parameters = { @org.togglz.core.annotation.Parameter(name = PercentageBasedActivationStrategy.PARAM_PERCENTAGE, value = "50") }) EXPERIMENTAL_FEATURE } """ """java import org.togglz.core.Togglz; import static org.junit.jupiter.api.Assertions.*; import org.junit.jupiter.api.Test; public class MyServiceTest { @Test public void testNewFeatureEnabled() { // Assuming Togglz is configured appropriately if (Togglz.check(MyFeatures.NEW_FEATURE)) { // Test the behavior when the feature is enabled assertTrue(true, "Feature should be enabled"); // Replace with real assertions } else { fail("Feature should be enabled by default for this test"); } } @Test public void testExperimentalFeatureRollout() { // Mock the Togglz context for controlled testing // This is a simplified example; real implementations will vary boolean featureEnabled = Math.random() < 0.5; // Simulate 50% rollout if (featureEnabled) { //Test behavior when the feature is enabled assertTrue(true, "Feature should be enabled"); } else { //Test behavior when the feature is disabled assertTrue(true, "Feature should be disabled"); } } } """ **Explanation:** This example uses Togglz. "MyFeatures" enum defines feature flags. JUnit tests verify behavior based on the flag's status. Note that enabling feature flags needs proper Togglz configuration including defining appropriate activation strategies beyond just "@EnabledByDefault". Tests should account for enabled and disabled states. The example is merely illustrative; you normally would not use "Math.random()" in a test but instead properly mock Togglz or use a test-specific Togglz configuration. ## 3. Production Considerations and TDD ### 3.1. Standard: Observability **Do This:** Implement comprehensive logging, metrics, and tracing to monitor application health and performance in production. Align tests to ensure these observability features are recording correct metrics. **Don't Do This:** Deploy code without adequate monitoring or rely solely on manual inspection of logs. **Why:** Observability enables quick detection and diagnosis of issues in production, reducing downtime and improving user experience. 
**Code Example (Spring Boot with Micrometer and Prometheus):** """java import io.micrometer.core.instrument.Counter; import io.micrometer.core.instrument.MeterRegistry; import org.springframework.stereotype.Service; @Service public class MyService { private final Counter myCounter; public MyService(MeterRegistry registry) { this.myCounter = Counter.builder("my_service.invocations") .description("Number of times my service has been invoked") .register(registry); } public void doSomething() { myCounter.increment(); // Business logic } } """ """java import org.springframework.boot.test.context.SpringBootTest; import org.springframework.beans.factory.annotation.Autowired; import io.micrometer.core.instrument.MeterRegistry; import io.micrometer.core.instrument.Counter; import static org.junit.jupiter.api.Assertions.*; import org.junit.jupiter.api.Test; @SpringBootTest public class MyServiceTest { @Autowired private MyService myService; @Autowired private MeterRegistry registry; @Test public void testDoSomethingIncrementsCounter() { Counter counter = registry.find("my_service.invocations").counter(); double initialCount = (counter != null) ? counter.count() : 0.0; myService.doSomething(); counter = registry.find("my_service.invocations").counter(); assertNotNull(counter, "Counter should not be null after invocation"); assertEquals(initialCount + 1, counter.count(), 0.001, "Counter should be incremented by 1"); } } """ **Explanation:** This example uses Micrometer to create a counter metric. The JUnit test verifies that the counter is incremented when "doSomething" is called. This ensures that the metric is functioning correctly. Configure Prometheus to scrape metrics endpoint for live monitoring. ### 3.2. Standard: Rollback Strategy **Do This:** Implement a clear and automated rollback strategy to quickly revert to a stable version in case of deployment failures. Include tests to verify the rollback process. **Don't Do This:** Rely on manual intervention or lack a defined rollback procedure. **Why:** Quick rollback capability minimizes the impact of faulty deployments and ensures business continuity. **Code Example (Deployment Script with Rollback Logic):** """bash #!/bin/bash # Deploy script with rollback DEPLOY_VERSION=$1 PREVIOUS_VERSION=$(cat /opt/app/current_version) echo "Deploying version: $DEPLOY_VERSION" # Deploy steps (simplified) # ... DEPLOY_RESULT=$? if [ $DEPLOY_RESULT -ne 0 ]; then echo "Deployment failed. Rolling back to previous version: $PREVIOUS_VERSION" # Rollback steps (simplified) # ... echo "Rollback complete." exit 1 else echo "Deployment successful. Updating current_version." echo $DEPLOY_VERSION > /opt/app/current_version exit 0 fi """ **Explanation:** This simplified script demonstrates a deployment process with rollback logic. It checks the deployment result and, if it fails, reverts to the previous version. Tests for this script would involve simulating deployment failures and verifying that the rollback steps are correctly executed. In a real-world scenario, deployment tools like Kubernetes and cloud-native infrastructure provide automated rollback strategies that are deeply integrated into the platform rather than relying on simple scripts. ### 3.3. Standard: Security Considerations **Do This:** Incorporate security checks into the CI/CD pipeline, such as static code analysis, vulnerability scanning, and dependency checks. Write security tests *before* writing code to ensure adherence to practices like authentication and authorization. 
**Don't Do This:** Neglect security testing or assume that security is solely the responsibility of a separate team. **Why:** Integrating security checks early in the development lifecycle prevents security vulnerabilities from reaching production. **Code Example (OWASP Dependency-Check Maven Plugin):** """xml <!-- pom.xml --> <build> <plugins> <plugin> <groupId>org.owasp</groupId> <artifactId>dependency-check-maven</artifactId> <version>8.0.0</version> <executions> <execution> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> </plugins> </build> """ **Explanation:** This Maven plugin checks project dependencies for known vulnerabilities during the build process. Integrating this into CI/CD ensures that vulnerable dependencies are detected before deployment. Also run static analysis tools like SonarQube. ## 4. Modern Approaches and Patterns ### 4.1. Standard: Infrastructure as Code (IaC) **Do This:** Manage infrastructure (servers, networks, databases, etc.) using code and automate its provisioning and configuration as part of the CI/CD pipeline. Tests should verify the state of infrastructure after it's provisioned based on IaC scripts. **Don't Do This:** Manually configure infrastructure or treat it as a snowflake environment. **Why:** IaC ensures consistent and repeatable infrastructure deployments, simplifies scaling, and enables version control of infrastructure configurations. **Code Example (Terraform):** """terraform # main.tf resource "aws_instance" "example" { ami = "ami-0c55b9c055d45c064" # Update with a valid AMI ID instance_type = "t2.micro" tags = { Name = "ExampleInstance" } } """ """python # test_main.py (using pytest and terratest) import pytest import terratest from terratest import aws @pytest.fixture(scope="session") def terraform_options(): terratest_options = terratest.Options(terraform_dir="./") return terratest_options def test_aws_instance_exists(terraform_options): terraform = terratest.Terraform(terraform_options) terraform.init() terraform.apply() instance_id = terraform.output("instance_id") assert aws.is_instance_running(instance_id, region="us-west-2") terraform.destroy() """ **Explanation:** The Terraform script defines an AWS instance. The Pytest code (using Terratest library which abstracts over Terraform CLI) verifies that the instance is running after Terraform apply and destroys the infrastructure after the test. The AMI ID should be updated to a valid one available in your AWS region. ### 4.2. Standard: Containerization and Orchestration **Do This:** Package applications into containers (Docker) and orchestrate their deployment and management using tools like Kubernetes. Use health checks within Kubernetes and other orchestration tools to facilitate self-healing. **Don't Do This:** Deploy applications directly onto virtual machines or rely on manual container management. **Why:** Containerization provides consistent environments across different stages, simplifies deployment, and improves resource utilization. Orchestration provides scalability, high availability, and automated management of containerized applications. 
**Code Example (Dockerfile):** """dockerfile FROM openjdk:17-slim WORKDIR /app COPY target/*.jar app.jar EXPOSE 8080 ENTRYPOINT ["java", "-jar", "app.jar"] """ **Code Example (Kubernetes Deployment):** """yaml # deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-app image: my-registry/my-app:latest ports: - containerPort: 8080 readinessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 10 periodSeconds: 5 """ **Explanation:** The Dockerfile packages a Java application into a container. The Kubernetes deployment defines how many replicas of the container should be running and configures a readiness probe to check the application's health. A readiness probe is an HTTP endpoint that the application exposes (e.g., "/health"). Kubernetes automatically restarts pods that fail the readiness probe. Write tests in your application for the health endpoint, which provides a health check. ### 4.3 Standard: Canary Deployments **Do This:** Incrementally roll out new versions of your application to a small subset of users before deploying to the entire user base. Use metrics collected during the canary phase as input to automated tests; regression tests often are not sufficient to capture the nuance of real-world traffic shifts. **Don't Do This:** Release major updates to all users at once. **Why:** Canary deployments minimize the risk of impacting all users with a faulty release; you can monitor performance on canary instances and can analyze real user behavior with the new code before a full deployment. **Example (Istio traffic management):** """yaml # VirtualService for canary deployment apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: my-app spec: hosts: - "my-app.example.com" gateways: - my-gateway http: - match: - headers: user-agent: regex: ".*Canary.*" # Match requests from canary users route: - destination: host: my-app subset: canary # Route traffic to canary version - route: - destination: host: my-app subset: v1 # Route rest of traffic to stable version """ **Explanation:** This Istio VirtualService configuration routes traffic to a canary version of the "my-app" service for users with a specific User-Agent header (simulating internal testing or other staged rollout). All other traffic routes to the stable "v1" version. Metrics monitoring during this phase allows verification and controlled progression of the deployment. Automate canary analysis with tools like Kayenta (Spinnaker) to statistically compare the canary deployment's metrics against the baseline. Ideally, these results will feed *new* tests confirming acceptable performance and stability. These standards, diligently applied, ensure that TDD practices extend beyond individual unit tests into build, integration, and production environments, resulting in more reliable and maintainable software.
# State Management Standards for TDD

This document outlines coding standards for managing application state, data flow, and reactivity within a Test-Driven Development (TDD) environment. The focus is on ensuring maintainability, performance, security, and testability of state management solutions, embracing modern approaches and patterns.

## 1. Core Principles of State Management in TDD

### 1.1. Unidirectional Data Flow

**Do This:** Embrace unidirectional data flow patterns like Flux, Redux, or their reactive counterparts (e.g., RxJS-based state management, Vuex). Data should flow in a single direction, making state changes predictable and traceable.

**Don't Do This:** Avoid two-way data binding or direct state mutations within components/services. These practices create implicit dependencies and make debugging and testing significantly harder.

**Why:** Unidirectional data flow simplifies testing because each state change is triggered by a specific action and results in a predictable new state. This makes it easier to write specific, isolated tests. Additionally, it prevents unexpected side effects, improving code reliability and decreasing debugging time.

**Code Example (Redux Style - TypeScript/JavaScript):**

"""typescript
// actions.ts
export const INCREMENT = 'INCREMENT';

interface IncrementAction {
  type: typeof INCREMENT;
}

export const increment = (): IncrementAction => ({
  type: INCREMENT,
});

export type AppActions = IncrementAction;

// reducer.ts
interface AppState {
  count: number;
}

const initialState: AppState = {
  count: 0,
};

export const appReducer = (state: AppState = initialState, action: AppActions): AppState => {
  switch (action.type) {
    case INCREMENT:
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
};

// component.tsx (Example using React Hooks)
import React, { useReducer } from 'react';
import { appReducer } from './reducer';
import { increment } from './actions'; // Note: AppActions lives in actions.ts, not reducer.ts

const CounterComponent: React.FC = () => {
  const [state, dispatch] = useReducer(appReducer, { count: 0 });

  const handleIncrement = () => {
    dispatch(increment());
  };

  return (
    <div>
      <p>Count: {state.count}</p>
      <button onClick={handleIncrement}>Increment</button>
    </div>
  );
};

export default CounterComponent;

// Test example (Jest/Enzyme or React Testing Library)
import { appReducer } from './reducer';
import { increment, AppActions } from './actions';

describe('appReducer', () => {
  it('should increment the count', () => {
    const initialState = { count: 0 };
    const action: AppActions = increment();
    const newState = appReducer(initialState, action);
    expect(newState.count).toBe(1);
  });

  it('should return the current state if action is unknown', () => {
    const initialState = { count: 0 };
    const action = { type: 'UNKNOWN' };
    const newState = appReducer(initialState, action as any); // Explicit cast to 'any' since 'action' isn't properly typed
    expect(newState).toBe(initialState);
  });
});
"""

### 1.2. Immutability

**Do This:** Treat state as immutable. Use techniques like "Object.assign({}, state, change)" or the spread operator ("{...state, ...change}") to create new state objects instead of directly modifying the existing ones. For complex data structures, consider leveraging immutable data libraries (e.g., Immutable.js).

**Don't Do This:** Directly modify state objects (e.g., "state.property = newValue").

**Why:** Immutability simplifies change detection, enables time-travel debugging, and makes testing easier.
It avoids side effects and unexpected behavior when multiple parts of the application share the same state. React (and many other frameworks) are heavily optimized for immutable state.

**Code Example (Immutability with Spread Operator):**

"""typescript
interface User {
  id: number;
  name: string;
  email: string;
}

interface UserState {
  users: User[];
}

const initialState: UserState = {
  users: [{ id: 1, name: 'John Doe', email: 'john.doe@example.com' }],
};

const updatedUser = { id: 1, name: 'Jane Doe', email: 'jane.doe@example.com' };

const updatedState: UserState = {
  ...initialState,
  users: initialState.users.map((user) => (user.id === updatedUser.id ? { ...user, ...updatedUser } : user)),
};

// Test Example
describe('UserState', () => {
  it('should update a user immutably', () => {
    const initialState: UserState = {
      users: [{ id: 1, name: 'John Doe', email: 'john.doe@example.com' }],
    };
    const updatedUser = { id: 1, name: 'Jane Doe', email: 'jane.doe@example.com' };

    const updatedState: UserState = {
      ...initialState,
      users: initialState.users.map((user) => (user.id === updatedUser.id ? { ...user, ...updatedUser } : user)),
    };

    expect(updatedState.users[0].name).toBe('Jane Doe');
    expect(initialState.users[0].name).toBe('John Doe'); // Ensure original state wasn't mutated
  });
});
"""

### 1.3. Single Source of Truth

**Do This:** Designate one place in your application as the single source of truth for your state. This could be a Redux store, a MobX observable, or a Vuex store.

**Don't Do This:** Duplicate state across multiple components or services. This leads to inconsistencies and makes it difficult to manage updates.

**Why:** A single source of truth ensures consistency and simplifies debugging. Changes to the state are centralized and easy to track. It also significantly simplifies testing, allowing you to focus on the state management logic without worrying about side effects.

**Code Example (Simple Shared State with Context - React):**

"""typescript
// StateContext.tsx
import React, { createContext, useState, useContext } from 'react';

interface AppState {
  theme: 'light' | 'dark';
}

interface StateContextProps {
  state: AppState;
  toggleTheme: () => void;
}

const StateContext = createContext<StateContextProps | undefined>(undefined);

export const StateProvider: React.FC<{ children: React.ReactNode }> = ({ children }) => {
  const [theme, setTheme] = useState<AppState['theme']>('light');

  const toggleTheme = () => {
    setTheme((prevTheme) => (prevTheme === 'light' ? 'dark' : 'light'));
  };
  const value: StateContextProps = {
    state: { theme },
    toggleTheme,
  };

  return <StateContext.Provider value={value}>{children}</StateContext.Provider>;
};

export const useStateContext = () => {
  const context = useContext(StateContext);
  if (!context) {
    throw new Error('useStateContext must be used within a StateProvider');
  }
  return context;
};

// Component using the context
// ThemeToggler.tsx
import React from 'react';
import { useStateContext } from './StateContext';

const ThemeToggler: React.FC = () => {
  const { state, toggleTheme } = useStateContext();

  return (
    <div>
      <p>Current Theme: {state.theme}</p>
      <button onClick={toggleTheme}>Toggle Theme</button>
    </div>
  );
};

export default ThemeToggler;

// Test Example (Testing the Context Provider and consumer)
import { render, screen, fireEvent } from '@testing-library/react';
import { StateProvider, useStateContext } from './StateContext';

describe('StateContext', () => {
  it('should provide initial state and allow updates', () => {
    const TestComponent = () => {
      const { state, toggleTheme } = useStateContext();
      return (
        <div>
          <p>Theme: {state.theme}</p>
          <button onClick={toggleTheme}>Toggle</button>
        </div>
      );
    };

    render(
      <StateProvider>
        <TestComponent />
      </StateProvider>
    );

    expect(screen.getByText('Theme: light')).toBeInTheDocument();
    fireEvent.click(screen.getByText('Toggle'));
    expect(screen.getByText('Theme: dark')).toBeInTheDocument();
  });

  it('should throw an error if used outside StateProvider', () => {
    const TestComponent = () => {
      useStateContext(); // Intentionally called outside the provider
      return null;
    };

    const consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(() => {}); // Suppress React's error message momentarily
    expect(() => render(<TestComponent />)).toThrowError('useStateContext must be used within a StateProvider');
    consoleErrorSpy.mockRestore();
  });
});
"""

### 1.4. Explicit Actions and Mutations

**Do This:** Use explicit actions to initiate state changes. In Redux, these are actions dispatched to the store. In Vuex, these are mutations committed to the store.

**Don't Do This:** Directly modify the state within components or services without going through defined actions or mutations.

**Why:** Explicit actions and mutations provide a clear audit trail of how the state changes over time. This is invaluable for debugging and understanding application behavior. Tests can be written to verify that specific actions trigger the correct state transitions.
**Code Example (Vuex/Redux similarities using actions):**

"""typescript
// Vuex store example - mutations trigger state changes
// store.ts
import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

interface State {
  count: number;
}

const store = new Vuex.Store<State>({
  state: {
    count: 0,
  },
  mutations: {
    increment(state: State) {
      state.count++;
    },
    decrement(state: State) {
      state.count--;
    },
  },
  actions: {
    increment({ commit }) {
      commit('increment'); // Expose the mutation as an action so components can use mapActions
    },
    incrementAsync({ commit }) {
      setTimeout(() => {
        commit('increment');
      }, 1000);
    },
  },
  getters: {
    getCount: (state: State) => state.count,
  },
});

export default store;

// Component using the action (e.g., increment)
// CounterComponent.vue
<template>
  <div>
    <p>Count: {{ count }}</p>
    <button @click="increment">Increment</button>
  </div>
</template>

<script>
import { mapGetters, mapActions } from 'vuex';

export default {
  computed: {
    ...mapGetters(['getCount']),
    count() {
      return this.getCount;
    },
  },
  methods: {
    ...mapActions(['increment']),
  },
};
</script>

// Testing Vuex actions and mutations
// store.spec.ts
import store from './store';

describe('Vuex Store', () => {
  beforeEach(() => {
    // Reset the shared store so each test starts from a known state
    store.replaceState({ count: 0 });
  });

  it('should increment the count', () => {
    store.commit('increment');
    expect(store.state.count).toBe(1);
  });

  it('should decrement the count', () => {
    store.commit('decrement');
    expect(store.state.count).toBe(-1);
  });

  it('should increment the count asynchronously', (done) => {
    store.dispatch('incrementAsync');
    // Wait for the timeout defined in the action and then verify.
    setTimeout(() => {
      expect(store.state.count).toBe(1); // Starts from 0 thanks to beforeEach
      done(); // Signal that the asynchronous test is complete
    }, 1100);
  });
});
"""

### 1.5. Separation of Concerns

**Do This:** Keep state management logic separate from component logic. Use hooks, selectors, or connected components to access state and dispatch actions.

**Don't Do This:** Embed state management logic directly within components. This mixes concerns and makes testing difficult.

**Why:** Separation of concerns makes components more reusable and easier to test in isolation. It leads to a cleaner codebase with better organization and maintainability. State logic can be tested independently from the UI, which increases confidence.
**Code Example (React Hooks with custom hook):**

"""typescript
// useCounter.ts (Custom Hook)
import { useState, useCallback } from 'react';

const useCounter = (initialValue: number = 0) => {
  const [count, setCount] = useState(initialValue);

  const increment = useCallback(() => {
    setCount((prevCount) => prevCount + 1);
  }, []);

  const decrement = useCallback(() => {
    setCount((prevCount) => prevCount - 1);
  }, []);

  return { count, increment, decrement };
};

export default useCounter;

// CounterComponent.tsx (Component using the hook)
import React from 'react';
import useCounter from './useCounter';

const CounterComponent: React.FC = () => {
  const { count, increment, decrement } = useCounter();

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={increment}>Increment</button>
      <button onClick={decrement}>Decrement</button>
    </div>
  );
};

export default CounterComponent;

// Test Examples (testing the hook in isolation)
import { renderHook, act } from '@testing-library/react-hooks';
import useCounter from './useCounter';

describe('useCounter', () => {
  it('should initialize the count to 0 by default', () => {
    const { result } = renderHook(() => useCounter());
    expect(result.current.count).toBe(0);
  });

  it('should initialize the count to the provided value', () => {
    const { result } = renderHook(() => useCounter(10));
    expect(result.current.count).toBe(10);
  });

  it('should increment the count', () => {
    const { result } = renderHook(() => useCounter());
    act(() => {
      result.current.increment();
    });
    expect(result.current.count).toBe(1);
  });

  it('should decrement the count', () => {
    const { result } = renderHook(() => useCounter());
    act(() => {
      result.current.decrement();
    });
    expect(result.current.count).toBe(-1);
  });
});
"""

## 2. Technology-Specific Considerations

### 2.1. React

* **Context API:** Use the Context API for simple, application-wide state management scenarios. It's built into React and requires no external libraries. Prefer more robust solutions like Redux or Zustand for more complex applications.
* **Redux:** Redux requires boilerplate, but it provides a predictable state container, useful for debugging complex applications. Tools like Redux Toolkit minimize the boilerplate.
* **Zustand:** A small, fast, and scalable barebones state-management solution using simplified flux principles (see the sketch at the end of this section).
* **Recoil:** Innovative state management library by Facebook focusing on granular state definition and efficient updates, especially for asynchronous data.

### 2.2. Angular

* **NgRx:** The Angular equivalent of Redux. Provides a reactive state management solution based on RxJS observables. Offers similar benefits of unidirectional data flow and immutability.
* **RxJS Observables with Services:** For simpler state management, leverage RxJS observables within Angular services. Components can subscribe to these observables to react to state changes. Avoid direct mutation, and use ".next()" on a Subject or BehaviorSubject to emit new immutable states.

### 2.3. Vue.js

* **Vuex:** Vue's official state management library. Similar to Redux but designed specifically for Vue.js. Enforces a strict unidirectional data flow pattern.
* **Provide/Inject:** Similar to React's Context API, "provide/inject" offers a way to share state across components without relying on props drilling, suitable for smaller to medium applications.
* **Pinia:** Relatively new library which supersedes Vuex. Very similar, with simpler syntax and full TypeScript support.
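The bullet points above are intentionally descriptive; as a concrete illustration of the Zustand bullet, here is a minimal sketch (assuming Zustand's v4 "create" API; the store name and shape are hypothetical):

"""typescript
// counterStore.ts - hypothetical Zustand store (v4 "create" API assumed)
import { create } from 'zustand';

interface CounterState {
  count: number;
  increment: () => void;
}

export const useCounterStore = create<CounterState>((set) => ({
  count: 0,
  increment: () => set((state) => ({ count: state.count + 1 })),
}));

// counterStore.spec.ts - the store is testable without rendering any component
import { useCounterStore } from './counterStore';

describe('useCounterStore', () => {
  it('increments the count', () => {
    useCounterStore.getState().increment();
    expect(useCounterStore.getState().count).toBe(1);
  });
});
"""

Because the store lives outside the component tree, tests can read and drive it through "getState()" directly, which keeps the state logic unit-testable in the TDD cycle.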
## 3. Testing Strategies for State Management

### 3.1. Unit Testing Reducers/Mutations

**Do This:** Write unit tests for reducers (Redux) or mutations (Vuex) to verify that they correctly transform the state based on different actions.

**Don't Do This:** Neglect unit testing reducers/mutations. They are the core of your state management logic.

"""typescript
// Reducer test example
import { appReducer } from './reducer';
import { increment, AppActions } from './actions';

describe('appReducer', () => {
  it('should increment the count', () => {
    const initialState = { count: 0 };
    const action: AppActions = increment();
    const newState = appReducer(initialState, action);
    expect(newState.count).toBe(1);
    expect(newState).not.toBe(initialState); // Ensure immutability
  });
});
"""

### 3.2. Testing Actions/Effects

**Do This:** Test actions (Redux) or effects (NgRx) to ensure they dispatch the correct sequence of actions, especially when dealing with asynchronous operations. Use mocking techniques to isolate the action/effect being tested.

"""typescript
// Redux thunk test example using redux-mock-store
import configureMockStore from 'redux-mock-store';
import thunk from 'redux-thunk';
import { fetchData } from './actions';
import * as api from './api'; // Mock API calls

const middlewares = [thunk];
const mockStore = configureMockStore(middlewares);

describe('async actions', () => {
  it('dispatches FETCH_DATA_SUCCESS after successful API call', () => {
    const mockData = [{ id: 1, name: 'Test' }];
    jest.spyOn(api, 'fetchData').mockResolvedValue(mockData); // Mock the API call

    const expectedActions = [
      { type: 'FETCH_DATA_REQUEST' },
      { type: 'FETCH_DATA_SUCCESS', payload: mockData },
    ];
    const store = mockStore({ data: [] });

    // Return the promise of the async action so Jest waits for it
    return store.dispatch(fetchData() as any).then(() => {
      expect(store.getActions()).toEqual(expectedActions);
    });
  });
});
"""

### 3.3. Integration Testing Components with State

**Do This:** Write integration tests to ensure that components correctly interact with the state management system. Mock the store or state provider to control the state and verify component behavior. Use UI testing libraries (e.g., React Testing Library, Cypress) to simulate user interactions.

"""typescript
// React Testing Library integration example
import { render, screen, fireEvent } from '@testing-library/react';
import { Provider } from 'react-redux';
import { createStore } from 'redux';
import CounterComponent from './CounterComponent';
import { appReducer } from './reducer';

const store = createStore(appReducer); // Real store with the real reducer

describe('CounterComponent integration', () => {
  it('should increment the count when the button is clicked', () => {
    render(
      <Provider store={store}>
        <CounterComponent />
      </Provider>
    );

    fireEvent.click(screen.getByText('Increment'));
    expect(screen.getByText('Count: 1')).toBeInTheDocument();
  });
});
"""

### 3.4. End-to-End (E2E) Testing

**Do This:** Use E2E testing frameworks like Cypress or Playwright to test the entire data flow from the UI through the state management system to the backend (if applicable).

**Don't Do This:** Rely solely on E2E tests. They are slow and expensive to maintain. Use them to test critical user flows and integration points.
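As a sketch of this layer, the following Playwright test exercises the counter flow end to end (the dev-server URL, button label, and rendered text are assumptions carried over from the counter examples above):

"""typescript
// counter.e2e.spec.ts - E2E sketch with Playwright
import { test, expect } from '@playwright/test';

test('increments the counter through the real UI and store', async ({ page }) => {
  await page.goto('http://localhost:3000'); // assumed local dev-server URL
  await page.getByRole('button', { name: 'Increment' }).click();
  await expect(page.getByText('Count: 1')).toBeVisible();
});
"""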
## 4. Common Anti-Patterns

* **Prop Drilling:** Passing props down through many layers of components. Use the Context API, Redux, or similar to avoid this.
* **Mutating State Directly:** Causes unpredictable side effects. Always create new state objects immutably.
* **Over-Reliance on Global State:** Global state can become a bottleneck. Use local component state where appropriate.
* **Ignoring Asynchronous Operations:** Failing to handle asynchronous operations correctly in actions/effects can lead to race conditions and incorrect state updates.
* **Complex Selectors without Memoization:** Selectors that perform expensive computations should be memoized to prevent unnecessary re-renders and performance bottlenecks. Memoization libraries such as "reselect" should be considered for complex applications.

## 5. Performance Optimization

* **Memoization:** Use memoization techniques (e.g., "React.memo", "useMemo", "reselect") to avoid unnecessary re-renders of components that depend on state.
* **Code Splitting:** Split your application into smaller chunks to reduce the initial load time. State management libraries often support code splitting.
* **Selective State Updates:** Optimize state updates to only trigger updates when necessary. For example, avoid dispatching actions that result in no state change.
* **Immutable Data Structures:** Using libraries like Immutable.js can improve performance by optimizing change detection and reducing memory usage. However, be mindful of the potential overhead of these libraries.

## 6. Security Considerations

* **Avoid Storing Sensitive Data in Global State:** Sensitive data (e.g., passwords, API keys) should not be stored in the client-side state. Use secure storage mechanisms like cookies or browser storage with appropriate encryption.
* **Sanitize User Input:** When updating state based on user input, always sanitize the input to prevent XSS vulnerabilities (see the sketch after this list).
* **Rate Limiting:** Implement rate limiting on actions that modify state to prevent abuse or denial-of-service attacks.
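A minimal sketch of the sanitization point, assuming a hand-rolled escaping helper and a hypothetical "SET_COMMENT" action (in practice a vetted library such as DOMPurify is preferable):

"""typescript
// Escape markup-significant characters before the value enters the store
const escapeHtml = (input: string): string =>
  input
    .replace(/&/g, '&amp;') // must run first so entities are not double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');

// Hypothetical action creator that never lets raw markup into application state
export const setComment = (raw: string) => ({
  type: 'SET_COMMENT' as const,
  payload: escapeHtml(raw),
});

// Test sketch:
// expect(setComment('<script>alert(1)</script>').payload).not.toContain('<script>');
"""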
By adhering to these state management standards, development teams can build robust, maintainable, and testable TDD applications that are both performant and secure. Continuous review and refinement of these standards based on project needs and evolving technologies are highly recommended.

# Security Best Practices Standards for TDD

This document outlines the security best practices to be followed when developing software using Test-Driven Development (TDD). These standards aim to minimize vulnerabilities, promote secure coding patterns, and ensure application resilience. Adhering to these guidelines throughout the TDD cycle, from writing the first test to refactoring, is crucial for building secure and reliable software.

## 1. Secure Development Lifecycle with TDD

### 1.1 TDD and Security Integration

**Do This:**

* Integrate security considerations into every phase of the TDD cycle.
* Write security-focused tests early in the process to ensure application behavior aligns with security requirements.
* Use threat modeling to identify potential vulnerabilities and create appropriate tests.

**Don't Do This:**

* Treat security as an afterthought, addressing it only after development is complete.
* Ignore or postpone security-specific tests in favor of functional tests.
* Assume security testing is only the responsibility of a specialized team.

**Why:** Shift-left security improves the overall security posture by identifying and mitigating vulnerabilities early in the development process. Integrating security into the initial test design ensures inherent flaws are caught before they reach production.

### 1.2 Security Requirements Elicitation

**Do This:**

* Collaborate with security experts to define clear and measurable security requirements.
* Translate these requirements into specific, testable scenarios.
* Document security requirements using a standardized format (e.g., user stories with security-specific acceptance criteria).

**Don't Do This:**

* Rely on vague or ambiguous security goals.
* Neglect to document security requirements alongside functional requirements.
* Assume all developers have sufficient security expertise to determine requirements.

**Why:** Clearly defined security requirements provide a solid foundation for building secure software. They ensure test cases effectively address potential vulnerabilities and provide clear instructions for developers.

## 2. Threat Modeling and Security Tests

### 2.1 Identifying Potential Threats

**Do This:**

* Conduct regular threat modeling sessions using techniques like STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) and attack trees.
* Involve diverse perspectives, including developers, testers, security architects, and operations personnel.
* Focus on identifying attack vectors, potential vulnerabilities, and impact on the system.

**Don't Do This:**

* Skip threat modeling or perform it sporadically.
* Limit threat modeling to a checklist exercise without in-depth analysis.
* Fail to update threat models when the code changes.

**Why:** Threat modeling identifies potential security vulnerabilities before they are introduced into the codebase. This proactive approach helps prioritize and focus security testing efforts.

### 2.2 Writing Security Test Cases

**Do This:**

* Translate threat model outputs into specific security test cases.
* Focus tests on known attack vectors and potential vulnerabilities identified during threat modeling.
* Prioritize tests based on the risk associated with each vulnerability (likelihood and impact).

**Don't Do This:**

* Rely solely on generic security tests that don't address the specific threats identified in the threat model.
* Ignore low-likelihood but high-impact vulnerabilities.
* Fail to update test cases when new threats are identified.

**Why:** Security tests confirm that the application's behavior aligns with security requirements and mitigates identified threats. Targeted tests based on threat modeling are more effective than a general "scan and hope" approach.

**Example:** Assume you've identified that SQL injection is a potential threat in a web application that takes user input to query a database.

"""java
// Test to prevent SQL Injection
import org.junit.Test;
import static org.junit.Assert.*;

public class SQLInjectionTest {

    @Test
    public void testPreventSQLInjection() {
        DatabaseQueryExecutor executor = new DatabaseQueryExecutor();

        // Attempt to inject malicious SQL code
        String userInput = "'; DROP TABLE users; --";
        String safeQuery = executor.prepareQuery("SELECT * FROM products WHERE name = '?'", userInput);

        // Assert that the injected quote is escaped, so the payload stays inside the string literal
        assertTrue(safeQuery.contains("name = '''; DROP TABLE users; --'"));
    }
}

class DatabaseQueryExecutor {

    public String prepareQuery(String baseQuery, String userInput) {
        // Simulate a prepared statement to prevent SQL injection.
        // In a real implementation, use JDBC PreparedStatement or similar.
        return baseQuery.replace("?", escapeUserInput(userInput));
    }

    private String escapeUserInput(String userInput) {
        // Basic input sanitization (double single quotes)
        return userInput.replace("'", "''");
    }
}
"""

**Explanation:** This code simulates an SQL injection attempt and verifies that the user-supplied quote is escaped, so the payload cannot terminate the string literal. It is deliberately basic; real implementations should use parameterized queries (e.g., JDBC "PreparedStatement") rather than hand-rolled escaping.

## 3. Secure Coding Practices

### 3.1 Input Validation and Sanitization

**Do This:**

* Validate all inputs, including those from users, external systems, and configuration files.
* Use strict validation rules and regular expressions to ensure data conforms to expected formats.
* Sanitize all inputs to remove potentially harmful characters or sequences before processing.
* Encode data when displaying it to prevent cross-site scripting (XSS) attacks.

**Don't Do This:**

* Trust input data without validation.
* Rely on client-side validation only.
* Use blacklist-based validation (attempting to block specific harmful inputs) instead of whitelist-based validation (allowing only known good inputs).

**Why:** Input validation and sanitization prevent numerous vulnerabilities, including SQL injection, cross-site scripting (XSS), and command injection. It is a fundamental security practice.
**Example:**

"""java
// Input Validation and Sanitization Example
import org.junit.Test;
import static org.junit.Assert.*;

public class InputValidationTest {

    @Test
    public void testValidEmail() {
        assertTrue(isValidEmail("test@example.com"));
    }

    @Test
    public void testInvalidEmail() {
        assertFalse(isValidEmail("test@example"));
        assertFalse(isValidEmail("test.example.com"));
    }

    private boolean isValidEmail(String email) {
        String emailRegex = "^[a-zA-Z0-9_+&*-]+(?:\\.[a-zA-Z0-9_+&*-]+)*@(?:[a-zA-Z0-9-]+\\.)+[a-zA-Z]{2,7}$";
        return email.matches(emailRegex);
    }

    @Test
    public void testSanitizeInput() {
        String userInput = "<script>alert('XSS');</script>";
        String sanitizedInput = sanitize(userInput);
        assertEquals("&lt;script&gt;alert(&#x27;XSS&#x27;);&lt;/script&gt;", sanitizedInput);
    }

    private String sanitize(String input) {
        // Replace potentially harmful characters with their HTML entities.
        // Ampersands are escaped first so existing entities are not double-escaped.
        return input.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;")
                    .replace("'", "&#x27;");
    }
}
"""

**Explanation:** The example validates email formats with a regular expression and sanitizes input by HTML-encoding it, defending against cross-site scripting.

### 3.2 Authentication and Authorization

**Do This:**

* Use strong and well-established authentication protocols (e.g., OAuth 2.0, OpenID Connect).
* Implement multi-factor authentication (MFA) where possible.
* Store passwords securely using bcrypt or Argon2 hashing algorithms with a unique salt per password.
* Enforce the principle of least privilege (PoLP) by granting users only the permissions they need to perform their tasks.

**Don't Do This:**

* Use weak or custom authentication protocols.
* Store passwords in plain text or using weak hashing algorithms like MD5 or SHA1.
* Grant excessive privileges to users.
* Bypass authentication or authorization checks in testing or development environments.

**Why:** Authentication and authorization protect sensitive data and prevent unauthorized access to system resources. Strong authentication protects against identity theft, while authorization enforces appropriate access.

**Example:** Using Spring Security for authentication and authorization with a TDD approach.

"""java
// Spring Security Authentication Test
import org.junit.Test;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import static org.junit.Assert.*;

public class PasswordHashingTest {

    @Test
    public void testPasswordHashing() {
        String plainTextPassword = "MySecretPassword123";
        BCryptPasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
        String hashedPassword = passwordEncoder.encode(plainTextPassword);

        // Ensure the hashed password is not the same as the plain text password
        assertNotEquals(plainTextPassword, hashedPassword);
        assertTrue(passwordEncoder.matches(plainTextPassword, hashedPassword));
    }
}
"""

**Explanation:** This JUnit test verifies that passwords are hashed correctly using bcrypt and that the hash can be matched against the original password.

### 3.3 Session Management

**Do This:**

* Use secure session identifiers (e.g., UUIDs) to prevent session hijacking.
* Implement session timeouts to automatically invalidate inactive sessions.
* Regenerate session IDs after authentication or privilege escalation.
* Store session data securely on the server side, not in cookies.

**Don't Do This:**

* Use predictable session IDs.
* Store sensitive data in session cookies.
* Expose session IDs in URLs.

**Why:** Secure session management prevents attackers from impersonating legitimate users and gaining unauthorized access to sensitive data or functionality.
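Section 3.3 has no accompanying example, so here is a minimal TypeScript sketch of ID regeneration (the "SessionStore" class is hypothetical and in-memory, shown only to make the session-fixation point testable; it relies on Node's built-in "crypto.randomUUID"):

"""typescript
// sessionStore.ts - hypothetical in-memory session store
import { randomUUID } from 'crypto';

interface SessionData {
  userId?: string;
}

export class SessionStore {
  private sessions = new Map<string, SessionData>();

  create(): string {
    const id = randomUUID(); // cryptographically random, unguessable identifier
    this.sessions.set(id, {});
    return id;
  }

  // Regenerate the session ID on login to prevent session fixation
  regenerateOnLogin(oldId: string, userId: string): string {
    const data = this.sessions.get(oldId) ?? {};
    this.sessions.delete(oldId); // the old ID must stop working immediately
    const newId = randomUUID();
    this.sessions.set(newId, { ...data, userId });
    return newId;
  }
}

// Test sketch (Jest-style):
// const store = new SessionStore();
// const anon = store.create();
// const authed = store.regenerateOnLogin(anon, 'user-1');
// expect(authed).not.toBe(anon);
"""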
### 3.4 Error Handling and Logging

**Do This:**

* Implement robust error handling to gracefully manage exceptions and prevent information leakage.
* Log all security-related events, including authentication attempts, authorization failures, and data access violations.
* Use centralized logging to facilitate security monitoring and incident response.
* Anonymize sensitive data in logs.

**Don't Do This:**

* Display detailed error messages to users, which can reveal sensitive information.
* Log sensitive data (e.g., passwords, credit card numbers) in plain text.
* Fail to monitor logs for suspicious activity.

**Why:** Proper error handling prevents information leakage and improves the application's resilience. Comprehensive logging enables security monitoring, incident response, and forensic analysis.

**Example:** Centralized logging with TDD:

"""java
// Centralized Logging Test
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static org.junit.Assert.*;

public class LoggingTest {

    private static final Logger logger = LoggerFactory.getLogger(LoggingTest.class);

    @Test
    public void testLogAuthenticationAttempt() {
        // Simulate an authentication attempt
        String username = "testUser";
        boolean authenticationSuccess = false;

        // Log the authentication attempt
        if (authenticationSuccess) {
            logger.info("User {} successfully authenticated.", username);
        } else {
            logger.warn("Failed authentication attempt for user {}.", username);
        }

        // In a real-world scenario, you would assert that the log was written.
        // This can be achieved by mocking the Logger or using a logging framework that allows querying logs.
        assertTrue(true); // Placeholder assertion
    }
}
"""

**Explanation:** This example demonstrates logging successful and failed authentication attempts through a logging framework (SLF4J).

### 3.5 Cryptographic Practices

**Do This:**

* Use strong cryptographic algorithms with appropriate key lengths.
* Generate cryptographic keys using a cryptographically secure random number generator (e.g., "java.security.SecureRandom" in Java).
* Protect cryptographic keys by storing them securely in a hardware security module (HSM) or encrypted configuration file.
* Implement key rotation to regularly change cryptographic keys.
* Validate cryptographic implementations with known test vectors.

**Don't Do This:**

* Use weak or obsolete cryptographic algorithms (e.g., DES, MD5, SHA1).
* Hardcode cryptographic keys in the code.
* Store cryptographic keys in plain text.
* Implement crypto functions from scratch.

**Why:** Strong cryptography protects data confidentiality and integrity. Weak or improperly implemented cryptography can lead to data breaches or tampering.
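As a sketch of these practices, the following TypeScript example generates a key with Node's CSPRNG and round-trips data through AES-256-GCM (a minimal illustration of algorithm and key-length choices, not a production key-management scheme):

"""typescript
// crypto-roundtrip.ts - key generation and authenticated encryption sketch (Node >= 15 assumed)
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';

const key = randomBytes(32); // 256-bit key from a cryptographically secure RNG
const iv = randomBytes(12);  // fresh, unique IV for each encryption (GCM mode)

const cipher = createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('sensitive data', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag(); // integrity/authenticity tag

const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');

// Test sketch: expect(plaintext).toBe('sensitive data');
"""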
## 4. Dependency Management and Component Security

### 4.1 Secure Dependencies

**Do This:**

* Use a dependency management tool (e.g., Maven, Gradle, npm) to manage project dependencies.
* Keep dependencies up-to-date to patch known vulnerabilities.
* Use a vulnerability scanner (e.g., OWASP Dependency-Check, Snyk) to identify vulnerable dependencies.
* Use a Software Bill of Materials (SBOM) to track all dependencies.

**Don't Do This:**

* Use outdated or unsupported dependencies.
* Download dependencies from untrusted sources.
* Ignore warnings from vulnerability scanners.

**Why:** Vulnerable dependencies are a primary attack vector for modern applications. Keeping dependencies up-to-date and scanning for vulnerabilities are essential for mitigating this risk.

**Example:** Using OWASP Dependency-Check in a Maven project:

1. **Add the Dependency-Check Maven plugin to your "pom.xml":**

"""xml
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>6.5.0</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
"""

2. **Run the analysis:**

"""bash
mvn dependency-check:check
"""

**Explanation:** This configuration scans direct and transitive dependencies for known vulnerabilities. The plugin can fail the build when vulnerabilities are found (e.g., when a "failBuildOnCVSS" threshold is configured).

### 4.2 Third-Party Components

**Do This:**

* Evaluate the security posture of third-party components before integrating them into your application.
* Use only components from trusted sources with a proven security track record.
* Regularly audit the security of third-party components.

**Don't Do This:**

* Use components from unknown or untrusted sources.
* Assume that third-party components are automatically secure.
* Trust third-party code with unlimited access to your system resources.

**Why:** Third-party components can introduce security vulnerabilities if they are not properly vetted. Rigorous evaluation and monitoring are important for mitigating this risk.

## 5. Automated Security Testing

### 5.1 Static Analysis Security Testing (SAST)

**Do This:**

* Integrate SAST tools (e.g., SonarQube, Checkmarx) into the CI/CD pipeline.
* Configure SAST tools to identify security vulnerabilities, code smells, and adherence to coding standards.
* Address SAST findings promptly and fix any identified issues.

**Don't Do This:**

* Ignore SAST findings.
* Rely solely on SAST tools without manual code review.
* Use SAST tools as a replacement for secure coding practices.

**Why:** SAST tools can automatically identify vulnerabilities in the source code, enabling early detection and remediation.

### 5.2 Dynamic Analysis Security Testing (DAST)

**Do This:**

* Integrate DAST tools (e.g., OWASP ZAP, Burp Suite) into the CI/CD pipeline.
* Configure DAST tools to automatically scan the running application for web application vulnerabilities (e.g., XSS, SQL injection, CSRF).
* Address DAST findings promptly and fix any identified issues.

**Don't Do This:**

* Rely solely on DAST tools without manual penetration testing.
* Run DAST tools in production environments without proper authorization.
* Use DAST tools as a replacement for secure coding practices.

**Why:** DAST tools can identify vulnerabilities in a running application by simulating real-world attacks.

### 5.3 Interactive Application Security Testing (IAST)

**Do This:**

* Integrate IAST tools into the test environment alongside functional tests.
* Configure IAST tools to monitor application behavior during testing and identify vulnerabilities.
* Address IAST findings promptly and fix any identified issues.

**Don't Do This:**

* Rely solely on IAST tools without manual code review or penetration testing.
* Ignore IAST findings.
* Use IAST tools as a replacement for secure coding practices.

**Why:** IAST tools combine the benefits of SAST and DAST by providing real-time vulnerability analysis during application testing.

## 6. Continuous Monitoring and Incident Response

### 6.1 Security Monitoring

**Do This:**

* Implement centralized log management and security information and event management (SIEM) systems.
* Monitor logs for suspicious activity, such as failed authentication attempts, unauthorized access, and data exfiltration.
* Implement intrusion detection and prevention systems (IDS/IPS) to detect and block malicious traffic.
**Don't Do This:**

* Ignore security alerts.
* Fail to investigate suspicious activity promptly.
* Rely solely on automated monitoring without human oversight.

**Why:** Continuous monitoring enables early detection of security incidents and proactive response to potential threats.

### 6.2 Incident Response

**Do This:**

* Develop an incident response plan to handle security incidents effectively.
* Regularly test the incident response plan through simulations and tabletop exercises.
* Establish clear roles and responsibilities for incident response.
* Document all security incidents and lessons learned.

**Don't Do This:**

* Panic or take ad-hoc actions during a security incident.
* Fail to contain the incident and prevent further damage.
* Neglect to learn from past security incidents.

**Why:** A well-defined incident response plan enables swift and effective action to minimize the impact of security incidents.

## 7. Specific Code Examples for Secure TDD Practice

Here are more elaborate code examples to concretize security best practices in a TDD environment:

### 7.1. Preventing Cross-Site Request Forgery (CSRF)

"""java
// CSRF Prevention Test
import org.junit.Test;
import static org.junit.Assert.*;
import org.springframework.mock.web.MockHttpServletRequest;
import javax.servlet.http.HttpServletRequest;

public class CSRFTest {

    @Test
    public void testCSRFTokenIsPresent() {
        HttpServletRequest request = new MockHttpServletRequest();
        CSRFHandler handler = new CSRFHandler();

        String token = handler.generateCSRFToken(request);

        assertNotNull(token);
        assertFalse(token.isEmpty());
    }

    private static class CSRFHandler {

        public String generateCSRFToken(HttpServletRequest request) {
            // Mock CSRF token generation logic
            return "secureCSRFToken";
        }

        public boolean isValidCSRFToken(HttpServletRequest request, String token) {
            // Mock CSRF token validation logic
            return "secureCSRFToken".equals(token);
        }
    }

    @Test
    public void testValidateCSRFToken() {
        MockHttpServletRequest request = new MockHttpServletRequest();
        String expectedToken = "secureCSRFToken";
        request.setParameter("csrfToken", expectedToken);

        CSRFHandler handler = new CSRFHandler();
        assertTrue(handler.isValidCSRFToken(request, request.getParameter("csrfToken")));
    }
}
"""

**Explanation:** The "CSRFHandler.generateCSRFToken" method returns a token string, and the first test checks that the value is neither null nor empty. The second test mocks CSRF token validation to verify that a request carrying the expected token is accepted.

By adhering to these security coding standards within the TDD framework, development teams can produce more resilient and safe software, reducing risks and maintaining user trust.
# Component Design Standards for TDD

This document outlines the coding standards for component design within a Test-Driven Development (TDD) environment. It aims to guide developers in creating reusable, maintainable, and testable components while adhering to the principles of TDD. These standards are designed to be used directly by developers and also to provide context for AI coding assistants.

## 1. Introduction to Component Design in TDD

In TDD, component design is not an afterthought but rather an integral part of the development process. The tests drive the design, ensuring that components are focused, loosely coupled, and easy to test. This section sets the stage for the detailed standards that follow.

### 1.1. Importance of Component Design in TDD

Well-designed components are crucial for:

* **Testability:** Loosely coupled components enable focused unit tests, simplifying the verification of individual component behavior.
* **Maintainability:** Clear component boundaries reduce the complexity of code changes, making it easier to evolve the system over time. Refactoring becomes safer with comprehensive test coverage.
* **Reusability:** Properly designed components can be easily reused in different parts of the application or in other projects, saving development time and effort.
* **Performance:** Although not the primary driver, thoughtful component design helps produce code that performs well when the whole system is taken into account.
* **Clarity:** Well-defined components, tests, and documentation improve clarity for any new developers on the team.

### 1.2. TDD Cycle and Component Design

The TDD cycle (Red-Green-Refactor) significantly impacts component design.

* **Red (Write a failing test):** This compels us to consider the *interface* of the component before its *implementation*. What does the component *do*, not *how* does it do it?
* **Green (Make the test pass):** Focuses on implementing the minimal amount of code required to fulfill the test's requirements. Avoid over-engineering.
* **Refactor (Improve the code):** This is where component design shines. It allows us to improve the structure, remove duplication, and enhance readability while retaining the demonstrated functionality and behaviour of the component.

## 2. Core Principles for Component Design in TDD

These principles guide the creation of well-structured components that are easy to test, maintain, and reuse.

### 2.1. Single Responsibility Principle (SRP)

* **Do This:** Ensure each component has *one*, and only one, reason to change.
* **Don't Do This:** Create components that handle multiple unrelated tasks. These are harder to test and maintain.
* **Why:** SRP promotes modularity and reduces the likelihood that a change in one part of the system will affect unrelated parts. This enhances testability and maintainability.

"""python
# Good: Separate classes for order processing and email notification
class OrderProcessor:
    def process_order(self, order):
        # Process the order details
        return True


class EmailNotifier:
    def send_confirmation(self, order):
        # Send order confirmation email
        return True


# Bad: Single class responsible for both order processing and email notification
class OrderManager:  # Violates SRP
    def process_order(self, order):
        # Process the order details
        self.send_confirmation(order)  # Contains email logic

    def send_confirmation(self, order):
        # Send order confirmation email
        return True
"""
### 2.2. Open/Closed Principle (OCP)

* **Do This:** Design components that are open for extension but closed for modification. Use abstractions (interfaces, abstract classes) to allow new functionality to be added without altering existing code.
* **Don't Do This:** Modify existing component code directly to add new functionality. This can introduce bugs and break existing functionality.
* **Why:** OCP reduces the risk of introducing bugs when adding new features. It encourages the use of polymorphism and dependency injection, making components more flexible and reusable.

"""python
# Good: Shape interface and concrete implementations
from abc import ABC, abstractmethod


class Shape(ABC):  # Abstraction
    @abstractmethod
    def area(self):
        pass


class Rectangle(Shape):  # Extension without modification
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


class Circle(Shape):  # Extension without modification
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius * self.radius


# Bad: Modifying existing code to add new shape logic
class AreaCalculator:  # Violates OCP
    def calculate_area(self, shape_type, dimensions):
        if shape_type == "rectangle":
            # Rectangle area calculation
            pass
        elif shape_type == "circle":
            # Circle area calculation (adding a new shape requires modifying this function)
            pass
"""

### 2.3. Liskov Substitution Principle (LSP)

* **Do This:** Ensure that subtypes can be used interchangeably with their base types without altering the correctness of the program.
* **Don't Do This:** Create subtypes that violate the behavior expected of their base types.
* **Why:** LSP ensures that inheritance is used correctly, leading to more robust and predictable code. It simplifies testing and reduces the risk of unexpected behavior.

"""python
# Good: Subclass adheres to the contract of the superclass
class Bird:
    def fly(self):
        return "Flying"


class Sparrow(Bird):
    def fly(self):  # Maintains the same behaviour as Bird
        return "Sparrow flying"


# Bad: Subclass violates the contract of the superclass (the classic Square/Rectangle example)
class Rectangle:
    def __init__(self, width, height):
        self._width = width
        self._height = height

    def set_width(self, width):
        self._width = width

    def set_height(self, height):
        self._height = height

    def get_area(self):
        return self._width * self._height


class Square(Rectangle):  # Violates LSP: setting one side silently changes the other,
    def set_width(self, width):  # breaking callers that treat a Square as a Rectangle
        self._width = width
        self._height = width

    def set_height(self, height):
        self._width = height
        self._height = height
"""

### 2.4. Interface Segregation Principle (ISP)

* **Do This:** Design interfaces that are specific to the needs of the clients that use them. Avoid creating large, monolithic interfaces that force clients to implement methods they don't need.
* **Don't Do This:** Create "fat" interfaces that have many methods, some of which may be irrelevant to certain clients.
* **Why:** ISP reduces coupling and improves cohesion. It allows clients to depend only on the methods they actually use, making the system more flexible and maintainable.
"""python # Good: Separate interfaces for different client needs from abc import ABC, abstractmethod class Worker(ABC): @abstractmethod def work(self): pass class Eater(ABC): @abstractmethod def eat(self): pass class Human(Worker, Eater): def work(self): return "Human working" def eat(self): return "Human eating" # Bad: Single interface forces clients to implement unnecessary methods class IWorker(ABC): # Fat interface @abstractmethod def work(self): pass @abstractmethod def eat(self): pass class Robot(IWorker): # Forced to implement eat even though it isn't required def work(self): return "Robot working" def eat(self): raise NotImplementedError("Robots don't eat") """ ### 2.5. Dependency Inversion Principle (DIP) * **Do This:** Depend on abstractions, not concretions. High-level modules should not depend on low-level modules. Both should depend on abstractions. * **Don't Do This:** High-level modules depending directly on low-level modules. * **Why:** DIP reduces coupling and improves modularity. It makes it easier to change the implementation of a component without affecting its clients. """python # Good: High-level module depends on abstraction from abc import ABC, abstractmethod class Switchable(ABC): # Abstraction @abstractmethod def turn_on(self): pass @abstractmethod def turn_off(self): pass class LightBulb(Switchable): def turn_on(self): return "LightBulb: on..." def turn_off(self): return "LightBulb: off..." class ElectricPowerSwitch: # High-level module does not depend on LightBulb. Abstraction. def __init__(self, client: Switchable): self.client = client self.on = False def press(self): if self.on: self.client.turn_off() self.on = False else: self.client.turn_on() self.on = True # Bad: High-level module depends on concretion class LightBulb_Bad: def turn_on(self): return "LightBulb: on..." def turn_off(self): return "LightBulb: off..." class ElectricPowerSwitch_Bad: # High-level module (ElectricPowerSwitch) now directly coupled to the low-level module (LightBulb) def __init__(self, client: LightBulb_Bad): self.client = client self.on = False def press(self): if self.on: self.client.turn_off() self.on = False else: self.client.turn_on() self.on = True """ ## 3. TDD-Specific Component Design Practices These practices are specifically geared toward component design within a TDD workflow. ### 3.1. Starting with the Test (Red Phase) * **Do This:** Write a test that defines the *expected behavior* of the component *before* writing any implementation code. This forces you to think about the component's interface and responsibilities. * **Don't Do This:** Start coding the component's implementation first, and then write tests afterward. This often leads to poorly designed, difficult-to-test components. * **Why:** Starting with the test ensures that the component is designed to be testable and that it meets the specific requirements of the application. It also aids in defining the API of the component with minimal bias toward implementation details. """python # Example: Test for a simple calculator component import unittest class TestCalculator(unittest.TestCase): def test_add(self): calculator = Calculator() self.assertEqual(calculator.add(2, 3), 5) # Write this test FIRST class Calculator: #Implemented AFTER the test is written. Forces you to write tests before methods in the class. The focus is on the expected function. def add(self, x, y): return x + y """ ### 3.2. Test-Driven APIs * **Do This:** Use tests to drive the development of your component's API. 
### 3.2. Test-Driven APIs

* **Do This:** Use tests to drive the development of your component's API. Each test should focus on a specific aspect of the API, such as input validation, error handling, or return values.
* **Don't Do This:** Design the API based on intuition or guesswork, or lock the full system design around an API before tests have validated it. This can lead to APIs that are difficult to use or that do not meet the needs of the application.
* **Why:** Test-driven APIs are more likely to be well-designed, easy to use, and meet the specific requirements of the application. They are also fully documented by the tests themselves.

"""python
# Example: Test-driven API for a user authentication component
import unittest


class TestUserAuthentication(unittest.TestCase):
    def test_authenticate_valid_user(self):
        auth = UserAuthentication()
        self.assertTrue(auth.authenticate("valid_user", "password"))

    def test_authenticate_invalid_user(self):
        auth = UserAuthentication()
        self.assertFalse(auth.authenticate("invalid_user", "password"))


class UserAuthentication:
    # The tests drive the creation of the authentication function
    def authenticate(self, username, password):
        if username == "valid_user" and password == "password":
            return True
        else:
            return False
"""

### 3.3. Mocking and Dependency Injection

* **Do This:** Use mocking frameworks to isolate components under test. Inject dependencies into components to make them more testable and flexible. Use existing frameworks where possible.
* **Don't Do This:** Create tight coupling between components, making it difficult to test them in isolation.
* **Why:** Mocking and dependency injection allow you to test components in isolation, without relying on external dependencies. This leads to more reliable and faster tests. It also promotes loose coupling, which is a key principle of good component design.

"""python
# Example: Using a mock to test a component that depends on a database
import unittest
from unittest.mock import Mock


class UserService:
    def __init__(self, db_service):
        self.db_service = db_service

    def get_user(self, user_id):
        return self.db_service.get_user(user_id)


class TestUserService(unittest.TestCase):
    def test_get_user(self):
        mock_db = Mock()  # DB mock to isolate the component for testing
        mock_db.get_user.return_value = {"id": 1, "name": "Test User"}

        user_service = UserService(mock_db)
        user = user_service.get_user(1)

        self.assertEqual(user["name"], "Test User")
"""

### 3.4. Refactoring for Component Design

* **Do This:** Use the refactor phase of TDD to improve the design of your components. Look for opportunities to apply the SOLID principles, reduce duplication, and improve readability. Consider DRY (Don't Repeat Yourself).
* **Don't Do This:** Neglect the refactor phase. This leads to technical debt and makes the code harder to maintain over time.
* **Why:** Refactoring is an essential part of TDD. It allows you to continuously improve the design of your components, making them more maintainable, reusable, and testable. The preceding Green phase ensures you refactor with confidence and minimal risk, as your existing tests should all still pass if refactoring is done correctly.

"""python
# Example: Refactoring to extract a common method
class OrderProcessor:  # Refactored to remove duplicate code
    def process_order(self, order):
        # ...
        self._send_notification(order, "Order processed")

    def cancel_order(self, order):
        # ...
        self._send_notification(order, "Order cancelled")

    def _send_notification(self, order, message):
        # Common notification method (DRY)
        pass
"""
## 4. Modern Approaches and Patterns

Incorporate these patterns and approaches to build robust and scalable systems.

### 4.1. Microservices Architecture

* **Do This:** Design components as independent, deployable services. Use APIs for communication between microservices.
* **Don't Do This:** Build monolithic applications where components are tightly coupled and deployed as a single unit.
* **Why:** Microservices enable independent scaling and deployment, fault isolation, and technology diversity.

### 4.2. Event-Driven Architecture

* **Do This:** Design components to react to events. Use message queues or event buses for asynchronous communication.
* **Don't Do This:** Rely on synchronous, request-response patterns for all interactions between components.
* **Why:** Event-driven architectures make the application more responsive, scalable, and resilient.

### 4.3. Domain-Driven Design (DDD)

* **Do This:** Align component design with the business domain. Model components around domain entities and concepts. Use ubiquitous language.
* **Don't Do This:** Design components based solely on technical considerations.
* **Why:** DDD helps ensure that the software accurately reflects the business requirements, making it easier to understand and maintain.

### 4.4. CQRS (Command Query Responsibility Segregation)

* **Do This:** Separate the read and write operations of a data store or component. Use separate models for commands (write) and queries (read).
* **Don't Do This:** Use the same model for both reading and writing data.
* **Why:** CQRS allows you to optimize the read and write paths independently, improving performance and scalability.

### 4.5. Functional Programming

* **Do This:** Design components as pure functions that have no side effects. Use immutable data structures.
* **Don't Do This:** Create components that rely on mutable state and side effects, making them harder to test and reason about.
* **Why:** The functional style promotes clearer, more testable code, since the component's behaviour is deterministic and contained (a minimal sketch follows this section).
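To make the functional style concrete, here is a minimal sketch (written in TypeScript for continuity with the state-management examples earlier in this document; the "orderTotal" function and "LineItem" type are hypothetical):

"""typescript
interface LineItem {
  price: number;
  quantity: number;
}

// Pure function: the output depends only on the input, and no shared state is mutated
export const orderTotal = (items: readonly LineItem[]): number =>
  items.reduce((sum, item) => sum + item.price * item.quantity, 0);

// Jest-style test sketch: deterministic, no mocks or setup needed because there are no side effects
// expect(orderTotal([{ price: 10, quantity: 2 }, { price: 5, quantity: 1 }])).toBe(25);
"""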
## 5. Common Anti-Patterns and Mistakes

Avoid these common pitfalls to ensure that component design remains effective, maintainable, and aligned with TDD principles.

### 5.1. God Components

* **Description:** A component that knows too much or does too much.
* **Solution:** Apply the Single Responsibility Principle to break down the component into smaller, more focused components.

### 5.2. Tight Coupling

* **Description:** Components that are highly dependent on each other.
* **Solution:** Use dependency injection, interfaces, and abstractions to decouple components.

### 5.3. Shotgun Surgery

* **Description:** A change in one part of the system requires changes in many other parts.
* **Solution:** Apply the Open/Closed Principle and encapsulate changes within well-defined components.

### 5.4. Premature Optimization

* **Description:** Optimizing code before it is necessary.
* **Solution:** Focus on writing clear, testable code first. Optimize only when performance bottlenecks are identified through profiling. Write tests to measure performance before refactoring.

### 5.5. Ignoring Test Coverage

* **Description:** Not writing enough tests to cover all aspects of the component. Aim for high test coverage (80%+).
* **Solution:** Always prioritize writing comprehensive tests to cover all possible scenarios and edge cases. Use coverage tools to identify gaps, then close them with new tests and logic. Also consider mutation testing.

## 6. Technology-Specific Details

This section provides specific examples tailored for different technologies (Python, Java, JavaScript, etc.), highlighting the nuances that differentiate good code from great code in each ecosystem. (Examples here are limited to Python for space; the same structure applies to other modern, popular languages.)

### 6.1. Python-Specific Considerations

* **Do This:** Leverage Python's dynamic typing and duck typing to create flexible and reusable components. Use decorators for cross-cutting concerns. Use type hints for improved code clarity and maintainability where appropriate.

"""python
# Example: Using a decorator for caching
import functools


def cache(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Note: only positional arguments form the cache key in this simple
        # sketch; keyword arguments are ignored for lookup purposes.
        if args in wrapper.cache:
            return wrapper.cache[args]
        result = func(*args, **kwargs)
        wrapper.cache[args] = result
        return result

    wrapper.cache = {}
    return wrapper
"""

### 6.2. Java-Specific Considerations

(Placeholder: This section would contain Java-specific details, e.g., using the Spring Framework for dependency injection, leveraging the Java Collections Framework, etc.)

### 6.3. JavaScript-Specific Considerations

(Placeholder: This section would contain JavaScript-specific details, e.g., using React components, leveraging ES6+ features, using testing frameworks like Jest or Mocha, etc.)

### 6.4. C#-Specific Considerations

(Placeholder: This section would contain C#-specific details, e.g., using .NET's built-in dependency injection and testing framework.)

## 7. Conclusion

Adhering to these component design standards within a TDD workflow will lead to more maintainable, testable, and reusable code. By embracing these principles and practices, developers can build robust and scalable systems that meet the evolving needs of the business. When in doubt, always strive to write tests that drive the design of your components, promoting modularity, loose coupling, and clear responsibilities. Continuous refactoring, guided by a comprehensive test suite, will ensure that the code remains clean, efficient, and easy to understand. This document should be kept up-to-date with the latest developments in TDD.