# Tooling and Ecosystem Standards for Clean Code
This document outlines coding standards for Clean Code, focusing on the selection and use of appropriate tooling and ecosystem components. Adhering to these standards promotes code maintainability, readability, and efficiency, while making effective use of the tools available in each language's ecosystem.
## 1. Dependency Management
Choosing and managing dependencies is crucial for project health. Selecting the right tools and versioning them appropriately can significantly impact code quality and maintainability.
### 1.1. Standard: Explicit and Minimal Dependencies
* **Do This:** Declare all project dependencies explicitly in a dedicated dependency management file (e.g., "package.json" for Node.js, "pom.xml" for Java, "requirements.txt" for Python). Only include the minimum set of dependencies required.
* **Don't Do This:** Rely on implicit dependencies or include unnecessary libraries "just in case."
**Why:** Explicit dependencies make it clear what the project relies on. Minimal dependencies reduce the risk of conflicts, vulnerabilities, and unnecessary bloat.
**Code Example (Node.js):**
"""json
// package.json
{
  "name": "clean-code-example",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.21",
    "express": "^4.18.2"
  },
  "devDependencies": {
    "eslint": "^8.0.0",
    "jest": "^29.0.0"
  }
}
"""
**Why:** This "package.json" clarifies exactly what libraries the "clean-code-example" project uses, and what versions are compatible. The "devDependencies" key separates the tooling specific to development from the runtime dependencies
### 1.2. Standard: Semantic Versioning (SemVer)
* **Do This:** Use semantic versioning (major.minor.patch) for all dependencies and specify allowed version ranges (e.g., "^4.17.0", "~1.2.3") to allow for compatible updates.
* **Don't Do This:** Use fixed version numbers (e.g., "4.17.21") without considering potential patch updates or broader compatibility. This can lead to unexpected bugs or security vulnerabilities remaining unpatched.
**Why:** SemVer provides a standardized way to understand the potential impact of updates. Allowing for compatible updates ensures that bug fixes and minor improvements are automatically applied (within the specified range) without introducing breaking changes.
**Code Example (Python):**
"""python
# requirements.txt
requests==2.28.1
beautifulsoup4>=4.11.1,<4.12
Flask~=2.2.0
"""
**Why:** This example pins "requests" to an exact version with "==" (useful for reproducible builds, but it forgoes automatic patch updates, so pair it with regular dependency audits). It uses greater-than/less-than operators to define a version range for "beautifulsoup4". Finally, it demonstrates the "~=" (compatible release) operator, which specifies that "Flask" should be compatible with 2.2.x versions.
### 1.3. Standard: Dependency Auditing
* **Do This:** Use tools like "npm audit" (Node.js), OWASP Dependency-Check (Java/Maven), or "pip-audit" (Python) to regularly audit dependencies for known security vulnerabilities and outdated packages.
* **Don't Do This:** Ignore security warnings or postpone dependency updates indefinitely.
**Why:** Dependency auditing helps identify and mitigate potential security risks in your project's third-party components.
**Code Example (Node.js):**
"""bash
npm audit
"""
**Example output (abridged; exact format varies by npm version):**
"""
found 10 vulnerabilities (3 moderate, 7 high)

To address all issues, run:
  npm audit fix
"""
This clearly shows how "npm audit" surfaces known vulnerabilities. Running "npm audit fix" attempts to apply compatible updates automatically, but some issues (for example, those requiring a major-version upgrade) need manual intervention.
### 1.4. Standard: Dependency Injection Containers
* **Do This:** Employ dependency injection containers (e.g., Spring in Java, InversifyJS in TypeScript, Autofac in .NET) to manage dependencies effectively, increase testability and reduce coupling between classes.
* **Don't Do This:** Hardcode object creation or directly instantiate dependencies within classes. This creates tight coupling, making the code harder to test and maintain.
**Why:** Dependency injection makes code more modular and testable. It promotes loose coupling, which is a key principle of clean code.
**Code Example (TypeScript with InversifyJS):**
"""typescript
import { injectable, inject, Container } from "inversify";
import "reflect-metadata";

interface ILogger {
  log(message: string): void;
}

@injectable()
class ConsoleLogger implements ILogger {
  log(message: string): void {
    console.log(message);
  }
}

interface IApp {
  run(): void;
}

@injectable()
class App implements IApp {
  private readonly logger: ILogger;

  constructor(@inject(ConsoleLogger) logger: ILogger) {
    this.logger = logger;
  }

  run(): void {
    this.logger.log("Application started.");
  }
}

// Container setup
const myContainer = new Container();
myContainer.bind(ConsoleLogger).toSelf();
myContainer.bind(App).toSelf();

const app = myContainer.resolve(App);
app.run();
"""
**Why:** The example demonstrates how to use InversifyJS to inject the "ConsoleLogger" into the "App" class. This makes the "App" class more testable, as you can easily mock the "ILogger" interface for unit tests. The container manages the dependencies by resolving "App" and recursively injecting all of its dependencies.
## 2. Code Analysis Tools
Static code analysis tools help to identify potential issues in code before runtime. Integrating these tools into the development workflow is essential for maintaining high code quality.
### 2.1. Standard: Linters
* **Do This:** Use a linter (e.g., ESLint for JavaScript, Pylint for Python, Checkstyle/SpotBugs for Java) to enforce coding style and identify potential errors. Configure the linter with a project-specific configuration file.
* **Don't Do This:** Ignore linter warnings or disable rules without a valid reason.
**Why:** Linters automatically enforce a consistent coding style, catch common errors, and improve code readability. Consistent style allows developers to focus on the logic rather than getting distracted by layout inconsistencies.
**Code Example (JavaScript with ESLint):**
"""javascript
// .eslintrc.js
module.exports = {
  "env": {
    "browser": true,
    "node": true,
    "es2023": true
  },
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended"
  ],
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": "latest",
    "sourceType": "module"
  },
  "plugins": [
    "@typescript-eslint"
  ],
  "rules": {
    "no-unused-vars": "warn",
    "no-console": "warn",
    "@typescript-eslint/explicit-function-return-type": "warn",
    "semi": ["error", "always"]
  }
};
"""
**Why:** This configuration extends the recommended ESLint rules, enables TypeScript support, and defines custom rules for unused variables, console statements, explicit function return types, and semicolon usage. These specific customizations demonstrate how to tailor linting to project-specific requirements.
### 2.2. Standard: Static Analyzers
* **Do This:** Use static analysis tools (e.g., SonarQube, PMD, SpotBugs) to detect potential bugs, code smells, and security vulnerabilities. Integrate these tools into the CI/CD pipeline.
* **Don't Do This:** Neglect static analysis reports or fail to address identified issues.
**Why:** Static analysis tools can identify complex issues that are difficult to detect manually, leading to more robust and secure code.
**Code Example (Java with SonarQube):**
Integrate SonarQube analysis as part of a Maven build:
"""xml
<plugin>
  <groupId>org.sonarsource.scanner.maven</groupId>
  <artifactId>sonar-maven-plugin</artifactId>
  <version>3.9.1.2184</version>
</plugin>
"""
Run the analysis:
"""bash
mvn sonar:sonar
"""
**Why:** Including the SonarQube scanner plugin in the "pom.xml" enables static analysis during the build process. After the "mvn sonar:sonar" command runs, you can review the results in the SonarQube dashboard, which allows you to quickly identify issues like code smells or potential bugs in the Java code.
### 2.3. Standard: Code Formatters
* **Do This:** Use a code formatter (e.g., Prettier, Black) to automatically format code according to a consistent style. Configure the formatter to run automatically on save or as a pre-commit hook.
* **Don't Do This:** Manually format code or ignore code formatting inconsistencies.
**Why:** Code formatters eliminate subjective formatting debates and ensure a consistent look and feel across the codebase. This allows developers to focus on the logic and makes diffs more readable.
**Code Example (Python with Black):**
Configuration (e.g., in "pyproject.toml"):
"""toml
# pyproject.toml
[tool.black]
line-length = 120
target-version = ['py311']
include = '\.pyi?$'
exclude = '''
/(
    \.eggs
  | \.git
  | \.hg
  | \.mypy_cache
  | \.tox
  | \.venv
  | _build
  | buck-out
  | dist
)/
'''
"""
Running Black:
"""bash
black .
"""
**Why:** This "pyproject.toml" file configures Black to enforce a line length of 120 characters, target Python 3.11, and exclude certain directories. Running "black ." will automatically reformat all Python files in the current directory and its subdirectories based on this configuration.
## 3. Testing Frameworks
Choosing the right testing frameworks and libraries is essential for ensuring code reliability and maintainability.
### 3.1. Standard: Unit Testing Frameworks
* **Do This:** Use a unit testing framework (e.g., Jest for JavaScript, JUnit for Java, pytest for Python) to write comprehensive unit tests for all critical components.
* **Don't Do This:** Skip unit tests or write superficial tests that don't adequately verify the behavior of the code.
**Why:** Unit tests provide a safety net for code changes, ensuring that existing functionality remains intact when new features are added or bugs are fixed.
**Code Example (JavaScript with Jest):**
"""javascript
// Example function
function add(a, b) {
  return a + b;
}

// Unit test
test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});
"""
**Why:** This example shows a simple test case where the output of a function "add" is tested against a predetermined value. Jest allows for easy testing by comparing the expected and actual outputs of the "add" function.
### 3.2. Standard: Mocking Libraries
* **Do This:** Use mocking libraries (e.g., Mockito for Java, Jest's "jest.fn()" for JavaScript, unittest.mock for Python) to isolate unit tests from external dependencies.
* **Don't Do This:** Directly access external resources (e.g., databases, APIs) in unit tests, as this makes tests slow, brittle, and unreliable.
**Why:** Mocking allows you to test code in isolation, without relying on external resources. This makes tests faster, more reliable, and easier to maintain.
**Code Example (Python with unittest.mock):**
"""python
import unittest
from unittest.mock import patch

def get_data_from_api(url):
    # Assume this function makes an API call
    pass

def process_data(url):
    data = get_data_from_api(url)
    return data.get('value')

class TestProcessData(unittest.TestCase):
    @patch('__main__.get_data_from_api')
    def test_process_data(self, mock_get_data_from_api):
        mock_get_data_from_api.return_value = {'value': 10}
        result = process_data('http://example.com/api')
        self.assertEqual(result, 10)

if __name__ == '__main__':
    unittest.main()
"""
**Why:** This Python example demonstrates how to mock a call to a potentially brittle external API. It patches the "get_data_from_api" function to return a canned response using "unittest.mock", which allows the function "process_data" to be tested in isolation.
### 3.3. Standard: Test Coverage Tools
* **Do This:** Use test coverage tools (e.g., Istanbul/NYC for JavaScript, JaCoCo for Java, Coverage.py for Python) to measure the percentage of code covered by unit tests. Set a minimum coverage threshold for the project (e.g., 80%).
* **Don't Do This:** Aim solely for high coverage without considering the quality of tests. Poorly written tests that don't adequately verify the behavior of the code are not useful.
**Why:** Test coverage metrics provide a quantitative measure of how well the codebase is tested. While high coverage doesn't guarantee that the code is bug-free, it increases confidence in its correctness.
**Code Example (JavaScript with Jest):**
Configure "package.json" for test coverage reporting:
"""json
// package.json
{
  "scripts": {
    "test": "jest --coverage"
  }
}
"""
Run tests with coverage reporting:
"""bash
npm test
"""
**Why:** Running tests with the "--coverage" flag generates a coverage report via Istanbul (Jest's built-in coverage provider), detailing the lines, statements, branches, and functions covered by the tests.
## 4. CI/CD Pipelines
Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the build, test, and deployment process. These pipelines also provide infrastructure to run the static analysis discussed earlier.
### 4.1. Standard: Automated Builds
* **Do This:** Set up a CI/CD pipeline using tools like Jenkins, GitLab CI, GitHub Actions, or CircleCI to automatically build, test, and deploy code changes.
* **Don't Do This:** Manually build and deploy code, as this is error-prone and time-consuming.
**Why:** Automated builds ensure that code changes are continuously integrated, tested, and deployed, reducing the risk of integration issues and deployment failures.
**Code Example (GitHub Actions):**
"""yaml
# .github/workflows/main.yml
name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18.x'
      - name: Install dependencies
        run: npm install
      - name: Run linters
        run: npm run lint
      - name: Run tests
        run: npm test
      - name: Build project
        run: npm run build
      - name: Deploy to production
        if: github.ref == 'refs/heads/main'
        run: |
          # Add deployment code here
          echo "Deploying to production..."
"""
**Why:** This GitHub Actions workflow defines a CI/CD pipeline that automatically builds, tests, and deploys code changes to the "main" branch. It first sets up Node.js, installs dependencies, runs linters and tests, builds the project, and then deploys to production if the code changes are on the "main" branch.
### 4.2. Standard: Automated Testing
* **Do This:** Configure the CI/CD pipeline to automatically run unit tests, integration tests, and end-to-end tests on every code change.
* **Don't Do This:** Skip automated testing or rely solely on manual testing.
**Why:** Automated testing ensures that code changes are thoroughly tested before deployment, reducing the risk of introducing bugs into production.
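As a sketch, the test stage of a GitHub Actions workflow can fan out into separate jobs so the fast unit suite gates the slower end-to-end suite (the job names and the "test:e2e" npm script here are illustrative assumptions, not a fixed convention):

```yaml
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm test             # unit tests (e.g., Jest)
  e2e-tests:
    runs-on: ubuntu-latest
    needs: unit-tests             # run the slower suite only after units pass
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm run test:e2e     # end-to-end tests (e.g., Cypress or Playwright)
```

Splitting suites into jobs also makes failures easier to triage, since each suite reports its status independently on the pull request.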
### 4.3. Standard: Automated Deployments
* **Do This:** Configure the CI/CD pipeline to automatically deploy code changes to staging and production environments after successful testing.
* **Don't Do This:** Manually deploy code changes, as this is error-prone and time-consuming.
**Why:** Automated deployments ensure that code changes are quickly and reliably deployed, reducing the time it takes to get new features and bug fixes into the hands of users.
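One hedged way to express staged deployments in GitHub Actions is with environment-scoped jobs, where the production job runs only after the staging deployment succeeds (the environment names and the "./scripts/deploy.sh" command are placeholders for your own setup):

```yaml
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/deploy.sh staging      # placeholder deploy command
  deploy-production:
    runs-on: ubuntu-latest
    needs: deploy-staging                     # gate production on staging success
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/deploy.sh production
```

Using GitHub "environment" entries additionally lets you attach required reviewers or wait timers to the production deployment without changing the workflow itself.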
## 5. Documentation Tools
Effective documentation is crucial for code maintainability and collaboration.
### 5.1. Standard: API Documentation Generators
* **Do This:** Use API documentation generators (e.g., JSDoc for JavaScript, Javadoc for Java, Sphinx for Python) to automatically generate API documentation from code comments.
* **Don't Do This:** Rely solely on manual documentation or skip documenting code altogether.
**Why:** API documentation generators make it easy to create and maintain up-to-date API documentation, improving code readability and collaboration.
**Code Example (JavaScript with JSDoc):**
"""javascript
/**
 * Adds two numbers together.
 * @param {number} a The first number.
 * @param {number} b The second number.
 * @returns {number} The sum of the two numbers.
 */
function add(a, b) {
  return a + b;
}
"""
**Why:** Using the JSDoc format, standardized documentation lives directly in the code. The comment block above the function describes its parameters and return value; JSDoc can then parse these comments and generate browsable documentation.
### 5.2. Standard: README Files
* **Do This:** Include a comprehensive README file in the project repository that explains how to set up, build, test, and run the project.
* **Don't Do This:** Skip creating a README file or write a superficial README file that doesn't provide enough information.
**Why:** README files provide a central location for project documentation, making it easier for new developers to get started.
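A minimal README skeleton covering those points might look like the following (the section names and version numbers are a suggested starting point, not a required format):

```markdown
# Project Name

Short description of what the project does and why it exists.

## Prerequisites
- Node.js >= 18
- npm >= 9

## Setup
    npm install

## Running Tests
    npm test

## Building and Running
    npm run build
    npm start

## Contributing
See CONTRIBUTING.md for the branching strategy and code review guidelines.
```

Keeping the setup, test, and run commands copy-pasteable is what makes a README genuinely useful to a new developer.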
### 5.3. Standard: Architecture Decision Records (ADRs)
* **Do This:** Use Architecture Decision Records (ADRs) to document important architectural decisions, their rationale, and consequences. Tools like "adr-tools" can streamline this documentation.
* **Don't Do This:** Make significant architectural decisions without documenting them, leading to confusion and knowledge loss over time.
**Why:** ADRs provide a historical record of architectural choices, helping new developers understand the system's design and evolution. It makes explicit the reasoning behind critical design decisions, which can inform trade-offs when new features are added.
**Code Example (Markdown ADR):**
"""markdown
# 1. Record title
Date: YYYY-MM-DD
## Status
Proposed/Accepted/Rejected/Deprecated/Superseded
## Context
Describe the forces at play and relevant background information.
## Decision
State the decision you are making.
## Consequences
Describe the resulting context, after applying the decision. Include positive and negative consequences.
"""
## 6. Version Control Systems
Version control is essential for collaboration and code management.
### 6.1. Standard: Git
* **Do This:** Employ Git for version control, following established branching strategies like Gitflow or trunk-based development.
* **Don't Do This:** Commit directly to the main branch, bypass code review processes, or neglect meaningful commit messages.
**Why:** Proper Git usage facilitates collaboration, provides an audit trail of changes, and enables easy rollback to previous versions. Good branching strategies are critical for avoiding integration issues.
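The feature-branch flow can be sketched with plain Git commands (the branch and file names below are illustrative; in a real project the merge back to main would happen through a reviewed pull request):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main                    # name the default branch explicitly
git config user.email "dev@example.com"    # local identity for this sketch
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -q -m "feat: initial commit"

# Never commit directly to main: branch off for each change
git checkout -q -b feature/add-login
echo "login" >> app.txt
git commit -q -am "feat: add login flow"

# In practice this merge happens via a reviewed pull request
git checkout -q main
git merge -q --no-ff feature/add-login -m "merge: feature/add-login"
git log --oneline
```

The "--no-ff" merge preserves the feature branch as a distinct unit in history, which keeps the audit trail readable and makes a whole feature easy to revert.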
### 6.2. Standard: Code Review Tools
* **Do This:** Utilize code review tools like GitHub Pull Requests, GitLab Merge Requests, or Bitbucket pull requests to ensure collaborative code review and quality assurance.
* **Don't Do This:** Merge code without review or conduct superficial reviews that fail to identify potential issues.
**Why:** Code review promotes knowledge sharing, improves code quality, and helps catch potential bugs or security vulnerabilities before they reach production.
**Code Example (GitHub Pull Request):**
A typical code review workflow involves opening a pull request against the main branch, where other developers review the changes, provide feedback, and approve the pull request before it is merged. This process ensures that code undergoes thorough scrutiny before integration.
## 7. Monitoring and Logging Tools
### 7.1. Standard: Centralized Logging
* **Do This:** Use a centralized logging system (e.g., ELK Stack, Splunk, Graylog) to gather logs from all application components in one place. Use structured logging (e.g., JSON) for easy querying and analysis.
* **Don't Do This:** Rely on local log files that are difficult to access and analyze, or use unstructured logging that makes it hard to search and correlate events.
**Why:** Centralized logging makes it easier to monitor application health, troubleshoot problems, and analyze trends. Structured logging allows for efficient searching, filtering, and aggregation of log data.
**Code Example (Python with structured logging):**
"""python
import logging
import json

def configure_logging():
    logger = logging.getLogger('my_app')
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger

logger = configure_logging()

def process_data(data):
    try:
        result = data['value'] * 2
        logger.info(json.dumps({'event': 'data_processed', 'input': data, 'result': result}))
        return result
    except Exception as e:
        logger.error(json.dumps({'event': 'processing_failed', 'input': data, 'error': str(e)}))
        return None
"""
**Why:** Emitting each log message as a JSON document makes events easy to parse, search, and correlate once they are shipped to a centralized logging system.
### 7.2. Standard: Application Performance Monitoring (APM)
* **Do This:** Use APM tools (e.g., New Relic, Datadog, Dynatrace) to monitor application performance metrics like response time, throughput, and error rates. Set up alerts to notify you of performance degradation or errors.
* **Don't Do This:** Neglect performance monitoring or wait for users to report issues before addressing them.
**Why:** APM tools provide real-time visibility into application performance, helping you identify and resolve bottlenecks before they impact users.
## Conclusion
Adhering to these Tooling and Ecosystem standards within the context of Clean Code principles helps create maintainable, readable, efficient, and secure systems. Using up-to-date tooling and following prescribed patterns enables the development team to focus on building robust solutions while minimizing technical debt. This leads to enhanced collaboration, improved code quality, and faster delivery of high-quality software.
*danielsogl, created Mar 6, 2025*
This guide explains how to effectively use .clinerules
with Cline, the AI-powered coding assistant.
The .clinerules
file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.
Place the .clinerules
file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
# Project Overview project: name: 'Your Project Name' description: 'Brief project description' stack: - technology: 'Framework/Language' version: 'X.Y.Z' - technology: 'Database' version: 'X.Y.Z'
# Code Standards standards: style: - 'Use consistent indentation (2 spaces)' - 'Follow language-specific naming conventions' documentation: - 'Include JSDoc comments for all functions' - 'Maintain up-to-date README files' testing: - 'Write unit tests for all new features' - 'Maintain minimum 80% code coverage'
# Security Guidelines security: authentication: - 'Implement proper token validation' - 'Use environment variables for secrets' dataProtection: - 'Sanitize all user inputs' - 'Implement proper error handling'
Be Specific
Maintain Organization
Regular Updates
# Common Patterns Example patterns: components: - pattern: 'Use functional components by default' - pattern: 'Implement error boundaries for component trees' stateManagement: - pattern: 'Use React Query for server state' - pattern: 'Implement proper loading states'
Commit the Rules
.clinerules
in version controlTeam Collaboration
Rules Not Being Applied
Conflicting Rules
Performance Considerations
# Basic .clinerules Example project: name: 'Web Application' type: 'Next.js Frontend' standards: - 'Use TypeScript for all new code' - 'Follow React best practices' - 'Implement proper error handling' testing: unit: - 'Jest for unit tests' - 'React Testing Library for components' e2e: - 'Cypress for end-to-end testing' documentation: required: - 'README.md in each major directory' - 'JSDoc comments for public APIs' - 'Changelog updates for all changes'
# Advanced .clinerules Example project: name: 'Enterprise Application' compliance: - 'GDPR requirements' - 'WCAG 2.1 AA accessibility' architecture: patterns: - 'Clean Architecture principles' - 'Domain-Driven Design concepts' security: requirements: - 'OAuth 2.0 authentication' - 'Rate limiting on all APIs' - 'Input validation with Zod'
# Angular Guidelines Use this guidelines when working with Angular related code. ## 1. Core Architecture - **Standalone Components:** Components, directives, and pipes are standalone by default. The `standalone: true` flag is no longer required and should be omitted in new code (Angular v17+ and above). - **Strong Typing:** TypeScript types, interfaces, and models provide type safety throughout the codebase - **Single Responsibility:** Each component and service has a single, well-defined responsibility - **Rule of One:** Files focus on a single concept or functionality - **Reactive State:** Signals provide reactive and efficient state management - **Dependency Injection:** Angular's DI system manages service instances - **Function-Based DI:** Use function-based dependency injection with the `inject()` function instead of constructor-based injection in all new code. Example: ```typescript import { inject } from "@angular/core"; import { HttpClient } from "@angular/common/http"; export class MyService { private readonly http = inject(HttpClient); // ... } ``` - **Lazy Loading:** Deferrable Views and route-level lazy loading with `loadComponent` improve performance - **Directive Composition:** The Directive Composition API enables reusable component behavior - **Standalone APIs Only:** Do not use NgModules, CommonModule, or RouterModule. Import only required standalone features/components. - **No Legacy Modules:** Do not use or generate NgModules for new features. Migrate existing modules to standalone APIs when possible. ## 2. Angular Style Guide Patterns - **Code Size:** Files are limited to 400 lines of code - **Single Purpose Files:** Each file defines one entity (component, service, etc.) 
- **Naming Conventions:** Symbols have consistent, descriptive names - **Folder Structure:** Code is organized by feature-based folders - **File Separation:** Templates and styles exist in their own files for components - **Property Decoration:** Input and output properties have proper decoration - **Component Selectors:** Component selectors use custom prefixes and kebab-case (e.g., `app-feature-name`) - **No CommonModule or RouterModule Imports:** Do not import CommonModule or RouterModule in standalone components. Import only the required standalone components, directives, or pipes. ## 3. Input Signal Patterns - **Signal-Based Inputs:** The `input()` function creates InputSignals: ```typescript // Current pattern readonly value = input(0); // Creates InputSignal // Legacy pattern @Input() value = 0; ``` - **Required Inputs:** The `input.required()` function marks inputs as mandatory: ```typescript readonly value = input.required<number>(); ``` - **Input Transformations:** Transformations convert input values: ```typescript readonly disabled = input(false, { transform: booleanAttribute }); readonly value = input(0, { transform: numberAttribute }); ``` - **Two-Way Binding:** Model inputs enable two-way binding: ```typescript readonly value = model(0); // Creates a model input with change propagation // Model values update with .set() or .update() increment(): void { this.value.update(v => v + 1); } ``` - **Input Aliases:** Aliases provide alternative input names: ```typescript readonly value = input(0, { alias: "sliderValue" }); ``` ## 3a. Typed Reactive Forms - **Typed Forms:** Always use strictly typed reactive forms by defining an interface for the form values and using `FormGroup<MyFormType>`, `FormBuilder.group<MyFormType>()`, and `FormControl<T>()`. - **Non-Nullable Controls:** Prefer `nonNullable: true` for controls to avoid null issues and improve type safety. - **Patch and Get Values:** Use `patchValue` and `getRawValue()` to work with typed form values. 
- **Reference:** See the [Angular Typed Forms documentation](https://angular.dev/guide/forms/typed-forms) for details and examples. ## 4. Component Patterns - **Naming Pattern:** Components follow consistent naming - `feature.type.ts` (e.g., `hero-list.component.ts`) - **Template Extraction:** Non-trivial templates exist in separate `.html` files - **Style Extraction:** Styles exist in separate `.css/.scss` files - **Signal-Based Inputs:** Components use the `input()` function for inputs - **Two-Way Binding:** Components use the `model()` function for two-way binding - **Lifecycle Hooks:** Components implement appropriate lifecycle hook interfaces (OnInit, OnDestroy, etc.) - **Element Selectors:** Components use element selectors (`selector: 'app-hero-detail'`) - **Logic Delegation:** Services contain complex logic - **Input Initialization:** Inputs have default values or are marked as required - **Lazy Loading:** The `@defer` directive loads heavy components or features - **Error Handling:** Try-catch blocks handle errors - **Modern Control Flow:** Templates use `@if`, `@for`, `@switch` instead of structural directives - **State Representation:** Components implement loading and error states - **Derived State:** The `computed()` function calculates derived state - **No NgModules:** Do not use or reference NgModules in new code. ## 5. Styling Patterns - **Component Encapsulation:** Components use scoped styles with proper encapsulation - **CSS Methodology:** BEM methodology guides CSS class naming when not using Angular Material - **Component Libraries:** Angular Material or other component libraries provide consistent UI elements - **Theming:** Color systems and theming enable consistent visual design - **Accessibility:** Components follow a11y standards - **Dark Mode:** Components support dark mode where appropriate ## 5a. 
Angular Material and Angular CDK Usage

- **Standard UI Library:** Use Angular Material v3 for all standard UI components (buttons, forms, navigation, dialogs, etc.) to ensure consistency, accessibility, and alignment with Angular best practices.
- **Component Development:** Build new UI components and features using Angular Material components as the foundation. Only create custom components when Material does not provide a suitable solution.
- **Behavioral Primitives:** Use Angular CDK for advanced behaviors (drag-and-drop, overlays, accessibility, virtual scrolling, etc.) and for building custom components that require low-level primitives.
- **Theming:** Leverage Angular Material's theming system for consistent color schemes, dark mode support, and branding. Define and use custom themes in `styles.scss` or feature-level styles as needed.
- **Accessibility:** All UI components must meet accessibility (a11y) standards. Prefer Material components for built-in a11y support. When using CDK or custom components, follow WCAG and ARIA guidelines.
- **Best Practices:**
  - Prefer Material's layout and typography utilities for spacing and text.
  - Use Material icons and fonts for visual consistency.
  - Avoid mixing multiple UI libraries in the same project.
  - Reference the [Angular Material documentation](https://material.angular.io) for usage patterns and updates.
- **CDK Utilities:** Use Angular CDK utilities for custom behaviors, overlays, accessibility, and testing harnesses.
- **Migration:** For legacy or custom components, migrate to Angular Material/CDK where feasible.

## 5b. Template Patterns

- **Modern Control Flow:** Use the new Angular control flow syntax (`@if`, `@for`, `@switch`) in templates. Do not use legacy structural directives such as `*ngIf`, `*ngFor`, or `*ngSwitch`.
- **No Legacy Structural Directives:** Remove or migrate any usage of `*ngIf`, `*ngFor`, or `*ngSwitch` to the new control flow syntax in all new code. Legacy code should be migrated when touched.
- **Referencing Conditional Results:** When using `@if`, reference the result using the `as` keyword, e.g. `@if (user(); as u) { ... }`. This is the recommended pattern for accessing the value inside the block. See the [Angular documentation](https://angular.dev/guide/templates/control-flow#referencing-the-conditional-expressions-result) for details.

## 6. Service and DI Patterns

- **Service Declaration:** Services use the `@Injectable()` decorator with `providedIn: 'root'` for singletons
- **Data Services:** Data services handle API calls and data operations
- **Error Handling:** Services include error handling
- **DI Hierarchy:** Services follow the Angular DI hierarchy
- **Service Contracts:** Interfaces define service contracts
- **Focused Responsibilities:** Services focus on specific tasks
- **Function-Based DI:** Use function-based dependency injection with the `inject()` function instead of constructor-based injection in all new code. Example:

```typescript
import { inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";

export class MyService {
  private readonly http = inject(HttpClient);
  // ...
}
```

## 7. Directive and Pipe Patterns

- **Attribute Directives:** Directives handle presentation logic without templates
- **Host Property:** The `host` property manages bindings and listeners:

```typescript
@Directive({
  selector: '[appHighlight]',
  host: {
    // Host bindings
    '[class.highlighted]': 'isHighlighted',
    '[style.color]': 'highlightColor',
    // Host listeners
    '(click)': 'onClick($event)',
    '(mouseenter)': 'onMouseEnter()',
    '(mouseleave)': 'onMouseLeave()',
    // Static properties
    'role': 'button',
    '[attr.aria-label]': 'ariaLabel'
  }
})
```

- **Selector Prefixes:** Directive selectors use custom prefixes
- **Pure Pipes:** Pipes are pure when possible for better performance
- **Pipe Naming:** Pipes follow camelCase naming conventions

## 8. State Management Patterns

- **Signals:** Signals serve as the primary state management solution
- **Component Inputs:** Signal inputs with `input()` handle component inputs
- **Two-Way Binding:** Model inputs with `model()` enable two-way binding
- **Local State:** Writable signals with `signal()` manage local component state
- **Derived State:** Computed signals with `computed()` calculate derived state
- **Side Effects:** The `effect()` function handles side effects
- **Error Handling:** Signal computations include error handling
- **Signal Conversion:** The `toSignal()` and `toObservable()` functions enable interoperability with RxJS

## 9. Testing Patterns

- **Test Coverage:** Tests cover components and services
- **Unit Tests:** Focused unit tests verify services, pipes, and components
- **Component Testing:** TestBed and component harnesses test components
- **Mocking:** Tests use mocking techniques for dependencies
- **Test Organization:** Tests follow the AAA pattern (Arrange, Act, Assert)
- **Test Naming:** Tests have descriptive names that explain the expected behavior
- **Playwright Usage:** Playwright handles E2E testing with fixtures and test isolation
- **Test Environment:** Test environments match production as closely as possible

## 10. Performance Patterns

- **Change Detection:** Components use the OnPush change detection strategy
- **Lazy Loading:** Routes and components load lazily
- **Virtual Scrolling:** Virtual scrolling renders long lists efficiently
- **Memoization:** Memoization optimizes expensive computations
- **Bundle Size:** Bundle size monitoring and optimization reduce load times
- **Server-Side Rendering:** SSR improves initial load performance
- **Web Workers:** Web workers handle intensive operations

## 11. Security Patterns

- **XSS Prevention:** User input undergoes sanitization
- **CSRF Protection:** CSRF tokens secure forms
- **Content Security Policy:** CSP headers restrict content sources
- **Authentication:** Secure authentication protects user accounts
- **Authorization:** Authorization checks control access
- **Sensitive Data:** Client-side code excludes sensitive data

## 12. Accessibility Patterns

- **ARIA Attributes:** ARIA attributes enhance accessibility
- **Keyboard Navigation:** Interactive elements support keyboard access
- **Color Contrast:** UI elements maintain proper color contrast ratios
- **Screen Readers:** Components work with screen readers
- **Focus Management:** Focus management guides user interaction
- **Alternative Text:** Images include alt text
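The XSS-prevention rule in the Security Patterns section above normally means relying on Angular's built-in template sanitization; interpolation (`{{ }}`) already escapes values. For the rare case where markup is assembled by hand (e.g. strings destined for `innerHTML` outside the framework), escaping can be sketched framework-free. The `escapeHtml` helper below is a hypothetical illustration, not an Angular API:

```typescript
// Hypothetical helper: escape the five HTML metacharacters so user
// input can be interpolated into markup as inert text.
const HTML_ESCAPES: Record<string, string> = {
  "&": "&amp;",
  "<": "&lt;",
  ">": "&gt;",
  '"': "&quot;",
  "'": "&#39;",
};

function escapeHtml(input: string): string {
  return input.replace(/[&<>"']/g, (ch) => HTML_ESCAPES[ch]);
}

console.log(escapeHtml('<img src=x onerror="alert(1)">'));
// → &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

This is defense in depth, not a replacement for Angular's sanitizer or a Content Security Policy.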
# Roo Code Custom System Prompt

To add this as a custom prompt to Roo Code, you can completely replace the system prompt for this mode (aside from the role definition and custom instructions) by creating a file at `.roo/system-prompt-codershortrules` in your workspace.

You are Roo, a highly skilled software engineer with extensive knowledge in many programming languages, frameworks, design patterns, and best practices. Use tools one at a time to complete tasks step-by-step. Wait for user confirmation after each tool use.

## Tools

- `read_file`: Read file contents. Use for analyzing code, text files, or configs. Output includes line numbers. Extracts text from PDFs and DOCX. Not for other binary files. Parameters: `path` (required)
- `search_files`: Search files in a directory using regex. Shows matches with context. Useful for finding code patterns or specific content. Parameters: `path` (required), `regex` (required), `file_pattern` (optional)
- `list_files`: List files and directories. Can be recursive. Don't use it to check whether files you created exist; the user will confirm. Parameters: `path` (required), `recursive` (optional)
- `list_code_definition_names`: List top-level code definitions (classes, functions, etc.) in a directory. Helps understand codebase structure. Parameters: `path` (required)
- `apply_diff`: Replace code in a file using a search and replace block. Must match existing content exactly. Use `read_file` first if unsure. Parameters: `path` (required), `diff` (required), `start_line` (required), `end_line` (required). Diff format:

```text
<<<<<<< SEARCH
[exact content]
=======
[new content]
>>>>>>> REPLACE
```

- `write_to_file`: Write full content to a file. Overwrites the file if it exists, creates it if not. MUST provide COMPLETE file content, not partial updates. MUST include all 3 parameters: `path`, `content`, and `line_count`. Parameters: `path` (required), `content` (required), `line_count` (required)
- `execute_command`: Run CLI commands. Explain what the command does. Prefer complex commands over scripts. Commands run in the current directory. To run in a different directory, use `cd path && command`. Parameters: `command` (required)
- `ask_followup_question`: Ask the user a question to get more information. Use when you need clarification or details. Parameters: `question` (required)
- `attempt_completion`: Present the task result to the user. Optionally provide a CLI command to demo the result. Don't use it until previous tool uses are confirmed successful. Parameters: `result` (required), `command` (optional)

## Tool Use Formatting

Format tool use with XML tags. IMPORTANT: REPLACE `tool_name` with the tool you want to use, for example `read_file`. IMPORTANT: REPLACE `parameter_name` with the parameter name, for example `path`.

## Guidelines

- Choose the right tool for the task.
- Use one tool at a time.
- Format tool use correctly.
- Wait for user confirmation after each tool use.
- Don't assume tool success; wait for user feedback.

## Rules

- Pass correct paths to tools. Don't use `~` or `$HOME`.
- Tailor commands to the user's system.
- Prefer other editing tools over `write_to_file` for changes.
- Provide complete file content when using `write_to_file`.
- Don't ask unnecessary questions; use tools to get information.
- Don't be conversational; be direct and technical.
- Consider `environment_details` for context.
- ALWAYS replace `tool_name`, `parameter_name`, and `parameter_value` with actual values.

## Objective

Break the task into steps. Use tools to accomplish each step. Wait for user confirmation after each tool use. Use `attempt_completion` when the task is complete.
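As a concrete illustration of the `apply_diff` format described above, a diff body might look like the following (the file content shown is invented purely for illustration):

```text
<<<<<<< SEARCH
function add(a, b) {
  return a - b;
}
=======
function add(a, b) {
  return a + b;
}
>>>>>>> REPLACE
```

The SEARCH half must match the target file's current content exactly, character for character, or the edit is rejected.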
# NgRx Signals Patterns

This document outlines the state management patterns used in our Angular applications with NgRx Signals Store.

## 1. NgRx Signals Architecture

- **Component-Centric Design:** Stores are designed around component requirements
- **Hierarchical State:** State is organized in hierarchical structures
- **Computed State:** Derived state uses computed values
- **Declarative Updates:** State updates use `patchState` for immutability
- **Store Composition:** Stores compose using features and providers
- **Reactivity:** UIs build on automatic change detection
- **Signal Interoperability:** Signals integrate with existing RxJS-based systems
- **SignalMethod & RxMethod:** Use `signalMethod` for lightweight, signal-driven side effects; use `rxMethod` for Observable-based side effects and RxJS integration. When a service returns an Observable, always use `rxMethod` for side effects instead of converting to a Promise or using async/await.

## 2. Signal Store Structure

- **Store Creation:** The `signalStore` function creates stores
- **Protected State:** Signal Store state is protected by default (`{ protectedState: true }`)
- **State Definition:** The initial state shape is defined with `withState<StateType>({...})`
  - Root-level state is always an object: `withState({ users: [], count: 0 })`
  - Arrays are contained within objects: `withState({ items: [] })`
- **Dependency Injection:** Stores are injectable with `{ providedIn: 'root' }` or feature/component providers
- **Store Features:** Built-in features (`withEntities`, `withHooks`, `signalStoreFeature`) handle cross-cutting concerns and enable store composition
- **State Interface:** State interfaces provide strong typing
- **Private Members:** Prefix all internal state, computed signals, and methods with an underscore (`_`). Ensure unique member names across state, computed, and methods.
```typescript
withState({ count: 0, _internalCount: 0 });
withComputed(({ count, _internalCount }) => ({
  doubleCount: computed(() => count() * 2),
  _doubleInternal: computed(() => _internalCount() * 2),
}));
```

- **Member Integrity:** Store members have unique names across state, computed, and methods
- **Initialization:** State initializes with meaningful defaults
- **Collection Management:** The `withEntities` feature manages collections. Prefer atomic entity operations (`addEntity`, `updateEntity`, `removeEntity`, `setAllEntities`) over bulk state updates. Use `entityConfig` and `selectId` for entity identification.
- **Entity Adapter Configuration:** Use `entityConfig` to configure the entity adapter for each store. Always specify the `entity` type, `collection` name, and a `selectId` function for unique entity identification. Pass the config to `withEntities<T>(entityConfig)` for strong typing and consistent entity management.

```typescript
const userEntityConfig = entityConfig({
  entity: type<User>(),
  collection: "users",
  selectId: (user: User) => user.id,
});

export const UserStore = signalStore(
  { providedIn: "root" },
  withState(initialState),
  withEntities(userEntityConfig),
  // ...
);
```

- **Custom Store Properties:** Use `withProps` to add static properties, observables, and dependencies. Expose observables with `toObservable`.
```typescript
// Signal store structure example
import {
  signalStore,
  withState,
  withComputed,
  withMethods,
  patchState,
  type,
} from "@ngrx/signals";
import { withEntities, entityConfig, setAllEntities } from "@ngrx/signals/entities";
import { rxMethod } from "@ngrx/signals/rxjs-interop";
import { tapResponse } from "@ngrx/operators";
import { pipe, switchMap } from "rxjs";
import { computed, inject } from "@angular/core";
import { UserService } from "./user.service";
import { User } from "./user.model";

export interface UserState {
  selectedUserId: string | null;
  loading: boolean;
  error: string | null;
}

const initialState: UserState = {
  selectedUserId: null,
  loading: false,
  error: null,
};

const userEntityConfig = entityConfig({
  entity: type<User>(),
  collection: "users",
  selectId: (user: User) => user.id,
});

export const UserStore = signalStore(
  { providedIn: "root" },
  withState(initialState),
  withEntities(userEntityConfig),
  withComputed(({ usersEntities, usersEntityMap, selectedUserId }) => ({
    selectedUser: computed(() => {
      const id = selectedUserId();
      return id ? usersEntityMap()[id] : undefined;
    }),
    totalUserCount: computed(() => usersEntities().length),
  })),
  withMethods((store, userService = inject(UserService)) => ({
    loadUsers: rxMethod<void>(
      pipe(
        switchMap(() => {
          patchState(store, { loading: true, error: null });
          return userService.getUsers().pipe(
            tapResponse({
              next: (users) =>
                patchState(store, setAllEntities(users, userEntityConfig), {
                  loading: false,
                }),
              error: () =>
                patchState(store, {
                  loading: false,
                  error: "Failed to load users",
                }),
            }),
          );
        }),
      ),
    ),
    selectUser(userId: string | null): void {
      patchState(store, { selectedUserId: userId });
    },
  })),
);
```

## 3. Signal Store Methods

- **Method Definition:** Methods are defined within `withMethods`
- **Dependency Injection:** The `inject()` function accesses services within `withMethods`
- **Method Organization:** Methods are grouped by domain functionality
- **Method Naming:** Methods have clear, action-oriented names
- **State Updates:** `patchState(store, newStateSlice)` or `patchState(store, (currentState) => newStateSlice)` updates state immutably
- **Async Operations:** Methods handle async operations and update loading/error states
- **Computed Properties:** `withComputed` defines derived state
- **RxJS Integration:** `rxMethod` integrates RxJS streams. Use `rxMethod` for all store methods that interact with Observable-based APIs or services. Avoid using async/await with Observables in store methods.

```typescript
// Signal store method patterns
import { signalStore, withState, withMethods, patchState } from "@ngrx/signals";
import { rxMethod } from "@ngrx/signals/rxjs-interop";
import { tapResponse } from "@ngrx/operators";
import { pipe, switchMap } from "rxjs";
import { inject } from "@angular/core";
import { TodoService } from "./todo.service";
import { Todo } from "./todo.model";

export interface TodoState {
  todos: Todo[];
  loading: boolean;
}

export const TodoStore = signalStore(
  { providedIn: "root" },
  withState<TodoState>({ todos: [], loading: false }),
  withMethods((store, todoService = inject(TodoService)) => ({
    addTodo(todo: Todo): void {
      patchState(store, (state) => ({
        todos: [...state.todos, todo],
      }));
    },
    loadTodosSimple: rxMethod<void>(
      pipe(
        switchMap(() => {
          patchState(store, { loading: true });
          return todoService.getTodos().pipe(
            tapResponse({
              next: (todos) => patchState(store, { todos, loading: false }),
              error: () => patchState(store, { loading: false }),
            }),
          );
        }),
      ),
    ),
  })),
);
```

## 4. Entity Management

- **Entity Configuration:** Entity configurations include ID selectors
- **Collection Operations:** Entity operations handle CRUD operations
- **Entity Relationships:** Computed properties manage entity relationships
- **Entity Updates:** Prefer atomic entity operations (`addEntity`, `updateEntity`, `removeEntity`, `setAllEntities`) over bulk state updates. Use `entityConfig` and `selectId` for entity identification.

```typescript
// Entity management patterns
import { signalStore, signalMethod, withMethods, patchState, type } from "@ngrx/signals";
import {
  withEntities,
  entityConfig,
  addEntity,
  updateEntity,
  removeEntity,
  setAllEntities,
} from "@ngrx/signals/entities";

const userEntityConfig = entityConfig({
  entity: type<User>(),
  collection: "users",
  selectId: (user: User) => user.id,
});

export const UserStore = signalStore(
  withEntities(userEntityConfig),
  withMethods((store) => ({
    addUser: signalMethod<User>((user) => {
      patchState(store, addEntity(user, userEntityConfig));
    }),
    updateUser: signalMethod<{ id: string; changes: Partial<User> }>(
      ({ id, changes }) => {
        patchState(store, updateEntity({ id, changes }, userEntityConfig));
      },
    ),
    removeUser: signalMethod<string>((id) => {
      patchState(store, removeEntity(id, userEntityConfig));
    }),
    setUsers: signalMethod<User[]>((users) => {
      patchState(store, setAllEntities(users, userEntityConfig));
    }),
  })),
);
```

## 5. Component Integration

### Component State Access

- **Signal Properties:** Components access signals directly in templates
- **OnPush Strategy:** Signal-based components use OnPush change detection
- **Store Injection:** Components inject store services with the `inject` function
- **Default Values:** Signals have default values
- **Computed Values:** Components derive computed values from signals
- **Signal Effects:** Component effects handle side effects

```typescript
// Component integration patterns
@Component({
  standalone: true,
  imports: [UserListComponent],
  template: `
    @if (userStore.users().length > 0) {
      <app-user-list [users]="userStore.users()"></app-user-list>
    } @else {
      <p>No users loaded yet.</p>
    }
    <div>Selected user: {{ selectedUserName() }}</div>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class UsersContainerComponent implements OnInit {
  readonly userStore = inject(UserStore);

  selectedUserName = computed(() => {
    const user = this.userStore.selectedUser();
    return user ? user.name : "None";
  });

  constructor() {
    effect(() => {
      const userId = this.userStore.selectedUserId();
      if (userId) {
        console.log(`User selected: ${userId}`);
      }
    });
  }

  ngOnInit() {
    this.userStore.loadUsers();
  }
}
```

### Signal Store Hooks

- **Lifecycle Hooks:** The `withHooks` feature adds lifecycle hooks to stores
- **Initialization:** The `onInit` hook initializes stores
- **Cleanup:** The `onDestroy` hook cleans up resources
- **State Synchronization:** Hooks synchronize state between stores

```typescript
// Signal store hooks patterns
export const UserStore = signalStore(
  withState<UserState>({ /* initial state */ }),
  withMethods(/* store methods */),
  withHooks({
    onInit: (store) => {
      // Initialize the store
      store.loadUsers();
    },
    onDestroy: () => {
      // Clean up resources here
    },
  }),
);
```

## 6. Advanced Signal Patterns

### Signal Store Features

- **Feature Creation:** The `signalStoreFeature` function creates reusable features
- **Generic Feature Types:** Generic type parameters enhance feature reusability

```typescript
function withMyFeature<T>(config: Config<T>) {
  return signalStoreFeature(/*...*/);
}
```

- **Feature Composition:** Multiple features compose together
- **Cross-Cutting Concerns:** Features handle logging, undo/redo, and other concerns
- **State Slices:** Features define and manage specific state slices

```typescript
// Signal store feature patterns
export function withUserFeature() {
  return signalStoreFeature(
    withState<UserFeatureState>({ /* feature state */ }),
    withComputed((state) => ({ /* computed properties */ })),
    withMethods((store) => ({ /* methods */ })),
  );
}

// Using the feature
export const AppStore = signalStore(
  withUserFeature(),
  withOtherFeature(),
  withMethods((store) => ({ /* app-level methods */ })),
);
```

### Signals and RxJS Integration

- **Signal Conversion:** `toSignal()` and `toObservable()` convert between Signals and Observables
- **Effects:** Angular's `effect()` function reacts to signal changes
- **RxJS Method:** `rxMethod<T>(pipeline)` handles Observable-based side effects. Always prefer `rxMethod` for Observable-based service calls in stores. Do not convert Observables to Promises for store logic.
  - Accepts input values, Observables, or Signals
  - Manages the subscription lifecycle automatically
- **Reactive Patterns:** Signals combine with RxJS for complex asynchronous operations

```typescript
// Signal and RxJS integration patterns
import { signalStore, withState, withMethods, patchState } from "@ngrx/signals";
import { rxMethod } from "@ngrx/signals/rxjs-interop";
import { tapResponse } from "@ngrx/operators";
import { pipe, switchMap } from "rxjs";
import { inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { User } from "./user.model";

export interface UserState {
  users: User[];
  loading: boolean;
  error: string | null;
}

export const UserStore = signalStore(
  { providedIn: "root" },
  withState({ users: [], loading: false, error: null }),
  withMethods((store, http = inject(HttpClient)) => ({
    loadUsers: rxMethod<void>(
      pipe(
        switchMap(() => {
          patchState(store, { loading: true, error: null });
          return http.get<User[]>("/api/users").pipe(
            tapResponse({
              next: (users) => patchState(store, { users, loading: false }),
              error: () =>
                patchState(store, {
                  loading: false,
                  error: "Failed to load users",
                }),
            }),
          );
        }),
      ),
    ),
  })),
);
```

### Signal Method for Side Effects

The `signalMethod` function manages side effects driven by Angular Signals within Signal Store:

- **Input Flexibility:** The processor function accepts static values or Signals
- **Automatic Cleanup:** The underlying effect cleans up when the store is destroyed
- **Explicit Tracking:** Only the input signal passed to the processor function is tracked
- **Lightweight:** Smaller bundle size compared to `rxMethod`

```typescript
// Signal method patterns
import { signalStore, signalMethod, withState, withProps, withMethods, patchState } from '@ngrx/signals';
import { inject } from '@angular/core';
import { Logger } from './logger';

interface UserPreferencesState {
  theme: 'light' | 'dark';
  sendNotifications: boolean;
}

const initialState: UserPreferencesState = {
  theme: 'light',
  sendNotifications: true,
};

export const PreferencesStore = signalStore(
  { providedIn: 'root' },
  withState(initialState),
  withProps(() => ({
    logger: inject(Logger),
  })),
  withMethods((store) => ({
    setSendNotifications(enabled: boolean): void {
      patchState(store, { sendNotifications: enabled });
    },
    // Signal method reacts to theme changes
    logThemeChange: signalMethod<'light' | 'dark'>((theme) => {
      store.logger.log(`Theme changed to: ${theme}`);
    }),
    setTheme(newTheme: 'light' | 'dark'): void {
      patchState(store, { theme: newTheme });
    },
  })),
);
```

## 7. Custom Store Properties

- **Custom Properties:** The `withProps` feature adds static properties, observables, and dependencies
- **Observable Exposure:** `toObservable` within `withProps` exposes state as observables

```typescript
withProps(({ isLoading }) => ({
  isLoading$: toObservable(isLoading),
}));
```

- **Dependency Grouping:** `withProps` groups dependencies for use across store features

```typescript
withProps(() => ({
  booksService: inject(BooksService),
  logger: inject(Logger),
}));
```

## 8. Project Organization

### Store Organization

- **File Location:** Store definitions (`*.store.ts`) live in dedicated files
- **Naming Convention:** Stores follow the naming pattern `FeatureNameStore`
- **Model Co-location:** State interfaces and models live near store definitions
- **Provider Functions:** Provider functions (`provideFeatureNameStore()`) encapsulate store providers

```typescript
// Provider function pattern
import { Provider } from "@angular/core";
import { UserStore } from "./user.store";

export function provideUserSignalStore(): Provider {
  return UserStore;
}
```

### Store Hierarchy

- **Parent-Child Relationships:** Stores have clear relationships
- **State Sharing:** Related components share state
- **State Ownership:** Each state slice has a clear owner
- **Store Composition:** Complex UIs compose multiple stores
# State Management Standards for Clean Code

This document outlines coding standards for state management within Clean Code principles. It provides specific guidelines and examples to ensure code related to state is maintainable, readable, performant, and secure. These standards are designed to work with the latest recommended practices and features within the Clean Code ecosystem.

## 1. Principles of Clean State Management

Clean state management is about structuring your application's data in a way that's predictable, manageable, and testable. It involves making state changes explicit, limiting side effects, and ensuring data consistency. Applying clean code principles to state enhances maintainability, reduces bugs, and improves collaborative development.

* **Single Source of Truth:** Ensure each piece of data has one authoritative source. This prevents inconsistencies and simplifies debugging.
* **Immutability:** Favor immutable data structures. Immutable data makes state changes more predictable and helps prevent unintended side effects.
* **Explicit State Transitions:** State transitions should be clear and well-defined, making it easier to understand how the application evolves over time.
* **Separation of Concerns:** Keep state management logic separate from UI components or business logic. This enhances modularity and testability.
* **Minimal Global State:** Limit the use of global state. Widespread global state can make it difficult to track dependencies and lead to unexpected behavior.

## 2. Architectural Patterns for State Management

Choosing the right architecture for state management depends on the complexity of the application. Here are a few common patterns and guidelines:

### 2.1 Local State

Managing state within a single component should be the default option. Use local state for isolated functionality that doesn't require sharing state or reactivity beyond the component's scope.
* **Do This:** Use local state for isolated component features.
* **Don't Do This:** Share local state directly between unrelated components.

```javascript
// Example React local state using useState
import React, { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}
```

### 2.2 Redux Pattern (Centralized State)

The Redux pattern emphasizes a single store for application state, using reducers to handle actions and state transitions immutably.

* **Do This:**
  * Use Redux or similar libraries for complex, application-wide state.
  * Define actions as plain objects with a `type` field.
  * Use pure functions as reducers to ensure predictable state transitions.
  * Cache selector results to prevent unnecessary re-renders.
* **Don't Do This:**
  * Mutate the state directly in reducers.
  * Perform asynchronous operations directly in reducers.
  * Overuse Redux for simple components with minimal state.
"""javascript // Example Redux setup // Action const INCREMENT = 'INCREMENT'; const DECREMENT = 'DECREMENT'; const increment = () => ({ type: INCREMENT }); const decrement = () => ({ type: DECREMENT }); // Reducer const initialState = { count: 0 }; const counterReducer = (state = initialState, action) => { switch (action.type) { case INCREMENT: return { ...state, count: state.count + 1 }; case DECREMENT: return { ...state, count: state.count - 1 }; default: return state; } }; // Store creation import { createStore } from 'redux'; const store = createStore(counterReducer); // Component integration (React example) import { useSelector, useDispatch } from 'react-redux'; function CounterComponent() { const count = useSelector(state => state.count); const dispatch = useDispatch(); return ( <div> <p>Count: {count}</p> <button onClick={() => dispatch(increment())}>Increment</button> <button onClick={() => dispatch(decrement())}>Decrement</button> </div> ); } """ ### 2.3 Context API (Scoped State) Context API provides a way to pass data through the component tree without having to pass props manually at every level. While it is simpler than Redux it is still intended for scenarios that benefit from shared state. * **Do This:** * Use Context API for theming, user authentication, or other application-wide configurations. * Use "useContext" hook to consume context values. * Combine Context API with "useReducer" for complex state logic. * **Don't Do This:** * Use Context API as a general replacement for prop drilling in scenarios where component composition is better suited. * Overuse Context API resulting in unnecessary re-renders. """javascript // Example Context API setup import React, { createContext, useContext, useState } from 'react'; // Create Context const ThemeContext = createContext(); // Context Provider function ThemeProvider({ children }) { const [theme, setTheme] = useState('light'); const toggleTheme = () => { setTheme(prevTheme => (prevTheme === 'light' ? 
'dark' : 'light')); }; return ( <ThemeContext.Provider value={{ theme, toggleTheme }}> {children} </ThemeContext.Provider> ); } // Custom Hook to consume Context function useTheme() { return useContext(ThemeContext); } // Component using Context function ThemeToggler() { const { theme, toggleTheme } = useTheme(); return ( <button onClick={toggleTheme}> Toggle Theme (Current: {theme}) </button> ); } // Usage in App function App() { return ( <ThemeProvider> <div> <ThemeToggler /> </div> </ThemeProvider> ); } """ ### 2.4 Observable Pattern (Reactive State) The observable pattern, often implemented with libraries like RxJS, is used for handling asynchronous data streams and complex event-driven applications. * **Do This:** * Use RxJS or similar libraries for handling asynchronous data streams. * Structure application logic as a pipeline of observable transformations. * Use subjects to bridge different parts of the application. * **Don't Do This:** * Overuse RxJS for simple event handling. * Introduce memory leaks by not unsubscribing from observables. * Create overly complex observable chains that are hard to understand. 
"""javascript // Example RxJS setup import { fromEvent, interval } from 'rxjs'; import { map, filter, scan, takeUntil } from 'rxjs/operators'; // Example: Click counter observable const button = document.getElementById('myButton'); const click$ = fromEvent(button, 'click'); const counter$ = click$.pipe( map(() => 1), scan((acc, value) => acc + value, 0) ); counter$.subscribe(count => { console.log("Button clicked ${count} times"); }); // Example: Auto-incrementing counter that stops after 5 seconds const interval$ = interval(1000); const stop$ = fromEvent(document.getElementById('stopButton'), 'click'); interval$.pipe( takeUntil(stop$) // Stop the interval when the stop button is clicked ).subscribe(val => console.log("Interval value: ${val}")); """ ### 2.5 State Machines State machines are useful for managing complex state transitions with clearly defined states and transitions. * **Do This:** * Use state machines for scenarios with clearly defined states and transitions. * Model state transitions explicitly, reducing possible unexpected states. * Ensure state machines are well-documented, especially for complex systems. * **Don't Do This:** * Overuse state machines for simple state management. * Create monolithic state machines that are difficult to understand. 
"""javascript // Example: JavaScript state machine using XState import { createMachine, interpret } from 'xstate'; // Define the state machine const trafficLightMachine = createMachine({ id: 'trafficLight', initial: 'green', states: { green: { after: { 5000: 'yellow' // After 5 seconds, transition to yellow } }, yellow: { after: { 1000: 'red' // After 1 second, transition to red } }, red: { after: { 6000: 'green' // After 6 seconds, transition to green } } } }); // Interpret the state machine const trafficService = interpret(trafficLightMachine).start(); trafficService.onTransition(state => { console.log("Traffic light is now ${state.value}"); }); // Example usage (simulating events or external triggers) // trafficService.send('TIMER'); """ ## 3. Implementing Immutability Immutability ensures that once an object is created, its state cannot be changed. This helps prevent accidental state mutations, making it easier to track and manage state changes, which aids in debugging and improves performance in certain scenarios. * **Do This:** * Use immutable data structures and operations. * Make copies of objects or arrays before modifying them. * Employ libraries like Immutable.js for more complex scenarios. * **Don't Do This:** * Directly modify object properties or array elements. * Assume that passing an object or array creates a new copy. 
### 3.1 JavaScript Immutability Techniques

```javascript
// Immutable object update
const originalObject = { name: 'John', age: 30 };
const updatedObject = { ...originalObject, age: 31 }; // Create a new object

// Immutable array update
const originalArray = [1, 2, 3];
const updatedArray = [...originalArray, 4]; // Create a new array
const removedArray = originalArray.filter(item => item !== 2); // Create a new array without '2'

console.log(originalObject); // { name: 'John', age: 30 }
console.log(updatedObject);  // { name: 'John', age: 31 }
console.log(originalArray);  // [1, 2, 3]
console.log(updatedArray);   // [1, 2, 3, 4]
console.log(removedArray);   // [1, 3]
```

### 3.2 Immutable.js

Immutable.js provides persistent immutable data structures, improving performance and simplifying state management for complex applications.

```javascript
import { Map, List } from 'immutable';

// Immutable Map
const originalMap = Map({ name: 'John', age: 30 });
const updatedMap = originalMap.set('age', 31);

// Immutable List
const originalList = List([1, 2, 3]);
const updatedList = originalList.push(4);

console.log(originalMap.toJS());  // { name: 'John', age: 30 }
console.log(updatedMap.toJS());   // { name: 'John', age: 31 }
console.log(originalList.toJS()); // [1, 2, 3]
console.log(updatedList.toJS());  // [1, 2, 3, 4]
```

## 4. Handling Side Effects

Side effects are operations that affect state outside of the current function or component. Properly managing side effects is crucial for maintaining predictable and testable code.

* **Do This:**
  * Isolate side effects in dedicated functions or modules.
  * Use effect hooks (e.g., `useEffect` in React) to manage side effects in components.
  * Handle errors gracefully when performing side effects.
* **Don't Do This:**
  * Perform side effects directly within reducers or pure functions.
  * Ignore potential errors in side effect operations.
### 4.1 Managing Effects with "useEffect"

"""javascript
import React, { useState, useEffect } from 'react';

function DataFetcher({ url }) {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    const fetchData = async () => {
      try {
        const response = await fetch(url);
        if (!response.ok) {
          throw new Error(`HTTP error! status: ${response.status}`);
        }
        const result = await response.json();
        setData(result);
      } catch (e) {
        setError(e);
      } finally {
        setLoading(false);
      }
    };

    fetchData();

    // Cleanup function (optional)
    return () => {
      // Cancel any pending requests or subscriptions
    };
  }, [url]); // Dependency array: effect runs only when 'url' changes

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Error: {error.message}</p>;
  if (!data) return <p>No data available.</p>;

  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
"""

### 4.2 Using Thunks with Redux

Thunks allow you to perform asynchronous operations in Redux actions.

"""javascript
// Example Redux Thunk actions
const fetchDataRequest = () => ({ type: 'FETCH_DATA_REQUEST' });
const fetchDataSuccess = (data) => ({ type: 'FETCH_DATA_SUCCESS', payload: data });
const fetchDataFailure = (error) => ({ type: 'FETCH_DATA_FAILURE', payload: error });

// Async action using Redux Thunk
const fetchData = (url) => {
  return async (dispatch) => {
    dispatch(fetchDataRequest());
    try {
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }
      const data = await response.json();
      dispatch(fetchDataSuccess(data));
    } catch (error) {
      dispatch(fetchDataFailure(error.message));
    }
  };
};

// Usage in a component
import { useDispatch } from 'react-redux';

function DataFetchButton({ url }) {
  const dispatch = useDispatch();
  return (
    <button onClick={() => dispatch(fetchData(url))}>
      Fetch Data
    </button>
  );
}
"""

## 5. Testing State Management

Testing state management involves verifying that state transitions occur correctly and that side effects are handled properly.

* **Do This:**
    * Write unit tests for reducers to verify state transitions.
    * Use mock stores and actions to test components connected to Redux.
    * Test side effects by mocking external dependencies.
* **Don't Do This:**
    * Omit testing for state management logic.
    * Write integration tests without proper unit testing.

### 5.1 Testing Reducers

"""javascript
// Reducer Test Example (Jest)
import counterReducer from './counterReducer'; // Assuming counterReducer.js
import { INCREMENT, DECREMENT } from './actions';

describe('counterReducer', () => {
  it('should return the initial state', () => {
    expect(counterReducer(undefined, {})).toEqual({ count: 0 });
  });

  it('should handle INCREMENT', () => {
    expect(counterReducer({ count: 0 }, { type: INCREMENT })).toEqual({ count: 1 });
  });

  it('should handle DECREMENT', () => {
    expect(counterReducer({ count: 1 }, { type: DECREMENT })).toEqual({ count: 0 });
  });
});
"""

### 5.2 Testing React Components with Redux

"""javascript
// Component Test Example (React Testing Library and Redux Mock Store)
import React from 'react';
import { render, fireEvent } from '@testing-library/react';
import { Provider } from 'react-redux';
import configureStore from 'redux-mock-store';
import CounterComponent from './CounterComponent'; // Assuming CounterComponent.js

const mockStore = configureStore([]);

describe('CounterComponent', () => {
  let store;
  let component;

  beforeEach(() => {
    store = mockStore({ count: 0 });
    store.dispatch = jest.fn(); // Mock dispatch function
    component = render(
      <Provider store={store}>
        <CounterComponent />
      </Provider>
    );
  });

  it('should display the initial count', () => {
    expect(component.getByText('Count: 0')).toBeInTheDocument();
  });

  it('should dispatch increment action when increment button is clicked', () => {
    fireEvent.click(component.getByText('Increment'));
    expect(store.dispatch).toHaveBeenCalledWith({ type: 'INCREMENT' });
  });
});
"""

## 6. Security Considerations for State Management

Security is a critical aspect of state management. Properly securing the state ensures that sensitive data is protected from unauthorized access and tampering.

* **Do This:**
    * Protect sensitive data in the state with encryption.
    * Validate data received from external sources before storing it in the state.
    * Sanitize user input to prevent XSS.
* **Don't Do This:**
    * Store sensitive data in plain text in the state.
    * Trust data received from external sources without validation.
    * Expose sensitive data in logs or error messages.

### 6.1 Data Validation

"""javascript
// Example Data Validation
const validateData = (data) => {
  if (typeof data.email !== 'string' || !data.email.includes('@')) {
    throw new Error('Invalid email format');
  }
  if (typeof data.age !== 'number' || data.age < 0 || data.age > 120) {
    throw new Error('Invalid age');
  }
  return data;
};

// Usage in a reducer
const userReducer = (state = {}, action) => {
  switch (action.type) {
    case 'UPDATE_USER':
      try {
        const validatedData = validateData(action.payload);
        return { ...state, ...validatedData };
      } catch (error) {
        console.error('Data validation failed:', error.message);
        return state;
      }
    default:
      return state;
  }
};
"""

### 6.2 Encryption

Encrypting sensitive data ensures that even if the state is compromised, the data remains unreadable without the decryption key.
"""javascript // Example Encryption (using CryptoJS) import CryptoJS from 'crypto-js'; const encryptData = (data, key) => { const encrypted = CryptoJS.AES.encrypt(JSON.stringify(data), key).toString(); return encrypted; }; const decryptData = (encryptedData, key) => { const bytes = CryptoJS.AES.decrypt(encryptedData, key); try { const decrypted = JSON.parse(bytes.toString(CryptoJS.enc.Utf8)); return decrypted; } catch (e) { console.error("Decryption error", e); return null; // Or handle the error as appropriate } }; // Example usage const sensitiveData = { creditCardNumber: '1234-5678-9012-3456' }; const encryptionKey = 'my-secret-key'; const encryptedData = encryptData(sensitiveData, encryptionKey); console.log('Encrypted:', encryptedData); const decryptedData = decryptData(encryptedData, encryptionKey); console.log('Decrypted:', decryptedData); """ ## 7. Optimizing Performance Efficient state management is crucial for optimizing application performance, especially in complex applications with frequent state updates. * **Do This:** * Use memoization techniques to prevent unnecessary re-renders. * Implement lazy loading for components that rely on large state objects. * Batch state updates to minimize the number of renders. * **Don't Do This:** * Update the state unnecessarily. * Cause components to re-render frequently with negligible impact. ### 7.1 Memoization Memoization prevents re-renders by caching the results of expensive calculations or component renders. 
"""javascript import React, { useState, useMemo } from 'react'; function ExpensiveComponent({ data }) { // Simulate an expensive computation const computedValue = useMemo(() => { console.log('Computing expensive value...'); // Complex calculation based on data return data.map(item => item * 2).reduce((acc, val) => acc + val, 0); }, [data]); // Only recompute if 'data' changes return ( <div> <p>Computed Value: {computedValue}</p> </div> ); } function ParentComponent() { const [count, setCount] = useState(0); const data = [1, 2, 3, 4, 5]; // Static data return ( <div> <button onClick={() => setCount(count + 1)}>Increment Count</button> <p>Count: {count}</p> {/*ExpensiveComponent only re-renders if "data" changes, not on count changes*/} <ExpensiveComponent data={data} /> </div> ); } function MemoizedComponent({ data }) { // Simulate a render-heavy component console.log('Rendering MemoizedComponent...'); return <p>Data: {data.join(', ')}</p>; } // Memoize MemoizedComponent to prevent unnecessary re-renders const OptimizedMemoizedComponent = React.memo(MemoizedComponent); function ParentMemoComponent() { const [count, setCount] = useState(0); const data = [1, 2, 3, 4, 5]; return ( <div> <button onClick={() => setCount(count + 1)}>Increment Count</button> <p>Count: {count}</p> {/* MemoizedComponent only re-renders if its props change, not on count changes */} <OptimizedMemoizedComponent data={data} /> </div> ); } """ ### 7.2 Batching Updates Batching updates ensures that multiple state updates are grouped into a single render cycle. 
"""javascript import React, { useState } from 'react'; import { unstable_batchedUpdates } from 'react-dom'; // Available only in some React versions function BatchUpdatesComponent() { const [count1, setCount1] = useState(0); const [count2, setCount2] = useState(0); const updateBothCounts = () => { unstable_batchedUpdates(() => { // Both state updates are batched into a single render setCount1(prevCount => prevCount + 1); setCount2(prevCount => prevCount + 1); }); }; return ( <div> <p>Count 1: {count1}</p> <p>Count 2: {count2}</p> <button onClick={updateBothCounts}>Update Both Counts</button> </div> ); } """ These standards provide a comprehensive guide to managing state in a clean and maintainable way. By following these guidelines, developers can build robust, performant, and secure applications.
# API Integration Standards for Clean Code

This document outlines coding standards and best practices for integrating with backend services and external APIs within the Clean Code framework. It emphasizes readability, maintainability, performance, and security. These standards are aimed at guiding developers and AI coding assistants in producing high-quality, robust, and scalable integrations.

## 1. API Integration Principles and Clean Code

API integration, when approached with Clean Code principles in mind, becomes significantly more manageable and less prone to errors. This section explores the core principles and their applications in the context of API interactions.

* **Single Responsibility Principle (SRP):** A class or module should have one, and only one, reason to change. For API integration, this means separating the API client logic (responsible for making requests and handling responses) from the business logic that uses the data.
    * **Do This:** Create dedicated classes or modules for interacting with specific APIs, encapsulating all the API-related logic within them.
    * **Don't Do This:** Mix API calls directly within business logic classes or functions. This makes testing and maintenance difficult.
    * **Why:** SRP ensures that a change in the API (e.g., endpoint change, data format update) only requires modification in the API client module, not the entire application.
* **Open/Closed Principle (OCP):** Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. This principle applies to API integration by allowing new API features or versions to be adopted without modifying existing code that uses the API.
    * **Do This:** Use abstract classes or interfaces to define a contract for API clients. Implementations can then be created for different API versions or services. Utilize design patterns such as Strategy or Template Method.
    * **Don't Do This:** Directly modify existing API client code to accommodate new API features.
    * **Why:** OCP ensures that changes to the API don't introduce regressions in existing functionality.
* **Liskov Substitution Principle (LSP):** Subtypes must be substitutable for their base types without altering the correctness of the program. This is relevant when using polymorphism with API clients.
    * **Do This:** Ensure that any derived API client classes adhere to the contract defined by the base class or interface. If a method implemented in a subclass modifies behavior in an unexpected way, it violates LSP.
    * **Don't Do This:** Create API client subclasses that fundamentally change the behavior of the base class's methods.
    * **Why:** LSP ensures that you can replace one API client implementation with another without causing unexpected errors.
* **Interface Segregation Principle (ISP):** Clients should not be forced to depend on methods they do not use. In the API realm, this translates to creating specific interfaces for different API functionalities, catering to the needs of different parts of the application.
    * **Do This:** Define multiple, smaller interfaces tailored to specific use cases, rather than a single large interface for the entire API.
    * **Don't Do This:** Force clients to implement methods they don't need, leading to bloated and confusing implementations.
    * **Why:** ISP promotes loose coupling and reduces the risk of unintended side effects when API contracts change.
* **Dependency Inversion Principle (DIP):** High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. In API integration, this means that business logic should depend on interfaces for API clients, not on concrete implementations. This allows easy swapping of implementations for testing, changing providers, or other needs.
    * **Do This:** Inject API client interfaces into classes that need to consume the API.
    * **Don't Do This:** Directly instantiate API client classes within business logic components.
    * **Why:** DIP promotes loose coupling, making testing easier and allowing you to switch API providers without impacting the rest of your system. Dependency Injection (DI) frameworks are invaluable here.

## 2. Connecting with Backend Services and External APIs

This section covers the practical aspects of connecting to APIs, including error handling, data transformation, and authentication.

### 2.1 Selecting an HTTP Client

* **Standard:** Use a robust and well-maintained HTTP client library. Consider "aiohttp" (async) or "requests" (sync) for Python. For JavaScript, consider "axios" or the native "fetch" API.
* **Do This:** Choose a library that supports features like connection pooling, automatic retries, timeouts, request/response interceptors, and proper TLS/SSL verification.
* **Don't Do This:** Write your own HTTP client or use a rudimentary library that lacks essential features.
* **Why:** Using a mature HTTP client library simplifies development and reduces the risk of introducing bugs or security vulnerabilities.

"""python
import requests

def get_data_from_api(url):
    try:
        response = requests.get(url, timeout=10)  # added timeout
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error communicating with API: {e}")
        return None

api_url = "https://example.com/api/data"
data = get_data_from_api(api_url)
if data:
    print(data)
"""

### 2.2 Error Handling

* **Standard:** Implement robust error handling to gracefully handle API failures. Use "try-except" blocks, check response codes, and log errors appropriately. Implement retry mechanisms with exponential backoff for transient errors.
* **Do This:** Wrap API calls in "try-except" blocks to catch potential exceptions (e.g., network errors, timeouts, invalid responses). Use "response.raise_for_status()" to check for HTTP errors, paying special attention to rate limiting. Log the complete error message, request URL, and any relevant context.
* **Don't Do This:** Ignore errors or simply print error messages without proper logging and handling.
* **Why:** Proper error handling prevents application crashes, provides valuable debugging information, and ensures a better user experience.

"""python
import requests
import time
import logging

logging.basicConfig(level=logging.INFO)

def get_data_from_api(url, max_retries=3, backoff_factor=2):  # retry mechanism
    retries = 0
    while retries < max_retries:
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            logging.error(f"Attempt {retries + 1} failed: {e}")
            retries += 1
            time.sleep(backoff_factor ** retries)  # Exponential backoff
    logging.error(f"Failed to retrieve data from {url} after {max_retries} attempts")
    return None
"""

### 2.3 Data Transformation

* **Standard:** Decouple the API's data format from your application's data model. Use data transfer objects (DTOs) or similar mechanisms to map the API response to your internal representation.
* **Do This:** Create dedicated classes or functions to transform API responses into your application's data structures. Centralize this mapping logic to simplify changes. Use validation libraries like Pydantic (for Python) for structural validation and automated type conversion.
* **Don't Do This:** Directly use the API's data format throughout your application. This tightly couples your code to the API and makes it difficult to adapt to changes.
* **Why:** Data transformation ensures that your application remains independent of the specific API's data format, improving maintainability and flexibility.
"""python from pydantic import BaseModel import requests class User(BaseModel): # pydantic model id: int name: str email: str def get_user_from_api(user_id: int) -> User | None: url = f"https://example.com/api/users/{user_id}" try: response = requests.get(url) response.raise_for_status() user_data = response.json() return User(**user_data) # Validate and convert using Pydantic except requests.exceptions.RequestException as e: print(f"Error: {e}") return None user = get_user_from_api(1) if user: print(f"User name: {user.name}") """ ### 2.4 Authentication and Authorization * **Standard:** Implement secure authentication and authorization mechanisms according to the API's requirements (API Keys, OAuth 2.0, JWT). Store secrets securely using environment variables or dedicated secret management tools. * **Do This:** Use a dedicated library for handling authentication protocols. Store API keys and secrets securely (e.g., using environment variables or a vault). Properly handle token refresh flows in OAuth 2.0 when using access tokens with limited lifetimes. Use HTTPS for all API communication. * **Don't Do This:** Hardcode API keys or secrets in your code. Skip SSL/TLS verification. Store cryptographic keys anywhere in your source repository. * **Why:** Secure authentication and authorization protect your application and the API from unauthorized access. """python import os import requests API_KEY = os.environ.get("MY_API_KEY") # Store in environment variable def call_api_with_auth(url): headers = {"Authorization": f"Bearer {API_KEY}"} # authorization header response = requests.get(url, headers=headers) response.raise_for_status() return response.json() """ ### 2.5 Rate Limiting * **Standard:** Understand and respect the API's rate limits. Implement mechanisms to avoid exceeding these limits, such as caching, throttling, and exponential backoff with jitter. * **Do This:** Check the API's documentation for rate limits. 
Implement a throttling mechanism to control the number of requests per unit of time. Cache API responses when appropriate (especially for frequently accessed data that doesn't change often). Handle "429 Too Many Requests" errors gracefully using exponential backoff and jitter.
* **Don't Do This:** Ignore rate limits and bombard the API with requests. This can lead to temporary or permanent blocking.
* **Why:** Respecting rate limits ensures fair usage of the API and prevents your application from being blocked.

"""python
import time
import requests
import logging
import random

logging.basicConfig(level=logging.INFO)

def call_api_with_rate_limiting(url, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 429 and attempt < max_retries - 1:
                retry_after = int(e.response.headers.get("Retry-After", 60))  # get from header
                jitter = random.uniform(0, 1)  # add jitter to avoid synchronized retries
                wait_time = retry_after + jitter
                logging.warning(f"Rate limit exceeded. Waiting {wait_time:.2f} seconds.")
                time.sleep(wait_time)
            else:
                raise  # Re-raise for other errors, or after the final attempt
    return None
"""

## 3. Design Patterns for API Integration

Several design patterns can help improve the structure and maintainability of API integration code.

### 3.1 Facade Pattern

* **Standard:** Use a facade to provide a simplified interface to a complex API. This hides the underlying complexity and makes it easier for clients to use the API.
* **Do This:** Create a facade class that encapsulates the API client and provides a simple, high-level interface for common operations.
* **Don't Do This:** Expose the raw API client directly to clients. This exposes unnecessary complexity and makes it harder to adapt to API changes.
* **Why:** The Facade pattern shields calling code from the complexities of direct API interaction.
"""python import requests class WeatherAPIClient: BASE_URL = "https://api.weatherapi.com/v1" def __init__(self, api_key): self.api_key = api_key def get_current_weather(self, city): url = f"{self.BASE_URL}/current.json?key={self.api_key}&q={city}" response = requests.get(url) response.raise_for_status() return response.json() class WeatherFacade: def __init__(self, api_key): self.api_client = WeatherAPIClient(api_key) # encapsulates API def get_temperature(self, city): data = self.api_client.get_current_weather(city) return data["current"]["temp_c"] # simplified interface # Usage: weather_facade = WeatherFacade("YOUR_API_KEY") temperature = weather_facade.get_temperature("London") print(f"Temperature in London: {temperature}°C") """ ### 3.2 Adapter Pattern * **Standard:** Use an adapter to convert the interface of an API client class into another interface that clients expect. This is useful when integrating with APIs that have different interfaces. * **Do This:** Create an adapter class that implements the desired interface and delegates calls to the API client. Use this pattern to normalize interfaces from different API endpoint calls to make them consistent for calling code. * **Don't Do This:** Modify the API client class directly to fit the desired interface (violates OCP). * **Why:** The Adapter pattern allows integrating disparate API interfaces or data models into a common format. 
"""python class OldAPI: def fetch_data(self): return {"old_data": "value"} class NewAPIInterface: def get_data(self): raise NotImplementedError class OldAPIToNewAPIAdapter(NewAPIInterface): # Adapter class def __init__(self, old_api): self.old_api = old_api def get_data(self): old_data = self.old_api.fetch_data() return {"new_data": old_data["old_data"]} old_api = OldAPI() adapter = OldAPIToNewAPIAdapter(old_api) new_data = adapter.get_data() print(new_data) # Output: {'new_data': 'value'} """ ### 3.3 Strategy Pattern * **Standard:** If you have multiple ways to call an API (different authentication methods, different endpoints for the same functionality, etc.) use the Strategy pattern to encapsulate each approach into a separate strategy class. This allows you to easily switch between strategies at runtime. * **Do This:** Define a common interface for all strategies. Create concrete strategy classes for each approach. Inject the desired strategy into the class that needs to call the API. * **Don't Do This:** Use conditional statements to switch between different approaches. This makes the code harder to read and maintain. * **Why:** Provides implementation flexibility and facilitates switching strategies without modifying the core logic. 
"""python import requests class AuthStrategy: def apply_auth(self, request): raise NotImplementedError() class APIKeyAuth(AuthStrategy): def __init__(self, api_key): self.api_key = api_key def apply_auth(self, request): request.headers["X-API-Key"] = self.api_key return request class OAuth2Auth(AuthStrategy): def __init__(self, token): self.token = token def apply_auth(self, request): request.headers["Authorization"] = f"Bearer {self.token}" return request class APIClient: def __init__(self, auth_strategy: AuthStrategy): self.auth_strategy = auth_strategy def get(self, url): request = requests.Request("GET", url) prepared_request = self.auth_strategy.apply_auth(request) # apply strategy session = requests.Session() response = session.send(prepared_request.prepare()) response.raise_for_status() return response.json() # Usage: api_key_auth = APIKeyAuth("your_api_key") api_client_api_key = APIClient(api_key_auth) data = api_client_api_key.get("https://example.com/api/data") print(data) oauth2_auth = OAuth2Auth("your_oauth_token") api_client_oauth = APIClient(oauth2_auth) data = api_client_oauth.get("https://example.com/api/data") print(data) """ ## 4. Technology-Specific Considerations ### 4.1 Python * **Asyncio:** Utilize "asyncio" and "aiohttp" for asynchronous API calls to improve concurrency and performance in I/O-bound applications. * **Type Hints:** Use type hints extensively to improve code readability and catch type-related errors early. * **Pydantic:** Use Pydantic for data validation and serialization/deserialization of API requests and responses. * **Requests Library:** Using "requests" may block synchronous code so consider using threading to avoid impacting performance or using Asyncio libraries. ### 4.2 JavaScript * **Fetch API/Axios:** Use "fetch" (native) or "axios" (library) for making HTTP requests. Axios is typically preferred for providing additional error handling and legacy browser support. 
* **Async/Await:** Leverage "async/await" syntax for asynchronous API calls to improve code readability and maintainability.
* **TypeScript:** Enable TypeScript support to statically type-check code during development, ensuring that API requests are correctly constructed and API responses are properly handled.
* **Node.js:** Utilize Node.js to process large numbers of asynchronous requests.

## 5. Testing API Integrations

* **Standard:** Thoroughly test API integrations to ensure correctness, reliability, and performance. Use mock APIs or stubs to isolate your code during testing.
* **Do This:** Write unit tests for API client classes, mocking the HTTP client to control the API responses. Use integration tests to verify the end-to-end flow, including actual API calls to a test environment. Consider contract testing to ensure that your API client adheres to the API's contract.
* **Don't Do This:** Skip testing API integrations or rely solely on manual testing. This can lead to unexpected errors and regressions.
* **Why:** Testing API integrations ensures that your application works correctly with the API and protects against changes in the API.
"""python import unittest from unittest.mock import patch import requests class MockResponse: def __init__(self, json_data, status_code): self.json_data = json_data self.status_code = status_code def json(self): return self.json_data def raise_for_status(self): if self.status_code >= 400: raise requests.exceptions.HTTPError("Error") class APITest(unittest.TestCase): @patch('requests.get') # Mock the requests.get method def test_get_data_from_api_success(self, mock_get): mock_response = MockResponse({"key": "value"}, 200) mock_get.return_value = mock_response from your_module import get_data_from_api # Import your function here data = get_data_from_api("http://example.com/api") self.assertEqual(data, {"key": "value"}) @patch('requests.get') def test_get_data_from_api_failure(self, mock_get): mock_response = MockResponse(None, 500) mock_response.raise_for_status = unittest.mock.Mock(side_effect=requests.exceptions.HTTPError("Error")) mock_get.return_value = mock_response from your_module import get_data_from_api data = get_data_from_api("http://example.com/api") self.assertIsNone(data) """ ## 6. Documentation * **Standard:** Document all API integrations, including the API's purpose, authentication methods, request/response formats, and error handling strategies. * **Do This:** Use docstrings to document API client classes and methods. Create separate documents or wikis to describe the API integration in more detail. Use tooling like Swagger or OpenAPI to document APIs and their integration points. * **Don't Do This:** Leave API integrations undocumented. This makes it difficult for others (and yourself in the future) to understand and maintain the code. * **Why:** Clear documentation is essential for maintainability and collaboration. ## 7. Security Considerations * **Standard:** Prioritize security when integrating with APIs. Enforce HTTPS, validate input, sanitize output, and protect against common web vulnerabilities. 
* **Do This:** Always use HTTPS for API communication. Validate all input data to prevent injection attacks. Sanitize all output data to prevent cross-site scripting (XSS) attacks. Follow the principle of least privilege when configuring API access controls. Protect against CSRF attacks.
* **Don't Do This:** Store sensitive data in plain text. Trust user input without validation. Disable security features.
* **Why:** Security vulnerabilities can expose sensitive data and compromise your application.

By adhering to these standards, developers can ensure that API integrations are clean, maintainable, performant, and secure, aligning with the core principles of Clean Code.
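The input-validation and output-sanitization bullets in section 7 can be sketched with Python's standard library. The helper names and the allow-list pattern below are illustrative assumptions, not prescribed APIs:

"""python
import html
import re

def validate_username(username: str) -> str:
    # Allow-list validation: accept only a known-safe pattern, rather than
    # trying to enumerate dangerous characters.
    if not re.fullmatch(r"[A-Za-z0-9_]{3,30}", username):
        raise ValueError("Invalid username")
    return username

def render_comment(comment: str) -> str:
    # Escape API-sourced text before embedding it in HTML, so script tags
    # arrive inert in the browser (basic XSS mitigation).
    return f"<p>{html.escape(comment)}</p>"

print(validate_username("clean_coder_42"))
print(render_comment('<script>alert("x")</script>'))
# The escaped output contains &lt;script&gt; rather than an executable tag.
"""

Validation belongs at the boundary where data enters the system; escaping belongs at the boundary where data leaves it, in the encoding of the output context.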