# State Management Standards for Monorepo
This document outlines the standards for state management within our monorepo. Effective state management is crucial for maintainability, performance, and scalability across our applications and libraries. These standards aim to provide a consistent approach to handling application state, data flow, and reactivity within the monorepo.
## 1. Principles of State Management in a Monorepo
A monorepo architecture introduces unique challenges and opportunities regarding state management. Due to code sharing and potential inter-dependencies between projects, a unified and well-defined state management strategy becomes paramount.
* **Standard:** Utilize a predictable and unidirectional data flow.
* **Why:** Ensures that changes to state are traceable and debuggable, preventing unintended side effects across the monorepo.
* **Do This:** Favor architectures like Flux, Redux, or their modern counterparts with clear data flow patterns.
* **Don't Do This:** Avoid directly mutating state across different components or services without a defined flow.
* **Standard:** Favor immutable data structures.
* **Why:** Simplifies debugging, allows for easy change detection, and improves performance by enabling shallow comparisons.
* **Do This:** Use libraries like Immutable.js, Immer, or native JavaScript with spread operators to create new, immutable state objects.
* **Don't Do This:** Directly modify state objects, as this can lead to unpredictable behavior and difficult-to-trace bugs.
* **Standard:** Separate stateful logic from presentational components.
* **Why:** Enhances reusability, testability, and maintainability by isolating state-specific code.
* **Do This:** Implement the Container/Presentational pattern or use hooks to separate data fetching and state manipulation from UI rendering.
* **Don't Do This:** Embed complex state logic directly within UI components.
* **Standard:** Define clear boundaries for state domains.
* **Why:** Prevents components and services from accidentally modifying state that they shouldn't have access to.
* **Do This:** Use techniques like context providers or scoped state management solutions to isolate state to specific parts of the application.
* **Don't Do This:** Allow global, shared state to be modified from anywhere in the codebase without clear ownership or access controls.
* **Standard:** Handle side effects carefully.
* **Why:** Side effects (API calls, DOM manipulations, etc.) can introduce complexity and make state updates less predictable.
* **Do This:** Isolate side effects within dedicated modules or using middleware/thunks in state management libraries.
* **Don't Do This:** Perform side effects directly within reducers or component render functions.
## 2. Choosing a State Management Library
Selecting the right state management library is critical. The choice depends on the project's complexity, team familiarity, and performance requirements. The monorepo should adopt a limited set of preferred libraries to promote consistency.
* **Preferred Libraries:** For React-based applications, consider Zustand, Recoil, Jotai, or Redux Toolkit. For Vue-based applications, consider Pinia or Vuex. (These are leading contenders as of late 2024/early 2025.)
* **Zustand:** A small, fast, and scalable bearbones state-management solution using simplified flux principles.
* **Recoil:** A state management library for React that lets you create data-flow graphs. Particularly suited to complex dependencies. Can require more boilerplate than Zustand.
* **Jotai:** Primitive and flexible state management based on an atomic model.
* **Redux Toolkit:** An opinionated, batteries-included toolset for efficient Redux development, simplifying configuration and reducing boilerplate. Often combined now with RTK Query for data fetching.
* **Pinia:** The recommended state management solution for Vue 3, offering a simpler and more intuitive API compared to Vuex.
* **Vuex:** The official state management library for Vue, suitable for complex applications requiring centralized state management.
* **Standard:** Justify the choice of state management library in the project's README.
* **Why:** Provides context for other developers and helps maintain consistency across the monorepo.
* **Do This:** Document the reasons for selecting a specific library, considering factors like team expertise, project complexity, and performance requirements.
* **Don't Do This:** Choose a library arbitrarily without properly evaluating its suitability for the project.
## 3. Zustand State Management Examples
Zustand is a minimalist and flexible state management solution suitable for many projects within a monorepo.
### 3.1 Core Implementation
* **Standard:** Create a store using "create" from Zustand.
* **Standard:** Define state and actions within the store function.
"""javascript
// packages/my-app/src/store/myStore.js
import { create } from 'zustand';
const useMyStore = create((set) => ({
count: 0,
increment: () => set((state) => ({ count: state.count + 1 })),
decrement: () => set((state) => ({ count: state.count - 1 })),
reset: () => set({ count: 0 }),
// Example with async action
fetchData: async () => {
const response = await fetch('/api/data'); // Replace with real API endpoint
const data = await response.json();
set({ data: data }); // Assumes you add "data" to the initial state.
},
}));
export default useMyStore;
"""
* **Why:** Provides a simple and efficient way to manage state using hooks.
* **Do This:** Use functional updates to ensure immutability.
* **Don't Do This:** Mutate the state directly.
### 3.2 Using the Store in Components
* **Standard:** Use the custom hook "useMyStore" to access state and actions within components.
"""javascript
// packages/my-app/src/components/MyComponent.js
import React from 'react';
import useMyStore from '../store/myStore';
function MyComponent() {
const { count, increment, decrement, reset, fetchData } = useMyStore();
return (
<p>Count: {count}</p>
Increment
Decrement
Reset
Fetch Data
);
}
export default MyComponent;
"""
* **Why:** Simplifies component logic and promotes reusability.
### 3.3. Middleware and Persistence
* Zustand uses middleware for advanced functionality like persistence.
"""javascript
// packages/my-app/src/store/myStore.js
import { create } from 'zustand';
import { persist } from 'zustand/middleware'
const useMyStore = create(persist(
(set, get) => ({
count: 0,
increment: () => set({ count: get().count + 1 }),
decrement: () => set({ count: get().count - 1 }),
}),
{
name: 'my-store', // unique name
getStorage: () => localStorage, // (optional) default localStorage
}
))
export default useMyStore;
"""
* The "persist" middleware automatically saves the state to local storage.
* **Why:** Enables easy persistence of state across sessions.
## 4. Recoil State Management Examples
Recoil offers a different approach based on atoms and selectors, suitable for complex dependency graphs.
### 4.1 Core Implementation
* **Standard:** Define atoms for state and selectors for derived state.
"""javascript
// packages/my-app/src/recoil/atoms.js
import { atom } from 'recoil';
export const countState = atom({
key: 'countState',
default: 0,
});
// packages/my-app/src/recoil/selectors.js
import { selector } from 'recoil';
import { countState } from './atoms';
export const doubledCountState = selector({
key: 'doubledCountState',
get: ({ get }) => {
const count = get(countState);
return count * 2;
},
});
"""
* **Why:** Provides a flexible and efficient way to manage complex state dependencies.
* **Do This:** Use unique keys for atoms and selectors.
* **Don't Do This:** Use generic keys that might conflict with other parts of the application.
### 4.2 Using Recoil in Components
* **Standard:** Use "useRecoilState" and "useRecoilValue" hooks to access Recoil state and derived values.
"""javascript
// packages/my-app/src/components/MyComponent.js
import React from 'react';
import { useRecoilState, useRecoilValue } from 'recoil';
import { countState, doubledCountState } from '../recoil/atoms';
function MyComponent() {
const [count, setCount] = useRecoilState(countState);
const doubledCount = useRecoilValue(doubledCountState);
return (
<p>Count: {count}</p>
<p>Doubled Count: {doubledCount}</p>
setCount(count + 1)}>Increment
);
}
export default MyComponent;
"""
* **Why:** Simplifies component logic and promotes reusability.
### 4.3 Asynchronous Selectors for Data Fetching
Recoil excels with asynchronous data fetching.
"""javascript
import { selector } from 'recoil';
export const asyncDataState = selector({
key: 'asyncDataState',
get: async () => {
const response = await fetch('/api/data'); // Replace with a real API endpoint
const data = await response.json();
return data;
},
});
"""
* "useRecoilValue" is used to access the data in components.
## 5. Redux Toolkit Examples
Redux Toolkit simplifies Redux development with opinionated defaults and utility functions. RTK Query is the recommended approach to data fetching with Redux.
### 5.1 Core Implementation
* **Standard:** Configure a Redux store using "configureStore" from Redux Toolkit.
* **Standard:** Define reducers using "createSlice".
"""javascript
// packages/my-app/src/store/store.js
import { configureStore } from '@reduxjs/toolkit';
import counterReducer from './counterSlice';
export const store = configureStore({
reducer: {
counter: counterReducer,
},
});
// packages/my-app/src/store/counterSlice.js
import { createSlice } from '@reduxjs/toolkit';
export const counterSlice = createSlice({
name: 'counter',
initialState: {
value: 0,
},
reducers: {
increment: (state) => {
state.value += 1;
},
decrement: (state) => {
state.value -= 1;
},
incrementByAmount: (state, action) => {
state.value += action.payload;
},
},
});
export const { increment, decrement, incrementByAmount } = counterSlice.actions;
export default counterSlice.reducer;
"""
* **Why:** Provides a simplified and efficient way to manage Redux state.
* **Do This:** Use "createSlice" to automatically generate action creators and reducer logic.
* **Don't Do This:** Write manual action creators and reducers, as this can lead to boilerplate and errors.
### 5.2 Using Redux in Components
* **Standard:** Use "useSelector" and "useDispatch" hooks from "react-redux" to access state and dispatch actions within components.
"""javascript
// packages/my-app/src/components/MyComponent.js
import React from 'react';
import { useSelector, useDispatch } from 'react-redux';
import { increment, decrement, incrementByAmount } from '../store/counterSlice';
function MyComponent() {
const count = useSelector((state) => state.counter.value);
const dispatch = useDispatch();
return (
<p>Count: {count}</p>
dispatch(increment())}>Increment
dispatch(decrement())}>Decrement
dispatch(incrementByAmount(5))}>Increment by 5
);
}
export default MyComponent;
"""
* **Why:** Simplifies component logic and promotes reusability.
### 5.3 RTK Query for Data Fetching
RTK Query simplifies data fetching in Redux applications.
"""javascript
// packages/my-app/src/services/api.js
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'
export const api = createApi({
baseQuery: fetchBaseQuery({ baseUrl: '/' }), // Adjust base URL as needed. Consider using env vars.
endpoints: (builder) => ({
getData: builder.query({
query: () => "data", // Actual endpoint
}),
}),
});
export const { useGetDataQuery } = api;
// In store.js:
import { configureStore } from '@reduxjs/toolkit';
import { api } from './services/api';
export const store = configureStore({
reducer: {
[api.reducerPath]: api.reducer,
},
middleware: (getDefaultMiddleware) =>
getDefaultMiddleware().concat(api.middleware),
});
//In a component:
import { useGetDataQuery } from '../services/api';
function MyComponent() {
const { data, error, isLoading } = useGetDataQuery();
if (isLoading) return Loading...;
if (error) return Error: {error.message};
return (
{data.map(item => (
{item.name}
))}
);
}
"""
* **Why:** Provides a streamlined and efficient way to fetch and cache data using Redux.
* **Do This:** Define API endpoints using "createApi".
* **Don't Do This:** Manually fetch data and manage loading states and errors, as RTK Query handles this automatically.
## 6. Vue.js State Management with Pinia
Pinia is the recommended state management solution for Vue 3.
### 6.1 Core Implementation
* **Standard**: Define stores using "defineStore" from Pinia.
"""javascript
// packages/my-app/src/stores/counter.js
import { defineStore } from 'pinia'
export const useCounterStore = defineStore('counter', {
state: () => ({
count: 0,
}),
getters: {
doubleCount: (state) => state.count * 2,
},
actions: {
increment() {
this.count++
},
decrement() {
this.count--
},
async fetchData() {
// Example of making an API call, adapt to your needs
const response = await fetch('/api/data')
const data = await response.json()
// Assign the fetched data to a state variable
this.count = data.count; // Adapt based on actual returned data
}
},
})
"""
* **Why**: Provides a modular and scalable approach to managing state in Vue.js applications.
* **Do This**: Utilize actions for mutations and getters for derived data. Avoid directly mutating outside of actions.
* **Don't Do This**: Use "mapState", "mapGetters", and "mapActions" (Vuex syntax) in Pinia. Use the "use" composable hook instead.
### 6.2 Using Pinia in Components
* **Standard**: Use the "useCounterStore" custom hook to access state, getters, and actions within components via the composable "use" pattern.
"""vue
// packages/my-app/src/components/MyComponent.vue
"""
* **Why**: Provides a clear way to access store properties directly in the template and simplifies component logic. The "setup" script handles all state management.
## 7. Guidelines for Sharing State Across Packages
Sharing state across packages within the monorepo needs careful consideration.
* **Standard:** Avoid sharing mutable state directly between packages.
* **Why:** Can lead to tight coupling and difficult-to-debug issues.
* **Do This:** Use events, messages, or shared APIs to communicate state changes between packages.
* **Don't Do This:** Directly import and modify state from one package into another.
* **Standard:** Define shared state contracts using TypeScript interfaces.
* **Why:** Ensures that state is transferred consistently and predictably between packages.
* **Do This:** Create a shared "types" package to define interfaces for state objects.
* **Don't Do This:** Use dynamic or untyped data structures for shared state.
* **Standard:** Consider using a shared state management solution if multiple packages need to access the same state.
* **Why:** Provides a centralized and consistent way to manage shared state.
* **Do This:** Use a shared Redux store, Zustand store, or Recoil graph if necessary.
## 8. Testing State Management
Testing state management logic is critical for ensuring application correctness.
* **Standard:** Write unit tests for reducers, actions, and selectors.
* **Why:** Ensures that state updates are predictable and correct.
* **Do This:** Use testing libraries like Jest or Mocha to write unit tests.
* **Don't Do This:** Skip testing state management logic, as this can lead to subtle bugs.
* **Standard:** Write integration tests for components that interact with state.
* **Why:** Ensures that components correctly dispatch actions and render state.
* **Do This:** Use testing libraries like React Testing Library or Vue Test Utils to write integration tests.
* **Don't Do This:** Rely solely on manual testing to verify state management.
* **Standard:** Mock API calls when testing state management logic.
* **Why:** Prevents tests from depending on external services and makes them more reliable.
* **Do This:** Use mocking libraries like Mock Service Worker (MSW) or Nock to intercept and mock API calls.
## 9. Anti-Patterns and Mistakes to Avoid
* **Over-reliance on Global State:** Avoid storing purely local component state in the global state management solution. Performance will suffer.
* **Direct State Mutation:** Always ensure immutability.
* **Ignoring Asynchronous Actions:** Handle async operations correctly, especially API calls. Use RTK Query, thunks, or comparable patterns.
* **Lack of Testing:** State management logic is often complex and requires thorough testing.
* **Unnecessary Complexity:** Choose the simplest state management solution that meets the project's needs. Don't automatically reach for Redux when Zustand will do.
* **Tight Coupling:** Avoid creating tight dependencies between components and the state management implementation.
* **Neglecting Performance:** Be aware of performance implications, especially when dealing with large state objects.
* **Not Using Typescript:** Typescript can save lots of problems when refactoring and understanding the data structures across the monorepo. Use it!
* **Magic strings**: Use constants instead of strings for action types and other related items.
By adhering to these standards, we can ensure a consistent, maintainable, and scalable approach to state management across our monorepo. This document should be used as a reference for all development teams and integrated into code review processes. Continuously updating these standards as the ecosystem evolves is crucial for maintaining high-quality code.
danielsogl
Created Mar 6, 2025
This guide explains how to effectively use .clinerules
with Cline, the AI-powered coding assistant.
The .clinerules
file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.
Place the .clinerules
file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
# Project Overview project: name: 'Your Project Name' description: 'Brief project description' stack: - technology: 'Framework/Language' version: 'X.Y.Z' - technology: 'Database' version: 'X.Y.Z'
# Code Standards standards: style: - 'Use consistent indentation (2 spaces)' - 'Follow language-specific naming conventions' documentation: - 'Include JSDoc comments for all functions' - 'Maintain up-to-date README files' testing: - 'Write unit tests for all new features' - 'Maintain minimum 80% code coverage'
# Security Guidelines security: authentication: - 'Implement proper token validation' - 'Use environment variables for secrets' dataProtection: - 'Sanitize all user inputs' - 'Implement proper error handling'
Be Specific
Maintain Organization
Regular Updates
# Common Patterns Example patterns: components: - pattern: 'Use functional components by default' - pattern: 'Implement error boundaries for component trees' stateManagement: - pattern: 'Use React Query for server state' - pattern: 'Implement proper loading states'
Commit the Rules
.clinerules
in version controlTeam Collaboration
Rules Not Being Applied
Conflicting Rules
Performance Considerations
# Basic .clinerules Example project: name: 'Web Application' type: 'Next.js Frontend' standards: - 'Use TypeScript for all new code' - 'Follow React best practices' - 'Implement proper error handling' testing: unit: - 'Jest for unit tests' - 'React Testing Library for components' e2e: - 'Cypress for end-to-end testing' documentation: required: - 'README.md in each major directory' - 'JSDoc comments for public APIs' - 'Changelog updates for all changes'
# Advanced .clinerules Example project: name: 'Enterprise Application' compliance: - 'GDPR requirements' - 'WCAG 2.1 AA accessibility' architecture: patterns: - 'Clean Architecture principles' - 'Domain-Driven Design concepts' security: requirements: - 'OAuth 2.0 authentication' - 'Rate limiting on all APIs' - 'Input validation with Zod'
# Code Style and Conventions Standards for Monorepo This document outlines the code style and conventions standards for our Monorepo. Adhering to these guidelines ensures consistency, readability, and maintainability across all projects within the repository. These standards are designed to be used both by developers and as context by AI coding assistants. ## 1. Overall Principles * **Consistency is Key:** Maintain a uniform style across all packages and applications within the Monorepo. Inconsistencies create cognitive overhead and increase the risk of errors. * **Readability Matters:** Write code that is easy to understand, even for developers unfamiliar with the specific project. Clear names, concise logic, and helpful comments are essential. * **Maintainability is Paramount:** Design code that is easy to modify and extend as requirements evolve. Favor modularity, separation of concerns, and well-defined interfaces. * **Automation Over Manual Enforcement:** Utilize linters, formatters, and static analysis tools to automatically enforce these standards. This reduces the burden on code reviewers and ensures consistent application. * **Monorepo-Awareness:** Be aware of how your code interacts with other parts of the Monorepo. Avoid creating tight couplings or dependencies that could hinder refactoring or independent deployments. ## 2. Formatting Consistent formatting significantly improves code readability and reduces merge conflicts. ### 2.1. Code Formatter * **Standard:** Use Prettier with a consistent configuration file across the entire Monorepo. The configuration file (e.g., ".prettierrc.js", ".prettierrc.json") should be located at the root of the repository. * **Why:** Automated formatting eliminates subjective style debates and ensures consistent code presentation. """javascript // .prettierrc.js module.exports = { semi: true, trailingComma: 'all', singleQuote: true, printWidth: 120, tabWidth: 2, }; """ * **Do This:** * Configure your IDE or editor to automatically format code on save. * Include a pre-commit hook (e.g., using Husky and lint-staged) to ensure that all staged files are formatted before committing. * **Don't Do This:** * Manually format code. * Commit code with formatting errors. * Override the Prettier configuration in individual packages without a strong reason and approval. ### 2.2. Whitespace and Indentation * **Standard:** Use 2 spaces for indentation. Avoid tabs. * **Why:** Consistent indentation improves code readability. Spaces are generally preferred over tabs for cross-platform compatibility. * **Do This:** * Configure your editor to automatically convert tabs to spaces. * **Don't Do This:** * Use tabs for indentation. * Use inconsistent indentation within a file. ### 2.3. Line Length * **Standard:** Aim for a maximum line length of 120 characters. * **Why:** Shorter lines are easier to read on a variety of screen sizes. They also simplify code review. * **Do This:** * Break long lines into multiple shorter lines using appropriate operators and grouping. * **Don't Do This:** * Write extremely long lines that wrap awkwardly. ## 3. Naming Conventions Clear and consistent naming conventions are crucial for code understanding. ### 3.1. General Principles * **Use Descriptive Names:** Choose names that accurately reflect the purpose of the variable, function, class, or module. * **Be Concise:** While descriptive, names should also be as short as possible without sacrificing clarity. * **Avoid Abbreviations:** Prefer full words over abbreviations, unless the abbreviation is widely understood within the project or domain. * **Camel Case:** Use camelCase for variable and function names. * **Pascal Case:** Use PascalCase for class and component names. * **Constants:** Use UPPER_SNAKE_CASE for constants. ### 3.2. Variables * **Standard:** Use camelCase for variable names. * **Why:** camelCase is a widely adopted convention in JavaScript and promotes readability. * **Do This:** """javascript let firstName = 'John'; const userAge = 30; const isLoggedIn = true; """ * **Don't Do This:** """javascript let firstname = 'John'; // Incorrect case let user_age = 30; // Snake case (generally discouraged in Javascript) let a = true; // Non-descriptive name """ ### 3.3. Functions * **Standard:** Use camelCase for function names. Function names should typically be verbs or verb phrases that describe the function's action. * **Why:** Consistent function naming improves code understanding. * **Do This:** """javascript function calculateTotal(price, quantity) { return price * quantity; } function getUserName(userId) { return 'John Doe'; } """ * **Don't Do This:** """javascript function total(price, quantity) { // Non-descriptive name return price * quantity; } function userName(userId) { // Non-descriptive name return 'John Doe'; } """ ### 3.4. Classes and Components * **Standard:** Use PascalCase for class and component names. * **Why:** PascalCase clearly distinguishes classes and components from variables and functions. * **Do This:** """javascript class UserProfile { constructor(name, age) { this.name = name; this.age = age; } } function MyComponent() { return <div>Hello World</div>; } """ * **Don't Do This:** """javascript class userProfile { // Incorrect case constructor(name, age) { this.name = name; this.age = age; } } function myComponent() { // Incorrect case return <div>Hello World</div>; } """ ### 3.5. Constants * **Standard:** Use UPPER_SNAKE_CASE for constant names. * **Why:** UPPER_SNAKE_CASE clearly indicates that a variable is a constant and should not be modified. * **Do This:** """javascript const MAX_USERS = 100; const API_ENDPOINT = '/api/users'; """ * **Don't Do This:** """javascript const maxUsers = 100; // Incorrect case const apiEndpoint = '/api/users'; // Incorrect case """ ### 3.6. File and Directory Names * **Standard:** Use kebab-case (e.g., "my-component.js", "user-profile.test.js") for file and directory names. * **Why:** kebab-case is widely used throughout the JavaScript ecosystem and is easy to read. * **Do This:** """ components/ user-profile/ user-profile.js user-profile.test.js """ * **Don't Do This:** """ components/ UserProfile/ UserProfile.js // Incorrect case user_profile.test.js // incorrect case """ ### 3.7. Monorepo-Specific Naming Considerations * **Package Names:** Package names within the Monorepo (specified in "package.json") should be prefixed with a scope (e.g., "@my-org/component-library"). This prevents naming collisions when the package is published to a registry like npm. * **Module Names:** When creating shared modules that are used by multiple packages within the Monorepo, choose names that are generic and avoid being specific to a single application or feature. This ensures that the module can be reused effectively across the Monorepo. ## 4. Code Style ### 4.1. Imports * **Standard:** * Use absolute imports for modules within the same package, relative imports for modules within a separate package in the monorepo. Use path aliases (configured in "tsconfig.json" or similar) to simplify long relative paths and improve code readability. * Group imports by origin: 1) Node.js core modules, 2) external dependencies, 3) internal modules (packages within the monorepo), 4) local modules. Separate each group with a blank line. * Sort imports alphabetically within each group. * **Why:** Clear import organization improves code understanding and reduces the risk of circular dependencies. Absolute imports make it easier to refactor code within a package without breaking imports. * **Example (Using Typescript):** """typescript // src/components/UserProfile.tsx import React from 'react'; // External dependency import { useState } from 'react'; // External dependency import { useAuth } from '@my-org/auth-hooks'; // Internal module (Monorepo Package) import { Button } from '@my-org/design-system'; // Internal module (Monorepo Package) import { getUserData } from './api'; // local module import { ProfileSettings } from './ProfileSettings'; // local module interface Props { userId: string; } const UserProfile: React.FC<Props> = ({ userId }) => { const { isLoggedIn } = useAuth(); const [userData, setUserData] = useState({}); // ... component logic ... return ( <div> <p>User Profile</p> </div> ); } export default UserProfile; """ * **Do This:** * Configure your IDE to automatically sort imports. * Use path aliases to simplify imports. * **Don't Do This:** * Use deeply nested relative imports ("../../../"). * Mix different import types within the same group. * Import unused modules. ### 4.2. Comments * **Standard:** * Write comments to explain complex logic, algorithms, or non-obvious code. * Use JSDoc-style comments to document functions, classes, and interfaces. * Keep comments concise and up-to-date with the code. * **Why:** Comments help other developers (and your future self) understand the code's purpose and functionality. Well-documented code is easier to maintain and debug. * **Example (JSDoc):** """javascript /** * Calculates the total price based on the price per item and the quantity. * * @param {number} price The price per item. * @param {number} quantity The quantity of items. * @returns {number} The total price. */ function calculateTotal(price, quantity) { return price * quantity; } """ * **Do This:** * Focus on explaining *why* the code is doing something, rather than *what* it is doing (the code itself should be clear enough to explain what it's doing). * Update comments whenever you change the code. * **Don't Do This:** * Write obvious comments that simply restate the code. * Leave outdated or incorrect comments in the code. * Over-comment (comments should augment, not replace, clear code). ### 4.3. Error Handling * **Standard:** * Use try-catch blocks to handle potential errors. * Log errors with sufficient context (e.g., request ID, user ID, error message, stack trace). * Provide informative error messages to the user. * Consider using a centralized error handling service (e.g., Sentry, Rollbar) for production environments. * **Why:** Proper error handling prevents application crashes and makes it easier to diagnose and fix problems. * **Example:** """javascript try { const userData = await fetchUserData(userId); // ... process user data ... } catch (error) { console.error("Failed to fetch user data for user ID ${userId}:", error); // Send error to a centralized logging service, e.g., Sentry Sentry.captureException(error, { extra: { userId: userId } }); throw new Error('Failed to fetch user data. Please try again later.'); } """ * **Do This:** * Catch specific exceptions whenever possible. * Handle errors gracefully and avoid exposing sensitive information to the user. * **Don't Do This:** * Ignore errors. * Rely solely on console.log for error logging in production. * Throw generic errors without providing context. ### 4.4. Asynchronous Code * **Standard:** * Use "async/await" syntax for asynchronous operations. * Handle rejections in promises. * Avoid callback hell. * **Why:** "async/await" makes asynchronous code easier to read and reason about. * **Example:** """javascript async function fetchUserData(userId) { try { const response = await fetch("/api/users/${userId}"); const data = await response.json(); return data; } catch (error) { console.error('Error fetching user data:', error); throw error; // Re-throw the error to be handled by the caller } } """ * **Do This:** * Use "Promise.all" to execute multiple asynchronous operations concurrently. * Implement timeouts and retry mechanisms for unreliable asynchronous operations. * **Don't Do This:** * Use callbacks for complex asynchronous logic. * Forget to handle rejections in promises. ### 4.5. Monorepo-Specific Style Considerations * **Dependency Injection:** When creating shared components or modules that need to access configuration or services, use dependency injection (either manual or through a dependency injection container) rather than hardcoding dependencies. This makes the component or module more reusable and testable across different parts of the Monorepo. * **Feature Flags:** Use feature flags to enable or disable new features without deploying new code. This allows you to test features in production and roll them out gradually. Consider using a feature flag management service that is shared across the Monorepo. ## 5. Technology-Specific Guidelines These guidelines are tailored to specific technologies commonly used in our Monorepo. ### 5.1. React * **Standard:** * Use functional components with hooks. * Follow the single responsibility principle for components. * Use PropTypes or Typescript for type checking. * Write unit tests for components using Jest and React Testing Library. * **Why:** Functional components with hooks are the preferred way to write React components in modern React. * **Example (React with Typescript):** """typescript import React, { useState, useEffect } from 'react'; interface Props { name: string; } const Greeting: React.FC<Props> = ({ name }) => { const [greeting, setGreeting] = useState("Hello, ${name}!"); useEffect(() => { // Perform side effect, like fetching data }, [name]); return <div>{greeting}</div>; }; export default Greeting; """ ### 5.2. Node.js * **Standard:** * Use ES modules (import/export syntax). * Use a process manager like PM2 or systemd to run Node.js applications in production. * Use environment variables for configuration. * Write unit tests for modules using Jest or Mocha. * **Why:** ES modules are the standard module system in JavaScript. Process managers ensure that Node.js applications are automatically restarted if they crash. * **Example (Node.js with ES Modules):** """javascript // src/index.js import express from 'express'; const app = express(); const port = process.env.PORT || 3000; app.get('/', (req, res) => { res.send('Hello World!'); }); app.listen(port, () => { console.log("Example app listening on port ${port}"); }); """ ### 5.3. Typescript * **Standard:** * Use Typescript across the monorepo * Enable strict mode in "tsconfig.json" * Use explicit types for function parameters and return values * Avoid using "any" type * Define interfaces or types for complex data structures * **Why:** Typescript provides static typing, making code more robust and maintainable. Strict mode catches common errors at compile time. Explicit types improve code readability * **Example** """typescript interface User { id: number; name: string; email: string; } function getUser(id: number): User{ // Implementation } """ ### 5.4. Lerna/Nx * **Standard:** Use Lerna or Nx for managing the Monorepo. Use appropriate commands ("lerna run <script>" or "nx run <project>:<target>") to execute tasks across multiple packages. Configure task dependencies in "lerna.json" or "nx.json" to optimize build times. * **Why:** Lerna and Nx automate common Monorepo management tasks such as dependency management, building, testing, and publishing. They also provide advanced features like task caching and dependency graph analysis to improve performance. * **Example (Nx task dependency configuration):** """json // nx.json { "tasksRunnerOptions": { "default": { "runner": "nx/tasks-runners/default", "options": { "cacheableOperations": ["build", "test", "lint", "e2e"], "dependsOn": ["^build"], "inputs": ["production", "{projectRoot}/src/**/*", "{projectRoot}/.env"] } } }, " affected": { "defaultBase": "main" } } """ ### 5.5. Testing * **Standard**: Each package or application have associated unit and/or integration tests. Aim for high code coverage. Implement CI/CD pipelines to automatically run tests on every commit. * **Why**: Testing ensures code reliability and prevents regressions. Automated tests reduce the risk of introducing bugs during development. * **Example (Jest test case):** """javascript //MyComponent.test.js import React from 'react'; import { render, screen } from '@testing-library/react'; import MyComponent from './MyComponent'; test('renders learn react link', () => { render(<MyComponent />); const linkElement = screen.getByText(/learn react/i); expect(linkElement).toBeInTheDocument(); }); """ ## 6. Common Anti-Patterns * **Copy-Pasting Code:** Avoid duplicating code across multiple packages. Instead, create shared modules or components that can be reused. * **Tight Coupling:** Design components and modules that are loosely coupled. Avoid creating direct dependencies between unrelated parts of the Monorepo. * **Global State:** Minimize the use of global state. Instead, use dependency injection or context providers to manage state within specific scopes. * **Ignoring Warnings:** Treat compiler warnings and linter errors seriously. Fix them as soon as possible to prevent them from accumulating and masking real problems. * **Premature Optimization:** Don't optimize code prematurely. Focus on writing clear and correct code first, then optimize if necessary based on performance measurements. ## 7. Security Best Practices * **Keep Dependencies Up-to-Date:** Regularly update dependencies to patch security vulnerabilities. Use tools like "npm audit" or "yarn audit" to identify vulnerable packages. * **Sanitize User Input:** Sanitize all user input to prevent cross-site scripting (XSS) and other injection attacks. * **Avoid Storing Secrets in Code:** Store secrets (e.g., API keys, database passwords) in environment variables or a dedicated secret management service. Do not commit secrets to the repository. * **Implement Authentication and Authorization:** Implement robust authentication and authorization mechanisms to protect sensitive data and functionality. Use industry-standard protocols like OAuth 2.0 and JWT. * **Regular Security Audits:** Perform regular security audits to identify and fix potential vulnerabilities. Consider using static analysis tools and penetration testing. ## 8. Enforcing Standards * **Linters:** Use ESLint (with appropriate plugins for React and Typescript) configured for the entire Monorepo. * **Formatters:** Use Prettier for code formatting. * **Static Analysis:** Consider using static analysis tools like SonarQube or Code Climate to identify potential code quality issues and security vulnerabilities. * **Code Reviews:** Implement a robust code review process to ensure that all code changes adhere to these standards. * **CI/CD Pipeline:** Integrate linters, formatters, and tests into your CI/CD pipeline to automatically enforce these standards on every commit. By adhering to these code style and conventions standards, we can ensure that our Monorepo remains consistent, readable, maintainable, and secure as it evolves over time. It's expected that these rules be integrated into AI coding assistants to produce more consistent results.
# Security Best Practices Standards for Monorepo This document outlines security best practices for Monorepo development. It serves as a guide for developers to write secure code, protect against common vulnerabilities, and implement secure coding patterns within a Monorepo architecture. Adherence to these standards is crucial for maintaining the integrity, confidentiality, and availability of applications built with Monorepo. ## 1. Input Validation and Sanitization ### Standard All external inputs MUST be validated and sanitized before processing within any module/package in the Monorepo. This includes inputs from users, databases, APIs, and other sources. * **Do This:** Use input validation libraries specific to your technology stack (e.g., "validator.js" for Node.js) and define strict validation rules. Sanitize inputs to remove potentially malicious characters or code. * **Don't Do This:** Trust that input is safe or rely solely on client-side validation. ### Why It Matters Failing to validate and sanitize inputs can lead to various vulnerabilities, including SQL injection, cross-site scripting (XSS), command injection, and path traversal attacks. ### Code Examples **Node.js (Express.js) with "validator.js":** """javascript const express = require('express'); const { body, validationResult } = require('express-validator'); const validator = require('validator'); const app = express(); app.use(express.json()); app.post('/user', [ // Validate and sanitize the request body body('email').isEmail().normalizeEmail(), body('password').isLength({ min: 8 }).trim().escape(), body('username').matches(/^[a-zA-Z0-9]+$/).trim().escape(), ], (req, res) => { const errors = validationResult(req); if (!errors.isEmpty()) { return res.status(400).json({ errors: errors.array() }); } // Input is validated and sanitized, proceed with processing const { email, password, username } = req.body; console.log("Creating user with validated data: Email=${email}, Username=${username}"); res.send('User created successfully'); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * "express-validator" middleware is used to validate and sanitize request body parameters. * "isEmail()": Ensures the email is in a valid format. * "normalizeEmail()": Normalizes the email address (e.g., converts to lowercase). * "isLength({ min: 8 })": Requires the password to be at least 8 characters long. * "trim()": Removes whitespace from the beginning and end of the input. * "escape()": Escapes HTML characters to prevent XSS attacks. * "matches(/^[a-zA-Z0-9]+$/)": Ensures the username contains only alphanumeric characters. * Error handling is implemented using "validationResult". ### Anti-Patterns """javascript // Anti-pattern: Directly using user input without validation or sanitization app.get('/items/:id', (req, res) => { const itemId = req.params.id; // NO VALIDATION // Potentially vulnerable query db.query("SELECT * FROM items WHERE id = ${itemId}", (err, result) => { if (err) { console.error(err); return res.status(500).send('Database error'); } res.json(result); }); }); """ **Explanation:** The above code is susceptible to SQL injection because "itemId" from the request parameters is directly used in the SQL query without any validation or sanitization. An attacker could potentially inject malicious SQL code into the "itemId" parameter, leading to unauthorized data access or modification. ## 2. Authentication and Authorization ### Standard Robust authentication and authorization mechanisms MUST be implemented to protect resources within the Monorepo. * **Do This:** Use strong password hashing algorithms (e.g., bcrypt), multi-factor authentication (MFA) where possible, and role-based access control (RBAC). Utilize established libraries and frameworks for authentication and authorization. * **Don't Do This:** Store passwords in plaintext, use weak or outdated hashing algorithms, or rely solely on client-side authorization. Avoid hardcoding credentials. ### Why It Matters Proper authentication and authorization prevent unauthorized access to sensitive data and functionality. ### Code Examples **Node.js (Express.js) with JWT and bcrypt:** """javascript const express = require('express'); const bcrypt = require('bcrypt'); const jwt = require('jsonwebtoken'); const app = express(); app.use(express.json()); // In-memory user storage (in a real application, use a database) const users = []; // Registration app.post('/register', async (req, res) => { try { const hashedPassword = await bcrypt.hash(req.body.password, 10); const user = { name: req.body.name, password: hashedPassword }; users.push(user); res.status(201).send('User registered'); } catch { res.status(500).send(); } }); // Login app.post('/login', async (req, res) => { const user = users.find(user => user.name === req.body.name); if (user == null) { return res.status(400).send('Cannot find user'); } try { if (await bcrypt.compare(req.body.password, user.password)) { // Generate JWT token const accessToken = jwt.sign({ name: user.name }, 'YOUR_SECRET_KEY', { expiresIn: '15m' }); // REPLACE WITH SECURE, ENVIRONMENT-SPECIFIC KEY res.json({ accessToken: accessToken }); } else { res.status(401).send('Not Allowed'); } } catch { res.status(500).send(); } }); // Middleware to authenticate JWT token function authenticateToken(req, res, next) { const authHeader = req.headers['authorization']; const token = authHeader && authHeader.split(' ')[1]; if (token == null) return res.sendStatus(401); jwt.verify(token, 'YOUR_SECRET_KEY', (err, user) => { // REPLACE WITH SECURE, ENVIRONMENT-SPECIFIC KEY if (err) return res.sendStatus(403); req.user = user; next(); }); } // Secured route app.get('/protected', authenticateToken, (req, res) => { res.json({ message: "Welcome, ${req.user.name}!" }); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * **bcrypt:** Used to securely hash passwords before storing them. A salt is automatically generated and included in the hash. A work factor of 10 is used, offering a reasonable balance between security and performance. * **JWT (JSON Web Tokens):** Used to create access tokens for authenticated users allowing access to protected routes. * **"authenticateToken" middleware:** Verifies the JWT token and protects sensitive routes. * **Important:** Replace "'YOUR_SECRET_KEY'" with a strong, randomly generated secret key stored securely in an environment variable. ### Anti-Patterns """javascript // Anti-pattern: Storing passwords in plaintext app.post('/register', (req, res) => { const password = req.body.password; // Storing plaintext password //... }); // Anti-pattern: Weak authentication mechanism app.get('/admin', (req, res) => { const isAdmin = req.query.admin === 'true'; // Insecure authentication if (isAdmin) { // Allows access } }); """ Storing passwords in plaintext is a critical security vulnerability. Using query parameters for authentication is easily manipulated. ## 3. Secrets Management ### Standard Secrets (API keys, database passwords, encryption keys, etc.) MUST be stored securely and never hardcoded in the source code of Monorepo packages. * **Do This:** Use a dedicated secrets management solution (e.g., HashiCorp Vault, AWS Secrets Manager), environment variables, or encrypted configuration files. Apply the principle of least privilege to secret access. Rotate secrets regularly. * **Don't Do This:** Hardcode secrets directly into the code, check them into version control, or log them. ### Why It Matters Exposing secrets can lead to unauthorized access to systems, data breaches, and other severe security incidents. ### Code Examples **Node.js with Environment Variables (using "dotenv"):** """javascript require('dotenv').config(); // Load environment variables from .env file const express = require('express'); const app = express(); const apiKey = process.env.API_KEY; // Access secret from environment variable const dbPassword = process.env.DB_PASSWORD; app.get('/data', (req, res) => { // Use apiKey and dbPassword securely console.log("Using API Key: ${apiKey.substring(0,4)}..."); // Logs the first four characters of the API key; better than logging the entire value // ... your code to access date from db res.send('Data retrieved'); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * ".env" file (which should be added to ".gitignore") stores sensitive configuration data. * "dotenv" package loads the environment variables from the ".env" file into "process.env". * Secrets are accessed using "process.env.VARIABLE_NAME". ### Anti-Patterns """javascript // Anti-pattern: Hardcoding API key const apiKey = 'YOUR_API_KEY'; // DO NOT DO THIS // Anti-pattern: Logging secrets console.log('Database password:', dbPassword); // DO NOT DO THIS """ Hardcoding secrets directly within source code is a critical error and exposes the application to significant risk. Logging Secrets allows them to be captured and exposed. ## 4. Dependency Management ### Standard Monorepo dependency management MUST be carefully controlled to avoid vulnerabilities introduced through third-party libraries. * **Do This:** Use a package manager (e.g., npm, yarn, pnpm) with lockfiles to ensure consistent dependency versions. Regularly audit dependencies for known vulnerabilities using tools like "npm audit" or "yarn audit" and address identified issues promptly. Prefer well-maintained and reputable libraries. Use a dependency management tool like Dependabot, Snyk, or GitHub's automated security updates. * **Don't Do This:** Use outdated or unmaintained libraries, ignore security audit warnings, or blindly update dependencies without testing. ### Why It Matters Third-party libraries can contain vulnerabilities that can be exploited to compromise the entire application. ### Code Examples **Using npm audit:** """bash npm audit npm audit fix # attempts to automatically fix vulnerabilities """ **Using yarn audit:** """bash yarn audit yarn audit fix # attempts to automatically fix vulnerabilities """ **Explanation:** * "npm audit" and "yarn audit" scan the project's dependencies for known vulnerabilities and provide recommendations for remediation. * "npm audit fix" and "yarn audit fix" attempt to automatically update vulnerable dependencies to patched versions (use with caution - test thoroughly!). ### Anti-Patterns """javascript // Anti-pattern: Using an outdated library with known vulnerabilities const someOutdatedLibrary = require('some-outdated-library'); // Likely has known security issues // Anti-pattern: Ignoring audit warnings // After running 'npm audit' and seeing warnings, ignoring them and proceeding """ Ignoring security audits can leave your application vulnerable and susceptible to known exploits within the outdated or vulnerable libraries. Continuously monitor dependency health ## 5. Error Handling and Logging ### Standard Proper error handling and logging MUST be implemented to provide visibility into application behavior while avoiding exposing sensitive information. * **Do This:** Implement structured logging using a logging library (e.g., Winston, Morgan). Log essential events (e.g., authentication attempts, authorization failures, unexpected errors). Avoid logging sensitive data (e.g., passwords, API keys, personally identifiable information (PII)). Implement centralized logging. * **Don't Do This:** Log error details directly to users, ignore errors, or use "console.log" for production logging. ### Why It Matters Poor error handling can expose internal system details to attackers, while inadequate logging hinders incident response and security investigations. ### Code Examples **Node.js with Winston:** """javascript const express = require('express'); const winston = require('winston'); const app = express(); // Configure Winston logger const logger = winston.createLogger({ level: 'info', format: winston.format.json(), transports: [ new winston.transports.Console(), new winston.transports.File({ filename: 'error.log', level: 'error' }), new winston.transports.File({ filename: 'combined.log' }), ], }); app.get('/error', (req, res) => { try { throw new Error('Simulated error'); } catch (error) { logger.error({ message: 'Error occurred', error: error.message, stack: error.stack }); res.status(500).send('Internal server error'); // Display generic message to user } }); app.get('/info', (req, res) => { logger.info({ message: 'Accessing info endpoint' }); res.send('Info endpoint accessed'); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { logger.info("Server is running on port ${PORT}"); console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * Winston is used for structured logging. * Logs are written to the console and to files. * Different log levels are used (e.g., "info", "error"). * Error details are logged, but a generic error message is shown to the user. Avoid showing stack traces directly to the client. ### Anti-Patterns """javascript // Anti-pattern: Exposing error details to users app.get('/error', (req, res) => { try { throw new Error('Detailed error message'); } catch (error) { res.status(500).send(error.message); // Exposes internal error message } }); // Anti-pattern: Using console.log for production logging console.log('User logged in'); // Inadequate for production, lacks context and control """ It is bad practice to expose detailed error messages or internal system details to end-users. It may allow threat actors to infer or discover further exploitable information. ## 6. Data Encryption ### Standard Sensitive data MUST be encrypted both in transit and at rest. * **Do This:** Use HTTPS for all network communication. Use encryption libraries (e.g., OpenSSL) to encrypt data stored in databases or files. Consider field-level encryption for highly sensitive data. Use a key management service to manage encryption keys. * **Don't Do This:** Store sensitive data in plaintext, use outdated encryption algorithms, or hardcode encryption keys. ### Why It Matters Encryption protects data from unauthorized access, even if a system is compromised. ### Code Examples **Node.js with HTTPS and "crypto" library:** """javascript const express = require('express'); const https = require('https'); const fs = require('fs'); const crypto = require('crypto'); // Import the crypto module const app = express(); // Generate a secure encryption key (store securely) const encryptionKey = crypto.randomBytes(32); const iv = crypto.randomBytes(16); function encrypt(text) { const cipher = crypto.createCipheriv('aes-256-cbc', Buffer.from(encryptionKey), iv); let encrypted = cipher.update(text); encrypted = Buffer.concat([encrypted, cipher.final()]); return { iv: iv.toString('hex'), encryptedData: encrypted.toString('hex') }; } function decrypt(text, iv) { let iv_buf = Buffer.from(iv, 'hex'); let encryptedText = Buffer.from(text, 'hex'); const decipher = crypto.createDecipheriv('aes-256-cbc', Buffer.from(encryptionKey), iv_buf); let decrypted = decipher.update(encryptedText); decrypted = Buffer.concat([decrypted, decipher.final()]); return decrypted.toString(); } app.get('/encrypt/:data', (req, res) => { const data = req.params.data; const encryptedData = encrypt(data); res.json(encryptedData); }); app.get('/decrypt/:encryptedData/:iv', (req, res) => { const encryptedData = req.params.encryptedData; const iv = req.params.iv; const decryptedData = decrypt(encryptedData, iv); res.send(decryptedData); }); // HTTPS configuration const privateKey = fs.readFileSync('sslcert/key.pem', 'utf8'); const certificate = fs.readFileSync('sslcert/cert.pem', 'utf8'); const credentials = {key: privateKey, cert: certificate}; const httpsServer = https.createServer(credentials, app); const PORT = process.env.PORT || 443; httpsServer.listen(PORT, () => { console.log("HTTPS server listening on port ${PORT}"); }); """ **Explanation:** * "crypto" module is used for encryption. * AES-256-CBC algorithm is used. * Encryption keys and IVs are generated randomly and stored securely. * HTTPS is used to secure communication. ### Anti-Patterns """javascript // Anti-pattern: Storing sensitive data in plaintext // Anti-pattern: Using hardcoded encryption key const encryptionKey = 'HardcodedKey'; // DO NOT DO THIS """ Storing sensitive data without encryption or hardcoding the encryption key presents a considerable vulnerability, as it makes the data accessible to unauthorized parties upon system compromise. ## 7. Cross-Site Scripting (XSS) Prevention ### Standard Prevent XSS vulnerabilities by properly escaping output and using appropriate security headers. * **Do This:** Use template engines with automatic escaping (e.g., Handlebars, Mustache, or JSX with React), sanitize user input before displaying it, set the "Content-Security-Policy" (CSP) header to restrict the sources from which resources can be loaded, and set the "X-XSS-Protection" header to enable the browser's built-in XSS filter.. * **Don't Do This:** Directly insert user input into HTML without escaping or sanitization. ### Why It Matters XSS allows attackers to inject malicious scripts into web pages viewed by other users. ### Code Examples **Node.js (Express.js) with escaping and CSP header:** """javascript const express = require('express'); const hbs = require('hbs'); // Using Handlebars for templating with automatic escaping const app = express(); app.set('view engine', 'hbs'); // set up handlebars view engine app.use(express.urlencoded({ extended: true })); app.use((req, res, next) => { res.setHeader("Content-Security-Policy", "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'"); res.setHeader("X-XSS-Protection", "1; mode=block"); next(); }); app.get('/', (req, res) => { res.render('index', { title: 'XSS Example', userInput: '' }); }); app.post('/submit', (req, res) => { const userInput = req.body.userInput; res.render('index', { title: 'XSS Example', userInput: userInput }); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **index.hbs:** """html <!DOCTYPE html> <html> <head> <title>{{title}}</title> </head> <body> <h1>{{title}}</h1> <form action="/submit" method="post"> <label for="userInput">Enter some text:</label> <input type="text" id="userInput" name="userInput"> <button type="submit">Submit</button> </form> <p>You entered: {{userInput}}</p> </body> </html> """ **Explanation:** * Handlebars template engine is used, which automatically escapes output to prevent XSS. * CSP header is set to restrict the sources of content. * "X-XSS-Protection" header is set to enable the browser's XSS filter in blocking mode. ### Anti-Patterns """javascript // Anti-pattern: Directly inserting user input without escaping app.get('/display', (req, res) => { const userInput = req.query.input; res.send("<div>${userInput}</div>"); // Vulnerable to XSS }); """ Directly embedding user input onto a web page without proper encoding creates a serious security risk. The lack of proper escaping allows malicious scripts contained within the user input to execute within the user's browser. ## 8. Cross-Site Request Forgery (CSRF) Prevention ### Standard Protect against CSRF attacks by using anti-CSRF tokens. * **Do This:** Generate and validate CSRF tokens for all state-changing requests (e.g., POST, PUT, DELETE). Use a library or framework that provides built-in CSRF protection (e.g., "csurf" middleware in Express.js). Implement "SameSite" cookie attribute: set the "SameSite" attribute for cookies to either "Strict" or "Lax". * **Don't Do This:** Rely solely on "GET" requests for state-changing operations or disable CSRF protection. ### Why It Matters CSRF allows attackers to perform actions on behalf of legitimate users without their knowledge. ### Code Examples **Node.js (Express.js) with "csurf":** """javascript const express = require('express'); const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const app = express(); // Middleware app.use(cookieParser()); const csrfProtection = csrf({ cookie: true }); app.use(express.urlencoded({ extended: false })); app.get('/form', csrfProtection, (req, res) => { // pass the csrfToken to the view res.send(" <form action="/process" method="POST"> <input type="hidden" name="_csrf" value="${req.csrfToken()}"> <button type="submit">Submit</button> </form> "); }); app.post('/process', csrfProtection, (req, res) => { res.send('Form is processed!'); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * "csurf" middleware is used to generate and validate CSRF tokens. * A hidden input field contains the CSRF token, which is submitted with the form. * The server verifies the CSRF token before processing the request. ### Anti-Patterns """javascript // Anti-pattern: Disabling CSRF protection app.post('/transfer', (req, res) => { // No CSRF protection //... }); """ Code that lacks proper CSRF protection is vulnerable to malicious requests made on behalf of an authenticated user without their consent. ## 9. Denial of Service (DoS) Prevention ### Standard Implement measures to mitigate DoS attacks. * **Do This:** Limit request rates, implement timeouts, use rate limiting middleware (e.g., "express-rate-limit"), use a content delivery network (CDN), and protect against Slowloris attacks. * **Don't Do This:** Allow unlimited requests or ignore potential DoS vulnerabilities. ### Why It Matters DoS attacks can disrupt service availability and prevent legitimate users from accessing the application. ### Code Examples **Node.js (Express.js) with "express-rate-limit":** """javascript const express = require('express'); const rateLimit = require('express-rate-limit'); const app = express(); const limiter = rateLimit({ windowMs: 15 * 60 * 1000, // 15 minutes max: 100, // Limit each IP to 100 requests per windowMs message: 'Too many requests from this IP, please try again after 15 minutes', standardHeaders: true, // Return rate limit info in the "RateLimit-*" headers legacyHeaders: false, // Disable the "X-RateLimit-*" headers }); app.use(limiter); app.get('/', (req, res) => { res.send('Welcome to rate limited app!'); }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * "express-rate-limit" middleware is used to limit the number of requests from each IP address. * The rate limit is configured to allow 100 requests per 15 minutes. ### Anti-Patterns """javascript // Anti-pattern: Allowing unlimited requests app.get('/unprotected', (req, res) => { // No rate limiting res.send('Unprotected route'); }); """ Allowing unlimited requests exposes the service to high traffic volume and increases the vulnerability to denial-of-service attacks. ## 10. Server-Side Request Forgery (SSRF) Prevention ### Standard Prevent SSRF vulnerabilities by validating and sanitizing outbound requests. * **Do This:** Whitelist allowed domains or IP addresses, validate URLs, disable URL redirection, and use secure protocols (HTTPS) for outbound requests. Avoid using user-supplied data directly in outbound requests. * **Don't Do This:** Allow unrestricted outbound requests or trust user-supplied URLs. ### Why It Matters SSRF allows attackers to make requests to internal resources or arbitrary external endpoints from the server, bypassing security controls. ### Code Examples **Node.js with URL validation:** """javascript const express = require('express'); const { URL } = require('url'); const https = require('https'); // Use HTTPS for outbound requests const app = express(); app.use(express.json()); const allowedHosts = ['api.example.com', 'secure.example.org']; // Whitelist app.post('/proxy', async (req, res) => { try { const targetUrl = req.body.url; // URL validation const url = new URL(targetUrl); if (!allowedHosts.includes(url.hostname)) { return res.status(400).send('Invalid target URL'); } // Make outbound request (HTTPS) https.get(targetUrl, (response) => { let data = ''; response.on('data', (chunk) => { data += chunk; }); response.on('end', () => { res.send(data); }); }).on('error', (err) => { console.error('Error making request:', err); res.status(500).send('Error making outbound request'); }); } catch (error) { console.error('Invalid URL:', error); res.status(400).send('Invalid URL'); } }); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log("Server is running on port ${PORT}"); }); """ **Explanation:** * The code validates the target URL against a whitelist of allowed hosts. * It uses "https.get" to ensure secure communication. ### Anti-Patterns """javascript // Anti-pattern: Using user-supplied URL directly app.get('/proxy', (req, res) => { const targetUrl = req.query.url; // User-supplied URL //... https.get(targetUrl, (response) => { response.pipe(res); // Vulnerable to SSRF }); }); """ Directly using user provided URLs in outbound requests, without domain validation, makes the service vulnerable to Server Side Request Forgery threats. Limit outbound requests to validated URLs to prevent vulnerability. By adhering to these security best practices, Monorepo developers can build more secure and reliable applications. These standards should be reviewed and updated regularly to stay ahead of emerging threats and vulnerabilities. Remember to implement security as a continuous process, not a one-time fix.
# Performance Optimization Standards for Monorepo This document outlines coding standards and best practices for performance optimization within a Monorepo environment. These standards are designed to improve application speed, responsiveness, and resource utilization. Following these guidelines will result in more maintainable, scalable, and performant applications. ## 1. Architectural Considerations for Performance ### 1.1. Strategic Module Decomposition **Goal:** Minimize the impact of changes and builds across the entire repository and optimize for parallel build execution. * **Do This:** * Divide the Monorepo into cohesive, independent modules (libraries, applications, shared components). * Consider the "blast radius" of changes. Modifications to one module should ideally have minimal or no impact on unrelated modules. * Ensure well-defined public APIs for modules that need to interact. * **Don't Do This:** * Create a monolithic module containing everything. * Establish circular dependencies between modules. * Expose internal implementation details through public APIs. **Why:** Poor module decomposition leads to unnecessary rebuilds, increased testing burden, and difficulty in isolating performance bottlenecks. A well-structured Monorepo facilitates parallel builds, targeted testing, and independent deployments, all of which contribute to faster development cycles and improved performance. **Example:** """ monorepo/ ├── apps/ │ ├── web-app/ # Independent web application │ │ ├── src/ │ │ └── package.json │ ├── mobile-app/ # Independent mobile application │ │ ├── src/ │ │ └── package.json ├── libs/ │ ├── ui-components/ # Reusable UI components │ │ ├── src/ │ │ └── package.json │ ├── data-access/ # Data fetching and caching logic │ │ ├── src/ │ │ └── package.json └── tools/ └── scripts/ # Utility scripts (e.g., build, test) """ ### 1.2. Dependency Management **Goal:** Reduce build times and runtime overhead by minimizing unnecessary dependencies. * **Do This:** * Declare dependencies accurately (e.g., using "devDependencies" for build-time dependencies). * Utilize dependency analysis tools (like "npm audit", "yarn audit") to identify and mitigate security vulnerabilities and outdated packages. * Keep dependencies up to date to benefit from performance improvements and security patches. * Use tools like "pnpm" or "yarn" with workspace functionality for optimal dependency sharing and installation speed * **Don't Do This:** * Include unnecessary dependencies in your modules. * Rely on transitive dependencies without declaring them explicitly. **Why:** Excessive or poorly managed dependencies increase build times, bundle sizes, and potentially introduce security vulnerabilities. Explicitly managing dependencies ensures that each module only includes what it truly needs, optimizing overall performance. **Example (package.json):** """json { "name": "@my-monorepo/ui-components", "version": "1.0.0", "dependencies": { "@emotion/react": "^11.11.1", "@emotion/styled": "^11.11.0", "@mui/material": "^5.14.18" }, "devDependencies": { "@types/react": "^18.2.33", "@types/styled-components": "^5.1.29", "typescript": "^5.2.2" } } """ ### 1.3. Build System Optimization **Goal:** Minimize build times and optimize for incremental builds. * **Do This:** * Use a modern build system tailored for Monorepos, such as Nx, Bazel, or Turborepo. * Configure the build system to leverage caching and incremental builds. * Define clear build targets and dependencies within the build configuration. * Use parallel execution where appropriate to speed up build processes. * Profile your builds regularly to identify bottlenecks. * **Don't Do This:** * Use generic build tools that don't understand Monorepo structures. * Disable caching or incremental builds. * Create complex build scripts that are difficult to maintain. **Why:** Optimized build processes significantly reduce development time and improve developer productivity. Caching and incremental builds ensure that only necessary code is rebuilt, leading to substantial performance gains. A modern build system designed for Monorepos understands the relationships between modules and can optimize the build process accordingly. **Example (Nx "nx.json"):** """json { "tasksRunnerOptions": { "default": { "runner": "nx-cloud", "options": { "cacheableOperations": ["build", "lint", "test", "e2e"], "accessToken": "YOUR_NX_CLOUD_TOKEN" } } }, "affected": { "defaultBase": "main" }, "namedInputs": { "default": ["{projectRoot}/**/*", "sharedGlobals"], "production": [ "default", "!{projectRoot}/**/?(*.)+(spec|test).[jt]s?(x)?(.snap)", "!{projectRoot}/tsconfig.spec.json", "!{projectRoot}/jest.config.[jt]s", "!{projectRoot}/.eslintrc.json" ], "sharedGlobals": [] } } """ ### 1.4. Code Sharing and Reusability **Goal:** Avoid code duplication and promote efficient use of resources. * **Do This:** * Identify common functionality across modules and extract it into shared libraries. * Use a design system or component library for consistent UI elements. * Employ code generation techniques to reduce boilerplate code. * **Don't Do This:** * Duplicate code across multiple modules. * Create tightly coupled components that are difficult to reuse. **Why:** Code duplication increases maintenance costs and potential performance issues. Sharing code reduces the overall codebase size, promotes consistency, and simplifies updates. Using a component library improves rendering performance by reducing the amount of unique CSS and JavaScript that needs to be loaded. **Example:** Move common utility functions to a shared library. """typescript // libs/utils/src/index.ts export function formatCurrency(amount: number, currencyCode: string = 'USD'): string { return new Intl.NumberFormat('en-US', { style: 'currency', currency: currencyCode, }).format(amount); } // apps/web-app/src/components/Product.tsx import { formatCurrency } from '@my-monorepo/utils'; function Product({ price }: { price: number }) { return <div>Price: {formatCurrency(price)}</div>; } """ ## 2. Coding Practices for Performance ### 2.1. Lazy Loading and Code Splitting **Goal:** Reduce initial load times by loading code only when it is needed. * **Do This:** * Implement lazy loading for modules that are not immediately required. * Use code splitting to break large bundles into smaller chunks. * Consider route-based code splitting for single-page applications. * **Don't Do This:** * Load all code upfront. * Create excessively large bundles that take a long time to download and parse. **Why:** Initial load time is critical for user experience. Lazy loading and code splitting significantly improve startup performance by deferring the loading of non-essential code. **Example (React with "React.lazy"):** """jsx import React, { lazy, Suspense } from 'react'; const AnalyticsDashboard = lazy(() => import('./AnalyticsDashboard')); // Lazy-loaded component function App() { return ( <div> {/* ... other components ... */} <Suspense fallback={<div>Loading...</div>}> <AnalyticsDashboard /> </Suspense> </div> ); } """ ### 2.2. Efficient Data Structures and Algorithms **Goal:** Optimize runtime performance by choosing appropriate data structures and algorithms. * **Do This:** * Select data structures based on access patterns (e.g., use a Set for membership tests, a Map for key-value lookups). * Use efficient algorithms for common operations (e.g., sorting, searching). * Consider the time and space complexity of your algorithms. * **Don't Do This:** * Use inefficient data structures or algorithms. * Perform unnecessary computations. **Why:** The choice of data structures and algorithms significantly impacts application performance. Choosing the right tools for the job can lead to dramatic improvements in speed and resource utilization. **Example:** """javascript // Efficiently check if an element exists in an array. Use a Set instead of an array for repeated lookups. const myArray = ['a', 'b', 'c', 'd', 'e']; const mySet = new Set(myArray); // Bad: Linear time complexity // myArray.includes('c'); // Good: Near-constant time complexity mySet.has('c'); """ ### 2.3. Memory Management **Goal:** Prevent memory leaks and optimize memory usage. * **Do This:** * Avoid creating unnecessary objects. * Release resources when they are no longer needed (e.g., event listeners, timers). * Use techniques like object pooling to reuse objects. * Be mindful of closures and their potential to capture large amounts of data. * Use tools like the Chrome DevTools memory profiler to identify memory leaks. * When possible, leverage technologies with automatic garbage collection. * **Don't Do This:** * Create large numbers of temporary objects. * Forget to release resources. * Store large amounts of data in memory unnecessarily. **Why:** Memory leaks and excessive memory usage can lead to performance degradation and application crashes. Proper memory management ensures that applications run smoothly and efficiently. **Example:** Removing event listeners to prevent memory leaks. """javascript class MyComponent { constructor() { this.handleClick = this.handleClick.bind(this); } componentDidMount() { window.addEventListener('click', this.handleClick); } componentWillUnmount() { window.removeEventListener('click', this.handleClick); // Remove the event listener } handleClick() { console.log('Clicked!'); } } """ ### 2.4. Minimize DOM Manipulation **Goal:** Reduce the performance overhead associated with updating the Document Object Model (DOM). * **Do This:** * Batch DOM updates. * Use virtual DOM techniques (e.g., React, Vue). * Avoid direct DOM manipulation where possible. * Use efficient selectors (e.g., avoid complex CSS selectors). * **Don't Do This:** * Perform frequent DOM updates. * Use inefficient DOM manipulation methods. **Why:** DOM manipulation is an expensive operation. Minimizing the number of DOM updates improves rendering performance and reduces layout thrashing. Virtual DOM techniques allow you to efficiently update the DOM by comparing the current state with the desired state and only making necessary changes. **Example (React):** """jsx import React, { useState } from 'react'; function MyComponent() { const [items, setItems] = useState(['item1', 'item2', 'item3']); const addItem = () => { // Bad: Multiple state updates trigger multiple re-renders // setItems([...items, 'newItem1']); // setItems([...items, 'newItem2']); // Good: Batch updates into a single state update setItems(prevItems => [...prevItems, 'newItem1', 'newItem2']); }; return ( <div> <ul> {items.map(item => ( <li key={item}>{item}</li> ))} </ul> <button onClick={addItem}>Add Items</button> </div> ); } """ ### 2.5. Caching Strategies **Goal:** Reduce the need to repeatedly fetch or compute the same data. * **Do This:** * Implement caching at different levels (e.g., browser caching, server-side caching, in-memory caching). * Use appropriate cache invalidation strategies (e.g., time-based expiration, event-based invalidation). * Leverage Content Delivery Networks (CDNs) for static assets. * **Don't Do This:** * Cache data indefinitely without invalidation. * Cache sensitive data inappropriately. **Why:** Caching can dramatically improve application performance by reducing the load on servers and databases. Properly invalidating caches is crucial to ensure that users see the latest data. **Example (Browser caching using "Cache-Control" headers):** """javascript // Server-side code (e.g., Node.js with Express) app.get('/api/data', (req, res) => { // Set the Cache-Control header res.set('Cache-Control', 'public, max-age=3600'); // Cache for 1 hour // ... fetch and send data ... }); """ ## 3. Technology-Specific Considerations ### 3.1. JavaScript/TypeScript * **Do This:** * Use modern JavaScript features (e.g., "async/await", "const/let") for improved readability and performance. * Use TypeScript's type system to catch errors early and improve code maintainability. * Use "Array.map", "Array.filter", and "Array.reduce" instead of "for" loops where appropriate for more concise and potentially faster code. * Use "import" and "export" for modular code that utilizes tree shaking * **Don't Do This:** * Use legacy JavaScript features that are less performant or more difficult to understand. * Ignore TypeScript's type checking. ### 3.2. React * **Do This:** * Use "React.memo" to prevent unnecessary re-renders of pure components. * Use "useCallback" and "useMemo" to memoize functions and values. * Use keys effectively when rendering lists. * Profile your components using the React Profiler to identify performance bottlenecks. * Use code splitting and lazy loading with "React.lazy" and "Suspense". * **Don't Do This:** * Rely solely on "shouldComponentUpdate" for preventing re-renders (use "React.memo" instead). * Create new objects or functions inside render methods. ### 3.3. Node.js * **Do This:** * Use asynchronous operations and event loops effectively. * Optimize database queries. * Use connection pooling to reduce database connection overhead. * Use caching mechanisms (e.g., Redis, Memcached). * Profile your application using tools like Clinic.js to identify performance bottlenecks. * **Don't Do This:** * Perform blocking operations in the main event loop. ## 4. Profiling and Monitoring ### 4.1. Performance Audits * Perform regular performance audits using tools like Lighthouse, WebPageTest, or Chrome DevTools to identify areas for improvement. ### 4.2. Monitoring * Implement monitoring solutions to track key performance indicators (KPIs) such as response time, error rate, and resource utilization. Use these KPIs to proactively identify and address performance issues. * Consider using tools like Prometheus, Grafana, or Datadog for advanced monitoring and alerting. ## 5. Continuous Improvement * **Do This:** Regularly review and update these standards to reflect the latest best practices and technology advancements. Encourage developers to propose improvements and share their knowledge. By adhering to these performance optimization standards, development teams can build high-performing Monorepo applications that deliver excellent user experiences and are easy to maintain. Remember that performance optimization is an ongoing process that requires continuous monitoring, analysis, and refinement.
# Testing Methodologies Standards for Monorepo This document outlines the testing methodology standards for our monorepo. It aims to guide developers in creating robust, reliable, and maintainable code. These standards are designed to enhance maintainability, improve developer velocity, and ensure code quality across the entire monorepo. This document serves as a reference for developers and a context for AI-assisted coding tools. ## 1. Introduction to Monorepo Testing Testing in a monorepo architecture presents unique challenges and opportunities compared to traditional, multi-repo setups. Centralized code necessitates a holistic testing strategy that accounts for inter-package dependencies and potential ripple effects of changes. The goal is to maintain high confidence in code correctness, stability, and performance with efficient and effective testing methodologies. ### 1.1. Key Principles * **Test Pyramid:** Implement a test strategy that follows the test pyramid, emphasizing unit tests, followed by integration tests, and then end-to-end tests. * **Test Automation:** Automate testing at all levels to ensure consistent and repeatable results. * **Parallel Execution:** Leverage monorepo tooling to parallelize test execution across packages to reduce overall testing time. * **Isolation:** Isolate tests to prevent interference from external systems or other packages. Provide appropriate mocking and stubbing. * **Code Coverage:** Aim for high code coverage to identify untested code paths, but prioritize meaningful tests over simply achieving a coverage percentage. * **Continuous Integration/Continuous Deployment (CI/CD):** Integrate testing into a CI/CD pipeline to automatically run tests on every commit. * **Contract Testing:** Utilize contract testing to verify interactions between services or modules. ### 1.2. Monorepo Specific Considerations * **Dependency Management:** Pay close attention to inter-package dependencies when designing tests. Changes in one package can affect others, so tests must account for potential ripple effects. * **Scoped Testing:** Implement mechanisms for running tests selectively (e.g., only tests in changed packages and their dependents). * **Shared Tooling:** Leverage shared testing infrastructure and utilities to maintain consistency and reduce duplication. (e.g., shared Jest configurations, custom matchers, testing libraries). * **Impact Analysis:** Use tooling to analyze the impact of changes before running tests, optimizing which tests need to be executed. ## 2. Unit Testing Unit tests verify the functionality of individual units of code (e.g., functions, classes, components) in isolation. They are the foundation of a robust testing strategy. ### 2.1. Standards * **Do This:** * Write unit tests for all non-trivial code. * Focus on testing the public API of modules and components. * Use mocking and stubbing to isolate units of code from their dependencies. * Write tests that are fast, reliable, and easy to understand. * Use descriptive test names that clearly indicate what is being tested. * Follow the Arrange-Act-Assert (AAA) pattern. * **Don't Do This:** * Skip unit tests for "simple" code. Even simple code can have subtle bugs. * Write unit tests that test implementation details. These tests are brittle and prone to breaking when the implementation changes. * Over-mock or over-stub, which can lead to tests that don't accurately reflect the behavior of the system. * Write slow or unreliable unit tests. These tests will slow down the development process and erode confidence. * Use vague or ambiguous test names. ### 2.2. Code Examples (JavaScript/TypeScript) """typescript // example.ts export function add(a: number, b: number): number { return a + b; } export function greet(name: string): string { if (!name) { throw new Error("Name cannot be empty"); } return "Hello, ${name}!"; } """ """typescript // example.test.ts (using Jest) import { add, greet } from './example'; describe('add', () => { it('should add two numbers correctly', () => { // Arrange const a = 2; const b = 3; // Act const result = add(a, b); // Assert expect(result).toBe(5); }); }); describe('greet', () => { it('should greet a person with their name', () => { expect(greet('Alice')).toBe('Hello, Alice!'); }); it('should throw an error if the name is empty', () => { expect(() => greet('')).toThrowError("Name cannot be empty"); }); }); """ ### 2.3. Anti-Patterns * **Testing implementation details:** Testing private methods or internal state. * **Over-mocking:** Mocking excessively can make the tests less effective in identifying real bugs. ### 2.4. Technology-Specific Details * Use Jest, Mocha, or Jasmine for JavaScript/TypeScript testing. Jest is recommended for React applications. * Use appropriate assertion libraries (e.g., Chai, Jest's built-in assertions). * Configure test runners to run in parallel and watch mode. * Use code coverage tools to measure the effectiveness of unit tests. Istanbul (nyc) integrates well with Jest. Configure "nyc" to exclude test files and generated code. * Use mocking libraries like "jest.mock" or "sinon" strategically only when necessary to isolate the unit under test. ## 3. Integration Testing Integration tests verify the interactions between different units of code or modules. They provide confidence that the system works correctly as a whole, bridging the gap between unit and end-to-end (E2E) tests. ### 3.1. Standards * **Do This:** * Write integration tests that verify the interactions between different modules or services within the monorepo. * Focus on testing the flow of data through the system. * Use real dependencies or lightweight test doubles. * Write tests that are more comprehensive than unit tests but faster than E2E tests. * Ensure that integration tests clean up any test data after they run. * **Don't Do This:** * Write integration tests that are too broad, testing too many components at once. * Use mocks for everything. Integration tests should verify real interactions. * Neglect to clean up test data. This can lead to tests that fail intermittently or pollute the environment. ### 3.2. Code Examples (Node.js/TypeScript with Express) """typescript // user-service.ts import { add } from './math-service'; // Assuming math-service is another module export class UserService { createUser(firstName: string, lastName: string): string { const userId = add(firstName.length, lastName.length); return "user-${userId}"; } } """ """typescript // math-service.ts export function add(a: number, b: number): number { return a + b; } """ """typescript // user-service.test.ts (using Jest) import { UserService } from './user-service'; import * as mathService from './math-service'; describe('UserService', () => { it('should create a user with a generated ID based on math-service', () => { const userService = new UserService(); //Mock the specific function which allows testing the service independantly jest.spyOn(mathService, 'add').mockReturnValue(10); const userId = userService.createUser('John', 'Doe'); expect(userId).toBe('user-10'); expect(mathService.add).toHaveBeenCalledWith('John'.length, 'Doe'.length); }); }); """ ### 3.3. Anti-Patterns * **Testing through the UI:** Integration tests should focus on backend interactions, not UI components. * **Not using a test database:** Use a separate database for testing to avoid affecting production data. * **Relying on external services:** Mock external services or use test doubles (e.g., using "nock" to intercept HTTP requests). ### 3.4. Technology-Specific Details * Use tools like Supertest for testing HTTP endpoints in Node.js. * Use dependency injection to make it easier to replace dependencies with test doubles. * Consider using Docker Compose to set up test environments with multiple services. ## 4. End-to-End (E2E) Testing E2E tests simulate real user interactions with the application. They provide the highest level of confidence that the system works correctly from end-to-end. These are significantly slower than unit and integration tests but critical for verifying the overall system behavior. ### 4.1. Standards * **Do This:** * Write E2E tests that cover the most critical user flows. * Use real browsers or headless browser environments (e.g., Playwright, Cypress, Puppeteer). * Set up the test environment automatically before each test run. * Clean up the test environment after each test run. * Write tests that are reliable and repeatable. * **Don't Do This:** * Write too many E2E tests. Focus on the most critical user flows. * Write E2E tests that are brittle or flaky. * Run E2E tests too frequently. Ideally within the CI/CD pipeline on merges/releases or nightly builds. ### 4.2. Code Examples (Playwright - Typescript Preferred) """typescript // playwright.config.ts import { defineConfig, devices } from '@playwright/test'; export default defineConfig({ testDir: './tests', fullyParallel: true, reporter: 'html', use: { baseURL: 'http://localhost:3000', trace: 'on-first-retry', }, projects: [ { name: 'chromium', use: { ...devices['Desktop Chrome'] }, }, ], }); """ """typescript // tests/example.spec.ts import { test, expect } from '@playwright/test'; test('should navigate to the about page', async ({ page }) => { await page.goto('/'); await page.getByRole('link', { name: 'About' }).click(); await expect(page).toHaveURL(/.*about/); await expect(page.locator('h1')).toContainText('About Us'); }); test('should allow a user to log in', async ({ page }) => { await page.goto('/login'); await page.fill('input[name="username"]', 'testuser'); await page.fill('input[name="password"]', 'password123'); await page.click('button[type="submit"]'); await page.waitForURL('/dashboard'); // Or any URL after login await expect(page.locator('#dashboard-title')).toContainText('Dashboard'); }); """ ### 4.3. Anti-Patterns * **Relying on the UI for setup:** Whenever possible, use APIs for test setup and teardown rather than the UI. This makes tests faster and more reliable. * **Not waiting for elements to load:** Use explicit waits to ensure that elements are fully loaded before interacting with them. ### 4.4. Technology-Specific Details * Use Playwright, Cypress, or Puppeteer for E2E testing. Playwright is currently favored for its speed, reliability, and multi-browser support. * Use Docker to create consistent test environments. * Use environment variables to configure tests for different environments (e.g., staging, production). * Implement retries to reduce flakiness in E2E tests. Playwright and Cypress have built-in retry mechanisms. * Integrate visual regression testing to catch unexpected UI changes. Tools like Percy or Applitools can be used. ## 5. Monorepo Testing Strategies Adapting testing strategies to the monorepo context requires optimizing test execution and understanding interdependencies. ### 5.1. Selective Test Execution Only run the tests that are affected by the changes in a commit. Utilize tooling that can identify changed packages and their dependencies to select the appropriate tests. * **Do This:** * Use tools that automatically determine which packages have changed. * Configure your CI/CD system to only run tests for changed packages and their dependents. * Create a dependency graph of packages in the monorepo. * **Don't Do This:** * Run all tests for every commit. This is inefficient and slows down the development process. ### 5.2. Parallelization Run tests in parallel across multiple agents to reduce the overall testing time. Modern monorepo tools support parallel test execution. * **Do This:** * Configure test runners to run tests in parallel. * Use a CI/CD system that can distribute tests across multiple agents. * Allocate sufficient resources to your CI/CD agents to handle the parallel test load. * **Don't Do This:** * Run tests sequentially. This is slow and inefficient. ### 5.3. Code Coverage Across Packages Aggregate code coverage data across all packages in the monorepo to provide a comprehensive view of code coverage. * **Do This:** * Configure code coverage tools to generate reports for each package. * Aggregate the reports into a single dashboard to provide a complete view of code coverage. * Set code coverage thresholds to ensure that all packages are adequately tested. * **Don't Do This:** * Ignore code coverage. This makes it difficult to identify untested code paths. ### 5.4. Example: Leveraging Nx for Affected Tests Nx provides excellent support for running affected tests. """json // nx.json { "tasksRunnerOptions": { "default": { "runner": "nx-cloud", "options": { "cacheableOperations": ["build", "lint", "test", "e2e"], "accessToken": "YOUR_NX_CLOUD_ACCESS_TOKEN" } } }, "targetDefaults": { "test": { "inputs": ["default", "{workspaceRoot}/jest.preset.js"], "cache": true } } } """ To run tests affected by a commit: """bash nx affected:test --base=main --head=HEAD """ ## 6. Contract Testing Contract testing is a specialized testing technique that verifies the interactions between services, ensuring that they adhere to a defined contract. This is especially relevant when dealing with different teams owning different parts of the monorepo that interact via APIs. ### 6.1. Standards * **Do This:** * Define clear contracts between services or modules with well-defined inputs and outputs. * Implement contract tests that verify that each service adheres to its contract. * Use tools like Pact or Spring Cloud Contract to simplify the process of writing and running contract tests. * **Don't Do This:** * Assume that services will always interact correctly. Contract tests are crucial for preventing integration issues. * Neglect to update contract tests when contracts change. * Skip contract testing when changes are isolated to one service. The other side of the contract *must* also be tested. ### 6.2 Example (Pact with JavaScript) A *consumer* project wanting to consume information from the *provider* project using an API: """javascript // Consumer: consumer.test.js const { Pact } = require('@pact-foundation/pact'); const { fetchProviderData } = require('./consumer'); // This is the code under test describe('Pact Verification', () => { const provider = new Pact({ consumer: 'MyConsumer', provider: 'MyProvider', port: 1234, // Port the mock service will run on dir: path.resolve(process.cwd(), 'pacts'), // Directory to save pact files log: path.resolve(process.cwd(), 'logs', 'pact.log'), logLevel: 'info', specVersion: 2, }); beforeAll(async () => { await provider.setup() }); afterEach(async () => { await provider.verify() }); afterAll(async () => { await provider.finalize() }); describe('When a call to retrieve data from the provider is made', () => { beforeEach(() => { provider.addInteraction({ state: 'Provider has some data', uponReceiving: 'a request for the data', withRequest: { method: 'GET', path: '/data', }, willRespondWith: { status: 200, headers: { 'Content-Type': 'application/json', }, body: { message: 'Hello, Consumer!', }, }, }); }); it('should return the correct data', async () => { const data = await fetchProviderData('http://localhost:1234'); expect(data.message).toEqual('Hello, Consumer!'); }); }); }); """ """javascript // Provider: provider.test.js (using Pact CLI or library to verify pacts) const { Verifier } = require('@pact-foundation/pact'); const path = require('path'); describe('Pact Verification', () => { it('should validate the expectations of the Consumer', () => { const opts = { providerBaseUrl: 'http://localhost:3000', // Where the provider is running pactUrls: [ path.resolve(__dirname, '../pacts/myconsumer-myprovider.json'), // Path to pact file ], publishVerificationResult: true, providerVersion: '1.0.0', }; return new Verifier(opts).verifyProvider().then(output => { console.log('Pact Verification Complete!'); console.log(output); }); }); }); """ ### 6.3. Technology-Specific Details * Utilize Pact for contract testing in polyglot environments. * Spring Cloud Contract is a great option for Java-based microservices. * Clearly define the responsibilities of consumers and providers in the contract. * Automate the process of verifying contracts in the CI/CD pipeline. ## 7. Performance Testing Performance testing is vital for ensuring applications within the monorepo remain responsive and scalable. In a monorepo, performance issues in one package can potentially affect others, making this crucial. ### 7.1. Standards: * **DO**: * Conduct load, stress, and soak tests to identify bottlenecks and performance degradation. * Use tools like JMeter, Gatling, or k6 for performance testing. * Define key performance indicators (KPIs) like response time, throughput, and error rate. * Establish performance baselines to measure improvements and regressions. * **DON'T**: * Neglect performance testing until late in the development cycle. * Rely solely on manual performance evaluations. * Ignore the impact of database queries and inefficient algorithms on performance. ### 7.2: Example using k6 """javascript import http from 'k6/http'; import { sleep } from 'k6'; export const options = { vus: 10, duration: '10s', }; export default function () { http.get('http://localhost:3000/api/data'); sleep(1); } """ ### 7.3: Considerations for Monorepos: * Isolate specific packages or APIs for testing. * Use monorepo aware CI/CD tools. * Monitor resource consumption across the monorepo. ## 8. Security Testing Security testing identifies vulnerabilities in the code and ensures that the application is protected against attacks. ### 8.1. Standards * **Do This:** * Perform static analysis to identify potential security vulnerabilities in the code. * Conduct dynamic analysis to test the application for vulnerabilities during runtime. * Use tools like SonarQube, Snyk, or OWASP ZAP to automate security testing. * Follow secure coding practices to prevent common vulnerabilities like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). * Conduct regular penetration testing to identify weaknesses in the application's security. * **Don't Do This:** * Ignore security vulnerabilities. Even seemingly minor vulnerabilities can be exploited by attackers. * Rely solely on automated security testing. Manual code reviews and penetration testing are also important. ### 8.2. Technology-Specific Details * Use ESLint with security-related rules to identify potential vulnerabilities in JavaScript/TypeScript code. * Use npm audit or yarn audit to identify vulnerabilities in dependencies. * Use tools like Snyk to automatically fix vulnerabilities in dependencies. * Follow the OWASP Top 10 guidelines to prevent common web application vulnerabilities. ## 9. Documentation Clear documentation is crucial for maintainability and knowledge sharing within the codebase. ### 9.1. Standards: * **DO**: * Document the purpose and functionality of tests cases. * Document the integration and end-to-end testing environments. * **DON'T**: * Overlook the importance of keeping test documentation up to date. * Skip documenting even the most straightforward-looking tests. ## 10. Conclusion These testing methodology standards are designed to promote high-quality code within our monorepo. By adhering to these guidelines, developers can build robust, reliable, and maintainable applications that meet the needs of our users. This document should be reviewed and updated regularly to reflect the latest best practices and technologies. Remember that testing is an integral part of the development process and should be considered at every stage of the software lifecycle.
# Tooling and Ecosystem Standards for Monorepo This document outlines the recommended tooling and ecosystem standards for Monorepo projects, focusing on achieving maintainability, performance, security, and developer productivity. It provides guidelines applicable across various technology stacks commonly found in Monorepos (e.g., JavaScript/TypeScript, Python, Go) and emphasizes patterns that leverage the structure of a Monorepo effectively. ## 1. Build Systems and Task Runners Selecting the correct build system and task runner is crucial for managing dependencies, building, testing, and deploying projects within a Monorepo. ### 1.1. Choice of Build System **Standard:** Use modern build systems designed for Monorepos, such as Nx, Bazel, or Turborepo. These offer features like computation caching, affected commands, and dependency graph visualization. * **Why:** These systems optimize build times by only rebuilding what's necessary and parallelizing tasks efficiently. They also provide advanced features like remote caching, which drastically speeds up builds in CI/CD pipelines. **Do This:** * Choose Nx for TypeScript/JavaScript Monorepos due to its tight integration with Angular, React, Node.js, and other web technologies. Leverage Nx Cloud for distributed caching and task execution. * Consider Bazel for polyglot Monorepos with complex build requirements and stringent performance needs. It’s language-agnostic and focuses on correctness and reproducibility. * Opt for Turborepo for simpler TypeScript/JavaScript Monorepos where incremental builds and caching are primary concerns. **Don't Do This:** * Rely on simple "npm" or "yarn" scripts for complex Monorepo builds. These lack the necessary caching and dependency management features to scale efficiently. * Use custom-built build systems unless absolutely necessary. Modern build tools provide highly optimized, well-tested solutions. **Example (Nx - "nx.json"):** """json { "extends": "nx/presets/npm.json", "npmScope": "my-org", "tasksRunnerOptions": { "default": { "runner": "nx-cloud", "options": { "cacheableOperations": ["build", "lint", "test", "e2e"], "accessToken": "your-nx-cloud-token" // Securely store this } } }, "targetDefaults": { "build": { "dependsOn": ["^build"], "inputs": ["production", "^production"] } }, "namedInputs": { "default": ["{projectRoot}/**/*", "sharedGlobals"], "production": [ "default", "!{projectRoot}/**/?(*.)+(spec|test).[jt]s?(x)?(.snap)", "!{projectRoot}/tsconfig.spec.json", "!{projectRoot}/jest.config.[jt]s", "!{projectRoot}/.eslintrc.json" ], "sharedGlobals": [] // Define globals shared across projects, useful for caching }, "generators": { "@nx/react": { "style": "styled-components", "linter": "eslint", "bundler": "webpack" } } } """ This "nx.json" configuration: * Extends the default Nx preset for npm-based projects. * Configures Nx Cloud as the task runner, enabling distributed caching and execution. The "accessToken" should be sourced securely from environment variables, not hardcoded. * Defines which operations are cacheable (build, lint, test, e2e). * Specifies dependencies between projects for the build target. * Defines named inputs to influence the cache key. This is a key performance optimization! ### 1.2. Task Runners and Scripting **Standard:** Utilize task runners like Make, Just, or Taskfile.dev for defining and executing common development tasks. * **Why:** Task runners provide a consistent and repeatable way to run commands across different environments and projects within the Monorepo. **Do This:** * Use Makefiles for simple, shell-based tasks. They are widely supported but can become unwieldy for complex tasks. * Prefer Just or Taskfile.dev for more structured task definitions. These offer features like argument parsing and more readable syntax. **Don't Do This:** * Repeat complex commands directly in the terminal or in CI/CD scripts. This leads to inconsistencies and errors. * Use the "npm" scripts section for anything beyond very basic commands. **Example (Taskfile.dev):** """yaml version: "3" tasks: build: desc: Build all projects cmds: - | nx run-many --target=build --all --parallel lint: desc: Lint all projects cmds: - nx run-many --target=lint --all --parallel test: desc: Run tests for affected projects cmds: - | nx affected --target=test --parallel --watchAll=false format: desc: Format all projects cmds: - nx format:write """ This "Taskfile.dev" defines tasks for building, linting, testing, and formatting the Monorepo. It uses Nx commands to efficiently run these tasks across all projects or only affected projects. The "--parallel" flag leverages Nx's parallel execution capabilities. ## 2. Dependency Management Effective dependency management is critical in a Monorepo to avoid conflicts, reduce bundle sizes, and improve build performance. ### 2.1. Versioning Strategy **Standard:** Choose a consistent versioning strategy across all packages within the Monorepo. Semantic versioning (SemVer) is highly recommended. * **Why:** Clear versioning allows for controlled updates and prevents breaking changes from propagating unexpectedly. **Do This:** * Use independent versioning if projects evolve at different paces. Each package gets its own version number. Tools like "lerna" (with "independent" mode) or "pnpm" are useful here. This maximizes compatibility. * Consider fixed/synchronized versioning if projects are tightly coupled and released together. All packages share the same version number simplify dependency management but require careful coordination. **Don't Do This:** * Allow divergent versioning practices across different projects. This creates dependency conflicts and maintenance headaches. * Use wildcard versions ("*") or overly broad version ranges ("^1.0.0") in production dependencies. Prefer specific versions or narrow ranges ("~1.0.0"). **Example (Independent Versioning with "pnpm"):** 1. **"pnpm-workspace.yaml":** This file defines the Monorepo workspace. """yaml packages: - 'packages/*' - 'apps/*' """ 2. **Version bumps:** Run commands to handle version updates and create changelogs: """bash pnpm -r changeset # Creates a changeset file describing the changes pnpm changeset version # Bumps versions and generates changelogs based on changesets """ ### 2.2. Dependency Sharing and Hoisting **Standard:** Utilize dependency hoisting to reduce duplication and minimize the overall installation size. Tools like "pnpm" and "yarn" offer built-in hoisting mechanisms. * **Why:** Hoisting reduces the number of packages installed, saving disk space and improving install times. **Do This:** * Use "pnpm" as your primary package manager. Its strictness and built-in support for Monorepos make it ideal. It creates a non-flat "node_modules" structure, avoiding many common issues with dependency resolution leading to reproducibility and fewer subtle bugs. * If using "yarn", enable the "nodeLinker: pnp" setting in ".yarnrc.yml" for a more efficient dependency resolution strategy. Plug'n'Play (PnP) eliminates the "node_modules" folder altogether, further optimizing load times. * Avoid relying on implicit dependencies. Explicitly declare all dependencies in each project's "package.json". **Don't Do This:** * Disable dependency hoisting. This leads to unnecessary duplication and increased install times. * Accidentally introduce peer dependency conflicts. Carefully manage peer dependencies to ensure compatibility across all projects. **Example ("pnpm" hoisting and strict mode):** """ # .npmrc strict=true shamefully-hoist=false """ * "strict=true": Ensures that all dependencies are explicitly declared, preventing accidental reliance on transitive dependencies. * "shamefully-hoist=false": Disables hoisting to a limited extent, enforcing that most dependencies are only available to the packages that declare them. While this might increase installation size slightly, it drastically improves dependency isolation and reduces the risk of unexpected behavior, especially in large Monorepos. Proper use of workspaces and explicit dependencies becomes even *more* crucial. ## 3. Code Sharing and Modularity Effective code sharing and modularity are essential for reducing code duplication, promoting reuse, and simplifying maintenance within a Monorepo. ### 3.1. Shared Libraries **Standard:** Create shared libraries for common functionality that is used across multiple projects. * **Why:** Shared libraries reduce code duplication, promote consistency, and simplify updates. One change to a shared library benefits all dependent projects. **Do This:** * Group related functions and components into cohesive modules. * Publish shared libraries as internal packages within the Monorepo. * Use a clear naming convention for shared libraries (e.g., "@my-org/ui-components", "@my-org/utils"). * Document shared libraries thoroughly with JSDoc, TypeDoc, or similar tools. * Enforce a style guide across all projects for consistent function/method naming (e.g., use camelCase). **Don't Do This:** * Copy and paste code between projects. * Create overly large or tightly coupled shared libraries. * Expose internal implementation details in shared libraries. **Example (Shared Utility Library in TypeScript with Nx):** 1. **Create a library:** """bash nx generate @nx/js:library utils --name=my-utils --directory=shared --importPath=@my-org/shared/my-utils """ 2. **Implement utilities (e.g., "shared/my-utils/src/lib/my-utils.ts"):** """typescript export function formatString(str: string): string { return str.trim().toLowerCase(); } export function calculateSum(a: number, b: number): number { return a + b; } """ 3. **Use the library (e.g., in a React component):** """typescript import { formatString, calculateSum } from '@my-org/shared/my-utils'; function MyComponent() { const formattedText = formatString(" Hello World "); const sum = calculateSum(5, 10); return ( <div> <p>Formatted Text: {formattedText}</p> <p>Sum: {sum}</p> </div> ); } """ ### 3.2. Component Libraries **Standard:** Develop reusable UI component libraries to promote consistency and maintainability across front-end applications. * **Why:** Component libraries ensure a unified user experience, reduce development time, and simplify theming and branding. **Do This:** * Use component-driven development (CDD) to build and test components in isolation. * Document component APIs and usage with tools like Storybook. * Use a consistent styling approach (e.g., CSS Modules, styled-components, Tailwind CSS). * Maintain component accessibility (e.g., using ARIA attributes). **Don't Do This:** * Duplicate component implementations across different applications. * Create overly complex or inflexible components. * Neglect component documentation. **Example (React Component Library with Storybook and styled-components):** 1. **Install dependencies:** """bash npm install styled-components @types/styled-components react-dom react storybook --save-dev """ 2. **Create a button component (e.g., "shared/ui/src/lib/button/button.tsx"):** """typescript import styled from 'styled-components'; interface ButtonProps { children: React.ReactNode; onClick?: () => void; primary?: boolean; } const StyledButton = styled.button<ButtonProps>" background-color: ${(props) => (props.primary ? '#007bff' : '#fff')}; color: ${(props) => (props.primary ? '#fff' : '#000')}; border: 1px solid #007bff; padding: 10px 20px; cursor: pointer; &:hover { background-color: ${(props) => (props.primary ? '#0056b3' : '#f0f0f0')}; } "; export const Button: React.FC<ButtonProps> = ({ children, onClick, primary }) => { return <StyledButton onClick={onClick} primary={primary}>{children}</StyledButton>; }; """ 3. **Create a Storybook story (e.g., "shared/ui/src/lib/button/button.stories.tsx"):** """typescript import { Button } from './button'; export default { title: 'UI/Button', component: Button, }; const Template = (args) => <Button {...args} />; export const Primary = Template.bind({}); Primary.args = { primary: true, children: 'Primary Button', }; export const Secondary = Template.bind({}); Secondary.args = { children: 'Secondary Button', }; """ ## 4. Linting and Formatting Consistent code style is crucial for readability and maintainability within a Monorepo. ### 4.1. Code Style Enforcement **Standard:** Use linters (e.g., ESLint, TSLint, Pylint, GoLint) and formatters (e.g., Prettier, Black, Go FMT) to enforce consistent code style and catch potential errors. * **Why:** Automated linting and formatting ensure a unified code base, reduce code review time, and prevent common style-related issues. **Do This:** * Configure linters and formatters with shared configuration files that are applied across all projects in the Monorepo. * Integrate linting and formatting into the CI/CD pipeline to automatically check code style on every commit. * Use editor integrations to automatically format code on save. Consider configuring git hooks ("pre-commit") to autoformat staged changes. * Establish a common set of rules that are applied to all components. **Don't Do This:** * Allow individual projects to deviate from the established code style. * Ignore linter warnings or errors. Treat them as bugs. * Skip formatting code. **Example (ESLint and Prettier configuration with Nx):** 1. **.eslintrc.json (root):** """json { "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended", "prettier"], "plugins": ["@typescript-eslint"], "rules": { "no-unused-vars": "warn", "@typescript-eslint/explicit-function-return-type": "off", "semi": ["error", "always"] }, "ignorePatterns": ["node_modules/", "dist/"] } """ 2. **.prettierrc.js (root):** """javascript module.exports = { semi: true, trailingComma: "es5", singleQuote: true, printWidth: 120, tabWidth: 2, useTabs: false, }; """ 3. **Apply to Nx project:** The Nx CLI automates applying these configurations when the projects are created. ### 4.2. Commit Style **Standard:** Adopt a consistent commit style, such as Conventional Commits, to automate changelog generation and streamline the release process. * **Why:** Structured commit messages provide valuable information about the changes made, making it easier to understand the history of the codebase and automate tasks like versioning and release management. **Do This:** * Use a commit message format like "feat(scope): description" or "fix(scope): description". Common types include "feat", "fix", "chore", "docs", "style", "refactor", "test", and "ci". * Use scopes to indicate which part of the Monorepo is affected by the commit (e.g., "feat(ui): add new button style"). * Include a detailed description of the changes in the commit message body. * Utilize tools like Commitlint to enforce commit message conventions. **Don't Do This:** * Use vague or uninformative commit messages. * Skip the commit message body. **Example (Commitlint configuration):** 1. **Install dependencies:** """bash npm install @commitlint/cli @commitlint/config-conventional --save-dev """ 2. **Create "commitlint.config.js":** """javascript module.exports = { extends: ['@commitlint/config-conventional'] }; """ 3. **Configure Git hook using Husky:** """bash npm install husky --save-dev npx husky install npx husky add .husky/commit-msg 'npx --no-install commitlint --edit "$1"' """ This setup enforces Conventional Commits by validating commit messages against the "@commitlint/config-conventional" rules. This makes your commits standardized allowing you to automatically determine semantic versioning and release notes generation. ## 5. Tool Integrations **Standard:** Integrate the Monorepo tooling with IDEs, CI/CD systems, and other development tools to streamline the development workflow. * **Why:** Integrations reduce context switching, automate tasks, and improve developer productivity. **Do This:** * Use IDE extensions for linting, formatting, and code completion. VS Code, Sublime Text, and other popular IDEs have plugins * Configure CI/CD pipelines to automatically build, test, and deploy code changes. * Use tools like GitHub Actions, GitLab CI, or CircleCI for CI/CD. * Integrate SonarQube or similar tools for static code analysis and security vulnerability detection. * Adopt Trunk.io to centralize linting, formatting, security checks, and pre-commit hooks. Trunk integrates seamlessly into the development workflow to ensure consistency in code quality. **Don't Do This:** * Rely on manual processes for building, testing, and deploying code. * Ignore integration opportunities that can improve developer productivity. **Example (GitHub Actions for CI/CD with Nx):** """yaml name: CI/CD on: push: branches: [main] pull_request: branches: [main] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 with: fetch-depth: 0 # Required for Nx affected commands - name: Setup Node.js uses: actions/setup-node@v3 with: node-version: 18 # You can define the node version. - name: Install dependencies run: npm ci - name: Run linters run: nx run-many --target=lint --all --parallel - name: Run tests run: nx affected --target=test --parallel --watchAll=false - name: Build affected apps and libs run: nx affected --target=build --parallel --prod - name: Upload artifacts uses: actions/upload-artifact@v3 with: name: dist path: dist """ This GitHub Actions workflow: * Triggers on push and pull requests to the "main" branch. * Sets up Node.js. * Installs dependencies. * Runs linters and tests using Nx. * Builds affected apps and libraries using Nx. * Uploads the build artifacts. This comprehensive tooling and ecosystem standard provides a foundation for building and maintaining robust, scalable, and maintainable Monorepo projects. Adhering to these guidelines will lead to increased developer productivity, reduced maintenance costs, and improved overall code quality, especially with AI coding assists leveraging the structured documentation with standards.