# Component Design Standards for Azure
This document outlines component design standards for developing applications on Azure. It focuses on creating reusable, maintainable, and performant components within the Azure ecosystem. These standards will help improve code quality, reduce development time, and ensure long-term application health.
## 1. General Principles
### 1.1. Reusability
* **Do This:** Design components to be reusable across different parts of the application or even across different applications.
* **Don't Do This:** Create monolithic, tightly-coupled components that are difficult to reuse or modify.
**Why:** Reusable components reduce code duplication, making applications easier to maintain and update. They also promote consistency across different parts of the system.
**Example (Azure Functions):**
"""csharp
// Reusable function to log events to Azure Table Storage
public static class LoggingHelper
{
public static async Task LogEvent(string partitionKey, string rowKey, string message, ILogger log)
{
string storageAccountName = "yourstorageaccountname";
string storageAccountKey = "yourstorageaccountkey";
string tableName = "Logs";
string storageConnectionString = $"DefaultEndpointsProtocol=https;AccountName={storageAccountName};AccountKey={storageAccountKey};EndpointSuffix=core.windows.net";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tableName);
await table.CreateIfNotExistsAsync();
var logEntity = new DynamicTableEntity(partitionKey, rowKey);
logEntity.Properties.Add("Message", new EntityProperty(message));
TableOperation insertOperation = TableOperation.Insert(logEntity);
try
{
await table.ExecuteAsync(insertOperation);
log.LogInformation($"Logged event to Table Storage: PartitionKey={partitionKey}, RowKey={rowKey}");
}
catch (StorageException ex)
{
log.LogError($"Error logging event: {ex.Message}");
}
}
}
// Usage in an Azure Function
public static class MyFunction
{
[FunctionName("MyFunction")]
public static async Task Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
ILogger log)
{
log.LogInformation("Function executed.");
await LoggingHelper.LogEvent("FunctionLogs", Guid.NewGuid().ToString(), "MyFunction ran successfully.", log);
}
}
"""
### 1.2. Single Responsibility Principle (SRP)
* **Do This:** Ensure each component has a single, well-defined responsibility.
* **Don't Do This:** Create components that perform multiple unrelated tasks.
**Why:** SRP makes components easier to understand, test, and modify. Changes to one responsibility don't affect unrelated parts of the component.
**Example (Azure Logic Apps):** Instead of having one Logic App perform multiple complex tasks, break it down into smaller, more focused Logic Apps. One Logic App might be responsible for receiving a message from a queue, another for processing the message, and a third for sending the result to storage.
### 1.3. Abstraction and Encapsulation
* **Do This:** Use interfaces and abstract classes to define contracts for component behavior. Hide implementation details behind well-defined interfaces.
* **Don't Do This:** Directly expose internal implementation details of a component.
**Why:** Abstraction reduces dependencies between components and allows for easier swapping of implementations without affecting other parts of the system. Encapsulation protects the internal state of a component and prevents unintended modifications.
**Example (Azure Cosmos DB data access):**
"""csharp
// Interface for Cosmos DB data access
public interface ICosmosDbService
{
Task GetItemAsync(string id);
Task> GetItemsAsync(string query);
Task AddItemAsync(T item);
Task UpdateItemAsync(string id, T item);
Task DeleteItemAsync(string id);
}
// Implementation of the interface (hides Cosmos DB specific details)
public class CosmosDbService : ICosmosDbService
{
private readonly CosmosClient _cosmosClient;
private readonly string _databaseName;
private readonly string _containerName;
public CosmosDbService(CosmosClient cosmosClient, string databaseName, string containerName)
{
_cosmosClient = cosmosClient ?? throw new ArgumentNullException(nameof(cosmosClient));
_databaseName = databaseName ?? throw new ArgumentNullException(nameof(databaseName));
_containerName = containerName ?? throw new ArgumentNullException(nameof(containerName));
}
public async Task GetItemAsync(string id)
{
//Implementation details using _cosmosClient
// using the new .NET 8 SDK is highly recommended for performance + features
try
{
var container = _cosmosClient.GetContainer(_databaseName, _containerName);
ItemResponse response = await container.ReadItemAsync(id, new PartitionKey(id)); // Partition Key MUST be provided
return response.Resource;
}
catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
{
return default;
}
}
public async Task> GetItemsAsync(string query)
{
//Implementation details using _cosmosClient
var container = _cosmosClient.GetContainer(_databaseName, _containerName);
var queryDefinition = new QueryDefinition(query);
FeedIterator queryResultSetIterator = container.GetItemQueryIterator(queryDefinition);
List results = new List();
while (queryResultSetIterator.HasMoreResults)
{
FeedResponse currentResultSet = await queryResultSetIterator.ReadNextAsync();
foreach (T item in currentResultSet)
{
results.Add(item);
}
}
return results;
}
public async Task AddItemAsync(T item)
{
var container = _cosmosClient.GetContainer(_databaseName, _containerName);
await container.CreateItemAsync(item, new PartitionKey(item.Id));
}
public async Task UpdateItemAsync(string id, T item)
{
var container = _cosmosClient.GetContainer(_databaseName, _containerName);
await container.ReplaceItemAsync(item, id, new PartitionKey(id));
}
public async Task DeleteItemAsync(string id)
{
var container = _cosmosClient.GetContainer(_databaseName, _containerName);
await container.DeleteItemAsync(id, new PartitionKey(id));
}
}
"""
### 1.4. Loose Coupling
* **Do This:** Minimize dependencies between components. Use dependency injection, event-driven architectures, and message queues to decouple components.
* **Don't Do This:** Create tightly coupled components that directly depend on each other's internal implementations.
**Why:** Loose coupling makes the system more flexible, easier to change, and easier to test. Changes to one component are less likely to have unintended consequences on other components.
**Example (Azure Service Bus):** Use Service Bus queues or topics to send messages between components without requiring direct knowledge of each other.
"""csharp
// Sending a message to a Service Bus queue
public static async Task SendMessageToQueue(string queueName, string messageBody, ILogger log)
{
    string connectionString = "your_service_bus_connection_string"; // Store in Key Vault
    ServiceBusClient client = new ServiceBusClient(connectionString);
    ServiceBusSender sender = client.CreateSender(queueName);
    try
    {
        ServiceBusMessage message = new ServiceBusMessage(messageBody);
        await sender.SendMessageAsync(message);
        log.LogInformation($"Sent message to queue: {queueName}");
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Error sending message to queue.");
    }
    finally
    {
        await sender.CloseAsync();
        await client.DisposeAsync();
    }
}

// Receiving messages from a Service Bus queue
public static async Task ReceiveMessageFromQueue(string queueName, ILogger log)
{
    string connectionString = "your_service_bus_connection_string"; // Store in Key Vault
    await using ServiceBusClient client = new ServiceBusClient(connectionString);
    await using ServiceBusProcessor processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());

    processor.ProcessMessageAsync += async (ProcessMessageEventArgs args) =>
    {
        string body = args.Message.Body.ToString();
        log.LogInformation($"Received message: {body}");
        await args.CompleteMessageAsync(args.Message); // Mark the message as processed
    };

    processor.ProcessErrorAsync += (ProcessErrorEventArgs args) =>
    {
        log.LogError(args.Exception, "Error receiving message.");
        return Task.CompletedTask;
    };

    await processor.StartProcessingAsync();
    Console.WriteLine("Press any key to stop processing...");
    Console.ReadKey();
    await processor.StopProcessingAsync();
}
"""
### 1.5. Maintainability
* **Do This:** Write clean, well-documented code. Use consistent naming conventions and formatting.
* **Don't Do This:** Write complex, undocumented code that is difficult to understand and modify.
**Why:** Maintainable code is easier to update, debug, and extend. This reduces the cost of ownership over time.
### 1.6. Performance
* **Do This:** Design components with performance in mind. Use efficient algorithms and data structures. Optimize for the specific requirements of the Azure platform.
* **Don't Do This:** Ignore performance considerations during the design phase.
**Why:** Performance is critical for cloud applications. Poorly designed components can lead to slow response times and higher costs. Consider using tools like Azure Monitor for performance profiling. Caching is also critical to prevent unnecessary repeated operations to backend services.
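The caching idea can be sketched without any external service. The following hand-rolled in-process cache is illustrative only (the `GetOrAdd` and `FetchFromBackend` names are invented for this sketch); production code should prefer `IMemoryCache` or Azure Cache for Redis:

```csharp
using System;
using System.Collections.Concurrent;

// Minimal in-process cache with absolute expiry. Illustrative only:
// in real services prefer IMemoryCache or Azure Cache for Redis.
var cache = new ConcurrentDictionary<string, (string Value, DateTime ExpiresAt)>();
int backendCalls = 0; // counts how often the "expensive" backend is hit

string GetOrAdd(string key, Func<string> fetch, TimeSpan ttl)
{
    if (cache.TryGetValue(key, out var entry) && entry.ExpiresAt > DateTime.UtcNow)
        return entry.Value; // cache hit: no backend call

    string value = fetch(); // cache miss: hit the backend once
    cache[key] = (value, DateTime.UtcNow.Add(ttl));
    return value;
}

string FetchFromBackend()
{
    backendCalls++;
    return "order-data"; // stand-in for a slow database or API call
}

string first = GetOrAdd("order:1", FetchFromBackend, TimeSpan.FromMinutes(5));
string second = GetOrAdd("order:1", FetchFromBackend, TimeSpan.FromMinutes(5));
Console.WriteLine($"Backend calls: {backendCalls}"); // second read is served from cache
```

The same access pattern applies at every cache tier: check the cache first, fall back to the source, and write the result back with an expiry.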
### 1.7. Security
* **Do This:** Design components with security in mind. Use secure coding practices to prevent vulnerabilities such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Protect sensitive data using encryption and access controls. Leverage Azure Key Vault to store secrets.
* **Don't Do This:** Ignore security considerations during the design phase.
**Why:** Security is paramount for cloud applications. Vulnerabilities can lead to data breaches and other security incidents. Consider using tools like Azure Security Center to identify and mitigate security risks.
## 2. Component Types in Azure
### 2.1. Azure Functions
* **Do This:** Use Azure Functions for small, independent units of work that can be triggered by various events.
* **Don't Do This:** Use Azure Functions for long-running processes or complex workflows; use Durable Functions instead for stateful workflows.
**Specific Standards:**
* Keep functions short and focused.
* Use dependency injection to manage dependencies.
* Handle exceptions gracefully.
* Use asynchronous programming to avoid blocking threads.
* Consider using bindings to simplify integration with Azure services.
**Example:** A simple Azure Function triggered by a queue message:
"""csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
namespace FunctionApp1;
public class QueueTriggerCSharp
{
    [FunctionName("QueueTriggerCSharpEx")]
    public void Run(
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    }
}
"""
### 2.2. Azure Logic Apps
* **Do This:** Use Azure Logic Apps to orchestrate complex workflows and integrate different systems.
* **Don't Do This:** Use Azure Logic Apps for computationally intensive tasks.
**Specific Standards:**
* Break down complex workflows into smaller, more manageable Logic Apps.
* Use connectors to integrate with different services.
* Use error handling to handle failures gracefully.
* Use parameters and variables to make Logic Apps more configurable.
**Example:** A Logic App that receives a message from a queue, processes it, and sends an email. Use the Azure portal designer for visually creating and managing Logic Apps.
### 2.3. Azure Web Apps
* **Do This:** Use Azure Web Apps for hosting web applications and APIs.
* **Don't Do This:** Use Azure Web Apps for long-running background processes. Consider Azure Functions or Azure Container Apps for those scenarios.
**Specific Standards:**
* Use a well-defined architecture, such as Model-View-Controller (MVC) or Representational State Transfer (REST).
* Use dependency injection to manage dependencies.
* Use logging and monitoring to track application health.
* Secure web apps using authentication and authorization.
* Optimize web apps for performance using caching, compression, and CDNs.
**Example (ASP.NET Core Web API):** A simple REST API endpoint that handles a GET request.
"""csharp
using Microsoft.AspNetCore.Mvc;
namespace MyWebApp.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class MyController : ControllerBase
    {
        [HttpGet]
        public ActionResult<string> Get()
        {
            return "Hello from my Web API!";
        }
    }
}
"""
### 2.4. Azure Container Apps
* **Do This:** Use Azure Container Apps to deploy and manage containerized applications and microservices. Especially useful for event-driven applications, or processing background tasks.
* **Don't Do This:** Use Azure Container Apps for simple web applications; stick with Web Apps for simplicity in that case.
**Specific Standards:**
* Use a robust CI/CD pipeline with your preferred container registry to deploy changes safely and reliably across new revisions.
* Implement health probes (liveness/readiness) so that Azure Container Apps can automatically restart unhealthy containers.
* Consider autoscaling based on CPU, memory, or custom metrics.
**Example (Azure Container App with scaling)**
"""yaml
name: my-container-app
properties:
  managedEnvironmentId: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.App/managedEnvironments/<environment-name>
  configuration:
    ingress:
      external: true
      targetPort: 80
    secrets:
      - name: mysecret
        value: <registry-password>
    registries:
      - server: docker.io
        username: <registry-username>
        passwordSecretRef: mysecret
    activeRevisionsMode: Single
    dapr:
      enabled: false
      appId: my-container-app
  template:
    containers:
      - image: docker.io/<username>/my-image:latest
        name: my-container
        resources:
          cpu: 0.5
          memory: 1Gi
        ports:
          - port: 80
            protocol: TCP
        env:
          - name: MY_ENV_VAR
            value: "my_env_value"
    scale:
      minReplicas: 1
      maxReplicas: 10
      rules:
        - name: http-request-rule
          http:
            metadata:
              concurrentRequests: '50'
            auth:
              secretRef: mysecret_apikey
    revisionSuffix: latest
"""
### 2.5. Azure Data Components (Cosmos DB, SQL Database, etc.)
* **Do This:** Choose the right data store for the specific requirements of the application.
* **Don't Do This:** Use a single data store for all types of data without considering performance, scalability, and cost.
**Specific Standards:**
* Use connection pooling to improve performance.
* Use parameterized queries to prevent SQL injection.
* Use indexing to optimize query performance.
* Use appropriate data types to minimize storage costs.
* Ensure that components interacting with Azure Data components properly handle retries.
**Example (Azure SQL Database connection pooling):** Using Entity Framework Core with connection pooling enabled by default. Handle transient errors using Polly or similar retry libraries.
"""csharp
public class MyDbContext : DbContext
{
    public MyDbContext(DbContextOptions<MyDbContext> options) : base(options)
    {
    }

    public DbSet<MyEntity> MyEntities { get; set; }
}

// In Startup.cs or Program.cs
services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(
        Configuration.GetConnectionString("MyDbConnection"),
        sqlOptions => sqlOptions.EnableRetryOnFailure())); // Built-in retries for transient SQL errors
"""
## 3. Design Patterns for Azure
### 3.1. Gateway Pattern
* **Description:** A gateway provides a single point of entry for all requests to a system or microservice.
* **Azure Implementation:** Use Azure API Management (APIM) as a gateway to manage and secure APIs, apply policies, and monitor usage or Azure Front Door as a global ingress point.
### 3.2. Circuit Breaker Pattern
* **Description:** Prevents an application from repeatedly trying to access a failing service.
* **Azure Implementation:** Implement the Circuit Breaker pattern using libraries like Polly in C# and integrate with monitoring tools like Application Insights to track circuit breaker state.
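Polly provides a production-ready circuit breaker; the state machine behind it can be sketched in plain C# with no dependencies. This is a hand-rolled, illustrative sketch (the `Call` helper, thresholds, and return strings are invented for the example):

```csharp
using System;

// Minimal closed/open circuit-breaker sketch. Illustrative only:
// use Polly's circuit-breaker policies in real code.
int consecutiveFailures = 0;
const int FailureThreshold = 3;
DateTime openUntil = DateTime.MinValue;

string Call(Func<string> operation)
{
    if (DateTime.UtcNow < openUntil)
        return "circuit-open"; // fail fast instead of hammering a failing service

    try
    {
        string result = operation();
        consecutiveFailures = 0; // success closes the circuit again
        return result;
    }
    catch (Exception)
    {
        consecutiveFailures++;
        if (consecutiveFailures >= FailureThreshold)
            openUntil = DateTime.UtcNow.AddSeconds(30); // trip the breaker
        return "failed";
    }
}

Func<string> alwaysFails = () => throw new InvalidOperationException("service down");
string r1 = Call(alwaysFails); // failed
string r2 = Call(alwaysFails); // failed
string r3 = Call(alwaysFails); // failed -> breaker trips
string r4 = Call(alwaysFails); // short-circuited without calling the service
Console.WriteLine(r4);
```

A real implementation also needs a half-open state that probes the service after the cool-down, which Polly handles for you.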
### 3.3. Retry Pattern
* **Description:** Automatically retries failed operations to handle transient errors.
* **Azure Implementation:** Use libraries like Polly or the built-in retry policies in Azure SDKs to automatically retry failed operations.
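The retry idea can be sketched by hand to show what the libraries do under the hood. This is an illustrative sketch (the `WithRetry` and `Flaky` names are invented); prefer Polly or the Azure SDKs' built-in retry options in real code:

```csharp
using System;
using System.Threading.Tasks;

// Hand-rolled retry with exponential backoff. Illustrative only.
int attempts = 0;

async Task<string> WithRetry(Func<string> operation, int maxAttempts)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return operation();
        }
        catch (Exception) when (attempt < maxAttempts) // in real code, catch only transient errors
        {
            // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt - 1)));
        }
    }
}

// A flaky operation that fails twice, then succeeds (simulates a transient error).
string Flaky()
{
    attempts++;
    if (attempts < 3) throw new TimeoutException("transient");
    return "ok";
}

string result = await WithRetry(Flaky, maxAttempts: 5);
Console.WriteLine($"Result: {result}, attempts: {attempts}");
```

Note the exception filter: retrying non-transient errors (e.g., authorization failures) only wastes time, so production policies retry a specific list of exception types or status codes.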
### 3.4. Queue-Based Load Leveling
* **Description:** Uses a queue to buffer requests and smooth out load spikes.
* **Azure Implementation:** Use Azure Service Bus queues or Azure Storage queues to buffer requests between components.
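The load-leveling effect can be sketched in-process with `System.Threading.Channels`: a fast producer enqueues a burst, and a slower consumer drains it at its own pace. In Azure, the channel's role is played by a Service Bus or Storage queue; this sketch is illustrative only:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Bounded channel = back-pressure: a full queue slows the producer down
// instead of overwhelming the consumer.
var queue = Channel.CreateBounded<int>(capacity: 100);
int processed = 0;

// Producer: a burst of 20 requests arrives at once.
for (int i = 0; i < 20; i++)
    await queue.Writer.WriteAsync(i);
queue.Writer.Complete();

// Consumer: drains the burst one item at a time, smoothing the spike.
await foreach (int item in queue.Reader.ReadAllAsync())
{
    processed++; // stand-in for real work (DB write, downstream API call, ...)
}

Console.WriteLine($"Processed {processed} requests");
```

The bounded capacity is the key design choice: it converts a traffic spike into queue depth rather than into consumer overload.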
### 3.5. Event-Driven Architecture
* **Description:** Components communicate through asynchronous events.
* **Azure Implementation:** Use Azure Event Grid or Azure Event Hubs to build event-driven architectures.
## 4. Naming Conventions
* **Do This:** Establish and consistently use naming conventions for all Azure resources and components.
* **Don't Do This:** Use inconsistent or unclear names that make it difficult to understand the purpose of a resource or component.
**Examples:**
* **Azure Functions:** "FunctionApp-ResourceGroup-Environment-Functionality" (e.g., "FunctionApp-MyRG-Dev-ProcessOrders")
* **Storage Accounts:** Storage account names permit only 3-24 lowercase letters and digits (no hyphens), so use a compact form such as "st" + resource group + environment + functionality (e.g., "stmyrgprodorderdata")
* **Logic Apps:** "LogicApp-ResourceGroup-Environment-Functionality" (e.g., "LogicApp-MyRG-Test-OrderProcessing")
* **Databases:** "Database-ResourceGroup-Environment-Name" (e.g., "Database-MyRG-Prod-OrdersDB")
## 5. Error Handling
* **Do This:** Implement comprehensive error handling in all components.
* **Don't Do This:** Ignore errors or allow exceptions to propagate without handling them.
**Specific Standards:**
* Use try-catch blocks to handle exceptions.
* Log errors to Azure Monitor or other logging services.
* Implement retry logic for transient errors.
* Provide meaningful error messages to users.
* Use dead-letter queues for messages that cannot be processed.
## 6. Documentation
* **Do This:** Document all components and their interfaces.
* **Don't Do This:** Neglect documentation, making it difficult for others to understand and use your components.
**Specific Standards:**
* Use inline comments to explain complex code.
* Create API documentation using tools like Swagger/OpenAPI.
* Document the purpose, inputs, and outputs of each component.
* Document any dependencies on other components or services.
## 7. Testing
* **Do This:** Write unit tests, integration tests, and end-to-end tests for all components.
* **Don't Do This:** Deploy components without adequate testing.
**Specific Standards:**
* Use mocking frameworks to isolate components during unit testing.
* Use integration tests to verify that components work together correctly.
* Use end-to-end tests to verify that the entire application works as expected.
* Automate testing using CI/CD pipelines.
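Isolating a component does not require a full mocking framework; injecting a dependency as a delegate (or behind an interface) lets a test substitute a deterministic fake. A minimal sketch, where the `TotalPrice` component and the fake price lookup are hypothetical:

```csharp
using System;

// Component under test: the price lookup is injected so tests can fake it.
decimal TotalPrice(int[] quantities, Func<int, decimal> unitPrice)
{
    decimal total = 0;
    for (int i = 0; i < quantities.Length; i++)
        total += quantities[i] * unitPrice(i); // the real lookup might query a database
    return total;
}

// In a unit test, substitute a deterministic fake for the real dependency.
Func<int, decimal> fakePrice = _ => 2.5m;
decimal total = TotalPrice(new[] { 1, 2, 3 }, fakePrice);
Console.WriteLine(total); // 6 items at 2.5 each = 15.0
```

Mocking frameworks such as Moq automate this substitution for interfaces, but the principle is the same: the test controls every dependency of the unit under test.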
## 8. Monitoring and Logging
* **Do This:** Implement comprehensive monitoring and logging for all components.
* **Don't Do This:** Deploy components without adequate monitoring and logging.
**Specific Standards:**
* Use Azure Monitor to collect metrics, logs, and traces.
* Use Application Insights to monitor application performance and availability.
* Use structured logging to make it easier to analyze logs.
* Set up alerts to notify you of critical errors or performance issues.
**Example (Logging to Application Insights):**
"""csharp
using System;
using Microsoft.Extensions.Logging;

public class MyComponent
{
    private readonly ILogger<MyComponent> _logger;

    public MyComponent(ILogger<MyComponent> logger)
    {
        _logger = logger;
    }

    public void DoSomething()
    {
        _logger.LogInformation("Doing something...");
        try
        {
            // ... some code that might throw an exception
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "An error occurred.");
        }
    }
}
"""
By adhering to these component design standards, development teams can build robust, scalable, and maintainable applications on Azure. This document should be treated as a living document, and it should be updated regularly to reflect changes in the Azure platform and best practices.
# Using .clinerules with Cline
danielsogl · Created Mar 6, 2025

This guide explains how to effectively use .clinerules with Cline, the AI-powered coding assistant. The .clinerules file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.
## File Placement
Place the .clinerules file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
## Rule File Structure
"""yaml
# Project Overview
project:
  name: 'Your Project Name'
  description: 'Brief project description'
  stack:
    - technology: 'Framework/Language'
      version: 'X.Y.Z'
    - technology: 'Database'
      version: 'X.Y.Z'

# Code Standards
standards:
  style:
    - 'Use consistent indentation (2 spaces)'
    - 'Follow language-specific naming conventions'
  documentation:
    - 'Include JSDoc comments for all functions'
    - 'Maintain up-to-date README files'
  testing:
    - 'Write unit tests for all new features'
    - 'Maintain minimum 80% code coverage'

# Security Guidelines
security:
  authentication:
    - 'Implement proper token validation'
    - 'Use environment variables for secrets'
  dataProtection:
    - 'Sanitize all user inputs'
    - 'Implement proper error handling'
"""
## Best Practices
* Be specific
* Maintain organization
* Update the rules regularly
## Common Patterns Example
"""yaml
patterns:
  components:
    - pattern: 'Use functional components by default'
    - pattern: 'Implement error boundaries for component trees'
  stateManagement:
    - pattern: 'Use React Query for server state'
    - pattern: 'Implement proper loading states'
"""
## Team Collaboration
* **Commit the Rules:** Keep .clinerules in version control so the whole team shares the same configuration.
## Troubleshooting
* Rules not being applied
* Conflicting rules
* Performance considerations
## Basic .clinerules Example
"""yaml
project:
  name: 'Web Application'
  type: 'Next.js Frontend'
standards:
  - 'Use TypeScript for all new code'
  - 'Follow React best practices'
  - 'Implement proper error handling'
testing:
  unit:
    - 'Jest for unit tests'
    - 'React Testing Library for components'
  e2e:
    - 'Cypress for end-to-end testing'
documentation:
  required:
    - 'README.md in each major directory'
    - 'JSDoc comments for public APIs'
    - 'Changelog updates for all changes'
"""
## Advanced .clinerules Example
"""yaml
project:
  name: 'Enterprise Application'
compliance:
  - 'GDPR requirements'
  - 'WCAG 2.1 AA accessibility'
architecture:
  patterns:
    - 'Clean Architecture principles'
    - 'Domain-Driven Design concepts'
security:
  requirements:
    - 'OAuth 2.0 authentication'
    - 'Rate limiting on all APIs'
    - 'Input validation with Zod'
"""
# Performance Optimization Standards for Azure This document outlines the performance optimization standards for Azure development. It serves as a guide for developers to write efficient, responsive, and resource-optimized applications within the Azure ecosystem. It is designed to be used by developers and as context for AI coding assistants. ## 1. Architectural Considerations ### 1.1 Choosing the Right Azure Services **Do This:** Carefully select Azure services based on performance requirements, scalability needs, and cost considerations. **Don't Do This:** Default to familiar services without evaluating if they are optimal for the workload. **Why:** Selecting the right service upfront can drastically reduce development effort and resource consumption in the long run. **Explanation:** Azure offers a broad range of services, each optimized for particular workloads. Choosing the correct service aligns with the workload's characteristics, leading to better performance and lower TCO. For example, using Azure Cosmos DB for high-throughput, low-latency globally distributed data is better than using Azure SQL Database when those characteristics are core requirements. **Code Example (Service Selection):** """ # Consider Azure Functions for serverless, event-driven scenarios. # Consider Azure Container Apps for microservices and scalable applications. # Consider Azure Kubernetes Service (AKS) for complex container orchestration needs. # Consider Azure App Service for web applications and APIs with simpler deployment requirements. """ ### 1.2 Region Selection **Do This:** Deploy Azure resources to the region closest to your users. **Don't Do This:** Assume all regions provide equal performance or latency. **Why:** Minimizing network latency improves application responsiveness. **Explanation:** The physical distance between your application and its users directly impacts latency. Azure regions offer varying levels of network connectivity. 
Choosing the closest region reduces round-trip times for data transfer. **Code Example (ARM Template Snippet for Region):** """json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "location": { "type": "string", "defaultValue": "eastus", // Default Region (change according to user base) "metadata": { "description": "The location for all resources." } } }, "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2022-09-01", "name": "myWebApp", "location": "[parameters('location')]", "properties": { // Web app settings } } ] } """ ### 1.3 Implementing Caching Strategies **Do This:** Implement caching at multiple layers (client, CDN, application, database). **Don't Do This:** Over-cache and risk serving stale data or under-cache and impact performance. **Why:** Caching reduces the load on backend services and improves response times. **Explanation:** Caching stores frequently accessed data closer to the user or application, reducing the need to repeatedly fetch it from the original source. Effective caching strategies involve selecting appropriate cache expiration policies, cache invalidation mechanisms, and cache tiers. 
**Code Example (Azure Cache for Redis - .NET):** """csharp using StackExchange.Redis; public class RedisCacheService { private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => { string cacheConnection = ConfigurationManager.AppSettings["RedisCacheConnection"].ToString(); return ConnectionMultiplexer.Connect(cacheConnection); }); public static ConnectionMultiplexer Connection => lazyConnection.Value; public string GetData(string key) { IDatabase cache = Connection.GetDatabase(); return cache.StringGet(key); } public void SetData(string key, string value, TimeSpan expiry) { IDatabase cache = Connection.GetDatabase(); cache.StringSet(key, value, expiry); } } // Usage: RedisCacheService cache = new RedisCacheService(); string cachedValue = cache.GetData("myKey"); if (string.IsNullOrEmpty(cachedValue)) { // Fetch data from source string dataFromSource = GetDataFromSource(); cache.SetData("myKey", dataFromSource, TimeSpan.FromMinutes(30)); cachedValue = dataFromSource; } //use cachedValue """ ### 1.4 Asynchronous Operations **Do This:** Use asynchronous operations to avoid blocking the main thread for long-running tasks. **Don't Do This:** Perform synchronous I/O operations on the main thread, especially in UI-intensive applications or API endpoints. **Why:** Asynchronous operations improve the responsiveness and scalability of applications. **Explanation:** Asynchronous programming allows the application to continue processing other tasks while waiting for the completion of a long-running operation (e.g., network request, database query). This prevents the application from becoming unresponsive. 
**Code Example (Asynchronous Web API Controller):** """csharp using Microsoft.AspNetCore.Mvc; using Microsoft.EntityFrameworkCore; [ApiController] [Route("[controller]")] public class ProductsController : ControllerBase { private readonly AppDbContext _context; public ProductsController(AppDbContext context) { _context = context; } [HttpGet] public async Task<ActionResult<IEnumerable<Product>>> GetProducts() { return await _context.Products.ToListAsync(); // Asynchronous database query } } """ ### 1.5 Autoscaling **Do This:** Configure autoscaling for compute resources (e.g., VMs, App Service plans, Azure Container Apps) to handle varying workloads. **Don't Do This:** Rely on fixed capacity, which can lead to resource bottlenecks or underutilization. **Why:** Autoscaling dynamically adjusts resources based on demand, ensuring optimal performance and cost-effectiveness. **Explanation:** Autoscaling automatically increases or decreases the number of compute instances based on predefined metrics (e.g., CPU utilization, memory consumption, request queue length). This ensures that the application can handle sudden spikes in traffic without performance degradation. 
**Code Example (ARM Template for App Service Autoscaling):** """json { "type": "Microsoft.Insights/autoscalesettings", "apiVersion": "2015-04-01", "name": "myAutoscaleSettings", "location": "[resourceGroup().location]", "properties": { "name": "myAutoscaleSettings", "targetResourceUri": "[resourceId('Microsoft.Web/sites', 'myWebApp')]", "profiles": [ { "name": "AutoScaleProfile", "capacity": { "minimum": "1", "maximum": "10", "default": "1" }, "rules": [ { "metricTrigger": { "metricName": "CpuPercentage", "metricResourceUri": "[resourceId('Microsoft.Web/sites', 'myWebApp')]", "timeGrain": "PT1M", "statistic": "Average", "timeWindow": "PT5M", "timeAggregation": "Average", "operator": "GreaterThan", "threshold": 70 }, "operation": { "operationType": "ChangeCount", "parameters": { "value": "1", "cooldown": "PT5M" } } } ] } ] } } """ ## 2. Database Optimization ### 2.1 Indexing Strategies **Do This:** Create appropriate indexes to speed up query execution. **Don't Do This:** Over-index, which can slow down write operations and increase storage costs. **Why:** Indexes allow the database to quickly locate data without scanning the entire table. **Explanation:** Indexes are data structures that improve the speed of data retrieval operations on a database table. However, excessive indexing can negatively impact write performance and increase storage requirements. It's crucial to analyze query patterns and create indexes selectively on frequently queried columns. **Code Example (SQL Index Creation):** """sql -- Create a non-clustered index on the 'LastName' column of the 'Customers' table CREATE NONCLUSTERED INDEX IX_Customers_LastName ON Customers (LastName); """ ### 2.2 Query Optimization **Do This:** Write efficient queries that minimize resource consumption. **Don't Do This:** Use wildcard characters at the beginning of search strings ("%string"), causing full table scans. Select all columns ("SELECT *") unnecessarily. 
**Why:** Efficient queries reduce database load and improve application performance. **Explanation:** Poorly written queries can lead to performance bottlenecks and excessive resource consumption. To optimize queries, avoid using wildcard characters at the beginning of search strings, select only the necessary columns, use appropriate JOIN clauses, and leverage parameterized queries. **Code Example (Optimized SQL Query):** """sql -- Instead of: SELECT * FROM Orders WHERE CustomerID LIKE '%123%'; -- Use: SELECT OrderID, OrderDate, ShippingAddress FROM Orders WHERE CustomerID = @CustomerID; --Parameterized query """ ### 2.3 Connection Pooling **Do This:** Use connection pooling to reuse database connections and reduce overhead. **Don't Do This:** Open and close database connections frequently, which can be resource-intensive. **Why:** Connection pooling improves database performance by reducing the overhead of establishing new connections. **Explanation:** Connection pooling maintains a pool of active database connections that can be reused by the application. This avoids the overhead of repeatedly creating and destroying connections, which can be a significant performance bottleneck. Most database drivers and frameworks provide built-in support for connection pooling. **Code Example (.NET Core Connection Pooling with Entity Framework Core):** """csharp services.AddDbContext<AppDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); """ EF Core automatically manages connection pooling. Configure connection string effectively ("Min Pool Size", "Max Pool Size"). ### 2.4 Database Sharding & Partitioning **Do This:** Consider database sharding or partitioning for very large datasets or high-throughput requirements. **Don't Do This:** Apply sharding prematurely without analyzing data volume and access patterns. **Why:** Distributing data across multiple databases or partitions improves scalability and performance. 
**Explanation:** Database sharding splits a large database into smaller, independent databases (shards) distributed across multiple servers. Partitioning divides a table into multiple smaller tables (partitions) within the same database. Both techniques can improve query performance and scalability by reducing the amount of data that must be processed.

**Note:** Cosmos DB offers built-in sharding/partitioning; choosing a good partition key is critical.

## 3. Code-Level Optimizations

### 3.1 Efficient Data Structures and Algorithms

**Do This:** Choose appropriate data structures and algorithms for the task at hand.

**Don't Do This:** Use inefficient data structures that lead to quadratic or exponential time complexity.

**Why:** Efficient algorithms and data structures minimize resource consumption and improve performance.

**Explanation:** The choice of data structures and algorithms can have a significant impact on performance. For example, a hash table lookup runs in near-constant time (O(1)), while searching an unsorted array takes linear time (O(n)).

**Code Example (Efficient Lookup with Dictionary):**

"""csharp
// Instead of: O(n) linear search
List<string> names = new List<string> { "Alice", "Bob", "Charlie" };
bool found = false;
foreach (string name in names)
{
    if (name == "Bob")
    {
        found = true;
        break;
    }
}

// Use: O(1) hash lookup (a HashSet<string> is even simpler when only
// membership matters)
Dictionary<string, bool> nameLookup = new Dictionary<string, bool>
{
    { "Alice", true }, { "Bob", true }, { "Charlie", true }
};
bool foundFast = nameLookup.ContainsKey("Bob");
"""

### 3.2 Minimizing Object Allocation

**Do This:** Minimize object allocation and garbage collection overhead. Use object pooling or caching to reuse objects.

**Don't Do This:** Create excessive temporary objects, especially in performance-critical sections of code.

**Why:** Frequent object allocation and garbage collection can lead to performance bottlenecks.
**Explanation:** Object allocation and garbage collection are resource-intensive operations. Reducing the number of objects created and collected improves performance. Object pooling maintains a pool of pre-allocated objects for reuse, while caching keeps frequently used objects in memory. Using a "struct" instead of a "class" where appropriate can also reduce heap allocation (value type vs. reference type).

**Code Example (Object Pooling):**

"""csharp
using System.Collections.Concurrent;
using System.Text;

public class StringBuilderPool
{
    private static readonly ConcurrentBag<StringBuilder> _objectPool = new ConcurrentBag<StringBuilder>();

    public static StringBuilder Get()
    {
        if (_objectPool.TryTake(out var item))
        {
            return item;
        }
        return new StringBuilder();
    }

    public static void Return(StringBuilder obj)
    {
        obj.Clear();
        _objectPool.Add(obj);
    }
}

// Usage:
StringBuilder sb = StringBuilderPool.Get();
sb.Append("Hello, world!");
string result = sb.ToString();
StringBuilderPool.Return(sb);
"""

### 3.3 String Manipulation

**Do This:** Use "StringBuilder" for efficient string concatenation.

**Don't Do This:** Use the "+" operator repeatedly for string concatenation, which creates multiple temporary string objects.

**Why:** "StringBuilder" avoids the overhead of creating a new string object for each concatenation.

**Explanation:** Strings are immutable in .NET, so each concatenation creates a new string object. Repeated use of the "+" operator therefore causes excessive allocation and garbage collection. The "StringBuilder" class builds strings efficiently by appending into a mutable buffer.
**Code Example (Efficient String Concatenation):**

"""csharp
// Instead of: creates a new string on every iteration
string result = "";
for (int i = 0; i < 1000; i++)
{
    result += i.ToString();
}

// Use: appends into a single mutable buffer
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 1000; i++)
{
    sb.Append(i);
}
string builtResult = sb.ToString();
"""

### 3.4 Avoid Boxing/Unboxing

**Do This:** When working with generics or collections, ensure you are not unintentionally boxing value types (such as "int", "bool", or structs).

**Don't Do This:** Add value types to non-generic collections like "ArrayList", which store objects and therefore require boxing.

**Why:** Boxing and unboxing are performance-intensive operations that convert value types to reference types and back.

**Explanation:** Boxing converts a value type (e.g., "int", "bool", a "struct") to an object reference; unboxing is the reverse. When value types are added to non-generic collections (e.g., "ArrayList"), they are implicitly boxed, causing overhead. Generic collections (e.g., "List<int>", "Dictionary<string, MyStruct>") avoid boxing and unboxing.

**Code Example (Avoid Boxing):**

"""csharp
// Instead of: each Add boxes the int
ArrayList numbers = new ArrayList();
for (int i = 0; i < 1000; i++)
{
    numbers.Add(i); // Boxing occurs here
}

// Use: no boxing with a generic collection
List<int> typedNumbers = new List<int>();
for (int i = 0; i < 1000; i++)
{
    typedNumbers.Add(i);
}
"""

## 4. Network Optimization

### 4.1 Minimize Data Transfer

**Do This:** Only transfer the necessary data over the network. Use data compression and pagination to reduce the amount of data transferred.

**Don't Do This:** Fetch large amounts of data from the server and then filter it on the client.

**Why:** Reducing data transfer improves network bandwidth utilization and reduces latency.

**Explanation:** Transferring large amounts of data over the network can be a significant performance bottleneck.
To minimize data transfer: compress data before sending it over the network (e.g., with GZIP), paginate large datasets to retrieve only the data needed for the current view, and avoid fetching unnecessary data from the server. Apply filters server-side whenever possible.

**Code Example (GZIP Compression in ASP.NET Core):**

"""csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.ResponseCompression;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(options =>
        {
            options.EnableForHttps = true;
            options.Providers.Add<GzipCompressionProvider>();
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseResponseCompression();
    }
}
"""

### 4.2 Connection Multiplexing

**Do This:** Use HTTP/2 or HTTP/3 to enable connection multiplexing.

**Don't Do This:** Rely on HTTP/1.1, which can lead to connection overhead due to head-of-line blocking.

**Why:** Connection multiplexing allows multiple requests to be sent over a single TCP connection, reducing connection overhead.

**Explanation:** HTTP/2 and HTTP/3 support connection multiplexing, which allows multiple requests to share a single connection. This eliminates the need to establish a new connection for each request, reducing overhead and improving performance. Most modern web servers and browsers support HTTP/2 and HTTP/3; enable it in your Azure App Service configuration.

### 4.3 Content Delivery Network (CDN)

**Do This:** Serve static content (e.g., images, CSS, JavaScript) from a CDN.

**Don't Do This:** Serve static content directly from your application server, which increases load and latency.

**Why:** CDNs distribute content across multiple servers closer to users, reducing latency and improving performance.

**Explanation:** A CDN is a distributed network of servers that delivers content to users based on their geographic location.
By serving static content from a CDN, you reduce the load on your application server and improve response times for geographically dispersed users. Azure CDN is a popular choice.

**Code Example (Azure CDN Configuration - ARM Template with a Storage Account Origin):**

"""json
{
  "type": "Microsoft.Cdn/profiles",
  "apiVersion": "2021-06-01",
  "name": "myCdnProfile",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_Microsoft"
  },
  "resources": [
    {
      "type": "endpoints",
      "apiVersion": "2021-06-01",
      "name": "myCdnEndpoint",
      "dependsOn": [
        "[resourceId('Microsoft.Cdn/profiles', 'myCdnProfile')]"
      ],
      "location": "[resourceGroup().location]",
      "properties": {
        "originHostHeader": "mystorageaccount.blob.core.windows.net",
        "origins": [
          {
            "name": "myOrigin",
            "properties": {
              "hostName": "mystorageaccount.blob.core.windows.net",
              "httpPort": 80,
              "httpsPort": 443
            }
          }
        ]
      }
    }
  ]
}
"""

Note that "originHostHeader" belongs on the endpoint (not the profile), and each origin is identified by its "hostName".

## 5. Monitoring and Profiling

### 5.1 Application Insights

**Do This:** Use Azure Application Insights to monitor application performance, detect anomalies, and diagnose issues.

**Don't Do This:** Deploy applications without proper monitoring and logging; this hinders troubleshooting and optimization efforts.

**Why:** Application Insights provides valuable insights into application behavior and performance.

**Explanation:** Application Insights is a monitoring and analytics service that provides insights into application performance, availability, and usage. Use it to detect anomalies, diagnose issues, and identify areas for optimization.
**Code Example (Adding Application Insights to a .NET Core App):**

"""csharp
// In Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
}

// To add custom telemetry:
using Microsoft.ApplicationInsights;

public class MyService
{
    private readonly TelemetryClient _telemetryClient;

    public MyService(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void DoSomething()
    {
        _telemetryClient.TrackEvent("MyCustomEvent");
        _telemetryClient.TrackMetric("MyCustomMetric", 42);
    }
}
"""

### 5.2 Profiling Tools

**Do This:** Use profiling tools to identify performance bottlenecks in your code.

**Don't Do This:** Guess at performance issues; use data-driven analysis to identify the root cause.

**Why:** Profiling tools provide detailed information about CPU usage, memory allocation, and other performance metrics.

**Explanation:** Profiling tools analyze the execution of your code to pinpoint the areas consuming the most resources, reporting CPU usage, memory allocation, and other performance metrics. Visual Studio Profiler and PerfView are commonly used profiling tools.

### 5.3 Azure Monitor

**Do This:** Utilize Azure Monitor to monitor the health and performance of Azure resources (VMs, databases, storage accounts). Create alerts for critical metrics.

**Don't Do This:** Ignore resource-level metrics, which can provide valuable insights into potential issues.

**Why:** Azure Monitor provides a comprehensive view of the performance and health of your Azure resources.

**Explanation:** Azure Monitor provides a centralized platform for collecting and analyzing telemetry data from your Azure resources. It can monitor the health and performance of VMs, databases, storage accounts, and other Azure services. Create alerts to be notified of critical issues, such as high CPU usage or low available memory.
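The evaluation a metric alert performs — aggregate a metric over a time window, then compare the aggregate against a threshold — can be sketched in a few lines. The following Python sketch is purely illustrative (it uses no Azure SDK; the class and parameter names are invented for this example) and mirrors the shape of a "CPU average over 5 minutes greater than 70" rule:

```python
from collections import deque
import time


class MetricAlertRule:
    """Fire when avg(metric) over a sliding window exceeds a threshold.

    Illustrative only: mirrors timeWindow + timeAggregation=Average +
    operator=GreaterThan from an Azure Monitor metric alert, but these
    names are not SDK types.
    """

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window_seconds = window_seconds
        self.samples = deque()  # (timestamp, value) pairs

    def record(self, value, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, value))
        # Evict samples that have fallen outside the window
        while self.samples and self.samples[0][0] < now - self.window_seconds:
            self.samples.popleft()

    def is_firing(self):
        if not self.samples:
            return False
        avg = sum(v for _, v in self.samples) / len(self.samples)
        return avg > self.threshold


# Three CPU samples over 2 minutes inside a 5-minute (300 s) window
rule = MetricAlertRule(threshold=70, window_seconds=300)
for t, cpu in [(0, 60), (60, 80), (120, 90)]:
    rule.record(cpu, now=t)
print(rule.is_firing())  # True: average is ~76.7, above 70
```

A real alert would also apply the "timeGrain" pre-aggregation and an evaluation frequency, omitted here for brevity.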
This document provides a comprehensive overview of performance optimization standards for Azure. By following these guidelines, developers can build efficient, responsive, and scalable applications within the Azure ecosystem. Remember to continually monitor and optimize your applications based on real-world usage patterns and performance metrics gathered using Azure Monitor and Application Insights.
# API Integration Standards for Azure

This document outlines coding standards and best practices for integrating APIs within the Azure ecosystem. It aims to provide clear guidance for developers to build maintainable, performant, and secure API integrations, leveraging modern Azure features and patterns.

## 1. Architecture and Design Principles

### 1.1. Standard: API Gateway Pattern

**Do This:** Use Azure API Management (APIM) as the API gateway for all external and internal APIs.

**Don't Do This:** Expose backend services directly to clients without an API gateway.

**Why:** API Management provides a central point for managing, securing, and observing APIs. It offers features like rate limiting, authentication, transformation, and analytics.

**Azure Specifics:**

* Leverage APIM's built-in policies for common tasks like authentication, authorization, and request transformation.
* Integrate APIM with Azure Active Directory (AAD) for identity management.
* Use APIM's developer portal for API discovery and documentation.

**Code Example (ARM Template for APIM):**

"""json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "apimServiceName": {
      "type": "string",
      "metadata": {
        "description": "The name of the API Management service."
      }
    },
    "skuName": {
      "type": "string",
      "defaultValue": "Developer",
      "allowedValues": [ "Developer", "Basic", "Standard", "Premium" ],
      "metadata": {
        "description": "The pricing tier for the API Management service."
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "The location for all resources."
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service",
      "apiVersion": "2023-05-01-preview",
      "name": "[parameters('apimServiceName')]",
      "location": "[parameters('location')]",
      "sku": {
        "name": "[parameters('skuName')]",
        "capacity": 1
      },
      "properties": {
        "publisherEmail": "your-email@example.com",
        "publisherName": "Your Organization",
        "notificationSenderEmail": "apimgmt-noreply@mail.windowsazure.com",
        "hostnameConfigurations": [],
        "virtualNetworkType": "None"
      }
    }
  ]
}
"""

### 1.2. Standard: Microservices Architecture

**Do This:** Design API integrations around a microservices architecture, promoting loose coupling and independent deployment of backend services.

**Don't Do This:** Build monolithic applications with tightly coupled components.

**Why:** Microservices enable scalability, resilience, and faster development cycles.

**Azure Specifics:**

* Use Azure Kubernetes Service (AKS) for orchestrating microservices.
* Leverage Azure Functions for event-driven, serverless backend logic.
* Employ Azure Service Bus or Event Grid for asynchronous communication between services.

**Code Example (Azure Function with Service Bus Trigger):**

"""csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace MyFunctionApp
{
    public static class ServiceBusTriggerFunction
    {
        [FunctionName("ServiceBusTriggerFunction")]
        public static void Run(
            [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString")] string myQueueItem,
            ILogger log)
        {
            log.LogInformation($"C# ServiceBus queue trigger function processed: {myQueueItem}");
            // Your processing logic here
        }
    }
}
"""

### 1.3. Standard: Asynchronous Communication

**Do This:** Use asynchronous communication patterns, such as queues and events, for non-critical operations to improve scalability and resilience.
**Don't Do This:** Rely solely on synchronous, request-response communication for all API interactions.

**Why:** Asynchronous communication decouples services, allowing them to operate independently and handle varying workloads.

**Azure Specifics:**

* Use Azure Service Bus for reliable message queuing with features like transactions and dead-letter queues.
* Use Azure Event Grid for event-driven architectures, enabling loose coupling and real-time event processing.
* Consider Azure Queue Storage for simpler queueing scenarios.

**Code Example (Sending a Message to an Azure Service Bus Queue):**

"""csharp
using Azure.Messaging.ServiceBus;
using System;
using System.Threading.Tasks;

namespace ServiceBusExample
{
    class Program
    {
        static string connectionString = "YOUR_SERVICE_BUS_CONNECTION_STRING";
        static string queueName = "myqueue";

        static async Task Main(string[] args)
        {
            // Create a Service Bus client (disposed automatically when Main exits)
            await using ServiceBusClient client = new ServiceBusClient(connectionString);

            // Create a sender for the queue
            ServiceBusSender sender = client.CreateSender(queueName);

            // Create and send a message
            ServiceBusMessage message = new ServiceBusMessage("Hello, Service Bus!");
            await sender.SendMessageAsync(message);

            Console.WriteLine("Message sent to Service Bus queue.");
        }
    }
}
"""

## 2. Implementation Details

### 2.1. Standard: API Versioning

**Do This:** Implement API versioning to maintain backward compatibility and allow API features to evolve.

**Don't Do This:** Introduce breaking changes without versioning.

**Why:** Versioning allows clients to migrate to new API versions at their own pace, minimizing disruption.

**Azure Specifics:**

* Use APIM's versioning features to manage multiple API versions.
* Implement versioning in the API endpoint URL or through custom headers.

**Code Example (APIM Versioning):**

1. **Define API Versions in APIM:** In the Azure portal, navigate to your APIM instance, select "APIs," and choose your API.
Under "Settings," configure "Versions" and create different versions (e.g., "v1", "v2").

2. **Route Requests to Backends:** Use policies to route requests to different backend services based on the version specified in the URL or header. For example:

"""xml
<choose>
    <when condition="@(context.Request.Url.Path.StartsWith("/v1"))">
        <set-backend-service base-url="https://backend-service-v1.azurewebsites.net" />
    </when>
    <when condition="@(context.Request.Url.Path.StartsWith("/v2"))">
        <set-backend-service base-url="https://backend-service-v2.azurewebsites.net" />
    </when>
    <otherwise>
        <return-response>
            <set-status code="400" reason="Bad Request" />
            <set-body>Invalid API Version</set-body>
        </return-response>
    </otherwise>
</choose>
"""

### 2.2. Standard: Error Handling

**Do This:** Implement robust error handling with meaningful error codes and messages.

**Don't Do This:** Return generic error messages or expose sensitive information in error responses.

**Why:** Proper error handling improves the developer experience and helps troubleshoot issues.

**Azure Specifics:**

* Use structured logging with Azure Monitor to capture detailed error information. Include correlation IDs to track errors across services.
* Implement retry policies for transient errors.

**Code Example (Error Handling in ASP.NET Core API):**

"""csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace ErrorHandlingExample.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class ExampleController : ControllerBase
    {
        private readonly ILogger<ExampleController> _logger;

        public ExampleController(ILogger<ExampleController> logger)
        {
            _logger = logger;
        }

        [HttpGet("error")]
        public IActionResult GetError()
        {
            try
            {
                throw new System.Exception("Simulated error.");
            }
            catch (System.Exception ex)
            {
                _logger.LogError(ex, "An error occurred.");
                // Include a correlation ID so the error can be traced in logs
                return StatusCode(500, new
                {
                    message = "An unexpected error occurred. Please contact support.",
                    correlationId = Request.HttpContext.TraceIdentifier
                });
            }
        }
    }
}
"""

### 2.3. Standard: Data Validation

**Do This:** Implement input validation on all APIs to prevent invalid data from reaching backend services.

**Don't Do This:** Trust client-provided data without validation.

**Why:** Validation improves security and data integrity.

**Azure Specifics:**

* Use APIM's validation policies to enforce data validation at the API gateway level.
* Implement data validation in backend services using appropriate validation libraries.

**Code Example (Data Validation in ASP.NET Core API using DataAnnotations):**

"""csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.AspNetCore.Mvc;

namespace DataValidationExample.Models
{
    public class User
    {
        [Required(ErrorMessage = "Name is required")]
        [StringLength(50, ErrorMessage = "Name cannot be longer than 50 characters")]
        public string Name { get; set; }

        [EmailAddress(ErrorMessage = "Invalid email address")]
        public string Email { get; set; }

        [Range(18, 120, ErrorMessage = "Age must be between 18 and 120")]
        public int Age { get; set; }
    }
}

namespace DataValidationExample.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class UserController : ControllerBase
    {
        [HttpPost]
        public IActionResult CreateUser([FromBody] User user)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            // Process the user data
            return Ok(user);
        }
    }
}
"""

### 2.4 Standard: Authentication and Authorization

**Do This:** Implement robust authentication and authorization mechanisms to protect APIs.

**Don't Do This:** Use weak or insecure authentication methods.

**Why:** Ensures only authorized users and applications can access sensitive data and functionality.

**Azure Specifics:**

* Use Azure Active Directory (AAD) for identity management.
* Implement OAuth 2.0 or OpenID Connect for authentication.
* Use Role-Based Access Control (RBAC) to authorize access to APIs.
* Leverage APIM for securing APIs with AAD integration.
* Use Managed Identities for Azure resources, which automatically manage credentials for accessing other Azure services.

**Code Example (Implementing AAD Authentication in ASP.NET Core API):**

"""csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web.Resource;

namespace AADAuthExample.Controllers
{
    [Authorize]
    [ApiController]
    [Route("[controller]")]
    [RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")] // Define required scopes
    public class SecureController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            return Ok("This is a secure API endpoint.");
        }
    }
}
"""

Add the following to "appsettings.json":

"""json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "yourtenant.onmicrosoft.com",
    "TenantId": "your-tenant-id",
    "ClientId": "your-client-id",
    "Scopes": "api://your-client-id/access_as_user"
  }
}
"""

Register your API in Azure AD and configure the necessary scopes.

### 2.5 Standard: API Documentation

**Do This:** Provide comprehensive and up-to-date API documentation.

**Don't Do This:** Neglect API documentation, leaving developers to reverse-engineer APIs.

**Why:** Good documentation improves developer productivity and reduces integration time.

**Azure Specifics:**

* Use APIM's developer portal to automatically generate and host API documentation based on OpenAPI specifications (Swagger).
* Generate OpenAPI specifications from code using tools like Swashbuckle.

**Code Example (Generating an OpenAPI Specification with Swashbuckle in ASP.NET Core):**

1. **Install the Swashbuckle.AspNetCore NuGet package:**

"""bash
dotnet add package Swashbuckle.AspNetCore --version 6.5.0
"""

2. **Configure Swagger in "Program.cs" (or "Startup.cs" for older templates):**

"""csharp
// In Program.cs (for .NET 6+); requires "using Microsoft.OpenApi.Models;"
builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
});

// ...

app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
});
"""

### 2.6 Standard: Logging and Monitoring

**Do This:** Implement comprehensive logging and monitoring to track API usage, performance, and errors.

**Don't Do This:** Ignore logging and monitoring, which makes it difficult to troubleshoot issues and optimize performance.

**Why:** Provides insights into API behavior and helps identify and resolve problems quickly.

**Azure Specifics:**

* Use Azure Monitor to collect and analyze logs and metrics.
* Integrate APIM with Azure Monitor for API-specific metrics.
* Use Application Insights for deep application performance monitoring.
* Utilize Azure Log Analytics for querying and analyzing log data.

**Code Example (Logging in ASP.NET Core API):**

"""csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace LoggingExample.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class ExampleController : ControllerBase
    {
        private readonly ILogger<ExampleController> _logger;

        public ExampleController(ILogger<ExampleController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IActionResult Get()
        {
            _logger.LogInformation("GET request received at /example");
            return Ok("Hello, world!");
        }
    }
}
"""

## 3. Performance Optimization

### 3.1. Standard: Caching

**Do This:** Implement caching to reduce latency and improve performance.

**Don't Do This:** Over-cache data or cache sensitive information.

**Why:** Caching reduces the load on backend services and improves response times.

**Azure Specifics:**

* Use Azure Cache for Redis for caching frequently accessed data.
* Leverage APIM's caching policies for API responses.
* Implement client-side caching using HTTP caching headers.
**Code Example (Using Azure Cache for Redis in .NET):**

"""csharp
using StackExchange.Redis;
using System;
using System.Threading.Tasks;

namespace RedisCacheExample
{
    class Program
    {
        private static Lazy<ConnectionMultiplexer> redisConnection = new Lazy<ConnectionMultiplexer>(() =>
        {
            string connectionString = "YOUR_REDIS_CONNECTION_STRING";
            return ConnectionMultiplexer.Connect(connectionString);
        });

        public static ConnectionMultiplexer Connection
        {
            get { return redisConnection.Value; }
        }

        static async Task Main(string[] args)
        {
            IDatabase db = Connection.GetDatabase();

            // Set a value
            await db.StringSetAsync("mykey", "Hello, Redis!");

            // Get the value
            string value = await db.StringGetAsync("mykey");
            Console.WriteLine($"Value from Redis: {value}");
        }
    }
}
"""

### 3.2. Standard: Connection Pooling

**Do This:** Use connection pooling to reuse database connections and avoid the overhead of creating a new connection for each request.

**Don't Do This:** Create a new database connection for each API request.

**Why:** Connection pooling improves performance and reduces resource consumption.

**Azure Specifics:**

* Azure services like App Service and Functions automatically implement connection pooling for supported databases (e.g., SQL Database). Configure connection string parameters appropriately for optimal pooling.
* Use connection string settings like "Min Pool Size" and "Max Pool Size" based on the expected load.

### 3.3. Standard: Gzip Compression

**Do This:** Enable Gzip compression to reduce the size of API responses.

**Don't Do This:** Transmit uncompressed data, especially for large responses.

**Why:** Compression reduces bandwidth usage and improves response times.

**Azure Specifics:**

* Enable Gzip compression in APIM policies.
* Configure compression in backend services like App Service.
**Code Example (Enabling Gzip Compression in ASP.NET Core):**

"""csharp
// In Program.cs
builder.Services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
});

// ...

app.UseResponseCompression();
"""

## 4. Security Considerations

### 4.1. Standard: Data Encryption

**Do This:** Encrypt sensitive data at rest and in transit.

**Don't Do This:** Store sensitive data in plain text or transmit it over unencrypted channels.

**Why:** Encryption protects data from unauthorized access.

**Azure Specifics:**

* Use Azure Key Vault to store and manage encryption keys.
* Enable encryption at rest for Azure Storage and Azure SQL Database.
* Use HTTPS for all API communications.
* Use Transport Layer Security (TLS) 1.2 or higher.

### 4.2. Standard: Input Sanitization

**Do This:** Sanitize all user inputs to prevent injection attacks.

**Don't Do This:** Trust user inputs without sanitization.

**Why:** Sanitization prevents malicious code from being injected into backend systems.

**Azure Specifics:**

* Use input validation and encoding techniques to sanitize data.
* Implement security policies in APIM to block malicious requests.

### 4.3. Standard: Rate Limiting and Throttling

**Do This:** Implement rate limiting and throttling to protect APIs from abuse and denial-of-service attacks.

**Don't Do This:** Allow unrestricted access to APIs without rate limiting.

**Why:** Rate limiting protects backend services from being overwhelmed.

**Azure Specifics:**

* Use APIM's rate limiting policies to control the number of requests allowed per user or IP address.
* Implement throttling in backend services to prevent resource exhaustion.

**Example (APIM Rate Limiting Policy):**

"""xml
<rate-limit-by-key calls="100" renewal-period="60" counter-key="@(context.Subscription.Id)" />
"""

This policy limits each subscription to 100 calls per 60 seconds. (The "rate-limit-by-key" policy accepts an explicit "counter-key"; the plain "rate-limit" policy counts per subscription implicitly.)
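For intuition, the rate-limiting semantics above (a fixed number of calls per renewal window, counted per key such as the subscription ID) can be sketched as a per-key fixed-window counter. This Python sketch is illustrative only — APIM's real implementation is internal and distributed — and every name in it is invented for the example:

```python
import time


class FixedWindowRateLimiter:
    """Per-key fixed-window limiter: `calls` requests per `renewal_period` seconds.

    Illustrative only; mirrors the shape of an APIM rate-limit policy,
    where the counter key would be something like a subscription ID.
    """

    def __init__(self, calls, renewal_period):
        self.calls = calls
        self.renewal_period = renewal_period
        self.windows = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.windows.get(key, (now, 0))
        if now - start >= self.renewal_period:
            start, count = now, 0  # window elapsed: start a fresh one
        if count >= self.calls:
            self.windows[key] = (start, count)
            return False  # a gateway would answer HTTP 429 here
        self.windows[key] = (start, count + 1)
        return True


# 150 calls for one subscription inside a single 60 s window:
limiter = FixedWindowRateLimiter(calls=100, renewal_period=60)
allowed = sum(limiter.allow("sub-123", now=0) for _ in range(150))
print(allowed)  # 100 — calls beyond the limit are rejected until the window renews
```

A production limiter would typically use a sliding window or token bucket to avoid bursts at window boundaries, and store counters in a shared cache when running on multiple gateway instances.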
By adhering to these coding standards, developers can build robust, scalable, and secure API integrations within the Azure ecosystem, leveraging the platform's powerful features and capabilities. These standards will help teams create APIs that are easier to maintain, debug, and evolve over time.
# Code Style and Conventions Standards for Azure

This document outlines the code style and conventions standards for Azure development. It is designed to promote consistency, readability, maintainability, and performance across all Azure projects. These standards should be followed by all developers contributing to Azure-based solutions and used as context by AI coding assistants.

## 1. General Principles

### 1.1. Consistency is Key

* **Do This:** Adhere to a consistent style throughout the codebase.
* **Don't Do This:** Mix different styles within the same project or module.

**Why:** Consistency reduces cognitive load, making code easier to read and understand, which improves maintainability and reduces errors.

### 1.2. Readability Matters

* **Do This:** Write code that is easy to understand, even for developers unfamiliar with the specific component.
* **Don't Do This:** Sacrifice readability for brevity.

**Why:** Readability improves collaboration, reduces debugging time, and facilitates knowledge transfer.

### 1.3. Maintainability is Paramount

* **Do This:** Design code that is easy to modify and extend without introducing bugs.
* **Don't Do This:** Write tightly coupled or overly complex code.

**Why:** Maintainability reduces long-term costs, improves agility, and allows for faster iteration.

### 1.4. Performance Considerations

* **Do This:** Write code that is optimized for performance, considering Azure-specific constraints and best practices.
* **Don't Do This:** Ignore performance implications during development.

**Why:** Performance impacts user experience, cost efficiency, and scalability.

### 1.5. Security First

* **Do This:** Design code with security in mind, following OWASP guidelines and Azure security recommendations.
* **Don't Do This:** Neglect security vulnerabilities during development.

**Why:** Security protects data integrity, prevents unauthorized access, and ensures compliance.

## 2. Naming Conventions

### 2.1. General Naming

* **Do This:** Use descriptive and meaningful names for variables, functions, classes, and modules.
* **Don't Do This:** Use abbreviations or single-letter names, except for very short-lived loop variables.

**Why:** Clear naming improves code comprehension and reduces ambiguity.

### 2.2. Language-Specific Conventions

* **C#:**
    * **Classes and Structs:** PascalCase (e.g., "UserService", "OrderProcessor")
    * **Interfaces:** "IPascalCase" (e.g., "IUserRepository", "IOrderService")
    * **Methods:** PascalCase (e.g., "GetUserById", "ProcessOrder")
    * **Variables (local):** camelCase (e.g., "userId", "orderTotal")
    * **Constants:** ALL_UPPER_SNAKE_CASE (e.g., "MAX_RETRIES", "DEFAULT_TIMEOUT")
    * **Private Fields:** "_camelCase" (e.g., "_userRepository", "_orderQueue")
* **JavaScript/TypeScript:**
    * **Classes:** PascalCase (e.g., "UserService", "OrderProcessor")
    * **Interfaces:** PascalCase (e.g., "UserRepository", "OrderService") — the "I" prefix is optional, but consistency is important.
    * **Functions/Methods:** camelCase (e.g., "getUserById", "processOrder")
    * **Variables:** camelCase (e.g., "userId", "orderTotal")
    * **Constants:** UPPER_SNAKE_CASE (e.g., "MAX_RETRIES", "DEFAULT_TIMEOUT")
* **Python:**
    * **Classes:** PascalCase (e.g., "UserService", "OrderProcessor")
    * **Functions/Methods:** snake_case (e.g., "get_user_by_id", "process_order")
    * **Variables:** snake_case (e.g., "user_id", "order_total")
    * **Constants:** UPPER_SNAKE_CASE (e.g., "MAX_RETRIES", "DEFAULT_TIMEOUT")

**Why:** Following language-specific conventions improves code familiarity and collaboration within language ecosystems.

### 2.3. Azure Resource Naming

* **Do This:** Use a standardized naming convention for Azure resources that includes environment, resource type, and purpose.
* **Don't Do This:** Use generic or ambiguous names that make it difficult to identify resources.
**Azure Resource Naming Examples:**

"""
<Environment>-<ResourceType>-<Application>-<InstanceNumber>
dev-web-myapp-001
prod-db-orders-001
"""

* "Environment": "dev", "test", "prod", "stage"
* "ResourceType": "web", "db", "func", "vm", "stor", "aks", "appi" (App Insights), "keyv" (Key Vault), "sql"
* "Application": Your application's name (e.g., "orders", "users", "reporting")
* "InstanceNumber": A sequential number for multiple instances (e.g., "001", "002")

**Why:** Consistent resource naming simplifies management, improves automation, and reduces the risk of misconfiguration. Tagging Azure resources is also crucial to categorize and manage resources effectively.

## 3. Formatting and Style

### 3.1. Indentation
* **Do This:** Use consistent indentation (e.g., 4 spaces or 2 spaces) throughout the codebase, enforced by a linter/formatter.
* **Don't Do This:** Mix different indentation styles within the same file or project.

**Why:** Consistent indentation improves code readability and structure.

### 3.2. Line Length
* **Do This:** Limit line length to a reasonable number of characters (e.g., 120 characters) to improve readability.
* **Don't Do This:** Write excessively long lines that require horizontal scrolling.

**Why:** Limiting line length makes code easier to read on different screen sizes and improves code review efficiency.

### 3.3. Whitespace
* **Do This:** Use whitespace to improve code readability (e.g., spaces around operators, blank lines between logical blocks of code).
* **Don't Do This:** Write dense code with minimal whitespace.

**Why:** Whitespace enhances code clarity and structure.

### 3.4. Bracing
* **C#:** Use Allman style bracing (opening brace on its own line), per .NET conventions:

"""csharp
if (condition)
{
    // Code block
}
else
{
    // Code block
}
"""

* **JavaScript/TypeScript:** Use K&R style bracing (opening brace on the same line):

"""typescript
if (condition) {
    // Code block
} else {
    // Code block
}
"""

* **Python:** Python uses indentation to define code blocks, so consistent indentation is crucial (typically 4 spaces per level are recommended):

"""python
if condition:
    # Code block
else:
    # Code block
"""

**Why:** Consistent bracing improves code readability and reduces ambiguity.

### 3.5. File Encoding
* **Do This:** Use UTF-8 encoding for all source files.
* **Don't Do This:** Use different or inconsistent file encodings.

**Why:** UTF-8 is the standard encoding for text files and supports a wide range of characters.

## 4. Commenting and Documentation

### 4.1. Commenting Conventions
* **Do This:** Write clear and concise comments to explain complex logic, algorithms, and design decisions.
* **Don't Do This:** Write comments that state the obvious or are outdated or misleading.

**Why:** Comments provide context and explain the "why" behind the code, improving understanding and maintainability.

### 4.2. Documentation Generation
* **C#:** Use XML documentation comments ("///") to generate API documentation.

"""csharp
/// <summary>
/// Gets a user by ID.
/// </summary>
/// <param name="id">The user ID.</param>
/// <returns>The user object, or null if not found.</returns>
public User GetUserById(int id)
{
    // Code implementation
    return null;
}
"""

* **JavaScript/TypeScript:** Use JSDoc style comments to generate API documentation.

"""typescript
/**
 * Gets a user by ID.
 * @param {number} id - The user ID.
 * @returns {Promise<User | null>} The user object, or null if not found.
 */
async getUserById(id: number): Promise<User | null> {
    // Code implementation
    return null;
}
"""

* **Python:** Use docstrings to document functions, classes, and modules.

"""python
def get_user_by_id(user_id: int) -> User:
    """
    Gets a user by ID.

    Args:
        user_id: The user ID.

    Returns:
        The user object, or None if not found.
    """
    # Code implementation
    return None
"""

**Why:** Documentation helps users understand and use APIs effectively, improving usability and reducing support costs.

### 4.3. Code Examples Within Documentation
* **Do This:** Include clear, concise code examples in documentation to illustrate how to use APIs and components (see: [https://learn.microsoft.com/en-us/style-guide/developer-content/code-examples](https://learn.microsoft.com/en-us/style-guide/developer-content/code-examples)).
* **Don't Do This:** Provide incomplete or unclear code examples.

**Why:** Code examples help users quickly understand how to use APIs and components.

## 5. Language-Specific Best Practices

### 5.1. C# and .NET
* **Asynchronous Programming:** Use "async" and "await" for I/O-bound operations to avoid blocking the UI thread or other critical threads.

"""csharp
public async Task<User> GetUserByIdAsync(int id)
{
    // Asynchronously fetch data from the database using Entity Framework Core
    var user = await _dbContext.Users.FindAsync(id);
    return user;
}
"""

* **Dependency Injection:** Use dependency injection (e.g., via "IServiceCollection" in ASP.NET Core) to decouple components, manage service lifetimes, and simplify testing.
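The "<Environment>-<ResourceType>-<Application>-<InstanceNumber>" naming pattern from section 2.3 can also be enforced programmatically, for example in deployment scripts. A minimal Python sketch (the helper names and allowed-value sets are illustrative, not part of any Azure SDK):

```python
import re

# Allowed values mirror the examples in section 2.3; extend as needed.
ENVIRONMENTS = {"dev", "test", "stage", "prod"}
RESOURCE_TYPES = {"web", "db", "func", "vm", "stor", "aks", "appi", "keyv", "sql"}

NAME_PATTERN = re.compile(r"^(?P<env>[a-z]+)-(?P<rtype>[a-z]+)-(?P<app>[a-z0-9]+)-(?P<num>\d{3})$")


def build_resource_name(env: str, rtype: str, app: str, instance: int) -> str:
    """Compose a resource name like 'dev-web-myapp-001'."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    if rtype not in RESOURCE_TYPES:
        raise ValueError(f"unknown resource type: {rtype}")
    return f"{env}-{rtype}-{app}-{instance:03d}"


def is_valid_resource_name(name: str) -> bool:
    """Check a name against the <env>-<type>-<app>-<nnn> convention."""
    m = NAME_PATTERN.match(name)
    return bool(m) and m["env"] in ENVIRONMENTS and m["rtype"] in RESOURCE_TYPES
```

A check like "is_valid_resource_name" can run in CI to reject non-conforming names before resources are provisioned.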
# Testing Methodologies Standards for Azure

This document outlines the testing methodologies standards for Azure development, providing guidance for developers and serving as context for AI coding assistants. It focuses on unit, integration, and end-to-end (E2E) testing within the Azure ecosystem, emphasizing modern approaches, patterns, and the latest Azure features.

## 1. General Testing Principles

### 1.1 Importance of Testing
* **Why:** Thorough testing is crucial for ensuring the reliability, security, and performance of Azure applications. It helps identify defects early in the development lifecycle, reducing the cost and effort of fixing them later.

### 1.2 Testing Pyramid
* **Why:** The testing pyramid emphasizes having more unit tests than integration tests, and more integration tests than end-to-end tests. This approach focuses on fast, isolated tests at the base and slower, more comprehensive tests at the top.
* **Do This:** Balance your testing efforts according to the pyramid: broad unit test coverage, targeted integration tests, and critical path E2E tests.
* **Don't Do This:** Rely heavily on end-to-end tests while neglecting unit and integration tests, as this makes debugging and root cause analysis difficult.

## 2. Unit Testing

### 2.1 Focus and Scope
* **Why:** Unit tests verify the behavior of individual components or functions in isolation. They are fast to execute and provide immediate feedback on code changes.

### 2.2 Standards
* **Do This:**
    * Write focused unit tests that cover all code paths and edge cases within a component.
    * Use mocking frameworks to isolate the component being tested from external dependencies.
    * Follow the Arrange-Act-Assert (AAA) pattern for structuring unit tests.
    * Aim for high code coverage (80% or higher) with meaningful assertions.
* **Don't Do This:**
    * Write unit tests that depend on external resources or databases.
    * Test implementation details rather than the intended behavior.
    * Skip testing exception handling and error conditions.

### 2.3 Code Examples

#### 2.3.1 Using Azure Functions for Unit Testing

The function to be tested:

"""csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class MyFunction
{
    [FunctionName("MyFunction")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        string responseMessage = string.IsNullOrEmpty(name)
            ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
            : $"Hello, {name}. This HTTP triggered function executed successfully.";

        return new OkObjectResult(responseMessage);
    }
}
"""

Unit tests using xUnit and Moq:

"""csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Moq;
using Newtonsoft.Json;
using Xunit;

public class MyFunctionTests
{
    [Fact]
    public async Task MyFunction_WithNameProvided_ReturnsGreeting()
    {
        // Arrange
        var request = new Mock<HttpRequest>();
        var query = new Mock<IQueryCollection>();
        query.Setup(q => q["name"]).Returns("TestUser");
        request.Setup(r => r.Query).Returns(query.Object);
        request.Setup(r => r.Body).Returns(new MemoryStream()); // empty body; without this the StreamReader receives null
        var logger = Mock.Of<ILogger>();

        // Act
        var result = await MyFunction.Run(request.Object, logger);

        // Assert
        var okResult = Assert.IsType<OkObjectResult>(result);
        Assert.Equal("Hello, TestUser. This HTTP triggered function executed successfully.", okResult.Value);
    }

    [Fact]
    public async Task MyFunction_NoNameProvided_ReturnsGenericGreeting()
    {
        // Arrange
        var request = new Mock<HttpRequest>();
        var query = new Mock<IQueryCollection>();
        request.Setup(r => r.Query).Returns(query.Object);
        request.Setup(r => r.Body).Returns(new MemoryStream()); // empty body; without this the StreamReader receives null
        var logger = Mock.Of<ILogger>();

        // Act
        var result = await MyFunction.Run(request.Object, logger);

        // Assert
        var okResult = Assert.IsType<OkObjectResult>(result);
        Assert.Equal("This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", okResult.Value);
    }

    [Fact]
    public async Task MyFunction_NameInBody_ReturnsGreeting()
    {
        // Arrange
        var request = new Mock<HttpRequest>();
        var query = new Mock<IQueryCollection>();
        request.Setup(r => r.Query).Returns(query.Object);

        var ms = new MemoryStream();
        var sw = new StreamWriter(ms);
        string json = JsonConvert.SerializeObject(new { name = "TestUserBody" });
        sw.Write(json);
        sw.Flush();
        ms.Position = 0;
        request.Setup(r => r.Body).Returns(ms);

        var logger = Mock.Of<ILogger>();

        // Act
        var result = await MyFunction.Run(request.Object, logger);

        // Assert
        var okResult = Assert.IsType<OkObjectResult>(result);
        Assert.Equal("Hello, TestUserBody. This HTTP triggered function executed successfully.", okResult.Value);
    }
}
"""

#### 2.3.2 Mocking Azure Service Dependencies (Example: Cosmos DB)

"""csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Moq;
using Xunit;

public class CosmosDbServiceTests
{
    [Fact]
    public async Task GetItemAsync_ItemExists_ReturnsItem()
    {
        // Arrange
        var mockCosmosClient = new Mock<CosmosClient>();
        var mockDatabase = new Mock<Database>();
        var mockContainer = new Mock<Container>();

        // Set up mock behavior
        mockCosmosClient.Setup(client => client.GetDatabase(It.IsAny<string>())).Returns(mockDatabase.Object);
        mockDatabase.Setup(db => db.GetContainer(It.IsAny<string>())).Returns(mockContainer.Object);

        // Set up a successful item retrieval
        var mockItemResponse = new Mock<ItemResponse<MyItem>>();
        mockItemResponse.Setup(response => response.Resource).Returns(new MyItem { Id = "1", Name = "Test Item" });
        mockItemResponse.Setup(response => response.StatusCode).Returns(System.Net.HttpStatusCode.OK);

        mockContainer.Setup(container => container.ReadItemAsync<MyItem>(
            It.IsAny<string>(),
            It.IsAny<PartitionKey>(),
            It.IsAny<ItemRequestOptions>(), // Include ItemRequestOptions
            It.IsAny<CancellationToken>()
        )).ReturnsAsync(mockItemResponse.Object);

        var service = new CosmosDbService(mockCosmosClient.Object);

        // Act
        var result = await service.GetItemAsync("1");

        // Assert
        Assert.NotNull(result);
        Assert.Equal("1", result.Id);
        Assert.Equal("Test Item", result.Name);
    }

    public class MyItem
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    public class CosmosDbService
    {
        private readonly CosmosClient _cosmosClient;
        private readonly string _databaseName = "TestDatabase";
        private readonly string _containerName = "TestContainer";

        public CosmosDbService(CosmosClient cosmosClient)
        {
            _cosmosClient = cosmosClient;
        }

        public async Task<MyItem> GetItemAsync(string id)
        {
            try
            {
                var database = _cosmosClient.GetDatabase(_databaseName);
                var container = database.GetContainer(_containerName);
                // Provide the partition key here when required.
                var itemResponse = await container.ReadItemAsync<MyItem>(id, new PartitionKey(id));
                return itemResponse.Resource;
            }
            catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                return null;
            }
        }
    }
}
"""

### 2.4 Common Anti-Patterns
* **Overspecified Tests:** Writing tests that are too tightly coupled to the implementation details. This often leads to tests that break with minor code changes.
* **Ignoring Edge Cases:** Only testing happy paths and neglecting error scenarios, boundary conditions, and invalid inputs.
* **Insufficient Mocking:** Failing to properly mock dependencies, leading to slow and unreliable tests that behave more like integration tests.
* **Testing Private Methods:** Unit tests should test the public interface (API) of a class, focusing on its behavior, not its internal implementation.

## 3. Integration Testing

### 3.1 Focus and Scope
* **Why:** Integration tests verify the interactions between different components or services within the application. This helps ensure that they work together correctly.

### 3.2 Standards
* **Do This:**
    * Focus on testing the interactions between components, not the individual components themselves.
    * Use real dependencies or test doubles that closely mimic the behavior of real dependencies.
    * Use a dedicated test environment or sandbox to avoid impacting production systems.
    * Clean up any test data or resources after the test execution.
* **Don't Do This:**
    * Test every possible combination of interactions between components.
    * Rely on external systems that are not under your control.
    * Run integration tests against production environments.
    * Leave behind test data or resources that could interfere with other tests or applications.
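The setup/teardown discipline described above (create the resource before the test, clean up after it) is language-agnostic. A minimal Python sketch using the standard library's "unittest" fixtures; the "FakeQueue" class is a hypothetical stand-in for a real Azure resource and illustrates only the lifecycle:

```python
import unittest


class FakeQueue:
    """Stand-in for a real Azure queue; illustrates the test lifecycle only."""

    def __init__(self):
        self.messages = []

    def send(self, body):
        self.messages.append(body)

    def clear(self):
        self.messages.clear()


class QueueIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Arrange shared state: create (or connect to) the test resource.
        self.queue = FakeQueue()

    def tearDown(self):
        # Always clean up test data so other tests start from a known state.
        self.queue.clear()

    def test_send_and_receive(self):
        self.queue.send("hello")
        self.assertEqual(self.queue.messages, ["hello"])
```

With a real Azure resource, "setUp"/"tearDown" (or xUnit's "IAsyncLifetime" in C#) would create and delete a dedicated test queue so no state leaks between test runs.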
### 3.3 Code Examples

#### 3.3.1 Integration Testing with Azure Service Bus

"""csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Xunit;

public class ServiceBusIntegrationTests : IAsyncLifetime
{
    private const string ServiceBusConnectionString = "YOUR_SERVICE_BUS_CONNECTION_STRING"; // Replace with your connection string
    private const string QueueName = "myqueue";
    private readonly string _messageBody = "Test Message";
    private readonly TaskCompletionSource<string> _received = new TaskCompletionSource<string>();

    private ServiceBusClient _client;
    private ServiceBusSender _sender;
    private ServiceBusProcessor _processor;

    public async Task InitializeAsync()
    {
        _client = new ServiceBusClient(ServiceBusConnectionString);
        _sender = _client.CreateSender(QueueName);

        // Configure the processor.
        var serviceBusProcessorOptions = new ServiceBusProcessorOptions
        {
            ReceiveMode = ServiceBusReceiveMode.PeekLock,
            AutoCompleteMessages = true, // completes each message after the handler returns
            MaxConcurrentCalls = 10,
            PrefetchCount = 20,
            MaxAutoLockRenewDuration = TimeSpan.FromSeconds(60),
        };
        _processor = _client.CreateProcessor(QueueName, serviceBusProcessorOptions);
        _processor.ProcessMessageAsync += MessageHandler;
        _processor.ProcessErrorAsync += ErrorHandler;
        await _processor.StartProcessingAsync();
    }

    public async Task DisposeAsync()
    {
        await _processor.StopProcessingAsync();
        await _processor.DisposeAsync();
        await _sender.DisposeAsync();
        await _client.DisposeAsync();
    }

    [Fact]
    public async Task SendAndReceiveMessage_Success()
    {
        // Act
        await _sender.SendMessageAsync(new ServiceBusMessage(_messageBody));

        // Assert: wait (with a timeout) for the processor to deliver the message.
        // Do not also create a ServiceBusReceiver for the same queue here - it would
        // compete with the processor for the message and make the test nondeterministic.
        var completed = await Task.WhenAny(_received.Task, Task.Delay(TimeSpan.FromSeconds(10)));
        Assert.True(completed == _received.Task, "No message was received in the allotted time.");
        Assert.Equal(_messageBody, await _received.Task);
    }

    private Task MessageHandler(ProcessMessageEventArgs args)
    {
        // AutoCompleteMessages is enabled, so the message is completed automatically.
        _received.TrySetResult(args.Message.Body.ToString());
        return Task.CompletedTask;
    }

    private Task ErrorHandler(ProcessErrorEventArgs args)
    {
        Console.WriteLine(args.Exception.ToString()); // Log and inspect the error
        return Task.CompletedTask;
    }
}
"""

#### 3.3.2 Testing Azure Function Integration with Queue Storage

"""csharp
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Queue;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Moq;
using Xunit;

// Note: this example targets the legacy WebJobs 2.x SDK ("JobHostConfiguration"/"JobHost")
// and the Microsoft.Azure.Storage packages. New projects should prefer the
// Azure.Storage.Queues package and the current Azure Functions worker model.
public class QueueIntegrationTests
{
    [Fact]
    public async Task QueueTriggerFunction_ProcessesMessage()
    {
        // Arrange
        string connectionString = "UseDevelopmentStorage=true"; // Azurite/Storage Emulator, or a real connection string
        string queueName = "test-queue";
        string expectedMessage = "Hello Queue!";

        // Create the queue (only if it doesn't exist)
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference(queueName);
        await queue.CreateIfNotExistsAsync();

        // Set up the logger. Note: extension types such as ICollector cannot be mocked,
        // so the function is invoked directly through the JobHost instead.
        var loggerMock = new Mock<ILogger>();

        var config = new JobHostConfiguration();
        config.StorageConnectionString = connectionString;

        // Act: invoke the trigger function through the JobHost, simulating a queue message arriving.
        using (var host = new JobHost(config))
        {
            await host.CallAsync(
                typeof(MyQueueFunction).GetMethod("Run"),
                new { myQueueItem = expectedMessage, log = loggerMock.Object });
        }

        // Assert: the function recorded the message it processed.
        Assert.Equal(expectedMessage, MyQueueFunction.LastProcessedMessage);
    }

    public static class MyQueueFunction
    {
        // Captured for test verification; avoid static state like this outside of samples.
        public static string LastProcessedMessage { get; private set; }

        [FunctionName("QueueFunction")]
        public static async Task Run(
            [QueueTrigger("test-queue", Connection = "AzureWebJobsStorage")] string myQueueItem,
            ILogger log)
        {
            log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
            LastProcessedMessage = myQueueItem;
            await Task.CompletedTask; // Simulate some processing
        }
    }
}
"""

### 3.4 Common Anti-Patterns
* **Brittle Tests:** Making tests dependent on specific data or configurations that are likely to change. This leads to tests that frequently fail for unrelated reasons.
* **Ignoring Asynchronous Behavior:** Failing to properly handle asynchronous operations, leading to race conditions and intermittent test failures.
* **Insufficient Setup and Teardown:** Neglecting to properly set up the test environment or clean up after the test execution, leading to inconsistent results and potential data corruption.
* **Testing Too Much:** Trying to test too many interactions or components in a single integration test. This makes it difficult to isolate the cause of failures and reduces the test's effectiveness.

## 4. End-to-End (E2E) Testing

### 4.1 Focus and Scope
* **Why:** E2E tests simulate real user scenarios and verify that the entire application works correctly from start to finish. This helps ensure that all components and services are properly integrated and that the application meets the user's needs.

### 4.2 Standards
* **Do This:**
    * Focus on testing critical user flows and business processes.
    * Use automation frameworks and tools to simulate user interactions.
    * Use a dedicated test environment or staging environment that closely resembles production.
    * Monitor application logs and metrics to identify performance issues or errors.
* **Don't Do This:**
    * Test every possible user interaction or scenario.
    * Rely on manual testing for critical functionality.
    * Run E2E tests against production environments.
    * Ignore performance issues or errors that are identified during testing.

### 4.3 Code Examples

#### 4.3.1 E2E Testing with Playwright (Simulating User Interactions)

"""csharp
using System.Threading.Tasks;
using Microsoft.Playwright;
using Xunit;

public class PlaywrightE2ETests : IAsyncLifetime
{
    private IPlaywright _playwright;
    private IBrowser _browser;
    public IBrowserContext _context;
    public IPage _page;

    public async Task InitializeAsync()
    {
        // Install Playwright browsers if not already installed:
        // pwsh bin/Debug/net8.0/playwright.ps1 install
        _playwright = await Playwright.CreateAsync();
        _browser = await _playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
        {
            Headless = false // Set to true for running in CI/CD
        });
        _context = await _browser.NewContextAsync();
        _page = await _context.NewPageAsync();
    }

    public async Task DisposeAsync()
    {
        await _page.CloseAsync();
        await _context.CloseAsync();
        await _browser.CloseAsync();
        _playwright.Dispose();
    }

    [Fact]
    public async Task NavigateToHomePage_VerifyTitle()
    {
        await _page.GotoAsync("https://www.example.com");
        string title = await _page.TitleAsync();
        Assert.Equal("Example Domain", title);

        // Example of taking a screenshot:
        // await _page.ScreenshotAsync(new PageScreenshotOptions { Path = "screenshot.png" });
    }

    [Fact]
    public async Task NavigateToHomePage_VerifyH1()
    {
        await _page.GotoAsync("https://www.example.com");
        string h1Text = await _page.Locator("h1").InnerTextAsync();
        Assert.Equal("Example Domain", h1Text);
    }
}
"""

#### 4.3.2 E2E Testing with Azure DevOps Pipelines (Configuration in Azure DevOps YAML)

"""yaml
trigger:
- main

pool:
  vmImage: 'windows-latest'

steps:
- task: DotNetCoreCLI@2
  displayName: 'Build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration Release'

- task: DotNetCoreCLI@2
  displayName: 'Test'
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    # Collect code coverage
    arguments: '--configuration Release --collect:"XPlat Code Coverage" --logger:"trx;LogFileName=test-results.trx"'

# Publish test results
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/test-results.trx'
    failTaskOnFailedTests: true

# Optionally publish code coverage results
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
    reportDirectory: '$(Agent.TempDirectory)/**/coveragereport'
  condition: succeeded() # Execute this task only if the previous tasks succeeded

# Optionally, if needing to run Playwright:
#- script: pwsh bin/Debug/net8.0/playwright.ps1 install chromium
#  displayName: 'Install Playwright Browsers'
#- task: DotNetCoreCLI@2 # E2E (Playwright) tests
#  displayName: 'Run E2E Tests'
#  inputs:
#    command: 'test'
#    projects: '**/E2ETests.csproj' # Update the project path
#    arguments: '--configuration Release'
"""

### 4.4 Common Anti-Patterns
* **Unreliable Tests:** Creating tests that are prone to failure due to external factors, such as network issues or service outages.
* **Slow Test Execution:** Designing tests that take a long time to execute, slowing down the development process and reducing the frequency of testing.
* **Lack of Observability:** Failing to properly monitor the application during testing, making it difficult to diagnose the cause of failures or performance issues.
* **Ignoring Accessibility:** Neglecting to test the application's accessibility features, potentially excluding users with disabilities.

## 5. Performance Testing/Load Testing

### 5.1 Focus and Scope
* **Why:** Performance testing aims to identify potential bottlenecks and ensure the application can handle the expected load under various conditions.

### 5.2 Standards
* **Do This:**
    * Define clear performance goals and metrics (e.g., response time, throughput, resource utilization).
    * Simulate realistic user scenarios and workloads.
    * Use dedicated performance testing tools such as JMeter, LoadView, or Azure Load Testing.
    * Monitor resource utilization (CPU, memory, network) on Azure resources.
* **Don't Do This:**
    * Run performance tests in production environments.
    * Ignore performance degradation or bottlenecks identified during testing.
    * Fail to baseline and track performance over time.

### 5.3 Code Example: Azure Load Testing

Azure Load Testing (ALT) is a fully managed load-testing service.

1. **Create an Azure Load Testing Resource**: Provision an instance through the Azure portal.
2. **Create a Test**: Upload a JMeter script or define a simple URL-based test.
3. **Configure**: Specify test parameters (e.g., number of virtual users, duration).
4. **Run Test**: Execute and monitor real-time metrics.
5. **Analyze Results**: Review performance insights and identify bottlenecks.

"""bash
# Create an Azure Load Testing resource
az load create --name <load_testing_resource_name> --location <location> --resource-group <resource_group_name> --description "Testing some APIs."

# Upload the JMeter test plan and run the test
az load test create --test-id <test_id> --resource-group <resource_group_name> --load-test-resource <load_testing_resource_name> --display-name <test_display_name> --description <test_description> --test-plan <test_plan.jmx>
"""

### 5.4 Common Anti-Patterns
* **Insufficient Load**: Using too few virtual users misses peak-load moments and yields unrealistic insights.
* **Ignoring External Dependencies**: Neglecting the impact of external services that can skew results.
* **Testing Too Late**: Addressing performance only after deployment is expensive; performance awareness must be built in from the earliest stages.

## 6. Security Testing

### 6.1 Focus and Scope
* **Why:** Security testing aims to identify vulnerabilities in the application that could be exploited by attackers.

### 6.2 Standards
* **Do This:**
    * Perform regular vulnerability scans and penetration testing.
    * Follow security best practices, such as the OWASP Top Ten.
    * Use static analysis tools to identify potential security flaws in the code.
    * Implement security measures at all levels of the application, including authentication, authorization, and data encryption.
* **Don't Do This:**
    * Ignore security vulnerabilities or potential risks.
    * Rely solely on perimeter security measures.
    * Store sensitive data in plain text.
    * Use weak or default passwords.

### 6.3 Code Example: Static Code Analysis with SonarCloud

1. **Set up SonarCloud Integration**: Connect your Azure DevOps project to SonarCloud.
2. **Add SonarCloud Tasks**: Include the SonarCloud analysis tasks in your Azure DevOps pipeline.
3. **Configure Quality Gate**: Define quality criteria such as vulnerability rates, bug counts, and coverage percentage.

"""yaml
# Add SonarCloud prepare analysis configuration task
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'YourSonarCloudServiceConnection'
    organization: 'your-sonarcloud-organization'
    scannerMode: 'MSBuild'
    projectKey: 'your-project-key'
    projectName: 'Your Project Name'

# Add MSBuild task to build the project
- task: MSBuild@1
  inputs:
    solution: '**\*.sln'
    msbuildArguments: '/t:Rebuild'

# Add SonarCloud analysis task
- task: SonarCloudAnalyze@1

# Add SonarCloud publish quality gate result task
- task: SonarCloudPublish@1
  inputs:
    pollingTimeoutSec: '300'
"""

### 6.4 Common Anti-Patterns
* **Lack of Regular Assessments**: Infrequent security testing and assessments can leave systems vulnerable for long periods.
* **Ignoring Third-Party Components**: Failing to assess the security of libraries, dependencies, and other external components.
* **Poor Secrets Management**: Embedding sensitive keys, tokens, and passwords directly into code or configuration files.

By adhering to these testing methodology standards, Azure developers can ensure that their applications are reliable, secure, and performant. This document provides a foundation for building high-quality Azure applications.
# Core Architecture Standards for Azure

This document outlines the core architectural standards for developing applications on Microsoft Azure. These standards are designed to promote maintainability, scalability, security, and performance by guiding developers toward best practices and modern approaches. This document will also be used as a reference point for AI coding assistants to provide relevant and accurate suggestions.

## 1. Architectural Principles

These overarching principles should guide all architectural decisions on Azure.

* **Principle of Least Privilege:** Grant services and users only the permissions they require to function.
* **Defense in Depth:** Implement multiple layers of security controls to protect against various threats.
* **Scalability and Elasticity:** Design applications to scale automatically based on demand, leveraging Azure's elasticity.
* **Resiliency:** Implement fault tolerance and self-healing mechanisms to ensure continuous availability.
* **Observability:** Implement comprehensive logging, monitoring, and tracing to gain insights into application behavior and performance.
* **Cost Optimization:** Design applications to minimize resource consumption and take advantage of Azure's cost management features.
* **Infrastructure as Code (IaC):** Manage infrastructure using code, enabling automation, version control, and repeatability.

## 2. Fundamental Architectural Patterns

### 2.1 Microservices Architecture

**Do This:**
* Embrace microservices for complex applications that require independent scalability and deployment.
* Design microservices around business capabilities, not technical functions.
* Use lightweight communication protocols like REST or gRPC for inter-service communication.
* Implement API gateways for external access to microservices.
* Use Azure Kubernetes Service (AKS) for container orchestration.
* Implement service discovery using Azure DNS or a dedicated service registry.

**Don't Do This:**
* Create monolithic applications that are difficult to scale and maintain.
* Introduce tight coupling between microservices.
* Expose internal microservice endpoints directly to external users.
* Neglect monitoring and logging for each microservice.

**Why This Matters:** Microservices enable independent scaling, deployment, and fault isolation, leading to more resilient and maintainable applications. AKS simplifies container orchestration.

**Code Example (AKS Deployment):**

"""yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: my-microservice
        image: myregistry.azurecr.io/my-microservice:latest
        ports:
        - containerPort: 8080
"""

### 2.2 Event-Driven Architecture

**Do This:**
* Use event-driven architecture for asynchronous communication between services.
* Leverage Azure Event Hubs or Azure Service Bus for event ingestion and distribution.
* Implement idempotent event handlers to prevent duplicate processing.
* Design events to be immutable and contain all necessary context.
* Use Azure Functions or Logic Apps to process events.

**Don't Do This:**
* Rely on synchronous communication for long-running operations.
* Create complex event schemas without versioning.
* Neglect error handling and dead-letter queues for failed events.

**Why This Matters:** Asynchronous communication improves performance, scalability, and resilience. Event Hubs and Service Bus provide reliable and scalable eventing platforms.
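The idempotent-handler guidance above can be sketched with a simple processed-event store. A minimal Python illustration; in production the set of processed IDs would live in durable storage (e.g., Cosmos DB or Table Storage) rather than in memory, and the class and field names here are illustrative:

```python
class IdempotentHandler:
    """Skips events whose IDs were already processed (at-least-once delivery)."""

    def __init__(self):
        self._processed_ids = set()  # use durable storage in a real system
        self.results = []

    def handle(self, event: dict) -> bool:
        """Process an event exactly once; return False for duplicate deliveries."""
        event_id = event["id"]  # events must carry a stable, unique ID
        if event_id in self._processed_ids:
            return False  # duplicate delivery: safely ignored
        self.results.append(event["payload"])  # the actual side effect
        self._processed_ids.add(event_id)  # mark processed only after success
        return True
```

Marking the ID processed only after the side effect succeeds keeps a failed handler retryable, which is exactly what at-least-once brokers such as Event Hubs and Service Bus assume.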
**Code Example (Azure Function Triggered by Event Hub):**

"""csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

public static class EventHubTriggerCSharp
{
    [FunctionName("EventHubTriggerCSharp")]
    public static void Run(
        [EventHubTrigger("myhub", Connection = "EventHubConnectionAppSetting")] string myEventHubMessage,
        ILogger log)
    {
        log.LogInformation($"C# Event Hub trigger function processed a message: {myEventHubMessage}");
    }
}
"""

### 2.3 Serverless Architecture

**Do This:**

* Utilize Azure Functions and Logic Apps for stateless, event-driven workloads.
* Design functions to be small and focused on a single task.
* Leverage Azure API Management for managing and securing serverless APIs.
* Use Azure Durable Functions for orchestrating complex workflows.
* Implement monitoring and logging using Azure Monitor.

**Don't Do This:**

* Develop long-running or stateful functions.
* Overuse serverless functions for tasks that are better suited for virtual machines or containers.
* Neglect security considerations when exposing serverless APIs.

**Why This Matters:** Serverless architectures reduce operational overhead, scale automatically, and offer a pay-per-use pricing model.

**Code Example (Azure Function with HTTP Trigger):**

"""csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class HttpExample
{
    [FunctionName("HttpExample")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}
"""

### 2.4 Data Lake Architecture

**Do This:**

* Use Azure Data Lake Storage Gen2 as the central repository for all data, structured and unstructured.
* Partition data logically based on business needs and query patterns.
* Implement access control using Azure Active Directory and role-based access control (RBAC).
* Use Azure Data Factory for data ingestion and transformation.
* Leverage Azure Synapse Analytics for data warehousing and analytics.

**Don't Do This:**

* Create data silos that are difficult to access and integrate.
* Store sensitive data without proper encryption and access controls.
* Neglect data governance and metadata management.

**Why This Matters:** A data lake provides a centralized and scalable platform for storing and processing large volumes of data, enabling advanced analytics and machine learning.

**Code Example (Azure Data Factory Pipeline):**

"""json
{
  "name": "my_data_pipeline",
  "properties": {
    "activities": [
      {
        "name": "CopyData",
        "type": "Copy",
        "inputs": [
          {
            "referenceName": "SourceDataset",
            "type": "DatasetReference"
          }
        ],
        "outputs": [
          {
            "referenceName": "DestinationDataset",
            "type": "DatasetReference"
          }
        ],
        "translator": {
          "type": "TabularTranslator",
          "typeConversion": true,
          "typeConversionSettings": {
            "allowDataTruncation": true,
            "treatAsEmptyString": ""
          }
        },
        "enableStaging": false
      }
    ],
    "datasets": [ /* Define Source and Destination Datasets here */ ]
  }
}
"""

## 3. Project Structure and Organization

### 3.1 Logical Grouping by Functionality

**Do This:**

* Organize code into logical modules based on functionality or business domain.
* Use namespaces or folders to encapsulate related classes and functions.
* Follow a consistent naming convention for modules, classes, and functions.

**Don't Do This:**

* Create large, monolithic projects with tightly coupled code.
* Mix unrelated functionalities within the same module.
* Use inconsistent naming conventions.

**Why This Matters:** Logical grouping improves code readability, maintainability, and testability.

**Code Example (C# Project Structure):**

"""
MyProject/
├── MyProject.sln
├── MyProject.Core/
│   ├── Models/
│   │   └── Customer.cs
│   ├── Services/
│   │   └── CustomerService.cs
│   ├── Interfaces/
│   │   └── ICustomerService.cs
├── MyProject.API/
│   ├── Controllers/
│   │   └── CustomerController.cs
│   ├── Startup.cs
│   ├── appsettings.json
"""

### 3.2 Separation of Concerns (SoC)

**Do This:**

* Apply the principle of separation of concerns (SoC) by dividing the application into distinct layers, such as presentation, business logic, and data access.
* Use dependency injection to decouple components and improve testability.
* Define clear interfaces between layers to promote loose coupling.

**Don't Do This:**

* Mix presentation logic with business logic or data access code.
* Create tight dependencies between layers.
* Repeat code across different layers.

**Why This Matters:** SoC enhances maintainability, testability, and reusability by isolating different responsibilities within the application.

**Code Example (Dependency Injection in ASP.NET Core):**

"""csharp
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<ICustomerService, CustomerService>();
    services.AddControllers();
}

// Controller
public class CustomerController : ControllerBase
{
    private readonly ICustomerService _customerService;

    public CustomerController(ICustomerService customerService)
    {
        _customerService = customerService;
    }

    [HttpGet]
    public IActionResult GetCustomers()
    {
        var customers = _customerService.GetCustomers();
        return Ok(customers);
    }
}
"""

### 3.3 Infrastructure as Code (IaC) Organization

**Do This:**

* Use Azure Resource Manager (ARM) templates, Bicep, or Terraform to define and manage infrastructure as code.
* Organize IaC code into logical modules based on the resources being provisioned.
* Use parameterization to customize deployments for different environments.
* Implement version control for IaC code.

**Don't Do This:**

* Manually provision resources through the Azure portal.
* Store sensitive information (e.g., passwords, API keys) directly in IaC code.
* Neglect testing and validation of IaC code.

**Why This Matters:** IaC enables automation, repeatability, and version control for infrastructure deployments, reducing errors and improving consistency.

**Code Example (Bicep Template