# Testing Methodologies Standards for Azure
This document outlines the testing methodology standards for Azure development, providing guidance for developers and serving as context for AI coding assistants. It covers unit, integration, and end-to-end (E2E) testing within the Azure ecosystem, along with performance and security testing, emphasizing modern approaches, patterns, and the latest Azure features.
## 1. General Testing Principles
### 1.1 Importance of Testing
* **Why:** Thorough testing is crucial for ensuring the reliability, security, and performance of Azure applications. It helps identify defects early in the development lifecycle, reducing the cost and effort of fixing them later.
### 1.2 Testing Pyramid
* **Why:** The testing pyramid emphasizes having more unit tests than integration tests, and more integration tests than end-to-end tests. This approach focuses on fast, isolated tests at the base and slower, more comprehensive tests at the top.
* **Do This:** Balance your testing efforts according to the pyramid: broad unit test coverage, targeted integration tests, and critical path E2E tests.
* **Don't Do This:** Rely heavily on end-to-end tests while neglecting unit and integration tests, as this makes debugging and root cause analysis difficult.
## 2. Unit Testing
### 2.1 Focus and Scope
* **Why:** Unit tests verify the behavior of individual components or functions in isolation. They are fast to execute and provide immediate feedback on code changes.
### 2.2 Standards
* **Do This:**
* Write focused unit tests that cover all code paths and edge cases within a component.
* Use mocking frameworks to isolate the component being tested from external dependencies.
* Follow the Arrange-Act-Assert (AAA) pattern for structuring unit tests.
* Aim for high code coverage (80% or higher) with meaningful assertions (a command for measuring coverage locally follows this list).
* **Don't Do This:**
* Write unit tests that depend on external resources or databases.
* Test implementation details rather than the intended behavior.
* Skip testing exception handling and error conditions.
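A quick way to verify the coverage bar locally is the coverage collector wired into `dotnet test` (a sketch assuming the Coverlet collector package, which the standard xUnit test template references):
"""bash
# Run tests and collect coverage; the Cobertura XML report is written under
# TestResults/<run-id>/coverage.cobertura.xml and can be fed to a report tool
# such as reportgenerator for an HTML summary.
dotnet test --collect:"XPlat Code Coverage"
"""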
### 2.3 Code Examples
#### 2.3.1 Unit Testing an Azure Function (HTTP Trigger)
"""csharp
// Function to be tested (in-process Azure Functions model)
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
public static class MyFunction
{
    [FunctionName("MyFunction")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string name = req.Query["name"];
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;
string responseMessage = string.IsNullOrEmpty(name)
? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
: $"Hello, {name}. This HTTP triggered function executed successfully.";
return new OkObjectResult(responseMessage);
}
}
// Unit Test using xUnit and Moq
using Xunit;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Moq;
using System.IO;
using Newtonsoft.Json;
using System.Threading.Tasks;
public class MyFunctionTests
{
[Fact]
public async Task MyFunction_WithNameProvided_ReturnsGreeting()
{
// Arrange
        var request = new Mock<HttpRequest>();
        var query = new Mock<IQueryCollection>();
        query.Setup(q => q["name"]).Returns("TestUser");
        request.Setup(r => r.Query).Returns(query.Object);
        request.Setup(r => r.Body).Returns(new MemoryStream()); // Empty body; the name comes from the query string
        var logger = Mock.Of<ILogger>();
// Act
var result = await MyFunction.Run(request.Object, logger);
// Assert
        var okResult = Assert.IsType<OkObjectResult>(result);
Assert.Equal("Hello, TestUser. This HTTP triggered function executed successfully.", okResult.Value);
}
[Fact]
public async Task MyFunction_NoNameProvided_ReturnsGenericGreeting()
{
// Arrange
        var request = new Mock<HttpRequest>();
        var query = new Mock<IQueryCollection>();
        request.Setup(r => r.Query).Returns(query.Object);
        request.Setup(r => r.Body).Returns(new MemoryStream()); // No query value and an empty body yield the generic greeting
        var logger = Mock.Of<ILogger>();
// Act
var result = await MyFunction.Run(request.Object, logger);
// Assert
        var okResult = Assert.IsType<OkObjectResult>(result);
Assert.Equal("This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", okResult.Value);
}
[Fact]
public async Task MyFunction_NameInBody_ReturnsGreeting()
{
// Arrange
        var request = new Mock<HttpRequest>();
        var query = new Mock<IQueryCollection>();
request.Setup(r => r.Query).Returns(query.Object);
var ms = new MemoryStream();
var sw = new StreamWriter(ms);
string json = JsonConvert.SerializeObject(new { name = "TestUserBody" });
sw.Write(json);
sw.Flush();
ms.Position = 0;
request.Setup(r => r.Body).Returns(ms);
        var logger = Mock.Of<ILogger>();
// Act
var result = await MyFunction.Run(request.Object, logger);
// Assert
        var okResult = Assert.IsType<OkObjectResult>(result);
Assert.Equal("Hello, TestUserBody. This HTTP triggered function executed successfully.", okResult.Value);
}
}
"""
#### 2.3.2 Mocking Azure Service Dependencies (Example: Cosmos DB)
"""csharp
using Moq;
using Microsoft.Azure.Cosmos;
using Xunit;
using System.Threading;
using System.Threading.Tasks;
public class CosmosDbServiceTests
{
[Fact]
public async Task GetItemAsync_ItemExists_ReturnsItem()
{
// Arrange
        var mockCosmosClient = new Mock<CosmosClient>();
        var mockDatabase = new Mock<Database>();
        var mockContainer = new Mock<Container>();
        // Setup mock behavior
        mockCosmosClient.Setup(client => client.GetDatabase(It.IsAny<string>())).Returns(mockDatabase.Object);
        mockDatabase.Setup(db => db.GetContainer(It.IsAny<string>())).Returns(mockContainer.Object);
        // Setup a successful item retrieval (ReadItemAsync returns an ItemResponse<T>)
        var mockItemResponse = new Mock<ItemResponse<MyItem>>();
        mockItemResponse.Setup(response => response.Resource).Returns(new MyItem { Id = "1", Name = "Test Item" });
        mockItemResponse.Setup(response => response.StatusCode).Returns(System.Net.HttpStatusCode.OK);
        mockContainer.Setup(container => container.ReadItemAsync<MyItem>(
            It.IsAny<string>(),
            It.IsAny<PartitionKey>(),
            It.IsAny<ItemRequestOptions>(), // Include ItemRequestOptions
            It.IsAny<CancellationToken>()
        )).ReturnsAsync(mockItemResponse.Object);
var service = new CosmosDbService(mockCosmosClient.Object);
// Act
var result = await service.GetItemAsync("1");
// Assert
Assert.NotNull(result);
Assert.Equal("1", result.Id);
Assert.Equal("Test Item", result.Name);
}
public class MyItem
{
public string Id { get; set; }
public string Name { get; set; }
}
public class CosmosDbService
{
private readonly CosmosClient _cosmosClient;
private readonly string _databaseName = "TestDatabase";
private readonly string _containerName = "TestContainer";
public CosmosDbService(CosmosClient cosmosClient)
{
_cosmosClient = cosmosClient;
}
        public async Task<MyItem> GetItemAsync(string id)
{
try
{
var database = _cosmosClient.GetDatabase(_databaseName);
var container = database.GetContainer(_containerName);
                var itemResponse = await container.ReadItemAsync<MyItem>(id, new PartitionKey(id)); // Provide the partition key here when required.
return itemResponse.Resource;
}
catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
{
return null;
}
}
}
}
"""
### 2.4 Common Anti-Patterns
* **Overspecified Tests:** Writing tests that are too tightly coupled to implementation details; they break with minor, behavior-preserving code changes (see the sketch after this list).
* **Ignoring Edge Cases:** Only testing happy paths and neglecting error scenarios, boundary conditions, and invalid inputs.
* **Insufficient Mocking:** Failing to properly mock dependencies, leading to slow and unreliable tests that behave more like integration tests.
* **Testing Private Methods:** Unit tests should test the public interface (API) of a class, focusing on its behavior, not its internal implementation.
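To make the overspecified-test anti-pattern concrete, here is a minimal sketch contrasting a behavior-focused assertion with a verification that pins implementation details (the `IPriceFeed` and `PricingService` types are hypothetical, invented for illustration):
"""csharp
using Moq;
using Xunit;
public interface IPriceFeed
{
    decimal GetRate(string currency);
}
public class PricingService
{
    private readonly IPriceFeed _feed;
    public PricingService(IPriceFeed feed) => _feed = feed;
    // Converts an amount between currencies using the feed's rates.
    public decimal Convert(decimal amount, string from, string to)
        => amount / _feed.GetRate(from) * _feed.GetRate(to);
}
public class PricingServiceTests
{
    [Fact]
    public void Convert_ReturnsAmountInTargetCurrency()
    {
        // Arrange
        var feed = new Mock<IPriceFeed>();
        feed.Setup(f => f.GetRate("USD")).Returns(1.0m);
        feed.Setup(f => f.GetRate("EUR")).Returns(0.5m);
        var service = new PricingService(feed.Object);
        // Act
        decimal result = service.Convert(100m, "USD", "EUR");
        // Assert on the observable behavior...
        Assert.Equal(50m, result);
        // ...not on internals: a verification like the one below pins the exact
        // number of feed calls and breaks if the service later adds caching,
        // even though its observable behavior is unchanged. Avoid it.
        // feed.Verify(f => f.GetRate("USD"), Times.Exactly(1));
    }
}
"""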
## 3. Integration Testing
### 3.1 Focus and Scope
* **Why:** Integration tests verify the interactions between different components or services within the application. This helps ensure that they work together correctly.
### 3.2 Standards
* **Do This:**
* Focus on testing the interactions between components, not the individual components themselves.
* Use real dependencies or test doubles that closely mimic the behavior of real dependencies.
* Use a dedicated test environment or sandbox to avoid impacting production systems.
* Clean up any test data or resources after the test execution.
* **Don't Do This:**
* Test every possible combination of interactions between components.
* Rely on external systems that are not under your control.
* Run integration tests against production environments.
* Leave behind test data or resources that could interfere with other tests or applications.
### 3.3 Code Examples
#### 3.3.1 Integration Testing with Azure Service Bus
"""csharp
using Xunit;
using Azure.Messaging.ServiceBus;
using System;
using System.Threading.Tasks;
public class ServiceBusIntegrationTests : IAsyncLifetime
{
    private const string ServiceBusConnectionString = "YOUR_SERVICE_BUS_CONNECTION_STRING"; // Replace with your connection string
    private const string QueueName = "myqueue";
    private ServiceBusClient _client;
    private ServiceBusSender _sender;
    private ServiceBusProcessor _processor;
    private readonly string _messageBody = "Test Message";
    private readonly TaskCompletionSource<string> _received =
        new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);
    public async Task InitializeAsync()
    {
        _client = new ServiceBusClient(ServiceBusConnectionString);
        _sender = _client.CreateSender(QueueName);
        // Configure the processor. Use ServiceBusProcessorOptions here, or
        // ServiceBusSessionProcessorOptions for session-enabled queues, but not both.
        var serviceBusProcessorOptions = new ServiceBusProcessorOptions
        {
            ReceiveMode = ServiceBusReceiveMode.PeekLock,
            AutoCompleteMessages = true,
            MaxConcurrentCalls = 10,
            PrefetchCount = 20,
            MaxAutoLockRenewDuration = TimeSpan.FromSeconds(60),
        };
        _processor = _client.CreateProcessor(QueueName, serviceBusProcessorOptions);
        _processor.ProcessMessageAsync += MessageHandler;
        _processor.ProcessErrorAsync += ErrorHandler;
        await _processor.StartProcessingAsync();
    }
    public async Task DisposeAsync()
    {
        await _processor.StopProcessingAsync();
        await _processor.DisposeAsync();
        await _sender.DisposeAsync();
        await _client.DisposeAsync();
    }
    [Fact]
    public async Task SendAndReceiveMessage_Success()
    {
        // Act
        await _sender.SendMessageAsync(new ServiceBusMessage(_messageBody));
        // Assert: wait (with a timeout) for the processor to hand the message to
        // MessageHandler. Do not create a separate ServiceBusReceiver for the same
        // queue here; it would compete with the processor and make the test flaky.
        var completed = await Task.WhenAny(_received.Task, Task.Delay(TimeSpan.FromSeconds(10)));
        Assert.True(completed == _received.Task, "No message was received in the allotted time.");
        Assert.Equal(_messageBody, await _received.Task);
    }
    private Task MessageHandler(ProcessMessageEventArgs args)
    {
        // AutoCompleteMessages is enabled, so the message is settled automatically;
        // calling args.CompleteMessageAsync here as well would throw.
        _received.TrySetResult(args.Message.Body.ToString());
        return Task.CompletedTask;
    }
    private Task ErrorHandler(ProcessErrorEventArgs args)
    {
        Console.WriteLine(args.Exception.ToString()); // Log and inspect the error
        return Task.CompletedTask;
    }
}
"""
#### 3.3.2 Testing Azure Function Integration with Queue Storage
"""csharp
using Xunit;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Moq;
using System.Threading.Tasks;
using Azure.Storage.Queues;        // Current Queue Storage SDK (replaces the deprecated Microsoft.Azure.Storage.Queue)
using Azure.Storage.Queues.Models;
public class QueueIntegrationTests
{
    [Fact]
    public async Task QueueTriggerFunction_ProcessesQueuedMessage()
    {
        // Arrange: use the Azurite local emulator, or substitute a real connection string.
        string connectionString = "UseDevelopmentStorage=true";
        string queueName = "test-queue";
        string expectedMessage = "Hello Queue!";
        // Create the queue (only if it doesn't exist)
        var queueClient = new QueueClient(connectionString, queueName);
        await queueClient.CreateIfNotExistsAsync();
        var loggerMock = new Mock<ILogger>();
        // Act: enqueue a message, dequeue it, and invoke the function directly with
        // the message body. The [QueueTrigger] binding itself is exercised by the
        // Functions runtime; this test covers the queue round-trip and the handler.
        await queueClient.SendMessageAsync(expectedMessage);
        QueueMessage[] messages = (await queueClient.ReceiveMessagesAsync(maxMessages: 1)).Value;
        Assert.NotEmpty(messages);
        QueueMessage retrievedMessage = messages[0];
        await MyQueueFunction.Run(retrievedMessage.Body.ToString(), loggerMock.Object);
        // Assert
        Assert.Equal(expectedMessage, retrievedMessage.Body.ToString());
        // Clean up the test message
        await queueClient.DeleteMessageAsync(retrievedMessage.MessageId, retrievedMessage.PopReceipt);
    }
    public static class MyQueueFunction
    {
        [FunctionName("QueueFunction")]
        public static async Task Run(
            [QueueTrigger("test-queue", Connection = "AzureWebJobsStorage")] string myQueueItem,
            ILogger log)
        {
            log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
            await Task.CompletedTask; // Simulate some processing
        }
    }
}
"""
### 3.4 Common Anti-Patterns
* **Brittle Tests:** Making tests dependent on specific data or configurations that are likely to change. This leads to tests that frequently fail for unrelated reasons.
* **Ignoring Asynchronous Behavior:** Failing to properly handle asynchronous operations, leading to race conditions and intermittent test failures (a polling-helper sketch follows this list).
* **Insufficient Setup and Teardown:** Neglecting to properly set up the test environment or clean up after the test execution, leading to inconsistent results and potential data corruption.
* **Testing Too Much:** Trying to test too many interactions or components in a single integration test. This makes it difficult to isolate the cause of failures and reduces the test's effectiveness.
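To avoid the race conditions called out above, poll for the expected state with a timeout instead of sleeping for a fixed interval. A minimal helper sketch (the `TestWait` name and the usage shown are illustrative):
"""csharp
using System;
using System.Threading.Tasks;
public static class TestWait
{
    // Polls a condition until it returns true or the timeout elapses.
    public static async Task<bool> UntilAsync(
        Func<Task<bool>> condition,
        TimeSpan timeout,
        TimeSpan? pollInterval = null)
    {
        var interval = pollInterval ?? TimeSpan.FromMilliseconds(200);
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            if (await condition())
                return true;
            await Task.Delay(interval);
        }
        return false;
    }
}
// Usage in a test (MessageWasProcessedAsync is whatever check your test needs):
// bool done = await TestWait.UntilAsync(() => MessageWasProcessedAsync(), TimeSpan.FromSeconds(30));
// Assert.True(done, "The message was not processed within the timeout.");
"""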
## 4. End-to-End (E2E) Testing
### 4.1 Focus and Scope
* **Why:** E2E tests simulate real user scenarios and verify that the entire application works correctly from start to finish. This helps ensure that all components and services are properly integrated and that the application meets the user's needs.
### 4.2 Standards
* **Do This:**
* Focus on testing critical user flows and business processes.
* Use automation frameworks and tools to simulate user interactions.
* Use a dedicated test environment or staging environment that closely resembles production.
* Monitor application logs and metrics to identify performance issues or errors.
* **Don't Do This:**
* Test every possible user interaction or scenario.
* Rely on manual testing for critical functionality.
* Run E2E tests against production environments.
* Ignore performance issues or errors that are identified during testing.
### 4.3 Code Examples
#### 4.3.1 E2E Testing with Playwright (Simulating User Interactions)
"""csharp
using Microsoft.Playwright;
using Xunit;
using System.Threading.Tasks;
public class PlaywrightE2ETests : IAsyncLifetime
{
private IPlaywright _playwright;
private IBrowser _browser;
public IBrowserContext _context;
public IPage _page;
public async Task InitializeAsync()
{
// Install Playwright if not already installed: pwsh bin/Debug/net8.0/playwright.ps1 install
_playwright = await Playwright.CreateAsync();
_browser = await _playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
{
Headless = false // Set to true for running in CI/CD
});
_context = await _browser.NewContextAsync();
_page = await _context.NewPageAsync();
}
public async Task DisposeAsync()
{
await _page.CloseAsync();
await _context.CloseAsync();
await _browser.CloseAsync();
_playwright.Dispose();
}
[Fact]
public async Task NavigateToHomePage_VerifyTitle()
{
await _page.GotoAsync("https://www.example.com");
string title = await _page.TitleAsync();
Assert.Equal("Example Domain", title);
// Example of taking a screenshot
// await _page.ScreenshotAsync(new PageScreenshotOptions { Path = "screenshot.png" });
}
[Fact]
public async Task NavigateToHomePage_VerifyH1()
{
await _page.GotoAsync("https://www.example.com");
string h1Text = await _page.Locator("h1").InnerTextAsync();
Assert.Equal("Example Domain", h1Text);
// Example of taking a screenshot
// await _page.ScreenshotAsync(new PageScreenshotOptions { Path = "screenshot.png" });
}
}
"""
#### 4.3.2 E2E Testing with Azure DevOps Pipelines
(Configuration in Azure DevOps YAML)
"""yaml
trigger:
- main
pool:
vmImage: 'windows-latest'
steps:
- task: DotNetCoreCLI@2
displayName: 'Build'
inputs:
command: 'build'
projects: '**/*.csproj'
arguments: '--configuration Release'
- task: DotNetCoreCLI@2
displayName: 'Test'
inputs:
command: 'test'
projects: '**/*Tests.csproj'
arguments: '--configuration Release --collect:"XPlat Code Coverage" --logger:"trx;LogFileName=test-results.trx"' # Collect code coverage
- task: PublishTestResults@2 # Publish Test Results
inputs:
testResultsFormat: 'VSTest'
testResultsFiles: '**/test-results.trx'
failTaskOnFailedTests: true
#Optionally Publish code coverage results
- task: PublishCodeCoverageResults@2 # v2 takes only the summary file and detects the Cobertura format automatically
  inputs:
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
condition: succeeded() # Execute this task only if the previous tasks succeeded
# Optionally, if needing to run Playwright:
#- script: pwsh bin/Debug/net8.0/playwright.ps1 install --browser chromium
# displayName: 'Install Playwright Browsers'
#- task: DotNetCoreCLI@2 #E2E (Playwright tests e.g.)
# displayName: 'Run E2E Tests'
# inputs:
# command: 'test'
# projects: '**/E2ETests.csproj' # Update the project path
# arguments: '--configuration Release'
"""
### 4.4 Common Anti-Patterns
* **Unreliable Tests:** Creating tests that are prone to failure due to external factors, such as network issues, rendering delays, or service outages (see the auto-waiting sketch after this list).
* **Slow Test Execution:** Designing tests that take a long time to execute, slowing down the development process and reducing the frequency of testing.
* **Lack of Observability:** Failing to properly monitor the application during testing, making it difficult to diagnose the cause of failures or performance issues.
* **Ignoring Accessibility:** Neglecting to test the application's accessibility features, potentially excluding users with disabilities.
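To address the unreliable-tests pitfall in browser suites like the Playwright example above, prefer Playwright's auto-waiting `Expect` assertions over immediate reads; they retry until the condition holds or a timeout elapses. A minimal sketch (the class and method names are illustrative):
"""csharp
using Microsoft.Playwright;
using System.Threading.Tasks;
using static Microsoft.Playwright.Assertions;
public static class ResilientAssertionsExample
{
    // Expect(...) retries its condition until it holds or a timeout elapses,
    // instead of asserting against a single immediate snapshot of the page.
    public static async Task VerifyHeadingAsync(IPage page)
    {
        await page.GotoAsync("https://www.example.com");
        await Expect(page.Locator("h1")).ToHaveTextAsync("Example Domain");
    }
}
"""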
## 5. Performance Testing/Load Testing
### 5.1 Focus and Scope
* **Why:** Performance testing aims to identify potential bottlenecks and ensure the application can handle the expected load under various conditions.
### 5.2 Standards
* **Do This:**
* Define clear performance goals and metrics (e.g., response time, throughput, resource utilization).
* Simulate realistic user scenarios and workloads.
* Use dedicated performance testing tools such as JMeter, LoadView, or Azure Load Testing.
* Monitor resource utilization (CPU, memory, network) on Azure resources.
* **Don't Do This:**
* Run performance tests in production environments.
* Ignore performance degradation or bottlenecks identified during testing.
* Fail to baseline and track performance over time.
### 5.3 Code Example: Azure Load Testing
Azure Load Testing (ALT) is a fully managed load-testing service.
1. **Create an Azure Load Testing Resource**: Provision an instance through the Azure portal.
2. **Create a Test**: Upload a JMeter script or define a simple URL-based test.
3. **Configure**: Specify test parameters (e.g., number of virtual users, duration)
4. **Run Test**: Execute and monitor real-time metrics.
5. **Analyze Results**: Review performance insights and identify bottlenecks.
"""Azure CLI
#Create an Azure load testing resource
az load create --name --location --resource-group --description "Testing some APIs."
#Upload and run the jmx
az load test create --test-id --resource-group --load-testing-resource --display-name --description --test-plan
"""
### 5.4 Common Anti-Patterns
- **Insufficient Load**: Using too few virtual users, which misses peak-load behavior and yields unrealistic insights (see the command sketch after this list).
- **Ignoring External Dependencies**: Neglecting external services whose behavior can skew the results.
- **Testing Too Late**: Addressing performance only after deployment is expensive; build performance testing into the earlier stages of the development lifecycle.
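For the insufficient-load pitfall, avoid hard-coding the virtual-user count in the test plan. If, for example, the JMeter thread group reads its user count from a property such as `${__P(users,50)}`, the same plan (a hypothetical `loadtest.jmx`) can be driven at different load levels from the command line:
"""bash
# Baseline run in non-GUI mode: uses the plan's default of 50 virtual users.
jmeter -n -t loadtest.jmx -l baseline.jtl
# Peak-load run: override the 'users' property to simulate 500 virtual users.
jmeter -n -t loadtest.jmx -Jusers=500 -l peak.jtl
"""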
## 6. Security Testing
### 6.1 Focus and Scope
* **Why:** Security testing aims to identify vulnerabilities in the application that could be exploited by attackers.
### 6.2 Standards
* **Do This:**
* Perform regular vulnerability scans and penetration testing.
* Follow security best practices, such as the OWASP Top Ten.
* Use static analysis tools to identify potential security flaws in the code.
* Implement security measures at all levels of the application, including authentication, authorization, and data encryption.
* **Don't Do This:**
* Ignore security vulnerabilities or potential risks.
* Rely solely on perimeter security measures.
* Store sensitive data in plain text.
* Use weak or default passwords.
### 6.3 Code Example: Static Code Analysis with SonarCloud
1. **Setup SonarCloud Integration**: Connect your Azure DevOps project to SonarCloud.
2. **Add SonarCloud Task**: Include the SonarCloud Analyze task in your Azure DevOps pipeline.
3. **Configure Quality Gate**: Define quality criteria such as vulnerability rates, bug counts and coverage %.
"""yaml
# Add SonarCloud prepare analysis configuration task
- task: SonarCloudPrepare@1
inputs:
SonarCloud: 'YourSonarCloudServiceConnection'
organization: 'your-sonarcloud-organization'
scannerMode: 'MSBuild'
projectKey: 'your-project-key'
projectName: 'Your Project Name'
# Add MSBuild task to build the project
- task: MSBuild@1
inputs:
solution: '**\*.sln'
msbuildArguments: '/t:Rebuild'
# Add SonarCloud analysis task
- task: SonarCloudAnalyze@1
# Add SonarCloud publish quality gate result task
- task: SonarCloudPublish@1
inputs:
pollingTimeoutSec: '300'
"""
### 6.4 Common Anti-Patterns
- **Lack of Regular Assessments**: Infrequent security testing and assessments can leave systems vulnerable for long periods.
- **Ignoring Third-Party Components**: Failing to assess the security of libraries, dependencies, and other external components.
- **Poor Secrets Management**: Embedding sensitive keys, tokens, and passwords directly into code or configuration files (see the Key Vault sketch below).
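For the secrets-management pitfall, a common remedy on Azure is to resolve secrets at runtime from Azure Key Vault using a managed identity, so nothing sensitive is embedded in code or config. A minimal sketch (the vault URL and secret name are placeholders):
"""csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using System;
using System.Threading.Tasks;
public static class SecretLoader
{
    public static async Task<string> GetConnectionStringAsync()
    {
        // DefaultAzureCredential works locally (developer sign-in) and in Azure
        // (managed identity) without putting any credentials in code or config.
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"), // placeholder vault URL
            new DefaultAzureCredential());
        KeyVaultSecret secret = await client.GetSecretAsync("ServiceBusConnectionString");
        return secret.Value;
    }
}
"""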
By adhering to these testing methodology standards, Azure developers can ensure that their applications are reliable, secure, and performant. This document provides a foundation for building high-quality Azure applications.
danielsogl
Created Mar 6, 2025
This guide explains how to effectively use .clinerules
with Cline, the AI-powered coding assistant.
The .clinerules
file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.
Place the .clinerules
file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
# Project Overview project: name: 'Your Project Name' description: 'Brief project description' stack: - technology: 'Framework/Language' version: 'X.Y.Z' - technology: 'Database' version: 'X.Y.Z'
# Code Standards standards: style: - 'Use consistent indentation (2 spaces)' - 'Follow language-specific naming conventions' documentation: - 'Include JSDoc comments for all functions' - 'Maintain up-to-date README files' testing: - 'Write unit tests for all new features' - 'Maintain minimum 80% code coverage'
# Security Guidelines security: authentication: - 'Implement proper token validation' - 'Use environment variables for secrets' dataProtection: - 'Sanitize all user inputs' - 'Implement proper error handling'
Be Specific
Maintain Organization
Regular Updates
# Common Patterns Example patterns: components: - pattern: 'Use functional components by default' - pattern: 'Implement error boundaries for component trees' stateManagement: - pattern: 'Use React Query for server state' - pattern: 'Implement proper loading states'
Commit the Rules
.clinerules
in version controlTeam Collaboration
Rules Not Being Applied
Conflicting Rules
Performance Considerations
# Basic .clinerules Example project: name: 'Web Application' type: 'Next.js Frontend' standards: - 'Use TypeScript for all new code' - 'Follow React best practices' - 'Implement proper error handling' testing: unit: - 'Jest for unit tests' - 'React Testing Library for components' e2e: - 'Cypress for end-to-end testing' documentation: required: - 'README.md in each major directory' - 'JSDoc comments for public APIs' - 'Changelog updates for all changes'
# Advanced .clinerules Example project: name: 'Enterprise Application' compliance: - 'GDPR requirements' - 'WCAG 2.1 AA accessibility' architecture: patterns: - 'Clean Architecture principles' - 'Domain-Driven Design concepts' security: requirements: - 'OAuth 2.0 authentication' - 'Rate limiting on all APIs' - 'Input validation with Zod'
# Performance Optimization Standards for Azure This document outlines the performance optimization standards for Azure development. It serves as a guide for developers to write efficient, responsive, and resource-optimized applications within the Azure ecosystem. It is designed to be used by developers and as context for AI coding assistants. ## 1. Architectural Considerations ### 1.1 Choosing the Right Azure Services **Do This:** Carefully select Azure services based on performance requirements, scalability needs, and cost considerations. **Don't Do This:** Default to familiar services without evaluating if they are optimal for the workload. **Why:** Selecting the right service upfront can drastically reduce development effort and resource consumption in the long run. **Explanation:** Azure offers a broad range of services, each optimized for particular workloads. Choosing the correct service aligns with the workload's characteristics, leading to better performance and lower TCO. For example, using Azure Cosmos DB for high-throughput, low-latency globally distributed data is better than using Azure SQL Database when those characteristics are core requirements. **Code Example (Service Selection):** """ # Consider Azure Functions for serverless, event-driven scenarios. # Consider Azure Container Apps for microservices and scalable applications. # Consider Azure Kubernetes Service (AKS) for complex container orchestration needs. # Consider Azure App Service for web applications and APIs with simpler deployment requirements. """ ### 1.2 Region Selection **Do This:** Deploy Azure resources to the region closest to your users. **Don't Do This:** Assume all regions provide equal performance or latency. **Why:** Minimizing network latency improves application responsiveness. **Explanation:** The physical distance between your application and its users directly impacts latency. Azure regions offer varying levels of network connectivity. Choosing the closest region reduces round-trip times for data transfer. **Code Example (ARM Template Snippet for Region):** """json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "location": { "type": "string", "defaultValue": "eastus", // Default Region (change according to user base) "metadata": { "description": "The location for all resources." } } }, "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2022-09-01", "name": "myWebApp", "location": "[parameters('location')]", "properties": { // Web app settings } } ] } """ ### 1.3 Implementing Caching Strategies **Do This:** Implement caching at multiple layers (client, CDN, application, database). **Don't Do This:** Over-cache and risk serving stale data or under-cache and impact performance. **Why:** Caching reduces the load on backend services and improves response times. **Explanation:** Caching stores frequently accessed data closer to the user or application, reducing the need to repeatedly fetch it from the original source. Effective caching strategies involve selecting appropriate cache expiration policies, cache invalidation mechanisms, and cache tiers. 
**Code Example (Azure Cache for Redis - .NET):** """csharp using StackExchange.Redis; public class RedisCacheService { private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => { string cacheConnection = ConfigurationManager.AppSettings["RedisCacheConnection"].ToString(); return ConnectionMultiplexer.Connect(cacheConnection); }); public static ConnectionMultiplexer Connection => lazyConnection.Value; public string GetData(string key) { IDatabase cache = Connection.GetDatabase(); return cache.StringGet(key); } public void SetData(string key, string value, TimeSpan expiry) { IDatabase cache = Connection.GetDatabase(); cache.StringSet(key, value, expiry); } } // Usage: RedisCacheService cache = new RedisCacheService(); string cachedValue = cache.GetData("myKey"); if (string.IsNullOrEmpty(cachedValue)) { // Fetch data from source string dataFromSource = GetDataFromSource(); cache.SetData("myKey", dataFromSource, TimeSpan.FromMinutes(30)); cachedValue = dataFromSource; } //use cachedValue """ ### 1.4 Asynchronous Operations **Do This:** Use asynchronous operations to avoid blocking the main thread for long-running tasks. **Don't Do This:** Perform synchronous I/O operations on the main thread, especially in UI-intensive applications or API endpoints. **Why:** Asynchronous operations improve the responsiveness and scalability of applications. **Explanation:** Asynchronous programming allows the application to continue processing other tasks while waiting for the completion of a long-running operation (e.g., network request, database query). This prevents the application from becoming unresponsive. **Code Example (Asynchronous Web API Controller):** """csharp using Microsoft.AspNetCore.Mvc; using Microsoft.EntityFrameworkCore; [ApiController] [Route("[controller]")] public class ProductsController : ControllerBase { private readonly AppDbContext _context; public ProductsController(AppDbContext context) { _context = context; } [HttpGet] public async Task<ActionResult<IEnumerable<Product>>> GetProducts() { return await _context.Products.ToListAsync(); // Asynchronous database query } } """ ### 1.5 Autoscaling **Do This:** Configure autoscaling for compute resources (e.g., VMs, App Service plans, Azure Container Apps) to handle varying workloads. **Don't Do This:** Rely on fixed capacity, which can lead to resource bottlenecks or underutilization. **Why:** Autoscaling dynamically adjusts resources based on demand, ensuring optimal performance and cost-effectiveness. **Explanation:** Autoscaling automatically increases or decreases the number of compute instances based on predefined metrics (e.g., CPU utilization, memory consumption, request queue length). This ensures that the application can handle sudden spikes in traffic without performance degradation. 
**Code Example (ARM Template for App Service Autoscaling):** """json { "type": "Microsoft.Insights/autoscalesettings", "apiVersion": "2015-04-01", "name": "myAutoscaleSettings", "location": "[resourceGroup().location]", "properties": { "name": "myAutoscaleSettings", "targetResourceUri": "[resourceId('Microsoft.Web/sites', 'myWebApp')]", "profiles": [ { "name": "AutoScaleProfile", "capacity": { "minimum": "1", "maximum": "10", "default": "1" }, "rules": [ { "metricTrigger": { "metricName": "CpuPercentage", "metricResourceUri": "[resourceId('Microsoft.Web/sites', 'myWebApp')]", "timeGrain": "PT1M", "statistic": "Average", "timeWindow": "PT5M", "timeAggregation": "Average", "operator": "GreaterThan", "threshold": 70 }, "operation": { "operationType": "ChangeCount", "parameters": { "value": "1", "cooldown": "PT5M" } } } ] } ] } } """ ## 2. Database Optimization ### 2.1 Indexing Strategies **Do This:** Create appropriate indexes to speed up query execution. **Don't Do This:** Over-index, which can slow down write operations and increase storage costs. **Why:** Indexes allow the database to quickly locate data without scanning the entire table. **Explanation:** Indexes are data structures that improve the speed of data retrieval operations on a database table. However, excessive indexing can negatively impact write performance and increase storage requirements. It's crucial to analyze query patterns and create indexes selectively on frequently queried columns. **Code Example (SQL Index Creation):** """sql -- Create a non-clustered index on the 'LastName' column of the 'Customers' table CREATE NONCLUSTERED INDEX IX_Customers_LastName ON Customers (LastName); """ ### 2.2 Query Optimization **Do This:** Write efficient queries that minimize resource consumption. **Don't Do This:** Use wildcard characters at the beginning of search strings ("%string"), causing full table scans. Select all columns ("SELECT *") unnecessarily. **Why:** Efficient queries reduce database load and improve application performance. **Explanation:** Poorly written queries can lead to performance bottlenecks and excessive resource consumption. To optimize queries, avoid using wildcard characters at the beginning of search strings, select only the necessary columns, use appropriate JOIN clauses, and leverage parameterized queries. **Code Example (Optimized SQL Query):** """sql -- Instead of: SELECT * FROM Orders WHERE CustomerID LIKE '%123%'; -- Use: SELECT OrderID, OrderDate, ShippingAddress FROM Orders WHERE CustomerID = @CustomerID; --Parameterized query """ ### 2.3 Connection Pooling **Do This:** Use connection pooling to reuse database connections and reduce overhead. **Don't Do This:** Open and close database connections frequently, which can be resource-intensive. **Why:** Connection pooling improves database performance by reducing the overhead of establishing new connections. **Explanation:** Connection pooling maintains a pool of active database connections that can be reused by the application. This avoids the overhead of repeatedly creating and destroying connections, which can be a significant performance bottleneck. Most database drivers and frameworks provide built-in support for connection pooling. **Code Example (.NET Core Connection Pooling with Entity Framework Core):** """csharp services.AddDbContext<AppDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); """ EF Core automatically manages connection pooling. 
Configure connection string effectively ("Min Pool Size", "Max Pool Size"). ### 2.4 Database Sharding & Partitioning **Do This:** Consider database sharding or partitioning for very large datasets or high-throughput requirements. **Don't Do This:** Apply sharding prematurely without analyzing data volume and access patterns. **Why:** Distributing data across multiple databases or partitions improves scalability and performance. **Explanation:** Database sharding involves splitting a large database into smaller, independent databases (shards) and distributing them across multiple servers. Partitioning involves dividing a table into multiple smaller tables (partitions) within the same database. Both techniques can improve query performance and scalability by reducing the amount of data that needs to be processed. **Note:** Cosmos DB offers built-in sharding/partitioning capabilities. Choose a good partition key. ## 3. Code-Level Optimizations ### 3.1 Efficient Data Structures and Algorithms **Do This:** Choose appropriate data structures and algorithms for specific tasks. **Don't Do This:** Use inefficient data structures that lead to quadratic or exponential time complexity. **Why:** Efficient algorithms and data structures minimize resource consumption and improve performance. **Explanation:** The choice of data structures and algorithms can have a significant impact on the performance of an application. For example, using a hash table for lookups results in near constant time complexity (O(1)), while searching an unsorted array can take linear time (O(n)). **Code Example (Efficient Lookup with Dictionary):** """csharp // Instead of: List<string> names = new List<string> { "Alice", "Bob", "Charlie" }; bool found = false; foreach (string name in names) { if (name == "Bob") { found = true; break; } } // Use: Dictionary<string, bool> nameLookup = new Dictionary<string, bool> { { "Alice", true }, { "Bob", true }, { "Charlie", true } }; bool found = nameLookup.ContainsKey("Bob"); // O(1) lookup """ ### 3.2 Minimizing Object Allocation **Do This:** Minimize object allocation and garbage collection overhead. Use object pooling or caching to reuse objects. **Don't Do This:** Create excessive temporary objects, especially in performance-critical sections of code. **Why:** Frequent object allocation and garbage collection can lead to performance bottlenecks. **Explanation:** Object allocation and garbage collection are resource-intensive operations. Reducing the number of objects created and collected can improve application performance. Object pooling involves maintaining a pool of pre-allocated objects that can be reused, while caching stores frequently used objects in memory. Using "struct" instead of "class" when appropriate can also reduce memory allocation (value type vs. reference type). **Code Example (Object Pooling):** """csharp using System.Collections.Concurrent; public class StringBuilderPool { private static ConcurrentBag<StringBuilder> _objectPool = new ConcurrentBag<StringBuilder>(); public static StringBuilder Get() { if (_objectPool.TryTake(out var item)) { return item; } else { return new StringBuilder(); } } public static void Return(StringBuilder obj) { obj.Clear(); _objectPool.Add(obj); } } // Usage: StringBuilder sb = StringBuilderPool.Get(); sb.Append("Hello, world!"); string result = sb.ToString(); StringBuilderPool.Return(sb); """ ### 3.3 String Manipulation **Do This:** Use "StringBuilder" for efficient string concatenation. 
**Don't Do This:** Use the "+" operator repeatedly for string concatenation, which creates multiple temporary string objects. **Why:** "StringBuilder" avoids the overhead of creating new string objects for each concatenation. **Explanation:** Strings are immutable in .NET, meaning that each string concatenation operation creates a new string object. Using the "+" operator repeatedly for string concatenation can lead to excessive object allocation and garbage collection. The "StringBuilder" class provides an efficient way to build strings by concatenating multiple strings into a mutable buffer. **Code Example (Efficient String Concatenation):** """csharp // Instead of: string result = ""; for (int i = 0; i < 1000; i++) { result += i.ToString(); } // Use: StringBuilder sb = new StringBuilder(); for (int i = 0; i < 1000; i++) { sb.Append(i.ToString()); } string result = sb.ToString(); """ ### 3.4 Avoid Boxing/Unboxing **Do This:** When working with generics or collections, ensure you're not unintentionally boxing value types (like "int", "bool", "structs"). **Don't Do This:** Add value types to non-generic collections like "ArrayList" which store objects and thus require boxing. **Why:** Boxing and unboxing are performance-intensive operations that involve converting value types to reference types and vice versa. **Explanation:** Boxing is the process of converting a value type (e.g., "int", "bool", "struct") to a corresponding object reference. Unboxing is the reverse process. When value types are added to non-generic collections (e.g., "ArrayList"), they are implicitly boxed, leading to performance overhead. Using generic collections (e.g., "List<int>", "Dictionary<string, MyStruct>") avoids boxing and unboxing. **Code Example (Avoid Boxing):** """csharp // Instead of: ArrayList numbers = new ArrayList(); for (int i = 0; i < 1000; i++) { numbers.Add(i); // Boxing occurs here } // Use: List<int> numbers = new List<int>(); for (int i = 0; i < 1000; i++) { numbers.Add(i); } """ ## 4. Network Optimization ### 4.1 Minimize Data Transfer **Do This:** Only transfer the necessary data over the network. Use data compression and pagination to reduce the amount of data transferred. **Don't Do This:** Fetch large amounts of data from the server and then filter it on the client. **Why:** Reducing data transfer improves network bandwidth utilization and reduces latency. **Explanation:** Transferring large amounts of data over the network can be a significant performance bottleneck. To minimize data transfer: compress data before sending it over the network (e.g., using GZIP compression), paginate large datasets to retrieve only the data needed for the current view, and avoid fetching unnecessary data from the server. Apply filters on the server-side if possible. **Code Example (GZIP Compression in ASP.NET Core):** """csharp using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.ResponseCompression; using Microsoft.Extensions.DependencyInjection; public class Startup { public void ConfigureServices(IServiceCollection services) { services.AddResponseCompression(options => { options.EnableForHttps = true; options.Providers.Add<GzipCompressionProvider>(); }); } public void Configure(IApplicationBuilder app) { app.UseResponseCompression(); } } """ ### 4.2 Connection Multiplexing **Do This:** Use HTTP/2 or HTTP/3 to enable connection multiplexing. **Don't Do This:** Rely on HTTP/1.1, which can lead to connection overhead due to head-of-line blocking. 
**Why:** Connection multiplexing allows multiple requests to be sent over a single TCP connection, reducing connection overhead. **Explanation:** HTTP/2 and HTTP/3 support connection multiplexing, which allows multiple requests to be sent over a single TCP connection. This eliminates the need to establish a new connection for each request, reducing connection overhead and improving performance. Most modern web servers and browsers support HTTP/2 and HTTP/3. Enable it in your Azure App Service configuration. ### 4.3 Content Delivery Network (CDN) **Do This:** Serve static content (e.g., images, CSS, JavaScript) from a CDN. **Don't Do This:** Serve static content directly from your application server, which can increase load and latency. **Why:** CDNs distribute content across multiple servers closer to users, reducing latency and improving performance. **Explanation:** A CDN is a distributed network of servers that delivers content to users based on their geographic location. By serving static content from a CDN, you can reduce the load on your application server and improve response times for users who are geographically dispersed. Azure CDN is a popular choice. **Code Example (Azure CDN Configuration - ARM Template - Example with Storage Account):** """json { "type": "Microsoft.Cdn/profiles", "apiVersion": "2021-06-01", "name": "myCdnProfile", "location": "[resourceGroup().location]", "sku": { "name": "Standard_Microsoft", "tier": "Standard" }, "properties": { "originHostHeader": "mystorageaccount.blob.core.windows.net" }, "resources": [ { "type": "endpoints", "apiVersion": "2021-06-01", "name": "myCdnEndpoint", "dependsOn": [ "[resourceId('Microsoft.Cdn/profiles', 'myCdnProfile')]" ], "location": "[resourceGroup().location]", "properties": { "originHostName": "mystorageaccount.blob.core.windows.net", "origins": [ { "name": "myOrigin", "properties": { "hostName": "mystorageaccount.blob.core.windows.net", "httpPort": 80, "httpsPort": 443 } } ] } } ] } """ ## 5. Monitoring and Profiling ### 5.1 Application Insights **Do This:** Use Azure Application Insights to monitor application performance, detect anomalies, and diagnose issues. **Don't Do This:** Deploy applications without proper monitoring and logging, as it hinders troubleshooting and optimization efforts. **Why:** Application Insights provides valuable insights into application behavior and performance. **Explanation:** Application Insights is a powerful monitoring and analytics service that provides insights into application performance, availability, and usage. It can be used to detect anomalies, diagnose issues, and identify areas for optimization. **Code Example (Adding Application Insights to .NET Core App):** """csharp // In Startup.cs: public void ConfigureServices(IServiceCollection services) { services.AddApplicationInsightsTelemetry(); } // To add custom telemetry: using Microsoft.ApplicationInsights; public class MyService { private readonly TelemetryClient _telemetryClient; public MyService(TelemetryClient telemetryClient) { _telemetryClient = telemetryClient; } public void DoSomething() { _telemetryClient.TrackEvent("MyCustomEvent"); _telemetryClient.TrackMetric("MyCustomMetric", 42); } } """ ### 5.2 Profiling Tools **Do This:** Use profiling tools to identify performance bottlenecks in your code. **Don't Do This:** Guess at performance issues; use data-driven analysis to identify the root cause. **Why:** Profiling tools provide detailed information about CPU usage, memory allocation, and other performance metrics. 
**Explanation:** Profiling tools analyze the execution of your code to identify performance bottlenecks. They provide detailed information about CPU usage, memory allocation, and other performance metrics, allowing you to pinpoint the areas of your code that are consuming the most resources. Visual Studio Profiler and PerfView are commonly used profiling tools. ### 5.3 Azure Monitor **Do This:** Utilize Azure Monitor to monitor the health and performance of Azure resources (VMs, databases, storage accounts). Create alerts for critical metrics. **Don't Do This:** Ignore resource-level metrics, which can provide valuable insights into potential issues. **Why:** Azure Monitor provides a comprehensive view of the performance and health of your Azure resources. **Explanation:** Azure Monitor provides a centralized platform for collecting and analyzing telemetry data from your Azure resources. It can be used to monitor the health and performance of VMs, databases, storage accounts, and other Azure services. Create alerts to be notified of critical issues, like high CPU or low available memory. This document provides a comprehensive overview of performance optimization standards for Azure. By following these guidelines, developers can build efficient, responsive, and scalable applications within the Azure ecosystem. Remember to continually monitor and optimize your applications based on real-world usage patterns and performance metrics gathered using Azure Monitor and Application Insights.
# API Integration Standards for Azure This document outlines coding standards and best practices for integrating APIs within the Azure ecosystem. It aims to provide clear guidance for developers to build maintainable, performant, and secure API integrations, leveraging modern Azure features and patterns. ## 1. Architecture and Design Principles ### 1.1. Standard: API Gateway Pattern **Do This:** Use Azure API Management (APIM) as the API gateway for all external and internal APIs. **Don't Do This:** Expose backend services directly to clients without an API gateway. **Why:** API Management provides a central point for managing, securing, and observing APIs. It offers features like rate limiting, authentication, transformation, and analytics. **Azure Specifics:** * Leverage APIM's built-in policies for common tasks like authentication, authorization, and request transformation. * Integrate APIM with Azure Active Directory (AAD) for identity management. * Use APIM's developer portal for API discovery and documentation. **Code Example (ARM Template for APIM):** """json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "apimServiceName": { "type": "string", "metadata": { "description": "The name of the API Management service." } }, "skuName": { "type": "string", "defaultValue": "Developer", "allowedValues": [ "Developer", "Basic", "Standard", "Premium" ], "metadata": { "description": "The pricing tier for the API Management service." } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "The location for all resources." } } }, "resources": [ { "type": "Microsoft.ApiManagement/service", "apiVersion": "2023-05-01-preview", "name": "[parameters('apimServiceName')]", "location": "[parameters('location')]", "sku": { "name": "[parameters('skuName')]", "capacity": 1 }, "properties": { "publisherEmail": "your-email@example.com", "publisherName": "Your Organization", "notificationSenderEmail": "apimgmt-noreply@mail.windowsazure.com", "hostnameConfigurations": [], "publicIPAddressId": null, "virtualNetworkType": "None", "apiVersionConstraint": { "minApiVersion": null } } } ] } """ ### 1.2. Standard: Microservices Architecture **Do This:** Design API integrations around a microservices architecture, promoting loose coupling and independent deployment of backend services. **Don't Do This:** Build monolithic applications with tightly coupled components. **Why:** Microservices enable scalability, resilience, and faster development cycles. **Azure Specifics:** * Use Azure Kubernetes Service (AKS) for orchestrating microservices. * Leverage Azure Functions for event-driven, serverless backend logic. * Employ Azure Service Bus or Event Grid for asynchronous communication between services. **Code Example (Azure Function with Service Bus Trigger):** """csharp using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Host; using Microsoft.Extensions.Logging; namespace MyFunctionApp { public static class ServiceBusTriggerFunction { [FunctionName("ServiceBusTriggerFunction")] public static void Run( [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString")] string myQueueItem, ILogger log) { log.LogInformation($"C# ServiceBus queue trigger function processed: {myQueueItem}"); //Your processing logic here } } } """ ### 1.3. 
Standard: Asynchronous Communication **Do This:** Use asynchronous communication patterns, such as queues and events, for non-critical operations to improve scalability and resilience. **Don't Do This:** Rely solely on synchronous, request-response communication for all API interactions. **Why:** Asynchronous communication decouples services, allowing them to operate independently and handle varying workloads. **Azure Specifics:** * Use Azure Service Bus for reliable message queuing with features like transactions and dead-letter queues. * Use Azure Event Grid for event-driven architectures, enabling loose coupling and real-time event processing. * Consider Azure Queue Storage for simpler queueing scenarios. **Code Example (Sending a Message to Azure Service Bus Queue):** """csharp using Azure.Messaging.ServiceBus; using System; using System.Threading.Tasks; namespace ServiceBusExample { class Program { static string connectionString = "YOUR_SERVICE_BUS_CONNECTION_STRING"; static string queueName = "myqueue"; static async Task Main(string[] args) { // Create a Service Bus client ServiceBusClient client = new ServiceBusClient(connectionString); // Create a sender for the queue ServiceBusSender sender = client.CreateSender(queueName); // Create a message ServiceBusMessage message = new ServiceBusMessage("Hello, Service Bus!"); // Send the message await sender.SendMessageAsync(message); Console.WriteLine("Message sent to Service Bus queue."); } } } """ ## 2. Implementation Details ### 2.1. Standard: API Versioning **Do This:** Implement API versioning to maintain backward compatibility and allow for evolving API features. **Don't Do This:** Introduce breaking changes without versioning. **Why:** Versioning allows clients to migrate to new API versions at their own pace, minimizing disruption. **Azure Specifics:** * Use APIM's versioning features to manage multiple API versions. * Implement versioning in the API endpoint URL or through custom headers. **Code Example (APIM Versioning):** 1. **Define API Versions in APIM:** In the Azure portal, navigate to your APIM instance, select "APIs," and choose your API. Under "Settings," configure "Versions" and create different versions (e.g., "v1","v2"). 2. **Route Requests to Backends**: Use policies to route requests based on the version specified in the URL or header to different backend services. For example: """xml <choose> <when condition="@(context.Request.Url.Path.StartsWithSegments("/v1"))"> <set-backend-service base-url="https://backend-service-v1.azurewebsites.net" /> </when> <when condition="@(context.Request.Url.Path.StartsWithSegments("/v2"))"> <set-backend-service base-url="https://backend-service-v2.azurewebsites.net" /> </when> <otherwise> <return-response> <set-status code="400" reason="Bad Request" /> <set-body>Invalid API Version</set-body> </return-response> </otherwise> </choose> """ ### 2.2. Standard: Error Handling **Do This:** Implement robust error handling with meaningful error codes and messages. **Don't Do This:** Return generic error messages or expose sensitive information in error responses. **Why:** Proper error handling improves the developer experience and helps troubleshoot issues. **Azure Specifics:** * Use structured logging with Azure Monitor to capture detailed error information. Include correlation IDs to track errors across services. * Implement retry policies for transient errors. 
**Code Example (Error Handling in ASP.NET Core API):** """csharp using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Logging; namespace ErrorHandlingExample.Controllers { [ApiController] [Route("[controller]")] public class ExampleController : ControllerBase { private readonly ILogger<ExampleController> _logger; public ExampleController(ILogger<ExampleController> logger) { _logger = logger; } [HttpGet("error")] public IActionResult GetError() { try { throw new System.Exception("Simulated error."); } catch (System.Exception ex) { _logger.LogError(ex, "An error occurred."); return StatusCode(500, new { message = "An unexpected error occurred. Please contact support.", correlationId = Request.HttpContext.TraceIdentifier }); //Include correlation ID } } } } """ ### 2.3. Standard: Data Validation **Do This:** Implement input validation on all APIs to prevent invalid data from reaching backend services. **Don't Do This:** Trust client-provided data without validation. **Why:** Validation improves security and data integrity. **Azure Specifics:** * Use APIM's validation policies to enforce data validation at the API gateway level. * Implement data validation in backend services using appropriate validation libraries. **Code Example (Data Validation in ASP.NET Core API using DataAnnotations):** """csharp using System.ComponentModel.DataAnnotations; using Microsoft.AspNetCore.Mvc; namespace DataValidationExample.Models { public class User { [Required(ErrorMessage = "Name is required")] [StringLength(50, ErrorMessage = "Name cannot be longer than 50 characters")] public string Name { get; set; } [EmailAddress(ErrorMessage = "Invalid email address")] public string Email { get; set; } [Range(18, 120, ErrorMessage = "Age must be between 18 and 120")] public int Age { get; set; } } } namespace DataValidationExample.Controllers { [ApiController] [Route("[controller]")] public class UserController : ControllerBase { [HttpPost] public IActionResult CreateUser([FromBody] User user) { if (!ModelState.IsValid) { return BadRequest(ModelState); } // Process the user data return Ok(user); } } } """ ### 2.4 Standard: Authentication and Authorization **Do This:** Implement robust authentication and authorization mechanisms to protect APIs. **Don't Do This:** Use weak or insecure authentication methods. **Why:** Ensures only authorized users and applications can access sensitive data and functionality. **Azure Specifics:** * Use Azure Active Directory (AAD) for identity management. * Implement OAuth 2.0 or OpenID Connect for authentication. * Use Role-Based Access Control (RBAC) to authorize access to APIs. * Leverage APIM for securing APIs with AAD integration. * Use Managed Identities for Azure resources, which automatically manage credentials for accessing other Azure services. 
**Code Example (Implementing AAD Authentication in ASP.NET Core API):**

"""csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web.Resource;

namespace AADAuthExample.Controllers
{
    [Authorize]
    [ApiController]
    [Route("[controller]")]
    [RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")] // Define required scopes
    public class SecureController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            return Ok("This is a secure API endpoint.");
        }
    }
}
"""

Add the following to "appsettings.json":

"""json
{
    "AzureAd": {
        "Instance": "https://login.microsoftonline.com/",
        "Domain": "yourtenant.onmicrosoft.com",
        "TenantId": "your-tenant-id",
        "ClientId": "your-client-id",
        "Scopes": "api://your-client-id/access_as_user"
    }
}
"""

Register your API in Azure AD and configure the necessary scopes.

### 2.5. Standard: API Documentation

**Do This:** Provide comprehensive and up-to-date API documentation.

**Don't Do This:** Neglect API documentation, leaving developers to reverse-engineer APIs.

**Why:** Good documentation improves developer productivity and reduces integration time.

**Azure Specifics:**

* Use APIM's developer portal to automatically generate and host API documentation based on OpenAPI specifications (Swagger).
* Generate OpenAPI specifications from code using tools like Swashbuckle.

**Code Example (Generating OpenAPI Specification with Swashbuckle in ASP.NET Core):**

1. **Install the Swashbuckle.AspNetCore NuGet package:**

"""bash
dotnet add package Swashbuckle.AspNetCore --version 6.5.0
"""

2. **Configure Swagger in "Startup.cs" or "Program.cs":**

"""csharp
// In Program.cs (for .NET 6+):
using Microsoft.OpenApi.Models;

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
});

// ...

app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
});
"""

### 2.6. Standard: Logging and Monitoring

**Do This:** Implement comprehensive logging and monitoring to track API usage, performance, and errors.

**Don't Do This:** Ignore logging and monitoring, making it difficult to troubleshoot issues and optimize performance.

**Why:** Provides insights into API behavior and helps identify and resolve problems quickly.

**Azure Specifics:**

* Use Azure Monitor to collect and analyze logs and metrics.
* Integrate APIM with Azure Monitor for API-specific metrics.
* Use Application Insights for deep application performance monitoring.
* Utilize Azure Log Analytics for querying and analyzing log data.

**Code Example (Logging in ASP.NET Core API):**

"""csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace LoggingExample.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class ExampleController : ControllerBase
    {
        private readonly ILogger<ExampleController> _logger;

        public ExampleController(ILogger<ExampleController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IActionResult Get()
        {
            _logger.LogInformation("GET request received at /example");
            return Ok("Hello, world!");
        }
    }
}
"""

## 3. Performance Optimization

### 3.1. Standard: Caching

**Do This:** Implement caching to reduce latency and improve performance.

**Don't Do This:** Over-cache data or cache sensitive information.

**Why:** Caching reduces the load on backend services and improves response times.

**Azure Specifics:**

* Use Azure Cache for Redis for caching frequently accessed data.
* Leverage APIM's caching policies for API responses.
* Implement client-side caching using HTTP caching headers (see the sketch below).
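For the client-side caching bullet, a minimal sketch using ASP.NET Core's "[ResponseCache]" attribute, which emits "Cache-Control" headers so clients and proxies can reuse responses; the controller and payload are illustrative:

"""csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class CatalogController : ControllerBase
{
    // Emits "Cache-Control: public, max-age=60" so clients and intermediaries
    // may serve this response from cache for up to 60 seconds.
    [HttpGet]
    [ResponseCache(Duration = 60, Location = ResponseCacheLocation.Any)]
    public IActionResult GetCatalog()
    {
        return Ok(new[] { "widget", "gadget" });
    }
}
"""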
**Code Example (Using Azure Cache for Redis in .NET):**

"""csharp
using StackExchange.Redis;
using System;
using System.Threading.Tasks;

namespace RedisCacheExample
{
    class Program
    {
        // Lazy initialization ensures a single shared connection for the process
        private static Lazy<ConnectionMultiplexer> redisConnection = new Lazy<ConnectionMultiplexer>(() =>
        {
            string connectionString = "YOUR_REDIS_CONNECTION_STRING";
            return ConnectionMultiplexer.Connect(connectionString);
        });

        public static ConnectionMultiplexer Connection
        {
            get { return redisConnection.Value; }
        }

        static async Task Main(string[] args)
        {
            IDatabase db = Connection.GetDatabase();

            // Set a value
            await db.StringSetAsync("mykey", "Hello, Redis!");

            // Get the value
            string value = await db.StringGetAsync("mykey");
            Console.WriteLine($"Value from Redis: {value}");
        }
    }
}
"""

### 3.2. Standard: Connection Pooling

**Do This:** Use connection pooling to reuse database connections and avoid the overhead of creating new connections for each request.

**Don't Do This:** Create a new database connection for each API request.

**Why:** Connection pooling improves performance and reduces resource consumption.

**Azure Specifics:**

* Azure services like App Service and Functions automatically implement connection pooling for supported databases (e.g., SQL Database). Configure connection string parameters appropriately for optimal pooling.
* Use appropriate connection string settings like "Min Pool Size" and "Max Pool Size" (e.g., "Min Pool Size=5;Max Pool Size=100") based on the expected load.

### 3.3. Standard: Gzip Compression

**Do This:** Enable Gzip compression to reduce the size of API responses.

**Don't Do This:** Transmit uncompressed data, especially for large responses.

**Why:** Compression reduces bandwidth usage and improves response times.

**Azure Specifics:**

* Enable Gzip compression in APIM policies.
* Configure compression in backend services like App Service.

**Code Example (Enabling Gzip Compression in ASP.NET Core):**

"""csharp
// In Program.cs
builder.Services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
});

// ...

app.UseResponseCompression();
"""

## 4. Security Considerations

### 4.1. Standard: Data Encryption

**Do This:** Encrypt sensitive data at rest and in transit.

**Don't Do This:** Store sensitive data in plain text or transmit it over unencrypted channels.

**Why:** Encryption protects data from unauthorized access.

**Azure Specifics:**

* Use Azure Key Vault to store and manage encryption keys.
* Enable encryption at rest for Azure Storage and Azure SQL Database.
* Use HTTPS for all API communications.
* Use Transport Layer Security (TLS) 1.2 or higher.

### 4.2. Standard: Input Sanitization

**Do This:** Sanitize all user inputs to prevent injection attacks.

**Don't Do This:** Trust user inputs without sanitization.

**Why:** Sanitization prevents malicious code from being injected into backend systems.

**Azure Specifics:**

* Use input validation and encoding techniques to sanitize data.
* Implement security policies in APIM to block malicious requests.

### 4.3. Standard: Rate Limiting and Throttling

**Do This:** Implement rate limiting and throttling to protect APIs from abuse and denial-of-service attacks.

**Don't Do This:** Allow unrestricted access to APIs without rate limiting.

**Why:** Rate limiting protects backend services from being overwhelmed.

**Azure Specifics:**

* Use APIM's rate limiting policies to control the number of requests allowed per user or IP address.
* Implement throttling in backend services to prevent resource exhaustion (a minimal sketch follows this list).
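For backend throttling, one option is the rate-limiting middleware built into ASP.NET Core (.NET 7+); the policy name and limits below are illustrative:

"""csharp
// Program.cs (.NET 7+) - built-in rate limiting middleware
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

// Allow at most 100 requests per 60-second window for endpoints using this policy.
builder.Services.AddRateLimiter(options =>
{
    options.AddFixedWindowLimiter("fixed", o =>
    {
        o.PermitLimit = 100;
        o.Window = TimeSpan.FromSeconds(60);
        o.QueueLimit = 0; // Reject rather than queue excess requests
    });
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
});

var app = builder.Build();
app.UseRateLimiter();

app.MapGet("/example", () => "Hello, world!").RequireRateLimiting("fixed");

app.Run();
"""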
**Example (APIM Rate Limiting Policy):**

"""xml
<rate-limit-by-key calls="100" renewal-period="60" counter-key="@(context.Subscription.Id)" />
"""

This "rate-limit-by-key" policy limits each subscription to 100 calls per 60 seconds.

By adhering to these coding standards, developers can build robust, scalable, and secure API integrations within the Azure ecosystem, leveraging the platform's powerful features and capabilities. These standards will help teams create APIs that are easier to maintain, debug, and evolve over time.
# Code Style and Conventions Standards for Azure

This document outlines the code style and conventions standards for Azure development. It is designed to promote consistency, readability, maintainability, and performance across all Azure projects. These standards should be followed by all developers contributing to Azure-based solutions and used as context by AI coding assistants.

## 1. General Principles

### 1.1. Consistency is Key

* **Do This:** Adhere to a consistent style throughout the codebase.
* **Don't Do This:** Mix different styles within the same project or module.

**Why:** Consistency reduces cognitive load, making code easier to read and understand, which improves maintainability and reduces errors.

### 1.2. Readability Matters

* **Do This:** Write code that is easy to understand, even for developers unfamiliar with the specific component.
* **Don't Do This:** Sacrifice readability for brevity.

**Why:** Readability improves collaboration, reduces debugging time, and facilitates knowledge transfer.

### 1.3. Maintainability is Paramount

* **Do This:** Design code that is easy to modify and extend without introducing bugs.
* **Don't Do This:** Write tightly coupled or overly complex code.

**Why:** Maintainability reduces long-term costs, improves agility, and allows for faster iteration.

### 1.4. Performance Considerations

* **Do This:** Write code that is optimized for performance, considering Azure-specific constraints and best practices.
* **Don't Do This:** Ignore performance implications during development.

**Why:** Performance impacts user experience, cost efficiency, and scalability.

### 1.5. Security First

* **Do This:** Design code with security in mind, following OWASP guidelines and Azure security recommendations.
* **Don't Do This:** Neglect security vulnerabilities during development.

**Why:** Security protects data integrity, prevents unauthorized access, and ensures compliance.

## 2. Naming Conventions

### 2.1. General Naming

* **Do This:** Use descriptive and meaningful names for variables, functions, classes, and modules.
* **Don't Do This:** Use abbreviations or single-letter names unless for very short-lived loop variables.

**Why:** Clear naming improves code comprehension and reduces ambiguity.

### 2.2. Language-Specific Conventions

* **C#:**
    * **Classes and Structs:** PascalCase (e.g., "UserService", "OrderProcessor")
    * **Interfaces:** "IPascalCase" (e.g., "IUserRepository", "IOrderService")
    * **Methods:** PascalCase (e.g., "GetUserById", "ProcessOrder")
    * **Variables (local):** camelCase (e.g., "userId", "orderTotal")
    * **Constants:** ALL_UPPER_SNAKE_CASE (e.g., "MAX_RETRIES", "DEFAULT_TIMEOUT")
    * **Private Fields:** "_camelCase" (e.g., "_userRepository", "_orderQueue")
* **JavaScript/TypeScript:**
    * **Classes:** PascalCase (e.g., "UserService", "OrderProcessor")
    * **Interfaces:** PascalCase (e.g., "UserRepository", "OrderService") - Note: the 'I' prefix is optional, but consistency is important.
    * **Functions/Methods:** camelCase (e.g., "getUserById", "processOrder")
    * **Variables:** camelCase (e.g., "userId", "orderTotal")
    * **Constants:** UPPER_SNAKE_CASE (e.g., "MAX_RETRIES", "DEFAULT_TIMEOUT")
* **Python:**
    * **Classes:** PascalCase (e.g., "UserService", "OrderProcessor")
    * **Functions/Methods:** snake_case (e.g., "get_user_by_id", "process_order")
    * **Variables:** snake_case (e.g., "user_id", "order_total")
    * **Constants:** UPPER_SNAKE_CASE (e.g., "MAX_RETRIES", "DEFAULT_TIMEOUT")

**Why:** Following language-specific conventions improves code familiarity and collaboration within language ecosystems.

### 2.3. Azure Resource Naming

* **Do This:** Use a standardized naming convention for Azure resources that includes environment, resource type, and purpose.
* **Don't Do This:** Use generic or ambiguous names that make it difficult to identify resources.

**Azure Resource Naming Examples:**

"""
<Environment>-<ResourceType>-<Application>-<InstanceNumber>

dev-web-myapp-001
prod-db-orders-001
"""

* "Environment": "dev", "test", "prod", "stage"
* "ResourceType": "web", "db", "func", "vm", "stor", "aks", "appi" (App Insights), "keyv" (Key Vault), "sql"
* "Application": Your application's name (e.g., "orders", "users", "reporting")
* "InstanceNumber": A sequential number for multiple instances (e.g., "001", "002")

**Why:** Consistent resource naming simplifies management, improves automation, and reduces the risk of misconfiguration. Tagging Azure resources is also crucial for categorizing and managing resources effectively.

## 3. Formatting and Style

### 3.1. Indentation

* **Do This:** Use consistent indentation (e.g., 4 spaces or 2 spaces) throughout the codebase, enforced by a linter/formatter.
* **Don't Do This:** Mix different indentation styles within the same file or project.

**Why:** Consistent indentation improves code readability and structure.

### 3.2. Line Length

* **Do This:** Limit line length to a reasonable number of characters (e.g., 120 characters) to improve readability.
* **Don't Do This:** Write excessively long lines that require horizontal scrolling.

**Why:** Limiting line length makes code easier to read on different screen sizes and improves code review efficiency.

### 3.3. Whitespace

* **Do This:** Use whitespace to improve code readability (e.g., spaces around operators, blank lines between logical blocks of code).
* **Don't Do This:** Write dense code with minimal whitespace.

**Why:** Whitespace enhances code clarity and structure.

### 3.4. Bracing

* **C#:** Use Allman style bracing (opening brace on its own line), the standard .NET convention:

"""csharp
if (condition)
{
    // Code block
}
else
{
    // Code block
}
"""

* **JavaScript/TypeScript:** Use K&R style bracing (opening brace on the same line):

"""typescript
if (condition) {
    // Code block
} else {
    // Code block
}
"""

* **Python:** Python uses indentation to define code blocks, so consistent indentation is crucial (typically 4 spaces per level):

"""python
if condition:
    # Code block
else:
    # Code block
"""

**Why:** Consistent bracing improves code readability and reduces ambiguity.

### 3.5. File Encoding

* **Do This:** Use UTF-8 encoding for all source files.
* **Don't Do This:** Use different or inconsistent file encodings.

**Why:** UTF-8 is the standard encoding for text files and supports a wide range of characters.

## 4. Commenting and Documentation

### 4.1. Commenting Conventions

* **Do This:** Write clear and concise comments to explain complex logic, algorithms, and design decisions.
* **Don't Do This:** Write comments that state the obvious or are outdated or misleading.
**Why:** Comments provide context and explain the "why" behind the code, improving understanding and maintainability.

### 4.2. Documentation Generation

* **C#:** Use XML documentation comments ("///") to generate API documentation.

"""csharp
/// <summary>
/// Gets a user by ID.
/// </summary>
/// <param name="id">The user ID.</param>
/// <returns>The user object, or null if not found.</returns>
public User GetUserById(int id)
{
    // Code implementation
    return null;
}
"""

* **JavaScript/TypeScript:** Use JSDoc style comments to generate API documentation.

"""typescript
/**
 * Gets a user by ID.
 * @param {number} id - The user ID.
 * @returns {Promise<User | null>} The user object, or null if not found.
 */
async getUserById(id: number): Promise<User | null> {
    // Code implementation
    return null;
}
"""

* **Python:** Use docstrings to document functions, classes, and modules.

"""python
def get_user_by_id(user_id: int) -> User:
    """
    Gets a user by ID.

    Args:
        user_id: The user ID.

    Returns:
        The user object, or None if not found.
    """
    # Code implementation
    return None
"""

**Why:** Documentation helps users understand and use APIs effectively, improving usability and reducing support costs.

### 4.3. Code Examples Within Documentation

* **Do This:** Include clear, concise code examples in documentation to illustrate how to use APIs and components (see: [https://learn.microsoft.com/en-us/style-guide/developer-content/code-examples](https://learn.microsoft.com/en-us/style-guide/developer-content/code-examples)).
* **Don't Do This:** Provide incomplete or unclear code examples.

**Why:** Code examples help users quickly understand how to use APIs and components.

## 5. Language-Specific Best Practices

### 5.1. C# and .NET

* **Asynchronous Programming:** Use "async" and "await" for I/O-bound operations to avoid blocking the UI thread or other critical threads.

"""csharp
public async Task<User> GetUserByIdAsync(int id)
{
    // Asynchronously fetch data from the database using Entity Framework Core
    var user = await _dbContext.Users.FindAsync(id);
    return user;
}
"""

* **Dependency Injection:** Use dependency injection to decouple components and improve testability; register services with the built-in ASP.NET Core container and receive them through constructors (a minimal sketch follows).
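A minimal sketch of constructor injection with the built-in ASP.NET Core container; the "IOrderService" abstraction and its registration are illustrative:

"""csharp
using Microsoft.AspNetCore.Mvc;

public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

public class OrderService : IOrderService
{
    public string GetOrderStatus(int orderId) => $"Order {orderId}: Shipped";
}

// Program.cs (.NET 6+): register the abstraction against its implementation
// builder.Services.AddScoped<IOrderService, OrderService>();

[ApiController]
[Route("[controller]")]
public class OrdersController : ControllerBase
{
    private readonly IOrderService _orderService;

    // The container supplies IOrderService; the controller never constructs the dependency
    public OrdersController(IOrderService orderService)
    {
        _orderService = orderService;
    }

    [HttpGet("{id}/status")]
    public IActionResult GetStatus(int id) => Ok(_orderService.GetOrderStatus(id));
}
"""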
# Core Architecture Standards for Azure

This document outlines the core architectural standards for developing applications on Microsoft Azure. These standards are designed to promote maintainability, scalability, security, and performance by guiding developers toward best practices and modern approaches. This document will also be used as a reference point for AI coding assistants to provide relevant and accurate suggestions.

## 1. Architectural Principles

These overarching principles should guide all architectural decisions on Azure.

* **Principle of Least Privilege:** Grant services and users only the permissions they require to function.
* **Defense in Depth:** Implement multiple layers of security controls to protect against various threats.
* **Scalability and Elasticity:** Design applications to scale automatically based on demand, leveraging Azure's elasticity.
* **Resiliency:** Implement fault tolerance and self-healing mechanisms to ensure continuous availability.
* **Observability:** Implement comprehensive logging, monitoring, and tracing to gain insights into application behavior and performance.
* **Cost Optimization:** Design applications to minimize resource consumption and take advantage of Azure's cost management features.
* **Infrastructure as Code (IaC):** Manage infrastructure using code, enabling automation, version control, and repeatability.

## 2. Fundamental Architectural Patterns

### 2.1 Microservices Architecture

**Do This:**

* Embrace microservices for complex applications that require independent scalability and deployment.
* Design microservices around business capabilities, not technical functions.
* Use lightweight communication protocols like REST or gRPC for inter-service communication.
* Implement API gateways for external access to microservices.
* Use Azure Kubernetes Service (AKS) for container orchestration.
* Implement service discovery using Azure DNS or a dedicated service registry.

**Don't Do This:**

* Create monolithic applications that are difficult to scale and maintain.
* Introduce tight coupling between microservices.
* Expose internal microservice endpoints directly to external users.
* Neglect monitoring and logging for each microservice.

**Why This Matters:** Microservices enable independent scaling, deployment, and fault isolation, leading to more resilient and maintainable applications. AKS simplifies container orchestration.

**Code Example (AKS Deployment):**

"""yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: my-microservice
        image: myregistry.azurecr.io/my-microservice:latest
        ports:
        - containerPort: 8080
"""

### 2.2 Event-Driven Architecture

**Do This:**

* Use event-driven architecture for asynchronous communication between services.
* Leverage Azure Event Hubs or Azure Service Bus for event ingestion and distribution.
* Implement idempotent event handlers to prevent duplicate processing.
* Design events to be immutable and contain all necessary context.
* Use Azure Functions or Logic Apps to process events.

**Don't Do This:**

* Rely on synchronous communication for long-running operations.
* Create complex event schemas without versioning.
* Neglect error handling and dead-letter queues for failed events.

**Why This Matters:** Asynchronous communication improves performance, scalability, and resilience.
Event Hubs and Service Bus provide reliable and scalable eventing platforms.

**Code Example (Azure Function Triggered by Event Hub):**

"""csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

public static class EventHubTriggerCSharp
{
    [FunctionName("EventHubTriggerCSharp")]
    public static void Run(
        [EventHubTrigger("myhub", Connection = "EventHubConnectionAppSetting")] string myEventHubMessage,
        ILogger log)
    {
        log.LogInformation($"C# Event Hub trigger function processed a message: {myEventHubMessage}");
    }
}
"""

### 2.3 Serverless Architecture

**Do This:**

* Utilize Azure Functions and Logic Apps for stateless, event-driven workloads.
* Design functions to be small and focused on a single task.
* Leverage Azure API Management for managing and securing serverless APIs.
* Use Azure Durable Functions for orchestrating complex workflows.
* Implement monitoring and logging using Azure Monitor.

**Don't Do This:**

* Develop long-running or stateful functions.
* Overuse serverless functions for tasks that are better suited for virtual machines or containers.
* Neglect security considerations when exposing serverless APIs.

**Why This Matters:** Serverless architectures reduce operational overhead, scale automatically, and offer a pay-per-use pricing model.

**Code Example (Azure Function with HTTP Trigger):**

"""csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class HttpExample
{
    [FunctionName("HttpExample")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}
"""

### 2.4 Data Lake Architecture

**Do This:**

* Use Azure Data Lake Storage Gen2 as the central repository for all data, structured and unstructured.
* Partition data logically based on business needs and query patterns (see the sketch below).
* Implement access control using Azure Active Directory and role-based access control (RBAC).
* Use Azure Data Factory for data ingestion and transformation.
* Leverage Azure Synapse Analytics for data warehousing and analytics.

**Don't Do This:**

* Create data silos that are difficult to access and integrate.
* Store sensitive data without proper encryption and access controls.
* Neglect data governance and metadata management.

**Why This Matters:** A data lake provides a centralized and scalable platform for storing and processing large volumes of data, enabling advanced analytics and machine learning.
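To make the partitioning bullet concrete, a minimal sketch that writes a file into a date-partitioned path with the "Azure.Storage.Files.DataLake" client; the account, filesystem, and path names are placeholders:

"""csharp
// Requires the Azure.Storage.Files.DataLake and Azure.Identity NuGet packages
using System;
using System.IO;
using Azure.Identity;
using Azure.Storage.Files.DataLake;

var serviceClient = new DataLakeServiceClient(
    new Uri("https://mydatalake.dfs.core.windows.net"),
    new DefaultAzureCredential());

DataLakeFileSystemClient fileSystem = serviceClient.GetFileSystemClient("raw");

// Partition by ingestion date so downstream queries can prune irrelevant folders
DataLakeFileClient file = fileSystem.GetFileClient("sales/year=2024/month=05/day=01/orders.json");

using FileStream stream = File.OpenRead("orders.json");
await file.UploadAsync(stream, overwrite: true);
"""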
**Code Example (Azure Data Factory Pipeline):**

"""json
{
    "name": "my_data_pipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyData",
                "type": "Copy",
                "inputs": [
                    {
                        "referenceName": "SourceDataset",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "DestinationDataset",
                        "type": "DatasetReference"
                    }
                ],
                "typeProperties": {
                    "translator": {
                        "type": "TabularTranslator",
                        "typeConversion": true,
                        "typeConversionSettings": {
                            "allowDataTruncation": true,
                            "treatAsEmptyString": ""
                        }
                    },
                    "enableStaging": false
                }
            }
        ]
    }
}
"""

The copy activity's "source" and "sink" settings are omitted for brevity; "SourceDataset" and "DestinationDataset" are defined as separate Data Factory dataset resources.

## 3. Project Structure and Organization

### 3.1 Logical Grouping by Functionality

**Do This:**

* Organize code into logical modules based on functionality or business domain.
* Use namespaces or folders to encapsulate related classes and functions.
* Follow a consistent naming convention for modules, classes, and functions.

**Don't Do This:**

* Create large, monolithic projects with tightly coupled code.
* Mix unrelated functionalities within the same module.
* Use inconsistent naming conventions.

**Why This Matters:** Logical grouping improves code readability, maintainability, and testability.

**Code Example (C# Project Structure):**

"""
MyProject/
├── MyProject.sln
├── MyProject.Core/
│   ├── Models/
│   │   └── Customer.cs
│   ├── Services/
│   │   └── CustomerService.cs
│   ├── Interfaces/
│   │   └── ICustomerService.cs
├── MyProject.API/
│   ├── Controllers/
│   │   └── CustomerController.cs
│   ├── Startup.cs
│   ├── appsettings.json
"""

### 3.2 Separation of Concerns (SoC)

**Do This:**

* Apply the principle of separation of concerns (SoC) by dividing the application into distinct layers, such as presentation, business logic, and data access.
* Use dependency injection to decouple components and improve testability.
* Define clear interfaces between layers to promote loose coupling.

**Don't Do This:**

* Mix presentation logic with business logic or data access code.
* Create tight dependencies between layers.
* Repeat code across different layers.

**Why This Matters:** SoC enhances maintainability, testability, and reusability by isolating different responsibilities within the application.

**Code Example (Dependency Injection in ASP.NET Core):**

"""csharp
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<ICustomerService, CustomerService>();
    services.AddControllers();
}

// Controller
[ApiController]
[Route("[controller]")]
public class CustomerController : ControllerBase
{
    private readonly ICustomerService _customerService;

    public CustomerController(ICustomerService customerService)
    {
        _customerService = customerService;
    }

    [HttpGet]
    public IActionResult GetCustomers()
    {
        var customers = _customerService.GetCustomers();
        return Ok(customers);
    }
}
"""

### 3.3 Infrastructure as Code (IaC) Organization

**Do This:**

* Use Azure Resource Manager (ARM) templates, Bicep, or Terraform to define and manage infrastructure as code.
* Organize IaC code into logical modules based on the resources being provisioned.
* Use parameterization to customize deployments for different environments.
* Implement version control for IaC code.

**Don't Do This:**

* Manually provision resources through the Azure portal.
* Store sensitive information (e.g., passwords, API keys) directly in IaC code.
* Neglect testing and validation of IaC code.

**Why This Matters:** IaC enables automation, repeatability, and version control for infrastructure deployments, reducing errors and improving consistency.

**Code Example (Bicep Template):**
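A minimal sketch of a parameterized Bicep template, assuming a storage account as the resource being provisioned; the names and API version are illustrative:

"""bicep
// Parameters allow the same template to be deployed to dev, test, and prod
@description('Deployment environment prefix, e.g. dev, test, prod')
param environment string = 'dev'

@description('Azure region for the resources')
param location string = resourceGroup().location

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: '${environment}stor${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output storageAccountName string = storageAccount.name
"""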
# State Management Standards for Azure

This document outlines the coding standards and best practices for state management in Azure applications. These standards are intended to promote maintainability, performance, scalability, and security. By following these guidelines, developers can build robust and efficient Azure solutions.

## 1. Introduction to State Management in Azure

State management is critical for building scalable and resilient applications in Azure. It involves handling data persistence, session state, user profiles, and application settings across multiple requests and instances. Choosing the right state management strategy is crucial for achieving optimal performance and reliability.

### 1.1 Key Concepts

* **Statelessness:** Applications function independently of prior interactions ("no memory"). Highly scalable, but requires passing all necessary context with each request.
* **Stateful Applications:** Maintain context between interactions. Can improve user experience but introduce complexity in scaling and fault tolerance.
* **Distributed Caching:** A shared cache accessible by multiple application instances. Essential for improving application performance and reducing database load.
* **Session State:** Data associated with a specific user's session, often stored in-memory or in a distributed cache.
* **Data Persistence:** Storing data permanently in a database or other storage system.

### 1.2 Azure Services for State Management

* **Azure Cache for Redis:** A fully managed, in-memory data cache service based on the popular open-source Redis.
* **Azure SQL Database:** A fully managed relational database service.
* **Azure Cosmos DB:** A globally distributed, multi-model database service.
* **Azure Blob Storage:** Object storage for unstructured data, like images, videos, and documents.
* **Azure Table Storage:** A NoSQL key-attribute data store for rapid development and fast access to data. (Consider the Cosmos DB Table API for new projects.)
* **Azure App Configuration:** A centralized service for managing application settings.

## 2. General Principles for Azure State Management

These principles apply to all aspects of state management, regardless of the specific Azure service used.

### 2.1 Prefer Stateless Architectures

* **Do This:** Design applications to be as stateless as possible, relying on external storage for state.
* **Don't Do This:** Store session state or application data directly on individual VM instances.

**Why:** Statelessness promotes scalability and resilience. Each application instance can handle any request without relying on local data.
**Example:** Instead of storing session data in-memory on the web server:

"""csharp
// Anti-pattern: Storing session in-memory
HttpContext.Session.SetString("UserId", userId);
"""

Use Azure Cache for Redis:

"""csharp
// Correct: Storing session state in Redis
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Distributed;

public class SessionController : ControllerBase
{
    private readonly IDistributedCache _cache;

    public SessionController(IDistributedCache cache)
    {
        _cache = cache;
    }

    [HttpPost("set-user-id")]
    public async Task<IActionResult> SetUserId(string userId)
    {
        // In production, scope the key to the caller's session or identity
        // (e.g., an illustrative $"session:{sessionId}:UserId") so users do not share one entry.
        await _cache.SetStringAsync("UserId", userId);
        return Ok();
    }

    [HttpGet("get-user-id")]
    public async Task<IActionResult> GetUserId()
    {
        string userId = await _cache.GetStringAsync("UserId");
        if (userId == null)
        {
            return NotFound();
        }
        return Ok(userId);
    }
}
"""

### 2.2 Use Distributed Caching for Read-Heavy Data

* **Do This:** Implement a caching strategy using Azure Cache for Redis to reduce database load and improve response times.
* **Don't Do This:** Retrieve data directly from the database for every request, especially for frequently accessed information.

**Why:** Caching significantly improves performance and reduces the cost of database operations.

**Example:**

"""csharp
// Anti-pattern: Retrieving product details from the database on every request
public async Task<Product> GetProductById(int productId)
{
    using (var db = new MyDbContext())
    {
        return await db.Products.FindAsync(productId);
    }
}
"""

Using Azure Cache for Redis:

"""csharp
// Correct: Using Azure Cache for Redis
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json; // Ensure the Newtonsoft.Json NuGet package is installed

public class ProductsController : ControllerBase
{
    private readonly IDistributedCache _cache;
    private readonly MyDbContext _dbContext;

    public ProductsController(IDistributedCache cache, MyDbContext dbContext)
    {
        _cache = cache;
        _dbContext = dbContext;
    }

    [HttpGet("products/{id}")]
    public async Task<IActionResult> GetProduct(int id)
    {
        string recordKey = $"product:{id}";
        string cachedProduct = await _cache.GetStringAsync(recordKey);

        if (!string.IsNullOrEmpty(cachedProduct))
        {
            // Product found in cache
            Product product = JsonConvert.DeserializeObject<Product>(cachedProduct);
            return Ok(product);
        }
        else
        {
            // Product not found in cache, retrieve from database
            var product = await _dbContext.Products.FindAsync(id);
            if (product == null)
            {
                return NotFound();
            }

            // Serialize and store in cache with expiration options
            string serializedProduct = JsonConvert.SerializeObject(product);
            var options = new DistributedCacheEntryOptions()
                .SetAbsoluteExpiration(DateTime.Now.AddMinutes(10))
                .SetSlidingExpiration(TimeSpan.FromMinutes(2));
            await _cache.SetStringAsync(recordKey, serializedProduct, options);

            return Ok(product);
        }
    }
}

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class MyDbContext : DbContext
{
    public MyDbContext() { } // Empty constructor required for EF tooling

    public MyDbContext(DbContextOptions<MyDbContext> options) : base(options) { }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)
        {
            // Replace with your own connection string
            optionsBuilder.UseSqlServer("Server=localhost;Database=ProductsDb;Trusted_Connection=True;MultipleActiveResultSets=true");
        }
    }

    public DbSet<Product> Products { get; set; }
}
"""

### 2.3 Choose the Right Data Store for Your Needs

* **Do This:** Select the appropriate Azure data store based on the data type, access patterns, and scalability requirements. Use Azure Cosmos DB for diverse data access scenarios.
* **Don't Do This:** Use a single data store for all types of data, especially if it doesn't fit the specific needs.

**Why:** Each Azure data store is optimized for different types of data and workloads.

**Examples:**

* **Azure SQL Database:** Suitable for relational data with complex query requirements.
* **Azure Cosmos DB:** Best for NoSQL data with global distribution and high scalability needs. A good fit when you need a flexible schema or multiple APIs (SQL, MongoDB, Cassandra, Gremlin, Table).
* **Azure Blob Storage:** Ideal for storing unstructured data like media files.
* **Azure Cache for Redis:** Best for caching frequently accessed data.

### 2.4 Apply Appropriate Expiration Policies

* **Do This:** Define expiration policies for cached data to ensure data freshness and prevent stale data from being served.
* **Don't Do This:** Cache data indefinitely without considering its volatility.

**Why:** Stale data can lead to incorrect application behavior and a poor user experience.

**Example:**

"""csharp
// Setting expiration policies for cached data
var options = new DistributedCacheEntryOptions()
    .SetAbsoluteExpiration(DateTime.Now.AddMinutes(30)) // Expire after 30 minutes
    .SetSlidingExpiration(TimeSpan.FromMinutes(10));     // Expire if inactive for 10 minutes

await _cache.SetStringAsync("ProductList", productListJson, options);
"""

### 2.5 Implement Data Partitioning for Scalability

* **Do This:** Partition large datasets across multiple nodes or containers to improve scalability and performance.
* **Don't Do This:** Store all data in a single partition, which can become a bottleneck.

**Why:** Partitioning distributes the load and allows for parallel processing of data.

**Example (Azure Cosmos DB):** Choosing a partition key smartly is critical for optimal performance. Choose a key that distributes the load evenly and allows for efficient querying; a frequently queried field is often a good candidate. The ideal partition key has many distinct values (high cardinality), not just a few.

### 2.6 Secure Sensitive Data at Rest and in Transit

* **Do This:** Encrypt sensitive data at rest using Azure Key Vault and in transit using HTTPS.
* **Don't Do This:** Store sensitive data in plain text or transmit it over unencrypted channels.

**Why:** Protecting sensitive data is crucial for compliance and data security.

**Examples:**

* **Azure Key Vault:** Store connection strings, API keys, and other secrets securely.
* **HTTPS:** Ensure all communication between the client and server is encrypted.

### 2.7 Handle Concurrency Correctly

* **Do This:** Implement optimistic or pessimistic concurrency control to prevent data corruption when multiple users access the same data simultaneously.
* **Don't Do This:** Ignore concurrency issues, which can lead to data loss or inconsistencies.

**Why:** Concurrency control ensures data integrity in multi-user environments.

**Example (Optimistic Concurrency with Azure Cosmos DB):** Leverage the "_etag" property for optimistic concurrency. Fetch the "_etag" when reading the document and include it in the "ReplaceItemAsync" call. If the "_etag" has changed since you read the document, the operation fails, indicating a concurrency conflict (see the sketch below).
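A minimal sketch of that "_etag" pattern with the "Microsoft.Azure.Cosmos" SDK; the document type and partition key choice are illustrative:

"""csharp
// Requires the Microsoft.Azure.Cosmos NuGet package
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class ProductDocument
{
    public string id { get; set; }       // Cosmos DB requires a lowercase "id" property
    public string Category { get; set; } // Illustrative partition key property
    public decimal Price { get; set; }
}

public static class ProductUpdater
{
    public static async Task UpdatePriceAsync(Container container, string id, string category, decimal newPrice)
    {
        // Read the current document; the response carries its current _etag
        ItemResponse<ProductDocument> response =
            await container.ReadItemAsync<ProductDocument>(id, new PartitionKey(category));

        ProductDocument product = response.Resource;
        product.Price = newPrice;

        try
        {
            // The replace succeeds only if the stored _etag still matches the one we read
            await container.ReplaceItemAsync(
                product, id, new PartitionKey(category),
                new ItemRequestOptions { IfMatchEtag = response.ETag });
        }
        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            // Another writer updated the document first: re-read and retry, or surface a conflict
        }
    }
}
"""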
### 2.8 Monitor and Optimize State Management Performance

* **Do This:** Use Azure Monitor to track the performance of your data stores.