# Tooling and Ecosystem Standards for Supabase
This document outlines the recommended tools, libraries, extensions, and best practices for developing applications with Supabase. Adhering to these standards will promote consistency, maintainability, performance, and security within your Supabase projects.
## 1. Core Development Tools
### 1.1 Supabase CLI
**Standard:** Use the Supabase CLI for local development, database migrations, and environment management.
* **Why:** The Supabase CLI provides a unified interface for interacting with your Supabase projects. It simplifies common tasks such as setting up a local development environment, managing database schema changes, and deploying your application.
**Do This:**
* Install the Supabase CLI: `npm install -g supabase` or `brew install supabase/tap/supabase`
* Use `supabase init` to initialize a new Supabase project in your local directory.
* Use `supabase start` to start the local Supabase stack (Postgres, GoTrue, Storage, etc.).
* Use `supabase db diff` to generate migrations based on your local database changes.
* Use `supabase db push` to apply local migrations to your linked Supabase project.
* Use `.env` files to manage environment variables for local development.
**Don't Do This:**
* Manually manage database schemas without using migrations.
* Directly modify the remote database schema in production without first testing changes locally.
* Hardcode sensitive information (API keys, database passwords) in your application code.
**Example:**
Initialize a Supabase project and generate a migration:
"""bash
supabase init
supabase start
# Make changes to your local database schema
supabase db diff --schema public --name add_users_table # Name is important
supabase db push
"""
### 1.2 Supabase Studio
**Standard:** Use Supabase Studio for database exploration, data manipulation, and user authentication management.
* **Why:** Supabase Studio provides a user-friendly interface for interacting with your Supabase project. It allows you to easily browse your database schema, view and edit data, manage user accounts, and monitor your project's performance.
**Do This:**
* Access Supabase Studio through the Supabase dashboard.
* Use the Table Editor to view and modify data in your tables.
* Use the Authentication section to manage user accounts and configure authentication providers.
* Use the SQL Editor to run custom SQL queries against your database.
* Use the Storage section to manage files stored in Supabase Storage.
* Use the comprehensive logs to debug issues in real time.
**Don't Do This:**
* Store sensitive information in plain text in your database.
* Grant excessive permissions to users or roles.
* Expose your Supabase API keys or database credentials directly to clients.
### 1.3 Drizzle ORM
**Standard:** Utilize Drizzle ORM for type-safe database interactions, schema management, and complex queries.
* **Why:** Drizzle ORM provides a highly flexible and type-safe way to interact with your Supabase database. It allows you to define your database schema in code, generate migrations, and build complex queries with autocompletion and type checking. It integrates seamlessly with TypeScript and offers excellent performance.
**Do This:**
* Install Drizzle ORM and the postgres-js driver: `npm install drizzle-orm postgres` (plus `npm install -D drizzle-kit` for migrations)
* Define your database schema using Drizzle's schema builder.
* Use Drizzle's query builder to construct type-safe SQL queries.
* Generate migrations using Drizzle's migration tooling.
**Don't Do This:**
* Write raw SQL queries directly in your application code (except for very specific, performance-critical cases where ORM overhead is unacceptable).
* Ignore Drizzle's type checking and rely on runtime errors to catch database issues.
**Example:**
Define a table schema using Drizzle ORM:
"""typescript
import { pgTable, serial, text, timestamp } from 'drizzle-orm/pg-core';
export const users = pgTable('users', {
id: serial('id').primaryKey(),
name: text('name').notNull(),
email: text('email').notNull().unique(),
createdAt: timestamp('created_at').defaultNow(),
});
"""
Query data using Drizzle ORM:
"""typescript
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import { users } from './schema';
const connectionString = process.env.DATABASE_URL; // From .env
const client = postgres(connectionString);
const db = drizzle(client);
async function getUsers() {
const allUsers = await db.select().from(users);
console.log(allUsers);
return allUsers;
}
getUsers();
"""
### 1.4 PostgREST
**Standard:** Understand and leverage PostgREST's capabilities for building RESTful APIs directly from your PostgreSQL database. While Drizzle is preferred for more complex scenarios, PostgREST can be useful for simple CRUD operations.
* **Why:** PostgREST automatically generates a RESTful API from your database schema, allowing you to quickly expose your data to clients without writing custom server-side code.
**Do This:**
* Configure row-level security (RLS) policies to control access to your data.
* Use PostgREST's filtering and sorting capabilities to optimize data retrieval.
* Leverage PostgREST's OpenAPI specification for API documentation and client generation.
**Don't Do This:**
* Bypass RLS policies and expose sensitive data without proper authorization.
* Rely solely on PostgREST for complex business logic or data transformations.
* Overexpose the database schema structure.
**Example:**
Accessing users through PostgREST with the necessary RLS policies ensures data security:
```sql
-- Enable RLS on the users table
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- Allow users to see only their own records
CREATE POLICY "Users can only see their own records" ON users
FOR SELECT
TO authenticated
USING ((select auth.uid()) = id);

-- Allow users to update only their own records
CREATE POLICY "Users can update their own record" ON users
FOR UPDATE
TO authenticated
USING ((select auth.uid()) = id)
WITH CHECK ((select auth.uid()) = id);
```
## 2. Authentication and Authorization
### 2.1 Supabase Auth
**Standard:** Use Supabase Auth for user authentication and authorization.
* **Why:** Supabase Auth provides a secure and scalable authentication solution with built-in support for social providers, email/password authentication, and magic links.
**Do This:**
* Enable and configure the desired authentication providers in the Supabase dashboard.
* Use the Supabase client library (`@supabase/supabase-js`) to handle user sign-up, sign-in, and sign-out.
* Implement proper error handling for authentication operations.
* Use Row Level Security (RLS) in conjunction with Auth to control access to data based on user roles and permissions.
**Don't Do This:**
* Store user passwords directly in your database.
* Implement custom authentication logic without leveraging Supabase Auth.
* Expose sensitive authentication tokens or secrets to clients.
**Example:**
Sign up a user using the Supabase client library and implementing error handling:
"""javascript
import { createClient } from '@supabase/supabase-js';
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
const supabaseKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
const supabase = createClient(supabaseUrl, supabaseKey);
async function signUpUser(email, password) {
try {
const { data, error } = await supabase.auth.signUp({
email: email,
password: password,
});
if (error) {
console.error('Error signing up:', error.message);
return { success: false, error: error.message };
}
console.log('Sign up successful:', data);
return { success: true, data: data };
} catch (err) {
console.error('Unexpected error during sign-up:', err);
return { success: false, error: 'An unexpected error occurred.' };
}
}
"""
### 2.2 Row Level Security (RLS)
**Standard:** Implement Row Level Security (RLS) to control access to data at the database level.
* **Why:** RLS allows you to define fine-grained access control policies that determine which users can access which rows of data. This is crucial for protecting sensitive information and ensuring data privacy. Combining Auth with RLS is a powerful combination.
**Do This:**
* Enable RLS on tables containing sensitive data.
* Define policies that restrict access based on user roles, permissions, or other criteria.
* Use the "auth.uid()" function to identify the currently authenticated user.
* Thoroughly test your RLS policies to ensure they are working as expected.
**Don't Do This:**
* Disable RLS on tables containing sensitive data.
* Grant excessive permissions to users or roles.
* Rely solely on application-level logic to enforce access control.
**Example:**
Granting the `authenticated` role read access to its own rows only (assumes a `user_id` column storing the owner's `auth.users` id):
```sql
CREATE POLICY "Users can read their own profile data."
ON profiles
FOR SELECT
TO authenticated
USING ((select auth.uid()) = user_id);
```
## 3. Realtime Functionality
### 3.1 Supabase Realtime
**Standard:** Utilize Supabase Realtime for building real-time features such as live updates, chat applications, and collaborative editing.
* **Why:** Supabase Realtime provides a scalable and reliable real-time messaging infrastructure based on WebSockets.
**Do This:**
* Use the Supabase client library to subscribe to database changes.
* Implement proper error handling for real-time events.
* Consider using channels to group related events and reduce network traffic.
**Don't Do This:**
* Expose sensitive data through real-time events without proper authorization.
* Overload the real-time server with excessive events.
**Example:**
Subscribing to changes on the "messages" table using Supabase Realtime:
"""javascript
import { createClient } from '@supabase/supabase-js';
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
const supabaseKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
const supabase = createClient(supabaseUrl, supabaseKey);
supabase
.channel('any')
.on(
'postgres_changes',
{ event: '*', schema: 'public', table: 'messages' },
(payload) => {
console.log('Change received!', payload);
// Update the UI with the new data.
}
)
.subscribe();
"""
## 4. Edge Functions
### 4.1 Supabase Edge Functions
**Standard:** Use Supabase Edge Functions for serverless functions that run close to your users.
* **Why:** Edge Functions allow you to execute code at the edge of the network, reducing latency and improving performance. They are ideal for tasks such as request routing, authentication, and data transformation.
**Do This:**
* Write Edge Functions using TypeScript or JavaScript.
* Use the Supabase CLI to deploy Edge Functions to the Supabase platform.
* Implement proper error handling and logging for Edge Functions.
**Don't Do This:**
* Perform long-running or computationally intensive tasks in Edge Functions.
* Store sensitive information directly in Edge Function code.
* Overuse Edge Functions, as each invocation incurs a cost.
**Example:**
A simple Edge Function that returns "Hello, World!":
```typescript
// index.ts
Deno.serve((req) => {
  return new Response("Hello, World!", {
    headers: { "Content-Type": "text/plain" },
  });
});
```
Deploy the function using the Supabase CLI:
```bash
supabase functions deploy hello-world
```
## 5. Storage
### 5.1 Supabase Storage
**Standard:** Use Supabase Storage for storing and serving files, such as images, videos, and documents.
* **Why:** Supabase Storage provides a scalable and secure storage solution with built-in support for access control and transformations.
**Do This:**
* Use the Supabase client library to upload and download files.
* Configure access control policies to restrict access to files.
* Use image transformations to optimize images for different devices and screen sizes.
**Don't Do This:**
* Store sensitive information in Supabase Storage without proper encryption.
* Grant excessive permissions to users or roles.
* Expose your Supabase Storage API keys or credentials directly to clients.
**Example:**
Uploading a file to Supabase Storage using the Supabase client library:
"""javascript
import { createClient } from '@supabase/supabase-js';
import { v4 as uuidv4 } from 'uuid';
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
const supabaseKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
const supabase = createClient(supabaseUrl, supabaseKey);
async function uploadFile(file) {
const { data, error } = await supabase.storage
.from('avatars')
.upload("${uuidv4()}-${file.name}", file, {
cacheControl: '3600',
upsert: false
});
if (error) {
console.error('Error uploading file:', error);
return null;
}
console.log('File uploaded successfully:', data);
return data;
}
"""
## 6. Database Extensions
### 6.1 Recommended Extensions
**Standard:** Enable and use relevant PostgreSQL extensions to enhance your database functionality.
* **Why:** PostgreSQL extensions provide a wide range of features, such as full-text search, geospatial data types, and UUID generation.
**Do This:**
* Enable the "uuid-ossp" extension for generating UUIDs. "CREATE EXTENSION IF NOT EXISTS "uuid-ossp";"
* Enable the "pg_trgm" extension for fuzzy string matching. "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
* Enable the "postgis" extension for geospatial data types and functions (if applicable). "CREATE EXTENSION IF NOT EXISTS postgis;"
* Consider other extensions based on your specific needs.
**Don't Do This:**
* Enable unnecessary extensions, as they can increase database overhead.
* Use extensions without understanding their implications.
**Example:**
Using the "uuid-ossp" extension to generate a UUID:
"""sql
INSERT INTO users (id, name, email) VALUES (uuid_generate_v4(), 'John Doe', 'john.doe@example.com');
"""
## 7. Framework Integrations
### 7.1 Next.js
**Standard:** Use the official Supabase Next.js helpers for building server-rendered and static websites with Supabase. The examples below use `@supabase/auth-helpers-nextjs`; note that newer projects may prefer its successor, `@supabase/ssr`.
* **Why:** The Supabase Next.js integration simplifies authentication, data fetching, and real-time updates in Next.js applications.
**Do This:**
* Use the `withPageAuth` higher-order function to protect pages requiring authentication.
* Use the `useSupabaseClient` hook to access the Supabase client in your components.
* Use Server-Side Rendering (SSR) or Static Site Generation (SSG) to pre-render your pages for improved performance and SEO.
**Don't Do This:**
* Implement custom authentication logic in your Next.js application.
* Expose your Supabase API keys or database credentials directly to clients.
Example for protecting pages with authentication:
```javascript
// pages/profile.js
import { withPageAuth } from '@supabase/auth-helpers-nextjs';
import { useSupabaseClient, useUser } from '@supabase/auth-helpers-react';
import { useState, useEffect } from 'react';

function Profile() {
  const supabase = useSupabaseClient();
  const user = useUser();
  const [profile, setProfile] = useState(null);

  useEffect(() => {
    if (!user) return;
    async function fetchProfile() {
      const { data, error } = await supabase
        .from('profiles')
        .select('*')
        .eq('user_id', user.id)
        .single();
      if (error) {
        console.error('Error fetching profile:', error);
      } else {
        setProfile(data);
      }
    }
    fetchProfile();
  }, [supabase, user]);

  if (!profile) {
    return <p>Loading...</p>;
  }
  return (
    <div>
      <p>Welcome, {profile.name}!</p>
      {/* Display user profile data */}
    </div>
  );
}

export const getServerSideProps = withPageAuth({ redirectTo: '/login' });
export default Profile;
```
### 7.2 SvelteKit
**Standard:** Leverage "supabase-js" with SvelteKit for building reactive web applications integrated with Supabase.
* **Why:** SvelteKit provides a modern, performant framework for building web applications, and "supabase-js" allows seamless integration with Supabase's features.
**Do This:**
* Wrap "supabase-js" with Svelte stores to manage state reactively.
* Use server-side "load" functions within SvelteKit routes for secure data fetching.
* Implement SvelteKit hooks for authentication management.
**Don't Do This:**
* Expose Supabase credentials directly in client-side code.
* Perform database operations directly in components without using SvelteKit's loading and action mechanisms.
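A hedged sketch of the server-side `load` pattern (this assumes a `locals.supabase` client attached by a hook in `src/hooks.server.ts`, as in Supabase's SvelteKit guides; the `profiles` table is illustrative):

```typescript
// src/routes/profile/+page.server.ts
// `locals.supabase` is assumed to be populated by an auth hook.
export async function load({ locals }: { locals: { supabase: any } }) {
  const { data: profile, error } = await locals.supabase
    .from('profiles')
    .select('*')
    .single();
  if (error) {
    return { profile: null, error: error.message };
  }
  return { profile, error: null };
}
```

Because the query runs on the server, the user's session and any service credentials never reach client-side code, and the page component simply receives `data.profile`.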
### 7.3 Other Frameworks
**Standard:** Use the official or community-maintained Supabase client libraries for other frameworks, such as React, Vue.js, and Angular.
* **Why:** Using the appropriate client library will simplify integration with Supabase's features and ensure compatibility with the latest APIs.
**Do This:**
* Follow the documentation and best practices for the chosen framework.
* Use dependency injection or other techniques to manage the Supabase client instance.
* Implement proper error handling and loading states in your components.
**Don't Do This:**
* Implement custom logic for interacting with the Supabase API without using a client library.
* Ignore the framework's conventions and best practices.
## 8. Dependency Management
### 8.1 Package Managers
**Standard:** Use npm, yarn, or pnpm for managing project dependencies.
* **Why:** Package managers simplify the process of installing, updating, and removing dependencies, ensuring consistency across different environments.
**Do This:**
* Use a "package.json" file to define your project's dependencies.
* Use "npm install", "yarn install", or "pnpm install" to install dependencies.
* Use semantic versioning (semver) to specify dependency versions.
* Use lockfiles ("package-lock.json", "yarn.lock", or "pnpm-lock.yaml") to ensure consistent dependency versions across different environments.
**Don't Do This:**
* Manually manage dependencies without using a package manager.
* Commit the "node_modules" directory to your Git repository.
* Use wildcard version ranges for dependencies (e.g., "*" or "latest").
### 8.2 Supabase Client Library
**Standard:** Use the latest version of the `@supabase/supabase-js` client library.
* **Why:** The Supabase client library provides a convenient and type-safe way to interact with the Supabase API. Using the latest version ensures that you have access to the latest features and bug fixes.
**Do This:**
* Install the "@supabase/supabase-js" library using your package manager: "npm install @supabase/supabase-js" or "yarn add @supabase/supabase-js"
* Check for updates regularly and update the library when new versions are released.
* Refer to the Supabase documentation for the latest API changes and best practices.
## 9. Logging and Monitoring
### 9.1 Logging
**Standard:** Implement comprehensive logging throughout your application.
* **Why:** Logging provides valuable insights into your application's behavior, allowing you to identify and diagnose issues more easily.
**Do This:**
* Use a logging library such as `winston` or `pino` to write logs to a file or a logging service.
* Log important events, such as user sign-ins, database queries, and API requests.
* Include relevant context in your logs, such as user IDs, request IDs, and timestamps.
* Use different log levels (e.g., `debug`, `info`, `warn`, `error`) to indicate the severity of each log message.
**Don't Do This:**
* Log sensitive information, such as passwords or API keys.
* Write logs directly to the console in production.
* Ignore error conditions and fail to log important events.
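Libraries like `pino` emit one structured JSON object per log line, honor a minimum level, and can redact sensitive fields. As a dependency-free sketch of that pattern (the set of redacted field names below is our assumption):

```typescript
type Level = 'debug' | 'info' | 'warn' | 'error';
const LEVELS: Level[] = ['debug', 'info', 'warn', 'error'];
const REDACTED = new Set(['password', 'apiKey', 'token']); // assumed sensitive keys

// Produce one structured JSON log line, or null if below the minimum level.
function logLine(
  minLevel: Level,
  level: Level,
  msg: string,
  ctx: Record<string, unknown> = {}
): string | null {
  if (LEVELS.indexOf(level) < LEVELS.indexOf(minLevel)) return null; // filtered out
  const safeCtx = Object.fromEntries(
    Object.entries(ctx).map(([k, v]) => [k, REDACTED.has(k) ? '[REDACTED]' : v])
  );
  return JSON.stringify({ level, msg, time: new Date().toISOString(), ...safeCtx });
}
```

Structured lines like these are what make log search ("all `error` lines for this `userId`") practical once logs are shipped to a service.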
### 9.2 Monitoring
**Standard:** Set up monitoring to track your application's performance and health.
* **Why:** Monitoring allows you to proactively identify and address issues before they impact your users.
**Do This:**
* Use a monitoring service such as Grafana, Prometheus, or Datadog to collect and visualize metrics.
* Monitor key metrics such as CPU usage, memory usage, database query times, and error rates.
* Set up alerts to notify you when certain thresholds are exceeded.
* Regularly review your monitoring dashboards.
**Don't Do This:**
* Ignore performance issues or error conditions.
* Fail to set up alerts for critical metrics.
* Rely solely on manual monitoring.
## 10. Code Formatting and Linting
### 10.1 Code Formatting
**Standard:** Use a code formatter such as Prettier to automatically format your code.
* **Why:** Code formatters ensure consistent code style across your project, improving readability and maintainability.
**Do This:**
* Install Prettier as a dev dependency: `npm install --save-dev prettier` or `yarn add --dev prettier`
* Configure Prettier to use your preferred code style settings.
* Run Prettier automatically on every commit using a Git hook.
**Configuration example (.prettierrc.js):**
"""javascript
module.exports = {
semi: true,
trailingComma: 'all',
singleQuote: true,
printWidth: 120,
tabWidth: 2,
};
"""
### 10.2 Linting
**Standard:** Use a linter such as ESLint to identify and fix code quality issues.
* **Why:** Linters help you catch potential bugs, enforce code style conventions, and improve the overall quality of your code.
**Do This:**
* Install ESLint as a dev dependency: `npm install --save-dev eslint` or `yarn add --dev eslint`
* Configure ESLint to use your preferred linting rules. Consider extending recommended configurations like `eslint:recommended` or specialized sets like those from Airbnb.
* Run ESLint automatically on every commit using a Git hook.
* Use TypeScript for type checking.
**Configuration example (.eslintrc.js):**
"""javascript
module.exports = {
env: {
browser: true,
es2021: true,
node: true,
},
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
'prettier', // Integrate with Prettier
],
parser: '@typescript-eslint/parser',
plugins: ['@typescript-eslint'],
rules: {
// Add or override rules as needed
'no-unused-vars': 'warn',
'no-console': 'warn'
},
};
"""
By adhering to these comprehensive tooling and ecosystem standards, your Supabase projects will benefit from improved maintainability, performance, security, and developer productivity. Remember that these are guidelines, and specific project needs may require slight adjustments. Regularly review and update these standards to reflect the latest best practices and Supabase updates.
---
*Author: danielsogl, created Mar 6, 2025*

# Using .clinerules with Cline

This guide explains how to effectively use `.clinerules` with Cline, the AI-powered coding assistant.

The `.clinerules` file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.

Place the `.clinerules` file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
```yaml
# Project Overview
project:
  name: 'Your Project Name'
  description: 'Brief project description'
  stack:
    - technology: 'Framework/Language'
      version: 'X.Y.Z'
    - technology: 'Database'
      version: 'X.Y.Z'

# Code Standards
standards:
  style:
    - 'Use consistent indentation (2 spaces)'
    - 'Follow language-specific naming conventions'
  documentation:
    - 'Include JSDoc comments for all functions'
    - 'Maintain up-to-date README files'
  testing:
    - 'Write unit tests for all new features'
    - 'Maintain minimum 80% code coverage'

# Security Guidelines
security:
  authentication:
    - 'Implement proper token validation'
    - 'Use environment variables for secrets'
  dataProtection:
    - 'Sanitize all user inputs'
    - 'Implement proper error handling'
```
When writing rules: be specific, maintain organization, and update them regularly.
```yaml
# Common Patterns Example
patterns:
  components:
    - pattern: 'Use functional components by default'
    - pattern: 'Implement error boundaries for component trees'
  stateManagement:
    - pattern: 'Use React Query for server state'
    - pattern: 'Implement proper loading states'
```
Commit the `.clinerules` file to version control so the whole team shares the same rules.

Common troubleshooting areas: rules not being applied, conflicting rules, and performance considerations.
```yaml
# Basic .clinerules Example
project:
  name: 'Web Application'
  type: 'Next.js Frontend'
standards:
  - 'Use TypeScript for all new code'
  - 'Follow React best practices'
  - 'Implement proper error handling'
testing:
  unit:
    - 'Jest for unit tests'
    - 'React Testing Library for components'
  e2e:
    - 'Cypress for end-to-end testing'
documentation:
  required:
    - 'README.md in each major directory'
    - 'JSDoc comments for public APIs'
    - 'Changelog updates for all changes'
```
```yaml
# Advanced .clinerules Example
project:
  name: 'Enterprise Application'
compliance:
  - 'GDPR requirements'
  - 'WCAG 2.1 AA accessibility'
architecture:
  patterns:
    - 'Clean Architecture principles'
    - 'Domain-Driven Design concepts'
security:
  requirements:
    - 'OAuth 2.0 authentication'
    - 'Rate limiting on all APIs'
    - 'Input validation with Zod'
```
# Database: Create functions

You're a Supabase Postgres expert in writing database functions. Generate **high-quality PostgreSQL functions** that adhere to the following best practices:

## General Guidelines

1. **Default to `SECURITY INVOKER`:**
   - Functions should run with the permissions of the user invoking the function, ensuring safer access control.
   - Use `SECURITY DEFINER` only when explicitly required and explain the rationale.
2. **Set the `search_path` Configuration Parameter:**
   - Always set `search_path` to an empty string (`set search_path = '';`).
   - This avoids unexpected behavior and security risks caused by resolving object references in untrusted or unintended schemas.
   - Use fully qualified names (e.g., `schema_name.table_name`) for all database objects referenced within the function.
3. **Adhere to SQL Standards and Validation:**
   - Ensure all queries within the function are valid PostgreSQL SQL queries and compatible with the specified context (i.e., Supabase).

## Best Practices

1. **Minimize Side Effects:**
   - Prefer functions that return results over those that modify data unless they serve a specific purpose (e.g., triggers).
2. **Use Explicit Typing:**
   - Clearly specify input and output types, avoiding ambiguous or loosely typed parameters.
3. **Default to Immutable or Stable Functions:**
   - Where possible, declare functions as `IMMUTABLE` or `STABLE` to allow better optimization by PostgreSQL. Use `VOLATILE` only if the function modifies data or has side effects.
4. **Triggers (if Applicable):**
   - If the function is used as a trigger, include a valid `CREATE TRIGGER` statement that attaches the function to the desired table and event (e.g., `BEFORE INSERT`).

## Example Templates

### Simple Function with `SECURITY INVOKER`

```sql
create or replace function my_schema.hello_world()
returns text
language plpgsql
security invoker
set search_path = ''
as $$
begin
  return 'hello world';
end;
$$;
```

### Function with Parameters and Fully Qualified Object Names

```sql
create or replace function public.calculate_total_price(order_id bigint)
returns numeric
language plpgsql
security invoker
set search_path = ''
as $$
declare
  total numeric;
begin
  select sum(price * quantity)
  into total
  from public.order_items
  where order_id = calculate_total_price.order_id;
  return total;
end;
$$;
```

### Function as a Trigger

```sql
create or replace function my_schema.update_updated_at()
returns trigger
language plpgsql
security invoker
set search_path = ''
as $$
begin
  -- Update the "updated_at" column on row modification
  new.updated_at := now();
  return new;
end;
$$;

create trigger update_updated_at_trigger
before update on my_schema.my_table
for each row
execute function my_schema.update_updated_at();
```

### Function with Error Handling

```sql
create or replace function my_schema.safe_divide(numerator numeric, denominator numeric)
returns numeric
language plpgsql
security invoker
set search_path = ''
as $$
begin
  if denominator = 0 then
    raise exception 'Division by zero is not allowed';
  end if;
  return numerator / denominator;
end;
$$;
```

### Immutable Function for Better Optimization

```sql
create or replace function my_schema.full_name(first_name text, last_name text)
returns text
language sql
security invoker
set search_path = ''
immutable
as $$
  select first_name || ' ' || last_name;
$$;
```
# Database: Create RLS policies You're a Supabase Postgres expert in writing row level security policies. Your purpose is to generate a policy with the constraints given by the user. You should first retrieve schema information to write policies for, usually the 'public' schema. The output should use the following instructions: - The generated SQL must be valid SQL. - You can use only CREATE POLICY or ALTER POLICY queries, no other queries are allowed. - Always use double apostrophe in SQL strings (eg. 'Night''s watch') - You can add short explanations to your messages. - The result should be a valid markdown. The SQL code should be wrapped in ``` (including sql language tag). - Always use "auth.uid()" instead of "current_user". - SELECT policies should always have USING but not WITH CHECK - INSERT policies should always have WITH CHECK but not USING - UPDATE policies should always have WITH CHECK and most often have USING - DELETE policies should always have USING but not WITH CHECK - Don't use `FOR ALL`. Instead separate into 4 separate policies for select, insert, update, and delete. - The policy name should be short but detailed text explaining the policy, enclosed in double quotes. - Always put explanations as separate text. Never use inline SQL comments. - If the user asks for something that's not related to SQL policies, explain to the user that you can only help with policies. - Discourage `RESTRICTIVE` policies and encourage `PERMISSIVE` policies, and explain why. The output should look like this: ```sql CREATE POLICY "My descriptive policy." ON books FOR INSERT to authenticated USING ( (select auth.uid()) = author_id ) WITH ( true ); ``` Since you are running in a Supabase environment, take note of these Supabase-specific additions below. 
## Authenticated and unauthenticated roles Supabase maps every request to one of the roles: - `anon`: an unauthenticated request (the user is not logged in) - `authenticated`: an authenticated request (the user is logged in) These are actually [Postgres Roles](/docs/guides/database/postgres/roles). You can use these roles within your Policies using the `TO` clause: ```sql create policy "Profiles are viewable by everyone" on profiles for select to authenticated, anon using ( true ); -- OR create policy "Public profiles are viewable only by authenticated users" on profiles for select to authenticated using ( true ); ``` Note that `for ...` must be added after the table but before the roles. `to ...` must be added after `for ...`: ### Incorrect ```sql create policy "Public profiles are viewable only by authenticated users" on profiles to authenticated for select using ( true ); ``` ### Correct ```sql create policy "Public profiles are viewable only by authenticated users" on profiles for select to authenticated using ( true ); ``` ## Multiple operations PostgreSQL policies do not support specifying multiple operations in a single FOR clause. You need to create separate policies for each operation. ### Incorrect ```sql create policy "Profiles can be created and deleted by any user" on profiles for insert, delete -- cannot create a policy on multiple operators to authenticated with check ( true ) using ( true ); ``` ### Correct ```sql create policy "Profiles can be created by any user" on profiles for insert to authenticated with check ( true ); create policy "Profiles can be deleted by any user" on profiles for delete to authenticated using ( true ); ``` ## Helper functions Supabase provides some helper functions that make it easier to write Policies. ### `auth.uid()` Returns the ID of the user making the request. ### `auth.jwt()` Returns the JWT of the user making the request. 
Anything that you store in the user's `raw_app_meta_data` column or the `raw_user_meta_data` column will be accessible using this function. It's important to know the distinction between these two: - `raw_user_meta_data` - can be updated by the authenticated user using the `supabase.auth.update()` function. It is not a good place to store authorization data. - `raw_app_meta_data` - cannot be updated by the user, so it's a good place to store authorization data. The `auth.jwt()` function is extremely versatile. For example, if you store some team data inside `app_metadata`, you can use it to determine whether a particular user belongs to a team. For example, if this was an array of IDs: ```sql create policy "User is in team" on my_table to authenticated using ( team_id in (select auth.jwt() -> 'app_metadata' -> 'teams')); ``` ### MFA The `auth.jwt()` function can be used to check for [Multi-Factor Authentication](/docs/guides/auth/auth-mfa#enforce-rules-for-mfa-logins). For example, you could restrict a user from updating their profile unless they have at least 2 levels of authentication (Assurance Level 2): ```sql create policy "Restrict updates." on profiles as restrictive for update to authenticated using ( (select auth.jwt()->>'aal') = 'aal2' ); ``` ## RLS performance recommendations Every authorization system has an impact on performance. While row level security is powerful, the performance impact is important to keep in mind. This is especially true for queries that scan every row in a table - like many `select` operations, including those using limit, offset, and ordering. Based on a series of [tests](https://github.com/GaryAustin1/RLS-Performance), we have a few recommendations for RLS: ### Add indexes Make sure you've added [indexes](/docs/guides/database/postgres/indexes) on any columns used within the Policies which are not already indexed (or primary keys). 
For a Policy like this:

```sql
create policy "Users can access their own records"
on test_table
to authenticated
using ( (select auth.uid()) = user_id );
```

You can add an index like:

```sql
create index userid on test_table using btree (user_id);
```

### Call functions with `select`

You can use a `select` statement to improve policies that use functions. For example, instead of this:

```sql
create policy "Users can access their own records"
on test_table
to authenticated
using ( auth.uid() = user_id );
```

You can do:

```sql
create policy "Users can access their own records"
on test_table
to authenticated
using ( (select auth.uid()) = user_id );
```

This method works well for JWT functions like `auth.uid()` and `auth.jwt()`, as well as `security definer` functions. Wrapping the function causes an `initPlan` to be run by the Postgres optimizer, which allows it to "cache" the results per-statement, rather than calling the function on each row.

Caution: You can only use this technique if the results of the query or function do not change based on the row data.

### Minimize joins

You can often rewrite your Policies to avoid joins between the source and the target table. Instead, try to organize your policy to fetch all the relevant data from the target table into an array or set, then use an `IN` or `ANY` operation in your filter.
For example, this is a slow policy which joins the source `test_table` to the target `team_user`:

```sql
create policy "Users can access records belonging to their teams"
on test_table
to authenticated
using (
  (select auth.uid()) in (
    select user_id
    from team_user
    where team_user.team_id = team_id -- joins to the source "test_table.team_id"
  )
);
```

We can rewrite this to avoid the join, and instead select the filter criteria into a set:

```sql
create policy "Users can access records belonging to their teams"
on test_table
to authenticated
using (
  team_id in (
    select team_id
    from team_user
    where user_id = (select auth.uid()) -- no join
  )
);
```

### Specify roles in your policies

Always specify the role your policy applies to, using the `TO` clause. For example, instead of this query:

```sql
create policy "Users can access their own records"
on rls_test
using ( auth.uid() = user_id );
```

Use:

```sql
create policy "Users can access their own records"
on rls_test
to authenticated
using ( (select auth.uid()) = user_id );
```

This prevents the policy `( (select auth.uid()) = user_id )` from running for any `anon` users, since the execution stops at the `to authenticated` step.
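The four recommendations above compose naturally in a single policy. Here is a minimal sketch combining them, assuming a hypothetical `orders` table with a `customer_id` column (both names are illustrative, not from this project):

```sql
-- add an index on the column the policy filters by ("add indexes")
create index orders_customer_id on orders using btree (customer_id);

-- one granular policy applying the remaining recommendations at once
create policy "Customers can view their own orders"
on orders
for select
to authenticated -- specify roles: anon requests never evaluate this policy
using ( (select auth.uid()) = customer_id ); -- wrapped in select; no join
```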
# Database: Create migration

You are a Postgres Expert who loves creating secure database schemas.

This project uses the migrations provided by the Supabase CLI.

## Creating a migration file

Given the context of the user's message, create a database migration file inside the folder `supabase/migrations/`.

The file MUST be named in the format `YYYYMMDDHHmmss_short_description.sql` with proper casing for months, minutes, and seconds in UTC time:

1. `YYYY` - Four digits for the year (e.g., `2024`).
2. `MM` - Two digits for the month (01 to 12).
3. `DD` - Two digits for the day of the month (01 to 31).
4. `HH` - Two digits for the hour in 24-hour format (00 to 23).
5. `mm` - Two digits for the minute (00 to 59).
6. `ss` - Two digits for the second (00 to 59).
7. Add an appropriate short description for the migration.

For example:

```
20240906123045_create_profiles.sql
```

## SQL Guidelines

Write Postgres-compatible SQL code for Supabase migration files that:

- Includes a header comment with metadata about the migration, such as the purpose, affected tables/columns, and any special considerations.
- Includes thorough comments explaining the purpose and expected behavior of each migration step.
- Is written entirely in lowercase SQL.
- Adds copious comments for any destructive SQL commands, including truncating, dropping, or column alterations.
- Enables Row Level Security (RLS) on every new table, even if the table is intended for public access.
- When creating RLS Policies:
  - Ensure the policies cover all relevant access scenarios (e.g. select, insert, update, delete) based on the table's purpose and data sensitivity.
  - If the table is intended for public access, the policy can simply return `true`.
  - RLS Policies should be granular: one policy for `select`, one for `insert`, etc., and one for each Supabase role (`anon` and `authenticated`). DO NOT combine Policies even if the functionality is the same for both roles.
  - Include comments explaining the rationale and intended behavior of each security policy.

The generated SQL code should be production-ready, well-documented, and aligned with Supabase's best practices.
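As an illustration, here is a hedged sketch of a migration file following the guidelines above (the filename `20240906123045_create_profiles.sql`, table, and policy names are assumptions for the example, not part of this project):

```sql
-- migration: create profiles table
-- purpose: store public profile data for each auth user
-- affected: new table public.profiles
-- considerations: publicly readable; writes restricted to the owning user

create table public.profiles (
  id bigint generated always as identity primary key,
  user_id uuid not null references auth.users (id),
  username text not null
);

comment on table public.profiles is 'Public profile data for each user.';

-- rls must be enabled even though the table is intended for public access
alter table public.profiles enable row level security;

-- granular policies: one per operation and per role, never combined
create policy "Profiles are viewable by anon users"
on public.profiles for select to anon using ( true );

create policy "Profiles are viewable by authenticated users"
on public.profiles for select to authenticated using ( true );

-- only the owning user may create their profile row
create policy "Users can insert their own profile"
on public.profiles for insert to authenticated
with check ( (select auth.uid()) = user_id );

-- only the owning user may update their profile row
create policy "Users can update their own profile"
on public.profiles for update to authenticated
using ( (select auth.uid()) = user_id );
```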
# Postgres SQL Style Guide

## General

- Use lowercase for SQL reserved words to maintain consistency and readability.
- Employ consistent, descriptive identifiers for tables, columns, and other database objects.
- Use white space and indentation to enhance the readability of your code.
- Store dates in ISO 8601 format (`yyyy-mm-ddThh:mm:ss.sssss`).
- Include comments for complex logic, using `/* ... */` for block comments and `--` for line comments.

## Naming Conventions

- Avoid SQL reserved words and ensure names are unique and under 63 characters.
- Use snake_case for tables and columns.
- Prefer plural names for tables.
- Prefer singular names for columns.

## Tables

- Avoid prefixes like `tbl_` and ensure no table name matches any of its column names.
- Always add an `id` column of type `identity generated always` unless otherwise specified.
- Create all tables in the `public` schema unless otherwise specified.
- Always add the schema to SQL queries for clarity.
- Always add a comment to describe what the table does. The comment can be up to 1024 characters.

## Columns

- Use singular names and avoid generic names like `id`.
- For references to foreign tables, use the singular of the table name with the `_id` suffix. For example, `user_id` to reference the `users` table.
- Always use lowercase except in cases involving acronyms or when readability would be enhanced by an exception.

### Examples

```sql
create table books (
  id bigint generated always as identity primary key,
  title text not null,
  author_id bigint references authors (id)
);
comment on table books is 'A list of all the books in the library.';
```

## Queries

- Keep shorter queries on just a few lines; as a query gets larger, start adding newlines for readability.
- Add spaces for readability.
Smaller queries:

```sql
select *
from employees
where end_date is null;

update employees
set end_date = '2023-12-31'
where employee_id = 1001;
```

Larger queries:

```sql
select
  first_name,
  last_name
from
  employees
where
  start_date between '2021-01-01' and '2021-12-31'
  and status = 'employed';
```

### Joins and Subqueries

- Format joins and subqueries for clarity, aligning them with related SQL clauses.
- Prefer full table names when referencing tables. This helps for readability.

```sql
select
  employees.employee_name,
  departments.department_name
from
  employees
join
  departments on employees.department_id = departments.department_id
where
  employees.start_date > '2022-01-01';
```

## Aliases

- Use meaningful aliases that reflect the data or transformation applied, and always include the `as` keyword for clarity.

```sql
select count(*) as total_employees
from employees
where end_date is null;
```

## Complex queries and CTEs

- If a query is extremely complex, prefer a CTE.
- Make sure the CTE is clear and linear. Prefer readability over performance.
- Add comments to each block.

```sql
with department_employees as (
  -- Get all employees and their departments
  select
    employees.department_id,
    employees.first_name,
    employees.last_name,
    departments.department_name
  from
    employees
  join
    departments on employees.department_id = departments.department_id
),
employee_counts as (
  -- Count how many employees in each department
  select
    department_name,
    count(*) as num_employees
  from
    department_employees
  group by
    department_name
)
select
  department_name,
  num_employees
from
  employee_counts
order by
  department_name;
```
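The query examples above omit the schema for brevity; the Tables section, however, recommends always adding the schema to queries. A short sketch of what that looks like, reusing the `books` example and assuming a hypothetical `authors` table with a `name` column:

```sql
-- schema-qualified references, per the tables guidance
select
  public.books.title,
  public.authors.name
from
  public.books
join
  public.authors on public.books.author_id = public.authors.id;
```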
# Writing Supabase Edge Functions

You're an expert in writing TypeScript for the Deno JavaScript runtime. Generate **high-quality Supabase Edge Functions** that adhere to the following best practices:

## Guidelines

1. Try to use Web APIs and Deno's core APIs instead of external dependencies (e.g. use `fetch` instead of Axios, use the WebSockets API instead of node-ws).
2. If you are reusing utility methods between Edge Functions, add them to `supabase/functions/_shared` and import using a relative path. Do NOT have cross dependencies between Edge Functions.
3. Do NOT use bare specifiers when importing dependencies. If you need to use an external dependency, make sure it's prefixed with either `npm:` or `jsr:`. For example, `@supabase/supabase-js` should be written as `npm:@supabase/supabase-js`.
4. For external imports, always define a version. For example, `npm:express` should be written as `npm:express@4.18.2`.
5. For external dependencies, importing via `npm:` and `jsr:` is preferred. Minimize the use of imports from `deno.land/x`, `esm.sh`, and `unpkg.com`. If you have a package from one of those CDNs, you can replace the CDN hostname with the `npm:` specifier.
6. You can also use Node built-in APIs. You will need to import them using the `node:` specifier. For example, to import Node process: `import process from "node:process"`. Use Node APIs when you find gaps in Deno APIs.
7. Do NOT use `import { serve } from "https://deno.land/std@0.168.0/http/server.ts"`. Instead use the built-in `Deno.serve`.
8. The following environment variables (i.e. secrets) are pre-populated in both local and hosted Supabase environments. Users don't need to manually set them:
   - SUPABASE_URL
   - SUPABASE_ANON_KEY
   - SUPABASE_SERVICE_ROLE_KEY
   - SUPABASE_DB_URL
9. To set other environment variables (i.e. secrets), users can put them in a `.env` file and run `supabase secrets set --env-file path/to/env-file`.
10. A single Edge Function can handle multiple routes.
It is recommended to use a library like Express or Hono to handle the routes, as it's easier for developers to understand and maintain. Each route must be prefixed with `/function-name` so they are routed correctly.

11. File write operations are ONLY permitted on the `/tmp` directory. You can use either Deno or Node File APIs.
12. Use the `EdgeRuntime.waitUntil(promise)` static method to run long-running tasks in the background without blocking the response to a request. Do NOT assume it is available in the request / execution context.

## Example Templates

### Simple Hello World Function

```tsx
interface reqPayload {
  name: string
}

console.info('server started')

Deno.serve(async (req: Request) => {
  const { name }: reqPayload = await req.json()
  const data = {
    message: `Hello ${name} from foo!`,
  }

  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json', Connection: 'keep-alive' },
  })
})
```

### Example Function using Node built-in API

```tsx
import { randomBytes } from 'node:crypto'
import { createServer } from 'node:http'
import process from 'node:process'

const generateRandomString = (length) => {
  const buffer = randomBytes(length)
  return buffer.toString('hex')
}

const randomString = generateRandomString(10)
console.log(randomString)

const server = createServer((req, res) => {
  const message = `Hello`
  res.end(message)
})

server.listen(9999)
```

### Using npm packages in Functions

```tsx
import express from 'npm:express@4.18.2'

const app = express()

app.get(/(.*)/, (req, res) => {
  res.send('Welcome to Supabase')
})

app.listen(8000)
```

### Generate embeddings using the built-in `Supabase.ai` API

```tsx
const model = new Supabase.ai.Session('gte-small')

Deno.serve(async (req: Request) => {
  const params = new URL(req.url).searchParams
  const input = params.get('text')
  const output = await model.run(input, { mean_pool: true, normalize: true })
  return new Response(JSON.stringify(output), {
    headers: {
      'Content-Type': 'application/json',
      Connection: 'keep-alive',
    },
  })
})
```