
Introduction
Supabase load testing is essential if you rely on its PostgREST APIs, authentication services, storage, and Edge Function-backed workflows to power production applications. As usage grows, even well-designed Supabase projects can run into bottlenecks around database query performance, row-level security (RLS) evaluation, auth token issuance, connection limits, and burst traffic against REST endpoints.
Because Supabase combines several critical backend services behind a clean API layer, performance testing should go beyond simple “can this endpoint respond?” checks. A proper load testing strategy should measure how your Supabase project behaves under realistic user traffic: sign-ups and logins, authenticated reads and writes, filtered queries, pagination, and storage-related operations. This helps you identify whether slowdowns come from the database itself, RLS policies, inefficient queries, or API rate constraints.
In this guide, you’ll learn how to load test Supabase with LoadForge using Locust-based Python scripts. We’ll cover basic API load testing, authenticated scenarios, mixed workloads, and advanced patterns that more closely reflect real application traffic. Along the way, we’ll show how LoadForge’s distributed testing, real-time reporting, cloud-based infrastructure, and CI/CD integration can help you validate backend scalability before users feel the pain.
Prerequisites
Before you start performance testing Supabase, make sure you have the following:
- A Supabase project
- Your Supabase project URL, such as:
https://your-project-ref.supabase.co
- A valid API key:
  - The anon key for public client-style traffic
  - The service_role key only for trusted server-side testing where appropriate
- A test dataset in your Supabase database
- At least one table exposed through the REST API
- Optional test users for authentication flows
- Optional storage bucket if you want to simulate file-related traffic
- A LoadForge account
You should also understand the main Supabase endpoints you’ll likely test:
- REST API: /rest/v1/&lt;table&gt;
- Authentication:
  - /auth/v1/token?grant_type=password
  - /auth/v1/signup
  - /auth/v1/user
- Storage: /storage/v1/object/&lt;bucket&gt;/&lt;path&gt;
For realistic load testing, avoid using production data or production credentials. Create a dedicated test environment or at least a dedicated schema, dataset, and test users. This is especially important when stress testing write-heavy endpoints.
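Seeding that dedicated dataset can be scripted. As a minimal sketch (the column names here are assumptions matching the example products table used later in this guide), a stdlib-only helper that generates synthetic rows you could then bulk-insert via a single POST to /rest/v1/products, since PostgREST accepts a JSON array body:

```python
import random
import uuid

# Hypothetical categories for the example products table used in this guide.
CATEGORIES = ["electronics", "books", "clothing", "garden", "toys"]

def generate_products(count):
    """Generate `count` synthetic product rows suitable for a bulk
    insert via POST /rest/v1/products (PostgREST accepts a JSON array)."""
    rows = []
    for _ in range(count):
        rows.append({
            "name": f"Test product {uuid.uuid4().hex[:8]}",
            "category": random.choice(CATEGORIES),
            "price": round(random.uniform(5, 500), 2),
            "in_stock": random.random() > 0.2,
        })
    return rows
```

Seeding tens of thousands of rows this way before testing gives query plans something realistic to chew on.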
Understanding Supabase Under Load
Supabase sits on top of PostgreSQL and exposes several services that can behave differently under concurrent traffic.
REST API performance
Supabase’s REST interface is powered by PostgREST. It translates HTTP requests into SQL queries. Under load, performance depends heavily on:
- Query complexity
- Index coverage
- Pagination strategy
- Sorting and filtering patterns
- Row-level security policy evaluation
- Database connection availability
A simple indexed lookup like:
GET /rest/v1/products?id=eq.123
will usually scale far better than a broad filtered query with sorting and no proper indexes.
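PostgREST encodes filters directly in the query string as column=operator.value pairs. If your test scripts generate many query variants, a small helper keeps them consistent; this is our own sketch, not part of any Supabase SDK:

```python
from urllib.parse import urlencode

def rest_query(table, select="*", order=None, limit=None, **filters):
    """Build a PostgREST path like /rest/v1/products?select=id&id=eq.123.

    Filters are passed as column="op.value" keyword pairs, e.g. id="eq.123".
    The safe characters keep PostgREST operators readable in the URL.
    """
    params = {"select": select}
    params.update(filters)
    if order:
        params["order"] = order
    if limit:
        params["limit"] = limit
    return f"/rest/v1/{table}?{urlencode(params, safe=',.()*')}"
```

For example, `rest_query("products", select="id,name", id="eq.123")` reproduces the indexed lookup shown above.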
Authentication traffic
Supabase Auth can become a hotspot during peak login or signup events. Common issues include:
- Token issuance latency
- Password-based auth spikes
- Repeated /auth/v1/user validation calls
- Session refresh storms from many clients at once
If your app has login-heavy usage, load testing auth endpoints separately from data endpoints is important.
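Refresh storms are worth modeling honestly: clients that refresh tokens exactly at expiry all hit the auth service in the same instant. A hedged sketch of jittered refresh scheduling (the function and its margin are our own illustration, not Supabase behavior):

```python
import random

def next_refresh_at(issued_at, expires_in, jitter_fraction=0.1):
    """Pick a refresh time before token expiry, jittered so that many
    clients issued tokens at the same moment do not all refresh together.

    issued_at and the return value are epoch seconds; expires_in is the
    token lifetime in seconds (Supabase includes this in token responses).
    """
    margin = expires_in * jitter_fraction
    # Refresh somewhere in the window [expiry - 2*margin, expiry - margin].
    return issued_at + expires_in - margin - random.uniform(0, margin)
```

Using logic like this in simulated clients lets you test both the smeared and the worst-case (no jitter) refresh patterns.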
Database-backed writes
Insert and update traffic can expose:
- Lock contention
- Constraint overhead
- RLS write policy costs
- Trigger or function execution time
- Replication lag in read-heavy architectures
Write performance is often where seemingly healthy systems break first during stress testing.
Storage and backend workflows
If your Supabase application uses storage uploads, signed URLs, or edge functions that interact with the database, you should test those paths too. End-to-end performance can degrade even when the raw database is healthy.
This is why a good Supabase performance testing plan should include:
- Anonymous read traffic
- Authenticated user traffic
- Mixed read/write API traffic
- Authentication bursts
- Optional storage or function calls
Writing Your First Load Test
Let’s start with a basic Supabase load test against a public table exposed through the REST API. This is a good first step for measuring baseline read performance.
Assume you have a products table with columns like:
id, name, category, price, in_stock, created_at
This script simulates users browsing products, filtering by category, and fetching a single record.
```python
from locust import HttpUser, task, between

SUPABASE_ANON_KEY = "your-anon-key"


class SupabasePublicReadUser(HttpUser):
    wait_time = between(1, 3)
    host = "https://your-project-ref.supabase.co"

    common_headers = {
        "apikey": SUPABASE_ANON_KEY,
        "Authorization": f"Bearer {SUPABASE_ANON_KEY}",
        "Content-Type": "application/json",
    }

    @task(5)
    def list_products(self):
        self.client.get(
            "/rest/v1/products?select=id,name,category,price&limit=20",
            headers=self.common_headers,
            name="GET /products list",
        )

    @task(3)
    def filter_products_by_category(self):
        self.client.get(
            "/rest/v1/products?select=id,name,price&category=eq.electronics&order=created_at.desc&limit=10",
            headers=self.common_headers,
            name="GET /products filtered",
        )

    @task(1)
    def get_single_product(self):
        self.client.get(
            "/rest/v1/products?select=*&id=eq.101",
            headers=self.common_headers,
            name="GET /products by id",
        )
```

What this test does
This basic load test simulates common public API access patterns:
- Listing records
- Filtering records
- Looking up a specific row
Why this is useful
This gives you a baseline for:
- Average response time
- P95 and P99 latency
- Error rates under moderate concurrency
- Query efficiency for your most common reads
What to watch for
If this simple test performs poorly, investigate:
- Missing indexes on filtered or sorted columns
- Large payloads from overly broad select=*
- Expensive RLS policies
- Insufficient pagination
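On pagination specifically: offset-based paging gets slower as the offset grows, because PostgreSQL still has to walk past the skipped rows. A keyset (cursor) style, filtering on the last-seen created_at value, usually holds up better under load. A sketch of building successive page queries (the helper and column choices are ours, for illustration):

```python
def keyset_page(table, cursor=None, page_size=20):
    """Build a PostgREST query for the next page, keyed on created_at.

    The first page passes cursor=None; later pages pass the created_at
    value of the last row returned by the previous page.
    """
    query = f"/rest/v1/{table}?select=id,name,created_at&order=created_at.desc&limit={page_size}"
    if cursor is not None:
        query += f"&created_at=lt.{cursor}"
    return query
```

With an index on created_at, every page is a cheap range scan regardless of how deep the user paginates.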
In LoadForge, you can scale this script across distributed generators and observe how performance changes as concurrent users increase.
Advanced Load Testing Scenarios
Once you’ve established a baseline, move on to more realistic Supabase scenarios.
Scenario 1: Authentication and authenticated profile access
Many Supabase applications depend heavily on auth. This script simulates users logging in with email and password, fetching their profile, and validating session-backed access.
Assume you have a profiles table linked to authenticated users and protected by RLS.
```python
from locust import HttpUser, task, between
import random

TEST_USERS = [
    {"email": "loadtest1@example.com", "password": "SupabaseTest123!"},
    {"email": "loadtest2@example.com", "password": "SupabaseTest123!"},
    {"email": "loadtest3@example.com", "password": "SupabaseTest123!"},
]

SUPABASE_ANON_KEY = "your-anon-key"


class SupabaseAuthUser(HttpUser):
    wait_time = between(1, 2)
    host = "https://your-project-ref.supabase.co"

    def on_start(self):
        user = random.choice(TEST_USERS)
        login_headers = {
            "apikey": SUPABASE_ANON_KEY,
            "Content-Type": "application/json",
        }
        login_payload = {
            "email": user["email"],
            "password": user["password"],
        }
        response = self.client.post(
            "/auth/v1/token?grant_type=password",
            json=login_payload,
            headers=login_headers,
            name="POST /auth token",
        )
        if response.status_code == 200:
            data = response.json()
            self.access_token = data.get("access_token")
            self.user_id = data.get("user", {}).get("id")
        else:
            self.access_token = None
            self.user_id = None

    def auth_headers(self):
        return {
            "apikey": SUPABASE_ANON_KEY,
            "Authorization": f"Bearer {self.access_token}",
            "Content-Type": "application/json",
        }

    @task(3)
    def get_current_user(self):
        if not self.access_token:
            return
        self.client.get(
            "/auth/v1/user",
            headers=self.auth_headers(),
            name="GET /auth user",
        )

    @task(5)
    def get_profile(self):
        if not self.access_token or not self.user_id:
            return
        self.client.get(
            f"/rest/v1/profiles?select=id,username,full_name,avatar_url&id=eq.{self.user_id}",
            headers=self.auth_headers(),
            name="GET /profiles self",
        )
```

Why this scenario matters
This test measures how Supabase handles:
- Login bursts
- JWT issuance
- Authenticated RLS-protected reads
- User profile lookups
Common findings
Developers often discover:
- Login endpoints become slower than expected during spikes
- Profile queries are slowed by RLS policy complexity
- Excessive auth validation requests create avoidable overhead
If your application has frequent login traffic, consider isolating auth load testing from general API traffic to understand where latency originates.
Scenario 2: Mixed read/write workload for a task management app
Now let’s simulate a more realistic application workload. Imagine a Supabase-backed task management app with a tasks table containing:
id, user_id, title, status, priority, due_date, created_at
This script logs in, fetches tasks, creates a task, and updates task status.
```python
from locust import HttpUser, task, between
import random
import uuid
from datetime import datetime, timedelta

TEST_USERS = [
    {"email": "worker1@example.com", "password": "TaskLoad123!"},
    {"email": "worker2@example.com", "password": "TaskLoad123!"},
]

SUPABASE_ANON_KEY = "your-anon-key"


class SupabaseTaskUser(HttpUser):
    wait_time = between(1, 4)
    host = "https://your-project-ref.supabase.co"

    def on_start(self):
        self.access_token = None
        self.user_id = None
        self.task_ids = []
        user = random.choice(TEST_USERS)
        response = self.client.post(
            "/auth/v1/token?grant_type=password",
            json={
                "email": user["email"],
                "password": user["password"],
            },
            headers={
                "apikey": SUPABASE_ANON_KEY,
                "Content-Type": "application/json",
            },
            name="POST /auth token",
        )
        if response.status_code == 200:
            payload = response.json()
            self.access_token = payload.get("access_token")
            self.user_id = payload.get("user", {}).get("id")

    def auth_headers(self):
        return {
            "apikey": SUPABASE_ANON_KEY,
            "Authorization": f"Bearer {self.access_token}",
            "Content-Type": "application/json",
            "Prefer": "return=representation",
        }

    @task(5)
    def list_open_tasks(self):
        if not self.access_token:
            return
        response = self.client.get(
            "/rest/v1/tasks?select=id,title,status,priority,due_date&status=eq.open&order=due_date.asc&limit=15",
            headers=self.auth_headers(),
            name="GET /tasks open",
        )
        if response.status_code == 200:
            tasks = response.json()
            self.task_ids = [t["id"] for t in tasks if "id" in t]

    @task(2)
    def create_task(self):
        if not self.access_token:
            return
        due_date = (datetime.utcnow() + timedelta(days=random.randint(1, 14))).isoformat()
        payload = {
            "title": f"Load test task {uuid.uuid4().hex[:8]}",
            "status": "open",
            "priority": random.choice(["low", "medium", "high"]),
            "due_date": due_date,
        }
        self.client.post(
            "/rest/v1/tasks",
            json=payload,
            headers=self.auth_headers(),
            name="POST /tasks create",
        )

    @task(2)
    def complete_task(self):
        if not self.access_token or not self.task_ids:
            return
        task_id = random.choice(self.task_ids)
        self.client.patch(
            f"/rest/v1/tasks?id=eq.{task_id}",
            json={"status": "completed"},
            headers=self.auth_headers(),
            name="PATCH /tasks complete",
        )
```

What this test reveals
This is a strong Supabase stress testing scenario because it combines:
- Authenticated reads
- Inserts
- Updates
- Query sorting
- RLS-protected user data
This helps uncover issues like:
- Slow writes from triggers or constraints
- Update lock contention
- Poor indexing on status or due_date
- RLS policy overhead during write operations
Scenario 3: Admin analytics with heavier queries
Some Supabase workloads are not user-facing CRUD flows but internal dashboards and analytics-style queries. These often create the biggest database pressure.
Suppose you expose an orders table through the REST API and an internal dashboard queries recent orders, high-value orders, and customer-specific history.
```python
from locust import HttpUser, task, between

SUPABASE_SERVICE_ROLE_KEY = "your-service-role-key"


class SupabaseAnalyticsUser(HttpUser):
    wait_time = between(2, 5)
    host = "https://your-project-ref.supabase.co"

    headers = {
        "apikey": SUPABASE_SERVICE_ROLE_KEY,
        "Authorization": f"Bearer {SUPABASE_SERVICE_ROLE_KEY}",
        "Content-Type": "application/json",
    }

    @task(4)
    def recent_orders(self):
        self.client.get(
            "/rest/v1/orders?select=id,customer_id,total,status,created_at&order=created_at.desc&limit=50",
            headers=self.headers,
            name="GET /orders recent",
        )

    @task(2)
    def high_value_orders(self):
        self.client.get(
            "/rest/v1/orders?select=id,customer_id,total,status&total=gte.500&status=in.(paid,shipped)&order=total.desc&limit=25",
            headers=self.headers,
            name="GET /orders high value",
        )

    @task(1)
    def customer_order_history(self):
        self.client.get(
            "/rest/v1/orders?select=id,total,status,created_at&customer_id=eq.c8f2c2c1-3c49-4d3b-9c8b-111122223333&order=created_at.desc&limit=100",
            headers=self.headers,
            name="GET /orders by customer",
        )
```

Important note
Only use the service_role key in trusted backend testing scenarios. Never simulate browser traffic with it. This scenario is useful when you want to load test internal APIs, admin jobs, or backend services that legitimately operate with elevated privileges.
Why this scenario matters
Analytics and dashboard traffic can be deceptively expensive because it often involves:
- Larger result sets
- Sorting on large tables
- Multi-condition filters
- Broad scans if indexes are missing
If these queries slow down under load, they can affect the entire Supabase database environment.
Analyzing Your Results
After running your Supabase load testing scenarios in LoadForge, focus on a few critical metrics.
Response time percentiles
Average response time is helpful, but percentiles matter more:
- P50 shows typical experience
- P95 shows what many users feel during load
- P99 reveals tail latency and backend instability
For Supabase performance testing, rising P95/P99 often points to:
- Slow SQL queries
- Lock contention
- RLS overhead
- Connection pool saturation
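If you export raw request timings from a test run, the percentiles are easy to compute yourself for offline analysis; a minimal nearest-rank sketch using only the standard library:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which pct% of
    samples fall. Assumes a non-empty list of numbers."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct * len(ordered) / 100))  # 1-based rank
    return ordered[rank - 1]

def summarize(latencies_ms):
    """Summarize a list of latency samples (milliseconds) into the
    percentiles discussed above."""
    return {
        "p50": percentile(latencies_ms, 50),
        "p95": percentile(latencies_ms, 95),
        "p99": percentile(latencies_ms, 99),
    }
```

Comparing these summaries across runs at different concurrency levels is a quick way to see tail latency growing before averages move.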
Error rates
Watch for:
- 429 Too Many Requests responses
- 500 or 503 server-side failures
- Auth failures from expired or invalid tokens
- Timeouts during write-heavy traffic
Even a low error rate can be significant if it appears only at concurrency thresholds that match production peaks.
Requests per second
Throughput tells you how much traffic your Supabase project can sustain, but don’t optimize for throughput alone. A very high request rate with unacceptable latency is not success.
Endpoint-level breakdowns
Use LoadForge’s real-time reporting to compare:
- Auth endpoints vs REST endpoints
- Reads vs writes
- Public vs authenticated traffic
- Simple queries vs analytics-heavy queries
This helps you isolate the exact workload causing degradation.
Concurrency thresholds
One of the biggest goals of stress testing is identifying when performance sharply degrades. Look for the point where:
- Response times suddenly spike
- Error rates begin rising
- Throughput plateaus
- Specific endpoints become unstable
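You can make "the point where it degrades" concrete by scanning a series of (concurrent users, p95 latency) measurements from stepped test runs for the first level where latency jumps past some multiple of the baseline. A rough sketch (the threshold factor is an arbitrary choice, not an industry standard):

```python
def find_degradation_point(series, factor=2.0):
    """series: list of (users, p95_ms) tuples from stepped load tests,
    in increasing-user order. Returns the first user count whose p95
    exceeds `factor` times the first measurement, or None if the
    system never degrades past that threshold."""
    if not series:
        return None
    baseline = series[0][1]
    for users, p95 in series:
        if p95 > factor * baseline:
            return users
    return None
```

Knowing this number per endpoint class (auth vs reads vs writes) tells you which service to optimize first and how much headroom you have over production peaks.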
LoadForge’s cloud-based infrastructure and global test locations make it easier to validate whether issues are backend-related or region-specific.
Performance Optimization Tips
If your Supabase load test results show bottlenecks, these are the first areas to review.
Add the right indexes
Supabase REST performance depends on PostgreSQL indexing. Add indexes for:
- Frequently filtered columns
- Sort columns like created_at
- Combined filter/sort access patterns
- Foreign keys used in user-scoped queries
For example, if you frequently query tasks by status and due_date, index those fields appropriately; a composite index covering both the filter and the sort column often matches the access pattern best.
Reduce payload size
Avoid select=* unless you truly need all columns. Returning fewer fields reduces:
- Query cost
- Network transfer
- Serialization overhead
Review RLS policies
Row-level security is powerful, but poorly designed policies can become expensive under load. Test with realistic authenticated traffic and inspect whether policy checks are slowing reads or writes.
Optimize write paths
If inserts and updates are slow, check for:
- Triggers doing too much work
- Expensive constraints
- Unnecessary synchronous operations
- Large transactional batches
Separate auth and data testing
Auth traffic and database traffic often fail in different ways. Measure them independently before combining them into mixed scenarios.
Use realistic pacing
Don’t create unrealistic no-wait loops unless you are intentionally performing stress testing. A realistic wait time produces more useful performance insights.
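Pacing also determines how many virtual users you need for a target request rate: with Locust-style wait_time = between(a, b), each user completes roughly one request per (average wait + average response time). A back-of-the-envelope estimator (our own model, not a LoadForge feature):

```python
def users_needed(target_rps, wait_min_s, wait_max_s, avg_response_s):
    """Estimate virtual users required to sustain target_rps, assuming
    each user loops: wait uniformly between wait_min_s and wait_max_s,
    then issue one request taking avg_response_s on average."""
    cycle = (wait_min_s + wait_max_s) / 2 + avg_response_s  # seconds per request
    return int(round(target_rps * cycle))
```

For example, sustaining 100 requests per second with between(1, 3) pacing and 500 ms responses needs on the order of 250 virtual users.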
Test from multiple regions
If your users are global, use LoadForge’s distributed testing to simulate traffic from multiple geographies and understand the effect of network distance on Supabase API performance.
Common Pitfalls to Avoid
Supabase load testing can produce misleading results if you make these common mistakes.
Testing with unrealistic queries
If your test uses only trivial lookups, you may conclude the system is healthy while real-world filtered or sorted queries remain slow.
Ignoring authentication behavior
Many applications spend significant time in login, token refresh, and authenticated profile access. Skipping auth traffic creates blind spots.
Using production credentials unsafely
Never use privileged keys casually in broad tests. Use service_role only for trusted backend scenarios, and keep it out of client-style simulations.
Load testing without enough data
A table with only a few hundred rows won’t reveal the same performance problems as one with millions. Seed realistic data volumes before testing.
Forgetting cleanup for write tests
If your scripts create tasks, orders, or other records, repeated runs can distort results over time. Plan data cleanup or use isolated test tenants.
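Because the write scenarios above tag created rows with a recognizable prefix ("Load test task ..."), cleanup can target exactly those rows. A hedged sketch of building the matching PostgREST delete request, using the like operator's * wildcard (run it with your own HTTP client and suitably privileged credentials):

```python
def cleanup_request(table, title_prefix="Load test task "):
    """Build the method and path for deleting rows whose title starts
    with the load-test prefix. PostgREST's like operator uses * as the
    wildcard; spaces must be percent-encoded in the URL."""
    pattern = f"{title_prefix}*".replace(" ", "%20")
    return "DELETE", f"/rest/v1/{table}?title=like.{pattern}"
```

Running this between test iterations keeps table sizes, and therefore results, comparable across runs.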
Not correlating latency with database design
Slow API performance in Supabase is often really slow SQL performance. If an endpoint degrades under load, inspect the underlying query patterns and schema design.
Overlooking ramp-up behavior
Jumping instantly from 0 to thousands of users can be useful for spike testing, but it doesn’t replace gradual ramp-up tests that reveal sustainable capacity.
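A simple way to structure those gradual tests is a stepped schedule: hold each user level long enough to collect stable percentiles before stepping up. A sketch of generating such a schedule (the numbers are illustrative, and the stages map onto whatever ramp configuration your test runner accepts):

```python
def ramp_schedule(start_users, max_users, step, hold_seconds):
    """Return (users, hold_seconds) stages stepping from start_users
    up to max_users in increments of `step`."""
    stages = []
    users = start_users
    while users <= max_users:
        stages.append((users, hold_seconds))
        users += step
    return stages
```

Pairing each stage's results with the degradation analysis from earlier sections turns "it broke under load" into "it degrades past N concurrent users."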
Conclusion
Supabase makes it fast to build modern applications, but that speed of development should be matched with disciplined load testing and performance testing. Whether you’re validating public REST APIs, authentication traffic, user-scoped CRUD operations, or internal analytics queries, realistic Supabase load tests help you find bottlenecks before they affect your users.
With LoadForge, you can run Locust-based Supabase tests at scale using distributed generators, real-time reporting, cloud-based infrastructure, global test locations, and CI/CD integration. That makes it easier to move from guesswork to measurable backend confidence.
If you’re ready to validate your Supabase scalability, start building these scripts in LoadForge and run your first test today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.