
Introduction
PostgreSQL is one of the most trusted relational databases for modern applications, powering everything from SaaS platforms and e-commerce stores to analytics pipelines and internal business systems. But even a well-designed PostgreSQL deployment can struggle when traffic increases, query patterns change, or too many application instances compete for limited database connections.
That’s why PostgreSQL load testing matters. With the right performance testing strategy, you can measure query latency, validate connection pool behavior, identify locking issues, and understand how your database scales under realistic concurrency. Whether you are preparing for a product launch, validating a migration, or troubleshooting slow queries, load testing PostgreSQL helps you find bottlenecks before they impact users.
In this guide, you’ll learn how to load test PostgreSQL-backed applications using LoadForge. Because LoadForge is built on Locust, all examples use Python-based Locust scripts. We’ll focus on realistic API-driven scenarios that exercise PostgreSQL through your application layer, which is the safest and most production-relevant way to test database performance. You’ll also see how LoadForge’s distributed testing, real-time reporting, cloud-based infrastructure, and CI/CD integration can help you run repeatable PostgreSQL performance tests at scale.
Prerequisites
Before you start load testing PostgreSQL with LoadForge, make sure you have the following:
- A web application or API that uses PostgreSQL as its backend
- Test environment access, ideally separate from production
- API authentication credentials, such as:
- JWT bearer tokens
- session-based login credentials
- API keys
- A representative PostgreSQL dataset
- Knowledge of your critical database-backed workflows, such as:
- user login
- product catalog search
- order creation
- reporting queries
- account dashboards
- LoadForge account access
You should also gather baseline PostgreSQL metrics from your environment, including:
- active connections
- slow query logs
- CPU and memory usage
- disk I/O
- lock waits
- replication lag, if applicable
- connection pool saturation from tools like PgBouncer or application pool metrics
A key best practice is to load test PostgreSQL through the application endpoints that generate database traffic, rather than opening direct database connections from your load generator. This approach better reflects real user behavior, includes authentication and business logic, and avoids unrealistic test patterns.
Understanding PostgreSQL Under Load
PostgreSQL handles concurrency extremely well, but database performance testing often reveals issues outside the database engine itself. When you load test a PostgreSQL-backed system, you are usually testing a combination of:
- application server throughput
- ORM query efficiency
- connection pooling behavior
- SQL execution time
- index effectiveness
- lock contention
- transaction duration
- caching layers
- disk and memory performance
Common PostgreSQL bottlenecks under load
Connection exhaustion
PostgreSQL has a finite number of concurrent connections. If your application opens too many connections, you may see symptoms like:
- errors such as "remaining connection slots are reserved"
- increased latency from connection waits
- pool timeouts in the application
This is especially common when application instances scale faster than connection pool limits.
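As a back-of-the-envelope illustration (every number and the helper function below are hypothetical), a per-instance pool size can be budgeted by subtracting reserved and non-application connections from `max_connections` and dividing the remainder across application instances:

```python
# Back-of-the-envelope sizing: divide the usable connection budget across
# application instances. All numbers are illustrative, not recommendations.
def per_instance_pool_size(max_connections: int,
                           superuser_reserved: int,
                           other_consumers: int,
                           app_instances: int) -> int:
    """Leave headroom for superuser access and other clients
    (migrations, monitoring, cron) before dividing what remains."""
    usable = max_connections - superuser_reserved - other_consumers
    return max(1, usable // app_instances)

# e.g. max_connections=100, 3 reserved for superusers, 7 for monitoring
# and cron jobs, 6 application instances:
print(per_instance_pool_size(100, 3, 7, 6))  # → 15
```

If your instances autoscale past the number you planned for, this budget is exactly what gets blown, which is why a central pooler such as PgBouncer is often the sturdier answer.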
Slow or unindexed queries
Queries that seem acceptable with low traffic can become serious performance problems during load testing. Full table scans, missing indexes, and inefficient joins often show up as latency spikes in read-heavy endpoints.
Lock contention
Write-heavy PostgreSQL workloads can suffer from row locks, deadlocks, or long-running transactions. During stress testing, endpoints that update shared records often reveal these issues.
Transaction bloat and long-running operations
Large transactions, report generation, or batch updates can consume resources and hold locks longer than expected. PostgreSQL performs best when transactions are short and focused.
I/O saturation
If your workload involves many writes, large sorts, or index updates, disk I/O can become a bottleneck before CPU usage appears high.
What to measure during PostgreSQL load testing
When running PostgreSQL performance testing, pay close attention to:
- response time percentiles for database-backed endpoints
- requests per second
- error rate
- connection pool wait time
- database CPU and memory
- query execution time
- lock waits and deadlocks
- throughput under increasing concurrency
LoadForge’s real-time reporting makes it easier to correlate rising response times with changes in request volume, while distributed testing lets you simulate geographically diverse traffic patterns against your PostgreSQL-backed application.
Writing Your First Load Test
Your first PostgreSQL load test should target a simple, read-heavy endpoint that performs a realistic SQL query behind the scenes. A common example is a product catalog API backed by PostgreSQL.
Example 1: Basic read workload against a PostgreSQL-backed API
This script simulates users browsing products, filtering by category, and looking up individual product details.
```python
from locust import HttpUser, task, between
import random


class PostgreSQLCatalogUser(HttpUser):
    wait_time = between(1, 3)

    categories = ["electronics", "books", "fitness", "kitchen", "gaming"]
    product_ids = [101, 102, 103, 104, 105, 201, 202, 203]

    def on_start(self):
        # Shared headers for every request this simulated user makes
        self.client.headers.update({
            "Accept": "application/json",
            "User-Agent": "LoadForge-PostgreSQL-Test"
        })

    @task(3)
    def browse_products(self):
        # Category listing: typically an indexed filter plus ORDER BY
        category = random.choice(self.categories)
        params = {
            "category": category,
            "limit": 20,
            "sort": "popularity"
        }
        self.client.get(
            "/api/v1/products",
            params=params,
            name="/api/v1/products?category=[category]"
        )

    @task(2)
    def search_products(self):
        # Text search: often ILIKE or full-text search in PostgreSQL
        search_terms = ["laptop", "wireless", "cookware", "yoga mat", "novel"]
        params = {
            "q": random.choice(search_terms),
            "limit": 10
        }
        self.client.get(
            "/api/v1/products/search",
            params=params,
            name="/api/v1/products/search"
        )

    @task(1)
    def view_product_detail(self):
        # Primary-key lookup; grouped under one name so stats aggregate
        product_id = random.choice(self.product_ids)
        self.client.get(
            f"/api/v1/products/{product_id}",
            name="/api/v1/products/[id]"
        )
```

What this test measures
This basic load test is useful for measuring:
- PostgreSQL read query performance
- indexing effectiveness on category and search columns
- latency consistency for common SELECT queries
- how the application handles moderate concurrent read traffic
If your /api/v1/products/search endpoint performs ILIKE searches or full-text search in PostgreSQL, this test can quickly reveal whether indexes such as GIN or trigram indexes are missing.
Running the test in LoadForge
In LoadForge, configure your test with:
- a realistic host, such as https://staging-api.example.com
- a gradual user ramp-up
- enough duration to observe sustained database behavior
- optional distributed load generators if your traffic is global
A good starting point:
- 25 users ramping to 200 users
- duration of 10 to 15 minutes
Advanced Load Testing Scenarios
Once your basic PostgreSQL load testing is working, move on to more realistic workflows that involve authentication, writes, transactions, and reporting queries.
Example 2: Authenticated user workflow with PostgreSQL-backed account data
This scenario simulates a typical login flow followed by account and order history queries. It is useful for testing indexed lookups, session handling, and user-specific PostgreSQL queries.
```python
from locust import HttpUser, task, between
import random


class AuthenticatedPostgreSQLUser(HttpUser):
    wait_time = between(1, 2)

    # Dedicated load-test accounts seeded in the test database
    credentials = [
        {"email": "loadtest1@example.com", "password": "TestPass123!"},
        {"email": "loadtest2@example.com", "password": "TestPass123!"},
        {"email": "loadtest3@example.com", "password": "TestPass123!"},
    ]

    def on_start(self):
        # Log in once per simulated user and reuse the bearer token
        user = random.choice(self.credentials)
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": user["email"],
                "password": user["password"]
            },
            headers={"Content-Type": "application/json"},
            name="/api/v1/auth/login"
        )
        if response.status_code == 200:
            token = response.json().get("access_token")
            self.client.headers.update({
                "Authorization": f"Bearer {token}",
                "Accept": "application/json"
            })

    @task(3)
    def get_profile(self):
        self.client.get("/api/v1/account/profile",
                        name="/api/v1/account/profile")

    @task(2)
    def get_order_history(self):
        # Filtered order lookup: exercises user_id/status indexes
        params = {
            "limit": 25,
            "status": random.choice(["completed", "shipped", "processing"])
        }
        self.client.get("/api/v1/account/orders", params=params,
                        name="/api/v1/account/orders")

    @task(1)
    def get_saved_addresses(self):
        self.client.get("/api/v1/account/addresses",
                        name="/api/v1/account/addresses")
```

Why this matters for PostgreSQL
This test is especially effective for measuring:
- login query performance against the users table
- indexed lookups by email or user ID
- performance of joins between users, orders, and addresses
- application connection pool behavior during authenticated sessions
If response times increase significantly during /api/v1/account/orders, you may be dealing with:
- missing indexes on user_id, status, or created_at
- N+1 ORM queries
- inefficient joins
- oversized JSON payload generation
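The N+1 pattern is easy to see in miniature. The sketch below uses SQLite from the Python standard library purely as a stand-in for PostgreSQL; the tables and rows are made up, and what matters is the access pattern: one query per parent row versus a single join.

```python
import sqlite3

# SQLite (standard library) stands in for PostgreSQL here purely to
# demonstrate the query pattern; schema and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'a@example.com'), (2, 'b@example.com');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

# N+1: one query for the users, then one more query per user
queries = 1
for user_id, _ in conn.execute("SELECT id, email FROM users").fetchall():
    conn.execute("SELECT id, total FROM orders WHERE user_id = ?",
                 (user_id,)).fetchall()
    queries += 1
print(queries)  # → 3 queries for just 2 users

# Single round trip: join once, group rows in the application
joined = conn.execute("""
    SELECT u.id, u.email, o.id, o.total
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
""").fetchall()
print(len(joined))  # → 3 rows from 1 query
```

With 2 users the difference is trivial; with 10,000 users per page load it is the difference between one round trip and 10,001, which is why N+1 issues usually only surface under load.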
Example 3: Write-heavy transaction workflow for orders and inventory
Read tests are important, but PostgreSQL load testing should also include write-heavy workflows. This example simulates cart checkout, which typically triggers inserts, updates, and transactional consistency checks.
```python
from locust import HttpUser, task, between
import random
import uuid


class CheckoutUser(HttpUser):
    wait_time = between(2, 5)

    def on_start(self):
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": "buyer-loadtest@example.com",
                "password": "TestPass123!"
            },
            headers={"Content-Type": "application/json"},
            name="/api/v1/auth/login"
        )
        if response.status_code == 200:
            token = response.json().get("access_token")
            self.client.headers.update({
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
                "Accept": "application/json"
            })

    @task(2)
    def add_item_to_cart(self):
        # Insert or update against cart tables
        payload = {
            "product_id": random.choice([101, 104, 202, 305, 411]),
            "quantity": random.randint(1, 3)
        }
        self.client.post("/api/v1/cart/items", json=payload,
                         name="/api/v1/cart/items")

    @task(1)
    def checkout(self):
        # Transactional write path: orders, order_items, inventory
        payload = {
            "payment_method": "credit_card",
            "shipping_address_id": 9001,
            "billing_address_id": 9001,
            # Unique key so retries do not create duplicate orders
            "idempotency_key": str(uuid.uuid4())
        }
        self.client.post("/api/v1/orders/checkout", json=payload,
                         name="/api/v1/orders/checkout")
```

What this workflow tests in PostgreSQL
This scenario often drives the most meaningful PostgreSQL stress testing results because it exercises:
- transactional inserts into orders and order_items
- inventory updates
- payment audit logging
- row-level locking
- rollback behavior during contention
- consistency under concurrent writes
If checkout latency rises sharply under load, investigate:
- lock contention on inventory rows
- long-running transactions
- missing indexes on order lookup tables
- synchronous downstream operations inside the transaction
- insufficient connection pool sizing
Example 4: Database-heavy analytics and reporting endpoints
Many PostgreSQL systems struggle most with reporting endpoints, especially when large aggregations run alongside transactional traffic. This scenario simulates expensive dashboard queries.
```python
from locust import HttpUser, task, between
import random
from datetime import datetime, timedelta


class ReportingUser(HttpUser):
    wait_time = between(3, 6)

    def on_start(self):
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": "analyst@example.com",
                "password": "AnalyticsPass123!"
            },
            headers={"Content-Type": "application/json"},
            name="/api/v1/auth/login"
        )
        if response.status_code == 200:
            token = response.json().get("access_token")
            self.client.headers.update({
                "Authorization": f"Bearer {token}",
                "Accept": "application/json"
            })

    @task(2)
    def sales_summary_report(self):
        # Rolling 30-day window, the kind of range a dashboard requests
        end_date = datetime.utcnow().date()
        start_date = end_date - timedelta(days=30)
        params = {
            "start_date": start_date.isoformat(),
            "end_date": end_date.isoformat(),
            "group_by": random.choice(["day", "week", "category"]),
            "region": random.choice(["us", "eu", "apac"])
        }
        self.client.get("/api/v1/reports/sales-summary", params=params,
                        name="/api/v1/reports/sales-summary")

    @task(1)
    def top_customers_report(self):
        # Aggregation plus ORDER BY ... LIMIT across large tables
        params = {
            "limit": 50,
            "period": random.choice(["7d", "30d", "90d"])
        }
        self.client.get("/api/v1/reports/top-customers", params=params,
                        name="/api/v1/reports/top-customers")
```

Why reporting tests are important
These reporting endpoints often generate:
- large aggregations
- GROUP BY operations
- joins across orders, customers, and products
- sorting and pagination
- temporary file usage on disk
A PostgreSQL performance test that mixes reporting traffic with transactional traffic can reveal whether heavy analytical queries are starving user-facing operations. In many environments, this is the moment when teams realize they need:
- read replicas
- materialized views
- query caching
- background report generation
- workload isolation
Analyzing Your Results
After running your PostgreSQL load test in LoadForge, focus on more than just average response time. Database systems often fail gradually, so percentile analysis matters.
Key metrics to inspect
Response time percentiles
Look at p50, p95, and p99 response times for each endpoint:
- low p50 with high p95 often indicates lock contention or occasional slow queries
- consistently rising p95 and p99 during ramp-up may suggest pool saturation or query inefficiency
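To make the percentile comparison concrete, here is a small sketch using Python's statistics module on made-up latency samples. Note how two slow outliers leave p50 untouched while p95 and p99 jump, which is exactly the lock-contention signature described above:

```python
import statistics

# Illustrative latency samples in milliseconds (made up): mostly ~45 ms
# with two slow outliers -- the classic "healthy median, ugly tail" shape.
samples_ms = [42, 45, 44, 43, 48, 47, 46, 44, 300, 45,
              43, 44, 46, 45, 47, 44, 43, 45, 46, 280]

# statistics.quantiles with n=100 returns the 1st through 99th percentiles
pcts = statistics.quantiles(samples_ms, n=100, method="inclusive")
p50, p95, p99 = pcts[49], pcts[94], pcts[98]

print(f"p50={p50:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms")
# → p50=45ms p95=281ms p99=296ms
```

An average over the same samples would report roughly 67 ms and hide the tail entirely, which is why percentile charts are the right lens for this analysis.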
Error rates
Watch for application errors such as:
- 500 Internal Server Error
- 503 Service Unavailable
- timeout responses
- authentication failures caused by overloaded backend services
These often correlate with PostgreSQL issues like connection exhaustion or long-running transactions.
Throughput vs latency
A healthy PostgreSQL-backed application should maintain predictable throughput as concurrency rises. If requests per second flatten while latency climbs, your database or connection pool may be saturated.
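One way to spot that saturation point is to scan ramp results for the step where throughput gains stall while tail latency keeps climbing. The numbers and threshold below are illustrative:

```python
# Hypothetical ramp results: (concurrent users, requests/sec, p95 ms).
ramp = [(25, 240, 60), (50, 470, 65), (100, 900, 80),
        (150, 1100, 140), (200, 1120, 320)]

def saturation_point(results, min_gain=0.05):
    """Return the user count at which throughput growth falls below
    min_gain (5% by default) while p95 keeps rising, or None if never."""
    for (u1, rps1, p1), (u2, rps2, p2) in zip(results, results[1:]):
        if (rps2 - rps1) / rps1 < min_gain and p2 > p1:
            return u2
    return None

print(saturation_point(ramp))  # → 200
```

In this fabricated ramp, going from 150 to 200 users adds under 2% throughput while p95 more than doubles: the system is doing no more useful work, just queueing.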
Endpoint-specific degradation
Compare read endpoints and write endpoints separately:
- if reads degrade first, investigate indexes and query plans
- if writes degrade first, investigate locks and transaction scope
- if all endpoints degrade together, check connection pooling, CPU, and I/O
Correlating LoadForge with PostgreSQL metrics
The best PostgreSQL load testing results come from combining LoadForge metrics with database observability. During the test, compare LoadForge charts with:
- pg_stat_activity
- pg_stat_statements
- slow query logs
- lock monitoring
- CPU, memory, and disk metrics
- PgBouncer or application pool metrics
For example:
- rising /api/v1/orders/checkout latency plus increased lock waits usually means write contention
- slower /api/v1/reports/sales-summary responses plus high disk I/O may indicate sort spills or large scans
- growing login latency plus high active connections often points to connection pool pressure
LoadForge’s real-time reporting is especially useful here because you can see exactly when latency shifts as user load increases.
Performance Optimization Tips
Once your PostgreSQL load testing uncovers bottlenecks, these optimizations often deliver immediate gains.
Index the queries you actually use
Review the SQL behind your slowest endpoints and ensure the right indexes exist for:
- WHERE clauses
- joins
- sorting
- pagination
- full-text search
Avoid adding indexes blindly, since too many indexes can slow writes.
Use connection pooling correctly
PostgreSQL does not benefit from unlimited direct connections. Use a connection pooler such as PgBouncer or tune your application pool to keep connection counts stable under load.
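As a starting point, a minimal PgBouncer setup might look like the fragment below. The database name, host, and every pool number are illustrative and should be tuned against your own max_connections budget:

```ini
; illustrative pgbouncer.ini fragment -- starting points, not recommendations
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction        ; reuse server connections between transactions
default_pool_size = 20         ; server connections per user/database pair
max_client_conn = 500          ; client connections PgBouncer will accept
reserve_pool_size = 5          ; extra connections for brief spikes
```

Note that transaction pooling breaks session-level features such as SET, LISTEN/NOTIFY, and session advisory locks, so verify your application is compatible before adopting it.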
Keep transactions short
Long transactions increase lock duration and reduce concurrency. Move non-essential work, such as notifications or analytics events, out of the critical transaction path.
Optimize ORM usage
Many PostgreSQL performance issues come from the application layer:
- N+1 queries
- overly chatty data access
- unnecessary eager loading
- repeated count queries
Profile generated SQL, not just endpoint latency.
Separate transactional and reporting workloads
If heavy reports affect user-facing traffic, consider:
- read replicas
- materialized views
- precomputed aggregates
- asynchronous exports
Test with realistic data volumes
A PostgreSQL query that performs well with 10,000 rows may fail with 10 million. Always load test with production-like data distribution and cardinality.
Common Pitfalls to Avoid
PostgreSQL load testing is extremely valuable, but only if the test design is realistic.
Testing the database directly instead of the application
Opening raw PostgreSQL connections from Locust may seem tempting, but it bypasses authentication, business logic, caching, and application connection pools. In most cases, API-level load testing is more representative.
Using unrealistic test data
If every virtual user searches for the same product or updates the same record, you may create artificial lock hotspots or cache effects that do not reflect real usage.
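One way to avoid both extremes, uniform access and a single hot record, is to weight your test data so a few items dominate, as real traffic usually does. This sketch draws product IDs with Zipf-like weights (the catalog size and weighting are made up for illustration):

```python
import random

# Hypothetical catalog of 1,000 products with Zipf-like popularity:
# weight 1/rank, so a handful of "hot" products dominate traffic.
product_ids = list(range(1, 1001))
weights = [1 / rank for rank in range(1, 1001)]

picks = random.choices(product_ids, weights=weights, k=10_000)
top_share = sum(1 for p in picks if p <= 10) / len(picks)
print(f"top 10 products receive ~{top_share:.0%} of requests")
# typically prints a share near 39%
```

A distribution like this exercises both the cache-friendly hot path and the cold long tail, instead of testing only one of them.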
Ignoring ramp-up behavior
A sudden spike to maximum concurrency can be useful for stress testing, but it does not replace a gradual load test. PostgreSQL issues such as pool saturation and lock buildup often appear during ramp-up.
Looking only at average response time
Averages hide the tail latency that users actually feel. Always inspect p95 and p99.
Running tests without database observability
If you only look at HTTP metrics, you may know something is slow without knowing why. Pair LoadForge with PostgreSQL monitoring for meaningful diagnosis.
Testing with tiny datasets
Small test databases often mask missing indexes, poor query plans, and pagination issues. Use realistic row counts whenever possible.
Forgetting background jobs and scheduled tasks
Your PostgreSQL environment may already be handling cron jobs, ETL jobs, or reporting queries. Include these in your planning, because they can materially affect load testing results.
Conclusion
PostgreSQL load testing is essential for understanding how your application behaves under real-world concurrency. By testing realistic read, write, authentication, and reporting workflows, you can measure query performance, validate connection limits, uncover lock contention, and improve database scalability before problems reach production.
LoadForge makes this process much easier with Locust-based scripting, cloud-based infrastructure, distributed testing, real-time reporting, global test locations, and CI/CD integration for repeatable performance testing. If you want to confidently load test PostgreSQL and identify bottlenecks in your database-backed application, try LoadForge and start building tests like the ones in this guide.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.