
Introduction
CockroachDB is built for resilience, scale, and strong consistency, which makes it a popular choice for modern distributed applications. But even a database designed for horizontal scalability can become a bottleneck when transaction volume spikes, query patterns change, or application traffic becomes uneven across regions.
That’s why CockroachDB load testing matters. A proper load testing and performance testing strategy helps you understand how your application behaves when backed by CockroachDB under realistic concurrency. You can measure transaction latency, identify slow SQL paths, validate connection pooling behavior, and evaluate how well your system scales during stress testing.
With LoadForge, you can simulate distributed user traffic from global test locations using Locust-based Python scripts, then analyze real-time reporting to spot latency regressions and throughput limits quickly. In this guide, you’ll learn how to load test applications that rely on CockroachDB, with practical examples that model realistic API traffic patterns rather than synthetic database calls in isolation.
Prerequisites
Before you start load testing CockroachDB with LoadForge, make sure you have:
- A CockroachDB-backed application or API environment to test
- A staging or pre-production environment that closely matches production
- Test data seeded into CockroachDB, such as users, products, carts, or orders
- Authentication credentials for your application endpoints
- Knowledge of the key SQL-backed workflows you want to test
- LoadForge access to run distributed load testing from the cloud
Because LoadForge uses Locust under the hood, your scripts should model real user behavior against your application layer. In most production architectures, CockroachDB is not directly exposed to end users. Instead, users interact with REST or GraphQL APIs, and those services execute SQL queries and transactions against CockroachDB. This is the most realistic and safest way to perform performance testing.
You should also identify:
- Critical read-heavy endpoints
- Write-heavy transactional flows
- Multi-step business transactions
- Bulk import or file-processing workflows
- Region-sensitive or latency-sensitive operations
For CockroachDB specifically, it’s useful to know whether your workload includes:
- Single-row point lookups
- Secondary index scans
- Distributed joins
- Multi-statement transactions
- Serializable retry scenarios
- High-contention updates on the same rows
Understanding CockroachDB Under Load
CockroachDB behaves differently from a traditional single-node relational database because it is distributed by design. That gives it strong availability and scaling benefits, but it also introduces specific performance characteristics you should understand during load testing.
What happens during concurrent traffic
When many users interact with your application at once, your API tier sends a large number of SQL statements and transactions to CockroachDB. Under load, several factors influence performance:
- Transaction coordination across nodes
- Contention on frequently updated rows
- Index efficiency
- Network latency between application and database nodes
- Transaction retries due to serialization conflicts
- Connection pool saturation in the application layer
CockroachDB uses serializable isolation, which is excellent for correctness but can increase retry frequency under write contention. That means a workflow that looks fast at low concurrency may degrade sharply during stress testing if many requests update the same records.
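When CockroachDB reports a serialization failure (SQLSTATE 40001), the client is expected to retry the whole transaction. The shape of that retry loop matters under load: retrying immediately in a tight loop can make contention worse. Here is a minimal sketch of the pattern using jittered exponential backoff; RetryableError and run_with_retries are hypothetical stand-ins for your driver's serialization-failure exception and your own wrapper, not CockroachDB APIs:

```python
import random
import time

class RetryableError(Exception):
    """Stand-in for a driver's serialization-failure error (SQLSTATE 40001)."""

def run_with_retries(txn_fn, max_attempts=5):
    """Run a transaction callable, retrying with jittered exponential
    backoff whenever it raises RetryableError. Re-raises after the
    final attempt so callers can surface a real failure."""
    for attempt in range(max_attempts):
        try:
            return txn_fn()
        except RetryableError:
            if attempt == max_attempts - 1:
                raise
            # Back off 0-10ms, 0-20ms, 0-40ms... to spread retries out
            # instead of having every conflicting client retry at once.
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))
```

During stress testing, counting how often this loop fires is often more revealing than raw latency: a rising retry rate is an early signal of a contention hotspot.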
Common CockroachDB bottlenecks
Here are the most common bottlenecks you’ll uncover with CockroachDB load testing:
Hot rows and write contention
If many users update the same inventory record, account balance, or counter, transaction retries can increase latency significantly.
Inefficient indexes
A query that performs well with a small dataset may become slow when the optimizer has to scan large ranges or perform distributed reads.
Large multi-region round trips
In geo-distributed deployments, transaction latency can increase if reads and writes cross regions frequently.
Long-running transactions
Transactions that touch many rows or perform multiple dependent operations can hold resources longer and increase contention.
Application connection handling
Sometimes the issue is not CockroachDB itself, but the application tier using too many connections, retrying poorly, or failing to batch operations efficiently.
A strong load testing plan should therefore target the full application workflow and not just isolated queries.
Writing Your First Load Test
Let’s start with a basic read-heavy scenario for a typical ecommerce API backed by CockroachDB. This script simulates users browsing products and viewing product details. These are common read operations that exercise indexed lookups and paginated queries.
Basic product browsing test
```python
from locust import HttpUser, task, between
import random

class CockroachDBBrowseUser(HttpUser):
    wait_time = between(1, 3)
    host = "https://api.shop.example.com"

    product_ids = [
        "prod_1001", "prod_1002", "prod_1003", "prod_1004", "prod_1005",
        "prod_1006", "prod_1007", "prod_1008", "prod_1009", "prod_1010"
    ]
    categories = ["electronics", "books", "fitness", "home", "gaming"]

    @task(3)
    def list_products(self):
        category = random.choice(self.categories)
        params = {
            "category": category,
            "limit": 24,
            "sort": "popularity",
            "page": random.randint(1, 5)
        }
        with self.client.get("/v1/products", params=params, name="/v1/products", catch_response=True) as response:
            if response.status_code == 200 and "items" in response.text:
                response.success()
            else:
                response.failure(f"Unexpected response: {response.status_code}")

    @task(2)
    def product_detail(self):
        product_id = random.choice(self.product_ids)
        with self.client.get(f"/v1/products/{product_id}", name="/v1/products/[id]", catch_response=True) as response:
            if response.status_code == 200 and product_id in response.text:
                response.success()
            else:
                response.failure(f"Failed to fetch product {product_id}")
```
What this test measures
This basic Locust script helps you evaluate:
- Read throughput against CockroachDB-backed endpoints
- Product listing query latency under concurrency
- Product detail lookup performance
- Impact of pagination and filtering on SQL execution
In CockroachDB, the /v1/products endpoint may translate into a query using category filters, sorting, and pagination. If the right indexes are missing, latency will rise quickly as concurrency increases. The /v1/products/{id} endpoint should usually be fast if backed by a primary key or well-designed secondary index.
Why this is a good first test
A simple read-heavy test gives you a baseline before adding more complex transaction flows. It helps answer questions like:
- How many requests per second can the application sustain?
- Are p95 and p99 latencies acceptable?
- Do product list queries slow down before product detail lookups?
- Does horizontal scaling improve throughput consistently?
Once you understand your read baseline, you can move on to authenticated and write-heavy workflows.
Advanced Load Testing Scenarios
To properly load test CockroachDB, you should simulate realistic application behavior, including login flows, transactional writes, and contention-prone operations.
Scenario 1: Authenticated cart and checkout workflow
This example simulates a user logging in, browsing products, adding items to a cart, and checking out. This is a much better representation of real CockroachDB load because it includes both reads and writes, plus a multi-step transaction.
```python
from locust import HttpUser, task, between
import random
import uuid

class CockroachDBCheckoutUser(HttpUser):
    wait_time = between(1, 2)
    host = "https://api.shop.example.com"

    def on_start(self):
        self.email = f"loadtest_{uuid.uuid4().hex[:8]}@example.com"
        self.password = "Str0ngP@ssword!"
        self.token = None
        self.cart_id = None

        self.client.post("/v1/users/register", json={
            "email": self.email,
            "password": self.password,
            "first_name": "Load",
            "last_name": "Tester"
        }, name="/v1/users/register")

        response = self.client.post("/v1/auth/login", json={
            "email": self.email,
            "password": self.password
        }, name="/v1/auth/login")

        if response.status_code == 200:
            self.token = response.json().get("access_token")
            headers = self.auth_headers()
            cart_response = self.client.post("/v1/carts", json={}, headers=headers, name="/v1/carts")
            if cart_response.status_code in (200, 201):
                self.cart_id = cart_response.json().get("cart_id")

    def auth_headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Content-Type": "application/json"
        }

    @task(3)
    def browse_and_add_to_cart(self):
        headers = self.auth_headers()
        product_response = self.client.get(
            "/v1/products",
            params={"category": "electronics", "limit": 10, "sort": "newest"},
            headers=headers,
            name="/v1/products"
        )
        if product_response.status_code != 200:
            return
        items = product_response.json().get("items", [])
        if not items:
            return
        product = random.choice(items)
        quantity = random.randint(1, 3)
        self.client.post(
            f"/v1/carts/{self.cart_id}/items",
            json={
                "product_id": product["id"],
                "quantity": quantity
            },
            headers=headers,
            name="/v1/carts/[id]/items"
        )

    @task(1)
    def checkout(self):
        headers = self.auth_headers()
        with self.client.post(
            f"/v1/carts/{self.cart_id}/checkout",
            json={
                "payment_method": "card_tok_visa_4242",
                "shipping_address": {
                    "line1": "100 Market Street",
                    "city": "San Francisco",
                    "state": "CA",
                    "postal_code": "94105",
                    "country": "US"
                }
            },
            headers=headers,
            name="/v1/carts/[id]/checkout",
            catch_response=True
        ) as response:
            if response.status_code in (200, 201):
                response.success()
            elif response.status_code == 409:
                response.failure("Transaction conflict or inventory issue")
            else:
                response.failure(f"Checkout failed: {response.status_code}")
```
Why this matters for CockroachDB
This workflow often triggers:
- Inventory checks
- Cart row inserts and updates
- Order creation transactions
- Payment and fulfillment state changes
- Serializable transaction retry behavior
If your checkout transaction touches inventory, orders, payments, and cart tables in one transaction, CockroachDB may show increased latency under concurrency, especially if many users try to purchase the same popular item. This is exactly the kind of stress testing scenario that reveals contention hotspots.
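If you want to make this "popular item" scenario deliberate rather than accidental, you can skew product selection in your Locust script so a small slice of the catalog receives most of the traffic. A small sketch of that idea; the function name, fractions, and weights are illustrative, not part of Locust:

```python
import random

def pick_product(product_ids, hot_fraction=0.2, hot_weight=0.8):
    """Deliberately skew traffic: hot_weight of requests hit the first
    hot_fraction of products, modeling a few popular items that many
    checkouts will contend on."""
    hot_count = max(1, int(len(product_ids) * hot_fraction))
    if random.random() < hot_weight:
        return random.choice(product_ids[:hot_count])
    return random.choice(product_ids[hot_count:])
```

Swapping this in for a plain random.choice gives you a controlled contention test: compare checkout p99 and 409 rates with and without the skew to see how much hot-row contention costs you.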
Scenario 2: Banking-style money transfer test
CockroachDB is often used for financial or ledger-style systems because of its transactional guarantees. This test simulates authenticated account balance checks and fund transfers between accounts.
```python
from locust import HttpUser, task, between
import random

class CockroachDBTransferUser(HttpUser):
    wait_time = between(1, 2)
    host = "https://bank-api.example.com"

    def on_start(self):
        login_response = self.client.post("/api/v1/auth/token", json={
            "username": "loadtest_user",
            "password": "SuperSecure123!"
        }, name="/api/v1/auth/token")
        self.token = login_response.json().get("access_token")
        self.account_ids = ["acct_2001", "acct_2002", "acct_2003", "acct_2004", "acct_2005"]

    def headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Content-Type": "application/json"
        }

    @task(4)
    def get_account_balance(self):
        account_id = random.choice(self.account_ids)
        self.client.get(
            f"/api/v1/accounts/{account_id}",
            headers=self.headers(),
            name="/api/v1/accounts/[id]"
        )

    @task(2)
    def transfer_funds(self):
        from_account, to_account = random.sample(self.account_ids, 2)
        amount = round(random.uniform(10.00, 250.00), 2)
        with self.client.post(
            "/api/v1/transfers",
            json={
                "from_account_id": from_account,
                "to_account_id": to_account,
                "amount": amount,
                "currency": "USD",
                "reference": "Load test transfer"
            },
            headers=self.headers(),
            name="/api/v1/transfers",
            catch_response=True
        ) as response:
            if response.status_code in (200, 201):
                response.success()
            elif response.status_code == 409:
                response.failure("Serializable retry or insufficient funds conflict")
            else:
                response.failure(f"Transfer failed: {response.status_code}")
```
What this uncovers
This kind of performance testing is excellent for detecting:
- Write contention on account rows
- Transaction retries under concurrent updates
- Latency spikes for balance reads during heavy write load
- Application retry logic weaknesses
CockroachDB can handle these workloads well, but the schema and access patterns matter. If many virtual users repeatedly transfer funds involving the same small set of accounts, you may create artificial hotspots. That can be useful during stress testing if your goal is to validate worst-case contention handling.
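If your goal is realistic load rather than worst-case contention, one option is to give each virtual user its own slice of the account space so transfers spread across many rows. A minimal sketch of that partitioning, with hypothetical helper names and an illustrative slice size:

```python
import random

def accounts_for_user(user_index, all_accounts, slice_size=50):
    """Assign each virtual user a distinct, contiguous slice of accounts
    so concurrent transfers don't all target the same few rows."""
    start = (user_index * slice_size) % len(all_accounts)
    return [all_accounts[(start + i) % len(all_accounts)] for i in range(slice_size)]

def pick_transfer_pair(user_index, all_accounts):
    """Pick a (from_account, to_account) pair from this user's own slice."""
    return tuple(random.sample(accounts_for_user(user_index, all_accounts), 2))
```

Running the same test twice, once with shared accounts and once with partitioned slices, is a quick way to separate genuine database limits from contention you introduced yourself.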
Scenario 3: Bulk import and job status polling
Another realistic CockroachDB workload is ingesting large datasets through an API, then polling for processing status. This often maps to batch inserts, staging tables, or asynchronous jobs backed by CockroachDB.
```python
from locust import HttpUser, task, between
import random
import uuid

class CockroachDBImportUser(HttpUser):
    wait_time = between(2, 5)
    host = "https://admin-api.example.com"

    def on_start(self):
        response = self.client.post("/v1/admin/login", json={
            "email": "ops-team@example.com",
            "password": "AdminPassw0rd!"
        }, name="/v1/admin/login")
        self.token = response.json().get("access_token")
        self.job_ids = []

    def headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Content-Type": "application/json"
        }

    @task(1)
    def upload_inventory_batch(self):
        sku_prefix = uuid.uuid4().hex[:6]
        payload = {
            "source": "supplier_feed",
            "warehouse_id": "wh_us_west_1",
            "items": [
                {
                    "sku": f"{sku_prefix}-{i}",
                    "name": f"Load Test Item {i}",
                    "quantity": random.randint(10, 500),
                    "price": round(random.uniform(5.99, 299.99), 2),
                    "category": random.choice(["electronics", "home", "fitness"]),
                    "updated_at": "2026-04-06T12:00:00Z"
                }
                for i in range(100)
            ]
        }
        response = self.client.post(
            "/v1/inventory/imports",
            json=payload,
            headers=self.headers(),
            name="/v1/inventory/imports"
        )
        if response.status_code in (200, 201, 202):
            job_id = response.json().get("job_id")
            if job_id:
                self.job_ids.append(job_id)

    @task(3)
    def poll_import_status(self):
        if not self.job_ids:
            return
        job_id = random.choice(self.job_ids)
        self.client.get(
            f"/v1/inventory/imports/{job_id}",
            headers=self.headers(),
            name="/v1/inventory/imports/[job_id]"
        )
```
Why this scenario is important
Bulk imports can stress CockroachDB in very different ways than transactional APIs:
- High insert volume
- Secondary index maintenance cost
- Background job tracking writes
- Read-after-write polling pressure
- Potential lock or contention amplification in downstream processing
This is especially useful when evaluating horizontal scalability. With LoadForge’s cloud-based infrastructure and distributed testing, you can simulate multiple admin or partner systems importing data concurrently from different regions.
Analyzing Your Results
Once your LoadForge test is running, focus on a few key metrics to understand CockroachDB performance under load.
Response time percentiles
Average latency is helpful, but p95 and p99 are much more important. CockroachDB workloads often show acceptable averages while tail latency grows dramatically during contention or distributed transaction coordination.
Watch for:
- /v1/products p95 rising steadily as users increase
- /v1/carts/[id]/checkout p99 spikes under write-heavy load
- /api/v1/transfers latency jumps during contention
- Polling endpoints remaining fast while write endpoints degrade
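If you export raw response times from a run and want to sanity-check the percentiles yourself, the Python standard library is enough. A quick sketch, assuming a flat list of latencies in milliseconds:

```python
import statistics

def latency_percentiles(samples_ms):
    """Compute p50/p95/p99 from raw response times.
    quantiles(n=100) returns 99 cut points; cut k-1 is the k-th percentile."""
    cuts = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
```

Comparing p50 against p99 per endpoint is usually the fastest way to spot the tail-latency growth described above.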
Error rates
Pay close attention to:
- HTTP 409 conflicts
- HTTP 500 application errors
- Authentication failures caused by token handling bugs
- Timeouts from overloaded API or database tiers
Not every 409 is necessarily bad in CockroachDB-backed systems. Some applications surface retry-related or conflict-related responses explicitly. What matters is whether the rate becomes unacceptable and whether the application handles retries correctly.
Throughput trends
Look at requests per second over time. If throughput plateaus while latency rises, you may have hit a database, application, or connection-pool bottleneck.
Endpoint comparison
Compare read-heavy and write-heavy endpoints separately:
- Fast reads with slow writes often indicate transaction contention
- Slow list endpoints may indicate missing indexes or inefficient pagination
- Slow imports may indicate batch processing inefficiencies or index overhead
Correlate with CockroachDB observability
LoadForge gives you real-time reporting, but you should also correlate results with CockroachDB metrics such as:
- SQL statement latency
- Contention events
- Transaction retries
- CPU and memory usage
- Node-level distribution
- Range hotspots
This combined view helps you determine whether the bottleneck is in SQL execution, transaction coordination, or the application layer.
Performance Optimization Tips
After load testing CockroachDB, these are some of the most effective improvements to consider.
Optimize indexes for real query patterns
Use your load testing results to identify slow endpoints, then inspect the SQL behind them. Ensure filters, joins, and sort columns are supported by appropriate indexes.
Reduce contention on hot rows
If a small number of rows are updated frequently, redesign the workflow. For example:
- Avoid centralized counters
- Partition high-write entities
- Use append-only event models where possible
Keep transactions short
Shorter transactions generally perform better under concurrency and reduce the chance of retries.
Improve retry handling in the application
CockroachDB’s serializable model may require retries. Your application should handle these gracefully and efficiently.
Tune connection pools
Too many open connections can hurt performance rather than help it. Validate pool sizes during performance testing.
Batch writes carefully
Bulk imports should batch work efficiently without creating excessively large transactions.
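One way to keep import transactions bounded is to split the payload client-side before sending it, so each API call maps to a small transaction. A minimal sketch; the batch size is illustrative and worth tuning against your own test results:

```python
def chunked(items, batch_size=100):
    """Yield fixed-size batches so each insert transaction stays small;
    very large transactions hold resources longer and amplify contention."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

In a Locust task you would iterate over chunked(payload_items) and POST one batch per request, then compare throughput across a few batch sizes to find the sweet spot between per-request overhead and transaction size.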
Test from multiple regions
If your users are global, use LoadForge’s distributed testing and global test locations to see how geography affects CockroachDB-backed application latency.
Automate performance regression checks
Use LoadForge’s CI/CD integration to run recurring load testing against critical CockroachDB workflows before deployment.
Common Pitfalls to Avoid
CockroachDB load testing is most useful when it reflects real usage. Avoid these common mistakes.
Testing the database in isolation when users hit APIs
Most teams should load test the application endpoints that execute SQL, not the database protocol directly.
Using unrealistic datasets
A tiny dataset can hide query planning and index problems. Seed enough data to reflect production scale.
Ignoring transaction retries
CockroachDB may retry transactions under contention. If your application does not handle this well, your load test results will expose it.
Creating artificial hotspots by accident
If every virtual user updates the same account or product, you may be testing a worst-case scenario unintentionally. Sometimes that’s useful, but make sure it matches your goal.
Focusing only on average response time
Tail latency and error rates matter more for transactional systems.
Not separating read and write workloads
Read-heavy and write-heavy workloads stress CockroachDB differently. Test both.
Running only one load profile
Use baseline, peak, and stress testing patterns. A system that performs well at 200 users may fail abruptly at 500.
Forgetting the application tier
Slow API serialization, poor ORM usage, or weak connection pooling can look like database problems when they are not.
Conclusion
CockroachDB is designed for distributed SQL performance and resilience, but every workload is different. The only reliable way to understand how your application will behave is to perform realistic load testing, performance testing, and stress testing against the actual user flows that depend on CockroachDB.
With LoadForge, you can build Locust-based tests for read-heavy browsing, transactional checkout flows, financial transfers, and bulk import workloads, then run them at scale using cloud-based infrastructure. Features like distributed testing, real-time reporting, global test locations, and CI/CD integration make it easier to identify bottlenecks before they affect production users.
If you’re ready to validate SQL performance, transaction latency, and horizontal scalability for your CockroachDB-backed application, try LoadForge and start testing with realistic distributed load today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.