
Introduction
MySQL is often the performance backbone of web applications, SaaS platforms, e-commerce systems, and internal business tools. Even when your application servers are well optimized, database bottlenecks can quickly become the limiting factor under real traffic. Slow queries, connection pool exhaustion, lock contention, replication lag, and inefficient indexing can all surface when concurrency rises.
That’s why MySQL load testing matters. By simulating realistic user behavior against application endpoints that depend on MySQL, you can benchmark query speed, measure concurrent connection handling, and uncover database performance issues before they impact production users. You generally should not point Locust directly at the MySQL protocol; the most effective approach is to load test the APIs and services that interact with MySQL, then observe how the database behaves under pressure.
In this guide, you’ll learn how to perform practical MySQL load testing with LoadForge using Locust-based Python scripts. We’ll cover basic read traffic, authenticated workflows, write-heavy scenarios, and reporting endpoints that typically stress sorting, filtering, and aggregation queries. Along the way, we’ll show how LoadForge’s distributed testing, cloud-based infrastructure, real-time reporting, CI/CD integration, and global test locations can help you run meaningful database performance testing at scale.
Prerequisites
Before you start load testing MySQL-backed applications with LoadForge, make sure you have:
- A MySQL-backed application or API to test
- A staging or pre-production environment that closely matches production
- Test user accounts or API credentials
- Seeded database data large enough to produce realistic query execution plans
- Access to MySQL monitoring metrics such as:
  - Query latency
  - Slow query log
  - CPU and memory usage
  - InnoDB buffer pool stats
  - Active connections
  - Lock waits
  - Replication lag, if applicable
- Endpoint documentation for the application layer that reads from and writes to MySQL
You should also know:
- Which endpoints are read-heavy versus write-heavy
- Which workflows trigger joins, aggregations, transactions, or full-text search
- Expected traffic patterns, such as login bursts, product searches, dashboard reporting, or checkout flows
A key best practice: load test the application tier, not the database port directly, unless you are specifically benchmarking a custom MySQL client. This gives you more realistic performance testing results because it includes authentication, ORM behavior, caching, connection pooling, and business logic.
Understanding MySQL Under Load
MySQL performance under load is shaped by several factors. To run effective stress testing and load testing, it helps to understand the common bottlenecks.
Concurrent Connections
As traffic increases, your application may open more database connections through its pool. If the pool is too small, requests queue up. If it is too large, MySQL may become overwhelmed with context switching and memory pressure.
Common symptoms include:
- Increased response times during peak concurrency
- Connection timeout errors
- Too many connections errors
- Saturated CPU on the database host
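To make these symptoms visible in test results rather than buried in generic 5xx counts, you can tag connection-related failures separately. The following is a minimal sketch; the error strings are hypothetical examples, so match them to whatever your API actually returns when the pool or MySQL connection limit is exhausted.

```python
# Hypothetical patterns: adapt these to your API's real error payloads.
CONNECTION_ERROR_PATTERNS = {
    "too_many_connections": "Too many connections",
    "pool_timeout": "connection pool exhausted",
    "connect_timeout": "timed out connecting",
}

def classify_db_error(status_code: int, body: str) -> str:
    """Return a coarse failure category for a response, so pool
    exhaustion shows up as its own signal in load test reports."""
    if status_code < 500:
        return "ok"
    lowered = body.lower()
    for category, pattern in CONNECTION_ERROR_PATTERNS.items():
        if pattern.lower() in lowered:
            return category
    return "server_error"

# Usage inside a Locust task (sketch):
# with self.client.get("/api/v1/products", catch_response=True) as resp:
#     category = classify_db_error(resp.status_code, resp.text)
#     if category != "ok":
#         resp.failure(f"db-error:{category}")
```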
Query Efficiency
A single inefficient query may not seem problematic at low volume, but under high concurrency it can dominate database resources. Typical causes include:
- Missing indexes
- Large table scans
- Expensive joins
- ORDER BY or GROUP BY on unindexed columns
- N+1 query patterns from the application layer
Lock Contention
Write-heavy workloads can trigger row-level or table-level contention, especially in transactional workflows like order creation, inventory updates, and account balance changes. Under load, these issues often show up as:
- Spikes in latency for write endpoints
- Deadlocks
- Lock wait timeouts
- Throughput collapse during stress testing
Buffer Pool and Disk I/O
If your working set no longer fits in memory, MySQL may start relying more heavily on disk. This can dramatically increase latency for queries that were previously fast.
Replication and Read Scaling
If your architecture uses replicas for reads, load testing should verify:
- Whether read traffic is actually routed to replicas
- Whether replicas can sustain expected query volume
- Whether replication lag becomes unacceptable under heavy writes
The goal of MySQL performance testing is not just to find the maximum requests per second. It is to identify the point where latency, error rate, or database health metrics begin to degrade.
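Finding that degradation point can be mechanical: step up the user count, record p95 latency and error rate at each step, and flag the first step that breaches your limits. A minimal sketch, with illustrative thresholds:

```python
def find_degradation_point(samples, latency_limit_ms=500.0, error_limit=0.01):
    """Given (users, p95_ms, error_rate) tuples sorted by user count,
    return the first user count where either threshold is breached,
    or None if the system held up across every step."""
    for users, p95_ms, error_rate in samples:
        if p95_ms > latency_limit_ms or error_rate > error_limit:
            return users
    return None

# Example with fabricated step-load results:
steps = [(50, 120, 0.0), (100, 180, 0.001), (200, 420, 0.004), (400, 950, 0.03)]
```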
Writing Your First Load Test
Let’s start with a basic MySQL load testing scenario: browsing products from an e-commerce API backed by MySQL. This type of test is useful for benchmarking read queries, pagination, indexed lookups, and category filtering.
Scenario
We’ll test these realistic endpoints:
- GET /api/v1/products
- GET /api/v1/products/{id}
- GET /api/v1/categories/{slug}/products
These typically map to MySQL queries involving:
- Product listing with pagination
- Primary key lookup
- Category join/filter queries
Basic Locust Script
```python
from locust import HttpUser, task, between
import random

class MySQLReadHeavyUser(HttpUser):
    wait_time = between(1, 3)

    product_ids = [101, 102, 103, 104, 105, 110, 115]
    categories = ["laptops", "monitors", "keyboards", "networking"]

    @task(5)
    def list_products(self):
        params = {
            "page": random.randint(1, 5),
            "limit": 20,
            "sort": "popularity",
            "in_stock": "true"
        }
        self.client.get("/api/v1/products", params=params, name="/api/v1/products")

    @task(3)
    def view_product_detail(self):
        product_id = random.choice(self.product_ids)
        self.client.get(f"/api/v1/products/{product_id}", name="/api/v1/products/:id")

    @task(2)
    def browse_category(self):
        category = random.choice(self.categories)
        params = {
            "page": 1,
            "limit": 24,
            "sort": "price_asc"
        }
        self.client.get(
            f"/api/v1/categories/{category}/products",
            params=params,
            name="/api/v1/categories/:slug/products"
        )
```

What This Test Measures
This script is simple, but it already reveals useful MySQL performance characteristics:
- Product list queries may stress indexes on `in_stock`, `popularity`, and `category_id`
- Product detail endpoints usually hit primary key lookups and should remain fast
- Category browsing often reveals whether joins and sorting are properly indexed
Why This Is a Good First Test
A read-heavy workload is often the safest starting point for database load testing. It helps you establish a baseline for:
- Median and p95 response times
- Throughput under normal concurrency
- Error rates
- Query efficiency for common read paths
In LoadForge, you can scale this script across distributed cloud generators to simulate traffic from multiple regions and compare latency patterns in real time.
Advanced Load Testing Scenarios
Once you have a baseline, move on to more realistic workflows that stress MySQL in different ways.
Authenticated User Workflow with Session-Based Access
Many MySQL-backed applications perform multiple database operations after login: user lookup, session creation, permissions loading, and profile retrieval. The following script simulates a realistic authenticated flow for a SaaS dashboard.
```python
from locust import HttpUser, task, between
import random

class AuthenticatedDashboardUser(HttpUser):
    wait_time = between(1, 2)

    credentials = [
        {"email": "qa_user1@example.com", "password": "LoadTest!2024"},
        {"email": "qa_user2@example.com", "password": "LoadTest!2024"},
        {"email": "qa_user3@example.com", "password": "LoadTest!2024"},
    ]
    account_ids = [2001, 2002, 2003, 2004]
    invoice_statuses = ["paid", "pending", "overdue"]

    def on_start(self):
        user = random.choice(self.credentials)
        # catch_response=True gives us a context manager whose failure()
        # method marks the login as failed in the test results.
        with self.client.post(
            "/api/v1/auth/login",
            json={
                "email": user["email"],
                "password": user["password"],
                "remember_me": False
            },
            name="/api/v1/auth/login",
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"Login failed: {response.status_code}")
                return
            token = response.json().get("access_token")
        self.client.headers.update({
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json"
        })

    @task(4)
    def load_dashboard_summary(self):
        account_id = random.choice(self.account_ids)
        self.client.get(
            f"/api/v1/accounts/{account_id}/dashboard-summary",
            name="/api/v1/accounts/:id/dashboard-summary"
        )

    @task(3)
    def list_invoices(self):
        account_id = random.choice(self.account_ids)
        params = {
            "status": random.choice(self.invoice_statuses),
            "page": 1,
            "per_page": 50,
            "sort": "created_at_desc"
        }
        self.client.get(
            f"/api/v1/accounts/{account_id}/invoices",
            params=params,
            name="/api/v1/accounts/:id/invoices"
        )

    @task(2)
    def view_profile(self):
        self.client.get("/api/v1/users/me", name="/api/v1/users/me")

    @task(1)
    def logout_and_relogin(self):
        self.client.post("/api/v1/auth/logout", name="/api/v1/auth/logout")
        self.on_start()
```

What This Scenario Exposes
This test is useful for MySQL performance testing because it exercises:
- User authentication queries
- Session/token persistence
- Account-scoped lookups
- Filtered invoice listings with sorting
- Repeated reads against potentially large transactional tables
If /invoices becomes slow under load, investigate:
- Composite indexes on `account_id`, `status`, and `created_at`
- Whether pagination uses offset inefficiently
- Whether the API triggers extra count queries
- Whether the ORM is issuing N+1 queries for related records
Write-Heavy Transaction Scenario
Read tests are valuable, but MySQL bottlenecks often appear during writes. Let’s simulate a checkout workflow that inserts orders, updates inventory, and writes payment records.
```python
from locust import HttpUser, task, between
import random
import uuid

class CheckoutUser(HttpUser):
    wait_time = between(2, 5)

    tokens = [
        "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.user1",
        "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.user2",
        "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.user3"
    ]
    product_catalog = [
        {"product_id": 101, "qty": 1},
        {"product_id": 104, "qty": 2},
        {"product_id": 115, "qty": 1},
        {"product_id": 121, "qty": 3}
    ]

    def on_start(self):
        token = random.choice(self.tokens)
        self.client.headers.update({
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json"
        })

    @task
    def complete_checkout(self):
        cart_items = random.sample(self.product_catalog, k=2)
        cart_response = self.client.post(
            "/api/v1/cart/items/bulk",
            json={
                "items": cart_items
            },
            name="/api/v1/cart/items/bulk"
        )
        if cart_response.status_code not in (200, 201):
            return
        order_reference = f"ORD-{uuid.uuid4().hex[:10].upper()}"
        self.client.post(
            "/api/v1/checkout",
            json={
                "order_reference": order_reference,
                "shipping_address": {
                    "first_name": "Taylor",
                    "last_name": "Morgan",
                    "line1": "425 Market Street",
                    "city": "San Francisco",
                    "state": "CA",
                    "postal_code": "94105",
                    "country": "US"
                },
                "payment_method": {
                    "type": "card",
                    "token": "tok_visa_4242"
                },
                "currency": "USD"
            },
            name="/api/v1/checkout"
        )
```

Why This Matters for MySQL Stress Testing
This scenario can surface database issues such as:
- Transaction lock contention on inventory rows
- Slow inserts into `orders`, `order_items`, and `payments`
- Foreign key overhead
- Deadlocks under concurrent checkout attempts
- Increased latency from synchronous audit logging
For write-heavy testing, monitor both application response times and MySQL internals. A good throughput number is meaningless if deadlocks or lock waits are growing in the background.
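One lightweight way to watch those internals is to snapshot MySQL's cumulative status counters before and after a run (for example, by capturing SHOW GLOBAL STATUS output into a dict) and diff them. The counter names below are real MySQL status variables; the capture mechanism and sample numbers are illustrative.

```python
def diff_status(before: dict, after: dict,
                counters=("Innodb_row_lock_waits",
                          "Innodb_row_lock_time",
                          "Created_tmp_disk_tables")):
    """Return the growth of each cumulative MySQL status counter
    during the test window. Large growth in row lock waits or
    on-disk temp tables points at contention or bad query plans."""
    return {name: int(after.get(name, 0)) - int(before.get(name, 0))
            for name in counters}

# Fabricated example snapshots, as if parsed from SHOW GLOBAL STATUS:
before = {"Innodb_row_lock_waits": "10", "Innodb_row_lock_time": "5000",
          "Created_tmp_disk_tables": "2"}
after = {"Innodb_row_lock_waits": "110", "Innodb_row_lock_time": "95000",
         "Created_tmp_disk_tables": "2"}
```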
Reporting and Aggregation Scenario
One of the most common MySQL pain points is reporting. Dashboards and analytics endpoints often use large joins, aggregation functions, date filtering, and sorting. These can perform well in development with small datasets but degrade badly in production-sized environments.
```python
from locust import HttpUser, task, between
import random

class ReportingUser(HttpUser):
    wait_time = between(3, 6)

    auth_tokens = [
        "Bearer report_token_user_1",
        "Bearer report_token_user_2"
    ]
    regions = ["us-east", "us-west", "eu-central"]
    date_ranges = [
        ("2024-01-01", "2024-01-31"),
        ("2024-02-01", "2024-02-29"),
        ("2024-03-01", "2024-03-31")
    ]

    def on_start(self):
        self.client.headers.update({
            "Authorization": random.choice(self.auth_tokens),
            "Accept": "application/json"
        })

    @task(3)
    def revenue_report(self):
        start_date, end_date = random.choice(self.date_ranges)
        params = {
            "start_date": start_date,
            "end_date": end_date,
            "group_by": "day",
            "region": random.choice(self.regions)
        }
        self.client.get(
            "/api/v1/reports/revenue",
            params=params,
            name="/api/v1/reports/revenue"
        )

    @task(2)
    def customer_ltv_report(self):
        params = {
            "segment": "enterprise",
            "limit": 100,
            "sort": "ltv_desc"
        }
        self.client.get(
            "/api/v1/reports/customer-ltv",
            params=params,
            name="/api/v1/reports/customer-ltv"
        )

    @task(1)
    def export_orders_csv(self):
        params = {
            "from": "2024-03-01",
            "to": "2024-03-31",
            "status": "completed",
            "format": "csv"
        }
        self.client.get(
            "/api/v1/reports/orders/export",
            params=params,
            name="/api/v1/reports/orders/export"
        )
```

What to Look For
Reporting endpoints often reveal:
- Missing covering indexes
- Temporary table creation
- Filesort operations
- Slow aggregate queries over large datasets
- Excessive export generation time
These are ideal candidates for LoadForge because you can use distributed load to simulate many internal users hitting dashboards at once, while real-time reporting helps you see exactly when response times start climbing.
Analyzing Your Results
After your MySQL load test runs, focus on both application-level and database-level metrics.
Application Metrics in LoadForge
LoadForge gives you real-time reporting on:
- Requests per second
- Average response time
- p50, p95, and p99 latency
- Error rates
- Endpoint-specific performance trends
For MySQL-backed systems, pay special attention to endpoint groupings. Averages can hide important problems. For example:
- `/api/v1/products/:id` may stay fast
- `/api/v1/reports/revenue` may degrade sharply
- `/api/v1/checkout` may show low average latency but occasional severe spikes due to lock contention
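This is why per-endpoint percentiles beat a global average. If you export raw response times from a test run, a quick sketch like the following (using a simple nearest-rank percentile; the endpoint names and latencies are illustrative) makes the difference visible:

```python
import math

def percentile(sorted_samples, q):
    """Nearest-rank percentile of a pre-sorted list of latencies."""
    if not sorted_samples:
        raise ValueError("no samples")
    rank = max(1, math.ceil(q / 100 * len(sorted_samples)))
    return sorted_samples[rank - 1]

def summarize(latencies_by_endpoint):
    """p50/p95/p99 per endpoint, so one slow route cannot hide
    behind a healthy-looking global average."""
    summary = {}
    for endpoint, latencies in latencies_by_endpoint.items():
        ordered = sorted(latencies)
        summary[endpoint] = {q: percentile(ordered, q) for q in (50, 95, 99)}
    return summary
```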
Database Metrics to Correlate
Correlate LoadForge results with MySQL observations such as:
- Slow query log entries
- Queries per second
- Threads connected and threads running
- InnoDB row lock time
- Buffer pool hit ratio
- Temporary tables on disk
- Disk I/O utilization
- CPU saturation
Questions to Ask
When analyzing MySQL performance testing results, ask:
- At what user count did p95 latency materially increase?
- Which endpoints degraded first?
- Did errors start before or after latency spiked?
- Were slow queries caused by reads, writes, or reporting workloads?
- Did connection counts approach pool or server limits?
- Did write-heavy traffic create lock waits or deadlocks?
Establish Performance Thresholds
Define pass/fail criteria before running tests. For example:
- p95 response time under 500 ms for product browse endpoints
- p95 under 1.5 seconds for reporting endpoints
- error rate below 1%
- no deadlock spikes during checkout
- CPU below 80% sustained on the MySQL primary
This makes your load testing repeatable and useful in CI/CD pipelines, where LoadForge can help automate performance regression detection before deployment.
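A CI gate over those criteria can be very small. This sketch assumes you map your test results into a flat metrics dict; the metric names and limits mirror the example thresholds above and are not a fixed LoadForge schema:

```python
# Illustrative thresholds, matching the pass/fail criteria above.
THRESHOLDS = {
    "browse_p95_ms": 500,
    "reporting_p95_ms": 1500,
    "error_rate": 0.01,
}

def check_thresholds(results: dict, thresholds: dict = THRESHOLDS) -> list:
    """Return human-readable violations; an empty list means the run passes.
    Metrics absent from the results dict are skipped rather than failed."""
    violations = []
    for metric, limit in thresholds.items():
        value = results.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} exceeds limit {limit}")
    return violations
```

In a pipeline, a non-empty return value would fail the build before the release ships a regression.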
Performance Optimization Tips
If your MySQL load testing reveals issues, these are the most common fixes.
Add or Improve Indexes
Look for slow filters, sorts, and joins. Composite indexes are often more effective than single-column indexes for real workloads.
Optimize Pagination
Large offset-based pagination can become expensive. Consider keyset pagination for high-volume tables.
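The difference is easiest to see in the SQL shape. The table and column names below are hypothetical, and in real code you would use parameterized queries rather than string formatting; the point is that keyset pagination replaces a growing OFFSET with an indexed WHERE cursor:

```python
def offset_page_sql(page: int, per_page: int = 50) -> str:
    # Cost grows with the page number: MySQL still reads and
    # discards every skipped row before returning the page.
    return (f"SELECT id, name, price FROM products "
            f"ORDER BY id LIMIT {per_page} OFFSET {(page - 1) * per_page}")

def keyset_page_sql(last_seen_id: int, per_page: int = 50) -> str:
    # Cost stays flat: the primary key index seeks straight
    # to the cursor position, no matter how deep the page.
    return (f"SELECT id, name, price FROM products "
            f"WHERE id > {last_seen_id} ORDER BY id LIMIT {per_page}")
```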
Reduce N+1 Queries
If your application uses an ORM, inspect generated SQL. A single API call may trigger dozens of queries under the hood.
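The pattern is easy to demonstrate with a query counter standing in for a real ORM. Listing 50 orders the naive way issues 51 queries; batching the child lookup with an IN clause issues 2. The schema and helper names here are hypothetical:

```python
class CountingDB:
    """Stand-in for a database session that just counts queries."""
    def __init__(self):
        self.queries = 0

    def run(self, sql):
        self.queries += 1
        return []

def list_orders_naive(db, order_ids):
    db.run("SELECT * FROM orders")  # 1 query for the parent rows
    for order_id in order_ids:      # then one query per order: the N+1 trap
        db.run(f"SELECT * FROM order_items WHERE order_id = {order_id}")

def list_orders_batched(db, order_ids):
    db.run("SELECT * FROM orders")  # 1 query for the parent rows
    ids = ", ".join(str(i) for i in order_ids)
    db.run(f"SELECT * FROM order_items WHERE order_id IN ({ids})")  # 1 batched query
```

Most ORMs offer an eager-loading option that produces the batched shape; the fix is usually one line once the pattern is spotted in the generated SQL.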
Tune Connection Pooling
Make sure the application connection pool is sized appropriately for your MySQL instance and workload. More connections do not always mean better performance.
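If your application uses SQLAlchemy, for example, the relevant knobs look like this. The values are illustrative starting points, not recommendations; tune them against your measured concurrency and keep the total (processes × (pool_size + max_overflow)) below MySQL's max_connections:

```python
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://app:secret@db-host/shop",  # hypothetical DSN
    pool_size=10,        # steady-state connections per app process
    max_overflow=20,     # extra connections allowed during bursts
    pool_timeout=30,     # seconds to wait for a free connection
    pool_recycle=1800,   # recycle before MySQL's wait_timeout closes them
    pool_pre_ping=True,  # validate connections before handing them out
)
```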
Cache Expensive Reads
Frequently accessed product listings, dashboard summaries, and report snapshots may benefit from caching to reduce repeated MySQL load.
Separate Read and Write Workloads
If your architecture supports replicas, route read-heavy traffic away from the primary. Then validate replica behavior during load testing.
Review Transaction Scope
Keep transactions short. Long-running transactions increase lock contention and reduce concurrency.
Precompute Reports
For expensive analytics endpoints, consider materialized summaries, scheduled rollups, or dedicated reporting tables.
Common Pitfalls to Avoid
MySQL load testing is straightforward in concept, but several mistakes can make results misleading.
Testing with Unrealistically Small Data
A query that is fast with 10,000 rows may fail badly with 10 million. Always test with production-like data volumes.
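Seeding that volume is worth scripting so runs are reproducible. A deterministic generator sketch (column shapes, names, and categories are made up; mirror your real schema and distributions):

```python
import random

def generate_products(count, seed=42):
    """Produce deterministic fake product rows for bulk-loading a
    staging database. The fixed seed makes every test run comparable."""
    rng = random.Random(seed)
    categories = ["laptops", "monitors", "keyboards", "networking"]
    rows = []
    for product_id in range(1, count + 1):
        rows.append({
            "id": product_id,
            "name": f"Product {product_id}",
            "category": rng.choice(categories),
            "price_cents": rng.randint(999, 299999),
            "in_stock": rng.random() > 0.1,  # roughly 90% in stock
        })
    return rows
```

From here you would bulk-insert the rows with your driver of choice, aiming for row counts in the same order of magnitude as production.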
Ignoring Database Monitoring
Application latency alone is not enough. Without MySQL metrics, you may know something is slow but not why.
Running Only Read Tests
Many serious MySQL problems appear during writes, especially in transactional systems. Include inserts, updates, and mixed workloads.
Overlooking Authentication and Session Overhead
Login flows and permission checks often involve multiple database queries. Include them in realistic scenarios.
Not Isolating Background Jobs
Exports, report generation, and scheduled jobs can distort your results if they run during tests. Control for them where possible.
Using a Single Geographic Test Source
If your users are global, run tests from multiple regions. LoadForge’s global test locations help simulate realistic access patterns and reveal latency differences across environments.
Stress Testing Production Without Safeguards
Never run aggressive database stress testing against production without strict controls, rate limits, and stakeholder approval.
Conclusion
MySQL load testing is essential for understanding how your application behaves when real users generate real database pressure. Whether you’re benchmarking query speed, validating concurrent connection handling, or uncovering bottlenecks in reporting and checkout workflows, a realistic performance testing strategy will help you find issues before your users do.
With LoadForge, you can build Locust-based MySQL load tests that reflect actual user behavior, run them at scale with distributed cloud infrastructure, monitor performance in real time, and integrate performance testing into your CI/CD workflow. If you’re ready to improve database reliability and confidently stress test your MySQL-backed application, try LoadForge and start testing today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.