
Introduction
Amazon DynamoDB is built for massive scale, low-latency access, and predictable performance, but that does not mean you can skip load testing. Even a well-designed DynamoDB table can experience hot partitions, throttling, uneven access patterns, or unexpected latency spikes when traffic grows. If your application depends on DynamoDB for session storage, product catalogs, user profiles, carts, event logs, or time-series data, you need to understand how it behaves under realistic production traffic.
DynamoDB load testing helps you validate read and write throughput, observe throttling behavior, confirm auto scaling performance, and identify bottlenecks in your application before they become customer-facing incidents. It is also one of the best ways to test whether your partition key design is truly distributing traffic evenly.
In this guide, you will learn how to use LoadForge to run realistic DynamoDB performance testing and stress testing using Locust-based Python scripts. We will cover basic health checks, authenticated read and write operations against the DynamoDB API, mixed workloads, batch operations, and strategies for interpreting the results. Because LoadForge uses cloud-based infrastructure and distributed testing, you can simulate high-volume traffic from multiple global test locations and monitor everything with real-time reporting.
Prerequisites
Before you start load testing DynamoDB with LoadForge, make sure you have the following:
- An AWS account with access to DynamoDB
- One or more DynamoDB tables to test
- AWS credentials with permission to call DynamoDB APIs
- A clear understanding of your target workload:
- Read-heavy
- Write-heavy
- Mixed read/write
- Burst traffic
- Hot key scenarios
- The AWS region where your DynamoDB table is hosted
- Knowledge of your table schema, especially:
- Table name
- Partition key
- Sort key, if applicable
- Secondary indexes, if used
For the examples below, assume you have a DynamoDB table named Orders in us-east-1 with this schema:
- Partition key: customer_id (String)
- Sort key: order_id (String)
A sample item might look like this:
{
  "customer_id": "cust-10293",
  "order_id": "ord-20250406-0001",
  "status": "PROCESSING",
  "total_amount": 149.99,
  "currency": "USD",
  "created_at": "2025-04-06T10:15:00Z",
  "items": [
    {
      "sku": "SKU-1001",
      "quantity": 2,
      "price": 49.99
    },
    {
      "sku": "SKU-2003",
      "quantity": 1,
      "price": 50.01
    }
  ]
}
You will also need AWS Signature Version 4 signing for requests to the DynamoDB HTTPS API. Since DynamoDB does not use simple bearer tokens for direct API access, realistic load testing should include properly signed requests.
Understanding DynamoDB Under Load
DynamoDB behaves differently from a traditional relational database under load. Instead of connection pool exhaustion or lock contention, the most common performance issues are related to throughput limits, partition distribution, and request patterns.
Key DynamoDB performance concepts
Read and write capacity behavior
DynamoDB supports both on-demand and provisioned capacity modes:
- On-demand scales automatically, but can still throttle during sudden or extreme traffic increases
- Provisioned capacity requires you to size RCUs and WCUs correctly, with optional auto scaling
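Before sizing a provisioned-mode test, it helps to sketch the capacity math. As a rough rule, one RCU covers one strongly consistent read per second of up to 4 KB (eventually consistent reads cost half), and one WCU covers one write per second of up to 1 KB. A minimal estimator, as a planning aid only (it ignores transactions, GSIs, and batch overhead):

```python
import math

def estimate_capacity(item_size_bytes, reads_per_sec, writes_per_sec,
                      strongly_consistent=False):
    """Rough provisioned-capacity estimate for a steady workload.

    1 RCU = one strongly consistent read/sec of up to 4 KB
    (eventually consistent reads cost half); 1 WCU = one write/sec
    of up to 1 KB. Transactions, GSIs, and batching are ignored.
    """
    read_units = math.ceil(item_size_bytes / 4096)
    if not strongly_consistent:
        read_units /= 2  # eventually consistent reads are half price
    write_units = math.ceil(item_size_bytes / 1024)
    rcu = math.ceil(reads_per_sec * read_units)
    wcu = math.ceil(writes_per_sec * write_units)
    return rcu, wcu

# Example: 3.5 KB items at 100 reads/sec and 20 writes/sec
rcu, wcu = estimate_capacity(3500, 100, 20)
```

Comparing this back-of-envelope number against what a load test actually consumes is a quick sanity check on both the test and the table settings.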
When load testing, you want to measure:
- Average and percentile response times
- Read and write success rates
- Throttled requests
- Error patterns during spikes
- Scaling lag during traffic ramp-up
Partition hot spots
DynamoDB distributes data across partitions based on the partition key. If your load test sends most requests to a small number of keys, you may create hot partitions, leading to throttling even if overall table capacity appears sufficient.
This is one of the most important reasons to run realistic DynamoDB performance testing. A table that looks fine under uniform traffic may fail badly under skewed production access patterns.
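To surface hot-partition behavior on purpose, a test can skew which keys it hits instead of choosing uniformly. A small sketch of a skewed key picker; the 80/20-style split, hot-key count, and key names are illustrative assumptions, not values from any real workload:

```python
import random

def pick_key(keys, hot_fraction=0.8, hot_count=5, rng=random):
    """Pick a partition key with deliberate skew: roughly
    `hot_fraction` of requests go to the first `hot_count` keys,
    the remainder are spread uniformly across all keys.

    This models a "popular customers" access pattern so a load test
    can surface hot-partition throttling that uniform traffic hides.
    """
    if rng.random() < hot_fraction:
        return rng.choice(keys[:hot_count])
    return rng.choice(keys)

# Example: 1000 synthetic customer IDs, most traffic on the first 5
customer_ids = [f"cust-{10000 + i}" for i in range(1000)]
key = pick_key(customer_ids)
```

Swapping a helper like this into a Locust task's key selection lets you run the same script in uniform and skewed modes and compare throttling between the two.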
Request size and complexity
Latency can increase due to:
- Large item payloads
- Batch operations
- Queries returning many items
- Conditional writes
- Transaction operations
- Strongly consistent reads
Application-side bottlenecks
Sometimes DynamoDB is not the real problem. Your API layer, Lambda functions, ECS services, or EC2 instances may become saturated before DynamoDB does. LoadForge is especially useful here because you can test the DynamoDB API directly or through your application stack to isolate the bottleneck.
Writing Your First Load Test
Let’s start with a basic DynamoDB load test that performs a GetItem request against the Orders table. This example uses the DynamoDB JSON API over HTTPS and signs requests with AWS SigV4.
Basic GetItem load test
from locust import HttpUser, task, between
from requests_aws4auth import AWS4Auth
import boto3
import random


class DynamoDBReadUser(HttpUser):
    wait_time = between(1, 3)
    host = "https://dynamodb.us-east-1.amazonaws.com"

    def on_start(self):
        # Pick up AWS credentials from the runtime environment and
        # build a SigV4 signer for DynamoDB in us-east-1.
        session = boto3.Session()
        credentials = session.get_credentials().get_frozen_credentials()
        self.aws_auth = AWS4Auth(
            credentials.access_key,
            credentials.secret_key,
            "us-east-1",
            "dynamodb",
            session_token=credentials.token
        )
        self.customer_ids = [
            "cust-10293",
            "cust-20481",
            "cust-30922",
            "cust-41277"
        ]
        self.order_ids = [
            "ord-20250406-0001",
            "ord-20250406-0002",
            "ord-20250406-0003",
            "ord-20250406-0004"
        ]

    @task
    def get_order(self):
        customer_id = random.choice(self.customer_ids)
        order_id = random.choice(self.order_ids)
        payload = {
            "TableName": "Orders",
            "Key": {
                "customer_id": {"S": customer_id},
                "order_id": {"S": order_id}
            },
            "ConsistentRead": False
        }
        headers = {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.GetItem"
        }
        with self.client.post(
            "/",
            json=payload,
            headers=headers,
            auth=self.aws_auth,
            name="DynamoDB GetItem",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                response.success()
            elif response.status_code == 400 and "ProvisionedThroughputExceededException" in response.text:
                # Track throttling separately from generic failures.
                response.failure("Throttled by DynamoDB")
            else:
                response.failure(f"Unexpected response: {response.status_code} {response.text}")
What this script does
This first script simulates users reading individual orders from DynamoDB. It:
- Connects to the DynamoDB API endpoint in us-east-1
- Uses AWS credentials from the runtime environment
- Signs requests with SigV4
- Sends GetItem requests to the Orders table
- Flags throttling separately from generic failures
This is a good starting point for basic load testing, but by itself it is not enough. Real DynamoDB workloads usually involve a mix of reads, writes, queries, and bursts.
Running this test in LoadForge
In LoadForge, you can paste this Locust script into your test, configure user count and spawn rate, and then run it from cloud-based infrastructure. Start with a small ramp, such as:
Users: 25
Spawn rate: 5 users/second
Duration: 5 minutes
Then increase to realistic production levels and beyond for stress testing.
Advanced Load Testing Scenarios
To properly validate DynamoDB at scale, you should test more than simple point reads. Below are several realistic scenarios developers commonly need.
Mixed read and write workload
Most production systems are not purely read-only. This example simulates a mixed workload where users create orders and read them back.
from locust import HttpUser, task, between
from requests_aws4auth import AWS4Auth
import boto3
import random
import uuid
from datetime import datetime, timezone


class DynamoDBMixedUser(HttpUser):
    wait_time = between(0.5, 2)
    host = "https://dynamodb.us-east-1.amazonaws.com"

    def on_start(self):
        session = boto3.Session()
        credentials = session.get_credentials().get_frozen_credentials()
        self.aws_auth = AWS4Auth(
            credentials.access_key,
            credentials.secret_key,
            "us-east-1",
            "dynamodb",
            session_token=credentials.token
        )
        # Spread traffic across many synthetic customers.
        self.customer_ids = [f"cust-{10000 + i}" for i in range(1000)]

    @task(3)
    def read_order(self):
        customer_id = random.choice(self.customer_ids)
        order_id = f"ord-{random.randint(1, 5000):08d}"
        payload = {
            "TableName": "Orders",
            "Key": {
                "customer_id": {"S": customer_id},
                "order_id": {"S": order_id}
            }
        }
        headers = {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.GetItem"
        }
        with self.client.post(
            "/",
            json=payload,
            headers=headers,
            auth=self.aws_auth,
            name="GetItem /Orders",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                response.success()
            else:
                response.failure(f"Read failed: {response.status_code} {response.text}")

    @task(1)
    def create_order(self):
        customer_id = random.choice(self.customer_ids)
        order_id = f"ord-{uuid.uuid4()}"
        now = datetime.now(timezone.utc).isoformat()
        payload = {
            "TableName": "Orders",
            "Item": {
                "customer_id": {"S": customer_id},
                "order_id": {"S": order_id},
                "status": {"S": "PENDING"},
                "total_amount": {"N": str(round(random.uniform(10, 500), 2))},
                "currency": {"S": "USD"},
                "created_at": {"S": now},
                "item_count": {"N": str(random.randint(1, 5))}
            },
            # Safe-insert pattern: reject the write if the key already exists.
            "ConditionExpression": "attribute_not_exists(order_id)"
        }
        headers = {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.PutItem"
        }
        with self.client.post(
            "/",
            json=payload,
            headers=headers,
            auth=self.aws_auth,
            name="PutItem /Orders",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                response.success()
            elif "ProvisionedThroughputExceededException" in response.text:
                response.failure("Write throttled")
            elif "ConditionalCheckFailedException" in response.text:
                # A rejected duplicate insert is expected behavior, not an error.
                response.success()
            else:
                response.failure(f"Write failed: {response.status_code} {response.text}")
Why this scenario matters
This test is more realistic because it:
- Mixes reads and writes in a 3:1 ratio
- Uses varied customer IDs to spread load
- Creates realistic order data
- Includes a condition expression to mimic safe insert patterns
- Helps validate both read and write capacity under concurrent traffic
This is an excellent baseline for DynamoDB performance testing in e-commerce, order management, and transactional systems.
Query workload against a customer’s order history
Many applications use DynamoDB Query operations to fetch all records for a partition key, often with sort key filtering. This pattern can become expensive or slow if partitions are too large or request patterns are uneven.
from locust import HttpUser, task, between
from requests_aws4auth import AWS4Auth
import boto3
import random


class DynamoDBQueryUser(HttpUser):
    wait_time = between(1, 2)
    host = "https://dynamodb.us-east-1.amazonaws.com"

    def on_start(self):
        session = boto3.Session()
        credentials = session.get_credentials().get_frozen_credentials()
        self.aws_auth = AWS4Auth(
            credentials.access_key,
            credentials.secret_key,
            "us-east-1",
            "dynamodb",
            session_token=credentials.token
        )
        self.customer_ids = [f"cust-{10000 + i}" for i in range(500)]

    @task
    def query_recent_orders(self):
        customer_id = random.choice(self.customer_ids)
        payload = {
            "TableName": "Orders",
            "KeyConditionExpression": "customer_id = :cid AND begins_with(order_id, :prefix)",
            "ExpressionAttributeValues": {
                ":cid": {"S": customer_id},
                ":prefix": {"S": "ord-2025"}
            },
            "Limit": 25,
            # Newest first, since the sort key encodes recency.
            "ScanIndexForward": False
        }
        headers = {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.Query"
        }
        with self.client.post(
            "/",
            json=payload,
            headers=headers,
            auth=self.aws_auth,
            name="Query customer orders",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                result = response.json()
                # A well-formed Query response always includes "Count".
                if "Count" in result:
                    response.success()
                else:
                    response.failure("Invalid query result")
            elif "ProvisionedThroughputExceededException" in response.text:
                response.failure("Query throttled")
            else:
                response.failure(f"Query failed: {response.status_code} {response.text}")
What this test reveals
A query-focused load test is useful for:
- Measuring latency for partition-based lookups
- Detecting hot customer partitions
- Validating sort key design
- Understanding how large partition histories impact performance
- Stress testing access patterns that are common in dashboards and account pages
If your queries are slow or frequently throttled, the issue may not be total table capacity. It may be your key design.
Batch writes and burst traffic simulation
DynamoDB often performs well under steady traffic but behaves differently during bursts. Batch operations are common in ingestion pipelines, analytics feeders, and event-driven systems.
from locust import HttpUser, task, between
from requests_aws4auth import AWS4Auth
import boto3
import random
import uuid
from datetime import datetime, timezone


class DynamoDBBatchWriteUser(HttpUser):
    wait_time = between(0.1, 0.5)
    host = "https://dynamodb.us-east-1.amazonaws.com"

    def on_start(self):
        session = boto3.Session()
        credentials = session.get_credentials().get_frozen_credentials()
        self.aws_auth = AWS4Auth(
            credentials.access_key,
            credentials.secret_key,
            "us-east-1",
            "dynamodb",
            session_token=credentials.token
        )

    @task
    def batch_write_orders(self):
        # Build a batch of 10 PutRequest entries (the API allows up to 25).
        put_requests = []
        now = datetime.now(timezone.utc).isoformat()
        for _ in range(10):
            customer_id = f"cust-{random.randint(10000, 10100)}"
            order_id = f"ord-{uuid.uuid4()}"
            put_requests.append({
                "PutRequest": {
                    "Item": {
                        "customer_id": {"S": customer_id},
                        "order_id": {"S": order_id},
                        "status": {"S": random.choice(["PENDING", "PROCESSING", "SHIPPED"])},
                        "total_amount": {"N": str(round(random.uniform(20, 1000), 2))},
                        "currency": {"S": "USD"},
                        "created_at": {"S": now},
                        "source": {"S": "bulk-import"}
                    }
                }
            })
        payload = {
            "RequestItems": {
                "Orders": put_requests
            }
        }
        headers = {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.BatchWriteItem"
        }
        with self.client.post(
            "/",
            json=payload,
            headers=headers,
            auth=self.aws_auth,
            name="BatchWriteItem /Orders",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                result = response.json()
                # A 200 can still carry throttled items back as UnprocessedItems.
                unprocessed = result.get("UnprocessedItems", {})
                if unprocessed and unprocessed.get("Orders"):
                    response.failure(f"Batch partially throttled: {len(unprocessed['Orders'])} unprocessed")
                else:
                    response.success()
            else:
                response.failure(f"Batch write failed: {response.status_code} {response.text}")
When to use this scenario
This script is valuable for stress testing DynamoDB under ingestion-heavy conditions such as:
- Event collection
- Order import jobs
- Log aggregation
- IoT telemetry writes
- Queue consumer bursts
It also helps validate how your system responds when DynamoDB returns unprocessed items, which is a normal part of high-throughput batch behavior.
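On the client side, the normal response to unprocessed items is to resubmit exactly what BatchWriteItem handed back. A minimal helper showing just the payload handling; a real client would add exponential backoff between attempts:

```python
def build_retry_payload(response_body):
    """Given a parsed BatchWriteItem response, return a payload
    containing only the items the previous call left unprocessed,
    or None when the batch fully succeeded.

    Pair this with exponential backoff in a real client so retries
    do not immediately re-trigger the same throttling.
    """
    unprocessed = response_body.get("UnprocessedItems") or {}
    if not unprocessed:
        return None
    return {"RequestItems": unprocessed}
```

A load test that models this retry loop gives a more honest picture of effective write throughput than one that counts every partially throttled batch as a plain failure.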
Analyzing Your Results
Once your DynamoDB load testing run completes, the real value comes from interpreting the results correctly. LoadForge gives you real-time reporting and historical metrics, but you should also correlate those results with AWS CloudWatch.
Key metrics to watch in LoadForge
Response times
Focus on:
- Median latency
- P95 latency
- P99 latency
- Max response time
For DynamoDB, averages can be misleading. A table may look healthy on average while P95 and P99 degrade badly under contention or hot partitions.
Request rate
Measure:
- Reads per second
- Writes per second
- Queries per second
- Batch requests per second
Compare these against your expected production workload.
Failure rate
Separate failures into categories:
- Throttling
- Authentication/signing errors
- Validation errors
- Conditional check failures
- Timeouts
Not all failures are equal. For example, ConditionalCheckFailedException may be expected in some write patterns, while ProvisionedThroughputExceededException usually indicates capacity or partition pressure.
Correlate with CloudWatch metrics
During performance testing, review these DynamoDB CloudWatch metrics:
- ConsumedReadCapacityUnits
- ConsumedWriteCapacityUnits
- ReadThrottleEvents
- WriteThrottleEvents
- SuccessfulRequestLatency
- SystemErrors
- UserErrors
If using provisioned mode, also review:
- ProvisionedReadCapacityUnits
- ProvisionedWriteCapacityUnits
If using auto scaling, look for delays between increased traffic and increased provisioned capacity.
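One simple way to quantify that lag is to export aligned consumed-vs-provisioned capacity series from CloudWatch and flag the intervals where demand outran capacity. A sketch; the per-minute granularity and the example values are illustrative:

```python
def scaling_lag_windows(consumed, provisioned):
    """Given two aligned per-interval capacity series (e.g. per-minute
    values exported from CloudWatch), return the indices where consumed
    capacity exceeded provisioned capacity -- the windows where auto
    scaling lagged behind the traffic ramp.
    """
    return [
        i for i, (c, p) in enumerate(zip(consumed, provisioned))
        if c > p
    ]

# Example: a ramp where minute 2 outran the provisioned floor
lag = scaling_lag_windows([10, 50, 120, 130, 90], [100, 100, 100, 150, 150])
```

Long or repeated lag windows during a LoadForge ramp suggest the auto scaling target utilization or minimum capacity needs adjusting.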
Look for these patterns
Rising latency with no throttling
This may indicate:
- Large item sizes
- Inefficient query patterns
- Application-layer bottlenecks
- Network issues between test agents and AWS region
Throttling at lower-than-expected throughput
This often suggests:
- Hot partition keys
- Uneven traffic distribution
- Misconfigured capacity
- Secondary index bottlenecks
Burst failures during ramp-up
This can happen when:
- On-demand scaling has not adapted yet
- Auto scaling reacts too slowly
- Batch traffic is too concentrated
LoadForge’s distributed testing is especially helpful here because you can simulate traffic waves from multiple regions and see whether bursts affect all users equally.
Performance Optimization Tips
After running your DynamoDB stress testing, use the results to improve performance.
Choose better partition keys
If you observe throttling under moderate load, review your partition key strategy. Good partition keys distribute requests evenly. Avoid keys with highly skewed access patterns unless you intentionally shard them.
Reduce hot key traffic
If a small number of items receive most requests:
- Add caching with ElastiCache or CloudFront where appropriate
- Replicate frequently read data
- Introduce write sharding for heavy write keys
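Write sharding in practice means suffixing the hot partition key so writes land on several physical partitions. A sketch of the idea; the separator, shard count, and key format are illustrative choices, and readers must then fan out across all shards and merge results:

```python
import random

def sharded_key(base_key, shard_count=10, rng=random):
    """Spread writes for one hot logical key across `shard_count`
    physical partition keys, e.g. "cust-10293#7".

    Only shard keys that are genuinely write-hot: readers pay for
    the fan-out, since they must query every shard and merge.
    """
    return f"{base_key}#{rng.randrange(shard_count)}"

# Example: a hot customer's writes spread over 10 partition keys
key = sharded_key("cust-10293")
```

A follow-up load test against the sharded key layout should show throttling spread thinly across shards instead of concentrated on one partition.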
Keep items lean
Larger items increase latency and throughput consumption. Store only what you need for the access pattern. Consider splitting infrequently used attributes into separate items.
Use efficient access patterns
Prefer:
- GetItem for direct lookups
- Query over Scan
- Smaller result sets with Limit
- Proper sort key design for range access
Avoid full table scans in performance-sensitive paths.
Handle retries correctly
DynamoDB clients should implement exponential backoff and jitter for throttled requests. Your load tests should account for this behavior if you want to model real client resilience.
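A common client-side shape for this is "full jitter" backoff: sleep a random amount between zero and an exponentially growing cap. A sketch; the base delay and cap values are illustrative, not prescribed:

```python
import random

def backoff_delay(attempt, base=0.05, cap=5.0, rng=random):
    """Full-jitter exponential backoff: return a random delay between
    0 and min(cap, base * 2**attempt) seconds for retry `attempt`.

    Jitter prevents throttled clients from retrying in lockstep and
    re-creating the same traffic spike they just backed off from.
    """
    return rng.uniform(0, min(cap, base * (2 ** attempt)))

# Example: delays for the first few retries of a throttled request
delays = [backoff_delay(a) for a in range(4)]
```

Modeling this in a load test matters: a script that hammers retries with no delay will report worse throttling than a well-behaved production client would actually see.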
Test realistic consistency settings
Strongly consistent reads consume more resources and may impact latency. If your application can tolerate eventual consistency, test both modes and compare the results.
Validate indexes separately
Global secondary indexes can throttle independently of the base table. If your application relies heavily on GSIs, create dedicated load tests for those query paths.
Common Pitfalls to Avoid
DynamoDB load testing is powerful, but several mistakes can make your results misleading.
Testing only uniform traffic
Real production traffic is rarely evenly distributed. If your test randomly spreads requests across thousands of keys but production traffic concentrates around a few popular users or products, your test may miss hot partition issues.
Ignoring throttling details
Do not lump all failures together. Throttling is one of the most important signals in DynamoDB performance testing, and it should be tracked separately.
Using unrealistic item sizes
Tiny synthetic payloads may produce unrealistically good results. Use realistic attribute counts and payload sizes.
Forgetting secondary indexes
A table may perform well while a GSI becomes the real bottleneck. Test the actual access paths your application uses.
Not correlating with AWS metrics
LoadForge shows request-side performance, but CloudWatch tells you what DynamoDB itself experienced. You need both views for accurate analysis.
Overlooking burst behavior
Steady-state load testing is useful, but many DynamoDB incidents happen during traffic spikes. Include ramp tests and burst tests in your plan.
Testing from only one region
If your users are global, latency and network conditions vary. LoadForge’s global test locations can help you simulate more realistic traffic patterns across regions.
Missing CI/CD integration
Performance regressions often happen after schema changes, new indexes, or application logic updates. Add DynamoDB load testing to your CI/CD pipeline so you catch capacity and latency issues before deployment.
Conclusion
DynamoDB is highly scalable, but scalable does not mean limitless or immune to poor access patterns. Effective load testing helps you validate read and write capacity, understand throttling behavior, uncover hot partitions, and measure performance under realistic traffic conditions. With the right Locust scripts, you can test direct GetItem, PutItem, Query, and BatchWriteItem workloads that reflect how your application actually uses DynamoDB.
LoadForge makes this process much easier with distributed testing, cloud-based infrastructure, real-time reporting, global test locations, and CI/CD integration. Whether you are validating a new table design, stress testing a production workload, or tuning for peak traffic, LoadForge gives you the visibility you need to test DynamoDB with confidence.
Try LoadForge to run your DynamoDB load testing at scale and catch performance issues before your users do.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.