
Medusa Load Testing: How to Stress Test Medusa Commerce with LoadForge

Introduction
Medusa is a popular open-source commerce engine built for modern storefronts, custom backend workflows, and flexible API-driven e-commerce experiences. Whether you are running a headless storefront, a custom admin dashboard, or a multi-region commerce backend, Medusa performance directly affects conversion rates, checkout completion, and customer satisfaction.
Load testing Medusa is essential because e-commerce traffic is rarely uniform. Product launches, seasonal promotions, flash sales, and influencer campaigns can create sudden spikes in traffic that stress product APIs, cart operations, region pricing, shipping calculations, and order placement workflows. If your Medusa backend slows down or fails under load, you can lose revenue quickly.
In this guide, you will learn how to perform realistic Medusa load testing with LoadForge using Locust-based Python scripts. We will cover basic storefront API testing, authenticated customer flows, cart and checkout scenarios, and admin API stress testing. Along the way, we will highlight how LoadForge helps with distributed testing, real-time reporting, cloud-based infrastructure, CI/CD integration, and global test locations for complete Medusa performance testing.
Prerequisites
Before you start load testing Medusa, make sure you have the following:
- A running Medusa backend environment
- The base URL for your Medusa server, such as https://shop.example.com
- At least one configured region in Medusa
- Seeded products, variants, shipping options, and payment setup
- Test customer accounts for authenticated scenarios
- Optional admin credentials or API keys for admin endpoint testing
- A LoadForge account to run distributed load testing in the cloud
It also helps to understand the key Medusa API groups you will likely test:
- Store API endpoints under /store/*
- Admin API endpoints under /admin/*
- Cart and checkout endpoints
- Product listing and search endpoints
- Customer authentication and account endpoints
- Order creation and post-purchase flows
For realistic performance testing, your Medusa environment should resemble production as much as possible, including:
- Similar database size
- Similar Redis/cache configuration
- Similar plugin configuration
- Similar payment and shipping provider behavior
- Similar CDN and reverse proxy setup
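To keep the scripts in this guide consistent, it helps to centralize environment details such as the base URL, publishable key, and region in one place. A minimal sketch, where every value is a placeholder for your own backend:

```python
# Shared test configuration. All values are placeholders; substitute
# the base URL, publishable key, and region ID from your own Medusa setup.
MEDUSA_BASE_URL = "https://shop.example.com"
PUBLISHABLE_API_KEY = "pk_test_123456789"
REGION_ID = "reg_01HZX8K4EXAMPLE"

# Headers reused by every storefront request in the later scenarios.
STORE_HEADERS = {
    "Content-Type": "application/json",
    "x-publishable-api-key": PUBLISHABLE_API_KEY,
}
```

In Locust itself, the base URL is usually supplied via the `host` attribute or the `--host` flag rather than hardcoded into each request.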
Understanding Medusa Under Load
Medusa is API-first, which makes it ideal for custom commerce builds, but it also means traffic often concentrates on specific backend endpoints. Under load, Medusa typically experiences pressure in a few common areas.
Product Catalog and Variant Retrieval
Storefronts frequently call endpoints like:
GET /store/products
GET /store/products/:id
GET /store/regions
GET /store/collections
GET /store/product-categories
These endpoints can become expensive when they include:
- Large product catalogs
- Deep relations such as variants, prices, images, and collections
- Region-based pricing logic
- Filtering and pagination
- Search integrations
If response times increase here, the storefront feels slow before users even add items to their cart.
Cart and Checkout Workflows
Cart flows are among the most important Medusa user journeys. Common endpoints include:
POST /store/carts
POST /store/carts/:id/line-items
POST /store/carts/:id/shipping-methods
POST /store/carts/:id/payment-sessions
POST /store/carts/:id/complete
These operations often trigger:
- Database writes
- Pricing recalculation
- Tax calculation
- Shipping option lookup
- Inventory checks
- Payment provider interactions
This makes checkout one of the most critical areas for stress testing.
Customer Authentication and Account Access
Authenticated customers may use:
POST /store/customers/token
GET /store/customers/me
GET /store/orders
Login bursts or account-heavy workflows can reveal bottlenecks in session handling, token issuance, and customer/order retrieval.
Admin API Load
Operational teams and back-office systems often hit:
POST /admin/auth
GET /admin/orders
GET /admin/products
POST /admin/products
POST /admin/draft-orders
Admin load matters too, especially during peak periods when support teams are processing orders and inventory updates in real time.
Common Medusa Bottlenecks
When performance testing Medusa, watch for these common issues:
- Slow PostgreSQL queries on products, variants, carts, and orders
- Missing indexes on frequently filtered fields
- Repeated pricing calculations
- Shipping and tax provider latency
- Redis contention or session/cache misconfiguration
- Excessive relation loading in API responses
- Plugin overhead from search, event buses, or custom workflows
A good Medusa load testing strategy should combine read-heavy traffic, write-heavy checkout flows, and authenticated customer behavior.
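Before wiring those journeys into separate Locust user classes, it helps to decide on a traffic mix. The 70/25/5 split below is an illustrative assumption, not a measured ratio; tune it against your own analytics:

```python
import random

# Illustrative journey mix: mostly catalog reads, fewer cart writes,
# and a small share of full checkouts. Adjust the weights to match
# your own storefront's real traffic.
JOURNEYS = ["browse_catalog", "cart_update", "checkout"]
WEIGHTS = [70, 25, 5]

def pick_journey(rng=random):
    """Pick the next journey for a simulated shopper, weighted by traffic share."""
    return rng.choices(JOURNEYS, weights=WEIGHTS, k=1)[0]
```

The same ratios map naturally onto Locust's `weight` attribute when each journey lives in its own `HttpUser` subclass.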
Writing Your First Load Test
Let’s begin with a simple Medusa storefront load test. This scenario simulates anonymous shoppers browsing regions, product listings, and individual product details.
Basic Medusa Storefront Browsing Test
```python
from locust import HttpUser, task, between
import random


class MedusaStoreBrowsingUser(HttpUser):
    wait_time = between(1, 3)

    product_handles = [
        "classic-tshirt",
        "premium-hoodie",
        "canvas-tote-bag",
        "running-sneakers",
    ]

    @task(2)
    def get_regions(self):
        self.client.get(
            "/store/regions",
            headers={"x-publishable-api-key": "pk_test_123456789"},
            name="/store/regions",
        )

    @task(4)
    def list_products(self):
        self.client.get(
            "/store/products?limit=12&offset=0&region_id=reg_01HZX8K4EXAMPLE"
            "&fields=*variants.calculated_price",
            headers={"x-publishable-api-key": "pk_test_123456789"},
            name="/store/products",
        )

    @task(3)
    def view_product_by_handle(self):
        handle = random.choice(self.product_handles)
        self.client.get(
            f"/store/products?handle={handle}&region_id=reg_01HZX8K4EXAMPLE",
            headers={"x-publishable-api-key": "pk_test_123456789"},
            name="/store/products?handle=[handle]",
        )
```

What This Test Does
This first script focuses on read-heavy storefront traffic:
- Fetches available regions
- Lists products for a specific region
- Retrieves products by handle, which is common in headless storefronts
This is a good starting point for baseline load testing because it helps you measure:
- Product API response times
- Region pricing overhead
- Catalog query throughput
- Error rates under concurrent browsing traffic
In LoadForge, you can run this test from multiple global test locations to see how your Medusa storefront performs for users in different regions. Real-time reporting will show percentile latency, requests per second, and failure rates as traffic ramps up.
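Latency alone can hide bad responses, so it is also worth failing requests whose bodies look wrong. The helper below sanity-checks a product listing payload; the `products`/`count` shape is an assumption about your store API version, so verify it against your own responses:

```python
def looks_like_product_page(payload, expected_limit=12):
    """Return True if a /store/products response body has a plausible shape.

    Assumes the body contains a 'products' list and an integer 'count'
    total; confirm this matches your Medusa version before relying on it.
    """
    products = payload.get("products")
    if not isinstance(products, list):
        return False
    if len(products) > expected_limit:
        return False
    return isinstance(payload.get("count"), int)
```

In a Locust task you would request with `catch_response=True` and call `response.failure("unexpected payload")` whenever this returns False, so malformed bodies count as errors in the report.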
Advanced Load Testing Scenarios
Once you have established a baseline, move on to realistic user journeys. For Medusa, that means testing customer authentication, cart creation, adding line items, selecting shipping, and completing orders.
Scenario 1: Authenticated Customer Login and Profile Access
This script simulates customers logging in and accessing their account details and order history.
```python
from locust import HttpUser, task, between
import random


class MedusaCustomerUser(HttpUser):
    wait_time = between(2, 5)

    customers = [
        {"email": "alice@example.com", "password": "SuperSecret123"},
        {"email": "bob@example.com", "password": "SuperSecret123"},
        {"email": "carol@example.com", "password": "SuperSecret123"},
    ]

    def on_start(self):
        self.token = None
        creds = random.choice(self.customers)
        response = self.client.post(
            "/store/customers/token",
            json={"email": creds["email"], "password": creds["password"]},
            headers={
                "Content-Type": "application/json",
                "x-publishable-api-key": "pk_test_123456789",
            },
            name="/store/customers/token",
        )
        if response.status_code == 200:
            self.token = response.json().get("access_token")

    @task(3)
    def get_customer_profile(self):
        if not self.token:
            return
        self.client.get(
            "/store/customers/me",
            headers={
                "Authorization": f"Bearer {self.token}",
                "x-publishable-api-key": "pk_test_123456789",
            },
            name="/store/customers/me",
        )

    @task(2)
    def get_customer_orders(self):
        if not self.token:
            return
        self.client.get(
            "/store/orders",
            headers={
                "Authorization": f"Bearer {self.token}",
                "x-publishable-api-key": "pk_test_123456789",
            },
            name="/store/orders",
        )
```

Why This Scenario Matters
Customer authentication load testing helps you validate:
- Login endpoint scalability
- JWT or token generation performance
- Customer session handling
- Order history query efficiency
This is especially important for stores with loyalty features, self-service order lookup, or account-heavy repeat customer traffic.
Scenario 2: Full Cart Flow with Line Items and Shipping
This is one of the most valuable Medusa performance testing scripts you can run. It simulates a shopper creating a cart, adding items, setting an address, retrieving shipping options, and attaching a shipping method.
```python
from locust import HttpUser, task, between
import random


class MedusaCartUser(HttpUser):
    wait_time = between(1, 2)

    region_id = "reg_01HZX8K4EXAMPLE"
    variant_ids = [
        "variant_01HZX8V1AAA111",
        "variant_01HZX8V1BBB222",
        "variant_01HZX8V1CCC333",
    ]

    def on_start(self):
        self.cart_id = None
        self.headers = {
            "Content-Type": "application/json",
            "x-publishable-api-key": "pk_test_123456789",
        }

    @task
    def cart_flow(self):
        # Create a cart once per simulated user, then reuse it.
        if not self.cart_id:
            response = self.client.post(
                "/store/carts",
                json={"region_id": self.region_id},
                headers=self.headers,
                name="/store/carts",
            )
            if response.status_code != 200:
                return
            self.cart_id = response.json()["cart"]["id"]

        self.client.post(
            f"/store/carts/{self.cart_id}/line-items",
            json={
                "variant_id": random.choice(self.variant_ids),
                "quantity": random.randint(1, 3),
            },
            headers=self.headers,
            name="/store/carts/[id]/line-items",
        )

        self.client.post(
            f"/store/carts/{self.cart_id}",
            json={
                "shipping_address": {
                    "first_name": "Jane",
                    "last_name": "Doe",
                    "address_1": "123 Market Street",
                    "city": "New York",
                    "country_code": "us",
                    "province": "NY",
                    "postal_code": "10001",
                    "phone": "+12125550199",
                },
                "email": "jane.doe@example.com",
            },
            headers=self.headers,
            name="/store/carts/[id]",
        )

        shipping_response = self.client.get(
            f"/store/shipping-options?cart_id={self.cart_id}",
            headers={"x-publishable-api-key": "pk_test_123456789"},
            name="/store/shipping-options",
        )
        if shipping_response.status_code == 200:
            shipping_options = shipping_response.json().get("shipping_options", [])
            if shipping_options:
                shipping_option_id = shipping_options[0]["id"]
                self.client.post(
                    f"/store/carts/{self.cart_id}/shipping-methods",
                    json={"option_id": shipping_option_id},
                    headers=self.headers,
                    name="/store/carts/[id]/shipping-methods",
                )
```

What This Reveals
This workflow puts real pressure on Medusa internals because it combines reads and writes. It helps uncover issues with:
- Cart creation throughput
- Variant lookup and inventory checks
- Address updates and tax/shipping recalculation
- Shipping option retrieval performance
- Write contention in the cart pipeline
If your Medusa store slows down during promotions, this is often the workflow that exposes the problem.
Scenario 3: Checkout Completion and Order Placement
Now let’s simulate the final and most business-critical step: completing checkout. In a real environment, you may use a test payment provider such as Stripe in test mode or a manual payment session configured for non-production testing.
```python
from locust import HttpUser, task, between


class MedusaCheckoutUser(HttpUser):
    wait_time = between(2, 4)

    headers = {
        "Content-Type": "application/json",
        "x-publishable-api-key": "pk_test_123456789",
    }
    region_id = "reg_01HZX8K4EXAMPLE"
    variant_id = "variant_01HZX8V1AAA111"

    @task
    def complete_checkout(self):
        cart_response = self.client.post(
            "/store/carts",
            json={"region_id": self.region_id},
            headers=self.headers,
            name="/store/carts",
        )
        if cart_response.status_code != 200:
            return
        cart_id = cart_response.json()["cart"]["id"]

        self.client.post(
            f"/store/carts/{cart_id}/line-items",
            json={"variant_id": self.variant_id, "quantity": 1},
            headers=self.headers,
            name="/store/carts/[id]/line-items",
        )

        self.client.post(
            f"/store/carts/{cart_id}",
            json={
                "email": "checkout.user@example.com",
                "billing_address": {
                    "first_name": "Checkout",
                    "last_name": "User",
                    "address_1": "500 Commerce Ave",
                    "city": "Los Angeles",
                    "country_code": "us",
                    "province": "CA",
                    "postal_code": "90001",
                    "phone": "+13105550123",
                },
                "shipping_address": {
                    "first_name": "Checkout",
                    "last_name": "User",
                    "address_1": "500 Commerce Ave",
                    "city": "Los Angeles",
                    "country_code": "us",
                    "province": "CA",
                    "postal_code": "90001",
                    "phone": "+13105550123",
                },
            },
            headers=self.headers,
            name="/store/carts/[id]",
        )

        shipping_response = self.client.get(
            f"/store/shipping-options?cart_id={cart_id}",
            headers={"x-publishable-api-key": "pk_test_123456789"},
            name="/store/shipping-options",
        )
        if shipping_response.status_code != 200:
            return
        shipping_options = shipping_response.json().get("shipping_options", [])
        if not shipping_options:
            return
        option_id = shipping_options[0]["id"]

        self.client.post(
            f"/store/carts/{cart_id}/shipping-methods",
            json={"option_id": option_id},
            headers=self.headers,
            name="/store/carts/[id]/shipping-methods",
        )

        self.client.post(
            f"/store/carts/{cart_id}/payment-sessions",
            json={},
            headers=self.headers,
            name="/store/carts/[id]/payment-sessions",
        )

        self.client.post(
            f"/store/carts/{cart_id}/payment-session",
            json={"provider_id": "manual"},
            headers=self.headers,
            name="/store/carts/[id]/payment-session",
        )

        self.client.post(
            f"/store/carts/{cart_id}/complete",
            headers=self.headers,
            name="/store/carts/[id]/complete",
        )
```

Why Checkout Testing Is Critical
This script is the closest thing to revenue-path performance testing. It validates whether Medusa can handle:
- Concurrent cart completion
- Payment session creation
- Shipping and tax calculations at scale
- Order creation throughput
- Database transaction consistency under load
This is the scenario to use for stress testing before major sales events.
Scenario 4: Admin API Load Test for Operations Teams
Many Medusa deployments also need admin-side performance testing. This script simulates an admin user authenticating and querying orders and products.
```python
from locust import HttpUser, task, between


class MedusaAdminUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        self.token = None
        response = self.client.post(
            "/admin/auth",
            json={"email": "admin@example.com", "password": "AdminSecret123"},
            headers={"Content-Type": "application/json"},
            name="/admin/auth",
        )
        if response.status_code == 200:
            self.token = response.json().get("access_token")

    @task(3)
    def list_orders(self):
        if not self.token:
            return
        self.client.get(
            "/admin/orders?limit=20&offset=0&expand=items,customer",
            headers={"Authorization": f"Bearer {self.token}"},
            name="/admin/orders",
        )

    @task(2)
    def list_products(self):
        if not self.token:
            return
        self.client.get(
            "/admin/products?limit=20&offset=0&expand=variants",
            headers={"Authorization": f"Bearer {self.token}"},
            name="/admin/products",
        )
```

This scenario is useful when your support, merchandising, or fulfillment teams rely heavily on the Medusa admin during peak order volume.
Analyzing Your Results
After you run your Medusa load test in LoadForge, focus on a few key metrics.
Response Time Percentiles
Average latency can be misleading. Look at:
- P50 for typical user experience
- P95 for slow-user experience
- P99 for worst-case spikes
For Medusa, high P95 or P99 values on cart and checkout endpoints often indicate database or external service bottlenecks.
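To see why these percentiles tell different stories, here is a nearest-rank percentile computed over toy latency data. Real reports are built from far larger samples, but the mechanics are the same:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample with at least p% of values at or below it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

latencies_ms = list(range(1, 101))   # toy data: 1 ms .. 100 ms
p50 = percentile(latencies_ms, 50)   # typical request: 50 ms
p95 = percentile(latencies_ms, 95)   # slow request: 95 ms
p99 = percentile(latencies_ms, 99)   # near worst case: 99 ms
```

With real traffic, the gap between P50 and P99 on cart and checkout endpoints is often the clearest early warning of a saturating database or a slow external provider.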
Requests Per Second
Measure how many requests your Medusa backend can sustain for:
- Product listing
- Cart updates
- Checkout completion
- Admin queries
If throughput plateaus while users increase, your application may be saturating CPU, database connections, or I/O.
Error Rate
Watch for:
- 500 server errors
- 502/504 gateway errors
- 429 rate limits
- Timeouts from payment, shipping, or tax providers
Even a small error rate in /store/carts/[id]/complete can have major revenue consequences.
Endpoint-Level Comparison
In LoadForge real-time reporting, compare endpoint performance side by side. For example:
- /store/products may stay fast under load
- /store/carts/[id]/line-items may degrade due to recalculation
- /store/carts/[id]/complete may fail because of downstream provider latency
This helps isolate the exact layer causing performance degradation.
Ramp-Up Behavior
Do not only test steady-state load. Also examine:
- How Medusa behaves during sudden traffic spikes
- Whether latency increases gradually or sharply
- Whether recovery happens after traffic drops
LoadForge’s distributed testing infrastructure makes it easy to simulate both gradual ramp-up and flash-sale-style stress testing from multiple regions.
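In Locust, a flash-sale profile can be driven by a custom `LoadTestShape` whose `tick()` method returns a `(user_count, spawn_rate)` pair. The profile itself is just a function of elapsed time; the thresholds below are illustrative, not a recommendation:

```python
def spike_profile(elapsed_s):
    """Target concurrent users at a given elapsed time (seconds).

    Illustrative flash-sale shape: warm up gradually, spike hard,
    then settle back to observe recovery. Returning None signals
    that the test should stop.
    """
    if elapsed_s < 120:                    # warm-up: ramp 0 -> 100 users
        return int(100 * elapsed_s / 120)
    if elapsed_s < 180:                    # flash sale: jump to 1000 users
        return 1000
    if elapsed_s < 300:                    # recovery: settle at 200 users
        return 200
    return None                            # end of test
```

A `LoadTestShape` subclass would call this from `tick()` using `self.get_run_time()` and pair the returned user count with a spawn rate.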
Performance Optimization Tips
If your Medusa performance testing reveals issues, these are the first areas to investigate.
Optimize Database Queries
Medusa depends heavily on PostgreSQL. Review slow queries for:
- Product filtering
- Variant joins
- Cart totals recalculation
- Order retrieval with expansions
Add indexes where needed and reduce unnecessary expand or relation-heavy queries.
Reduce Payload Size
Large JSON responses slow down both server processing and client rendering. Limit fields where possible and avoid over-fetching product data in storefront requests.
Cache Product and Region Data
Catalog and region endpoints are often ideal for caching. Consider:
- CDN caching for public storefront endpoints
- Reverse proxy caching
- Application-level caching for frequently requested catalog data
Review Shipping and Tax Providers
External provider calls can dominate checkout latency. Use test runs to determine whether shipping or tax integrations are slowing down cart and checkout endpoints.
Tune Infrastructure
For Medusa deployments, common improvements include:
- Increasing app worker count
- Tuning PostgreSQL connection pooling
- Optimizing Redis memory and eviction settings
- Scaling horizontally behind a load balancer
Test with Production-Like Data
A small seed dataset may perform well while a real catalog performs poorly. Always validate with realistic product counts, variant complexity, and order history volume.
Automate Performance Testing in CI/CD
LoadForge supports CI/CD integration, making it easier to catch Medusa performance regressions before deployment. This is especially useful when introducing new plugins, pricing logic, or checkout customizations.
Common Pitfalls to Avoid
Medusa load testing is most useful when it mirrors real production behavior. Avoid these common mistakes.
Testing Only the Homepage or Product List
A store may appear healthy under browse-only traffic but fail during cart updates or checkout. Always include transactional flows in your performance testing plan.
Ignoring Authentication Patterns
Customer and admin authentication can become bottlenecks. Include token generation and authenticated requests where relevant.
Using Unrealistic Test Data
If all users add the same variant, use the same cart flow, or hit a tiny dataset, your results may not reflect real behavior. Use varied product IDs, customer accounts, and order patterns.
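A small generator adds that variety without much code. The variant IDs below are hypothetical seed data; swap in real IDs from your own catalog:

```python
import random

# Hypothetical seed data: replace with variant IDs from your own catalog.
VARIANT_IDS = [
    "variant_01HZX8V1AAA111",
    "variant_01HZX8V1BBB222",
    "variant_01HZX8V1CCC333",
]

def random_line_item(rng=random):
    """Build a varied line-item payload so carts don't all look identical."""
    return {
        "variant_id": rng.choice(VARIANT_IDS),
        "quantity": rng.randint(1, 3),
    }

def random_customer_email(rng=random):
    """A unique-ish shopper email per virtual user."""
    return f"shopper{rng.randint(1, 10_000)}@example.com"
```

Calling these from each task spreads writes across variants and customers, which gives a more realistic picture of cache hit rates and row-level contention than a single hardcoded payload.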
Forgetting External Dependencies
Medusa performance may depend on:
- Payment providers
- Shipping providers
- Tax engines
- Search services
- Event buses
If these are mocked or disabled, your checkout load test may underestimate production latency.
Running Tests Without Monitoring the Database
Application metrics alone are not enough. Correlate LoadForge results with:
- PostgreSQL CPU and query timing
- Redis performance
- Container or VM resource usage
- Reverse proxy metrics
Overlooking Cleanup
Checkout and cart tests can generate lots of test orders, carts, and customer activity. Make sure your test environment can be reset or isolated from production data.
Conclusion
Medusa is a powerful commerce platform, but like any e-commerce system, it must be validated under realistic traffic before peak demand hits. By load testing Medusa storefront APIs, customer authentication, cart flows, checkout completion, and admin operations, you can identify bottlenecks before they affect revenue.
Using LoadForge, you can run realistic Medusa load testing at scale with Locust-based scripts, cloud-based infrastructure, distributed testing, global test locations, real-time reporting, and CI/CD integration. That makes it much easier to move from guesswork to measurable Medusa performance testing.
If you want to improve API speed, cart reliability, and order handling at scale, now is the perfect time to build your Medusa load tests and run them with LoadForge.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.