
Introduction
Phoenix is built for concurrency. Running on the Erlang VM and leveraging lightweight processes, Phoenix applications can handle a large number of simultaneous connections far more efficiently than many traditional web frameworks. That architectural advantage is a big reason teams choose Phoenix for APIs, real-time applications, dashboards, marketplaces, and chat systems.
But a fast framework does not make every Phoenix app production-ready. Your controllers may be efficient while your database queries are not. Your LiveView pages may feel snappy with a handful of users but degrade under spikes in websocket traffic. Your authentication flow may perform well in isolation but become a bottleneck when thousands of users log in at once. This is why load testing Phoenix matters.
In this guide, you’ll learn how to use LoadForge to run realistic load testing, performance testing, and stress testing against Phoenix applications. We’ll cover how Phoenix behaves under load, how to write Locust scripts tailored to Phoenix APIs and browser-style flows, and how to interpret the results so you can improve concurrency, response times, and overall production readiness.
LoadForge makes this especially practical by providing cloud-based infrastructure, distributed testing, real-time reporting, global test locations, and CI/CD integration, so you can validate Phoenix performance before release and continuously monitor regressions over time.
Prerequisites
Before you begin load testing your Phoenix application, make sure you have:
- A running Phoenix application in a staging or pre-production environment
- A list of realistic endpoints to test, such as:
  - /
  - /products
  - /api/v1/products
  - /api/v1/users/sign_in
  - /api/v1/cart
  - /live
- Test accounts for authenticated scenarios
- Seeded data in your database so requests reflect realistic usage
- A clear performance goal, such as:
- 500 concurrent users
- p95 response time under 300ms
- error rate below 1%
- stable throughput under peak traffic
- LoadForge account access to run distributed load tests
It also helps to understand the Phoenix stack you are testing:
- Phoenix controllers and REST APIs
- Ecto database interactions
- Plug middleware and authentication
- Phoenix LiveView or websocket-backed features
- Caching layers such as Redis or CDN usage
- Background jobs with Oban or similar queues
When load testing Phoenix, always use a non-production environment unless you have explicit safeguards in place. Performance testing can create significant traffic and may affect real users if pointed at production systems.
Understanding Phoenix Under Load
Phoenix is highly concurrent because it runs on the BEAM, where each request or connection can be handled by lightweight processes. This gives it a strong foundation for high-throughput systems, but application-level bottlenecks still matter.
How Phoenix handles concurrency
Under load, Phoenix typically benefits from:
- Lightweight request handling on the BEAM
- Efficient connection management
- Strong support for websocket and real-time workloads
- Fault tolerance through process isolation
This often means Phoenix can continue serving requests even when parts of the system are under stress. However, the framework is only one part of the overall performance picture.
Common Phoenix bottlenecks
Even with Phoenix’s concurrency model, these issues commonly appear during load testing:
Database saturation
Ecto queries, connection pool limits, N+1 queries, and missing indexes can quickly dominate response times. Many Phoenix apps appear fast until concurrent traffic drives up database wait time.
Authentication overhead
Session-based auth, token verification, password hashing, and repeated user lookups can become expensive under heavy login or API traffic.
LiveView and websocket pressure
LiveView reduces client-side complexity, but each active connection consumes server resources. Large numbers of connected users, frequent updates, and heavy assigns can increase memory and CPU usage.
File uploads and large payloads
Phoenix apps that handle media, CSV imports, or large JSON bodies can see increased latency from request parsing, validation, and storage operations.
External dependencies
If your Phoenix app calls payment APIs, email providers, search backends, or internal microservices, those dependencies may become the true bottleneck during stress testing.
Static and dynamic page mix
A homepage may be fast due to caching while product search or checkout is much slower. Good Phoenix load testing needs a realistic mix of endpoints and user behavior.
Writing Your First Load Test
Let’s start with a basic Locust script for a Phoenix e-commerce application. This example simulates anonymous users browsing public pages and reading product data from a Phoenix JSON API.
Typical Phoenix routes might look like this:
```elixir
get "/", PageController, :index
get "/products", ProductController, :index
get "/products/:slug", ProductController, :show

scope "/api/v1", MyAppWeb do
  pipe_through :api

  get "/products", ProductApiController, :index
  get "/products/:id", ProductApiController, :show
end
```

Here is a realistic first load test.
```python
from locust import HttpUser, task, between


class PhoenixAnonymousUser(HttpUser):
    wait_time = between(1, 3)

    @task(3)
    def homepage(self):
        self.client.get("/", name="GET /")

    @task(2)
    def product_listing_page(self):
        self.client.get("/products?page=1&sort=popular", name="GET /products")

    @task(4)
    def api_product_list(self):
        self.client.get(
            "/api/v1/products?category=electronics&page=1&per_page=24",
            name="GET /api/v1/products"
        )

    @task(2)
    def api_product_detail(self):
        product_id = 101
        self.client.get(f"/api/v1/products/{product_id}", name="GET /api/v1/products/:id")

    @task(1)
    def product_detail_page(self):
        self.client.get("/products/phoenix-wireless-headphones", name="GET /products/:slug")
```

What this test does
This script simulates a common anonymous browsing pattern:
- Visiting the homepage
- Viewing a product listing page
- Fetching product data through the API
- Opening product detail pages
This is useful for baseline performance testing because it isolates public read-heavy traffic, which is often the largest portion of real-world traffic.
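The @task weights in the browsing script above (3, 2, 4, 2, 1) control the traffic mix: Locust picks each task in proportion to its weight. As a quick sanity check on what mix those weights actually produce, here is a small stdlib sketch (the endpoint names mirror the script):

```python
from fractions import Fraction

# Task weights from the anonymous browsing script above
weights = {
    "GET /": 3,
    "GET /products": 2,
    "GET /api/v1/products": 4,
    "GET /api/v1/products/:id": 2,
    "GET /products/:slug": 1,
}

total = sum(weights.values())  # 12

# Expected share of requests per endpoint
mix = {name: Fraction(w, total) for name, w in weights.items()}

for name, share in mix.items():
    print(f"{name}: {float(share):.1%}")
```

With these weights, a third of all requests hit the JSON product list and a quarter hit the homepage, which is worth keeping in mind when comparing per-endpoint request counts in your results.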
Why this matters for Phoenix
For Phoenix, this kind of load test helps identify:
- Controller rendering performance
- JSON serialization overhead
- Database query efficiency
- Static page and API response consistency
- Cache effectiveness under concurrent traffic
Running this in LoadForge
In LoadForge, you can upload this Locust script, configure user count and spawn rate, and launch distributed load from multiple regions if needed. This is especially valuable when testing Phoenix apps serving users across different geographies.
A good first target might be:
- 100 to 500 concurrent users
- Spawn rate of 10 to 25 users per second
- Test duration of 10 to 15 minutes
This gives you a stable baseline before moving into more advanced Phoenix scenarios.
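As a sanity check on those numbers: the ramp-up phase is short relative to the run, so almost the whole test measures steady-state behavior rather than spawn-time noise. A tiny sketch of the arithmetic:

```python
# Ramp-up arithmetic for the baseline targets above
target_users = 500
spawn_rate = 25          # users per second
duration_s = 10 * 60     # 10-minute run

ramp_s = target_users / spawn_rate
steady_state_s = duration_s - ramp_s

print(f"ramp-up: {ramp_s:.0f}s, steady state: {steady_state_s:.0f}s "
      f"({steady_state_s / duration_s:.0%} of the run)")
```

At 25 users per second, 500 users are fully spawned in 20 seconds, leaving roughly 97% of a 10-minute run at full load.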
Advanced Load Testing Scenarios
A realistic Phoenix load testing strategy should go beyond public pages. Let’s look at more advanced scenarios common in production Phoenix applications.
Authenticated API Load Testing with JWT
Many Phoenix apps expose authenticated JSON APIs using Guardian, Pow, or custom token-based auth. A realistic performance test should simulate login and subsequent authenticated requests.
Assume the Phoenix app exposes these endpoints:
```
POST /api/v1/users/sign_in
GET  /api/v1/account
GET  /api/v1/orders
POST /api/v1/cart/items
```
Here is a Locust script that logs in and performs authenticated actions.
```python
from locust import HttpUser, task, between
import random


class PhoenixAuthenticatedApiUser(HttpUser):
    wait_time = between(1, 2)
    token = None

    def on_start(self):
        credentials = {
            "email": "loadtest1@example.com",
            "password": "SuperSecret123!"
        }
        with self.client.post(
            "/api/v1/users/sign_in",
            json=credentials,
            name="POST /api/v1/users/sign_in",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                body = response.json()
                self.token = body.get("data", {}).get("token")
                if self.token:
                    response.success()
                else:
                    response.failure("JWT token missing in sign-in response")
            else:
                response.failure(f"Login failed: {response.status_code}")

    def auth_headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json"
        }

    @task(2)
    def get_account(self):
        self.client.get(
            "/api/v1/account",
            headers=self.auth_headers(),
            name="GET /api/v1/account"
        )

    @task(3)
    def get_orders(self):
        self.client.get(
            "/api/v1/orders?status=recent&limit=10",
            headers=self.auth_headers(),
            name="GET /api/v1/orders"
        )

    @task(1)
    def add_to_cart(self):
        payload = {
            "product_id": random.choice([101, 102, 103, 104]),
            "quantity": random.randint(1, 3)
        }
        self.client.post(
            "/api/v1/cart/items",
            json=payload,
            headers=self.auth_headers(),
            name="POST /api/v1/cart/items"
        )
```

What this test reveals
This scenario is useful for evaluating:
- Login throughput and auth latency
- Password hashing overhead
- JWT generation and validation cost
- Session/account lookup performance
- Cart and order endpoint behavior under concurrency
For Phoenix specifically, if login is slow under load, examine:
- Bcrypt or Argon2 cost settings
- Database lookups on user records
- Plug pipeline overhead
- Token signing and verification
- Connection pool limits in Ecto
If your app uses session cookies instead of JWTs, you can adapt the same pattern by capturing cookies from the login response and reusing them automatically through the Locust client session.
Testing a Checkout Flow with CSRF and Session Auth
Many server-rendered Phoenix applications use session-based authentication and CSRF protection. A realistic browser-like flow should include login, browsing, cart updates, and checkout initiation.
Assume routes like:
```
GET  /users/log_in
POST /users/log_in
GET  /cart
POST /cart/items
GET  /checkout
POST /checkout
```
In a Phoenix app, CSRF tokens are usually embedded in forms. Here is a simplified but realistic test that extracts the token and submits authenticated requests.
```python
from locust import HttpUser, task, between
from bs4 import BeautifulSoup
import random


class PhoenixSessionUser(HttpUser):
    wait_time = between(2, 5)

    def extract_csrf_token(self, html):
        soup = BeautifulSoup(html, "html.parser")
        token_input = soup.find("input", {"name": "_csrf_token"})
        return token_input["value"] if token_input else None

    def on_start(self):
        login_page = self.client.get("/users/log_in", name="GET /users/log_in")
        csrf_token = self.extract_csrf_token(login_page.text)
        login_data = {
            "_csrf_token": csrf_token,
            "user[email]": "buyer1@example.com",
            "user[password]": "SuperSecret123!"
        }
        self.client.post(
            "/users/log_in",
            data=login_data,
            name="POST /users/log_in"
        )

    @task(3)
    def view_cart(self):
        self.client.get("/cart", name="GET /cart")

    @task(2)
    def add_cart_item(self):
        cart_page = self.client.get("/products/phoenix-wireless-headphones", name="GET product for cart")
        csrf_token = self.extract_csrf_token(cart_page.text)
        payload = {
            "_csrf_token": csrf_token,
            "cart_item[product_id]": "101",
            "cart_item[quantity]": str(random.randint(1, 2))
        }
        self.client.post(
            "/cart/items",
            data=payload,
            name="POST /cart/items"
        )

    @task(1)
    def start_checkout(self):
        checkout_page = self.client.get("/checkout", name="GET /checkout")
        csrf_token = self.extract_csrf_token(checkout_page.text)
        payload = {
            "_csrf_token": csrf_token,
            "checkout[address]": "123 Market Street",
            "checkout[city]": "San Francisco",
            "checkout[state]": "CA",
            "checkout[postal_code]": "94105",
            "checkout[payment_method]": "card"
        }
        self.client.post(
            "/checkout",
            data=payload,
            name="POST /checkout"
        )
```

Why this scenario matters
This test is closer to how real users interact with many Phoenix applications:
- HTML pages rendered by controllers or LiveView
- Session cookies managed automatically
- CSRF token extraction from forms
- Stateful cart and checkout actions
This type of load testing is valuable because it exposes issues that API-only tests miss, including:
- Template rendering bottlenecks
- Session store contention
- Form processing overhead
- Middleware and plug pipeline latency
- Database writes during cart and checkout operations
If checkout slows down under load, investigate:
- Transaction scope in Ecto
- inventory locking patterns
- payment provider latency
- synchronous email sending
- expensive side effects in controller actions
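One practical note on the session script: if you would rather not add the BeautifulSoup dependency, the `_csrf_token` hidden input that Phoenix form helpers emit can be extracted with the standard library's html.parser. Here is a sketch that mirrors the extract_csrf_token helper above:

```python
from html.parser import HTMLParser


class CsrfTokenParser(HTMLParser):
    """Collects the value of the first input named _csrf_token."""

    def __init__(self):
        super().__init__()
        self.token = None

    def handle_starttag(self, tag, attrs):
        if tag == "input" and self.token is None:
            attrs = dict(attrs)
            if attrs.get("name") == "_csrf_token":
                self.token = attrs.get("value")


def extract_csrf_token(html):
    parser = CsrfTokenParser()
    parser.feed(html)
    return parser.token


# Example form as Phoenix might render it (token value is invented)
sample = '<form><input name="_csrf_token" type="hidden" value="abc123"></form>'
print(extract_csrf_token(sample))  # abc123
```

This keeps the load test dependency-free, which matters when running scripts on distributed workers.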
Database-Heavy Search and Filtering Scenario
Search, filtering, and faceted browse pages are often among the most expensive endpoints in Phoenix apps. They may involve multiple joins, sorting, pagination, aggregations, and dynamic query building in Ecto.
Assume an endpoint like:
```
GET /api/v1/search?q=laptop&category=electronics&min_price=500&max_price=2000&sort=rating_desc&page=1
```
Here is a Locust script for stressing that behavior.
```python
from locust import HttpUser, task, between
import random


class PhoenixSearchUser(HttpUser):
    wait_time = between(1, 3)

    search_terms = ["laptop", "keyboard", "monitor", "chair", "desk"]
    categories = ["electronics", "office", "accessories"]
    sort_options = ["price_asc", "price_desc", "rating_desc", "newest"]

    @task
    def search_products(self):
        q = random.choice(self.search_terms)
        category = random.choice(self.categories)
        sort = random.choice(self.sort_options)
        min_price = random.choice([50, 100, 250, 500])
        max_price = random.choice([500, 1000, 2000, 3000])
        page = random.randint(1, 5)
        self.client.get(
            f"/api/v1/search?q={q}&category={category}&min_price={min_price}&max_price={max_price}&sort={sort}&page={page}",
            name="GET /api/v1/search"
        )
```

What this uncovers
This is a classic performance testing scenario for Phoenix because search endpoints often trigger:
- Dynamic Ecto query generation
- expensive ORDER BY operations
- large result sets
- pagination inefficiencies
- missing indexes
- cache misses
If response times climb sharply during this test, the issue is often not Phoenix itself but the database access pattern behind the endpoint.
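One practical note if you extend the search script: building query strings with an f-string breaks as soon as search terms contain spaces or other characters that need escaping. urllib.parse.urlencode handles this; a sketch of the same request construction (parameter values are examples):

```python
from urllib.parse import urlencode

params = {
    "q": "gaming laptop",   # contains a space, must be escaped
    "category": "electronics",
    "min_price": 500,
    "max_price": 2000,
    "sort": "rating_desc",
    "page": 1,
}

# urlencode escapes values and joins them with '&'
url = f"/api/v1/search?{urlencode(params)}"
print(url)
```

Inside the Locust task you would then call self.client.get(url, name="GET /api/v1/search") exactly as before.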
Analyzing Your Results
After running your Phoenix load test in LoadForge, focus on a few key metrics rather than just average response time.
Response time percentiles
Look at:
- p50 for typical experience
- p95 for degraded but common experience
- p99 for worst-case user impact
A Phoenix app may show a good average while still having poor p95 or p99 latency due to database contention or intermittent slow queries.
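To make that concrete, here is a stdlib sketch with an invented latency sample where the mean looks healthy while p95 and p99 do not (the pattern you see when 10% of requests hit a slow query):

```python
import statistics

# Hypothetical response times in ms: 90 fast requests, 10 slow ones
latencies = [80] * 90 + [1200] * 10

mean = statistics.mean(latencies)
# quantiles(n=100) returns 99 cut points; index 49 -> p50, 94 -> p95, 98 -> p99
cuts = statistics.quantiles(latencies, n=100)
p50, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"mean={mean:.0f}ms p50={p50:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms")
```

Here the mean is 192ms, which might pass a naive threshold, while every request in the p95 tail takes 1200ms. This is why percentile targets belong in your performance goals.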
Throughput
Check requests per second across your endpoints:
- Can Phoenix sustain the expected production rate?
- Does throughput plateau unexpectedly?
- Does higher concurrency increase throughput or only increase latency?
If throughput stops scaling while response times spike, you may have hit a bottleneck in the database, app instance CPU, or connection pool.
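The relationship between these quantities is Little's law: concurrent users ≈ throughput × (response time + think time). A quick estimate of the steady-state request rate a given user count can generate, using hypothetical numbers matching the earlier scripts:

```python
# Little's law: users = throughput * (response_time + think_time)
users = 500
avg_think_s = 2.0      # wait_time = between(1, 3) averages ~2s
avg_response_s = 0.3   # hypothetical 300ms average response time

throughput = users / (avg_think_s + avg_response_s)
print(f"expected steady-state throughput ~= {throughput:.0f} req/s")
```

If the measured rate is well below this estimate while latency climbs, requests are queueing somewhere: the app tier, the Ecto pool, or the database itself.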
Error rate
Watch for:
- 401 or 403 errors from auth misconfiguration
- 422 errors from invalid CSRF or form payloads
- 429 errors from rate limiting
- 500 errors from application exceptions
- 502/503/504 errors from proxy or upstream failures
In Phoenix, 500s under load often point to resource exhaustion, timeouts, or unhandled edge cases triggered by concurrency.
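When digging into failures, it helps to bucket status codes before computing an error rate, since a burst of 429s means something very different from a burst of 500s. A small stdlib sketch over a hypothetical sample of collected responses:

```python
from collections import Counter

# Hypothetical status codes collected from a test run
statuses = [200] * 950 + [429] * 20 + [500] * 25 + [503] * 5

buckets = Counter()
for code in statuses:
    if code < 400:
        buckets["ok"] += 1
    elif code == 429:
        buckets["rate_limited"] += 1
    elif code < 500:
        buckets["client_error"] += 1
    else:
        buckets["server_error"] += 1

error_rate = 1 - buckets["ok"] / len(statuses)
print(buckets, f"error rate = {error_rate:.1%}")
```

In this sample the overall error rate is 5%, but the breakdown shows most failures are server errors rather than rate limiting, which points at the application rather than a proxy policy.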
Endpoint-level breakdown
LoadForge’s real-time reporting helps you identify which Phoenix endpoints degrade first. This is important because:
- / may remain fast
- /api/v1/search may become slow
- /api/v1/users/sign_in may fail under stress
- /checkout may have the worst p95
This endpoint-specific visibility helps prioritize optimization work.
Geographic differences
If your users are globally distributed, use LoadForge’s global test locations to compare latency by region. This is especially useful for Phoenix apps deployed behind CDNs, regional load balancers, or multi-region infrastructure.
Correlate with Phoenix telemetry
For best results, compare LoadForge metrics with application-side observability:
- Phoenix telemetry
- Ecto query logs
- database CPU and connection usage
- BEAM memory and scheduler utilization
- reverse proxy metrics
- cache hit rates
This combination makes it much easier to determine whether the bottleneck is in Phoenix, PostgreSQL, Redis, or an external dependency.
Performance Optimization Tips
Once your load testing identifies weak spots, these are the most common Phoenix optimization opportunities.
Optimize Ecto queries
- Add indexes for frequently filtered or sorted columns
- Avoid N+1 queries with preloading where appropriate
- Limit selected columns for API responses
- Review slow joins and large scans
- Tune pagination queries
Tune database connection pools
If Phoenix handles many concurrent requests but the database pool is too small, requests will queue. Review:
- Ecto repo pool size
- database max connections
- queue time in telemetry metrics
Cache expensive reads
For public endpoints and repeated lookups:
- Cache product listings or search results where possible
- Use CDN caching for static and semi-static content
- Cache computed aggregates
Reduce authentication overhead
- Avoid unnecessary user lookups on every request
- Tune password hashing cost appropriately for your environment
- Cache session-related metadata if safe
- Keep auth plugs efficient
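Password hashing cost is a deliberate trade-off: raising the work factor roughly multiplies CPU time per login attempt, and that cost compounds under concurrent sign-ins. Phoenix apps typically use Bcrypt or Argon2; the effect can be illustrated with stdlib PBKDF2 (iteration counts here are illustrative, not recommendations):

```python
import hashlib
import time

password = b"SuperSecret123!"
salt = b"load-test-salt"

def hash_cost(iterations):
    """Time one password hash at the given work factor."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return time.perf_counter() - start

# Higher work factor -> more CPU per login attempt
low = hash_cost(10_000)
high = hash_cost(200_000)
print(f"10k iterations: {low * 1000:.1f}ms, 200k iterations: {high * 1000:.1f}ms")
```

If login latency dominates your results, measuring hashing time in isolation like this helps separate hashing cost from database lookup cost before you tune either.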
Offload background work
Don’t perform slow side effects inline during request handling. Move tasks such as:
- email sending
- analytics events
- image processing
- report generation
to background jobs using Oban or another queue system.
Review LiveView payload size
For LiveView-heavy Phoenix apps:
- minimize assigns
- avoid large diffs
- reduce update frequency
- paginate or lazy load large datasets
Scale horizontally
Phoenix scales well across multiple nodes. Use load testing to validate how additional instances improve throughput. LoadForge’s distributed testing is especially useful when validating autoscaling behavior or multi-instance deployments.
Common Pitfalls to Avoid
Phoenix load testing is straightforward, but teams often make a few avoidable mistakes.
Testing only the homepage
A homepage test rarely reflects real production load. Include authenticated flows, search, cart actions, and write-heavy endpoints.
Ignoring database realism
If your staging database is tiny, your results may be overly optimistic. Seed realistic volumes of users, products, orders, and logs.
Skipping authentication flows
Many bottlenecks appear only after login. Test JWT auth, session auth, and permission-checked endpoints.
Not modeling user think time
Real users pause between actions. Without wait times, your test may behave more like a synthetic stress test than a realistic load test.
Overlooking websocket or LiveView traffic
If your Phoenix app depends heavily on real-time features, pure HTTP tests may not capture the full production profile. Pair HTTP load testing with websocket-focused validation where needed.
Running tests from a single region
If your users are spread globally, testing from one location may hide latency and routing issues. Use LoadForge’s cloud-based infrastructure and global test locations for more realistic results.
Focusing only on averages
Average response time can look fine while p95 and p99 are unacceptable. Always inspect percentiles and error rates.
Forgetting CI/CD performance checks
Performance regressions often sneak in over time. LoadForge CI/CD integration makes it easier to run repeatable Phoenix performance testing as part of deployment pipelines.
Conclusion
Phoenix is an excellent framework for high-concurrency applications, but real production readiness still depends on how your app handles authentication, database load, dynamic queries, and stateful user flows under pressure. With the right load testing strategy, you can validate that your Phoenix application performs well not just in development, but under realistic traffic and peak demand.
Using LoadForge, you can create Locust-based Phoenix load tests, run them at scale with distributed cloud infrastructure, analyze results in real time, and integrate performance testing into your release process. Whether you’re validating API throughput, stress testing checkout flows, or measuring search performance, LoadForge gives you the tools to test Phoenix with confidence.
If you’re ready to see how your Phoenix app performs under real-world load, try LoadForge and start building a repeatable, production-focused performance testing workflow today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.