
Caddy Load Testing Guide with LoadForge
Introduction
Caddy is a modern web server and reverse proxy known for its automatic HTTPS, simple configuration, and strong performance. Teams often choose Caddy to terminate TLS, serve static assets, route traffic to upstream applications, and simplify edge infrastructure. But even though Caddy is easy to configure, you still need to verify how it behaves under real traffic conditions.
Load testing Caddy helps you answer critical questions:
- How many requests per second can your Caddy server handle?
- What happens to latency when TLS handshakes increase?
- How well does Caddy perform as a reverse proxy in front of APIs and web apps?
- Does request throughput degrade under burst traffic or sustained concurrency?
- Are there bottlenecks in upstream services, compression, caching, or certificate handling?
In this guide, you’ll learn how to use LoadForge to run realistic load testing, performance testing, and stress testing scenarios against Caddy. We’ll cover basic throughput testing, authenticated API traffic through a reverse proxy, file uploads, and mixed workloads that better reflect production conditions. Since LoadForge is built on Locust, all examples use practical Python scripts you can run and expand.
If you’re benchmarking Caddy for edge delivery, reverse proxy performance, or TLS-heavy workloads, this guide will help you build meaningful tests and interpret the results.
Prerequisites
Before you begin load testing Caddy with LoadForge, make sure you have:
- A running Caddy server in a staging or test environment
- The base URL for your Caddy instance, such as https://edge.example.com
- Knowledge of what Caddy is doing:
- Serving static files
- Terminating TLS
- Reverse proxying requests to upstream services
- Handling authentication or headers
- Accepting uploads or API traffic
- Test accounts or API credentials if your routes require authentication
- A list of realistic endpoints to test, such as:
- /
- /assets/app.js
- /healthz
- /api/v1/products
- /api/v1/auth/login
- /api/v1/orders
- /upload
It also helps to understand your Caddy configuration. For example, a typical Caddyfile might include:
example.com {
    encode gzip zstd
    tls admin@example.com

    @api path /api/*
    reverse_proxy @api app:8080

    handle_path /assets/* {
        root * /srv/www
        file_server
    }

    handle /healthz {
        respond "ok" 200
    }

    log {
        output file /var/log/caddy/access.log
        format json
    }
}

This matters because your load test results may reflect not just Caddy itself, but also compression, TLS negotiation, static file serving, header manipulation, and upstream application performance.
With LoadForge, you can execute these tests from distributed cloud locations, monitor real-time reporting, and integrate performance testing into CI/CD workflows for ongoing validation.
Understanding Caddy Under Load
Caddy is efficient, but like any web server or reverse proxy, it can encounter bottlenecks depending on traffic patterns and configuration.
Common Caddy load testing scenarios
When load testing Caddy, you’re often evaluating one or more of these behaviors:
- TLS termination performance
- CPU usage from handshakes
- Connection reuse efficiency
- Cipher suite overhead
- Reverse proxy throughput
- Request forwarding latency
- Upstream connection pooling
- Header rewriting overhead
- Static asset delivery
- File serving speed
- Compression overhead with gzip or zstd
- Cache-control behavior
- Request buffering and uploads
- Memory pressure during large payloads
- Timeout handling
- Upstream backpressure
- Authentication and routing
- JWT or session cookie forwarding
- Path matching and middleware overhead
- Rate limiting or access policy impact
Common bottlenecks
Even if Caddy is fast, your performance testing may expose issues in surrounding infrastructure:
- Slow upstream applications behind reverse_proxy
- Insufficient CPU for TLS-heavy traffic
- Compression increasing response times under load
- Disk I/O bottlenecks when serving large static files
- Misconfigured keep-alive settings
- Small upstream connection pools
- Rate limiting or WAF rules triggering unexpectedly
- Large request bodies causing buffering delays
What to measure
During load testing and stress testing, pay close attention to:
- Requests per second
- Median and p95/p99 response times
- TLS connection behavior
- Error rates such as 502, 503, and 504
- Throughput by endpoint
- Latency differences between static and proxied routes
- Resource utilization on the Caddy host and upstream services
The key point is that Caddy may appear healthy while upstream services are failing, or vice versa. Your test design should isolate both edge performance and full end-to-end behavior.
Writing Your First Load Test
Let’s start with a simple performance testing script that checks basic Caddy functionality: the homepage, a health endpoint, and a static asset. This is useful for benchmarking request throughput and baseline latency.
Basic Caddy throughput test
from locust import HttpUser, task, between

class CaddyBasicUser(HttpUser):
    wait_time = between(1, 3)

    @task(5)
    def homepage(self):
        self.client.get(
            "/",
            name="GET /",
            headers={
                "Accept": "text/html,application/xhtml+xml",
                "User-Agent": "LoadForge-Caddy-BasicTest/1.0"
            }
        )

    @task(3)
    def health_check(self):
        self.client.get(
            "/healthz",
            name="GET /healthz",
            headers={
                "Accept": "text/plain",
                "User-Agent": "LoadForge-Caddy-BasicTest/1.0"
            }
        )

    @task(2)
    def static_asset(self):
        self.client.get(
            "/assets/app.js",
            name="GET /assets/app.js",
            headers={
                "Accept": "application/javascript",
                "Accept-Encoding": "gzip, deflate, br",
                "User-Agent": "LoadForge-Caddy-BasicTest/1.0"
            }
        )

What this test does
This script simulates users hitting three common Caddy routes:
- / for your main HTML page
- /healthz for a lightweight health check
- /assets/app.js for static asset delivery
This gives you a quick view of:
- Static file serving performance
- Compression overhead
- Basic request throughput
- Latency consistency for simple routes
When to use this test
Use this baseline test when you want to:
- Validate that Caddy is reachable and stable
- Compare performance before and after config changes
- Benchmark TLS and static content serving
- Establish a baseline before more complex reverse proxy tests
In LoadForge, set the host to your Caddy URL, such as https://edge.example.com, and gradually ramp users to observe how latency changes as concurrency increases.
Advanced Load Testing Scenarios
Basic throughput tests are useful, but real Caddy deployments often sit in front of authenticated APIs, uploads, and dynamic endpoints. The following examples simulate more realistic workloads.
Scenario 1: Authenticated API traffic through Caddy reverse proxy
A common Caddy setup places it in front of an application API. In this case, you want to test not only Caddy’s routing and TLS termination, but also how it handles authenticated traffic, JSON payloads, and upstream response times.
from locust import HttpUser, task, between
import random

class CaddyAuthenticatedAPIUser(HttpUser):
    wait_time = between(1, 2)
    token = None

    def on_start(self):
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": "loadtest.user@example.com",
                "password": "SuperSecure123!"
            },
            headers={
                "Content-Type": "application/json",
                "Accept": "application/json",
                "User-Agent": "LoadForge-Caddy-APIAuth/1.0"
            },
            name="POST /api/v1/auth/login"
        )
        if response.status_code == 200:
            body = response.json()
            self.token = body.get("access_token")

    def auth_headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json",
            "Content-Type": "application/json",
            "User-Agent": "LoadForge-Caddy-APIAuth/1.0"
        }

    @task(4)
    def list_products(self):
        category = random.choice(["networking", "compute", "storage"])
        self.client.get(
            f"/api/v1/products?category={category}&limit=20",
            headers=self.auth_headers(),
            name="GET /api/v1/products"
        )

    @task(2)
    def get_product_details(self):
        product_id = random.choice([1012, 1045, 1099, 1201])
        self.client.get(
            f"/api/v1/products/{product_id}",
            headers=self.auth_headers(),
            name="GET /api/v1/products/:id"
        )

    @task(1)
    def create_order(self):
        payload = {
            "customer_id": 55021,
            "currency": "USD",
            "items": [
                {"product_id": 1012, "quantity": 1},
                {"product_id": 1045, "quantity": 2}
            ],
            "shipping_method": "express",
            "notes": "Priority order created during load test"
        }
        self.client.post(
            "/api/v1/orders",
            json=payload,
            headers=self.auth_headers(),
            name="POST /api/v1/orders"
        )

Why this scenario matters
This test is realistic for Caddy reverse proxy performance because it includes:
- TLS termination at the edge
- Authentication traffic
- Repeated bearer token usage
- Dynamic API reads and writes
- Mixed request weights that reflect real user behavior
This helps you identify whether latency is caused by:
- Caddy proxying overhead
- Upstream application slowness
- Authentication middleware
- Backend database write contention
Scenario 2: File upload testing through Caddy
Caddy is often used to front services that accept uploads such as profile images, documents, or media files. Upload testing is important because request body handling can behave very differently from normal GET traffic.
from locust import HttpUser, task, between
from io import BytesIO
import uuid

class CaddyFileUploadUser(HttpUser):
    wait_time = between(2, 5)

    def on_start(self):
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": "uploader@example.com",
                "password": "UploadTest123!"
            },
            headers={
                "Content-Type": "application/json",
                "Accept": "application/json"
            },
            name="POST /api/v1/auth/login"
        )
        self.token = response.json().get("access_token")

    @task(3)
    def upload_profile_image(self):
        file_content = BytesIO(b"\x89PNG\r\n\x1a\n" + b"A" * 1024 * 256)
        filename = f"profile-{uuid.uuid4()}.png"
        with self.client.post(
            "/api/v1/uploads/profile-image",
            headers={
                "Authorization": f"Bearer {self.token}",
                "User-Agent": "LoadForge-Caddy-UploadTest/1.0"
            },
            files={
                "file": (filename, file_content, "image/png")
            },
            data={
                "user_id": "55021",
                "folder": "avatars"
            },
            name="POST /api/v1/uploads/profile-image",
            catch_response=True
        ) as response:
            if response.status_code not in [200, 201]:
                response.failure(f"Unexpected status code: {response.status_code}")
            else:
                response.success()

    @task(1)
    def upload_document(self):
        pdf_content = BytesIO(b"%PDF-1.4\n" + b"B" * 1024 * 1024)
        filename = f"report-{uuid.uuid4()}.pdf"
        with self.client.post(
            "/api/v1/uploads/documents",
            headers={
                "Authorization": f"Bearer {self.token}",
                "User-Agent": "LoadForge-Caddy-UploadTest/1.0"
            },
            files={
                "file": (filename, pdf_content, "application/pdf")
            },
            data={
                "project_id": "infra-benchmark-2025",
                "visibility": "internal"
            },
            name="POST /api/v1/uploads/documents",
            catch_response=True
        ) as response:
            if response.status_code not in [200, 201, 202]:
                response.failure(f"Upload failed with status {response.status_code}")
            else:
                response.success()

What this test reveals
This upload-focused load test helps uncover:
- Request body buffering delays
- Reverse proxy timeout issues
- Upload throughput ceilings
- Memory and CPU pressure during multipart processing
- Upstream storage latency
If you see errors like 413, 502, or 504, inspect both your Caddy configuration and upstream application limits.
Scenario 3: Mixed workload for static content and proxied APIs
A realistic performance testing strategy for Caddy should combine static file delivery and reverse-proxied API requests. This mixed workload often gives the clearest picture of production behavior.
from locust import HttpUser, task, between
import random

class CaddyMixedWorkloadUser(HttpUser):
    wait_time = between(1, 4)
    token = None

    def on_start(self):
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": "mixed.user@example.com",
                "password": "MixedLoad123!"
            },
            headers={
                "Content-Type": "application/json",
                "Accept": "application/json"
            },
            name="POST /api/v1/auth/login"
        )
        if response.status_code == 200:
            self.token = response.json().get("access_token")

    def api_headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json",
            "Content-Type": "application/json",
            "User-Agent": "LoadForge-Caddy-MixedWorkload/1.0"
        }

    @task(5)
    def fetch_static_pages(self):
        path = random.choice([
            "/",
            "/pricing",
            "/docs/getting-started",
            "/assets/css/site.css",
            "/assets/js/runtime.js"
        ])
        self.client.get(
            path,
            headers={
                "Accept-Encoding": "gzip, br",
                "User-Agent": "LoadForge-Caddy-MixedWorkload/1.0"
            },
            name="GET static content"
        )

    @task(3)
    def browse_api(self):
        endpoint = random.choice([
            "/api/v1/products?limit=10",
            "/api/v1/regions",
            "/api/v1/status",
        ])
        self.client.get(
            endpoint,
            headers=self.api_headers(),
            name="GET API browse"
        )

    @task(1)
    def submit_search(self):
        payload = {
            "query": random.choice(["tls certificate", "reverse proxy", "cdn edge", "object storage"]),
            "filters": {
                "region": random.choice(["us-east-1", "eu-west-1", "ap-southeast-1"]),
                "availability": "in_stock"
            },
            "page": 1,
            "page_size": 20
        }
        self.client.post(
            "/api/v1/search",
            json=payload,
            headers=self.api_headers(),
            name="POST /api/v1/search"
        )

    @task(1)
    def create_session_event(self):
        payload = {
            "session_id": f"sess-{random.randint(100000, 999999)}",
            "event_type": "page_view",
            "path": random.choice(["/", "/pricing", "/docs/getting-started"]),
            "referrer": "https://www.google.com/",
            "timestamp": "2025-04-01T12:00:00Z"
        }
        self.client.post(
            "/api/v1/events",
            json=payload,
            headers=self.api_headers(),
            name="POST /api/v1/events"
        )

Why mixed workloads are valuable
A mixed test reflects how Caddy behaves when:
- Serving assets directly
- Proxying API traffic simultaneously
- Handling compressed responses
- Managing authenticated sessions
- Balancing read-heavy and write-heavy traffic
This is often the best way to benchmark Caddy for real-world deployments rather than isolated endpoint testing.
Analyzing Your Results
Once your LoadForge test completes, the next step is understanding what the metrics mean for Caddy.
Key metrics to review
Focus on these areas in your LoadForge reports:
- Requests per second
- Indicates overall throughput
- Useful for comparing Caddy config changes
- Response time percentiles
- Median shows typical experience
- p95 and p99 reveal tail latency under load
- Failures
- Look for 4xx and 5xx patterns
- 502/503/504 often point to upstream issues
- Endpoint breakdown
- Compare static routes vs proxied API routes
- Spot whether uploads or auth flows degrade first
- User ramp behavior
- Identify the concurrency level where latency begins to spike
How to interpret Caddy-specific patterns
Fast static routes, slow API routes
This usually means Caddy itself is healthy, but upstream services are overloaded.
Rising latency during TLS-heavy tests
This may indicate CPU saturation on the Caddy host, especially if many new connections are being created.
Upload failures under stress
This can suggest body size limits, proxy timeout issues, or backend storage bottlenecks.
Stable median latency but poor p99
This often points to intermittent backend slowness, lock contention, or uneven request routing.
Use infrastructure metrics too
For accurate performance testing, combine LoadForge results with system metrics from your environment:
- CPU and memory on Caddy instances
- Network throughput
- Open connections
- TLS handshake rates
- Upstream application latency
- Disk I/O for static file serving or upload processing
LoadForge’s real-time reporting makes it easier to correlate spikes in latency with changes in request volume. If you’re testing from multiple geographic regions, distributed testing can also reveal whether performance varies by location or edge path.
Performance Optimization Tips
After load testing Caddy, you can often improve performance with a few targeted changes.
Tune TLS and connection reuse
- Ensure clients and upstreams benefit from keep-alive
- Reduce unnecessary handshake churn
- Benchmark with realistic connection behavior, not just short-lived bursts
Optimize reverse proxy settings
- Verify upstreams are not the real bottleneck
- Check health checks and failover behavior
- Review header and buffering configuration if requests are large
Be careful with compression
Compression can reduce bandwidth but increase CPU usage. During stress testing, compare results with:
- encode gzip
- encode zstd
- compression disabled for already-compressed assets
Cache static assets aggressively
For content under /assets/, use long-lived cache headers where appropriate. This won’t improve origin request handling directly, but it reduces repeated edge load in real deployments.
Separate benchmark types
Run different tests for:
- Static file throughput
- TLS termination
- Reverse proxy API traffic
- Upload performance
- Stress testing to failure
This makes it easier to isolate what Caddy is doing well and where bottlenecks appear.
Test from multiple regions
If Caddy is serving global users, use LoadForge’s global test locations to measure latency and throughput from multiple geographies. This helps distinguish server-side bottlenecks from network distance effects.
Common Pitfalls to Avoid
Load testing Caddy is straightforward, but several mistakes can lead to misleading results.
Testing only the homepage
A single endpoint rarely reflects actual traffic. Include static assets, API routes, authentication, and write operations.
Ignoring upstream dependencies
If Caddy is reverse proxying requests, poor results may come from the backend app, database, or storage service rather than Caddy itself.
Using unrealistic traffic patterns
Don’t send only identical GET requests if your real workload includes:
- authenticated sessions
- POST requests
- uploads
- mixed static and dynamic routes
Forgetting TLS impact
Caddy is often chosen for automatic HTTPS, so TLS is a major part of real-world performance. Always test over HTTPS when benchmarking production-like behavior.
Overlooking response validation
A fast 502 response is still a failure. Use catch_response=True where needed and validate status codes and payloads.
Running stress tests against production without safeguards
Stress testing can disrupt live users, trigger alerts, or overload shared dependencies. Use staging environments or tightly controlled production windows.
Not correlating app and proxy metrics
LoadForge gives you excellent load testing visibility, but you should also inspect Caddy logs, upstream logs, and host metrics to understand root causes.
Conclusion
Caddy is a powerful choice for TLS termination, reverse proxying, and static content delivery, but you need real load testing to understand how it performs under realistic traffic. By testing baseline throughput, authenticated API flows, uploads, and mixed workloads, you can identify whether bottlenecks live in Caddy itself, your configuration, or the services behind it.
LoadForge makes this process much easier with cloud-based infrastructure, distributed testing, real-time reporting, CI/CD integration, and the ability to run realistic Locust-based scripts at scale. If you’re ready to benchmark Caddy request throughput, reverse proxy performance, and TLS handling with confidence, try LoadForge and start building tests that match your production traffic.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.