
Nginx Load Testing Guide with LoadForge

Introduction

Nginx is one of the most widely used web servers and reverse proxies in modern infrastructure. It commonly sits in front of application servers, APIs, static assets, media delivery pipelines, and containerized services. Because Nginx often acts as the first layer every request touches, its performance has a direct impact on user experience, upstream service stability, and overall infrastructure efficiency.

A proper Nginx load testing strategy helps you answer critical questions:

  • How many requests per second can your Nginx server or reverse proxy sustain?
  • What happens to latency under sustained traffic?
  • Does TLS termination become a bottleneck?
  • Are static assets, cached responses, and proxied API requests performing as expected?
  • Where do failures begin during stress testing?

In this guide, you’ll learn how to perform realistic load testing and performance testing for Nginx using LoadForge. Since LoadForge uses Locust under the hood, all examples are written in Python with Locust and can be run locally or scaled through LoadForge’s cloud-based infrastructure. We’ll cover basic endpoint testing, authenticated API traffic behind Nginx, file uploads, and mixed workloads that resemble real production traffic. Along the way, we’ll also show how LoadForge features like distributed testing, real-time reporting, global test locations, and CI/CD integration can help you validate Nginx performance at scale.

Prerequisites

Before you begin load testing Nginx, make sure you have the following:

  • A running Nginx server or reverse proxy
  • Permission to test the environment
  • A list of realistic endpoints exposed through Nginx
  • Knowledge of whether Nginx is serving:
    • static files
    • proxied API traffic
    • TLS termination
    • caching
    • uploads or downloads
  • Test credentials if authentication is required
  • LoadForge account, or a local Locust setup for script validation

You should also identify the environment you want to test:

  • Development: useful for script validation only
  • Staging: best for realistic performance testing
  • Production-like pre-release environment: ideal for stress testing and capacity planning

For Nginx specifically, it’s helpful to know:

  • Worker process and connection settings
  • Whether gzip, brotli, HTTP/2, or keep-alive are enabled
  • Proxy timeout settings
  • Cache behavior
  • Upstream application architecture

If your Nginx instance sits in front of an application, remember that your load test is measuring the full path, not just Nginx itself. That’s often exactly what you want, but it’s important to interpret results correctly.

Understanding Nginx Under Load

Nginx is event-driven and highly efficient at handling many concurrent connections. That design makes it excellent for serving static content, terminating SSL/TLS, and proxying requests to backend services. However, even a high-performance Nginx deployment can run into bottlenecks during load testing.

Common Nginx bottlenecks during performance testing

Worker saturation

If worker_processes or worker_connections are undersized, Nginx may hit connection limits under high concurrency.

TLS overhead

If Nginx is terminating HTTPS, CPU usage can rise sharply during handshake-heavy traffic, especially when clients don’t reuse connections effectively.

Upstream latency

When Nginx proxies requests to application servers, slow upstream responses increase request queueing and inflate end-user latency.

Disk I/O for static files

Serving large assets or logs from slower disks can reduce throughput.

Buffer and timeout misconfiguration

Improper proxy buffer sizes, client body limits, or timeout values can cause request failures or degraded performance.

Rate limiting and security controls

Modules like limit_req, WAFs, and bot protections can affect test results if not accounted for.

What to measure when load testing Nginx

When running load testing or stress testing against Nginx, focus on:

  • Requests per second
  • Median and p95/p99 response times
  • Error rates
  • Connection failures and timeouts
  • Throughput for static and dynamic content
  • Behavior under increasing concurrency
  • Differences between cached and uncached responses

LoadForge’s real-time reporting makes it easier to spot the exact moment latency begins to rise or error rates start appearing, which is especially useful when testing Nginx as a reverse proxy across multiple regions.
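You can apply the same idea to exported results after a run. The sketch below, assuming your stats can be reduced to a chronological list of success/failure flags (the data shape and thresholds are illustrative, not a LoadForge export format), finds the first moment a rolling error rate crosses a threshold:

```python
# Hypothetical post-run analysis: find the first point in a test where a
# rolling error rate crosses a threshold. The sample shape (a list of
# booleans, True = request succeeded) is an assumption; adapt it to
# whatever your exported stats actually contain.

def first_error_spike(samples, window=50, max_error_rate=0.05):
    """Return the index where the rolling error rate first exceeds
    max_error_rate, or None if it never does."""
    failures_in_window = 0
    for i, ok in enumerate(samples):
        if not ok:
            failures_in_window += 1
        if i >= window:
            # Slide the window: drop the sample that just left it.
            if not samples[i - window]:
                failures_in_window -= 1
        current_window = min(i + 1, window)
        if failures_in_window / current_window > max_error_rate:
            return i
    return None

# Example: a healthy run that starts failing every other request near the end.
samples = [True] * 200 + [True, False] * 25
spike_at = first_error_spike(samples, window=20, max_error_rate=0.10)
```

Locating that index (and the timestamp behind it) tells you which concurrency level pushed Nginx or its upstream past its limit.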

Writing Your First Load Test

Let’s start with a simple Nginx load test that simulates users loading a homepage, static assets, and a health endpoint. This is a practical first step for validating that Nginx can serve common public traffic efficiently.

Basic Nginx homepage and static asset test

```python
from locust import HttpUser, task, between

class NginxPublicTrafficUser(HttpUser):
    wait_time = between(1, 3)

    @task(5)
    def homepage(self):
        self.client.get(
            "/",
            headers={
                "Accept": "text/html,application/xhtml+xml",
                "User-Agent": "Mozilla/5.0 LoadForge-Nginx-Test"
            },
            name="GET /"
        )

    @task(3)
    def static_css(self):
        self.client.get(
            "/assets/css/app.min.css",
            headers={
                "Accept": "text/css,*/*;q=0.1",
                "Referer": "https://www.example.com/"
            },
            name="GET /assets/css/app.min.css"
        )

    @task(2)
    def static_js(self):
        self.client.get(
            "/assets/js/app.bundle.js",
            headers={
                "Accept": "*/*",
                "Referer": "https://www.example.com/"
            },
            name="GET /assets/js/app.bundle.js"
        )

    @task(1)
    def health_check(self):
        self.client.get(
            "/healthz",
            headers={"Accept": "application/json"},
            name="GET /healthz"
        )
```

What this test validates

This first script is useful for measuring:

  • HTML delivery performance
  • Static file throughput through Nginx
  • Health endpoint responsiveness
  • Basic keep-alive connection behavior

This is a common baseline for Nginx performance testing because many deployments serve both public pages and static assets directly from Nginx.
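The baseline script only records status codes. To catch subtler problems, such as a compressed-to-nothing body or a misconfigured MIME type, you can add explicit validation. The helper below is a sketch of checks you might place inside a Locust `catch_response=True` block; the expected content type and minimum size are illustrative values, not requirements:

```python
# A small validation helper for static asset responses. In a Locust task you
# would call this inside a `with self.client.get(..., catch_response=True)`
# block and mark the response as failed when it returns a reason string.

def check_static_asset(status_code, content_type, body_length,
                       expected_type="text/css", min_bytes=1):
    """Return None if the response looks healthy, else a failure reason."""
    if status_code != 200:
        return f"unexpected status {status_code}"
    if expected_type not in content_type:
        return f"unexpected content type {content_type!r}"
    if body_length < min_bytes:
        return "empty or truncated body"
    return None
```

Failing fast on a wrong content type often exposes a broken `location` block or `types` mapping long before throughput numbers would.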

Why this matters for Nginx

If you see slow response times even on static files, the issue may be related to:

  • disk performance
  • gzip or brotli compression overhead
  • TLS configuration
  • worker connection limits
  • network throughput constraints

If the homepage is slow but static assets are fast, the bottleneck may be upstream application processing rather than Nginx itself.

Advanced Load Testing Scenarios

Once your baseline test is working, the next step is to simulate more realistic traffic patterns. Nginx often handles authenticated API requests, proxying to upstream services, file uploads, and cacheable content. These scenarios reveal how Nginx behaves under production-like conditions.

Scenario 1: Authenticated API traffic through Nginx reverse proxy

A very common Nginx setup is as a reverse proxy for an API application. In this example, users authenticate, retrieve account data, and submit an order.

```python
from locust import HttpUser, task, between
import random

class NginxReverseProxyApiUser(HttpUser):
    wait_time = between(1, 2)
    token = None

    def on_start(self):
        response = self.client.post(
            "/api/v1/auth/login",
            json={
                "email": "loadtest.user@example.com",
                "password": "Str0ngP@ssw0rd!"
            },
            headers={
                "Content-Type": "application/json",
                "Accept": "application/json"
            },
            name="POST /api/v1/auth/login"
        )

        if response.status_code == 200:
            self.token = response.json().get("access_token")

    def auth_headers(self):
        return {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json",
            "Content-Type": "application/json"
        }

    @task(4)
    def get_profile(self):
        if not self.token:
            return

        self.client.get(
            "/api/v1/account/profile",
            headers=self.auth_headers(),
            name="GET /api/v1/account/profile"
        )

    @task(3)
    def list_orders(self):
        if not self.token:
            return

        self.client.get(
            "/api/v1/orders?status=open&limit=25",
            headers=self.auth_headers(),
            name="GET /api/v1/orders"
        )

    @task(2)
    def search_catalog(self):
        if not self.token:
            return

        category = random.choice(["networking", "storage", "compute"])
        self.client.get(
            f"/api/v1/catalog/search?q=ssd&category={category}&sort=price_asc",
            headers=self.auth_headers(),
            name="GET /api/v1/catalog/search"
        )

    @task(1)
    def create_order(self):
        if not self.token:
            return

        self.client.post(
            "/api/v1/orders",
            json={
                "customer_id": "cust_10482",
                "currency": "USD",
                "items": [
                    {"sku": "nginx-proxy-small", "quantity": 2},
                    {"sku": "tls-cert-managed", "quantity": 1}
                ],
                "shipping_method": "standard"
            },
            headers=self.auth_headers(),
            name="POST /api/v1/orders"
        )
```

Why this scenario is valuable

This test measures Nginx performance when proxying authenticated application traffic. It can reveal:

  • latency added by proxying requests
  • header handling overhead
  • upstream application slowness
  • impact of authentication middleware
  • timeout or buffering issues

If response times increase sharply for authenticated endpoints but not for static content, the bottleneck is likely upstream or in proxy configuration.

Scenario 2: Testing cacheable and uncached content

Nginx is frequently used as a caching layer. To test caching effectiveness, you should simulate both cache hits and cache misses.

```python
from locust import HttpUser, task, between
import random
import time

class NginxCacheBehaviorUser(HttpUser):
    wait_time = between(1, 2)

    @task(5)
    def cached_product_page(self):
        product_id = random.choice([101, 102, 103, 104, 105])
        self.client.get(
            f"/store/products/{product_id}",
            headers={
                "Accept": "text/html",
                "User-Agent": "LoadForge-Cache-Test"
            },
            name="GET cached product page"
        )

    @task(2)
    def uncached_search_request(self):
        search_term = random.choice(["nginx", "reverse proxy", "ssl", "load balancer"])
        self.client.get(
            f"/store/search?q={search_term}&t={int(time.time() * 1000)}",
            headers={
                "Accept": "text/html",
                "Cache-Control": "no-cache"
            },
            name="GET uncached search"
        )

    @task(2)
    def cached_api_response(self):
        self.client.get(
            "/api/v1/public/regions",
            headers={
                "Accept": "application/json"
            },
            name="GET cached API response"
        )

    @task(1)
    def bypass_cache_with_header(self):
        self.client.get(
            "/api/v1/public/pricing",
            headers={
                "Accept": "application/json",
                "Cache-Control": "no-cache",
                "Pragma": "no-cache"
            },
            name="GET uncached pricing"
        )
```

What to look for in cache testing

When load testing Nginx caching behavior, compare:

  • cached endpoint latency versus uncached endpoint latency
  • throughput improvements on repeat requests
  • backend load reduction
  • consistency of response times under concurrency

If cached responses are not significantly faster, inspect your Nginx cache configuration, cache keys, upstream headers, and cache bypass rules.
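Latency comparison alone can be ambiguous, so it helps to measure the hit rate directly. This assumes your Nginx config exposes the cache result with `add_header X-Cache-Status $upstream_cache_status;` (`$upstream_cache_status` is a standard Nginx variable); the helper below is a sketch of how you might tally those headers inside your Locust tasks:

```python
# Classify Nginx cache behavior from response headers, assuming the config
# exposes the cache result:
#   add_header X-Cache-Status $upstream_cache_status;

def cache_status(headers):
    """Return the upstream cache status (HIT, MISS, BYPASS, EXPIRED, ...)
    or 'UNKNOWN' when the header is absent."""
    # Normalize keys ourselves, since a plain dict lookup is case-sensitive.
    for key, value in headers.items():
        if key.lower() == "x-cache-status":
            return value.upper()
    return "UNKNOWN"

def hit_rate(statuses):
    """Fraction of sampled responses served from cache."""
    hits = sum(1 for s in statuses if s == "HIT")
    return hits / len(statuses) if statuses else 0.0
```

A hit rate far below expectations under load usually points at overly specific cache keys or upstream `Cache-Control` headers forcing bypasses.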

Scenario 3: File uploads through Nginx

Nginx is often used to accept client uploads before forwarding them to an application service. This is where request body buffering, size limits, and timeout settings become important.

```python
from locust import HttpUser, task, between
from io import BytesIO
import os
import random

class NginxFileUploadUser(HttpUser):
    wait_time = between(2, 5)

    def generate_file(self, size_kb):
        content = os.urandom(size_kb * 1024)
        return BytesIO(content)

    @task(3)
    def upload_avatar(self):
        file_data = self.generate_file(128)
        files = {
            "file": ("avatar.png", file_data, "image/png")
        }
        data = {
            "folder": "avatars",
            "user_id": "user_84721"
        }

        self.client.post(
            "/media/upload/avatar",
            files=files,
            data=data,
            headers={
                "Accept": "application/json"
            },
            name="POST /media/upload/avatar"
        )

    @task(1)
    def upload_document(self):
        file_data = self.generate_file(2048)
        files = {
            "file": ("invoice.pdf", file_data, "application/pdf")
        }
        data = {
            "folder": "documents",
            "account_id": "acct_22091",
            "document_type": "invoice"
        }

        self.client.post(
            "/api/v1/documents/upload",
            files=files,
            data=data,
            headers={
                "Accept": "application/json"
            },
            name="POST /api/v1/documents/upload"
        )

    @task(2)
    def fetch_uploaded_asset(self):
        asset_id = random.choice(["a93f1", "b18c7", "c72de"])
        self.client.get(
            f"/media/assets/{asset_id}",
            headers={"Accept": "*/*"},
            name="GET /media/assets/:id"
        )
```

Why upload testing matters for Nginx

This scenario can uncover:

  • client_max_body_size issues
  • request body buffering overhead
  • slow upstream handling of multipart uploads
  • timeouts for large files
  • resource pressure on disk or memory buffers

Uploads are often overlooked in performance testing, but they can be one of the fastest ways to expose Nginx misconfiguration.
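One targeted way to find the `client_max_body_size` boundary is to probe with payloads just under, at, and just over the suspected limit and watch for 413 responses. The sketch below only builds the payloads; the 1 MB limit is an assumed example, and in a Locust task you would POST each payload as in the upload scenario above:

```python
import os

# Probe upload sizes around a suspected client_max_body_size limit.
# The 1 MB value below is an assumption; substitute your own config value.

def boundary_payload_sizes(limit_bytes, step=0.25):
    """Return payload sizes just under, at, and just over the limit."""
    delta = int(limit_bytes * step)
    return [limit_bytes - delta, limit_bytes, limit_bytes + delta]

def make_payload(size_bytes):
    """Random bytes defeat compression, so wire size matches payload size."""
    return os.urandom(size_bytes)

sizes = boundary_payload_sizes(1024 * 1024)  # assumed 1 MB limit
payloads = [make_payload(s) for s in sizes]
```

If the just-over payload returns 413 while the others succeed, you have confirmed the effective limit as seen by clients, which may differ per `location` block.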

Analyzing Your Results

After running your Nginx load testing scenarios in LoadForge, review the results with a focus on both latency and throughput.

Key metrics to examine

Response time percentiles

Average response time is useful, but p95 and p99 are much more revealing. A low average with a very high p99 often indicates queueing or intermittent upstream slowness.
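A tiny worked example makes this concrete. Using nearest-rank percentiles on a synthetic set of latencies (the numbers are illustrative), the average and p95 both look healthy while p99 exposes the tail:

```python
def percentile(values, pct):
    """Nearest-rank percentile: smallest value with at least pct% of
    samples at or below it. Assumes a non-empty list."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[rank - 1]

# 95 fast responses and 5 slow outliers: the average stays modest and
# p95 looks fine, but p99 reveals the queueing tail.
latencies_ms = [50] * 95 + [2000] * 5
avg = sum(latencies_ms) / len(latencies_ms)  # 147.5 ms
p95 = percentile(latencies_ms, 95)           # 50 ms
p99 = percentile(latencies_ms, 99)           # 2000 ms
```

A pattern like this, a flat p95 with a spiking p99, is a classic signature of intermittent upstream stalls behind an otherwise healthy Nginx.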

Requests per second

This shows how much traffic your Nginx server can sustain. Compare throughput across static files, cached content, and proxied API endpoints.

Error rates

Pay close attention to:

  • 502 Bad Gateway
  • 504 Gateway Timeout
  • 499 Client Closed Request
  • 429 Too Many Requests
  • 413 Payload Too Large
  • connection reset or timeout errors

These often point directly to Nginx or upstream configuration issues.
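The status list above can be turned into a simple triage lookup to run over exported results. The hints below are heuristics drawn from typical reverse-proxy setups, starting points rather than diagnoses:

```python
# Map common failure statuses seen through Nginx to a first place to look.
# These hints are generalizations, not certainties.
NGINX_STATUS_HINTS = {
    502: "upstream unreachable or crashing; check upstream health and proxy_pass target",
    504: "upstream too slow; check proxy_read_timeout and backend latency",
    499: "client gave up before Nginx answered; often a symptom of slow upstreams",
    429: "rate limiting triggered; check limit_req zones and burst settings",
    413: "request body exceeds client_max_body_size",
}

def triage(status_code):
    return NGINX_STATUS_HINTS.get(status_code, "inspect the Nginx error log for details")
```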

Endpoint-level performance

Use Locust request naming to group similar requests and compare:

  • static content
  • dynamic pages
  • API endpoints
  • uploads
  • authenticated traffic

How to interpret common Nginx load testing patterns

Fast static files, slow APIs

Nginx is healthy, but upstream application servers are likely overloaded.

Rising latency with low CPU on Nginx

The bottleneck may be upstream services, network latency, or backend database contention.

High TLS endpoint latency

TLS handshakes, cipher choices, certificate chain issues, or insufficient CPU may be impacting performance.

Upload failures under load

Inspect body size limits, proxy timeouts, temp file usage, and upstream request handling.

Using LoadForge effectively

LoadForge is especially useful for Nginx performance testing because you can:

  • run distributed testing from multiple geographic regions
  • simulate real-world traffic against public Nginx endpoints
  • monitor tests with real-time reporting
  • compare runs after configuration changes
  • integrate tests into CI/CD pipelines before production rollout

For example, if you update worker_connections, enable HTTP/2, or change proxy cache settings, you can immediately validate whether the change improves throughput or reduces latency.

Performance Optimization Tips

Once your load test reveals weak points, use these common Nginx optimization strategies.

Tune worker processes and connections

Make sure worker_processes and worker_connections are appropriate for your CPU and expected concurrency.

Enable keep-alive effectively

Keep-alive reduces connection setup overhead and improves throughput for repeat client requests.

Offload and cache where possible

Use Nginx caching for static and cacheable dynamic responses to reduce backend pressure.

Optimize TLS

Use modern TLS settings, session reuse, and efficient cipher suites. If HTTPS traffic is heavy, TLS tuning can produce major gains.

Compress wisely

Gzip or brotli can reduce bandwidth usage, but compression also consumes CPU. Validate the tradeoff with load testing.
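You can quantify this tradeoff offline before touching your Nginx config. The sketch below uses Python's stdlib gzip on a repetitive sample payload (swap in a real response body from your site) to compare size reduction and CPU time across compression levels:

```python
import gzip
import time

# A deliberately repetitive payload; real HTML usually compresses well too.
payload = b"<div class='row'>nginx load testing</div>" * 2000

def compression_stats(data, level):
    """Return (compressed_size / original_size, seconds spent compressing)."""
    start = time.perf_counter()
    compressed = gzip.compress(data, compresslevel=level)
    elapsed = time.perf_counter() - start
    return len(compressed) / len(data), elapsed

for level in (1, 6, 9):
    ratio, seconds = compression_stats(payload, level)
    # Higher levels shave a little more size but cost more CPU time.
```

If level 9 saves only a few percent over level 6 while costing noticeably more CPU, a mid-level `gzip_comp_level` is usually the better choice under load.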

Review proxy buffering and timeouts

For API traffic and uploads, proper buffering and timeout values can prevent failures and stabilize latency.

Scale upstream services

If Nginx is not the bottleneck, your load testing results may indicate that application servers or databases need scaling.

Test from realistic regions

With LoadForge’s global test locations, you can see whether latency issues are tied to geography, CDN behavior, or origin infrastructure.

Common Pitfalls to Avoid

Load testing Nginx is straightforward, but there are several mistakes that can lead to misleading results.

Testing only the homepage

A homepage-only test misses most of the real traffic patterns that affect Nginx in production.

Ignoring static assets

Static file delivery is a core Nginx use case. If you don’t test it, you may miss disk, compression, or caching bottlenecks.

Not separating cached and uncached traffic

These workloads behave very differently. Mixing them without labeling requests makes analysis harder.

Using unrealistic user behavior

Real users don’t hit one endpoint in a tight loop with no think time. Include pauses, varied paths, and mixed workloads.

Overlooking uploads and large payloads

Nginx handles request bodies differently than small GET requests. Uploads deserve their own stress testing scenario.

Failing to monitor upstream systems

Nginx may look slow even when the real problem is the application, database, or network behind it.

Running tests from one location only

A single-region test may not reflect real user experience. Distributed load testing gives a more complete picture.

Stress testing production without safeguards

Always coordinate tests carefully, use rate limits where appropriate, and start with controlled load ramps.
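A controlled ramp maps directly onto Locust's `LoadTestShape` concept. The scheduling logic is sketched below as a plain function (stage durations and user counts are example values); the `(duration, users)` tuples mirror what a shape class's `tick()` would use to decide the current user count:

```python
# A staged ramp: hold each user level for a fixed duration, then step up.
# Stage lengths and user counts below are example values, not recommendations.
STAGES = [
    (60, 10),    # first minute: 10 users (smoke check)
    (180, 50),   # next three minutes: 50 users
    (300, 200),  # five minutes at 200 users
]

def target_users(run_time_seconds):
    """Return the user count for the current moment, or None to stop.
    In a Locust LoadTestShape, returning None from tick() ends the run."""
    elapsed = 0
    for duration, users in STAGES:
        elapsed += duration
        if run_time_seconds < elapsed:
            return users
    return None  # all stages complete
```

Stepping through levels like this, instead of jumping straight to peak load, lets you abort at the first stage that degrades a production-adjacent environment.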

Conclusion

Nginx is built for high performance, but every deployment has limits shaped by TLS settings, proxy behavior, caching strategy, static asset delivery, and upstream dependencies. A well-designed Nginx load testing plan helps you identify those limits before your users do.

With LoadForge, you can build realistic Locust-based tests for Nginx, run them at scale with cloud-based infrastructure, monitor results in real time, and validate improvements using distributed testing from global locations. Whether you’re benchmarking static file throughput, testing reverse proxy API performance, or stress testing file uploads, LoadForge gives you the tools to do it with confidence.

If you’re ready to uncover Nginx throughput and latency bottlenecks, try LoadForge and start building your first Nginx performance testing scenario today.
