
Apache Load Testing Guide with LoadForge

Introduction

Apache remains one of the most widely deployed web servers in the world, powering everything from simple marketing sites to complex reverse proxy and application delivery stacks. Whether you are serving static assets, terminating TLS, routing traffic to PHP-FPM or upstream application servers, or protecting internal tools with Basic Auth, Apache performance directly affects user experience and infrastructure cost.

That is why Apache load testing is essential. A well-designed load testing and performance testing strategy helps you understand how your Apache server behaves under normal traffic, peak traffic, and stress conditions. You can measure request throughput, response times, error rates, connection handling, and how configuration choices such as KeepAlive, worker settings, compression, caching, and reverse proxy rules influence performance.

In this guide, you will learn how to load test Apache web servers using LoadForge and Locust. We will cover realistic Apache scenarios including static content delivery, authenticated admin areas, API traffic routed through Apache, and file uploads. Along the way, we will show how to build practical Locust scripts, interpret results, and optimize your Apache deployment for better scalability.

LoadForge makes this process easier with cloud-based infrastructure, distributed testing, real-time reporting, global test locations, and CI/CD integration, so you can test Apache from realistic traffic sources at meaningful scale.

Prerequisites

Before you start load testing Apache with LoadForge, make sure you have the following:

  • A running Apache web server or Apache-based environment
  • The base URL for the target environment, such as:
    • https://www.example.com
    • https://staging.example.com
    • https://admin.example.com
  • Permission to test the target system
  • Knowledge of the key Apache-served endpoints you want to benchmark
  • Any required credentials for protected areas such as:
    • HTTP Basic Authentication
    • Session-based login forms
    • Bearer token APIs proxied through Apache
  • A LoadForge account
  • Familiarity with basic HTTP concepts such as headers, cookies, redirects, and status codes

It also helps to know how Apache is being used in your stack. For example:

  • Is Apache serving static files directly?
  • Is it acting as a reverse proxy to an application server?
  • Is mod_php or PHP-FPM involved?
  • Are you using .htaccess, mod_rewrite, mod_deflate, mod_cache, or mod_security?
  • Are there rate limits, WAF rules, or CDN layers in front of Apache?

These details matter because Apache performance testing is often about more than the web server binary itself. It is about the full request path Apache participates in.

Understanding Apache Under Load

Apache handles concurrency based on its Multi-Processing Module (MPM). The most common MPMs are:

  • prefork: process-based, often used historically with mod_php
  • worker: hybrid multi-process and multi-threaded
  • event: optimized for keep-alive connections and generally preferred for modern workloads
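If you are unsure which MPM your server is running, `apachectl -V` reports it on the "Server MPM" line. As a reference point, a typical event MPM configuration block looks like the following (the values shown are illustrative defaults, not recommendations):

```apache
# Illustrative event MPM settings; tune for your CPU and memory
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```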

When you perform load testing on Apache, you are usually validating one or more of these behaviors:

  • How many concurrent requests Apache can serve
  • How efficiently Apache handles keep-alive connections
  • Whether static files are delivered quickly under load
  • How Apache behaves when proxying requests to upstream apps
  • Whether authentication, rewrites, or TLS termination add latency
  • How it responds when worker pools, CPU, memory, or network limits are hit

Common Apache Bottlenecks

Some of the most common bottlenecks uncovered during Apache stress testing include:

  • Too few worker threads or processes
  • High Time To First Byte (TTFB) due to overloaded upstream applications
  • Inefficient KeepAlive settings
  • Slow disk I/O for static file delivery
  • Excessive logging overhead
  • Poorly tuned TLS settings
  • Expensive rewrite rules in .htaccess
  • Authentication overhead on protected routes
  • Large file uploads tying up workers
  • Reverse proxy timeouts or backend saturation

What to Measure

When load testing Apache, pay attention to:

  • Requests per second
  • Median, p95, and p99 response times
  • Error rate, especially 5xx and 429 responses
  • Connection failures and timeouts
  • Response size consistency
  • Throughput by endpoint
  • Performance differences between static and dynamic routes

In LoadForge, you can use real-time reporting to observe how Apache behaves as user load ramps up. This is particularly useful for identifying the point where response times begin to degrade or errors start to appear.
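If you ever want to sanity-check percentile figures yourself, they are straightforward to compute from raw latency samples with Python's standard library. The sample values below are made up for illustration:

```python
import statistics

# Hypothetical response-time samples in milliseconds
samples = [12, 14, 15, 15, 16, 18, 21, 25, 40, 180]

median = statistics.median(samples)

# quantiles(n=100) returns the 1st..99th percentile cut points
percentiles = statistics.quantiles(samples, n=100, method="inclusive")
p95 = percentiles[94]
p99 = percentiles[98]

print(f"median={median}ms p95={p95}ms p99={p99}ms")
```

Note how a single slow outlier (180 ms) barely moves the median but dominates the tail, which is exactly why p95 and p99 deserve attention.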

Writing Your First Load Test

A good first Apache load test should simulate realistic browser traffic to public pages and static assets. This helps benchmark how well Apache handles common anonymous traffic patterns.

Basic Apache Website Load Test

The following Locust script simulates users visiting a homepage, loading a category page, reading a product page, and fetching static assets Apache would commonly serve.

```python
from locust import HttpUser, task, between


class ApacheWebsiteUser(HttpUser):
    wait_time = between(1, 3)

    @task(4)
    def homepage(self):
        self.client.get(
            "/",
            headers={
                "Accept": "text/html,application/xhtml+xml",
                "User-Agent": "Mozilla/5.0 LoadForge-Apache-Test"
            },
            name="GET /"
        )

    @task(2)
    def category_page(self):
        self.client.get(
            "/products/web-hosting",
            headers={
                "Accept": "text/html,application/xhtml+xml",
                "User-Agent": "Mozilla/5.0 LoadForge-Apache-Test"
            },
            name="GET /products/web-hosting"
        )

    @task(2)
    def product_page(self):
        self.client.get(
            "/products/web-hosting/apache-vps-2gb",
            headers={
                "Accept": "text/html,application/xhtml+xml",
                "User-Agent": "Mozilla/5.0 LoadForge-Apache-Test"
            },
            name="GET /products/web-hosting/apache-vps-2gb"
        )

    @task(3)
    def static_assets(self):
        self.client.get(
            "/assets/css/main.css",
            headers={"Accept": "text/css"},
            name="GET /assets/css/main.css"
        )
        self.client.get(
            "/assets/js/app.js",
            headers={"Accept": "application/javascript"},
            name="GET /assets/js/app.js"
        )
        self.client.get(
            "/images/logo.png",
            headers={"Accept": "image/png"},
            name="GET /images/logo.png"
        )
```

Why this test matters

This script is realistic for Apache because many deployments serve a mix of:

  • HTML pages
  • CSS and JavaScript assets
  • Image files
  • SEO-friendly rewritten routes like /products/web-hosting

This lets you evaluate:

  • Static file performance
  • Page delivery latency
  • Whether Apache handles concurrent asset requests efficiently
  • Impact of caching and compression settings

What to look for

When you run this in LoadForge, compare:

  • Homepage latency versus static asset latency
  • Throughput for static content
  • Error rates on rewritten routes
  • Whether response times stay stable as concurrent users increase

If static assets are slow, Apache may be suffering from disk I/O constraints, missing caching headers, or suboptimal compression settings.

Advanced Load Testing Scenarios

Once you have a baseline, the next step is to test realistic Apache-backed workflows. These often include protected admin areas, reverse-proxied APIs, and file handling.

Scenario 1: Load Testing an Apache-Protected Admin Area with HTTP Basic Auth

Apache commonly protects internal dashboards or admin tools using HTTP Basic Authentication. This test simulates authenticated users accessing protected content.

```python
from locust import HttpUser, task, between
from requests.auth import HTTPBasicAuth


class ApacheAdminUser(HttpUser):
    wait_time = between(2, 5)

    def on_start(self):
        self.client.auth = HTTPBasicAuth("adminuser", "Str0ngP@ssw0rd!")

    @task(3)
    def admin_dashboard(self):
        with self.client.get(
            "/admin/",
            headers={"Accept": "text/html"},
            name="GET /admin/",
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"Unexpected status code: {response.status_code}")
            elif "Server Status Overview" not in response.text and "Admin Dashboard" not in response.text:
                response.failure("Admin page content validation failed")

    @task(2)
    def admin_reports(self):
        self.client.get(
            "/admin/reports/traffic?range=24h",
            headers={"Accept": "text/html"},
            name="GET /admin/reports/traffic"
        )

    @task(1)
    def admin_export(self):
        self.client.get(
            "/admin/reports/export?format=csv&range=7d",
            headers={"Accept": "text/csv"},
            name="GET /admin/reports/export"
        )
```
Why this matters for Apache

Basic Auth is frequently configured in Apache with directives such as AuthType Basic and AuthUserFile (pointing at an .htpasswd file), combined with location-based access controls. Under load, authentication can add overhead, especially if:

  • Authentication is checked on every request
  • Access is backed by LDAP or external auth providers
  • Protected routes generate dynamic content

This test helps you measure whether Apache can handle bursts of authenticated traffic without excessive latency.
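For reference, HTTP Basic Auth is nothing more than a base64-encoded `user:password` pair sent on every request, which is why Apache must verify it per request. A quick sketch using the same placeholder credentials as the Locust script above:

```python
import base64

# Placeholder credentials, matching the Locust example above
credentials = "adminuser:Str0ngP@ssw0rd!"
token = base64.b64encode(credentials.encode()).decode()

# This is the header Apache checks against its .htpasswd entries
auth_header = {"Authorization": f"Basic {token}"}
print(auth_header["Authorization"])
```

Because the header is trivially decodable, Basic Auth should always travel over TLS, which is itself part of what your load test exercises.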

Scenario 2: Load Testing an API Behind Apache Reverse Proxy

Apache is often used as a reverse proxy in front of application servers such as Node.js, Python, Java, or PHP-FPM APIs. In this case, you are not just testing Apache itself, but also how efficiently it forwards requests and manages backend connections.

```python
from locust import HttpUser, task, between


class ApacheReverseProxyApiUser(HttpUser):
    wait_time = between(1, 2)

    def on_start(self):
        login_payload = {
            "email": "loadtest.user@example.com",
            "password": "SuperSecure123!"
        }

        response = self.client.post(
            "/api/v1/auth/login",
            json=login_payload,
            headers={
                "Accept": "application/json",
                "Content-Type": "application/json"
            },
            name="POST /api/v1/auth/login"
        )

        if response.status_code == 200:
            data = response.json()
            self.token = data.get("access_token")
        else:
            self.token = None

    @task(4)
    def list_orders(self):
        if not self.token:
            return  # login failed; avoid sending "Bearer None"
        self.client.get(
            "/api/v1/orders?status=processing&limit=25",
            headers={
                "Authorization": f"Bearer {self.token}",
                "Accept": "application/json"
            },
            name="GET /api/v1/orders"
        )

    @task(2)
    def order_detail(self):
        if not self.token:
            return
        self.client.get(
            "/api/v1/orders/ORD-2025-004812",
            headers={
                "Authorization": f"Bearer {self.token}",
                "Accept": "application/json"
            },
            name="GET /api/v1/orders/:id"
        )

    @task(1)
    def create_support_ticket(self):
        if not self.token:
            return
        payload = {
            "order_id": "ORD-2025-004812",
            "subject": "Package delayed in transit",
            "priority": "medium",
            "message": "Customer reported no delivery update for 72 hours."
        }

        self.client.post(
            "/api/v1/support/tickets",
            json=payload,
            headers={
                "Authorization": f"Bearer {self.token}",
                "Accept": "application/json",
                "Content-Type": "application/json"
            },
            name="POST /api/v1/support/tickets"
        )
```

What this reveals

This scenario is useful when Apache is configured as a reverse proxy with mod_proxy and ProxyPass. It helps uncover:

  • Reverse proxy overhead
  • Backend saturation
  • Connection pool issues
  • Authentication token validation latency
  • Differences between read-heavy and write-heavy API traffic

If response times are poor here but static content is fast, Apache may be healthy while the upstream application is the real bottleneck.
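One quick way to quantify that split is to compare median latencies for a static route against a proxied route from the same run. The figures below are invented for illustration:

```python
import statistics

# Hypothetical latency samples (ms) taken from the same test run
static_ms = [4, 5, 5, 6, 7]          # e.g. a CSS file served by Apache
proxied_ms = [38, 42, 45, 51, 120]   # e.g. an API route behind ProxyPass

overhead_ms = statistics.median(proxied_ms) - statistics.median(static_ms)
print(f"Approximate proxy + backend overhead: {overhead_ms} ms")
```

A large gap here points at the upstream application (or the proxy hop) rather than Apache's own request handling.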

Scenario 3: Load Testing File Uploads Through Apache

Apache often handles file uploads for CMS platforms, document portals, support systems, and internal applications. Upload testing is important because large request bodies can stress worker capacity, request buffering, and backend processing.

```python
from locust import HttpUser, task, between
import io


class ApacheFileUploadUser(HttpUser):
    wait_time = between(3, 6)

    def on_start(self):
        login_data = {
            "username": "editor@example.com",
            "password": "UploadTest!2025"
        }

        self.client.post(
            "/login",
            data=login_data,
            headers={"Content-Type": "application/x-www-form-urlencoded"},
            name="POST /login"
        )

    @task(2)
    def upload_pdf_document(self):
        pdf_content = io.BytesIO(b"%PDF-1.4 sample apache upload test document content")
        files = {
            "document": ("quarterly-report-q1-2025.pdf", pdf_content, "application/pdf")
        }
        data = {
            "folder": "finance/reports",
            "title": "Quarterly Report Q1 2025",
            "visibility": "internal"
        }

        self.client.post(
            "/documents/upload",
            files=files,
            data=data,
            name="POST /documents/upload"
        )

    @task(1)
    def upload_profile_image(self):
        image_content = io.BytesIO(b"\x89PNG\r\n\x1a\nfakepngcontentforloadtest")
        files = {
            "avatar": ("profile-image.png", image_content, "image/png")
        }
        data = {
            "user_id": "18492",
            "crop": "square"
        }

        self.client.post(
            "/account/avatar",
            files=files,
            data=data,
            name="POST /account/avatar"
        )
```

Why upload testing is important

File uploads can expose Apache configuration limits such as:

  • LimitRequestBody
  • Request timeout settings
  • Proxy buffering behavior
  • Worker exhaustion during slow uploads
  • Backend application limits

This is especially important if users upload files from many regions. With LoadForge’s global test locations, you can simulate geographically distributed upload traffic and see how Apache behaves across realistic network conditions.
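When probing limits such as LimitRequestBody, it helps to generate payloads of a known size rather than relying on fixture files. A stdlib-only sketch (the 8 MB limit here is a made-up example):

```python
import io

# Build an in-memory payload of an exact size
size_mb = 10
payload = io.BytesIO(b"\0" * (size_mb * 1024 * 1024))
body_len = payload.getbuffer().nbytes

# Hypothetical Apache limit: LimitRequestBody 8388608 (value in bytes; 0 = unlimited)
limit_request_body = 8 * 1024 * 1024

# If True, Apache should reject the upload with 413 Request Entity Too Large
print(body_len > limit_request_body)
```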

Scenario 4: Mixed Traffic Profile for a Realistic Apache Deployment

Most production Apache servers handle a mixture of anonymous traffic, authenticated sessions, static assets, and API calls. A mixed workload gives you a more accurate performance profile than a single-endpoint benchmark.

```python
from locust import HttpUser, task, between
import random


class ApacheMixedTrafficUser(HttpUser):
    wait_time = between(1, 4)

    catalog_paths = [
        "/blog/apache-performance-tuning",
        "/blog/using-mod-deflate-effectively",
        "/pricing",
        "/docs/getting-started",
        "/docs/api/authentication"
    ]

    @task(5)
    def browse_public_pages(self):
        path = random.choice(self.catalog_paths)
        self.client.get(
            path,
            headers={"Accept": "text/html"},
            name="GET public content"
        )

    @task(3)
    def fetch_static_asset(self):
        assets = [
            "/static/css/site.min.css",
            "/static/js/runtime.min.js",
            "/static/img/hero-banner.webp"
        ]
        path = random.choice(assets)
        self.client.get(path, name="GET static asset")

    @task(2)
    def search_docs(self):
        self.client.get(
            "/search?q=apache+keepalive",
            headers={"Accept": "text/html"},
            name="GET /search"
        )
```
This kind of script is ideal for benchmarking front-end Apache performance before a release or infrastructure change.
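The @task weights in the script give roughly a 5:3:2 traffic mix. If you want to verify what distribution a set of weights actually produces, random.choices uses the same weighted-sampling idea:

```python
import random

random.seed(7)  # deterministic for illustration

actions = ["browse_public_pages", "fetch_static_asset", "search_docs"]
weights = [5, 3, 2]  # mirrors the @task(5), @task(3), @task(2) weights

# Simulate 1000 user actions and count the resulting distribution
picks = random.choices(actions, weights=weights, k=1000)
counts = {a: picks.count(a) for a in actions}
print(counts)
```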

Analyzing Your Results

After running your Apache load testing scenarios in LoadForge, focus on the metrics that best reflect web server behavior and user experience.

Look at:

  • Average response time for a broad overview
  • p95 and p99 response times for tail latency
  • Endpoint-specific latency comparisons

For Apache, static assets should usually remain fast even under heavier load. If CSS, JS, or image delivery becomes slow, investigate:

  • Disk throughput
  • Compression overhead
  • Cache-control settings
  • Too many concurrent connections
  • CPU saturation from TLS or rewrite logic

Request Throughput

A healthy Apache server should show predictable throughput scaling as users ramp up. If requests per second flatten early, possible causes include:

  • Worker limits reached
  • Backend application bottlenecks
  • Database contention behind proxied requests
  • Lock contention in application code
  • Network saturation
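Little's Law offers a quick back-of-the-envelope check here: the number of requests in flight equals throughput multiplied by average response time. With made-up numbers:

```python
# Little's Law: in-flight requests = throughput (req/s) * avg response time (s)
# Illustrative numbers, not measurements
throughput_rps = 400
avg_response_s = 0.250  # 250 ms

in_flight = throughput_rps * avg_response_s
print(in_flight)
```

If the result approaches your MaxRequestWorkers setting, new requests start queuing even though Apache itself may be healthy.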

Error Rates

Watch for:

  • 500 Internal Server Error
  • 502 Bad Gateway
  • 503 Service Unavailable
  • 504 Gateway Timeout
  • 401 Unauthorized or 403 Forbidden if auth is misconfigured

These often indicate:

  • Upstream failure when Apache is a reverse proxy
  • Authentication issues under concurrency
  • Resource exhaustion
  • Timeout settings that are too aggressive

Correlate by Endpoint Type

Compare:

  • Public HTML pages
  • Static files
  • Authenticated admin endpoints
  • API routes
  • Upload endpoints

This helps isolate whether the issue is in Apache itself, the application behind it, or a specific request class.

Use LoadForge Features Effectively

LoadForge gives you several advantages for Apache performance testing:

  • Distributed testing to simulate real traffic at scale
  • Real-time reporting for spotting breakpoints during the run
  • Cloud-based infrastructure so you do not saturate your own network
  • Global test locations to evaluate latency and regional behavior
  • CI/CD integration so Apache regressions are caught before production

A strong workflow is to create baseline tests in staging, then rerun them after Apache config changes, application releases, or infrastructure upgrades.

Performance Optimization Tips

Once your Apache stress testing reveals bottlenecks, these optimizations are often worth reviewing.

Use the Right MPM

For modern workloads, event MPM is typically more efficient than prefork, especially for keep-alive traffic.

Tune Worker Capacity

Review settings such as:

```apache
# Illustrative starting points; tune to your hardware and workload
ServerLimit           16
StartServers           4
ThreadsPerChild       25
MaxRequestWorkers    400
```

Too few workers cause queuing. Too many can exhaust memory.
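A rough memory-based sizing sketch keeps both failure modes in view. All numbers below are illustrative; measure your own per-process footprint:

```python
# Rough worker sizing: how many Apache children fit in available RAM?
available_ram_mb = 4096   # RAM you can dedicate to Apache (illustrative)
per_process_mb = 40       # resident size of one child process (measure this)
threads_per_child = 25    # matches a ThreadsPerChild of 25

max_children = available_ram_mb // per_process_mb
max_request_workers = max_children * threads_per_child
print(max_children, max_request_workers)
```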

Optimize KeepAlive

KeepAlive improves connection reuse, but poor settings can tie up workers unnecessarily. Review:

```apache
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 2
```

Offload and Cache Static Assets

For Apache-served static content:

  • Enable browser caching headers
  • Use compression for text assets
  • Consider CDN offload for global delivery
  • Avoid unnecessary rewrite processing for static files
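Much of this comes down to a few directives. A minimal sketch, assuming mod_expires and mod_deflate are enabled:

```apache
# Long-lived browser caching for static assets (requires mod_expires)
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/css  "access plus 30 days"
    ExpiresByType image/png "access plus 30 days"
</IfModule>

# Compress text responses only; images and archives are already compressed
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```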

Reduce .htaccess Overhead

Moving repeated .htaccess rules into the main Apache config can improve performance because Apache does not need to re-read distributed config files on every request.
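Once the rules live in the main config, you can disable the per-directory lookups entirely. A sketch, with a hypothetical document root:

```apache
# Stops Apache checking for .htaccess on every request in this tree
<Directory "/var/www/example.com/public">
    AllowOverride None
    Require all granted
</Directory>
```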

Tune Reverse Proxy Settings

If Apache fronts an application server, review:

  • Proxy timeout values
  • Connection reuse
  • Backend keep-alive behavior
  • Upstream concurrency capacity
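In Apache terms, much of this is expressed on the ProxyPass line itself. A sketch with a hypothetical backend address:

```apache
# Requires mod_proxy and mod_proxy_http; backend address is illustrative
ProxyPass        "/api/" "http://127.0.0.1:3000/api/" timeout=30 keepalive=On
ProxyPassReverse "/api/" "http://127.0.0.1:3000/api/"
```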

Minimize Expensive Logging

Verbose logging under heavy load can create I/O bottlenecks. Keep enough logging for diagnostics, but avoid excessive overhead in high-traffic environments.
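One common compromise is to stop logging static-asset hits, which are usually the bulk of access-log entries under load. A sketch using mod_setenvif (log path is illustrative):

```apache
# Tag static-asset requests, then exclude them from the access log
SetEnvIf Request_URI "\.(css|js|png|webp|ico)$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog
```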

Benchmark After Every Change

Use LoadForge to rerun the same test after each tuning change. Performance optimization without repeatable load testing is mostly guesswork.

Common Pitfalls to Avoid

Apache load testing often goes wrong for avoidable reasons. Here are the most common mistakes.

Testing Only the Homepage

A homepage-only test misses how Apache handles real traffic patterns. Include static assets, authenticated routes, APIs, and uploads where relevant.

Ignoring Keep-Alive and Browser Behavior

Real users do not make isolated single requests. They fetch multiple resources and often reuse connections. Your Locust scripts should reflect this broader behavior.

Confusing Apache Problems with Application Problems

If Apache is reverse proxying to an app, slow responses may come from the backend. Compare static content performance to proxied endpoint performance to isolate the issue.

Using Unrealistic User Behavior

Do not hammer the same endpoint with no wait time unless you are intentionally stress testing. Use realistic pacing and navigation flows.

Forgetting Authentication State

For protected Apache routes, users need proper credentials, session cookies, or bearer tokens. Otherwise, you may only be load testing login failures.

Not Validating Responses

A 200 OK does not always mean the response is correct. Use catch_response=True and check content where it matters.

Running Tests from a Single Location Only

Apache performance can vary by geography and network path. Distributed load testing from multiple regions gives a more accurate picture of production behavior.

Overlooking Infrastructure Limits

Sometimes the bottleneck is not Apache at all. It may be:

  • CPU or memory on the VM
  • Disk throughput
  • Load balancer limits
  • TLS offload capacity
  • Backend database performance

Conclusion

Apache load testing is one of the most effective ways to understand how your web infrastructure behaves before real users expose its limits. By testing public pages, static assets, authenticated admin routes, reverse-proxied APIs, and file uploads, you can build a realistic performance profile and catch bottlenecks early.

Using Locust scripts with LoadForge gives you a flexible, developer-friendly way to run repeatable Apache performance testing and stress testing at scale. With distributed testing, real-time reporting, cloud-based infrastructure, global test locations, and CI/CD integration, LoadForge makes it much easier to benchmark Apache under real-world conditions.

If you are ready to validate your Apache server’s concurrency, throughput, and resilience, try LoadForge and start building your first Apache load test today.
