
Website Load Testing: How to Test Your Site Under Real Traffic

Why Load Test Your Website?

Every website works fine with one user. The question that matters is: does it still work when hundreds or thousands of people show up at the same time? If you have never tested that scenario, you are running on hope rather than evidence.

The internet is full of cautionary tales. Twitter's iconic fail whale became a symbol of a platform that could not keep up with its own popularity. Reddit's infamous hug of death has taken down countless smaller sites that made the front page and buckled under the sudden wave of traffic. Major retailers have lost millions in revenue during Black Friday sales because their checkout systems could not handle the load they spent months of marketing dollars to generate.

These are not edge cases. They are the predictable consequence of skipping load testing.

The cost of downtime is staggering. For e-commerce sites, estimates put the cost of a single hour of downtime anywhere from $10,000 for a small shop to over $1 million for large retailers. But the damage goes beyond the immediate lost sales. There is the reputational harm, the lost SEO rankings from slow response times, the customer support tickets, and the erosion of trust that takes months to rebuild.

Load testing your website eliminates the guesswork. Instead of hoping your infrastructure can handle peak traffic, you know it can because you have already proven it under controlled conditions. You discover bottlenecks in staging rather than in production. You find out your database connection pool is too small before a product launch, not during one.

The business case is simple: a few hours of testing can prevent days of damage.

What to Test on Your Website

Not every page on your website carries equal weight. A 404 page that loads slowly is an annoyance. A checkout page that times out under load is a revenue emergency. Effective load testing focuses on the pages and flows that matter most to your business.

Homepage — This is where the majority of your traffic lands. It sets the first impression and often makes the most database queries (featured products, recent posts, dynamic content). If your homepage is slow under load, visitors leave before they see anything else.

Login and signup — Authentication flows involve database writes, password hashing, session creation, and often third-party calls (OAuth providers, email verification). These operations are computationally heavier than simple page views and can become bottlenecks quickly.

Search — Search is one of the most resource-intensive features on any website. It hits the database hard, often involves full-text indexing, and users expect near-instant results. Under load, poorly optimized search queries can bring an entire database server to its knees.

Checkout and payment — For e-commerce sites, this is the flow that directly generates revenue. It involves inventory checks, payment gateway calls, order creation, and email notifications — a chain of operations where any single link failing means a lost sale.

Content pages — Blog posts, product listings, category pages. These are often cacheable, but you need to verify that caching actually works under load and that cache misses do not cause cascading failures.

API endpoints — If your website relies on client-side JavaScript that calls backend APIs (as most modern sites do), those API endpoints need load testing independently. A slow API response means a blank screen or spinner for the user, even if the initial HTML loads quickly.

The concept that ties all of this together is critical user journeys. A user journey is the sequence of actions a real person takes to accomplish a goal: land on the homepage, search for a product, view the product page, add to cart, and check out. Your load tests should simulate these complete journeys, not just hammer individual URLs in isolation.

Key Metrics for Website Load Testing

When you run a load test, you will be presented with a wall of data. Knowing which metrics to focus on — and what values are acceptable — is essential for turning that data into action.

| Metric | What It Measures | Target Value |
|---|---|---|
| Time to First Byte (TTFB) | Time from request sent to first byte received from server | Under 200ms for static, under 500ms for dynamic |
| Largest Contentful Paint (LCP) | Time until the largest visible element renders | Under 2.5 seconds |
| Throughput (requests/sec) | Number of requests your server processes per second | Should scale linearly with users |
| Error Rate | Percentage of requests returning 4xx/5xx errors | Under 1% at target load |
| Concurrent Users | Number of simultaneous active users during the test | Your expected peak traffic |
| Pages per Second | Complete page loads (including all assets) per second | Depends on page complexity |

TTFB is your server's raw speed — how quickly it can begin responding to a request. It reflects backend processing time, database query speed, and server-side rendering performance. A high TTFB under load almost always points to a backend bottleneck.

LCP matters for user experience and SEO. Google uses it as a Core Web Vital ranking factor. Under load, LCP can degrade even when TTFB looks acceptable, because the server may be slow to deliver the large images or content blocks that determine LCP.

Throughput is the most direct measure of capacity. If throughput plateaus while you are still adding users, your system has hit a ceiling. The users being added after that point are just waiting in queues, driving up response times.

Error rate should be near zero at your target load. Any errors during a load test at expected traffic levels indicate a real problem — connection timeouts, out-of-memory conditions, or application bugs that only surface under concurrency.
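
To make these definitions concrete, here is a small self-contained sketch (plain Python, not tied to any particular tool) that derives p50/p95/p99 latency and the error rate from raw samples, using the nearest-rank percentile method. The sample data is hypothetical:

```python
import math

def percentile(samples, q):
    """Nearest-rank percentile: the smallest sample at or above q% of the data."""
    s = sorted(samples)
    k = max(0, math.ceil(q / 100 * len(s)) - 1)
    return s[k]

def summarize(latencies_ms, error_count):
    """Condense raw load-test samples into the metrics discussed above."""
    total = len(latencies_ms) + error_count
    return {
        "p50_ms": percentile(latencies_ms, 50),
        "p95_ms": percentile(latencies_ms, 95),
        "p99_ms": percentile(latencies_ms, 99),
        "error_rate_pct": 100 * error_count / total,
    }

# Hypothetical run: 100 successful requests at 1..100 ms, plus 1 failed request
stats = summarize(list(range(1, 101)), error_count=1)
print(stats)
```

Any load testing tool reports these percentiles for you; computing them by hand is mainly useful when post-processing raw logs.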

Step-by-Step: Load Testing a Website

1. Identify Critical User Journeys

Before writing any test code, map the paths through your site that matter most. Talk to your product team, look at your analytics, and answer these questions:

  • What are the top 5 pages by traffic volume?
  • What is the primary conversion flow (signup, purchase, subscription)?
  • Which pages generate the most revenue?
  • Are there known slow pages or features?

For a typical e-commerce site, your critical journeys might be:

  1. Browse and discover: Homepage, category page, search results
  2. Evaluate: Product detail page, reviews, compare
  3. Purchase: Add to cart, cart page, checkout, payment
  4. Account: Login, order history, account settings

Weight your test traffic to reflect reality. If 60% of your users browse without buying, your test should reflect that ratio.
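
The weighting itself is easy to sanity-check before you encode it in a test tool. This stdlib sketch (journey names and the 60/30/10 split are hypothetical) simulates assigning virtual users to journeys in proportion to real traffic:

```python
import random
from collections import Counter

# Hypothetical traffic mix: 60% browse without buying, 30% purchase, 10% account
JOURNEY_WEIGHTS = {"browse": 60, "purchase": 30, "account": 10}

def pick_journeys(n, rng=random):
    """Assign n virtual users to journeys in proportion to the weights."""
    names = list(JOURNEY_WEIGHTS)
    weights = list(JOURNEY_WEIGHTS.values())
    return Counter(rng.choices(names, weights=weights, k=n))

mix = pick_journeys(10_000, rng=random.Random(42))
print(mix)
```

In a Locust script you would express the same ratio by giving each task set a weight, so the tool does this sampling for you.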

2. Set Performance Baselines

Before you test with hundreds of users, test with one. Run your critical user journeys with a single virtual user and record the response times. This gives you a clean baseline — the best your application can do with zero contention.

These baseline numbers become your reference point. If your homepage responds in 150ms with one user but 1,500ms with 200 users, you know the degradation is a 10x factor. Without the baseline, you would not know whether 1,500ms is a regression or just normal for your application.
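
Once you have both sets of numbers, comparing them is mechanical. A sketch, using hypothetical measurements (the 150ms-to-1,500ms homepage example from above, plus two made-up pages):

```python
# Hypothetical measurements (ms): single-user baseline vs. medians at 200 users
baseline   = {"Homepage": 150, "Search Results": 220, "Checkout": 400}
under_load = {"Homepage": 1500, "Search Results": 310, "Checkout": 450}

def degradation(baseline, under_load, max_factor=3.0):
    """Compute the slowdown factor per page and flag anything over max_factor."""
    report = {}
    for name, base_ms in baseline.items():
        factor = under_load[name] / base_ms
        report[name] = (round(factor, 2), factor > max_factor)
    return report

report = degradation(baseline, under_load)
for name, (factor, flagged) in report.items():
    print(f"{name}: {factor}x{' <- investigate' if flagged else ''}")
```

The acceptable degradation factor is a judgment call per application; 3x is only a placeholder threshold here.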

3. Write Your Test Script

Here is a complete Locust script that simulates realistic multi-page browsing behavior. It models a user who visits the homepage, browses a category, performs a search, and views a product detail page, with realistic think time between actions to simulate human reading behavior.

from locust import HttpUser, task, between, SequentialTaskSet


class BrowsingJourney(SequentialTaskSet):
    """Simulates a user browsing through the site in a realistic sequence."""

    @task
    def visit_homepage(self):
        self.client.get("/", name="Homepage")

    @task
    def browse_category(self):
        self.client.get("/products/category/electronics", name="Category Page")

    @task
    def search_product(self):
        self.client.get("/search?q=wireless+headphones", name="Search Results")

    @task
    def view_product(self):
        self.client.get("/products/wireless-headphones-pro", name="Product Detail")


class WebsiteUser(HttpUser):
    wait_time = between(2, 5)  # Realistic think time between pages
    tasks = [BrowsingJourney]
    host = "https://your-website.com"

Key details in this script:

  • SequentialTaskSet ensures tasks run in order, simulating a real browsing session rather than random page hits.
  • wait_time = between(2, 5) adds 2 to 5 seconds of think time between each action, mimicking a human reading the page before clicking the next link.
  • name parameter on each request gives clean labels in your test results, making it easy to see which pages are slow.

For a more complete test that includes login and checkout, you would add additional task sets with POST requests, form data, and session handling. See our load testing tutorial for more advanced script patterns.

4. Configure Your Load Profile

With your script ready, you need to decide the shape of your test:

  • Number of users: Start with your expected peak concurrent users. If you do not know this number, see our guide on how many users your website can handle.
  • Ramp-up rate: How quickly to add users. A common pattern is to add 10 users per second until you reach your target. Ramping up too fast can create an artificial spike that does not represent real traffic patterns.
  • Test duration: Run for at least 15 minutes at full load. Short tests miss problems that emerge only after connection pools fill up, caches warm, or garbage collection kicks in.
  • Geographic regions: If your users are global, run tests from multiple regions to capture the impact of network latency and CDN behavior.
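
The shape described above can be sketched as a simple schedule: a linear ramp followed by a steady hold. The target of 500 users is a hypothetical example; the 10 users/sec ramp and 15-minute hold match the guidance above:

```python
def ramp_schedule(target_users, spawn_rate, hold_seconds):
    """Model the load profile: add spawn_rate users each second until
    target_users is reached, then hold steady for hold_seconds."""
    schedule, users, t = [], 0, 0
    while users < target_users:
        users = min(target_users, users + spawn_rate)
        t += 1
        schedule.append((t, users))
    for _ in range(hold_seconds):
        t += 1
        schedule.append((t, target_users))
    return schedule

# Hypothetical profile: ramp to 500 users at 10 users/sec, hold for 15 minutes
profile = ramp_schedule(500, 10, 15 * 60)
print(profile[0], profile[49], profile[-1])
```

Load testing tools take these as configuration parameters; modeling the schedule explicitly is mostly useful for reasoning about how long a test needs to run.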

5. Run and Monitor

While your test is running, watch these indicators in real time:

  • Response time trend — Is it stable, gradually increasing, or suddenly spiking? A gradual increase often indicates memory pressure or connection pool exhaustion. A sudden spike usually means you hit a hard limit.
  • Error count — Even a handful of errors during ramp-up can indicate a problem. Note when errors first appear and at what user count.
  • Throughput — Is it still climbing as users are added, or has it leveled off? A throughput plateau means your system is saturated.
  • Server resources — If you have access to server monitoring (CPU, memory, disk I/O, network), watch those alongside your load test. A CPU pinned at 100% tells you a very different story than a CPU at 30% with high response times (which would point to an I/O or database bottleneck instead).

6. Analyze Results

After the test completes, examine the full results rather than just the summary. Key things to look for:

  • Response time distribution: The median (p50) tells you the typical experience. The p95 and p99 tell you what your slowest users experienced. If the p99 is 10x the median, you have a tail latency problem.
  • Response times by endpoint: Which pages degraded the most? This points you directly to the code or queries that need optimization.
  • Error breakdown: What types of errors occurred? HTTP 502/503 errors suggest server overload. HTTP 500 errors suggest application bugs triggered by concurrency. Timeouts suggest resource exhaustion.
  • Timeline view: Look at how metrics changed over the duration of the test. Problems that appear 10 minutes into a sustained load test often indicate resource leaks.

Interpreting Your Results

What Good Looks Like

A successful load test at your target user count shows:

  • Response times under 500ms for the median (p50) on dynamic pages, and under 1 second for the p95. Static assets served via CDN should be well under 100ms.
  • Error rate below 1%, and ideally below 0.1%. The errors that do occur should be transient and not clustered around a specific endpoint.
  • Throughput that scales linearly with the number of users. If you double the users, throughput should roughly double — until you approach your system's capacity.
  • Stable resource utilization on the server side. CPU below 70%, memory with comfortable headroom, no disk I/O saturation.

Warning Signs

Certain patterns in your results signal problems that need attention before you go to production:

The response time hockey stick — Response times are flat as you add users, then climb sharply once you cross a certain point. This is the classic sign of a bottleneck: a database connection pool running dry, a thread pool saturating, or a CPU maxing out. Everything below that inflection point works fine; everything above it falls apart.
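
If you run a stepped test (holding at several user counts in turn), locating that inflection point can be automated. A crude sketch, with hypothetical measurements:

```python
# Hypothetical stepped-test results: (concurrent users, p95 response time in ms)
measurements = [(50, 180), (100, 190), (150, 195), (200, 210), (250, 850), (300, 2400)]

def find_knee(measurements, jump_factor=2.0):
    """Return the first user count where p95 latency jumps by more than
    jump_factor over the previous step -- a rough inflection-point locator."""
    for (_, prev_ms), (users, cur_ms) in zip(measurements, measurements[1:]):
        if cur_ms > prev_ms * jump_factor:
            return users
    return None

print(find_knee(measurements))
```

The jump factor of 2.0 is an arbitrary threshold; tune it to how noisy your measurements are.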

Increasing error rate under load — If errors climb from 0% to 5% as you ramp from 100 to 500 users, something is failing under concurrency. Common culprits include race conditions, lock contention, and timeout configurations that are too aggressive.

Throughput plateau — Throughput stops increasing even as you add more users. This means your system is fully saturated. Additional users are not getting served faster; they are just waiting longer. The bottleneck could be CPU, database, network bandwidth, or an external service.

Memory growth over time — If memory usage climbs steadily throughout a soak test without stabilizing, you likely have a memory leak. This will eventually cause an out-of-memory crash in production, typically at the worst possible time.

Website Load Testing Best Practices

  • Test from multiple geographic regions. A site that responds in 100ms from the same data center may take 800ms from another continent. CDN configuration, DNS resolution, and TLS handshake overhead all vary by location.
  • Use realistic test data. Do not search for the same term every time or browse the same product. Real traffic has diverse query patterns, and your caches and databases behave differently when the data access pattern is varied.
  • Test regularly, not just before launches. Performance regressions creep in with routine deployments. A new ORM query, an unoptimized API endpoint, or a forgotten debug log statement can degrade performance gradually.
  • Integrate load tests into your CI/CD pipeline. Automated load tests on every release catch regressions before they reach production. Even a short 5-minute smoke test with 50 users can catch major regressions.
  • Test with caching on AND off. Your site may perform beautifully with a warm cache but collapse on a cold start. Test both scenarios — a server reboot during a traffic spike means your cache is empty when you need it most.
  • Monitor server-side resources during tests. Load test metrics alone do not tell you why something is slow. Correlating response times with CPU, memory, database query logs, and application logs gives you the full picture.
  • Test third-party integrations. Payment gateways, analytics scripts, authentication providers, and CDNs can all become bottlenecks. If a third-party service rate-limits you at 100 requests per second, that is your ceiling regardless of how powerful your servers are.

Beyond Basic Load Testing

HTTP-level load testing covers most scenarios, but modern web applications sometimes require more sophisticated approaches.

Browser-based testing is necessary for JavaScript-heavy single-page applications (SPAs) where significant work happens in the browser. If your application relies on client-side rendering, WebSocket connections, or complex JavaScript interactions, protocol-level testing may not capture the full user experience. Browser-based load testing uses real browser instances to execute JavaScript, render pages, and measure client-side performance metrics.

Real User Monitoring (RUM) complements load testing by capturing performance data from actual users in production. While load testing tells you what should happen, RUM tells you what is happening. Combining both gives you complete visibility — load testing for proactive prevention, RUM for ongoing verification.

Scheduled testing runs your load tests automatically on a recurring basis — daily, weekly, or after every deployment. This turns load testing from a one-time activity into a continuous performance safety net. Regressions are caught within hours rather than discovered by users weeks later.

For a deeper understanding of what load testing is and how it fits into your development process, read our complete guide on what is load testing. If you are ready to write your first test, our load testing tutorial walks you through the process from start to finish.

Try LoadForge free for 7 days

Set up your first load test in under 2 minutes. No commitment.