
Introduction
Automating load testing in CircleCI is one of the most effective ways to catch performance regressions before they reach production. While unit tests and integration tests validate correctness, they do not tell you how your application behaves when hundreds or thousands of users hit critical endpoints at the same time. That is where load testing, performance testing, and stress testing become essential parts of a modern CI/CD pipeline.
By integrating LoadForge with CircleCI, teams can run repeatable performance checks on every deployment, nightly build, or release candidate. This makes it easier to detect slow API responses, database bottlenecks, authentication issues, and infrastructure scaling problems early. Instead of discovering performance failures during a launch or traffic spike, you can surface them automatically inside your delivery workflow.
Because LoadForge is cloud-based and built on Locust, you can write realistic Python-based test scripts and run them at scale using distributed testing infrastructure. Combined with CircleCI, this gives you a practical way to automate load testing, view real-time reporting, and enforce performance standards as part of CI/CD.
Prerequisites
Before you automate load testing in CircleCI, make sure you have the following:
- A CircleCI project connected to your repository
- A LoadForge account
- A target environment for testing, such as:
  - a staging environment
  - a preview deployment
  - a performance testing environment
- API credentials or test user accounts for authentication
- Basic familiarity with:
  - CircleCI workflows
  - environment variables
  - HTTP APIs
  - Python and Locust syntax
You should also define what success looks like before adding load testing to your pipeline. Common performance goals include:
- 95th percentile response time under 500 ms
- error rate below 1%
- login endpoint under 1 second at 100 concurrent users
- checkout workflow completing successfully under sustained load
For secure automation, store secrets like API tokens, usernames, and passwords in CircleCI environment variables or contexts rather than hardcoding them into scripts.
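A lightweight safeguard is to verify those variables are actually set before a test starts, rather than letting Locust fall back to placeholder defaults. The sketch below is a hypothetical pre-flight check; the variable names match the examples used later in this guide:

```python
import os

# Hypothetical pre-flight check: fail fast when required secrets are
# unset, instead of load testing with placeholder defaults.
REQUIRED_VARS = ["TARGET_HOST", "TEST_USER_EMAIL", "TEST_USER_PASSWORD"]

def missing_env_vars(required, env):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

# In CI you would call: missing_env_vars(REQUIRED_VARS, os.environ)
# and exit non-zero if the list is not empty.
missing = missing_env_vars(REQUIRED_VARS, {"TARGET_HOST": "https://staging-api.example.com"})
print(missing)  # ['TEST_USER_EMAIL', 'TEST_USER_PASSWORD']
```

Running this as the first step of a CI job turns a confusing mid-test authentication failure into an immediate, readable error.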
Understanding CircleCI Under Load
In most cases, CircleCI itself is not the application being load tested. Instead, CircleCI is the automation layer that triggers your performance testing, while the actual load is directed at your application, APIs, or services during the CI/CD process.
That said, there are several important considerations when using CircleCI for automated load testing:
Ephemeral build environments
CircleCI jobs are short-lived and designed for automation. This means your load testing workflow should be deterministic and easy to run from scratch. Avoid tests that rely on manually prepared state unless your pipeline provisions it first.
Environment-specific testing
In CI/CD, you often want to test a freshly deployed version of your app. That means the target host may change between runs. Your Locust scripts should read the base URL, credentials, and tokens from environment variables so they can target staging, preview, or release environments dynamically.
Gating deployments
A common pattern is to run load testing after deployment to staging and before promotion to production. If your load test detects a performance regression, CircleCI can fail the workflow and block the release.
Common bottlenecks exposed by CI-triggered load testing
When automated performance testing runs regularly, it often reveals recurring issues such as:
- slow database queries on list endpoints
- inefficient authentication flows
- API rate limiting misconfigurations
- memory leaks during sustained traffic
- poor caching behavior after deployment
- file upload or report generation endpoints that degrade sharply under concurrency
Using LoadForge’s distributed testing and global test locations, you can simulate traffic patterns that are difficult to reproduce from a single CI runner. CircleCI handles orchestration, while LoadForge provides scalable execution and real-time performance reporting.
Writing Your First Load Test
A good first step is to automate a basic API smoke-style load test in CircleCI. This verifies that your deployed application can handle a moderate level of traffic and that key endpoints remain healthy.
Below is a realistic Locust script for a web application with a health endpoint, product catalog, and login flow. It uses environment variables so it can run cleanly inside CircleCI.
```python
import os
from locust import HttpUser, task, between


class EcommerceApiUser(HttpUser):
    wait_time = between(1, 3)
    host = os.getenv("TARGET_HOST", "https://staging-api.example.com")

    def on_start(self):
        self.email = os.getenv("TEST_USER_EMAIL", "loadtest.user@example.com")
        self.password = os.getenv("TEST_USER_PASSWORD", "SuperSecurePass123!")

    @task(3)
    def get_health(self):
        self.client.get("/health", name="/health")

    @task(5)
    def browse_products(self):
        self.client.get("/api/v1/products?category=electronics&page=1&limit=20", name="/api/v1/products")

    @task(2)
    def login(self):
        payload = {
            "email": self.email,
            "password": self.password
        }
        self.client.post("/api/v1/auth/login", json=payload, name="/api/v1/auth/login")
```

What this test does
This script simulates common user activity:
- checking service health
- browsing the product catalog
- logging in with realistic credentials
The host is loaded from TARGET_HOST, which is ideal for CircleCI because each pipeline can point to a different deployment target.
Running this in CircleCI
You can keep your Locust scripts in your repository and use CircleCI to trigger LoadForge through API calls or through your deployment workflow. A simple CircleCI job might:
- deploy the application to staging
- trigger a LoadForge test
- wait for completion
- fail the pipeline if thresholds are not met
Here is a practical CircleCI configuration example:
```yaml
version: 2.1

jobs:
  deploy_staging:
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - run:
          name: Deploy to staging
          command: ./scripts/deploy-staging.sh

  trigger_load_test:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Trigger LoadForge test
          command: |
            curl -X POST "https://loadforge.com/api/v1/tests/${LOADFORGE_TEST_ID}/start" \
              -H "Authorization: Token ${LOADFORGE_API_TOKEN}" \
              -H "Content-Type: application/json" \
              -d '{
                "env": {
                  "TARGET_HOST": "'"${STAGING_URL}"'",
                  "TEST_USER_EMAIL": "'"${TEST_USER_EMAIL}"'",
                  "TEST_USER_PASSWORD": "'"${TEST_USER_PASSWORD}"'"
                }
              }'

workflows:
  performance_pipeline:
    jobs:
      - deploy_staging
      - trigger_load_test:
          requires:
            - deploy_staging
```

This pattern makes CircleCI the orchestrator and LoadForge the execution engine for your load testing.
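To actually gate the pipeline, a follow-up step needs to wait for the test to finish and exit non-zero when thresholds are missed. The sketch below shows the shape of such a step; the `/results` endpoint and the response fields (`status`, `p95_ms`, `error_rate`) are hypothetical, so check the LoadForge API documentation for the real payload before using it:

```python
import json
import time
import urllib.request

# Sketch of a "wait and gate" step. The /results endpoint and the
# response fields used below are hypothetical assumptions.

def passes_thresholds(results, max_p95_ms=800, max_error_rate=0.01):
    """Pass/fail decision from a results dict like {"p95_ms": ..., "error_rate": ...}."""
    return results["p95_ms"] <= max_p95_ms and results["error_rate"] <= max_error_rate

def wait_for_results(test_id, token, timeout_s=1800, poll_s=30):
    """Poll until the test reports a finished status, then return its results."""
    url = f"https://loadforge.com/api/v1/tests/{test_id}/results"  # hypothetical endpoint
    req = urllib.request.Request(url, headers={"Authorization": f"Token {token}"})
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if data.get("status") == "finished":
            return data
        time.sleep(poll_s)
    raise TimeoutError("Load test did not finish before the timeout")

# In the CI job: results = wait_for_results(test_id, token), then
# raise SystemExit(1) if not passes_thresholds(results) to fail the workflow.
```

Because the script exits non-zero on a failed threshold, CircleCI marks the job failed and any jobs that `require` it never run, which is exactly the deployment gate described above.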
Advanced Load Testing Scenarios
Basic endpoint testing is useful, but real CI/CD performance testing should exercise the workflows that matter most to your users and business. Below are more advanced CircleCI-friendly scenarios you can automate with LoadForge.
Scenario 1: Authenticated API load testing with bearer tokens
Many modern applications use token-based authentication. In automated performance testing, you want users to authenticate once and then perform realistic API actions with the returned token.
```python
import os
from locust import HttpUser, task, between


class AuthenticatedApiUser(HttpUser):
    wait_time = between(1, 2)
    host = os.getenv("TARGET_HOST", "https://staging-api.example.com")

    def on_start(self):
        login_payload = {
            "email": os.getenv("TEST_USER_EMAIL", "perf.user@example.com"),
            "password": os.getenv("TEST_USER_PASSWORD", "PerfTestPass123!")
        }
        with self.client.post(
            "/api/v1/auth/login",
            json=login_payload,
            name="/api/v1/auth/login",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                token = response.json().get("access_token")
                if token:
                    self.client.headers.update({
                        "Authorization": f"Bearer {token}",
                        "Content-Type": "application/json"
                    })
                    response.success()
                else:
                    response.failure("No access token returned")
            else:
                response.failure(f"Login failed: {response.status_code}")

    @task(4)
    def get_account_profile(self):
        self.client.get("/api/v1/account/profile", name="/api/v1/account/profile")

    @task(3)
    def get_orders(self):
        self.client.get("/api/v1/orders?status=all&page=1&limit=10", name="/api/v1/orders")

    @task(2)
    def create_cart(self):
        payload = {
            "currency": "USD",
            "channel": "web"
        }
        self.client.post("/api/v1/carts", json=payload, name="/api/v1/carts")
```

Why this matters in CircleCI
This scenario is ideal for post-deployment checks in CI/CD because it validates:
- authentication performance
- token issuance
- protected endpoint latency
- session-related application behavior
If a new release introduces slower JWT validation or database lookups on authenticated endpoints, this test will catch it.
Scenario 2: End-to-end checkout workflow under load
Single-endpoint tests are helpful, but bottlenecks often appear in multi-step transactions. For an ecommerce or SaaS app, you should load test the paths that generate revenue or user conversions.
```python
import os
import random
from locust import HttpUser, task, between


class CheckoutUser(HttpUser):
    wait_time = between(2, 5)
    host = os.getenv("TARGET_HOST", "https://staging-api.example.com")

    def on_start(self):
        login_payload = {
            "email": os.getenv("CHECKOUT_USER_EMAIL", "checkout.user@example.com"),
            "password": os.getenv("CHECKOUT_USER_PASSWORD", "CheckoutPass123!")
        }
        response = self.client.post("/api/v1/auth/login", json=login_payload, name="/api/v1/auth/login")
        token = response.json().get("access_token")
        self.client.headers.update({
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json"
        })

    @task
    def complete_checkout(self):
        product_id = random.choice([1012, 1018, 1025, 1033])
        cart_response = self.client.post(
            "/api/v1/carts",
            json={"currency": "USD", "channel": "web"},
            name="/api/v1/carts"
        )
        cart_id = cart_response.json().get("id")
        self.client.post(
            f"/api/v1/carts/{cart_id}/items",
            json={
                "product_id": product_id,
                "quantity": 1
            },
            name="/api/v1/carts/[id]/items"
        )
        self.client.post(
            f"/api/v1/carts/{cart_id}/shipping-address",
            json={
                "first_name": "Test",
                "last_name": "User",
                "address_line1": "123 Performance Ave",
                "city": "Austin",
                "state": "TX",
                "postal_code": "78701",
                "country": "US"
            },
            name="/api/v1/carts/[id]/shipping-address"
        )
        self.client.post(
            f"/api/v1/carts/{cart_id}/payment-intents",
            json={
                "provider": "stripe",
                "payment_method_token": "pm_card_visa"
            },
            name="/api/v1/carts/[id]/payment-intents"
        )
        self.client.post(
            f"/api/v1/carts/{cart_id}/checkout",
            json={
                "accept_terms": True,
                "marketing_opt_in": False
            },
            name="/api/v1/carts/[id]/checkout"
        )
```

Why this matters
This is the kind of load testing workflow that reveals:
- cart creation contention
- inventory lookup latency
- payment service slowdowns
- checkout failures under concurrency
- database locking issues during order creation
These are exactly the regressions you want to catch automatically in CircleCI before a release goes live.
Scenario 3: File upload and report generation performance testing
Many internal platforms and SaaS products include heavy operations such as CSV imports, document uploads, or analytics report generation. These often behave well with a few users but degrade quickly under load.
```python
import os
import io
from locust import HttpUser, task, between


class ReportingUser(HttpUser):
    wait_time = between(3, 6)
    host = os.getenv("TARGET_HOST", "https://staging-api.example.com")

    def on_start(self):
        login_payload = {
            "email": os.getenv("REPORT_USER_EMAIL", "reports.user@example.com"),
            "password": os.getenv("REPORT_USER_PASSWORD", "ReportsPass123!")
        }
        response = self.client.post("/api/v1/auth/login", json=login_payload, name="/api/v1/auth/login")
        token = response.json().get("access_token")
        self.client.headers.update({
            "Authorization": f"Bearer {token}"
        })

    @task(2)
    def upload_csv_import(self):
        csv_content = """email,first_name,last_name,plan
alice@example.com,Alice,Johnson,pro
bob@example.com,Bob,Smith,business
charlie@example.com,Charlie,Brown,starter
"""
        files = {
            "file": ("customers.csv", io.BytesIO(csv_content.encode("utf-8")), "text/csv")
        }
        self.client.post(
            "/api/v1/imports/customers",
            files=files,
            name="/api/v1/imports/customers"
        )

    @task(1)
    def generate_usage_report(self):
        payload = {
            "report_type": "usage_summary",
            "date_range": {
                "from": "2026-03-01",
                "to": "2026-03-31"
            },
            "format": "json",
            "filters": {
                "workspace_id": "ws_29841",
                "include_inactive": False
            }
        }
        self.client.post(
            "/api/v1/reports/generate",
            json=payload,
            name="/api/v1/reports/generate"
        )
```

Why this matters in CI/CD
These operations are often resource-intensive and can expose:
- CPU spikes
- queue backlogs
- object storage latency
- memory pressure
- slow background job processing
Automating this kind of stress testing in CircleCI helps teams validate not just request latency, but system resilience after code changes.
Analyzing Your Results
Once CircleCI triggers your LoadForge test, the next step is interpreting the results correctly. LoadForge provides real-time reporting and distributed execution, which makes it easier to understand how your application performs under realistic traffic levels.
Focus on these metrics:
Response time percentiles
Average response time can be misleading. Pay close attention to:
- P50 for typical user experience
- P95 for slower but common requests
- P99 for worst-case performance
If your /api/v1/auth/login endpoint has an average of 200 ms but a P95 of 1.8 seconds, you likely have intermittent bottlenecks.
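The gap between average and tail latency is easy to demonstrate with Python's standard library. The sample below uses made-up latency values for a mostly fast endpoint with a slow tail:

```python
import statistics

# Why averages mislead: a mostly fast endpoint with a slow tail.
# Latencies are made-up sample values in milliseconds.
latencies_ms = [120] * 90 + [1800] * 10  # 90% fast, 10% slow

mean = statistics.fmean(latencies_ms)
cuts = statistics.quantiles(latencies_ms, n=100)  # 1st..99th percentile cut points
p50, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"mean={mean:.0f}ms p50={p50:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms")
# mean=288ms p50=120ms p95=1800ms p99=1800ms
```

The mean of 288 ms looks acceptable, yet one request in ten takes 1.8 seconds, which is precisely what P95 and P99 expose.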
Error rate
A low but rising error rate under load often indicates the application is approaching capacity. Watch for:
- 500 errors
- 502 or 504 gateway timeouts
- authentication failures
- rate limit responses
- connection resets
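As a toy illustration of what counts toward that rate, the function below classifies a window of observed responses, treating 5xx, 429, and connection failures (represented here as `None`) as errors:

```python
# Toy error-rate calculation over a window of observed responses.
# 5xx, 429, and connection failures (represented as None) count as errors.
def error_rate(status_codes):
    errors = sum(1 for s in status_codes if s is None or s == 429 or s >= 500)
    return errors / len(status_codes)

window = [200] * 970 + [500] * 12 + [502, 504] + [429] * 6 + [None] * 10
print(f"{error_rate(window):.1%}")  # 3.0%
```

A 3% error rate like this is often invisible in casual manual testing but shows up immediately in an automated run with a 1% gate.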
Requests per second
This tells you how much traffic your application can sustain. Compare throughput across builds to detect regressions.
Endpoint-specific performance
Group requests by logical names like:
- /api/v1/auth/login
- /api/v1/products
- /api/v1/carts/[id]/checkout
This makes it easier to identify exactly which workflow slowed down after a deployment.
Behavior over time
Performance issues may only appear after several minutes of sustained load. Look for:
- steadily increasing response times
- memory-related degradation
- queue buildup
- error spikes during ramp-up
In a mature CircleCI workflow, you can use these results as quality gates. For example:
- fail the pipeline if P95 login latency exceeds 800 ms
- fail if checkout error rate exceeds 1%
- warn if throughput drops more than 15% compared to the baseline
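Those three gates can be expressed as a small comparison function. This is a sketch under the assumption that you persist the previous good build's metrics somewhere your pipeline can read (for example as a CircleCI artifact or a JSON file); the metric field names are illustrative:

```python
# Sketch of baseline gating; the metric field names are illustrative.
def compare_to_baseline(current, baseline, max_throughput_drop=0.15):
    """Return human-readable failures; an empty list means the build passes."""
    failures = []
    if current["login_p95_ms"] > 800:
        failures.append(f"login P95 {current['login_p95_ms']} ms exceeds 800 ms")
    if current["checkout_error_rate"] > 0.01:
        failures.append(f"checkout error rate {current['checkout_error_rate']:.2%} exceeds 1%")
    drop = 1 - current["rps"] / baseline["rps"]
    if drop > max_throughput_drop:
        failures.append(f"throughput dropped {drop:.0%} vs baseline")
    return failures

current = {"login_p95_ms": 740, "checkout_error_rate": 0.004, "rps": 80}
print(compare_to_baseline(current, {"rps": 100}))  # ['throughput dropped 20% vs baseline']
```

Exiting non-zero when the list is non-empty is all it takes to turn these checks into a hard quality gate in CircleCI.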
Because LoadForge is cloud-based, you can also run tests from global test locations to see whether latency or edge routing issues appear in specific regions.
Performance Optimization Tips
After load testing your application through CircleCI, these are some of the most common optimization opportunities:
Cache expensive reads
If product listings, dashboards, or reports are slow, add caching at the application, query, or CDN level.
Optimize database queries
Look for N+1 queries, missing indexes, and slow joins on high-traffic endpoints like:
- /api/v1/products
- /api/v1/orders
- /api/v1/account/profile
Reduce authentication overhead
If login or token validation is slow under load, review:
- password hashing cost
- session store performance
- JWT signing and verification logic
- external identity provider latency
Use asynchronous processing for heavy operations
File imports, report generation, and email sending should often be queued instead of processed inline.
Tune autoscaling and connection pools
If response times rise sharply with concurrency, your issue may be infrastructure-related rather than code-related. Check:
- app server worker counts
- database connection pool settings
- container CPU and memory limits
- horizontal autoscaling triggers
Establish performance baselines in CI/CD
Do not wait for a major incident to define acceptable performance. Use CircleCI to run recurring load testing and compare new builds against known-good baselines.
Common Pitfalls to Avoid
Automating load testing in CircleCI is powerful, but there are several mistakes teams make repeatedly.
Testing production accidentally
Always confirm that TARGET_HOST points to a safe environment. Use environment-specific contexts and safeguards in CircleCI.
Hardcoding credentials
Never place API keys, passwords, or tokens directly in Locust scripts. Use CircleCI environment variables and LoadForge environment configuration.
Running unrealistic traffic patterns
A test that hammers only /health is not meaningful. Mix endpoints and workflows based on real usage.
Ignoring test data setup
Authenticated workflows, carts, file imports, and reporting often require valid seed data. Your CI/CD pipeline should provision or reset test fixtures as needed.
Using too few users to reveal bottlenecks
Some issues only appear at higher concurrency or during longer sustained tests. Include both quick validation tests and deeper scheduled stress testing.
Failing to name requests clearly
Use the name parameter in Locust so reports group dynamic URLs correctly. Otherwise, analysis becomes noisy and hard to interpret.
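To see why this matters, the helper below mimics the grouping that `name=` gives you, collapsing numeric path segments into a single report row. This is purely illustrative; Locust performs the grouping for you when you pass `name=` as in the scripts above:

```python
import re

# Illustration only: mimic the grouping that name= provides in Locust,
# collapsing numeric path segments into one report row.
def group_name(path):
    return re.sub(r"/\d+(?=/|$)", "/[id]", path)

print(group_name("/api/v1/carts/8821/items"))  # /api/v1/carts/[id]/items
print(group_name("/api/v1/orders/104"))        # /api/v1/orders/[id]
```

Without grouping, a run that touches 5,000 carts produces 5,000 distinct URL rows, burying the one endpoint that actually regressed.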
Treating load testing as a one-time task
Performance testing is most valuable when automated continuously. CircleCI makes it easy to run load tests on every release, nightly, or before major launches.
Conclusion
Automating load testing in CircleCI gives your team a practical way to catch performance regressions before they impact users. By combining CircleCI’s workflow automation with LoadForge’s cloud-based infrastructure, distributed testing, real-time reporting, and Locust-based scripting, you can build performance testing directly into your CI/CD process.
Start with a basic API test, then expand into authenticated workflows, checkout journeys, file uploads, and other high-value scenarios. Over time, these automated load testing checks become a reliable safety net for every deployment.
If you want to scale performance testing without managing your own infrastructure, try LoadForge and bring realistic load testing, stress testing, and performance testing into your CircleCI pipeline today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.