
If you have looked into load testing with Locust or LoadForge, you have encountered the term "Locustfile." It sounds like a specific file format or configuration language, but it is actually something far more powerful: a plain Python file. That means everything you already know about Python -- variables, loops, conditionals, imports, libraries -- is available to you when writing load tests. There is no new language to learn, no YAML to wrestle with, and no GUI-only test builder that limits what you can express.
This guide starts from zero and builds up to advanced patterns. By the end, you will understand every component of a Locustfile and be able to write tests that simulate complex, realistic user behavior.
What Is a Locustfile?
A Locustfile is a Python file (typically named locustfile.py) that defines how virtual users behave during a load test. Each virtual user is an instance of a Python class that you write, and each class defines the HTTP requests that user makes, how long they wait between requests, and any setup or teardown logic.
Locust is the open-source load testing framework that reads and executes Locustfiles. LoadForge uses Locust as its scripting engine, which means any Locustfile you write works on both platforms. You can develop and debug locally with the locust CLI, then upload the same file to LoadForge for distributed, cloud-based execution across multiple regions.
The power of the Locustfile approach is that your load tests are code. They live in version control, they are reviewed in pull requests, they can be parameterized with environment variables, and they can be as simple or as sophisticated as your testing needs require.
Your First Locustfile
Here is the simplest possible Locustfile:
```python
from locust import HttpUser, task, between

class MyUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def homepage(self):
        self.client.get("/")
```
Six lines of code, and you have a working load test. Let's break down every part.
from locust import HttpUser, task, between -- This imports the three components you need for most Locustfiles. HttpUser is the base class for virtual users that make HTTP requests. task is a decorator that marks a method as something the virtual user should do. between is a wait time function.
class MyUser(HttpUser) -- You define a class that inherits from HttpUser. Each instance of this class represents one virtual user during the test. If you configure the test to run 100 users, Locust creates 100 instances of MyUser.
wait_time = between(1, 3) -- After each task completes, the virtual user waits a random amount of time between 1 and 3 seconds before executing the next task. This simulates realistic think time -- real users do not click links with zero delay between actions.
@task -- This decorator registers the method as a task that virtual users will execute. Without this decorator, the method exists but Locust will never call it.
self.client.get("/") -- The HttpUser base class provides self.client, which is an HTTP session (based on the requests library) that automatically tracks response times, success rates, and other metrics. Every request made through self.client is recorded in the test results.
Understanding HttpUser
HttpUser is the base class you will use for virtually all web application load tests. It provides several important features:
- self.client: An HTTP session that maintains cookies across requests (like a real browser), records all request metrics automatically, and supports all standard HTTP methods (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS).
- host: The base URL that all requests are made against. You can set this as a class attribute (host = "https://example.com"), pass it via the command line (--host https://example.com), or configure it in LoadForge's test settings.
- Automatic metric collection: Every request through self.client is timed and categorized. You do not need to write any timing or reporting code.
- Session persistence: Cookies set by login endpoints persist across subsequent requests within the same user, just like a real browser session.
You can also create multiple HttpUser subclasses in a single Locustfile to simulate different types of users with different behaviors. For example, one class for anonymous visitors and another for authenticated users.
Tasks and Task Weighting
The @task decorator tells Locust that a method should be executed by virtual users. When a user finishes one task and its wait time, Locust randomly selects the next task to execute.
By default, all tasks have equal probability. You can change this with task weighting:
```python
from locust import HttpUser, task, between

class WebUser(HttpUser):
    wait_time = between(1, 3)

    @task(10)
    def browse_products(self):
        self.client.get("/products")

    @task(5)
    def view_product(self):
        self.client.get("/products/42")

    @task(2)
    def add_to_cart(self):
        self.client.post("/cart", json={"product_id": 42, "quantity": 1})

    @task(1)
    def checkout(self):
        self.client.post("/checkout")
```
The numbers in @task(N) are relative weights, not percentages or absolute counts. In this example, browse_products (weight 10) will be selected approximately 10 times more often than checkout (weight 1). The total weights are 10 + 5 + 2 + 1 = 18, so browse_products runs about 55% of the time, view_product about 28%, add_to_cart about 11%, and checkout about 6%.
This weighting system lets you model realistic traffic distributions. In most web applications, browsing is far more common than purchasing, and your load test should reflect that.
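You can check this arithmetic yourself with a quick stdlib simulation. Locust expands the weights into a flat list of tasks and picks one at random each cycle, which is statistically equivalent to a weighted random choice:

```python
import random
from collections import Counter

# Task weights from the example above
weights = {"browse_products": 10, "view_product": 5, "add_to_cart": 2, "checkout": 1}

n = 100_000
picks = random.choices(list(weights), weights=list(weights.values()), k=n)
counts = Counter(picks)

for name, weight in weights.items():
    print(f"{name:16s} weight={weight:2d}  observed ~{counts[name] / n:.0%}")
```

Running this prints frequencies close to the 56% / 28% / 11% / 6% split described above.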
Wait Time Strategies
Wait time controls the pause between task executions for each virtual user. Locust provides three built-in strategies:
| Strategy | Syntax | Behavior | Best For |
|---|---|---|---|
| between | between(1, 3) | Random wait between min and max seconds | General-purpose simulation of human think time |
| constant | constant(2) | Fixed wait of exactly N seconds | API testing with regular polling intervals |
| constant_pacing | constant_pacing(5) | Ensures each task cycle takes at least N seconds total (including execution time) | Guaranteeing a fixed request rate per user |
The distinction between constant and constant_pacing is important. With constant(2), if a request takes 3 seconds, the total cycle is 5 seconds (3s request + 2s wait). With constant_pacing(5), the total cycle is always 5 seconds -- if the request takes 3 seconds, the wait is only 2 seconds. If the request takes longer than 5 seconds, there is no wait at all.
constant_pacing is particularly useful when you want each user to generate a predictable number of requests per unit of time, regardless of server response speed.
Making HTTP Requests
GET Requests
Simple GET requests and GET requests with query parameters:
```python
@task
def simple_get(self):
    self.client.get("/api/products")

@task
def get_with_params(self):
    self.client.get("/api/products", params={"category": "electronics", "page": 1})
```
POST Requests
Sending JSON payloads and form data:
```python
@task
def post_json(self):
    self.client.post("/api/orders", json={
        "product_id": 42,
        "quantity": 2,
        "shipping_address": "123 Main St"
    })

@task
def post_form(self):
    self.client.post("/login", data={
        "username": "testuser",
        "password": "testpass"
    })
```
The json parameter automatically serializes the dictionary to JSON and sets the Content-Type header. The data parameter sends form-encoded data.
Headers and Authentication
Adding custom headers and Bearer token authentication:
```python
class AuthenticatedUser(HttpUser):
    wait_time = between(1, 3)
    token = None

    def on_start(self):
        response = self.client.post("/api/auth/login", json={
            "email": "test@example.com",
            "password": "password123"
        })
        self.token = response.json()["access_token"]

    @task
    def protected_endpoint(self):
        self.client.get("/api/me", headers={
            "Authorization": f"Bearer {self.token}"
        })
```
Response Validation
By default, Locust considers any non-error HTTP status code (2xx, 3xx) as a success. The catch_response context manager lets you define custom success and failure criteria:
```python
@task
def validated_request(self):
    with self.client.get("/api/products", catch_response=True) as response:
        if response.status_code != 200:
            response.failure(f"Expected 200, got {response.status_code}")
        elif response.elapsed.total_seconds() > 2.0:
            response.failure("Response too slow")
        elif len(response.json().get("products", [])) == 0:
            response.failure("No products returned")
        else:
            response.success()
```
This is critical for meaningful load tests. A response that returns 200 OK but contains an error message in the body, or that takes 10 seconds, should not be counted as a success. The catch_response pattern gives you full control over what "success" means.
Sequential Task Flows
The standard @task approach picks tasks randomly, which is fine for simulating general browsing behavior. But some user journeys must happen in order: login, then browse, then add to cart, then checkout. For these, Locust provides SequentialTaskSet:
```python
from locust import HttpUser, SequentialTaskSet, task, between

class UserFlow(SequentialTaskSet):
    @task
    def login(self):
        self.client.post("/login", json={"user": "test", "pass": "test"})

    @task
    def browse(self):
        self.client.get("/products")

    @task
    def view_product(self):
        self.client.get("/products/1")

class WebUser(HttpUser):
    wait_time = between(1, 3)
    tasks = [UserFlow]
```
In a SequentialTaskSet, tasks execute in the order they are defined in the class, top to bottom. When the last task completes, the sequence restarts from the first task. This guarantees that every virtual user follows the same step-by-step flow.
Note that the HttpUser class uses tasks = [UserFlow] instead of defining @task methods directly. You can also mix task sets: tasks = [UserFlow, BrowsingBehavior] would have users randomly choose between the two flows.
Lifecycle Hooks
Locust provides two lifecycle hooks that run at the beginning and end of each virtual user's life:
on_start runs once when a virtual user is spawned. This is the ideal place for login, session setup, or any initialization that should happen before the user starts executing tasks:
```python
class WebUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        # Login once when the user spawns
        response = self.client.post("/api/auth/login", json={
            "email": "loadtest@example.com",
            "password": "testpassword"
        })
        self.token = response.json()["token"]
        self.headers = {"Authorization": f"Bearer {self.token}"}

    def on_stop(self):
        # Cleanup when the user is stopped
        self.client.post("/api/auth/logout", headers=self.headers)

    @task
    def browse(self):
        self.client.get("/api/dashboard", headers=self.headers)
```
on_stop runs when the test is stopping and the user is being terminated. Use it for cleanup: logging out, deleting test data, or closing resources.
Grouping URLs with the name Parameter
When your application uses dynamic URLs like /products/1, /products/2, /products/3, Locust treats each unique URL as a separate entry in the results. This means instead of seeing one "product detail" metric, you see hundreds of individual URL metrics -- which is almost never what you want.
The name parameter groups requests under a common label:
```python
import random

@task
def view_product(self):
    product_id = random.randint(1, 1000)
    self.client.get(f"/products/{product_id}", name="/products/[id]")
```
Now all product detail requests are grouped under /products/[id] in the results, giving you a single, meaningful aggregate metric for that endpoint.
This is especially important for APIs with UUID-based routes, query string variations, or any other pattern that generates many unique URLs for logically identical operations.
Using Python for Dynamic Behavior
Because a Locustfile is plain Python, you can use any Python feature or library to create realistic, dynamic test behavior.
Random data generation:
```python
import random
import string

@task
def create_user(self):
    username = ''.join(random.choices(string.ascii_lowercase, k=8))
    self.client.post("/api/users", json={
        "username": username,
        "email": f"{username}@loadtest.example.com"
    })
```
Reading test data from a CSV file:
```python
import csv
import random

# Load data once at module level, not per-user
with open("test_users.csv") as f:
    TEST_USERS = list(csv.DictReader(f))

class DataDrivenUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        user = random.choice(TEST_USERS)
        self.client.post("/login", json={
            "email": user["email"],
            "password": user["password"]
        })
```
Conditional logic based on responses:
```python
@task
def browse_and_maybe_buy(self):
    response = self.client.get("/api/products")
    products = response.json().get("products", [])
    if products:
        product = random.choice(products)
        self.client.get(f"/api/products/{product['id']}", name="/api/products/[id]")
        if product.get("in_stock") and random.random() < 0.1:
            self.client.post("/api/cart", json={"product_id": product["id"]})
```
This flexibility is one of the strongest arguments for code-based load testing over GUI-based tools. You can model arbitrarily complex user behavior, use real test data, make decisions based on server responses, and leverage any Python library you need.
Common Patterns and Tips
Reusing Tokens Across Tasks
Store authentication tokens as instance variables set during on_start, then reference them in every task:
```python
class AuthUser(HttpUser):
    wait_time = between(1, 2)

    def on_start(self):
        resp = self.client.post("/auth/token", json={
            "client_id": "load-test",
            "client_secret": "secret"
        })
        self.token = resp.json()["access_token"]

    @task
    def api_call(self):
        self.client.get("/api/data", headers={
            "Authorization": f"Bearer {self.token}"
        })
```
Extracting Data from Responses
Parse JSON responses and use the data in subsequent requests. This is essential for testing flows where one request returns IDs that are needed by the next:
```python
@task
def create_then_read(self):
    # Create a resource
    create_resp = self.client.post("/api/items", json={"name": "Test Item"})
    item_id = create_resp.json().get("id")
    if item_id:
        # Read it back
        self.client.get(f"/api/items/{item_id}", name="/api/items/[id]")
```
Error Handling
Wrap requests in try/except blocks when failures should not crash the virtual user:
```python
@task
def resilient_request(self):
    try:
        response = self.client.get("/api/unreliable-endpoint")
        data = response.json()
    except Exception:
        # Log the error but keep the user running
        pass
```
Parameterizing the Host
Avoid hardcoding the target URL. The host is set externally -- either via the command line, the LoadForge configuration, or as a class attribute that you can override:
```python
import os

class FlexibleUser(HttpUser):
    wait_time = between(1, 3)
    host = os.environ.get("TARGET_HOST", "http://localhost:8080")
```
This makes the same Locustfile work across development, staging, and production environments without modification.
Running Your Locustfile
Locally with the Locust CLI
Install Locust and run your file locally for development and debugging:
```bash
pip install locust
locust -f locustfile.py --host http://localhost:8080
```
This starts the Locust web UI at http://localhost:8089, where you can configure the number of users, spawn rate, and watch results in real time. For headless execution (useful in scripts), add the --headless flag with user count and spawn rate:
```bash
locust -f locustfile.py --host http://localhost:8080 --headless -u 50 -r 10 --run-time 60s
```
This runs 50 users, spawning 10 per second, for 60 seconds, and prints results to the terminal.
On LoadForge
For serious load testing -- hundreds or thousands of virtual users, geographically distributed traffic, automated scheduling, and detailed reporting -- upload your Locustfile to LoadForge. The platform handles distributed execution across multiple cloud regions, provides real-time dashboards during the test, and stores historical results for trend analysis.
The Locustfile you developed and debugged locally works identically on LoadForge. No modifications needed. You configure the user count, duration, geographic distribution, and performance thresholds through the LoadForge interface or API.
Conclusion
A Locustfile is just Python, and that simplicity is its greatest strength. You start with five lines of code for a basic test, and you can grow to hundreds of lines modeling complex, multi-step user journeys with dynamic data, conditional behavior, and custom validation -- all using a language you already know.
The patterns covered in this guide handle the vast majority of real-world load testing scenarios. Start simple, validate that your test produces meaningful results, and add complexity only when your testing needs demand it.
For a hands-on walkthrough of running your first load test end to end, see our load testing tutorial. If you are specifically testing REST APIs, our guide on how to load test an API goes deeper into API-specific patterns, authentication flows, and result interpretation.
Try LoadForge free for 7 days
Set up your first load test in under 2 minutes. No commitment.