
Introduction
Jenkins is at the center of many software delivery pipelines, which makes its reliability critical to build throughput, deployment velocity, and overall release confidence. When Jenkins becomes slow under concurrent usage, teams feel it immediately: builds queue up, API calls time out, agents disconnect, and developers lose trust in the CI/CD system.
Jenkins load testing helps you understand how your controller behaves under realistic pressure. Whether you are validating a new Jenkins deployment, testing plugin changes, sizing infrastructure, or ensuring your CI/CD platform can support peak usage, performance testing gives you the data you need before problems hit production.
In this guide, you’ll learn how to use LoadForge to run Jenkins load testing with realistic Locust scripts. We’ll cover authenticated API requests, job triggering, queue monitoring, artifact-heavy workflows, and ways to automate these checks in your pipeline. You’ll also see how to use LoadForge features like distributed testing, real-time reporting, cloud-based infrastructure, CI/CD integration, and global test locations to make Jenkins performance testing part of your release process.
Prerequisites
Before you start load testing Jenkins with LoadForge, make sure you have:
- A Jenkins instance you are allowed to test
- A Jenkins user account with API access
- An API token for that user
- One or more test jobs created in Jenkins
- Knowledge of your Jenkins URL, such as:
https://jenkins.example.com
- Access to the Jenkins endpoints you want to test
- LoadForge account access to create and run distributed load tests
It also helps to have:
- A dedicated non-production Jenkins environment for stress testing
- Test jobs that are safe to trigger repeatedly
- Known performance goals, such as:
- maximum acceptable queue wait time
- API response time thresholds
- maximum concurrent build trigger rate
- acceptable error rate under load
For authenticated Jenkins API calls, many teams use a username and API token. In some environments, Jenkins also requires a CSRF crumb for POST requests. We’ll include that pattern in the examples below.
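Here is that handshake in isolation: fetch `/crumbIssuer/api/json`, then echo the returned field name and value as a header on every POST. This is a minimal sketch; the crumb issuer endpoint is standard Jenkins, but the URLs and credentials in the comments are placeholders for your environment.

```python
# Sketch of the CSRF crumb handshake. crumb_headers() is the reusable part;
# the network calls are shown only in comments.

def crumb_headers(crumb_payload: dict) -> dict:
    """Turn the /crumbIssuer/api/json response into headers for POST requests.

    Jenkins tells you which header name to use (usually "Jenkins-Crumb").
    """
    return {crumb_payload["crumbRequestField"]: crumb_payload["crumb"]}

# Usage with the requests library (base_url, user, and token are placeholders):
#   resp = requests.get(f"{base_url}/crumbIssuer/api/json", auth=(user, token))
#   headers = crumb_headers(resp.json())
#   requests.post(f"{base_url}/job/my-job/build", auth=(user, token), headers=headers)
```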
Understanding Jenkins Under Load
Jenkins has several performance characteristics that make load testing especially important.
Controller bottlenecks
The Jenkins controller handles:
- Web UI requests
- REST API traffic
- Build queue scheduling
- Agent coordination
- Plugin execution
- Credential and configuration access
Under high concurrency, the controller can become CPU-bound or memory-constrained, especially when many users or automation tools are:
- polling job status
- triggering builds
- downloading console logs
- browsing recent runs
- querying plugin-heavy pages
Queue contention
One of the first signs of Jenkins stress is queue growth. If many builds are triggered at once, Jenkins may accept requests quickly but delay execution because:
- executors are exhausted
- agents are offline
- labels are too restrictive
- plugins add scheduling overhead
That means a successful HTTP response does not always mean good performance. Your Jenkins load testing should measure both request latency and downstream queue/build behavior.
Plugin overhead
Jenkins performance often depends on installed plugins. Common plugins can add overhead to:
- authentication
- job configuration rendering
- pipeline execution
- SCM polling
- artifact archiving
- reporting dashboards
A Jenkins instance with many plugins may behave very differently from a clean installation.
Build-trigger storms
In real CI/CD environments, load spikes often happen when:
- multiple pull requests are merged
- nightly jobs start at the same time
- a monorepo change triggers many pipelines
- deployment pipelines fan out across services
Stress testing Jenkins helps you identify whether your controller, agents, and network can absorb those bursts.
Writing Your First Load Test
A good first Jenkins load test should answer a simple question: can the system handle concurrent authenticated API traffic for common read operations?
The script below logs in using basic auth with an API token and exercises common Jenkins endpoints:
- /api/json
- /computer/api/json
- /job/{job_name}/api/json
These endpoints are realistic because dashboards, automation tools, and internal integrations frequently hit them.
Basic Jenkins API load test
from locust import HttpUser, task, between
import os


class JenkinsApiUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        self.username = os.getenv("JENKINS_USERNAME", "loadtest-user")
        self.api_token = os.getenv("JENKINS_API_TOKEN", "replace-me")
        self.job_name = os.getenv("JENKINS_JOB_NAME", "smoke-build")
        self.client.auth = (self.username, self.api_token)

    @task(3)
    def get_root_api(self):
        self.client.get(
            "/api/json",
            params={
                "tree": "jobs[name,color],views[name],primaryView[name],quietingDown,useCrumbs"
            },
            name="GET /api/json"
        )

    @task(2)
    def get_computers(self):
        self.client.get(
            "/computer/api/json",
            params={
                "tree": "computer[displayName,offline,temporarilyOffline,numExecutors,busyExecutors]"
            },
            name="GET /computer/api/json"
        )

    @task(1)
    def get_job_details(self):
        self.client.get(
            f"/job/{self.job_name}/api/json",
            params={
                "tree": "name,color,lastBuild[number,result,duration,timestamp],inQueue"
            },
            name="GET /job/[job]/api/json"
        )

What this script tests
This script simulates authenticated users and automation systems reading Jenkins metadata. It’s useful for:
- baseline performance testing
- validating reverse proxy and auth behavior
- measuring controller responsiveness under concurrent API reads
What to look for in LoadForge
When you run this test in LoadForge, pay attention to:
- median and p95 response times
- error rates like 401, 403, 429, or 5xx
- whether /computer/api/json slows down more than expected
- whether response times degrade steadily as users increase
Because LoadForge supports real-time reporting, you can watch latency trends while the test is running and quickly spot controller saturation.
Advanced Load Testing Scenarios
Basic API reads are a good start, but Jenkins performance problems often appear during state-changing operations. The following scenarios are more realistic for CI/CD and DevOps teams.
Scenario 1: Trigger builds with crumb authentication
Many Jenkins environments require a CSRF crumb for POST requests. This example fetches the crumb, triggers a parameterized build, and checks queue status.
This is a highly practical Jenkins load testing scenario because real systems often have many external services or developers triggering jobs through the API.
from locust import HttpUser, task, between
import os


class JenkinsBuildTriggerUser(HttpUser):
    wait_time = between(2, 5)

    def on_start(self):
        self.username = os.getenv("JENKINS_USERNAME", "loadtest-user")
        self.api_token = os.getenv("JENKINS_API_TOKEN", "replace-me")
        self.job_name = os.getenv("JENKINS_JOB_NAME", "api-regression-tests")
        self.branch = os.getenv("GIT_BRANCH", "main")
        self.client.auth = (self.username, self.api_token)
        self.crumb_header_name = None
        self.crumb_value = None
        self.fetch_crumb()

    def fetch_crumb(self):
        with self.client.get(
            "/crumbIssuer/api/json",
            name="GET /crumbIssuer/api/json",
            catch_response=True
        ) as response:
            if response.status_code == 200:
                data = response.json()
                self.crumb_header_name = data.get("crumbRequestField")
                self.crumb_value = data.get("crumb")
                response.success()
            else:
                response.failure(f"Failed to fetch crumb: {response.status_code}")

    def get_headers(self):
        headers = {}
        if self.crumb_header_name and self.crumb_value:
            headers[self.crumb_header_name] = self.crumb_value
        return headers

    @task
    def trigger_parameterized_build(self):
        payload = {
            "BRANCH": self.branch,
            "TEST_SUITE": "smoke",
            "ENVIRONMENT": "staging",
            "RUN_PERFORMANCE_CHECKS": "true"
        }
        with self.client.post(
            f"/job/{self.job_name}/buildWithParameters",
            data=payload,
            headers=self.get_headers(),
            allow_redirects=False,
            name="POST /job/[job]/buildWithParameters",
            catch_response=True
        ) as response:
            if response.status_code in (201, 302):
                queue_url = response.headers.get("Location")
                if queue_url:
                    response.success()
                else:
                    response.failure("Build triggered but queue location missing")
            else:
                response.failure(f"Unexpected status code: {response.status_code}")

Why this matters
This test reveals how Jenkins handles concurrent build submissions. It can expose:
- slow crumb generation
- authentication bottlenecks
- queue insertion delays
- plugin overhead on build triggers
- reverse proxy issues under POST-heavy traffic
If you want to fail slow builds in your release workflow, this is one of the best places to start.
Scenario 2: Trigger builds and monitor queue depth
A Jenkins controller may accept build requests quickly, but the real issue may be queue growth. This example triggers a build and then checks queue item status.
from locust import HttpUser, task, between
import os
import re


class JenkinsQueueUser(HttpUser):
    wait_time = between(3, 6)

    def on_start(self):
        self.username = os.getenv("JENKINS_USERNAME", "loadtest-user")
        self.api_token = os.getenv("JENKINS_API_TOKEN", "replace-me")
        self.job_name = os.getenv("JENKINS_JOB_NAME", "docker-image-build")
        self.client.auth = (self.username, self.api_token)
        self.crumb_header_name = None
        self.crumb_value = None
        self.fetch_crumb()

    def fetch_crumb(self):
        response = self.client.get("/crumbIssuer/api/json", name="GET crumb")
        if response.status_code == 200:
            data = response.json()
            self.crumb_header_name = data["crumbRequestField"]
            self.crumb_value = data["crumb"]

    def headers(self):
        if self.crumb_header_name and self.crumb_value:
            return {self.crumb_header_name: self.crumb_value}
        return {}

    @task
    def trigger_and_check_queue(self):
        with self.client.post(
            f"/job/{self.job_name}/build",
            headers=self.headers(),
            allow_redirects=False,
            name="POST /job/[job]/build",
            catch_response=True
        ) as response:
            if response.status_code not in (201, 302):
                response.failure(f"Failed to trigger build: {response.status_code}")
                return
            queue_location = response.headers.get("Location")
            if not queue_location:
                response.failure("No queue location returned")
                return
            response.success()
            match = re.search(r"/queue/item/(\d+)/", queue_location)
            if not match:
                return
            queue_id = match.group(1)

        self.client.get(
            f"/queue/item/{queue_id}/api/json",
            params={"tree": "id,blocked,buildable,stuck,why,task[name],executable[number,url]"},
            name="GET /queue/item/[id]/api/json"
        )

    @task(2)
    def get_global_queue_snapshot(self):
        self.client.get(
            "/queue/api/json",
            params={"tree": "items[id,task[name],blocked,buildable,stuck,why,inQueueSince]"},
            name="GET /queue/api/json"
        )

What this scenario tells you
This test is useful for identifying:
- whether Jenkins is building a backlog
- how often queue items become stuck
- whether executors and agents are sufficient
- whether queue APIs remain responsive during bursts
When run with LoadForge’s distributed testing, you can simulate build triggers from multiple regions or teams hitting Jenkins at once.
Scenario 3: Pipeline-heavy workflows with console log and artifact access
Many Jenkins environments are pipeline-driven. Users and automation not only trigger builds, but also poll build status, fetch console output, and download artifacts. These actions can be expensive, especially with large logs or archived files.
from locust import HttpUser, task, between
import os


class JenkinsPipelineUser(HttpUser):
    wait_time = between(2, 4)

    def on_start(self):
        self.username = os.getenv("JENKINS_USERNAME", "loadtest-user")
        self.api_token = os.getenv("JENKINS_API_TOKEN", "replace-me")
        self.pipeline_job = os.getenv("JENKINS_PIPELINE_JOB", "release-pipeline")
        self.client.auth = (self.username, self.api_token)

    @task(3)
    def get_last_build_metadata(self):
        self.client.get(
            f"/job/{self.pipeline_job}/lastBuild/api/json",
            params={
                "tree": "number,result,duration,building,timestamp,actions[parameters[name,value]]"
            },
            name="GET /job/[pipeline]/lastBuild/api/json"
        )

    @task(2)
    def get_console_text(self):
        self.client.get(
            f"/job/{self.pipeline_job}/lastBuild/consoleText",
            name="GET /job/[pipeline]/lastBuild/consoleText"
        )

    @task(1)
    def get_artifact(self):
        self.client.get(
            f"/job/{self.pipeline_job}/lastSuccessfulBuild/artifact/build/reports/test-summary.json",
            name="GET /job/[pipeline]/artifact/test-summary.json"
        )

    @task(1)
    def get_stage_view_like_data(self):
        self.client.get(
            f"/job/{self.pipeline_job}/wfapi/describe",
            name="GET /job/[pipeline]/wfapi/describe"
        )

Why this is realistic
This scenario mirrors how teams actually use Jenkins after a build starts:
- dashboards poll pipeline state
- developers open console logs
- release tooling downloads artifacts
- plugins query workflow APIs
This can create significant read pressure, even if build trigger volume is moderate.
Analyzing Your Results
Once your Jenkins load testing runs complete, the next step is interpreting the results in a way that improves release confidence.
Focus on percentile latency
Average response time can hide serious issues. For Jenkins performance testing, p95 and p99 are much more useful. Watch for:
- /api/json becoming slow under concurrent reads
- build trigger endpoints returning quickly but queue APIs degrading
- console log downloads causing long-tail latency spikes
If p95 latency rises sharply as user count increases, your Jenkins controller may be approaching saturation.
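To see why percentiles matter, here is a quick illustration with made-up latencies: a couple of slow outliers inflate the mean badly while p50 barely moves.

```python
# Illustrative only: eight fast responses plus two slow outliers (milliseconds).
from statistics import mean, quantiles

latencies_ms = [118, 120, 122, 125, 128, 130, 135, 140, 2900, 3200]

cuts = quantiles(latencies_ms, n=100)  # 99 cut points: cuts[49] is p50
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"mean={mean(latencies_ms):.0f}ms  p50={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")
```

The mean lands around 700 ms even though the typical request takes about 130 ms; p95 and p99 surface the outliers that the average hides.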
Separate HTTP success from operational success
A 201 or 302 from /build or /buildWithParameters only means Jenkins accepted the request. It does not mean the system is healthy. Also inspect:
- queue item delays
- stuck queue items
- agent availability
- build start times
This is especially important in CI/CD environments where “accepted but delayed for 20 minutes” is effectively a failure.
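One way to capture "accepted but delayed" as a metric is to time how long a queue item takes to gain an `executable` entry, which is how Jenkins marks that a build has actually started. A sketch; the `fetch_item` callable is yours to supply, for example a `requests.get` against `/queue/item/{id}/api/json`:

```python
import time

def wait_for_build_start(fetch_item, timeout_s=300, poll_interval_s=2.0):
    """Poll a Jenkins queue item until the build actually starts.

    fetch_item: a zero-arg callable returning the queue item's /api/json dict.
    Returns seconds waited before "executable" appeared; raises on timeout.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        item = fetch_item()
        if item.get("executable"):  # build has left the queue
            return time.monotonic() - start
        if item.get("stuck"):       # Jenkins flags items it cannot place
            print("stuck:", item.get("why"))
        time.sleep(poll_interval_s)
    raise TimeoutError("build did not start within timeout")

# With a fake queue item that starts immediately, the wait is near zero:
waited = wait_for_build_start(lambda: {"executable": {"number": 42}})
print(f"build started after {waited:.2f}s")
```

Recording this alongside HTTP latency lets you fail a test run where triggers were fast but builds sat in the queue.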
Look for error patterns
Common Jenkins load testing failures include:
- 401 Unauthorized from bad credentials or expired tokens
- 403 Forbidden from missing crumb or insufficient permissions
- 404 from incorrect job paths, especially folder-based jobs
- 429 or proxy throttling, depending on infrastructure
- 502 or 504 from reverse proxies under stress
- 500 errors from overloaded plugins or JVM pressure
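When triaging a failed run, it helps to bucket failures by those patterns before digging into logs. A small post-processing sketch; the status codes in the example are made up:

```python
from collections import Counter

# Map each status code onto the failure patterns described above.
FAILURE_BUCKETS = {
    401: "bad credentials or expired token",
    403: "missing crumb or insufficient permissions",
    404: "incorrect job path (check folder nesting)",
    429: "throttling",
    500: "overloaded plugins or JVM pressure",
    502: "reverse proxy under stress",
    504: "reverse proxy under stress",
}

def classify(status_code: int) -> str:
    return FAILURE_BUCKETS.get(status_code, f"other ({status_code})")

# Example: tally failures exported from a test run to see which pattern dominates.
failed_codes = [403, 403, 401, 404, 502, 403]
print(Counter(classify(c) for c in failed_codes).most_common(1))
# -> [('missing crumb or insufficient permissions', 3)]
```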
Correlate with infrastructure metrics
For meaningful Jenkins stress testing, compare LoadForge results with:
- controller CPU and memory usage
- JVM heap and GC activity
- thread counts
- disk I/O
- queue length
- executor utilization
- agent connection health
- reverse proxy metrics
LoadForge’s real-time reporting helps you correlate spikes in latency with backend events as they happen.
Use thresholds in CI/CD
A strong pattern is to automate Jenkins performance testing and fail builds when thresholds are exceeded. For example:
- p95 build trigger latency > 1000 ms
- queue API error rate > 1%
- console log download p95 > 3000 ms
This is where LoadForge fits naturally into CI/CD integration workflows: run the test after infrastructure changes, plugin upgrades, or Jenkins version updates, and block promotion if performance regresses.
Example Jenkins pipeline step to run a LoadForge test
pipeline {
    agent any
    stages {
        stage('Run Jenkins Load Test') {
            steps {
                sh '''
                    curl -X POST "https://api.loadforge.com/v1/tests/TEST_ID/run" \
                        -H "Authorization: Bearer $LOADFORGE_API_TOKEN" \
                        -H "Content-Type: application/json" \
                        -d '{
                            "name": "jenkins-performance-gate",
                            "users": 50,
                            "spawn_rate": 5,
                            "run_time": "10m"
                        }'
                '''
            }
        }
    }
}

This kind of automation makes load testing part of your delivery process, not just a one-time exercise.
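After the test run completes, a small gate script can turn those thresholds into a pass/fail exit code for the pipeline. The shape of the `results` dict below is hypothetical; map the keys onto whatever your LoadForge results export actually contains.

```python
# Hypothetical performance gate: the metric names and results dict shape are
# assumptions for illustration, not a documented LoadForge format.
THRESHOLDS = {
    "trigger_p95_ms": 1000,    # p95 build trigger latency
    "queue_error_rate": 0.01,  # queue API error rate
    "console_p95_ms": 3000,    # console log download p95
}

def gate(results: dict) -> list:
    """Return human-readable violations for any metric over its threshold."""
    return [
        f"{metric}: {results[metric]} exceeds {limit}"
        for metric, limit in THRESHOLDS.items()
        if results.get(metric, 0) > limit
    ]

# Example run: trigger latency regressed, everything else is within budget.
violations = gate({"trigger_p95_ms": 1240, "queue_error_rate": 0.004, "console_p95_ms": 2100})
print(violations)  # -> ['trigger_p95_ms: 1240 exceeds 1000']
# In CI, exit nonzero when violations is non-empty, e.g. sys.exit(bool(violations))
```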
Performance Optimization Tips
If your Jenkins load testing reveals bottlenecks, these optimizations often help.
Scale the controller appropriately
If API and UI endpoints slow down under moderate concurrency, review:
- CPU allocation
- memory sizing
- JVM heap settings
- disk performance for Jenkins home
Jenkins is sensitive to slow storage, especially with many jobs, logs, and artifacts.
Reduce plugin bloat
Every plugin adds potential overhead. Audit installed plugins and remove anything unused. Performance testing often shows that plugin-heavy controllers degrade much faster under load.
Offload builds to agents
Keep the controller focused on orchestration. If builds run on the controller itself, performance usually suffers quickly under concurrent load.
Tune executors and labels
If queue times are growing, you may need:
- more agents
- better label distribution
- more executors on suitable nodes
- less restrictive job placement rules
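A rough way to size executor capacity is Little's law: at steady state, the number of busy executors equals the build trigger rate times the mean build duration. A back-of-envelope sketch; the 30% headroom factor is an arbitrary choice for absorbing bursts:

```python
import math

def executors_needed(builds_per_minute: float, mean_build_minutes: float,
                     headroom: float = 1.3) -> int:
    """Little's law estimate of executors needed to keep the queue flat."""
    return math.ceil(builds_per_minute * mean_build_minutes * headroom)

# 12 builds/min averaging 8 minutes each -> 96 busy executors at steady state,
# rounded up to 125 once 30% burst headroom is added:
print(executors_needed(12, 8))  # -> 125
```

If your load test shows queue growth at trigger rates well below this estimate, the bottleneck is likely labels or agent availability rather than raw executor count.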
Archive fewer artifacts and logs
Large artifacts and huge console logs create extra I/O and network pressure. If your load test shows slow artifact or console endpoints, consider:
- reducing archived content
- rotating logs
- externalizing large artifacts
Optimize reverse proxy settings
If Jenkins is behind NGINX, HAProxy, or Apache, validate:
- connection limits
- keepalive settings
- timeout settings
- buffering behavior
- TLS termination performance
Sometimes the bottleneck is not Jenkins itself, but the layer in front of it.
Test from multiple regions
If global teams use Jenkins, run performance testing from different geographies. LoadForge’s global test locations help you understand whether latency or regional routing affects the user experience.
Common Pitfalls to Avoid
Jenkins load testing is easy to do poorly if the test does not reflect real usage.
Testing only the homepage
The Jenkins homepage is not enough. Real systems use:
- authenticated API calls
- build triggers
- queue polling
- pipeline status checks
- artifact downloads
Your performance testing should reflect those workflows.
Ignoring crumb requirements
Many POST failures happen because teams forget CSRF crumbs. If your Jenkins configuration requires crumbs, make sure your Locust script fetches and sends them correctly.
Using unrealistic jobs
Do not trigger production deployment jobs or expensive long-running pipelines during stress testing unless that is explicitly intended. Create safe, repeatable test jobs with realistic but controlled behavior.
Overlooking folder-based job paths
In many Jenkins setups, jobs live inside folders. The correct path may look like:
- /job/platform/job/api-tests/build
- /job/mobile/job/release-pipeline/api/json
If paths are wrong, your results will be misleading.
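A small helper in your Locust script can prevent path mistakes by expanding a folder-qualified job name into the `/job/.../job/...` form, URL-encoding each segment along the way:

```python
from urllib.parse import quote

def job_path(full_name: str) -> str:
    """Expand "platform/api-tests" into "/job/platform/job/api-tests"."""
    segments = [quote(part, safe="") for part in full_name.split("/") if part]
    return "".join(f"/job/{s}" for s in segments)

print(job_path("platform/api-tests") + "/build")
# -> /job/platform/job/api-tests/build
```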
Focusing only on request rate
High throughput is not the only goal. In Jenkins, queue health, build start times, and agent availability matter just as much as raw HTTP performance.
Running tests without environment isolation
Load testing a shared Jenkins controller can disrupt real developer workflows. Use a dedicated environment whenever possible.
Forgetting plugin-specific endpoints
If your teams rely on Blue Ocean, workflow APIs, test report plugins, or custom dashboards, include those endpoints in your scripts. Otherwise, the test may miss the real bottlenecks.
Conclusion
Jenkins load testing is one of the most effective ways to improve CI/CD reliability and release confidence. By simulating realistic API traffic, authenticated build triggers, queue pressure, pipeline polling, and artifact access, you can find controller bottlenecks before they slow down your engineering organization.
With LoadForge, you can run cloud-based, distributed load testing against Jenkins using realistic Locust scripts, monitor performance in real time, test from global locations, and integrate performance gates directly into your CI/CD workflow. That makes it easier to catch regressions early, fail slow builds, and ship with confidence.
If you want to make Jenkins performance testing a repeatable part of your DevOps process, try LoadForge and start building automated load tests for your CI/CD platform today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.