
Introduction
Contentful powers content delivery for modern websites, mobile apps, and e-commerce experiences. Whether you are serving product descriptions, landing pages, blog content, or localized storefront data, the performance of your Contentful APIs directly affects user experience and conversion rates. If your content layer slows down during a product launch, holiday sale, or marketing campaign, pages can render late, personalization can fail, and checkout journeys may suffer.
That is why load testing Contentful is an important part of performance testing for any CMS-driven application. With a proper Contentful load testing strategy, you can validate how the Content Delivery API, Content Preview API, and Management API behave under concurrent traffic, identify latency bottlenecks, and understand how your application handles rate limits and content fetch patterns.
In this guide, you will learn how to load test Contentful APIs using LoadForge and Locust. We will cover realistic scenarios including public content retrieval, authenticated preview access, filtered entry queries, and asset-heavy requests. You will also see how to interpret test results and optimize performance before production traffic spikes hit. Because LoadForge runs Locust tests on cloud-based infrastructure, you can easily execute distributed testing from global test locations, monitor real-time reporting, and integrate tests into your CI/CD pipeline.
Prerequisites
Before you begin load testing Contentful with LoadForge, make sure you have the following:
- A Contentful space ID
- At least one environment, such as `master`
- A Content Delivery API token for public published content
- Optionally, a Content Preview API token for unpublished content testing
- Optionally, a Content Management API token if you want to test management operations carefully
- A list of realistic entry IDs, content types, slugs, or query patterns used by your application
- A LoadForge account
- Basic familiarity with Locust and Python
You should also understand which Contentful API you are testing:
- Content Delivery API: for published content delivered to production applications
- Content Preview API: for previewing draft content
- Content Management API: for content management operations, usually not suitable for heavy load testing in production-like volumes
Common base URLs include:
- https://cdn.contentful.com for the Content Delivery API
- https://preview.contentful.com for the Content Preview API
- https://api.contentful.com for the Content Management API
A typical Contentful endpoint looks like this:
```
GET /spaces/{space_id}/environments/{environment_id}/entries
```

For most performance testing use cases, you should focus on the Content Delivery API because it is the API most directly tied to end-user page rendering and storefront responsiveness.
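Before launching a full test, it can help to sanity-check that you are building the same URL and headers your Locust scripts will use. The helper below is a minimal sketch; the space ID, environment, and token values are placeholders, not real credentials:

```python
# Minimal sketch: assemble a Content Delivery API URL and auth headers.
# The space ID, environment, and token values are placeholders.

CDA_HOST = "https://cdn.contentful.com"

def delivery_url(space_id: str, environment_id: str, resource: str) -> str:
    """Build a Content Delivery API URL for a given resource."""
    return f"{CDA_HOST}/spaces/{space_id}/environments/{environment_id}/{resource}"

def auth_headers(token: str) -> dict:
    """Bearer-token headers used by Contentful API requests."""
    return {"Authorization": f"Bearer {token}"}

url = delivery_url("abc123xyz", "master", "entries")
print(url)  # https://cdn.contentful.com/spaces/abc123xyz/environments/master/entries
```

Verifying one request manually with these values (for example with curl) before a test run catches bad tokens and wrong space IDs cheaply.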
Understanding Contentful Under Load
Contentful is a headless CMS delivered over APIs, which means your application often makes many content requests to assemble a page. Under load, this architecture creates several important performance considerations.
API-driven page assembly
A single page may trigger multiple Contentful requests:
- Fetch homepage entry by slug
- Resolve linked entries using `include`
- Load product promo banners
- Retrieve navigation content
- Pull localized content for a specific locale
- Download referenced assets
If your frontend or backend does not cache effectively, traffic surges can multiply the number of API calls dramatically.
Query complexity
Contentful supports rich filtering and link resolution. Queries such as:
```
content_type=productPage&fields.slug=summer-sale&include=3&locale=en-US
```
are powerful, but they can also increase response size and processing time. During load testing, query-heavy requests often reveal higher latency than simple entry lookups.
Rate limiting
Contentful APIs may enforce rate limits depending on your plan and API type. A stress testing exercise should account for:
- Increased 429 Too Many Requests responses
- Retry behavior in your application
- Backoff handling
- Cache effectiveness under pressure
A good load test does not just measure average response time. It also helps you understand how your system behaves when Contentful starts throttling requests.
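One way to keep throttling visible in your results is to classify status codes explicitly before marking a request as failed. The helper below is illustrative and not part of Locust or Contentful; the category names are our own:

```python
# Sketch: classify load-test responses so rate limiting (HTTP 429)
# shows up as its own outcome instead of blending into generic failures.

def classify_response(status_code: int) -> str:
    """Map an HTTP status code to a load-test outcome category."""
    if status_code == 429:
        return "throttled"      # Contentful rate limit hit
    if status_code >= 500:
        return "server_error"   # upstream or service instability
    if status_code >= 400:
        return "client_error"   # bad token, wrong content type or slug
    return "ok"

# Inside a Locust task you might use it with catch_response=True:
#
# with self.client.get(url, catch_response=True) as response:
#     outcome = classify_response(response.status_code)
#     if outcome == "throttled":
#         response.failure("Rate limited (429) -- check plan limits")
#     elif outcome != "ok":
#         response.failure(f"{outcome}: {response.status_code}")
#     else:
#         response.success()
```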
Asset and linked content expansion
If your application requests entries with include values or pulls large asset references, payload size can grow quickly. This affects:
- API response time
- Network transfer time
- Client-side parsing overhead
- Origin performance when many users request the same content
Preview and management access
Preview and management APIs are useful for editorial workflows, but they are typically not optimized for the same traffic patterns as content delivery. If your internal teams rely on preview environments during busy publishing windows, testing those flows can still be valuable, but the load profile should be realistic and controlled.
Writing Your First Load Test
Let’s start with a basic Contentful load test using the Content Delivery API. This example simulates users browsing published content by requesting a collection of entries and a specific page by slug.
Basic Content Delivery API load test
```python
from locust import HttpUser, task, between
import os


class ContentfulDeliveryUser(HttpUser):
    wait_time = between(1, 3)
    host = "https://cdn.contentful.com"

    def on_start(self):
        self.space_id = os.getenv("CONTENTFUL_SPACE_ID", "your_space_id")
        self.environment_id = os.getenv("CONTENTFUL_ENVIRONMENT_ID", "master")
        self.delivery_token = os.getenv("CONTENTFUL_DELIVERY_TOKEN", "your_delivery_token")
        self.headers = {
            "Authorization": f"Bearer {self.delivery_token}",
            "Accept": "application/vnd.contentful.delivery.v1+json"
        }
        self.base_path = f"/spaces/{self.space_id}/environments/{self.environment_id}"

    @task(3)
    def get_homepage_entries(self):
        self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "landingPage",
                "fields.slug": "home",
                "include": 2,
                "locale": "en-US"
            },
            headers=self.headers,
            name="GET /entries landingPage home"
        )

    @task(2)
    def get_featured_products(self):
        self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "productPage",
                "fields.featured": "true",
                "limit": 12,
                "select": "sys.id,fields.productName,fields.slug,fields.price"
            },
            headers=self.headers,
            name="GET /entries featured products"
        )
```

What this script does
This first test models a realistic CMS-backed storefront pattern:
- It authenticates using a Content Delivery API bearer token
- It fetches a homepage entry by slug
- It fetches a list of featured products
- It uses weighted tasks to make homepage traffic slightly more frequent
This is a strong starting point for Contentful performance testing because it reflects common content retrieval patterns used by e-commerce frontends.
Running the test in LoadForge
In LoadForge, you can paste this Locust script into a new test, configure environment variables like:
```
CONTENTFUL_SPACE_ID=abc123xyz
CONTENTFUL_ENVIRONMENT_ID=master
CONTENTFUL_DELIVERY_TOKEN=CFPAT-example-token
```

Then launch the test with your desired user count and spawn rate. Because LoadForge supports distributed testing, you can simulate traffic from multiple regions to see how Contentful content delivery performs for global audiences.
Advanced Load Testing Scenarios
Once the basics are working, you should expand into more realistic Contentful scenarios. The following examples cover authenticated preview access, detailed product page retrieval with linked content, and asset-heavy content requests.
Scenario 1: Product detail page and linked content resolution
In many e-commerce and CMS applications, a product detail page requires more than one simple entry lookup. It may fetch the product by slug, resolve linked assets, and include related content such as recommendations or FAQs.
```python
from locust import HttpUser, task, between
import os
import random


class ContentfulProductPageUser(HttpUser):
    wait_time = between(2, 5)
    host = "https://cdn.contentful.com"

    def on_start(self):
        self.space_id = os.getenv("CONTENTFUL_SPACE_ID", "your_space_id")
        self.environment_id = os.getenv("CONTENTFUL_ENVIRONMENT_ID", "master")
        self.delivery_token = os.getenv("CONTENTFUL_DELIVERY_TOKEN", "your_delivery_token")
        self.headers = {
            "Authorization": f"Bearer {self.delivery_token}",
            "Accept": "application/vnd.contentful.delivery.v1+json"
        }
        self.base_path = f"/spaces/{self.space_id}/environments/{self.environment_id}"
        self.product_slugs = [
            "classic-running-shoe",
            "waterproof-daypack-20l",
            "organic-cotton-hoodie",
            "stainless-steel-bottle"
        ]

    @task(4)
    def load_product_page(self):
        slug = random.choice(self.product_slugs)
        with self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "productPage",
                "fields.slug": slug,
                "include": 3,
                "locale": "en-US"
            },
            headers=self.headers,
            name="GET /entries productPage by slug",
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"Unexpected status code: {response.status_code}")
                return
            data = response.json()
            items = data.get("items", [])
            if not items:
                response.failure(f"No product found for slug {slug}")
                return
            product = items[0]
            fields = product.get("fields", {})
            if "productName" not in fields:
                response.failure("Missing productName field")
            else:
                response.success()

    @task(1)
    def load_related_products(self):
        self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "productPage",
                "fields.category": "footwear",
                "limit": 8,
                "order": "-sys.updatedAt",
                "select": "sys.id,fields.productName,fields.slug,fields.thumbnail"
            },
            headers=self.headers,
            name="GET /entries related products"
        )
```

Why this matters
This test is more realistic because it validates:
- Slug-based product page lookups
- Link resolution with `include=3`
- Response correctness, not just response speed
- Category listing behavior used for recommendations or related products
For load testing Contentful in an e-commerce environment, this kind of scenario is often more valuable than a simple generic endpoint hit.
Scenario 2: Preview API testing for editorial workflows
If your editorial team uses Contentful preview to validate unpublished campaigns, landing pages, or product content before release, you may want to test the preview API under moderate internal concurrency.
```python
from locust import HttpUser, task, between
import os
import random


class ContentfulPreviewUser(HttpUser):
    wait_time = between(1, 4)
    host = "https://preview.contentful.com"

    def on_start(self):
        self.space_id = os.getenv("CONTENTFUL_SPACE_ID", "your_space_id")
        self.environment_id = os.getenv("CONTENTFUL_ENVIRONMENT_ID", "master")
        self.preview_token = os.getenv("CONTENTFUL_PREVIEW_TOKEN", "your_preview_token")
        self.headers = {
            "Authorization": f"Bearer {self.preview_token}",
            "Accept": "application/vnd.contentful.delivery.v1+json"
        }
        self.base_path = f"/spaces/{self.space_id}/environments/{self.environment_id}"
        self.campaign_slugs = [
            "black-friday-2025",
            "spring-launch-preview",
            "member-exclusive-sale"
        ]

    @task
    def load_preview_campaign(self):
        slug = random.choice(self.campaign_slugs)
        self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "campaignPage",
                "fields.slug": slug,
                "include": 2,
                "locale": "en-US"
            },
            headers=self.headers,
            name="GET preview /entries campaignPage"
        )
```

When to use this
This is useful when:
- Editors report preview slowness
- Marketing teams publish large campaigns with many linked assets
- Internal users access draft content concurrently before launch
Be careful not to apply unrealistic stress testing volumes to preview systems unless you have a specific need and understand the impact.
Scenario 3: Asset-heavy content and localized requests
Large images, rich media references, and localized content can increase Contentful API response times. This scenario simulates users browsing localized category pages with banner assets and linked promotional blocks.
```python
from locust import HttpUser, task, between
import os
import random


class ContentfulLocalizedCatalogUser(HttpUser):
    wait_time = between(1, 3)
    host = "https://cdn.contentful.com"

    def on_start(self):
        self.space_id = os.getenv("CONTENTFUL_SPACE_ID", "your_space_id")
        self.environment_id = os.getenv("CONTENTFUL_ENVIRONMENT_ID", "master")
        self.delivery_token = os.getenv("CONTENTFUL_DELIVERY_TOKEN", "your_delivery_token")
        self.headers = {
            "Authorization": f"Bearer {self.delivery_token}",
            "Accept": "application/vnd.contentful.delivery.v1+json"
        }
        self.base_path = f"/spaces/{self.space_id}/environments/{self.environment_id}"
        self.locales = ["en-US", "de-DE", "fr-FR"]
        self.categories = ["men", "women", "accessories", "outdoor"]

    @task(3)
    def load_category_page(self):
        locale = random.choice(self.locales)
        category = random.choice(self.categories)
        self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "categoryPage",
                "fields.slug": category,
                "include": 2,
                "locale": locale
            },
            headers=self.headers,
            name="GET /entries categoryPage localized"
        )

    @task(2)
    def load_category_promotions(self):
        locale = random.choice(self.locales)
        self.client.get(
            f"{self.base_path}/entries",
            params={
                "content_type": "promoBanner",
                "fields.active": "true",
                "limit": 5,
                "locale": locale,
                "order": "-fields.priority"
            },
            headers=self.headers,
            name="GET /entries promoBanner localized"
        )

    @task(1)
    def load_assets(self):
        self.client.get(
            f"{self.base_path}/assets",
            params={
                "limit": 10,
                "mimetype_group": "image"
            },
            headers=self.headers,
            name="GET /assets images"
        )
```

What this scenario reveals
This test helps identify:
- Localization-related latency differences
- Heavier payloads caused by linked banners and media
- Asset listing performance
- Whether category page requests are significantly slower than product or landing page requests
In LoadForge, these endpoints can be tracked separately in real-time reporting so you can compare response times and error rates by request name.
Analyzing Your Results
After running your Contentful load test, focus on more than just average response time. Effective performance testing requires deeper analysis.
Key metrics to review
Response time percentiles
Look at:
- Median response time
- 95th percentile
- 99th percentile
Averages can hide spikes. If your 95th percentile is high, some users are experiencing slow content delivery even if the average looks acceptable.
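A small worked example shows why: with nine fast samples and one slow outlier, the mean looks tolerable while the high percentiles expose the spike. The nearest-rank percentile helper below is a simplified sketch (Locust and LoadForge compute percentiles for you):

```python
# Sketch: mean vs. percentiles on a sample with one slow outlier.
import statistics

response_times_ms = [110, 120, 115, 105, 130, 125, 118, 112, 122, 2400]

def percentile(values, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(values)
    k = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[k]

mean = statistics.mean(response_times_ms)   # 345.7 -- looks tolerable
p50 = percentile(response_times_ms, 50)     # 118   -- typical user is fine
p95 = percentile(response_times_ms, 95)     # 2400  -- worst users are not
print(mean, p50, p95)
```

Here the median says the typical user is fine, while the 95th percentile shows that some users waited over two seconds, which the average alone would have hidden.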
Requests per second
This tells you how much API throughput your Contentful-backed application can sustain. Compare this against expected peak traffic during launches, promotions, or seasonal events.
Error rates
Pay close attention to:
- 401 Unauthorized, which usually indicates bad tokens
- 404 responses from incorrect content type or slug assumptions
- 429 Too Many Requests, which may indicate API throttling
- 5xx errors, which could point to service instability or upstream issues
Payload-heavy endpoints
Requests using include, localization, or asset retrieval often become the slowest. Compare endpoint groups like:
- GET /entries productPage by slug
- GET /entries promoBanner localized
- GET /assets images
This helps identify which content patterns need optimization.
Using LoadForge effectively
LoadForge makes Contentful load testing easier by providing:
- Distributed testing for simulating global visitor traffic
- Real-time reporting to spot latency spikes as they happen
- Cloud-based infrastructure so you do not need to manage load generators
- CI/CD integration to automate performance testing before deployment
- Historical comparisons to measure whether content model changes affect API performance
A useful workflow is to run a baseline test, make a change such as reducing include depth or improving caching, and then rerun the same scenario to compare results.
Performance Optimization Tips
If your Contentful load testing reveals bottlenecks, these optimizations often help.
Reduce include depth where possible
Deep link resolution can increase payload size significantly. Only request the linked content you actually need.
Use select to limit fields
Instead of returning full entries, request only necessary fields:
- sys.id
- fields.slug
- fields.productName
- fields.price
This reduces response size and parsing overhead.
Cache aggressively
For public content, use CDN and application-level caching where possible. Contentful is fast, but repeated uncached requests during traffic spikes can still add latency.
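As an illustration, even a minimal in-process TTL cache can collapse repeated requests for the same entry into a single API call. Real deployments typically rely on a CDN or a shared cache such as Redis; this sketch only demonstrates the pattern:

```python
# Sketch: a tiny application-level TTL cache for Contentful responses.
# Illustrative only -- production systems usually use a CDN or shared cache.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # stale: evict and treat as a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=30)

def fetch_entry(cache_key, fetcher):
    """Return a cached payload, calling `fetcher` only on a cache miss."""
    cached = cache.get(cache_key)
    if cached is not None:
        return cached
    payload = fetcher()
    cache.set(cache_key, payload)
    return payload
```

During a traffic spike, the difference between one upstream request per cache window and one per page view is often the difference between staying under rate limits and hitting them.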
Pre-render or hydrate wisely
If your frontend fetches multiple Contentful resources at runtime, consider server-side aggregation, static generation, or edge caching to reduce request counts.
Optimize localization strategy
If you serve many locales, test each one independently. Some locales may have more linked content or larger payloads than others.
Avoid unnecessary preview traffic
Preview APIs are for internal workflows. Make sure production traffic never accidentally routes to preview endpoints.
Monitor rate limiting behavior
If you see 429 errors during stress testing, implement retry logic with backoff in your application and revisit request patterns to reduce unnecessary API calls.
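A minimal sketch of that retry logic, assuming a `do_request` callable that stands in for your HTTP client (in production you would also honor any Retry-After header the API returns):

```python
# Sketch: retry with exponential backoff on HTTP 429.
# `do_request` is a stand-in returning (status_code, body).
import time

def fetch_with_backoff(do_request, max_retries: int = 4,
                       base_delay: float = 0.5, sleep=time.sleep):
    """Call `do_request` until it returns a non-429 status or retries run out."""
    for attempt in range(max_retries + 1):
        status, body = do_request()
        if status != 429:
            return status, body
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, 4s, ...
    return status, body
```

Injecting `sleep` as a parameter keeps the helper testable and lets you swap in jittered delays, which help avoid synchronized retry storms across many clients.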
Common Pitfalls to Avoid
When load testing Contentful, teams often make the same mistakes.
Testing unrealistic queries
Do not create synthetic requests your application never uses. Test actual content types, slugs, locales, and include depths from production patterns.
Ignoring caching layers
If your real architecture uses a CDN, edge cache, or backend cache, your load test should reflect that. Otherwise, results may exaggerate Contentful API load or miss real bottlenecks elsewhere.
Overloading preview or management APIs
The Content Management API and Preview API should not usually be stress tested like public delivery endpoints. Use moderate, realistic concurrency for editorial scenarios.
Failing to validate responses
A fast response is not useful if it returns incomplete or incorrect data. Use catch_response=True where appropriate to verify expected fields and content presence.
Not separating endpoint names
Always use clear name values in Locust so your reports group similar requests together. This makes analysis much easier in LoadForge.
Using one static slug or entry ID
If every virtual user hits the same content item, you may get misleading results due to caching. Use realistic pools of slugs, categories, and locales.
Forgetting token management
Expired or incorrect API tokens can invalidate a load test quickly. Store them securely in LoadForge environment variables and verify them before running larger tests.
Conclusion
Contentful is a powerful foundation for modern e-commerce and CMS-driven applications, but API-backed content delivery must be validated before high-traffic events. By load testing Contentful APIs with LoadForge, you can measure content delivery performance, identify slow queries, validate localized and asset-heavy requests, and understand how your application behaves during traffic surges.
Using realistic Locust scripts, you can test homepage content, product detail pages, preview workflows, and localized catalog traffic with confidence. From there, LoadForge helps you scale tests with distributed testing, analyze results through real-time reporting, and integrate performance testing into your CI/CD process.
If you want to ensure your Contentful-powered experiences stay fast under pressure, try LoadForge and start building your first Contentful load test today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.