
Introduction
tRPC has become a popular choice for teams building end-to-end typesafe applications with TypeScript, especially in stacks that combine Next.js, React, and Node.js backends. Its developer experience is excellent: you define procedures on the server, consume them on the client, and get full type inference without maintaining separate REST or GraphQL schemas.
But great developer experience does not automatically guarantee great runtime performance.
If your application depends on tRPC for critical user flows—authentication, dashboards, search, checkout, notifications, or admin tooling—you need to understand how those procedure calls behave under load. Load testing tRPC APIs helps you measure throughput, latency, concurrency limits, and failure behavior before real users find the bottlenecks for you.
In this guide, you’ll learn how to load test tRPC APIs with LoadForge using realistic Locust scripts. We’ll cover basic procedure calls, authenticated requests, batched operations, and stateful workflows that mirror how modern full-stack apps actually use tRPC. Along the way, we’ll also look at how to interpret performance testing results and identify optimization opportunities.
Because LoadForge is built on Locust, every example uses standard Python-based Locust scripts. That means you get flexible scripting plus LoadForge features like distributed testing, cloud-based infrastructure, real-time reporting, CI/CD integration, and global test locations.
Prerequisites
Before you start load testing a tRPC API, make sure you have the following:
- A working tRPC application or staging environment
- The base URL for your app or API, such as:
  https://staging.example.com
  https://api.example.com
- Knowledge of your tRPC endpoint structure, commonly something like:
  /api/trpc/post.list
  /api/trpc/auth.login
  /api/trpc/post.byId
- Sample request payloads for your procedures
- Test user credentials and any required auth tokens or session cookies
- A LoadForge account to run distributed load tests at scale
It also helps to understand how your tRPC server is configured. Depending on your stack, tRPC requests may:
- Use GET for queries and POST for mutations
- Support request batching through a single endpoint
- Require cookies, bearer tokens, or custom headers
- Run inside a Next.js API route, standalone Node server, or serverless deployment
For accurate performance testing, use an environment that resembles production as closely as possible, including database size, caching behavior, and authentication middleware.
Understanding tRPC Under Load
tRPC differs from traditional REST APIs because the performance characteristics often depend on more than raw HTTP routing. A single user action in a frontend app may trigger several procedure calls in sequence or in parallel. If batching is enabled, multiple logical calls may be combined into one HTTP request. That changes how you should think about load testing.
How tRPC Requests Typically Work
A tRPC API usually exposes procedure-specific paths such as:
  /api/trpc/post.list
  /api/trpc/post.byId
  /api/trpc/user.me
  /api/trpc/cart.addItem
Queries often pass input through a URL parameter like ?input=..., while mutations usually send JSON in the request body. Many apps also use batched endpoints where multiple procedures are invoked together.
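To make the encoding concrete, here is a minimal sketch using only Python's standard library of how a query input is serialized and percent-encoded into the `?input=` parameter (the procedure name and fields are illustrative):

```python
import json
from urllib.parse import urlencode

# Illustrative input for a post.list-style query, using the
# {"json": ...} envelope common in tRPC setups.
input_payload = {"json": {"limit": 10, "cursor": None, "category": "engineering"}}

# urlencode handles percent-encoding of the braces and quotes.
query_string = urlencode({"input": json.dumps(input_payload)})
url = f"/api/trpc/post.list?{query_string}"
print(url)
```

The load test scripts below let Locust's `params=` argument do this same encoding automatically.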
Common Bottlenecks in tRPC Apps
When load testing tRPC APIs, the most common bottlenecks include:
- Database-heavy procedures with inefficient queries
- N+1 access patterns in resolvers
- Expensive input validation
- Session or JWT verification overhead
- Slow middleware chains
- Serialization and deserialization costs for large payloads
- Batching behavior that hides expensive backend work
- Serverless cold starts in Next.js or edge deployments
Why Full-Stack Behavior Matters
Many teams use tRPC specifically because it powers full-stack app interactions. That means performance testing should not stop at isolated endpoints. You should also test realistic workflows like:
- User login followed by profile fetch
- Dashboard loads that trigger multiple procedure calls
- Search with filters and pagination
- Shopping cart updates and checkout steps
- Admin views with large result sets
These scenarios reveal how your tRPC API behaves under realistic concurrency and stress testing conditions.
Writing Your First Load Test
Let’s start with a simple tRPC query load test. Imagine a blog-style app with a procedure:
post.list
This procedure returns a paginated list of published posts. In many tRPC setups, a query request looks like this:
GET /api/trpc/post.list?input={"json":{"limit":10,"cursor":null,"category":"engineering"}}
Because the input parameter must be URL-encoded, we’ll let Python handle that.
Basic tRPC Query Load Test
```python
from locust import HttpUser, task, between
import json

class TRPCQueryUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def list_posts(self):
        input_payload = {
            "json": {
                "limit": 10,
                "cursor": None,
                "category": "engineering"
            }
        }
        self.client.get(
            "/api/trpc/post.list",
            params={"input": json.dumps(input_payload)},
            name="tRPC Query: post.list"
        )
```

What This Script Does
This script simulates users repeatedly calling a common tRPC query:
- It targets /api/trpc/post.list
- It passes realistic pagination and filtering input
- It groups results in Locust under a readable request name
This is a good starting point for baseline load testing. In LoadForge, you can run this test across multiple workers and quickly see:
- Requests per second
- Average and percentile response times
- Error rates
- How performance changes as concurrency increases
Why This Matters
A query like post.list is often called on homepage loads, category pages, or content feeds. If it becomes slow under concurrency, the whole app feels sluggish. Even a simple procedure can become expensive if it includes joins, sorting, filtering, or large response payloads.
Advanced Load Testing Scenarios
Once you’ve validated basic query performance, the next step is to test more realistic tRPC usage patterns.
Scenario 1: Authenticated tRPC Mutations and User Context
A common tRPC pattern is authenticated procedures that depend on session middleware. Let’s simulate a login flow followed by an authenticated profile fetch and cart mutation.
In this example, the application has these procedures:
  auth.login
  user.me
  cart.addItem
We’ll assume login sets a session cookie.
```python
from locust import HttpUser, task, between
import json
import random

class AuthenticatedTRPCUser(HttpUser):
    wait_time = between(1, 2)

    def on_start(self):
        self.login()

    def login(self):
        payload = {
            "json": {
                "email": f"loadtest{random.randint(1, 100)}@example.com",
                "password": "TestPassword123!"
            }
        }
        with self.client.post(
            "/api/trpc/auth.login",
            json=payload,
            name="tRPC Mutation: auth.login",
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"Login failed with status {response.status_code}")
                return
            try:
                data = response.json()
                if "error" in data:
                    response.failure(f"Login returned tRPC error: {data['error']}")
                else:
                    response.success()
            except Exception as e:
                response.failure(f"Invalid login response: {e}")

    @task(3)
    def get_profile(self):
        input_payload = {
            "json": None
        }
        self.client.get(
            "/api/trpc/user.me",
            params={"input": json.dumps(input_payload)},
            name="tRPC Query: user.me"
        )

    @task(2)
    def add_item_to_cart(self):
        payload = {
            "json": {
                "productId": random.choice([
                    "prod_hoodie_001",
                    "prod_mug_002",
                    "prod_sticker_003"
                ]),
                "quantity": random.randint(1, 3)
            }
        }
        self.client.post(
            "/api/trpc/cart.addItem",
            json=payload,
            name="tRPC Mutation: cart.addItem"
        )
```

What This Tests
This script is more realistic because it includes:
- Login and session establishment
- Authenticated query access
- Authenticated mutation behavior
- Repeated user actions after login
This matters because authenticated tRPC procedures often trigger extra work:
- Session lookup
- JWT verification
- Role and permission checks
- User-specific database queries
- Middleware execution
If your tRPC app feels fine for public endpoints but slows down for logged-in users, this type of load test will expose it.
Scenario 2: Batched tRPC Requests for Dashboard Loads
One of tRPC’s strengths is batching multiple procedure calls into a single request. Frontend apps often do this when loading dashboards or account pages. But batching can create misleading performance assumptions: fewer HTTP requests does not necessarily mean less backend work.
Suppose your dashboard loads:
  user.me
  notifications.list
  analytics.summary
A batched call may hit an endpoint like:
/api/trpc/user.me,notifications.list,analytics.summary?batch=1&input=...
Here’s a Locust script that simulates this pattern.
```python
from locust import HttpUser, task, between
import json

class TRPCBatchDashboardUser(HttpUser):
    wait_time = between(2, 5)

    @task
    def load_dashboard(self):
        batched_input = {
            "0": {"json": None},
            "1": {
                "json": {
                    "limit": 20,
                    "unreadOnly": False
                }
            },
            "2": {
                "json": {
                    "range": "30d",
                    "includeComparison": True
                }
            }
        }
        self.client.get(
            "/api/trpc/user.me,notifications.list,analytics.summary",
            params={
                "batch": "1",
                "input": json.dumps(batched_input)
            },
            name="tRPC Batch: dashboard.load"
        )
```

Why Batch Testing Is Important
Batching changes the shape of load:
- Fewer HTTP connections
- More work per request
- Larger response payloads
- Increased resolver concurrency on the server
- Greater risk of one slow procedure delaying the entire response
When running performance testing in LoadForge, compare batched and unbatched scenarios. You may discover that batching improves frontend efficiency but increases backend contention, especially for database-heavy dashboards.
Scenario 3: Stateful Search and Detail Workflows
Many tRPC apps power interactive search experiences. These can be costly because they involve filtering, sorting, pagination, and repeated user interactions.
Let’s simulate a user who:
- Searches products
- Opens a product detail page
- Submits a review
We’ll use these procedures:
  catalog.search
  catalog.bySlug
  reviews.create
```python
from locust import HttpUser, task, between
import json
import random

class TRPCSearchWorkflowUser(HttpUser):
    wait_time = between(1, 4)

    product_slugs = [
        "ultralight-travel-backpack",
        "wireless-mechanical-keyboard",
        "noise-cancelling-headphones",
        "ergonomic-office-chair"
    ]

    search_terms = [
        "backpack",
        "keyboard",
        "headphones",
        "office chair"
    ]

    @task(4)
    def search_catalog(self):
        term = random.choice(self.search_terms)
        input_payload = {
            "json": {
                "query": term,
                "category": "all",
                "priceRange": {
                    "min": 0,
                    "max": 500
                },
                "sort": "relevance",
                "limit": 24,
                "cursor": None
            }
        }
        self.client.get(
            "/api/trpc/catalog.search",
            params={"input": json.dumps(input_payload)},
            name="tRPC Query: catalog.search"
        )

    @task(2)
    def view_product(self):
        slug = random.choice(self.product_slugs)
        input_payload = {
            "json": {
                "slug": slug
            }
        }
        self.client.get(
            "/api/trpc/catalog.bySlug",
            params={"input": json.dumps(input_payload)},
            name="tRPC Query: catalog.bySlug"
        )

    @task(1)
    def create_review(self):
        payload = {
            "json": {
                "productId": random.choice([
                    "prod_bp_1001",
                    "prod_kb_2001",
                    "prod_hp_3001"
                ]),
                "rating": random.randint(4, 5),
                "title": "Load test review submission",
                "body": "This review was generated during performance testing of the tRPC reviews.create mutation."
            }
        }
        headers = {
            "Authorization": "Bearer test-jwt-token-for-load-testing"
        }
        self.client.post(
            "/api/trpc/reviews.create",
            json=payload,
            headers=headers,
            name="tRPC Mutation: reviews.create"
        )
```

What Makes This Realistic
This scenario better reflects full-stack app performance because it includes:
- Read-heavy search traffic
- Detail page lookups
- Write operations mixed with reads
- Realistic payload sizes
- Authenticated mutations
Under stress testing, search procedures often become hotspots due to:
- Full-text search
- Filter combinations
- Database indexes
- Cache misses
- Large result sets
This kind of script helps you understand whether your tRPC API can support interactive user behavior at scale.
Analyzing Your Results
After running your load test in LoadForge, focus on more than just average response time. tRPC APIs often appear healthy at the average level while hiding serious tail-latency issues.
Key Metrics to Watch
Response Time Percentiles
Look at:
- P50 for typical performance
- P95 for degraded user experience
- P99 for worst-case behavior under load
A post.list query averaging 120 ms may sound fine, but if the P95 is 1.8 seconds, users will notice.
Requests Per Second
Measure how many procedure calls your tRPC API can sustain. For batched requests, remember that one HTTP request may contain multiple logical operations. Interpret throughput accordingly.
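The conversion from HTTP throughput to logical procedure throughput is simple multiplication, but it is worth making explicit. A small helper sketch (the function name and figures are illustrative):

```python
def logical_rps(http_rps: float, procedures_per_batch: int) -> float:
    """Convert batched HTTP requests/second into logical tRPC calls/second.

    One batched HTTP request carries several logical procedure calls,
    so raw HTTP throughput understates the work the backend performs.
    """
    return http_rps * procedures_per_batch

# 200 batched requests/second with 3 procedures per batch
# -> 600 logical procedure calls per second
print(logical_rps(200, 3))
```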
Error Rate
Watch for:
- HTTP 429 rate limiting
- HTTP 500 application failures
- Timeouts
- tRPC-specific error objects in otherwise successful HTTP responses
Some tRPC servers return HTTP 200 with an embedded error payload. Your Locust scripts should validate response bodies where necessary, not just status codes.
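One way to structure that validation is a small helper that inspects both the status code and the body. The `{"error": ...}` envelope checked here is an assumption, so match it to your server's actual error shape:

```python
import json

def classify_trpc_response(status_code, body_text):
    """Return (ok, reason) for a tRPC HTTP response.

    tRPC servers can return HTTP 200 with an error object embedded in
    the body, so the payload must be inspected as well as the status.
    """
    if status_code != 200:
        return False, f"HTTP {status_code}"
    try:
        body = json.loads(body_text)
    except ValueError:
        return False, "non-JSON response"
    # Assumed error envelope; adjust the key to your server's format.
    if isinstance(body, dict) and "error" in body:
        return False, f"tRPC error: {body['error']}"
    return True, "ok"
```

Inside a Locust task you would call this within a `catch_response` block and mark the result with `response.failure(reason)` or `response.success()`, the same pattern the login example earlier uses inline.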
Latency by Procedure
Group requests with clear names like:
  tRPC Query: user.me
  tRPC Mutation: cart.addItem
  tRPC Batch: dashboard.load
This makes LoadForge’s real-time reporting much more useful because you can pinpoint which procedures degrade first.
Compare Different Load Shapes
Use LoadForge’s distributed testing to compare:
- Steady load testing for baseline performance
- Spike testing for sudden traffic bursts
- Stress testing to find the breaking point
- Soak testing for memory leaks and long-running stability issues
For example:
- A dashboard batch request may pass a 5-minute test but fail during a 1-hour soak test
- Authentication middleware may behave well under steady traffic but collapse during spikes
- Search procedures may show acceptable median latency but poor tail performance at higher concurrency
Correlate API Results with Backend Metrics
For the best analysis, compare LoadForge results with server-side telemetry:
- Database query duration
- CPU and memory usage
- Cache hit rates
- Event loop lag
- Connection pool saturation
- Serverless cold start frequency
This is often where the real reason behind poor tRPC performance becomes obvious.
Performance Optimization Tips
Once your load testing identifies slow procedures, focus on the highest-impact fixes.
Optimize Database Access
Most tRPC performance issues are really database issues. Review:
- Missing indexes
- N+1 queries in resolvers
- Unbounded result sets
- Inefficient joins
- Expensive sorting and filtering
For procedures like catalog.search or analytics.summary, database tuning often delivers the biggest gains.
Keep Procedure Payloads Small
Large JSON payloads increase:
- Serialization time
- Network transfer time
- Browser parsing time
- Memory usage on both client and server
Return only the fields clients need, especially for list endpoints and dashboard procedures.
Review Middleware Cost
tRPC middleware is powerful, but each layer adds overhead. Measure the cost of:
- Auth checks
- Request logging
- Input validation
- Role-based access control
- Tracing hooks
If every procedure passes through several heavy middleware layers, concurrency suffers.
Test Batched vs Unbatched Requests
Batching is not always faster overall. It can reduce network overhead while increasing backend contention. Load testing both patterns helps you choose the right strategy.
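To compare the two patterns fairly, it helps to generate both request shapes from the same list of logical calls. A sketch following the URL conventions shown earlier in this guide (comma-joined procedure names, `batch=1`, index-keyed input); the helper names are illustrative:

```python
import json

def unbatched_requests(calls):
    """One (path, query_params) pair per logical tRPC call."""
    return [
        (f"/api/trpc/{proc}", {"input": json.dumps(payload)})
        for proc, payload in calls
    ]

def batched_request(calls):
    """A single (path, query_params) pair combining every call,
    using the comma-joined path and index-keyed input envelope."""
    path = "/api/trpc/" + ",".join(proc for proc, _ in calls)
    batched_input = {str(i): payload for i, (_, payload) in enumerate(calls)}
    return path, {"batch": "1", "input": json.dumps(batched_input)}

calls = [
    ("user.me", {"json": None}),
    ("notifications.list", {"json": {"limit": 20}}),
]
```

In Locust, issue each pair with `self.client.get(path, params=params, name=...)` under distinct request names so the batched and unbatched scenarios show up side by side in LoadForge's reporting.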
Cache Where Appropriate
Good candidates for caching include:
- Public list queries
- Product detail lookups
- Analytics snapshots
- Feature flag reads
- Reference data
But always re-run performance testing after caching changes to validate real improvements.
Use LoadForge for Repeatable Benchmarks
Because LoadForge supports CI/CD integration, you can automate tRPC performance testing as part of deployment pipelines. This helps catch regressions before they reach production.
Common Pitfalls to Avoid
Load testing tRPC APIs requires a few technology-specific considerations.
Treating tRPC Like Generic REST
Even though tRPC uses HTTP, the logical workload is procedure-based, not route-based in the traditional REST sense. Test actual user interactions and procedure combinations, not just isolated URLs.
Ignoring Batched Calls
If your frontend uses tRPC batching, your load test should too. Otherwise, you may completely misrepresent production traffic patterns.
Validating Only HTTP Status Codes
A request may return HTTP 200 and still contain a tRPC error. Always inspect response bodies for critical procedures, especially authentication and mutations.
Using Unrealistic Inputs
Tiny payloads, empty databases, and single test users can produce misleadingly good results. Use realistic:
- Search queries
- Pagination sizes
- Product IDs
- User accounts
- Authentication flows
Forgetting Authentication Overhead
Authenticated tRPC procedures often cost much more than public ones. If your app is mostly used by logged-in users, anonymous tests will understate real production load.
Testing Only Short Bursts
Some tRPC issues only appear over time:
- Memory leaks
- Connection exhaustion
- Session store contention
- Cache churn
- Background job buildup
Run longer tests in LoadForge to uncover these slower-developing problems.
Not Naming Requests Clearly
If every request is grouped under raw URLs, your reports become harder to interpret. Use clear names in Locust so you can analyze performance by procedure and workflow.
Conclusion
Load testing tRPC APIs is essential if you want confidence in the real-world performance of your full-stack application. Because tRPC often powers everything from authentication to dashboards, search, and transactional workflows, even small inefficiencies can have a major impact under concurrency.
With LoadForge, you can build realistic Locust-based performance testing scripts for tRPC queries, mutations, authenticated sessions, and batched requests. You also get the benefits of cloud-based infrastructure, distributed testing, global test locations, real-time reporting, and CI/CD integration to make performance testing part of your normal development process.
If you’re ready to benchmark your tRPC procedures, uncover bottlenecks, and improve full-stack reliability, try LoadForge and start running your first tRPC load test today.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.