
Introduction
JSON-RPC is still a popular choice for internal services, blockchain nodes, financial platforms, ERP integrations, and systems that need a lightweight remote procedure call protocol over HTTP or WebSocket. Its simplicity is a major advantage: clients send a JSON object with a method, params, and id, and the server returns a JSON response. But that simplicity can hide serious performance risks when traffic grows.
Load testing JSON-RPC APIs helps you answer critical questions before production traffic does it for you:
- How many JSON-RPC requests per second can your API handle?
- Which methods have the highest latency under load?
- How does authentication affect throughput?
- What happens when clients send malformed requests or backend dependencies slow down?
- Can your infrastructure scale when multiple RPC methods are called concurrently?
With LoadForge, you can run cloud-based load testing and stress testing for JSON-RPC APIs using Locust-based Python scripts. That means you can simulate realistic client behavior, run distributed testing from multiple global test locations, and analyze results with real-time reporting. In this guide, you’ll learn how to build practical JSON-RPC performance tests, from a basic health check to authenticated and mixed-workload scenarios.
Prerequisites
Before you start load testing a JSON-RPC API with LoadForge, make sure you have:
- A working JSON-RPC endpoint, such as:
  - https://api.example.com/rpc
  - https://node.example.com/jsonrpc
- Documentation for your JSON-RPC methods, including:
- method names
- required parameters
- expected response structure
- authentication requirements
- Test credentials or API tokens if authentication is required
- A safe test environment or staging environment whenever possible
- A LoadForge account to run distributed load tests in the cloud
You should also understand the basics of JSON-RPC 2.0. A typical request looks like this:
```json
{
  "jsonrpc": "2.0",
  "method": "user.getProfile",
  "params": {
    "userId": 12345
  },
  "id": 1
}
```

And a typical response looks like this:
```json
{
  "jsonrpc": "2.0",
  "result": {
    "userId": 12345,
    "name": "Jane Doe"
  },
  "id": 1
}
```

If something goes wrong, the API may return an error object instead of a result.
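For reference, an error response carries a code and message (and optionally data) in place of result. The -32602 code below is the standard "Invalid params" code from the JSON-RPC 2.0 specification; the data field is illustrative:

```json
{
  "jsonrpc": "2.0",
  "error": {
    "code": -32602,
    "message": "Invalid params",
    "data": "userId is required"
  },
  "id": 1
}
```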
Understanding JSON-RPC Under Load
JSON-RPC APIs often look lightweight, but their performance profile depends heavily on what each method does behind the scenes. A simple method like system.ping may be nearly free, while a method like report.generateMonthlySummary could involve database joins, cache lookups, file generation, or calls to other services.
When you load test JSON-RPC APIs, you’re usually measuring more than protocol overhead. You’re testing the full execution path behind each RPC method.
Common JSON-RPC bottlenecks
Method-level performance differences
Not all JSON-RPC methods are equal. For example:
- `user.getProfile` may be a single indexed database lookup
- `wallet.getBalance` may hit a cache and return quickly
- `invoice.search` may trigger expensive filtering and pagination
- `blockchain.getTransactionHistory` may involve multiple backend queries
A good performance testing strategy measures methods individually and as part of a realistic mixed workload.
Serialization and payload size
JSON parsing and serialization can become expensive when:
- request payloads are large
- responses return deeply nested objects
- clients use batch JSON-RPC calls
- APIs return large arrays of records
Load testing should include both small and large payload scenarios.
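To exercise the batch case, JSON-RPC 2.0 lets clients send a JSON array of request objects in a single POST. A minimal sketch of a batch payload builder (the method names here are illustrative):

```python
import itertools

request_counter = itertools.count(1)


def build_batch(calls):
    """Build a JSON-RPC 2.0 batch payload from (method, params) pairs."""
    return [
        {
            "jsonrpc": "2.0",
            "method": method,
            "params": params,
            "id": next(request_counter),  # unique id per sub-request
        }
        for method, params in calls
    ]


# A batch mixing a cheap call with a heavier one
batch = build_batch([
    ("system.ping", {}),
    ("invoice.search", {"status": "open", "pageSize": 100}),
])
# In a Locust task this array would be sent as a single request:
#   self.client.post("/rpc", json=batch, name="batch")
```

Including a batch scenario alongside single-call scenarios shows whether the server parallelizes batch members or processes them serially.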
Authentication overhead
Many JSON-RPC APIs sit behind:
- Bearer token authentication
- API key headers
- session-based login methods
- signed requests
Authentication can affect latency and throughput, especially if tokens are validated against a remote identity provider on every request.
Backend dependencies
JSON-RPC is often used as a gateway to backend services. Under load, bottlenecks may appear in:
- relational databases
- Redis or Memcached
- blockchain nodes
- message queues
- search clusters
- third-party APIs
The RPC server may remain responsive at low concurrency but degrade quickly when these dependencies saturate.
Error handling under stress
A well-designed JSON-RPC API should fail predictably under pressure. During stress testing, watch for:
- rising error rates
- malformed JSON responses
- timeouts
- HTTP 429 or 503 responses
- JSON-RPC error objects with internal server codes
These issues often appear before full service failure.
Writing Your First Load Test
Let’s start with a basic JSON-RPC load test against a realistic endpoint:
- Endpoint: https://api.example.com/rpc
- Methods: `system.ping` and `user.getProfile`
This first script validates the JSON-RPC structure, checks for successful responses, and gives you a baseline for request volume and latency.
```python
from locust import HttpUser, task, between
import itertools

request_counter = itertools.count(1)


class JsonRpcUser(HttpUser):
    wait_time = between(1, 3)

    def rpc_call(self, method, params=None, name=None):
        payload = {
            "jsonrpc": "2.0",
            "method": method,
            "params": params or {},
            "id": next(request_counter)
        }
        with self.client.post(
            "/rpc",
            json=payload,
            headers={"Content-Type": "application/json"},
            name=name or method,
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"HTTP {response.status_code}")
                return
            try:
                data = response.json()
            except Exception as e:
                response.failure(f"Invalid JSON response: {e}")
                return
            if "error" in data:
                response.failure(f"JSON-RPC error: {data['error']}")
                return
            if data.get("jsonrpc") != "2.0":
                response.failure("Invalid JSON-RPC version in response")
                return
            response.success()

    @task(3)
    def ping(self):
        self.rpc_call(
            method="system.ping",
            params={},
            name="system.ping"
        )

    @task(1)
    def get_profile(self):
        self.rpc_call(
            method="user.getProfile",
            params={"userId": 1001},
            name="user.getProfile"
        )
```

What this script does
This test simulates users making two common RPC calls:
- a lightweight health-style method: `system.ping`
- a more realistic data access method: `user.getProfile`

It also validates that:

- the HTTP status code is 200
- the response is valid JSON
- the response contains JSON-RPC version 2.0
- no `error` object is returned
Why this matters
Many teams measure only transport-level success. But for JSON-RPC, an HTTP 200 response can still carry an application-level failure in the error field. Your load test should always validate JSON-RPC semantics, not just the HTTP status.
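That validation can be factored into a small helper that returns a failure reason or None, so the same check is reused across every task. A sketch, with field names following the JSON-RPC 2.0 spec:

```python
def jsonrpc_failure(status_code, body):
    """Return a failure reason for a JSON-RPC response, or None if it is valid.

    `body` is the parsed JSON response body; `status_code` is the HTTP status.
    """
    if status_code != 200:
        return f"HTTP {status_code}"
    if not isinstance(body, dict) or body.get("jsonrpc") != "2.0":
        return "Invalid JSON-RPC envelope"
    if "error" in body:
        return f"JSON-RPC error: {body['error']}"
    if "result" not in body:
        return "Missing result field"
    return None


# A 200 OK carrying an error object is still an application-level failure
reason = jsonrpc_failure(
    200,
    {"jsonrpc": "2.0", "error": {"code": -32601, "message": "Method not found"}, "id": 7},
)
```

In a Locust task, a non-None return value would be passed straight to `response.failure(...)`.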
Running this in LoadForge
In LoadForge, paste this Locust script into your test and configure the target host as https://api.example.com. Then start with a moderate load profile, such as:
- 25 users
- spawn rate of 5 users per second
- 5 to 10 minutes duration
This gives you a baseline for latency, throughput, and error rate before moving to more advanced scenarios.
Advanced Load Testing Scenarios
Once your baseline is established, the next step is to simulate realistic JSON-RPC usage patterns. Below are several advanced scenarios that better reflect production traffic.
Authenticated JSON-RPC sessions with bearer tokens
Many JSON-RPC APIs require users to authenticate first through a REST login endpoint or token service, then use the resulting bearer token for all subsequent RPC calls.
In this example:
- users authenticate via POST /auth/token
- authenticated requests go to POST /rpc
- methods include `account.getDetails` and `invoice.list`
```python
from locust import HttpUser, task, between
import itertools

request_counter = itertools.count(1)


class AuthenticatedJsonRpcUser(HttpUser):
    wait_time = between(1, 2)
    token = None

    def on_start(self):
        credentials = {
            "client_id": "loadforge-test-client",
            "client_secret": "test-secret-value",
            "audience": "jsonrpc-api",
            "grant_type": "client_credentials"
        }
        with self.client.post(
            "/auth/token",
            json=credentials,
            headers={"Content-Type": "application/json"},
            name="auth.token",
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"Authentication failed: HTTP {response.status_code}")
                return
            try:
                data = response.json()
                self.token = data["access_token"]
            except Exception as e:
                response.failure(f"Could not parse token response: {e}")

    def rpc_call(self, method, params=None, name=None):
        if not self.token:
            return
        payload = {
            "jsonrpc": "2.0",
            "method": method,
            "params": params or {},
            "id": next(request_counter)
        }
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.token}"
        }
        with self.client.post(
            "/rpc",
            json=payload,
            headers=headers,
            name=name or method,
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"HTTP {response.status_code}")
                return
            try:
                data = response.json()
            except Exception as e:
                response.failure(f"Invalid JSON response: {e}")
                return
            if "error" in data:
                response.failure(f"JSON-RPC error: {data['error']}")
                return
            response.success()

    @task(2)
    def get_account_details(self):
        self.rpc_call(
            method="account.getDetails",
            params={"accountId": "ACC-2024-00981"},
            name="account.getDetails"
        )

    @task(3)
    def list_invoices(self):
        self.rpc_call(
            method="invoice.list",
            params={
                "customerId": "CUST-44192",
                "status": "open",
                "page": 1,
                "pageSize": 25
            },
            name="invoice.list"
        )
```

What this tests
This scenario measures:
- login/token issuance overhead
- authenticated RPC call latency
- backend performance for account and invoice retrieval
- token validation cost under concurrency
This is especially useful for performance testing APIs behind OAuth2, API gateways, or identity platforms.
Mixed JSON-RPC workloads with search and write operations
Production traffic is rarely uniform. Some requests are reads, some are writes, and some are computationally expensive. A mixed workload helps you identify which methods become bottlenecks when traffic patterns overlap.
In this example, users perform:
- `catalog.searchProducts`
- `cart.addItem`
- `order.previewCheckout`
These are realistic e-commerce style JSON-RPC methods that put different pressure on the backend.
```python
from locust import HttpUser, task, between
import itertools
import random

request_counter = itertools.count(1)

PRODUCT_IDS = [101, 102, 103, 104, 105, 106]
SEARCH_TERMS = ["laptop", "monitor", "keyboard", "usb-c hub", "desk chair"]


class EcommerceJsonRpcUser(HttpUser):
    wait_time = between(1, 4)

    def on_start(self):
        self.cart_id = f"cart-{random.randint(10000, 99999)}"

    def rpc_call(self, method, params=None, name=None):
        payload = {
            "jsonrpc": "2.0",
            "method": method,
            "params": params or {},
            "id": next(request_counter)
        }
        with self.client.post(
            "/rpc",
            json=payload,
            headers={"Content-Type": "application/json", "X-API-Key": "lf_test_api_key_12345"},
            name=name or method,
            catch_response=True
        ) as response:
            if response.status_code != 200:
                response.failure(f"HTTP {response.status_code}")
                return
            try:
                data = response.json()
            except Exception as e:
                response.failure(f"Invalid JSON response: {e}")
                return
            if "error" in data:
                response.failure(f"JSON-RPC error: {data['error']}")
                return
            response.success()

    @task(5)
    def search_products(self):
        self.rpc_call(
            method="catalog.searchProducts",
            params={
                "query": random.choice(SEARCH_TERMS),
                "filters": {
                    "inStock": True,
                    "priceMin": 25,
                    "priceMax": 1500
                },
                "sort": "relevance",
                "page": 1,
                "pageSize": 20
            },
            name="catalog.searchProducts"
        )

    @task(3)
    def add_item_to_cart(self):
        self.rpc_call(
            method="cart.addItem",
            params={
                "cartId": self.cart_id,
                "productId": random.choice(PRODUCT_IDS),
                "quantity": random.randint(1, 3)
            },
            name="cart.addItem"
        )

    @task(1)
    def preview_checkout(self):
        self.rpc_call(
            method="order.previewCheckout",
            params={
                "cartId": self.cart_id,
                "shippingAddress": {
                    "country": "US",
                    "state": "CA",
                    "postalCode": "94107"
                },
                "couponCode": "SPRINGSALE10"
            },
            name="order.previewCheckout"
        )
```

Why this scenario is valuable
This test gives you a better view of real-world scalability because it combines:
- search-heavy read traffic
- cart mutation traffic
- checkout calculation traffic
These method combinations often reveal database locking, cache misses, queueing delays, and application thread pool exhaustion.
Testing error handling and malformed JSON-RPC requests
Stress testing is not just about valid traffic. You also want to know whether your JSON-RPC API handles invalid requests gracefully. This is especially important for public-facing APIs, SDK integrations, and systems where client bugs are common.
In this example, we intentionally mix valid and invalid requests to verify that the API returns proper JSON-RPC errors without destabilizing the service.
```python
from locust import HttpUser, task, between
import itertools
import random

request_counter = itertools.count(1)


class JsonRpcErrorHandlingUser(HttpUser):
    wait_time = between(1, 2)

    def send_payload(self, payload, name):
        with self.client.post(
            "/rpc",
            json=payload,
            headers={"Content-Type": "application/json"},
            name=name,
            catch_response=True
        ) as response:
            try:
                response.json()
            except Exception as e:
                response.failure(f"Invalid JSON response: {e}")
                return
            if response.status_code not in [200, 400]:
                response.failure(f"Unexpected HTTP status: {response.status_code}")
                return
            response.success()

    @task(4)
    def valid_transaction_lookup(self):
        payload = {
            "jsonrpc": "2.0",
            "method": "transaction.getStatus",
            "params": {
                "transactionId": f"TXN-{random.randint(100000, 999999)}"
            },
            "id": next(request_counter)
        }
        self.send_payload(payload, "transaction.getStatus")

    @task(1)
    def invalid_method_call(self):
        payload = {
            "jsonrpc": "2.0",
            "method": "transaction.unknownMethod",
            "params": {},
            "id": next(request_counter)
        }
        self.send_payload(payload, "invalid.method")

    @task(1)
    def missing_params(self):
        payload = {
            "jsonrpc": "2.0",
            "method": "transaction.createRefund",
            "id": next(request_counter)
        }
        self.send_payload(payload, "missing.params")
```

What this scenario reveals
This type of load test helps you verify:
- invalid requests don’t consume excessive resources
- the API returns structured errors consistently
- malformed traffic doesn’t increase latency for valid requests
- application logging and exception handling remain stable under noisy traffic
This is particularly important when you want to measure resilience as part of stress testing.
Analyzing Your Results
After running your JSON-RPC load test in LoadForge, focus on metrics that reflect both transport success and application correctness.
Key metrics to watch
Response time percentiles
Average latency is useful, but percentiles tell the real story. Watch:
- p50 for normal user experience
- p95 for degraded experience under load
- p99 for tail latency and outliers
A JSON-RPC method with a reasonable average but very high p95 or p99 often points to backend contention or inconsistent caching.
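If you export raw latency samples for a method, percentiles are easy to compute offline with Python's standard library. A sketch (the sample values are made up; `method='inclusive'` keeps the cut points within the observed range):

```python
import statistics

# Hypothetical per-request latencies in milliseconds for one RPC method,
# including one slow outlier that an average would smooth over
latencies_ms = [42, 45, 44, 48, 51, 47, 43, 46, 250, 49]

# quantiles(n=100) returns the 1st..99th percentile cut points
cuts = statistics.quantiles(latencies_ms, n=100, method="inclusive")
p50, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"p50={p50:.1f}ms p95={p95:.1f}ms p99={p99:.1f}ms")
# A healthy p50 sitting next to a large p95/p99 gap points at tail latency
```

Here the median stays in the mid-40s while p95 and p99 balloon, which is exactly the pattern that an average-only view hides.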
Requests per second
This tells you how much request volume your API can sustain. Compare throughput across methods:
- lightweight methods should scale higher
- database-heavy methods will usually cap out sooner
- write-heavy methods may show lower throughput under concurrency
Error rate
Track both:
- HTTP-level errors
- JSON-RPC application-level errors
Remember that a 200 OK response can still contain a JSON-RPC error object. Your Locust scripts should catch that, and LoadForge will surface those failures in reporting.
Method-specific performance
Name your requests clearly, such as:
- `system.ping`
- `user.getProfile`
- `catalog.searchProducts`
- `order.previewCheckout`
This makes it easy to compare which JSON-RPC methods are slowing down first.
How to interpret patterns
High latency with low error rate
This often means your API is still functioning but nearing saturation. Look for:
- database connection pool exhaustion
- slow downstream services
- CPU pressure from serialization or business logic
Rising error rate after a concurrency threshold
This usually indicates a hard capacity limit. Common causes include:
- worker thread exhaustion
- rate limiting
- gateway timeouts
- overloaded database replicas
Slow writes but fast reads
This may point to:
- transaction contention
- lock waits
- synchronous event processing
- disk I/O bottlenecks
Using LoadForge effectively
LoadForge’s real-time reporting helps you see these patterns while the test is still running, so you can stop early or adjust scenarios as needed. For larger performance testing efforts, LoadForge’s distributed testing lets you generate traffic from multiple cloud regions, which is useful for APIs serving global clients. You can also integrate tests into CI/CD pipelines to catch regressions before release.
Performance Optimization Tips
Once your JSON-RPC load testing identifies bottlenecks, these optimizations are often worth exploring.
Optimize hot RPC methods
Profile the most frequently called methods first. A small improvement to a hot method like user.getProfile or catalog.searchProducts can have a major impact on total throughput.
Cache predictable reads
Methods that return frequently accessed data should use caching where possible. Examples include:
- account summaries
- product details
- configuration values
- blockchain metadata
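Server-side, even a small in-process TTL cache in front of a read-heavy method can absorb a large share of traffic. A minimal sketch (the loader function and TTL value are illustrative, not part of any specific framework):

```python
import time


class TTLCache:
    """Tiny time-based cache for predictable JSON-RPC reads."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_load(self, key, loader):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]  # fresh hit, no backend call
        value = loader()  # cache miss or expired: hit the backend
        self._store[key] = (now + self.ttl, value)
        return value


cache = TTLCache(ttl_seconds=30)


def load_product(product_id):
    # Stand-in for a database or downstream RPC lookup
    return {"productId": product_id, "name": f"Product {product_id}"}


# The second call within the TTL is served from memory
first = cache.get_or_load("product:101", lambda: load_product(101))
second = cache.get_or_load("product:101", lambda: load_product(101))
```

In production you would typically reach for Redis or Memcached instead, but the hit/miss/expiry logic is the same.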
Reduce payload size
Large JSON responses increase serialization cost and network overhead. Consider:
- pagination
- field filtering
- compact response objects
- avoiding unnecessary nested structures
Improve database efficiency
For database-heavy JSON-RPC methods:
- add indexes for common query paths
- eliminate N+1 queries
- tune connection pools
- use read replicas where appropriate
Separate read and write workloads
If your API mixes search-heavy reads and expensive writes, isolate those paths operationally when possible. This can improve resilience during traffic spikes.
Validate errors efficiently
Malformed requests should be rejected quickly and cheaply. Avoid expensive processing for invalid JSON-RPC calls.
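A cheap envelope check before dispatch keeps malformed requests from ever reaching business logic. A sketch of such a guard (the error codes follow the JSON-RPC 2.0 spec; the check itself is illustrative):

```python
def validate_envelope(payload):
    """Return a JSON-RPC error dict for a bad envelope, or None if it looks valid."""
    if not isinstance(payload, dict):
        return {"code": -32600, "message": "Invalid Request"}
    if payload.get("jsonrpc") != "2.0":
        return {"code": -32600, "message": "Invalid Request"}
    if not isinstance(payload.get("method"), str):
        return {"code": -32600, "message": "Invalid Request"}
    params = payload.get("params")
    if params is not None and not isinstance(params, (dict, list)):
        return {"code": -32602, "message": "Invalid params"}
    return None


# Rejected before any handler, auth lookup, or database work happens
bad = validate_envelope({"method": "transaction.createRefund"})  # missing jsonrpc
ok = validate_envelope({"jsonrpc": "2.0", "method": "system.ping", "params": {}, "id": 1})
```

Pairing a guard like this with the malformed-request load scenario above confirms that invalid traffic stays cheap.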
Common Pitfalls to Avoid
Treating HTTP 200 as success
This is one of the most common JSON-RPC testing mistakes. Always inspect the response body for an error object.
Testing only one method
A single-method benchmark rarely reflects production behavior. Use mixed workloads with realistic task weighting.
Ignoring authentication overhead
If production clients use tokens, API keys, or signed requests, your load test should too. Otherwise, results may be misleading.
Using unrealistic test data
Hardcoded IDs that always hit cache or tiny datasets can make performance look better than it really is. Use varied and realistic parameters.
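One way to avoid the always-cached-ID trap is to sample parameters with a realistic hot/cold skew, so some IDs stay cache-warm while others force real backend work. A sketch (the ID pools and 70/30 split are made up):

```python
import random

random.seed(7)  # deterministic here only for illustration

# A few "hot" ids that dominate traffic plus a long tail of cold ids
HOT_IDS = [1001, 1002, 1003]
COLD_IDS = list(range(2000, 2500))


def pick_user_id():
    # ~70% of requests hit hot ids (cache-friendly),
    # ~30% hit the long tail (cache misses, real queries)
    if random.random() < 0.7:
        return random.choice(HOT_IDS)
    return random.choice(COLD_IDS)


sample = [pick_user_id() for _ in range(1000)]
hot_share = sum(uid in HOT_IDS for uid in sample) / len(sample)
```

Inside a Locust task, `pick_user_id()` would replace a hardcoded `"userId": 1001`.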
Overlooking backend observability
Load testing without checking database, cache, and application metrics makes root cause analysis difficult. Pair LoadForge results with server-side monitoring.
Running destructive tests in production
Some JSON-RPC methods create records, modify state, or trigger expensive workflows. Use staging environments or carefully controlled test accounts whenever possible.
Conclusion
Load testing JSON-RPC APIs is essential if you want confidence in request volume handling, latency, error behavior, and backend scalability. Because JSON-RPC methods can vary widely in complexity, the best performance testing approach is to start with a simple baseline, then expand into authenticated, mixed-workload, and resilience-focused scenarios.
With LoadForge, you can build realistic Locust-based JSON-RPC tests, run them on cloud-based infrastructure, scale them with distributed testing, and analyze performance in real time. Whether you’re validating an internal RPC service, a customer-facing platform, or a high-throughput backend API, LoadForge makes it easier to find bottlenecks before users do.
Try LoadForge to start load testing your JSON-RPC APIs and turn performance insights into a more scalable, reliable service.
LoadForge Team
LoadForge is a load and performance testing platform built on Locust. Our team has been shipping load tests against production systems since 2018, and we write these guides from real customer engagements.