Throughput Optimization for XRP Nodes (US)

Practical guidance to increase transaction throughput, reduce latency, and maintain validator health across XRP nodes in the US.

  • Focus: network IO, ledger processing, DB tuning, and container orchestration patterns.
  • Audience: node operators, validators, infra engineers.
Throughput overview

Why throughput matters

Higher throughput reduces backlog on the peer-to-peer layer and improves consensus responsiveness. We target practical optimizations that preserve ledger integrity while increasing TPS in real deployments.

  • Eliminate IO bottlenecks
  • Optimize process scheduling and cgroups
  • Tune RocksDB and network stack

Benchmarks & Baselines

Representative results from lab runs emulating US regional traffic and validator load.

Configuration        | IO Profile       | Median TPS | 99th pct latency (ms) | Notes
---------------------|------------------|------------|-----------------------|------------------------------
Std VM (SSD)         | Mixed read/write | 1,200      | 110                   | Default rippled
Optimized DB         | Write-heavy      | 2,600      | 45                    | RocksDB tuning + io scheduler
Containerized, tuned | Bursty           | 2,200      | 60                    | CPU pinning + NIC offload

Table: use this baseline to pick target upgrades for production nodes.
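Before picking a target from the table, it helps to baseline your own DB drive. A minimal fio job-file sketch for a write-heavy profile (the directory path and sizes are illustrative assumptions; point it at your DB mount):

```ini
# randwrite.fio -- rough 4k random-write baseline for the DB volume
[randwrite]
directory=/var/lib/rippled
rw=randwrite
bs=4k
size=1G
numjobs=4
iodepth=32
ioengine=libaio
direct=1
group_reporting
```

Run it with `fio randwrite.fio` before and after each tuning change so throughput gains can be attributed to a specific step.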

Tuning checklist

  • Provision NVMe or high-IOPS SSDs; keep the DB and logs on separate volumes
  • Use the none or mq-deadline I/O scheduler on the DB drive
  • Raise file descriptor limits and lower vm.swappiness
  • Tune RocksDB options: write_buffer_size, max_background_compactions
  • Network: enable TCP BBR or tune congestion control; use NIC offloads where possible
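For the RocksDB bullet, rippled exposes its storage tuning through the [node_db] stanza of rippled.cfg rather than a raw RocksDB options file, and only a subset of RocksDB knobs is surfaced there. A sketch with illustrative values (the path and numbers are assumptions, not recommendations):

```ini
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
# keep more SST files open to reduce reopen churn under write-heavy load
open_files=2000
# bloom filter bits per key; larger filters trade RAM for fewer disk reads
filter_bits=12
# block cache size in MB
cache_mb=512
file_size_mb=8
file_size_mult=2
```

Benchmark one change at a time; cache_mb in particular competes with the OS page cache for RAM.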

Quick commands

# increase fd limit
ulimit -n 65536

# example: set swappiness
sysctl -w vm.swappiness=10

# enable BBR (pair it with the fq qdisc)
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
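The commands above last only until reboot, and ulimit affects only the current shell. One way to persist the sysctl settings is a drop-in file (the filename is an assumption):

```ini
# /etc/sysctl.d/99-xrp-tuning.conf -- applied at boot by systemd-sysctl
vm.swappiness = 10
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

For the file-descriptor limit, a matching entry in /etc/security/limits.d/ (assuming rippled runs as the user rippled) would set "rippled soft nofile 65536" and "rippled hard nofile 65536"; services started by systemd instead take LimitNOFILE= in the unit file.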

Reference architectures

  • Single-region high-IO: best for validators with colocated peers.
  • Multi-AZ validator: redundancy with a replicated DB and warm standby nodes.
  • Container cluster: Kubernetes for orchestration; careful resource isolation required.
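For the container-cluster option, "careful resource isolation" usually means Guaranteed QoS: setting requests equal to limits so the kubelet's static CPU manager can grant the pod exclusive cores. A container-spec fragment sketch (the values are illustrative assumptions):

```yaml
# requests == limits with integer CPU -> Guaranteed QoS; with the kubelet
# CPU manager in static mode, the container gets exclusive cores
resources:
  requests:
    cpu: "4"
    memory: 16Gi
  limits:
    cpu: "4"
    memory: 16Gi
```

Note the static CPU manager must be enabled on the node (kubelet --cpu-manager-policy=static) for exclusive cores to be assigned.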

Tools and monitoring

Recommended tooling to profile and observe throughput:

  • Prometheus + Grafana for metrics
  • iostat, blktrace, perf for IO profiling
  • custom rippled metrics export (validator health, ledger apply rate)
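For the rippled metrics bullet, the admin server_info call is the usual source of validator-health and ledger fields. A sketch, assuming the default local admin JSON-RPC port 5005 and jq installed:

```shell
# query the local rippled admin API and extract a few health fields
curl -s -X POST http://127.0.0.1:5005/ \
  -d '{"method":"server_info","params":[{}]}' \
  | jq '.result.info | {server_state, peers, load_factor, complete_ledgers}'
```

Polling this on an interval and exporting the fields to Prometheus gives you apply-rate and peer-health trends alongside the OS-level metrics.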


Case study: Regional validator upgrade

One validator in the Mid-Atlantic region reduced ledger apply time by 62% after storage and network tuning. Actions: migrate to NVMe, tune RocksDB, enable TCP BBR, and pin rippled to isolated CPU cores.

  • Result: sustained throughput increase and lower tail latency
  • Recommended rollout: stage changes, benchmark, and monitor
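The CPU-pinning step from the case study can be expressed as a systemd drop-in (the unit name, path, and core IDs are assumptions; match the core IDs to any isolcpus= kernel parameter you have set):

```ini
# /etc/systemd/system/rippled.service.d/cpu-pinning.conf
[Service]
CPUAffinity=2 3 4 5
```

After creating the file, run systemctl daemon-reload and restart rippled to apply the affinity.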
Lead engineer: A. Rivera

Resources & downloads

Tuning guide

Download the concise checklist and sample configs for rippled and RocksDB.


Benchmark scripts

Scripts used to reproduce the results in our benchmarks.


Support plan

Commercial support and on-call tuning engagements for US validators.
