Benchmarks
Real benchmarks you can reproduce. We believe in transparency — here's exactly how hitlimit performs.
Which Store Should I Use? (Node.js)
| Use Case | Recommended Store | Performance |
|---|---|---|
| Single server, many unique IPs | Memory (default) | 3.16-4.83M ops/sec |
| Need persistence, single server | SQLite | 352-455K ops/sec |
| Multiple servers (distributed) | Redis | 6.7-6.9K ops/sec |
| Multiple servers (distributed, SQL) | Postgres | 3.0-3.5K ops/sec |
Which Store Should I Use? (Bun)
| Use Case | Recommended Store | Performance |
|---|---|---|
| Single server, many unique IPs | Memory (default) | 8.32-12.38M ops/sec |
| Need persistence, single server | bun:sqlite | 325-458K ops/sec |
| Multiple servers (distributed) | Redis | 6.7K ops/sec |
| Multiple servers (distributed, SQL) | Postgres | 3.7K ops/sec |
Methodology
- Machine: Apple M1 (ARM64, 8GB)
- Node.js: v24.4.1
- Bun: v1.3.7
- Redis: 7.x (Docker, localhost)
- Postgres: 16.x (Docker, localhost)
Test Scenarios:
- single-ip: Same key every request (worst case)
- multi-ip-1k: 1,000 unique keys (typical API)
- multi-ip-10k: 10,000 unique keys (high-traffic API)
Each benchmark: 5 runs × 50,000 iterations

Memory Store vs Competitors
Honest comparison with other Node.js rate limiters using the same benchmark suite.
Single IP (Edge Case)
One user hammering your API repeatedly.
| Library | ops/sec | Latency | vs Fastest |
|---|---|---|---|
| hitlimit | 4.83M | 207ns | fastest |
| rate-limiter-flexible | 1.66M | 601ns | 34% |
| express-rate-limit | 967K | 1,034ns | 20% |
10,000 Unique IPs (High Traffic)
High-traffic API with many concurrent users. hitlimit excels here.
| Library | ops/sec | Latency | vs Fastest |
|---|---|---|---|
| hitlimit | 3.16M | 316ns | fastest |
| rate-limiter-flexible | 1.14M | 878ns | 36% |
| express-rate-limit | 749K | 1,335ns | 24% |
SQLite Store
Only hitlimit offers a built-in SQLite store for Node.js (via better-sqlite3).
Redis Store
Redis operations use atomic Lua scripts via defineCommand() for single round-trip performance.
hitlimit wins all three scenarios with lower latency across the board.
| Library | Scenario | ops/sec | Latency | vs Fastest |
|---|---|---|---|---|
| hitlimit | single-ip | 6.7K | 150μs | fastest |
| rate-limiter-flexible | single-ip | 5.7K | 176μs | 85% |
| hitlimit | multi-ip-1k | 6.9K | 144μs | fastest |
| rate-limiter-flexible | multi-ip-1k | 5.9K | 171μs | 85% |
| hitlimit | multi-ip-10k | 6.7K | 149μs | fastest |
| rate-limiter-flexible | multi-ip-10k | 6.4K | 156μs | 95% |
hitlimit registers its Lua script once via defineCommand(), which caches it by SHA so every subsequent check is a single round trip. hitlimit wins all three scenarios: 18% faster on single-ip and multi-ip-1k, 5% faster on multi-ip-10k. It also uses significantly less memory (4.9MB vs 29-55MB for rate-limiter-flexible).
Redis throughput is limited by network latency (~150μs local Docker). For remote Redis, expect 200-1000 ops/sec.
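The single-round-trip pattern behind these numbers is the classic atomic fixed-window Lua script: INCR the key and set its expiry only on the first hit, so the counter and its TTL update atomically server-side. A minimal sketch of the idea (hitlimit's actual script may differ), with a pure-TypeScript mirror of its semantics:

```typescript
// Classic atomic fixed-window script: one EVALSHA round trip per check.
// (Illustrative; hitlimit's actual Lua script may differ.)
const FIXED_WINDOW_LUA = `
local count = redis.call('INCR', KEYS[1])
if count == 1 then
  redis.call('PEXPIRE', KEYS[1], ARGV[1])
end
return count
`;

// Pure-TypeScript mirror of the script's semantics, for reasoning about
// it without a Redis server.
const store = new Map<string, { count: number; expiresAt: number }>();

function evalFixedWindow(key: string, windowMs: number, now: number): number {
  const entry = store.get(key);
  if (!entry || now >= entry.expiresAt) {
    // Key absent or expired: INCR creates it at 1, PEXPIRE sets the TTL.
    store.set(key, { count: 1, expiresAt: now + windowMs });
    return 1;
  }
  return ++entry.count; // Within the window: just INCR.
}
```

Because the whole script runs atomically inside Redis, the count and expiry can never get out of sync, even under concurrent clients.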
Postgres Store
Postgres operations use atomic INSERT...ON CONFLICT upserts with named prepared statements. hitlimit wins all three scenarios with lower latency and better memory usage.
| Library | Scenario | ops/sec | Latency | vs Fastest |
|---|---|---|---|---|
| hitlimit | single-ip | 3.5K | 286μs | fastest |
| rate-limiter-flexible | single-ip | 3.0K | 334μs | 86% |
| hitlimit | multi-ip-1k | 3.2K | 308μs | fastest |
| rate-limiter-flexible | multi-ip-1k | 3.0K | 330μs | 94% |
| hitlimit | multi-ip-10k | 3.0K | 336μs | fastest |
| rate-limiter-flexible | multi-ip-10k | 2.7K | 365μs | 92% |
Postgres throughput is limited by query latency (~280-300μs local Docker). hitlimit uses named prepared statements for server-side query plan caching.
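An upsert of the kind described above can be sketched as follows; the table and column names here are illustrative, not hitlimit's actual schema:

```typescript
// Hypothetical fixed-window upsert: a single statement either creates the
// window row or bumps/resets its counter, atomically, server-side.
// (Table and column names are illustrative, not hitlimit's real schema.)
const HIT_SQL = `
INSERT INTO rate_limits (key, count, reset_at)
VALUES ($1, 1, now() + $2::interval)
ON CONFLICT (key) DO UPDATE SET
  count = CASE WHEN rate_limits.reset_at <= now()
               THEN 1 ELSE rate_limits.count + 1 END,
  reset_at = CASE WHEN rate_limits.reset_at <= now()
                  THEN now() + $2::interval ELSE rate_limits.reset_at END
RETURNING count, reset_at;
`;

// With node-postgres, passing a statement name makes this a named prepared
// statement, so the server parses and plans it once per connection:
//   client.query({ name: "hit", text: HIT_SQL, values: [key, "60 seconds"] })
```

Doing the increment-or-reset inside the statement means one query per check, which is why throughput tracks raw query latency so closely.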
Store Support Comparison
| Library | Memory | SQLite | Redis | Postgres |
|---|---|---|---|---|
| hitlimit | Built-in | Built-in | Built-in | Built-in |
| express-rate-limit | Built-in | No | External | No |
| rate-limiter-flexible | Built-in | No | Built-in | Built-in |
HTTP Overhead
How much throughput does hitlimit cost on real HTTP servers?
| Framework | Without Limiter | With hitlimit | Overhead |
|---|---|---|---|
| Express | 45,000 req/s | 42,000 req/s | ~7% |
| Fastify | 65,000 req/s | 61,000 req/s | ~6% |
HTTP benchmarks were measured with autocannon (-c 100 -d 10). The rate limit was set high (1M) so the results measure middleware overhead rather than request blocking.
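The overhead being measured is essentially one map lookup and increment per request before the handler runs. A minimal middleware sketch of that work (illustrative only, not hitlimit's actual API):

```typescript
// Minimal Express-style fixed-window middleware, showing the per-request
// work a rate limiter adds. (Illustrative; not hitlimit's actual API.)
type Next = () => void;
interface Req { ip: string }
interface Res { statusCode: number; end(): void }

function rateLimit(limit: number, windowMs: number) {
  const windows = new Map<string, { count: number; resetAt: number }>();
  return (req: Req, res: Res, next: Next): void => {
    const now = Date.now();
    let w = windows.get(req.ip);
    if (!w || now >= w.resetAt) {
      w = { count: 0, resetAt: now + windowMs };
      windows.set(req.ip, w);
    }
    if (++w.count > limit) {
      res.statusCode = 429; // Too Many Requests
      res.end();
      return; // the handler never runs for blocked requests
    }
    next();
  };
}
```

With a high limit, every request takes the `next()` branch, so the benchmark isolates the lookup-and-increment cost itself.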
Memory Store
hitlimit-bun's memory store uses a zero-allocation sync fast path for maximum throughput.
| Scenario | ops/sec | Latency |
|---|---|---|
| Single IP | 12.38M | 81ns |
| 1,000 Unique IPs | 5.17M | 193ns |
| 10,000 Unique IPs | 8.32M | 120ns |
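A sync fast path avoids both a Promise and a fresh result object per check. The sketch below (illustrative only, not hitlimit-bun's actual internals) shows the idea: checks on an already-seen key run synchronously and mutate a single reused result object, so steady-state traffic allocates nothing:

```typescript
// Sync fast path: no Promise, and one reused result object, so a check on
// an already-seen key inside its window allocates nothing.
// (Illustrative sketch; not hitlimit-bun's actual internals.)
interface CheckResult { allowed: boolean; remaining: number }

const windows = new Map<string, { count: number; resetAt: number }>();
const shared: CheckResult = { allowed: true, remaining: 0 }; // reused every call

function checkSync(key: string, limit: number, windowMs: number, now: number): CheckResult {
  let w = windows.get(key);
  if (!w || now >= w.resetAt) {
    // Slow path: a first hit (or an expired window) allocates a fresh entry.
    w = { count: 0, resetAt: now + windowMs };
    windows.set(key, w);
  }
  w.count++;
  shared.allowed = w.count <= limit;
  shared.remaining = Math.max(0, limit - w.count);
  return shared; // caller must read it before the next check
}
```

The trade-off of the shared result object is that callers must consume it immediately, which is exactly the access pattern a middleware has.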
SQLite Store (bun:sqlite)
Native bun:sqlite — zero dependencies, built into the Bun runtime.
Redis Store
Atomic Lua scripts (EVALSHA) — single round-trip per request. Same Redis protocol, same performance.
Redis throughput is limited by network latency (~150μs local Docker). For remote Redis, expect 200-1000 ops/sec.
Postgres Store
Atomic INSERT...ON CONFLICT upserts for distributed Postgres deployments.
All Four Stores — Built In
| Store | Status | Performance | Use Case |
|---|---|---|---|
| Memory | Built-in | 8.32M ops/sec | Single server, ephemeral |
| bun:sqlite | Built-in (native) | 325K ops/sec | Single server, persistent |
| Redis | Built-in | 6.7K ops/sec | Multi-server, distributed |
| Postgres | Built-in | 3.7K ops/sec | Multi-server, distributed (SQL) |
Run Benchmarks Yourself
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
pnpm install && pnpm build
# Start Redis (optional)
docker compose up -d redis
# Run Node.js benchmarks
cd benchmarks
pnpm bench:node

git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
bun install
# Start Redis (optional)
docker compose up -d redis
# Run Bun benchmarks
cd benchmarks
pnpm bench:bun

Results are saved to benchmarks/results/