# Benchmarks
Real benchmarks you can reproduce. We believe in transparency — here's exactly how hitlimit performs.
## Which Store Should I Use? (Node.js)
| Use Case | Recommended Store | Performance |
|---|---|---|
| Single server, many unique IPs | Memory (default) | 4.08-5.96M ops/sec |
| Need persistence, single server | SQLite | 404-497K ops/sec |
| Multiple servers (distributed, document DB) | MongoDB | 2.2-2.4K ops/sec |
| Multiple servers (distributed) | Redis / Postgres / MySQL | Network-bound (~200-3,500 ops/sec) |
## Which Store Should I Use? (Bun)
| Use Case | Recommended Store | Performance |
|---|---|---|
| Single server, many unique IPs | Memory (default) | 5.57-7.73M ops/sec |
| Need persistence, single server | bun:sqlite | 372-469K ops/sec |
| Multiple servers (distributed, document DB) | MongoDB | 2.1-2.3K ops/sec |
| Multiple servers (distributed) | Redis / Postgres | Network-bound (~200-3,500 ops/sec) |
## Methodology
- **Machine:** Apple M1 Pro (ARM64, 16GB)
- **Node.js:** v24.14.0
- **Bun:** v1.3.10
- **MongoDB:** Docker, localhost
- **Redis:** 7.x (Docker, localhost)
- **Postgres:** 16.x (Docker, localhost)

**Test Scenarios:**
- single-ip: Same key every request (worst case)
- multi-ip-1k: 1,000 unique keys (typical API)
- multi-ip-10k: 10,000 unique keys (high-traffic API)
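To make the scenarios concrete, here is a hypothetical sketch of how one scenario run can be timed; `runScenario` is an invented name for illustration, not the actual benchmark suite:

```typescript
// Cycle through N unique keys and time `iterations` calls against a store's
// hit function. single-ip uses one key; multi-ip-1k uses 1,000 keys, etc.
function runScenario(
  hit: (key: string) => void,
  keys: string[],
  iterations: number
): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    hit(keys[i % keys.length]);
  }
  const elapsedMs = performance.now() - start;
  return iterations / (elapsedMs / 1000); // ops/sec
}
```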
Each benchmark: 5 runs × 50,000 iterations.

## Memory Store vs Competitors
Honest comparison with other Node.js rate limiters using the same benchmark suite.
### Single IP (Edge Case)
One user hammering your API repeatedly.
| Library | ops/sec | Latency | vs Fastest |
|---|---|---|---|
| hitlimit | 5.96M | 168ns | fastest |
| rate-limiter-flexible | 2.06M | 486ns | 35% |
| express-rate-limit | 1.22M | 817ns | 21% |
### 10,000 Unique IPs (High Traffic)
High-traffic API with many concurrent users. hitlimit excels here.
| Library | ops/sec | Latency | vs Fastest |
|---|---|---|---|
| hitlimit | 4.08M | 245ns | fastest |
| rate-limiter-flexible | 1.26M | 793ns | 31% |
| express-rate-limit | 824K | 1.2μs | 20% |
## SQLite Store
Only hitlimit offers a built-in SQLite store for Node.js (via better-sqlite3).
## MongoDB Store
MongoDB operations use atomic $inc + $setOnInsert upserts with TTL indexes for automatic cleanup.
Both hitlimit and rate-limiter-flexible use the same optimized approach, resulting in comparable performance.
| Library | Scenario | ops/sec | Latency | vs Fastest |
|---|---|---|---|---|
| hitlimit | single-ip | 2.4K | 409.9μs | 100% |
| rate-limiter-flexible | single-ip | 2.4K | 411.1μs | 100% |
| hitlimit | multi-1k | 2.3K | 444.3μs | 90% |
| rate-limiter-flexible | multi-1k | 2.5K | 400.8μs | 100% |
| hitlimit | multi-10k | 2.2K | 462.8μs | 84% |
| rate-limiter-flexible | multi-10k | 2.6K | 386.8μs | 100% |
Both libraries use the same `$inc` + `$setOnInsert` operators, so performance is nearly identical. The bottleneck is MongoDB network/query latency (~409.9μs), not library overhead.
For remote MongoDB, expect 200-1,000 ops/sec depending on network latency.
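A sketch of the kind of update document this approach uses (illustrative only: the field names `count` and `expiresAt` and the key format are assumptions, not hitlimit's actual schema):

```typescript
// Build the update for one hit: atomically increment the counter, and set
// the window expiry only when the document is first created.
function buildHitUpdate(windowMs: number, now: number) {
  return {
    filter: { _id: "rate:some-key" }, // hypothetical key format
    update: {
      $inc: { count: 1 },             // atomic increment on every hit
      $setOnInsert: { expiresAt: new Date(now + windowMs) }, // first hit only
    },
    options: { upsert: true },
  };
}

// A TTL index on expiresAt lets MongoDB purge stale windows automatically:
//   db.collection.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 0 })
```

Passed to `findOneAndUpdate` with `returnDocument: "after"`, a single round trip both creates-or-increments the counter and returns the new count.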
## Redis & Postgres Stores
Redis and Postgres benchmarks are coming soon. Both stores are fully supported and use atomic operations (Lua scripts for Redis, INSERT...ON CONFLICT upserts for Postgres). Performance is network-bound, typically 200-3,500 ops/sec on localhost Docker.
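The classic fixed-window pattern behind such a Lua script looks like this (a generic sketch, not hitlimit's actual script):

```typescript
// Fixed-window counter as a Redis Lua script: INCR the key and set its
// expiry only on the first hit, all in one atomic server-side call.
const FIXED_WINDOW_LUA = `
local count = redis.call('INCR', KEYS[1])
if count == 1 then
  redis.call('PEXPIRE', KEYS[1], ARGV[1])
end
return count
`;

// Loaded once with SCRIPT LOAD, then invoked per request via EVALSHA,
// so each hit costs a single network round trip.
```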
## Store Support Comparison
| Library | Memory | SQLite | Redis | Postgres | MongoDB |
|---|---|---|---|---|---|
| hitlimit | Built-in | Built-in | Built-in | Built-in | Built-in |
| express-rate-limit | Built-in | No | External | No | No |
| rate-limiter-flexible | Built-in | No | Built-in | Built-in | Built-in |
## HTTP Overhead
How much throughput does hitlimit cost on real HTTP servers?
| Framework | Without Limiter | With hitlimit | Overhead |
|---|---|---|---|
| Express | 45,000 req/s | 42,000 req/s | ~7% |
| Fastify | 65,000 req/s | 61,000 req/s | ~6% |
HTTP benchmarks were measured with autocannon (`-c 100 -d 10`). The limit was set high (1M) so the numbers measure middleware overhead, not request blocking.
## Memory Store (Bun)
hitlimit-bun's memory store uses a zero-allocation sync fast path for maximum throughput.
| Scenario | ops/sec | Latency |
|---|---|---|
| Single IP | 7.73M | 129ns |
| 1,000 Unique IPs | 5.94M | 168ns |
| 10,000 Unique IPs | 5.57M | 179ns |
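A minimal fixed-window sketch shows why a memory path can be this fast: one `Map` lookup plus an integer increment, with allocation only when a new window opens (illustrative code, not hitlimit-bun's implementation):

```typescript
// Minimal fixed-window in-memory limiter (a sketch of the technique).
interface Window {
  count: number;
  resetAt: number;
}

class MemoryLimiter {
  private windows = new Map<string, Window>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the hit is allowed, false if the key is over the limit.
  hit(key: string, now = Date.now()): boolean {
    const w = this.windows.get(key);
    if (w === undefined || now >= w.resetAt) {
      // New or expired window: the only allocation on this path.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return this.limit >= 1;
    }
    w.count++; // hot path: no allocation, just an increment and a compare
    return w.count <= this.limit;
  }
}
```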
## SQLite Store (bun:sqlite)
Native bun:sqlite — zero dependencies, built into the Bun runtime.
## MongoDB Store (Bun)
Atomic $inc + $setOnInsert upserts with TTL indexes — same approach as Node.js.
Performance is network/query-bound.
| Scenario | ops/sec | Latency |
|---|---|---|
| Single IP | 2.3K | 434.4μs |
| 1,000 Unique IPs | 2.2K | 463.2μs |
| 10,000 Unique IPs | 2.1K | 469μs |
## Redis Store (Bun)
Atomic Lua scripts (EVALSHA) — single round-trip per request. Performance is network-bound.
Redis throughput is limited by network latency (~300-500μs local Docker). For remote Redis, expect 200-1,000 ops/sec. Bun benchmarks for Redis coming soon.
## Postgres Store (Bun)
Atomic INSERT...ON CONFLICT upserts for distributed Postgres deployments.
Postgres throughput is limited by query latency (~280-300μs local Docker). Bun benchmarks for Postgres coming soon.
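The general shape of such an upsert, as a simplified sketch (the `rate_limits` table and its columns are assumptions, and expired-window resets are omitted):

```typescript
// Fixed-window upsert: insert the first hit for a key, or atomically bump
// the existing counter. RETURNING yields the post-increment count, so one
// query answers both "record the hit" and "how many hits so far".
const HIT_SQL = `
INSERT INTO rate_limits (key, count, expires_at)
VALUES ($1, 1, now() + $2::interval)
ON CONFLICT (key)
DO UPDATE SET count = rate_limits.count + 1
RETURNING count;
`;
```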
## All Five Stores — Built In
| Store | Status | Performance | Use Case |
|---|---|---|---|
| Memory | Built-in | 5.57M ops/sec | Single server, ephemeral |
| bun:sqlite | Built-in (native) | 372K ops/sec | Single server, persistent |
| Redis | Built-in | Network-bound | Multi-server, distributed |
| Postgres | Built-in | Network-bound | Multi-server, distributed (SQL) |
| MongoDB | Built-in | 2.1K ops/sec | Multi-server, distributed (document DB) |
## Run Benchmarks Yourself

Node.js:

```bash
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
pnpm install && pnpm build

# Start Redis (optional)
docker compose up -d redis

# Run Node.js benchmarks
cd benchmarks
pnpm bench:node
```

Bun:

```bash
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
bun install

# Start Redis (optional)
docker compose up -d redis

# Run Bun benchmarks
cd benchmarks
pnpm bench:bun
```

Results are saved to `benchmarks/results/`.