Benchmarks

Real benchmarks you can reproduce. We believe in transparency — here's exactly how hitlimit performs.

Which Store Should I Use? (Node.js)

| Use Case | Recommended Store | Performance |
| --- | --- | --- |
| Single server, many unique IPs | Memory (default) | 4.08-5.96M ops/sec |
| Need persistence, single server | SQLite | 404-497K ops/sec |
| Multiple servers (distributed, document DB) | MongoDB | 2.2-2.4K ops/sec |
| Multiple servers (distributed) | Redis / Postgres / MySQL | Network-bound (~200-3,500 ops/sec) |

Which Store Should I Use? (Bun)

| Use Case | Recommended Store | Performance |
| --- | --- | --- |
| Single server, many unique IPs | Memory (default) | 5.57-7.73M ops/sec |
| Need persistence, single server | bun:sqlite | 372-469K ops/sec |
| Multiple servers (distributed, document DB) | MongoDB | 2.1-2.3K ops/sec |
| Multiple servers (distributed) | Redis / Postgres | Network-bound (~200-3,500 ops/sec) |

Methodology

Test Environment
Machine:    Apple M1 Pro (ARM64, 16GB)
Node.js:    v24.14.0
Bun:        v1.3.10
MongoDB:    Docker, localhost
Redis:      7.x (Docker, localhost)
Postgres:   16.x (Docker, localhost)

Test Scenarios:
- single-ip:    Same key every request (worst case)
- multi-ip-1k:  1,000 unique keys (typical API)
- multi-ip-10k: 10,000 unique keys (high-traffic API)

Each benchmark: 5 runs × 50,000 iterations
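Each scenario reduces to a timed hot loop; here is a minimal sketch of how ops/sec and average latency fall out of it (a hypothetical `measure` helper for illustration, not the actual benchmark suite):

```typescript
// Minimal sketch of the benchmark loop described above (hypothetical
// `measure` helper, not the actual suite). Runs `iterations` calls of `fn`
// and derives ops/sec plus mean latency in nanoseconds.
function measure(fn: () => void, iterations: number): { opsPerSec: number; avgNs: number } {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return {
    opsPerSec: iterations / (elapsedNs / 1e9), // iterations per second
    avgNs: elapsedNs / iterations,             // mean cost of one call
  };
}

// A full run repeats this 5 times per scenario and reports aggregates.
const { opsPerSec } = measure(() => Math.sqrt(2), 50_000);
```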

Memory Store vs Competitors

Honest comparison with other Node.js rate limiters using the same benchmark suite.

Single IP (Edge Case)

One user hammering your API repeatedly.

| Library | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- |
| hitlimit | 5.96M | 168ns | fastest |
| rate-limiter-flexible | 2.06M | 486ns | 35% |
| express-rate-limit | 1.22M | 817ns | 21% |

10,000 Unique IPs (High Traffic)

High-traffic API with many concurrent users. hitlimit excels here.

| Library | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- |
| hitlimit | 4.08M | 245ns | fastest |
| rate-limiter-flexible | 1.26M | 793ns | 31% |
| express-rate-limit | 824K | 1.2μs | 20% |

Key Insight: hitlimit's sync fast path eliminates async/await overhead for in-process stores. For high-traffic APIs, hitlimit is ~3.2x faster than rate-limiter-flexible (4.08M vs 1.26M at 10K IPs).
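The sync fast path can be illustrated with a toy fixed-window counter. This is an illustrative sketch, not hitlimit's actual internals: the point is that `check()` is a plain synchronous method, so no Promise is allocated per request.

```typescript
// Toy fixed-window limiter illustrating a synchronous fast path
// (illustrative sketch, not hitlimit's actual implementation).
// Because check() is sync, no Promise is allocated per call.
class ToyMemoryLimiter {
  private counts = new Map<string, { n: number; resetAt: number }>();
  constructor(private limit: number, private windowMs: number) {}

  check(key: string, now = Date.now()): { allowed: boolean; remaining: number } {
    let entry = this.counts.get(key);
    if (!entry || now >= entry.resetAt) {
      // First hit for this key, or the window expired: start a fresh window.
      entry = { n: 0, resetAt: now + this.windowMs };
      this.counts.set(key, entry);
    }
    entry.n++;
    return { allowed: entry.n <= this.limit, remaining: Math.max(0, this.limit - entry.n) };
  }
}
```

An async store would wrap the same logic in a Promise; that per-call allocation and scheduling is exactly the overhead the sync path avoids.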

SQLite Store

Only hitlimit offers a built-in SQLite store for Node.js (via better-sqlite3).

SQLite Store (Node.js)

- 404-497K ops/sec
- 2μs avg latency

MongoDB Store (Node.js)

MongoDB operations use atomic $inc + $setOnInsert upserts with TTL indexes for automatic cleanup. Both hitlimit and rate-limiter-flexible use the same optimized approach, resulting in comparable performance.

| Library | Scenario | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- | --- |
| hitlimit | single-ip | 2.4K | 409.9μs | 100% |
| rate-limiter-flexible | single-ip | 2.4K | 411.1μs | 100% |
| hitlimit | multi-1k | 2.3K | 444.3μs | 90% |
| rate-limiter-flexible | multi-1k | 2.5K | 400.8μs | 100% |
| hitlimit | multi-10k | 2.2K | 462.8μs | 84% |
| rate-limiter-flexible | multi-10k | 2.6K | 386.8μs | 100% |

Key Insight: Both hitlimit and rate-limiter-flexible use atomic $inc + $setOnInsert operators for MongoDB, resulting in nearly identical performance. The bottleneck is MongoDB network/query latency (~409.9μs), not library overhead. For remote MongoDB, expect 200-1,000 ops/sec depending on network latency.
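The upsert both libraries send can be sketched as follows (hypothetical document fields; the real schemas may differ). One round-trip atomically creates the counter or increments it, and a TTL index on the expiry field handles cleanup:

```typescript
// Sketch of the atomic $inc + $setOnInsert upsert described above
// (hypothetical field names; the real document schema may differ).
// The pieces map onto collection.updateOne(filter, update, options)
// in the official MongoDB driver, with a TTL index on expiresAt.
function buildHitUpsert(key: string, windowMs: number, now: number) {
  return {
    filter: { _id: key },
    update: {
      $inc: { count: 1 },                                    // atomic increment
      $setOnInsert: { expiresAt: new Date(now + windowMs) }, // only on first hit
    },
    options: { upsert: true },                               // create if missing
  };
}
```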

Redis & Postgres Stores

Redis and Postgres benchmarks are coming soon. Both stores are fully supported and use atomic operations (Lua scripts for Redis, INSERT...ON CONFLICT upserts for Postgres). Performance is network-bound, typically 200-3,500 ops/sec on localhost Docker.
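For Postgres, the INSERT...ON CONFLICT upsert mentioned above can be sketched like this (hypothetical table and column names, held as a query string of the kind you would pass to the `pg` driver):

```typescript
// Sketch of an atomic fixed-window upsert for Postgres (hypothetical
// table/column names, not hitlimit's actual schema). One statement either
// creates the row, resets an expired window, or bumps the live counter.
// $1 = key, $2 = window length as an interval string (e.g. '60 seconds').
const POSTGRES_HIT_SQL = `
  INSERT INTO hits (key, count, expires_at)
  VALUES ($1, 1, now() + $2::interval)
  ON CONFLICT (key) DO UPDATE SET
    count = CASE WHEN hits.expires_at < now() THEN 1 ELSE hits.count + 1 END,
    expires_at = CASE WHEN hits.expires_at < now()
                      THEN now() + $2::interval ELSE hits.expires_at END
  RETURNING count, expires_at;
`;
```

Because the whole decision lives in one statement, the store pays exactly one query round-trip per request, which is why throughput tracks network latency.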

Store Support Comparison

| Library | Memory | SQLite | Redis | Postgres | MongoDB |
| --- | --- | --- | --- | --- | --- |
| hitlimit | Built-in | Built-in | Built-in | Built-in | Built-in |
| express-rate-limit | Built-in | No | External | No | No |
| rate-limiter-flexible | Built-in | No | Built-in | Built-in | Built-in |

HTTP Overhead

How much throughput does hitlimit cost on real HTTP servers?

| Framework | Without Limiter | With hitlimit | Overhead |
| --- | --- | --- | --- |
| Express | 45,000 req/s | 42,000 req/s | ~7% |
| Fastify | 65,000 req/s | 61,000 req/s | ~6% |

HTTP benchmarks measured with autocannon (-c 100 -d 10). The limit is set very high (1M) so the numbers reflect middleware overhead rather than blocked requests.

Memory Store (Bun)

hitlimit-bun's memory store uses a zero-allocation sync fast path for maximum throughput.

| Scenario | ops/sec | Latency |
| --- | --- | --- |
| Single IP | 7.73M | 129ns |
| 1,000 Unique IPs | 5.94M | 168ns |
| 10,000 Unique IPs | 5.57M | 179ns |

Key Insight: Bun significantly outperforms Node.js on memory store — 5.57M vs 4.08M at 10K IPs (1.4x faster). Bun's optimized JavaScript engine shines on synchronous operations like in-memory rate limiting.

SQLite Store (bun:sqlite)

Native bun:sqlite — zero dependencies, built into the Bun runtime.

bun:sqlite Store

- 372K ops/sec (10K IPs)
- 2.7μs avg latency
- 0 dependencies

MongoDB Store (Bun)

Atomic $inc + $setOnInsert upserts with TTL indexes — same approach as Node.js. Performance is network/query-bound.

| Scenario | ops/sec | Latency |
| --- | --- | --- |
| Single IP | 2.3K | 434.4μs |
| 1,000 Unique IPs | 2.2K | 463.2μs |
| 10,000 Unique IPs | 2.1K | 469μs |

Key Insight: MongoDB performance is nearly identical on Bun and Node.js (~2.3K vs 2.4K ops/sec). The bottleneck is MongoDB network/query latency (~434.4μs), not the runtime.

Redis Store

Atomic Lua scripts (EVALSHA) — single round-trip per request. Performance is network-bound.

Redis Store (Bun)

- Network-bound: ~200-3,500 ops/sec
- 1 round-trip per request

Redis throughput is limited by network latency (~300-500μs local Docker). For remote Redis, expect 200-1,000 ops/sec. Bun benchmarks for Redis coming soon.
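The single round-trip comes from bundling the increment and expiry into one script. An illustrative fixed-window script of that kind (not hitlimit's actual script), loaded once with SCRIPT LOAD and then executed via EVALSHA:

```typescript
// Illustrative fixed-window Lua script of the kind described above
// (not hitlimit's actual script). KEYS[1] = counter key,
// ARGV[1] = window length in ms. INCR and PEXPIRE run atomically
// inside Redis, so one EVALSHA round-trip covers the whole check.
const FIXED_WINDOW_LUA = `
  local count = redis.call('INCR', KEYS[1])
  if count == 1 then
    redis.call('PEXPIRE', KEYS[1], ARGV[1])
  end
  return count
`;
```

The caller compares the returned count against the limit; since Redis executes scripts atomically, concurrent clients can never double-set the expiry or lose an increment.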

Postgres Store

Atomic INSERT...ON CONFLICT upserts for distributed Postgres deployments.

Postgres Store (Bun)

- Network-bound: ~200-3,500 ops/sec

Postgres throughput is limited by query latency (~280-300μs local Docker). Bun benchmarks for Postgres coming soon.

All Five Stores — Built In

| Store | Status | Performance | Use Case |
| --- | --- | --- | --- |
| Memory | Built-in | 5.57M ops/sec | Single server, ephemeral |
| bun:sqlite | Built-in (native) | 372K ops/sec | Single server, persistent |
| Redis | Built-in | Network-bound | Multi-server, distributed |
| Postgres | Built-in | Network-bound | Multi-server, distributed (SQL) |
| MongoDB | Built-in | 2.1K ops/sec | Multi-server, distributed (document DB) |

Run Benchmarks Yourself

Terminal
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
pnpm install && pnpm build

# Start Redis (optional)
docker compose up -d redis

# Run Node.js benchmarks
cd benchmarks
pnpm bench:node

Note: These are our benchmarks; we've done our best to keep them fair and reproducible, and results vary by hardware and environment. hitlimit is the fastest in all memory scenarios: ~2.9x faster than rate-limiter-flexible on single-IP (5.96M vs 2.06M ops/sec) and ~3.2x faster on high-traffic 10K IPs (4.08M vs 1.26M). For MongoDB, both libraries perform comparably since both use the same atomic upsert pattern. Redis and Postgres benchmarks are coming soon. These numbers aren't set in stone; if you spot issues or have suggestions, please open an issue.
Terminal
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
bun install

# Start Redis (optional)
docker compose up -d redis

# Run Bun benchmarks
cd benchmarks
pnpm bench:bun

Note: These are our benchmarks; we've done our best to keep them fair and reproducible, and results vary by hardware and environment. Bun significantly outperforms Node.js on the memory store (5.57M vs 4.08M ops/sec at 10K IPs); its optimized JavaScript engine shines on synchronous in-memory operations. Clone the repo and run them yourself; they're not set in stone. If you spot issues or have suggestions, please open an issue.

Results saved to benchmarks/results/