Benchmarks

Real benchmarks you can reproduce. We believe in transparency — here's exactly how hitlimit performs.

Which Store Should I Use? (Node.js)

| Use Case | Recommended Store | Performance |
| --- | --- | --- |
| Single server, many unique IPs | Memory (default) | 3.16-4.83M ops/sec |
| Need persistence, single server | SQLite | 352-455K ops/sec |
| Multiple servers (distributed) | Redis | 6.7-6.9K ops/sec |
| Multiple servers (distributed, SQL) | Postgres | 3.0-3.5K ops/sec |

Which Store Should I Use? (Bun)

| Use Case | Recommended Store | Performance |
| --- | --- | --- |
| Single server, many unique IPs | Memory (default) | 8.32-12.38M ops/sec |
| Need persistence, single server | bun:sqlite | 325-458K ops/sec |
| Multiple servers (distributed) | Redis | 6.7K ops/sec |
| Multiple servers (distributed, SQL) | Postgres | 3.7K ops/sec |
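The two tables above reduce to a simple decision: single server vs multiple servers, ephemeral vs persistent. As a sketch, here is that decision encoded as a tiny helper (our own illustrative function, not part of hitlimit's API):

```javascript
// Encodes the store-selection tables above as a function.
// Illustrative helper only -- not part of hitlimit's API.
function recommendStore({ multiServer, persistent, preferSql = false }) {
  if (multiServer) return preferSql ? 'postgres' : 'redis';
  return persistent ? 'sqlite' : 'memory';
}

console.log(recommendStore({ multiServer: false, persistent: false })); // memory
console.log(recommendStore({ multiServer: false, persistent: true }));  // sqlite
console.log(recommendStore({ multiServer: true, persistent: false }));  // redis
```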

Methodology

Test Environment

```
Machine:    Apple M1 (ARM64, 8GB)
Node.js:    v24.4.1
Bun:        v1.3.7
Redis:      7.x (Docker, localhost)
Postgres:   16.x (Docker, localhost)

Test Scenarios:
- single-ip:    Same key every request (worst case)
- multi-ip-1k:  1,000 unique keys (typical API)
- multi-ip-10k: 10,000 unique keys (high-traffic API)

Each benchmark: 5 runs × 50,000 iterations
```
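The loop structure described above (5 runs × 50,000 iterations, with per-scenario key generators) can be sketched in a few lines. The names and the Map-based stand-in workload are ours, not hitlimit's actual harness:

```javascript
// Minimal sketch of the benchmark loop: `runs` runs of `iterations`
// calls against a workload, reporting ops/sec per run.
function bench(name, fn, { runs = 5, iterations = 50_000 } = {}) {
  const opsPerSec = [];
  for (let r = 0; r < runs; r++) {
    const start = process.hrtime.bigint();
    for (let i = 0; i < iterations; i++) fn(i);
    const elapsedNs = Number(process.hrtime.bigint() - start);
    opsPerSec.push(Math.round(iterations / (elapsedNs / 1e9)));
  }
  return { name, opsPerSec };
}

// Key generators matching the three scenarios in the methodology:
const scenarios = {
  'single-ip': () => '203.0.113.1',         // same key every request
  'multi-ip-1k': (i) => `ip-${i % 1_000}`,  // 1,000 unique keys
  'multi-ip-10k': (i) => `ip-${i % 10_000}`,// 10,000 unique keys
};

// Stand-in workload: a plain Map-based counter (not hitlimit itself).
const hits = new Map();
for (const [name, keyFor] of Object.entries(scenarios)) {
  const { opsPerSec } = bench(name, (i) => {
    const key = keyFor(i);
    hits.set(key, (hits.get(key) ?? 0) + 1);
  });
  console.log(name, opsPerSec);
}
```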

Memory Store vs Competitors

Honest comparison with other Node.js rate limiters using the same benchmark suite.

Single IP (Edge Case)

One user hammering your API repeatedly.

| Library | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- |
| hitlimit | 4.83M | 207ns | fastest |
| rate-limiter-flexible | 1.66M | 601ns | 34% |
| express-rate-limit | 967K | 1,034ns | 20% |

10,000 Unique IPs (High Traffic)

High-traffic API with many concurrent users. hitlimit excels here.

| Library | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- |
| hitlimit | 3.16M | 316ns | fastest |
| rate-limiter-flexible | 1.14M | 878ns | 36% |
| express-rate-limit | 749K | 1,335ns | 24% |

Key Insight: hitlimit's sync fast path eliminates async/await overhead for in-process stores. For high-traffic APIs, hitlimit is ~2.8x faster than rate-limiter-flexible (3.16M vs 1.14M at 10K IPs).
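The sync fast path can be illustrated with a toy in-memory fixed-window counter (our own sketch, not hitlimit's implementation): because the store is in-process, the check is a plain synchronous function, so the hot path pays no Promise allocation or microtask hop.

```javascript
// Toy in-memory fixed-window limiter, illustrative only. `consume` is
// fully synchronous: no Promise, no await, no microtask on the hot path.
function createMemoryLimiter({ limit, windowMs }) {
  const buckets = new Map(); // key -> { count, resetAt }
  return {
    consume(key, now = Date.now()) {
      let b = buckets.get(key);
      if (!b || now >= b.resetAt) {
        b = { count: 0, resetAt: now + windowMs };
        buckets.set(key, b);
      }
      b.count++;
      return { allowed: b.count <= limit, remaining: Math.max(0, limit - b.count) };
    },
  };
}

const limiter = createMemoryLimiter({ limit: 3, windowMs: 60_000 });
const results = [1, 2, 3, 4].map(() => limiter.consume('203.0.113.1').allowed);
console.log(results); // first 3 allowed, 4th blocked: [ true, true, true, false ]
```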

SQLite Store (Node.js)

Only hitlimit offers a built-in SQLite store for Node.js (via better-sqlite3).

SQLite Store (Node.js)
352-455K ops/sec
2.2μs avg latency

Redis Store (Node.js)

Redis operations use atomic Lua scripts via defineCommand() for single round-trip performance. hitlimit wins all three scenarios with lower latency across the board.

| Library | Scenario | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- | --- |
| hitlimit | single-ip | 6.7K | 150μs | fastest |
| rate-limiter-flexible | single-ip | 5.7K | 176μs | 85% |
| hitlimit | multi-1k | 6.9K | 144μs | fastest |
| rate-limiter-flexible | multi-1k | 5.9K | 171μs | 85% |
| hitlimit | multi-10k | 6.7K | 149μs | fastest |
| rate-limiter-flexible | multi-10k | 6.4K | 156μs | 95% |

Key Insight: Both libraries use atomic Lua scripts via ioredis defineCommand() for SHA caching and single round-trip performance. hitlimit wins all three scenarios — 18% faster on single-ip and multi-1k, 5% faster on multi-10k. hitlimit also uses significantly less memory (4.9MB vs 29-55MB for RLF).

Redis throughput is limited by network latency (~150μs local Docker). For remote Redis, expect 200-1000 ops/sec.
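The atomic-Lua pattern works like this: a script that increments a counter and sets its expiry runs inside Redis in one round trip, and ioredis's `defineCommand()` registers it so it is invoked by SHA (`EVALSHA`) like a native command. The script below is our own minimal fixed-window example, not hitlimit's actual script; a pure-JS model of the same semantics is included so it runs without a Redis server.

```javascript
// A minimal fixed-window consume script in the style described above.
// INCR + PEXPIRE execute atomically inside Redis: one round trip both
// counts the hit and arms the window expiry.
const fixedWindowLua = `
local count = redis.call('INCR', KEYS[1])
if count == 1 then
  redis.call('PEXPIRE', KEYS[1], ARGV[1])
end
return count
`;

// With ioredis, the script is registered once and then called like a
// command (SHA-cached via EVALSHA under the hood):
//
//   const Redis = require('ioredis');
//   const redis = new Redis();
//   redis.defineCommand('fwConsume', { numberOfKeys: 1, lua: fixedWindowLua });
//   const count = await redis.fwConsume('rl:203.0.113.1', 60_000);
//   const allowed = count <= limit;

// Pure-JS model of the same semantics, runnable without Redis:
function modelConsume(store, key, windowMs, now = Date.now()) {
  const entry = store.get(key);
  if (!entry || now >= entry.expiresAt) {
    store.set(key, { count: 1, expiresAt: now + windowMs });
    return 1;
  }
  return ++entry.count;
}

const store = new Map();
console.log([1, 2, 3].map(() => modelConsume(store, 'rl:ip', 60_000))); // [ 1, 2, 3 ]
```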

Postgres Store (Node.js)

Postgres operations use atomic INSERT...ON CONFLICT upserts with named prepared statements. hitlimit wins all three scenarios with lower latency and better memory usage.

| Library | Scenario | ops/sec | Latency | vs Fastest |
| --- | --- | --- | --- | --- |
| hitlimit | single-ip | 3.5K | 286μs | fastest |
| rate-limiter-flexible | single-ip | 3.0K | 334μs | 86% |
| hitlimit | multi-1k | 3.2K | 308μs | fastest |
| rate-limiter-flexible | multi-1k | 3.0K | 330μs | 94% |
| hitlimit | multi-10k | 3.0K | 336μs | fastest |
| rate-limiter-flexible | multi-10k | 2.7K | 365μs | 92% |

Postgres throughput is limited by query latency (~280-300μs local Docker). hitlimit uses named prepared statements for server-side query plan caching.
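The pattern described above can be sketched with node-postgres (`pg`): a single `INSERT ... ON CONFLICT DO UPDATE` both creates and advances the counter atomically, and passing `name` in the query config makes `pg` use a named prepared statement so Postgres caches the plan server-side. The SQL and table shape here are our illustration, not hitlimit's actual statement:

```javascript
// Builds a pg query config for one atomic rate-limit consume.
// The `name` field makes node-postgres prepare the statement once per
// connection; Postgres then reuses the cached query plan.
const consumeQuery = (key, resetAt) => ({
  name: 'rl-consume',
  text: `
    INSERT INTO rate_limits (key, count, reset_at)
    VALUES ($1, 1, $2)
    ON CONFLICT (key) DO UPDATE SET
      count    = CASE WHEN rate_limits.reset_at <= now()
                      THEN 1 ELSE rate_limits.count + 1 END,
      reset_at = CASE WHEN rate_limits.reset_at <= now()
                      THEN $2 ELSE rate_limits.reset_at END
    RETURNING count, reset_at`,
  values: [key, resetAt],
});

// Usage with a pg Pool (requires a running Postgres):
//   const { rows } = await pool.query(
//     consumeQuery('203.0.113.1', new Date(Date.now() + 60_000)));
//   const allowed = rows[0].count <= limit;

const q = consumeQuery('203.0.113.1', new Date());
console.log(q.name, q.values[0]); // rl-consume 203.0.113.1
```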

Store Support Comparison

| Library | Memory | SQLite | Redis | Postgres |
| --- | --- | --- | --- | --- |
| hitlimit | Built-in | Built-in | Built-in | Built-in |
| express-rate-limit | Built-in | No | External | No |
| rate-limiter-flexible | Built-in | No | Built-in | Built-in |

HTTP Overhead

How much throughput does hitlimit cost on real HTTP servers?

| Framework | Without Limiter | With hitlimit | Overhead |
| --- | --- | --- | --- |
| Express | 45,000 req/s | 42,000 req/s | ~7% |
| Fastify | 65,000 req/s | 61,000 req/s | ~6% |

HTTP benchmarks measured with autocannon (-c 100 -d 10). High limit (1M) to measure overhead, not blocking.

Memory Store (Bun)

hitlimit-bun's memory store uses a zero-allocation sync fast path for maximum throughput.

| Scenario | ops/sec | Latency |
| --- | --- | --- |
| Single IP | 12.38M | 81ns |
| 1,000 Unique IPs | 5.17M | 193ns |
| 10,000 Unique IPs | 8.32M | 120ns |

Key Insight: Bun significantly outperforms Node.js on memory store — 8.32M vs 3.16M at 10K IPs (2.6x faster). Bun's optimized JavaScript engine shines on synchronous operations like in-memory rate limiting.

SQLite Store (bun:sqlite)

Native bun:sqlite — zero dependencies, built into the Bun runtime.

bun:sqlite Store
325K ops/sec (10K IPs)
3.1μs avg latency
0 dependencies

Redis Store (Bun)

Atomic Lua scripts (EVALSHA) — single round-trip per request. Same Redis protocol, same performance.

Redis Store (Bun)
6.7K ops/sec
148μs avg latency
1 round-trip per request

Redis throughput is limited by network latency (~150μs local Docker). For remote Redis, expect 200-1000 ops/sec.

Postgres Store (Bun)

Atomic INSERT...ON CONFLICT upserts for distributed Postgres deployments.

Postgres Store (Bun)
3.7K ops/sec
273μs avg latency

All Four Stores — Built In

| Store | Status | Performance | Use Case |
| --- | --- | --- | --- |
| Memory | Built-in | 8.32M ops/sec | Single server, ephemeral |
| bun:sqlite | Built-in (native) | 325K ops/sec | Single server, persistent |
| Redis | Built-in | 6.7K ops/sec | Multi-server, distributed |
| Postgres | Built-in | 3.7K ops/sec | Multi-server, distributed (SQL) |

Run Benchmarks Yourself

Terminal

```sh
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
pnpm install && pnpm build

# Start Redis (optional)
docker compose up -d redis

# Run Node.js benchmarks
cd benchmarks
pnpm bench:node
```
Note: These are our benchmarks, and we've done our best to keep them fair and reproducible; results still vary by hardware and environment. hitlimit is the fastest in all memory scenarios: ~2.9x faster than rate-limiter-flexible on single-IP (4.83M vs 1.66M ops/sec) and ~2.8x faster at 10K unique IPs (3.16M vs 1.14M). hitlimit also wins every Redis and Postgres scenario, with 18% faster Redis throughput and significantly lower memory usage. The benchmarks aren't set in stone; if you spot issues or have suggestions, please open an issue.
Terminal

```sh
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
bun install

# Start Redis (optional)
docker compose up -d redis

# Run Bun benchmarks
cd benchmarks
pnpm bench:bun
```
Note: These are our benchmarks, and we've done our best to keep them fair and reproducible; results still vary by hardware and environment. Bun significantly outperforms Node.js on the memory store: 8.32M vs 3.16M ops/sec at 10K IPs, because Bun's optimized JavaScript engine shines on synchronous in-memory operations. Clone the repo and run them yourself; if you spot issues or have suggestions, please open an issue.

Results saved to benchmarks/results/