# Scaling Guide

Scale hitlimit from a single server to a distributed architecture.
## Single Instance

For small deployments, the memory store works well:

Node.js:

```ts
import { hitlimit, memoryStore } from '@joint-ops/hitlimit'

const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store: memoryStore({
    cleanupInterval: 60000 // sweep expired entries every 60s
  })
})
```

Bun:

```ts
import { hitlimit } from '@joint-ops/hitlimit-bun'

const limiter = hitlimit({
  limit: 100,
  window: '1m'
})
```

## Horizontal Scaling with Redis
For multiple instances, use Redis to share state:

Node.js:

```ts
import { hitlimit } from '@joint-ops/hitlimit'
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

const limiter = hitlimit({
  limit: 1000,
  window: '1m',
  store: redisStore({
    url: 'redis://redis-cluster:6379',
    prefix: 'api:ratelimit:'
  })
})
```

Bun:

```ts
import { hitlimit } from '@joint-ops/hitlimit-bun'
import { redisStore } from '@joint-ops/hitlimit-bun/stores/redis'

const limiter = hitlimit({
  limit: 1000,
  window: '1m',
  store: redisStore({
    url: 'redis://redis-cluster:6379',
    prefix: 'api:ratelimit:'
  })
})
```

## Redis Cluster Configuration
```ts
import Redis from 'ioredis'
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

// Connect to a three-node cluster and pass the client to the store directly
const cluster = new Redis.Cluster([
  { host: 'node1', port: 6379 },
  { host: 'node2', port: 6379 },
  { host: 'node3', port: 6379 }
])

redisStore({ client: cluster })
```

## Multi-Tier Rate Limiting
Apply different limits to different routes:

```ts
// Global: 1000 requests per minute
const globalLimiter = hitlimit({
  limit: 1000,
  window: '1m',
  store
})

// Strict: 10 requests per minute for auth
const authLimiter = hitlimit({
  limit: 10,
  window: '1m',
  store,
  key: (req) => `auth:${req.ip}`
})

// Relaxed: 5000 requests per minute for reads
const readLimiter = hitlimit({
  limit: 5000,
  window: '1m',
  store,
  key: (req) => `read:${req.ip}`
})
```

## Performance by Store
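The throughput figures below come from micro-benchmarks of each store's hit path. As an illustration of what such a measurement can look like (a self-contained sketch, not hitlimit's actual benchmark harness or its `memoryStore`), here is a naive fixed-window counter timed over a million hits:

```typescript
// Minimal fixed-window counter, timed the way an ops/sec figure is
// produced. Illustrative only — not the hitlimit memoryStore.
function makeCounter(windowMs: number) {
  const hits = new Map<string, { count: number; resetAt: number }>()
  return (key: string): number => {
    const now = Date.now()
    const entry = hits.get(key)
    if (!entry || now >= entry.resetAt) {
      // Window expired (or first hit): start a fresh window
      hits.set(key, { count: 1, resetAt: now + windowMs })
      return 1
    }
    return ++entry.count
  }
}

const hit = makeCounter(60_000)
const N = 1_000_000
const start = performance.now()
for (let i = 0; i < N; i++) hit('bench')
const elapsedMs = performance.now() - start
console.log(`${Math.round(N / (elapsedMs / 1000)).toLocaleString()} ops/sec`)
```

Numbers from a loop like this are dominated by hardware and runtime, which is why the tables report a range rather than a single figure.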
Node.js:

| Store | Ops/sec | Multi-Instance |
|---|---|---|
| memoryStore | 4,082,874+ | No |
| sqliteStore | 404,135+ | No |
| mongoStore | 2,161+ | Yes |
| redisStore | Network-bound | Yes |
| valkeyStore | Network-bound | Yes |
| dragonflyStore | Network-bound | Yes |
| postgresStore | Network-bound | Yes |
| mysqlStore | Network-bound | Yes |

Bun:

| Store | Ops/sec | Latency | Multi-Instance |
|---|---|---|---|
| memoryStore | 5,574,103+ | ~179ns | No |
| sqliteStore | 372,247+ | ~2.7μs | No |
| redisStore | Network-bound | Network-bound | Yes |
| valkeyStore | Network-bound | Network-bound | Yes |
| dragonflyStore | Network-bound | Network-bound | Yes |
| postgresStore | Network-bound | Network-bound | Yes |
| mongoStore | Network-bound | Network-bound | Yes |
| mysqlStore | Network-bound | Network-bound | Yes |
These are our benchmarks — we've done our best to keep them fair and reproducible. Results vary by hardware, so run them yourself on your target machines.
## Migrating Between Stores

Swapping stores requires changing only the `store` option. Your application code stays the same:

```ts
// Stage 1: Start with memory (fastest, no deps)
const store = memoryStore()

// Stage 2: Add persistence (survives restarts)
// const store = sqliteStore({ path: './data/rate-limits.db' })

// Stage 3: Go distributed (shared state across instances)
// const store = redisStore({ url: process.env.REDIS_URL })

const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store
})
```

## Optimization Tips
- Start with memory — It's the default and fastest option. Only move to SQLite or Redis when you actually need persistence or multi-instance support.
- Keep keys short — Shorter keys reduce memory and storage overhead.
- Reuse Redis connections — Create one `redisStore` and share it across multiple limiters.
- Skip health checks — Exclude `/health` and monitoring endpoints from rate limiting.
- Set prefix per service — When multiple services share Redis, use distinct prefixes to avoid key collisions.
- Fail open in production — Configure `onError` to return `'allow'` so a Redis outage doesn't block all requests.
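The fail-open tip can be sketched as a wrapper: if the store-backed check throws (for example, Redis is unreachable), an error policy decides the outcome instead of the request failing. Only the `onError` option and the `'allow'` return value come from the tip above; the wrapper, `checkWithPolicy`, and its parameters are hypothetical names, not part of the hitlimit API.

```typescript
// Illustrative fail-open wrapper — not the hitlimit implementation.
async function checkWithPolicy(
  check: () => Promise<boolean>,              // store-backed limit check
  onError: (err: unknown) => 'allow' | 'deny' // policy when the store errors
): Promise<boolean> {
  try {
    return await check()
  } catch (err) {
    // Fail open: a store outage should not take the whole API down
    return onError(err) === 'allow'
  }
}

// Usage: traffic is allowed through when the store is unreachable
checkWithPolicy(
  async () => { throw new Error('redis down') },
  () => 'allow'
).then((allowed) => console.log(allowed)) // true: request proceeds
```

Failing closed (`'deny'`) is the safer default for auth endpoints, where letting unlimited traffic through during an outage is worse than rejecting it.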