Bun Performance
hitlimit-bun is optimized to take full advantage of Bun's performance characteristics. This guide covers benchmarks, optimization tips, and best practices for high-throughput applications.
Benchmarks
Performance measurements on Apple M1 Max with 64GB RAM:
| Store | Throughput | Latency (p50) | Latency (p99) |
|---|---|---|---|
| Memory | 280K req/s | 0.05ms | 0.12ms |
| SQLite (WAL) | 220K req/s | 0.08ms | 0.18ms |
| SQLite (Memory) | 260K req/s | 0.06ms | 0.14ms |
| Redis (Local) | 95K req/s | 0.35ms | 0.85ms |
Benchmarked using Bun 1.0+ with hitlimit-bun, single-threaded
Bun vs Node.js
hitlimit performance comparison between Bun and Node.js:
| Runtime | Memory Store | SQLite Store |
|---|---|---|
| Bun 1.0+ | 280K req/s | 220K req/s |
| Node.js 20 | 200K req/s | 180K req/s |
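For a quick sanity check outside the repository's bench script, you can time `limiter.check()` in a tight loop. The sketch below is illustrative only; the iteration count, request, and header are arbitrary, and your numbers will differ from the tables above depending on hardware and store.

```ts
// Rough throughput sketch (not the repository's bench script).
// No store is passed, so this exercises whatever store hitlimit uses by default.
import { hitlimit } from '@joint-ops/hitlimit-bun'

const limiter = hitlimit({ limit: 1_000_000, window: '1m' })
const req = new Request('http://localhost/bench', {
  headers: { 'X-API-Key': 'bench-key' }
})

const iterations = 500_000
const start = performance.now()
for (let i = 0; i < iterations; i++) {
  await limiter.check(req)
}
const seconds = (performance.now() - start) / 1000
console.log(`${Math.round(iterations / seconds)} checks/s`)
```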
Optimization Tips
1. Use Native SQLite Over Redis for Single-Instance
Bun's native SQLite is faster than Redis for single-instance deployments:
```ts
// Prefer this for single-instance deployments
const store = bunSqliteStore({ path: './rate-limits.db' })
// Use Redis only for distributed deployments
// const store = redisStore({ url: '...' })
```
2. Enable WAL Mode
WAL mode is enabled by default and provides better concurrent read/write performance:
```ts
const store = bunSqliteStore({
path: './rate-limits.db',
walMode: true // Default
})
```
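To confirm that WAL is actually in effect on an existing database file, you can read the pragma directly with Bun's built-in SQLite driver. This uses standard `bun:sqlite`, not a hitlimit API, and assumes the database file already exists:

```ts
// Inspect the journal mode of the rate-limit database; a WAL-enabled
// database reports "wal" here.
import { Database } from 'bun:sqlite'

const db = new Database('./rate-limits.db', { readonly: true })
console.log(db.query('PRAGMA journal_mode').get()) // e.g. { journal_mode: "wal" }
db.close()
```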
3. Tune Cleanup Interval
Adjust cleanup frequency based on your traffic patterns:
```ts
const store = bunSqliteStore({
path: './rate-limits.db',
// High traffic: less frequent cleanup
cleanupInterval: 120000, // 2 minutes
// Low traffic: more frequent cleanup
// cleanupInterval: 30000, // 30 seconds
})
```
4. Use Efficient Key Functions
Keep key extraction fast and avoid expensive operations:
```ts
// Good: Simple, fast key extraction
hitlimit({
limit: 100,
window: '1m',
key(req) {
return req.headers.get('X-API-Key') || 'anon'
}
})
// Avoid: Expensive operations in key function
// key: async (req) => await lookupUser(req) // Don't do this
```
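If you genuinely need a derived identity (for example, mapping API keys to account IDs), one option is to memoize the expensive lookup outside the key function so the hot path stays synchronous. This is only a sketch: `lookupAccountId` and `warmAccountCache` are hypothetical helpers, not part of hitlimit-bun.

```ts
import { hitlimit } from '@joint-ops/hitlimit-bun'

// Hypothetical expensive lookup, standing in for a database or auth-service call.
async function lookupAccountId(apiKey: string): Promise<string> {
  return `account-for-${apiKey}`
}

const accountCache = new Map<string, string>()

// Call this in your fetch handler before limiter.check(req) so the key
// function below never has to do async work itself.
export async function warmAccountCache(req: Request): Promise<void> {
  const apiKey = req.headers.get('X-API-Key') ?? 'anon'
  if (!accountCache.has(apiKey)) {
    accountCache.set(apiKey, await lookupAccountId(apiKey))
  }
}

export const limiter = hitlimit({
  limit: 100,
  window: '1m',
  // Cheap and synchronous: only reads from the in-memory cache.
  key(req) {
    const apiKey = req.headers.get('X-API-Key') ?? 'anon'
    return accountCache.get(apiKey) ?? apiKey
  }
})
```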
5. Batch Rate Limit Checks
For batch APIs, check rate limits once per batch:
```ts
// Rate limit by batch, not per item
Bun.serve({
async fetch(req) {
const items = await req.json()
// One check per request, not per item
const result = await limiter.check(req)
if (!result.allowed) {
return new Response('Rate limited', { status: 429 })
}
// Process all items...
return new Response('OK')
}
})
```
Running Benchmarks
Run your own benchmarks with the included script:
```bash
# Clone the repository
git clone https://github.com/JointOps/hitlimit-monorepo
cd hitlimit-monorepo
# Run Bun benchmarks
bun run bench:bun
# Compare with Node.js
bun run bench:compare
```
Production Configuration
Recommended configuration for production deployments:
```ts
import { hitlimit, bunSqliteStore } from '@joint-ops/hitlimit-bun'
const limiter = hitlimit({
limit: 1000,
window: '1m',
store: bunSqliteStore({
path: './data/rate-limits.db',
walMode: true,
cleanupInterval: 60000
}),
// Fail open on errors (don't block requests)
failOpen: true,
// Skip rate limiting for health checks
skip(req) {
return new URL(req.url).pathname === '/health'
}
})
```
Monitoring
Monitor rate limiter performance:
```ts
const limiter = hitlimit({
limit: 100,
window: '1m',
onRateLimited(key, req) {
// Log or send metrics
console.log(`Rate limited: ${key}`)
metrics.increment('rate_limit_exceeded')
}
})
```
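If you also want visibility into how long checks take in production, one approach is to wrap `limiter.check()` with a timer. This is a minimal sketch that reuses the `limiter` from the example above; `recordLatency` is a placeholder for your metrics client, not a hitlimit API.

```ts
// Placeholder metrics hook; swap in your real client (StatsD, Prometheus, ...).
function recordLatency(name: string, ms: number): void {
  console.log(`${name}: ${ms.toFixed(3)}ms`)
}

// Wraps limiter.check() and reports how long each rate-limit check takes.
async function checkWithTiming(req: Request) {
  const start = performance.now()
  const result = await limiter.check(req)
  recordLatency('rate_limit_check_ms', performance.now() - start)
  return result
}
```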
Next Steps
- Stores - Configure storage backends
- Bun.serve - Server integration patterns
- Elysia Plugin - Framework integration