# Production Deployment

Best practices for deploying hitlimit in production environments.

## Choose the Right Store

Select a storage backend based on your deployment:

| Deployment | Recommended Store | Notes |
|---|---|---|
| Single instance | memoryStore | Fast, no setup required |
| Single server, persistence | sqliteStore | Survives restarts |
| Multiple instances | redisStore | Shared state across nodes |
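
For a single server that needs counters to survive restarts, the SQLite store keeps state on disk. Here is a minimal sketch; the import path `@joint-ops/hitlimit/stores/sqlite` and the `path` option are assumptions that mirror the Redis store's layout, so check the store's actual API before copying:

```ts
import { hitlimit } from '@joint-ops/hitlimit'
// Assumed import path and option name, mirroring the Redis store
import { sqliteStore } from '@joint-ops/hitlimit/stores/sqlite'

const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store: sqliteStore({
    path: './data/rate-limits.db' // assumed option: on-disk location of the counters
  })
})
```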

## Redis Setup for Production

Point the limiter at a shared Redis instance and namespace its keys with a prefix:

```ts
import { hitlimit } from '@joint-ops/hitlimit'
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store: redisStore({
    url: process.env.REDIS_URL,
    prefix: 'api:rl:'
  })
})
```

## Environment-Based Configuration

Relax limits and fall back to the in-memory store outside production:

```ts
const isProd = process.env.NODE_ENV === 'production'

const limiter = hitlimit({
  limit: isProd ? 100 : 1000, // Relaxed in dev
  window: '1m',
  store: isProd
    ? redisStore({ url: process.env.REDIS_URL })
    : memoryStore(),
  skip: (req) => !isProd && req.headers['x-skip-ratelimit']
})
```

## Graceful Shutdown

Clean up store connections on shutdown:

```ts
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

const store = redisStore({ url: process.env.REDIS_URL })
const limiter = hitlimit({ limit: 100, window: '1m', store })

// Close the Redis connection before the process exits
process.on('SIGTERM', async () => {
  await store.close()
  process.exit(0)
})
```

## Security Headers

hitlimit sets standard rate limit headers:

```
RateLimit-Limit: 100
RateLimit-Remaining: 42
RateLimit-Reset: 1640000060
Retry-After: 27
```

`Retry-After` is only sent when the request has been rate limited.
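
Clients can read these headers to back off automatically. Below is a minimal sketch of a client-side fetch wrapper that waits for the advertised `Retry-After` interval before retrying; the helper name and retry cap are illustrative and not part of hitlimit:

```ts
// Hypothetical client-side helper: retries once the server's Retry-After elapses
async function fetchWithBackoff(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url)
    if (res.status !== 429) return res

    // Fall back to a 1-second wait if the header is missing or unparsable
    const retryAfter = Number(res.headers.get('retry-after')) || 1
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000))
  }
  throw new Error(`Rate limited after ${maxRetries} retries: ${url}`)
}
```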

## Custom Error Responses

Customize the status code and response body returned to rate-limited clients:

```ts
hitlimit({
  limit: 100,
  window: '1m',
  statusCode: 429,
  response: {
    error: 'rate_limit_exceeded',
    message: 'Too many requests. Please try again later.',
    documentation: 'https://api.example.com/docs/rate-limits'
  }
})
```

## Behind a Load Balancer

When running behind a proxy or load balancer, make sure the limiter keys on the real client IP rather than the proxy's address:

```ts
// Trust the X-Forwarded-For header (Express)
app.set('trust proxy', true)

// Or extract the client IP yourself
hitlimit({
  limit: 100,
  window: '1m',
  key: (req) => {
    const forwarded = req.headers['x-forwarded-for']
    // The header may arrive as an array; take the first (client-most) entry
    const first = Array.isArray(forwarded) ? forwarded[0] : forwarded
    return first?.split(',')[0].trim() || req.ip
  }
})
```

## Production Checklist

- Use a shared store such as Redis for multi-instance deployments
- Configure appropriate limits for your API
- Set up graceful shutdown handlers
- Enable rate limit headers for client visibility
- Configure trust proxy if behind a load balancer
- Monitor rate limit metrics (see the sketch below)
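
One way to monitor rate limiting without relying on any library-specific hooks is to count 429 responses at the middleware layer. A minimal Express sketch; the in-process counter and log format are illustrative, and in production you would export the count to your metrics system:

```ts
import express from 'express'

const app = express()

// Illustrative in-process counter; swap for your metrics client in production
let rateLimitedTotal = 0

// Register before the limiter so every request gets a 'finish' listener
app.use((req, res, next) => {
  res.on('finish', () => {
    if (res.statusCode === 429) {
      rateLimitedTotal++
      console.warn(`rate_limited total=${rateLimitedTotal} path=${req.path}`)
    }
  })
  next()
})
```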