Production Deployment

Best practices for deploying hitlimit in production environments.

Choose the Right Store

Select a storage backend based on your deployment:

| Deployment | Recommended Store | Notes |
| --- | --- | --- |
| Single instance | memoryStore | Fast, no setup required |
| Single server, persistence | sqliteStore | Survives restarts |
| Multiple instances | redisStore | Shared state across nodes |
| Multiple instances (open-source) | valkeyStore | BSD-licensed Redis alternative |
| Multiple instances (high-throughput) | dragonflyStore | Multi-threaded Redis alternative |
| Multiple instances (no Redis) | postgresStore | Use existing Postgres DB |
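The memory and Redis stores are shown throughout this guide; the file- and database-backed options in the table follow the same shape. A hedged sketch for sqliteStore (the `path` option name is an assumption, mirroring redisStore's options object):

```javascript
import { hitlimit } from '@joint-ops/hitlimit'
import { sqliteStore } from '@joint-ops/hitlimit/stores/sqlite'

// Single server with persistence: counters survive process restarts
const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store: sqliteStore({ path: './ratelimit.db' })  // assumed option name
})
```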

Redis Setup for Production

Node.js:

import { hitlimit } from '@joint-ops/hitlimit'
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store: redisStore({
    url: process.env.REDIS_URL,
    prefix: 'api:rl:'
  })
})
Bun:

import { hitlimit } from '@joint-ops/hitlimit-bun'
import { redisStore } from '@joint-ops/hitlimit-bun/stores/redis'

const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store: redisStore({
    url: Bun.env.REDIS_URL,
    prefix: 'api:rl:'
  })
})

Environment-Based Configuration

Node.js:

const isProd = process.env.NODE_ENV === 'production'

const limiter = hitlimit({
  limit: isProd ? 100 : 1000,  // Relaxed in dev
  window: '1m',
  store: isProd
    ? redisStore({ url: process.env.REDIS_URL })
    : memoryStore(),
  skip: (req) => !isProd && req.headers['x-skip-ratelimit']
})
Bun:

const isProd = Bun.env.NODE_ENV === 'production'

const limiter = hitlimit({
  limit: isProd ? 100 : 1000,  // Relaxed in dev
  window: '1m',
  store: isProd
    ? redisStore({ url: Bun.env.REDIS_URL })
    : memoryStore(),
  skip: (req) => !isProd && req.headers.get('x-skip-ratelimit')
})

Tip: Bun automatically loads .env files without additional packages. Place your .env file in the project root and access variables via Bun.env.

Graceful Shutdown

Clean up store connections on shutdown:

Node.js:

import { hitlimit } from '@joint-ops/hitlimit'
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

const store = redisStore({ url: process.env.REDIS_URL })
const limiter = hitlimit({ limit: 100, window: '1m', store })

process.on('SIGTERM', async () => {
  await store.close()
  process.exit(0)
})
Bun:

import { hitlimit } from '@joint-ops/hitlimit-bun'
import { redisStore } from '@joint-ops/hitlimit-bun/stores/redis'

const store = redisStore({ url: Bun.env.REDIS_URL })
const limiter = hitlimit({ limit: 100, window: '1m', store })

const server = Bun.serve({
  port: 3000,
  async fetch(req) {
    const result = await limiter.check(req)
    if (!result.allowed) {
      return new Response('Rate limited', { status: 429 })
    }
    return new Response('OK')
  }
})

// Graceful shutdown on SIGINT (Ctrl+C)
process.on('SIGINT', async () => {
  server.stop()
  await store.close()
  process.exit(0)
})

Error Handling

Configure fail-open or fail-closed behavior when the store encounters errors:

// Fail-open: Allow requests when Redis is down
const limiter = hitlimit({
  limit: 100,
  window: '1m',
  store,
  onError(err) {
    console.error('Rate limiter error:', err)
    return 'allow'  // Let request through
  }
})

// Fail-closed: Block requests when store is unavailable
const strictLimiter = hitlimit({
  limit: 10,
  window: '1m',
  store,
  onError(err) {
    console.error('Rate limiter error:', err)
    return 'deny'  // Block request
  }
})

Recommendation: Use fail-open ('allow') for most APIs to prioritize availability. Use fail-closed ('deny') for sensitive endpoints like authentication or payment processing.

Security Headers

hitlimit sets the standard rate limit headers (per the IETF draft RateLimit header fields):

RateLimit-Limit: 100
RateLimit-Remaining: 42
RateLimit-Reset: 1640000060
Retry-After: 27             (sent only when the request is rate limited)
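Clients are expected to use these headers to back off. A minimal client-side sketch (not part of hitlimit) that converts a 429 response's Retry-After header into a wait in milliseconds:

```javascript
// Turn a Retry-After header into a delay in milliseconds.
// The value may be delay-seconds ("27") or an HTTP-date.
function retryAfterMs(headers, fallbackMs = 1000) {
  const value = headers.get('retry-after')
  if (!value) return fallbackMs
  const seconds = Number(value)
  if (!Number.isNaN(seconds)) return seconds * 1000
  const date = Date.parse(value)
  return Number.isNaN(date) ? fallbackMs : Math.max(0, date - Date.now())
}

// With the example headers above:
console.log(retryAfterMs(new Headers({ 'Retry-After': '27' })))  // 27000
```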

Custom Error Responses

hitlimit({
  limit: 100,
  window: '1m',
  statusCode: 429,
  response: {
    error: 'rate_limit_exceeded',
    message: 'Too many requests. Please try again later.',
    documentation: 'https://api.example.com/docs/rate-limits'
  }
})

Behind a Load Balancer

When behind a proxy, use the correct IP source:

Node.js (Express):

// Trust only the first proxy hop, not arbitrary client-supplied values
app.set('trust proxy', 1)

// Or use custom key extraction
hitlimit({
  limit: 100,
  window: '1m',
  key: (req) => {
    const forwarded = req.headers['x-forwarded-for']
    return forwarded?.split(',')[0].trim() || req.ip
  }
})
Bun:

// Use server.requestIP() or X-Forwarded-For
Bun.serve({
  async fetch(req, server) {
    const forwarded = req.headers.get('x-forwarded-for')
    const ip = forwarded?.split(',')[0].trim()
      || server.requestIP(req)?.address
      || '127.0.0.1'

    const result = await limiter.check(ip)
    // ...
  }
})
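Both snippets above take the first X-Forwarded-For entry, which is client-controlled: a caller can prepend fake IPs before the header reaches your proxy. When you know how many proxies you control, it is safer to count from the right, since each trusted proxy appends the address it actually received the connection from. A hedged sketch (the `clientIp` helper is illustrative, not part of hitlimit):

```javascript
// Pick the client IP from X-Forwarded-For, counting from the right.
// trustedProxies = number of proxies you control that append to the header.
function clientIp(forwardedFor, trustedProxies = 1) {
  if (!forwardedFor) return null
  const hops = forwardedFor.split(',').map((s) => s.trim()).filter(Boolean)
  // The last trustedProxies - 1 entries are your own proxies' addresses;
  // the entry before those was appended by the first trusted proxy and is
  // the real client address.
  const index = hops.length - trustedProxies
  return hops[Math.max(0, index)]
}

console.log(clientIp('6.6.6.6, 203.0.113.7'))      // '203.0.113.7' (spoofed prefix ignored)
console.log(clientIp('203.0.113.7, 10.0.0.5', 2))  // '203.0.113.7'
```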

Production Checklist

  • Use Redis for distributed deployments — memory store limits are per-process and can be bypassed across instances
  • Extract the real client IP — behind a reverse proxy, use X-Forwarded-For to avoid rate limiting the proxy itself
  • Set appropriate limits per endpoint — authentication routes need stricter limits than read-only APIs
  • Never expose internal error details — return generic 429 responses to clients, log details server-side
  • Monitor for bypass attempts — log rate limit events and watch for patterns like rotating IPs
  • Keep Redis credentials secure — store REDIS_URL in environment variables, never hardcode
  • Enable rate limit headers — send RateLimit-Remaining and Retry-After so clients can back off gracefully
  • Skip internal health checks — exclude /health and monitoring endpoints from rate limiting
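Two of these items in code, using only options already shown in this guide (the separate authLimiter instance and the /health path check are illustrative):

```javascript
import { hitlimit } from '@joint-ops/hitlimit'
import { redisStore } from '@joint-ops/hitlimit/stores/redis'

const store = redisStore({ url: process.env.REDIS_URL })

// Stricter limits for authentication than for read-only routes
const authLimiter = hitlimit({ limit: 5, window: '15m', store })

// General API limiter that skips internal health checks
const apiLimiter = hitlimit({
  limit: 100,
  window: '1m',
  store,
  skip: (req) => req.url?.endsWith('/health')  // how the path is read depends on your framework
})
```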