@npclfg/nano-limit-redis
v1.0.0
nano-limit-redis
Distributed rate limiting with Redis. Sliding window, atomic Lua scripts, zero hassle.
- Sliding window counter for accurate rate limiting
- Atomic Lua scripts - no race conditions
- TypeScript-first with full type inference
- Minimal footprint - just ioredis as peer dep
The Problem
You need distributed rate limiting across multiple servers. You try bottleneck:
// bottleneck: queued jobs are local, lost on crash
// bottleneck: priority ordering is local only
// bottleneck: must call disconnect() or memory leaks
// bottleneck: SCRIPT FLUSH breaks connected limiters
// bottleneck: last release was 2019

The Fix
import Redis from "ioredis";
import { createRedisLimit } from "@npclfg/nano-limit-redis";
const redis = new Redis();
const limit = createRedisLimit(redis, {
limit: 100, // 100 requests
interval: 60000, // per minute
});
const result = await limit("user:123");
if (!result.allowed) {
// Rate limited - result.resetAt tells you when to retry
}

That's it. Distributed rate limiting with accurate sliding windows, atomic operations, and no lifecycle management.
Installation
npm install @npclfg/nano-limit-redis ioredis

Requirements: Node.js 16+, Redis 2.6+
API Reference
createRedisLimit(redis, options): RateLimiter
Create a distributed rate limiter.
const limit = createRedisLimit(redis, {
limit: 100, // max requests per interval
interval: 60000, // interval in ms (default: 60000)
});

RateLimiter
// Check and consume one request
const result = await limit("user:123");
// { allowed: true, remaining: 99, resetAt: 1699999999999, current: 1, limit: 100 }
// Check without consuming
const peek = await limit.peek("user:123");
// Reset rate limit for a key
await limit.reset("user:123");

RateLimitResult
| Property | Type | Description |
|----------|------|-------------|
| allowed | boolean | Whether the request is allowed |
| remaining | number | Remaining requests in window |
| resetAt | number | Unix timestamp (ms) when limit resets |
| current | number | Current request count in window |
| limit | number | The configured limit |
createPrefixedLimit(redis, prefix, options): RateLimiter
Create a rate limiter with a fixed key prefix.
const apiLimit = createPrefixedLimit(redis, "api", { limit: 1000, interval: 60000 });
const userLimit = createPrefixedLimit(redis, "user", { limit: 100, interval: 60000 });
await apiLimit("endpoint:/users"); // key: api:endpoint:/users
await userLimit("123"); // key: user:123

Patterns & Recipes
Express Middleware
import { createRedisLimit } from "@npclfg/nano-limit-redis";
const limit = createRedisLimit(redis, { limit: 100, interval: 60000 });
app.use(async (req, res, next) => {
const key = req.ip || String(req.headers["x-forwarded-for"] ?? "unknown"); // header may be string[] or absent
const result = await limit(key);
res.set({
"X-RateLimit-Limit": result.limit,
"X-RateLimit-Remaining": result.remaining,
"X-RateLimit-Reset": Math.ceil(result.resetAt / 1000),
});
if (!result.allowed) {
return res.status(429).json({ error: "Too Many Requests" });
}
next();
});

Per-User API Limits
const userLimit = createRedisLimit(redis, { limit: 1000, interval: 3600000 }); // 1000/hour
async function handleRequest(userId: string) {
const result = await userLimit(`user:${userId}`);
if (!result.allowed) {
throw new Error(`Rate limit exceeded. Retry after ${result.resetAt - Date.now()}ms`);
}
// Process request...
}

Different Limits Per Tier
const limits = {
free: createRedisLimit(redis, { limit: 100, interval: 3600000 }),
pro: createRedisLimit(redis, { limit: 10000, interval: 3600000 }),
enterprise: createRedisLimit(redis, { limit: 100000, interval: 3600000 }),
};
async function handleRequest(userId: string, tier: "free" | "pro" | "enterprise") {
const result = await limits[tier](`user:${userId}`);
// ...
}

Check Without Consuming
// Show user their current usage without affecting their limit
const status = await limit.peek(`user:${userId}`);
console.log(`You have ${status.remaining} requests remaining`);

Graceful Degradation
async function rateLimitedRequest(key: string) {
try {
const result = await limit(key);
return result;
} catch (error) {
// Redis down - fail open
console.error("Rate limiter unavailable:", error);
return { allowed: true, remaining: -1, resetAt: 0, current: 0, limit: 0 };
}
}

How It Works
Sliding Window Counter
Uses two fixed windows and interpolates for a sliding window effect:
- Current window: counts requests in the current time period
- Previous window: counts from the last period
- Weighted count: previous * (1 - progress) + current
This gives accuracy close to a true sliding window with O(1) memory per key.
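The interpolation can be sketched as a pure function (this illustrates the formula above, not the library's internals — the real computation happens inside the Lua script):

```typescript
// Sliding-window interpolation: weight the previous window's count by how
// much of it still overlaps the sliding window. `progress` is how far we
// are into the current window, from 0 to 1.
function weightedCount(previous: number, current: number, progress: number): number {
  return previous * (1 - progress) + current;
}

// Example: 80 requests in the previous window, 30 so far in the current
// one, 25% of the way into the current window:
// 80 * (1 - 0.25) + 30 = 90
```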
Atomic Lua Scripts
All operations use Lua scripts executed atomically on Redis:
-- Simplified version
local currentCount = redis.call('INCR', currentKey)
redis.call('PEXPIRE', currentKey, interval * 2)
-- No race conditions, no split-brain

Scripts are preloaded with SCRIPT LOAD and called with EVALSHA for performance.
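The "auto-reloads" behavior typically means catching Redis's NOSCRIPT error and re-sending the script source. Here is a sketch of that pattern against a minimal command interface (an assumption about the approach, not the package's actual code — `ScriptRunner` and `runScript` are illustrative names; ioredis clients provide both `evalsha` and `eval`):

```typescript
// Minimal interface covering the two Redis commands involved.
interface ScriptRunner {
  evalsha(sha: string, numKeys: number, ...args: (string | number)[]): Promise<unknown>;
  eval(script: string, numKeys: number, ...args: (string | number)[]): Promise<unknown>;
}

// Call by SHA for speed. If the server's script cache was cleared
// (e.g. SCRIPT FLUSH or a restart), Redis replies with NOSCRIPT;
// fall back to EVAL, which also re-caches the script.
async function runScript(
  redis: ScriptRunner,
  script: string,
  sha: string,
  numKeys: number,
  ...args: (string | number)[]
): Promise<unknown> {
  try {
    return await redis.evalsha(sha, numKeys, ...args);
  } catch (err) {
    if (err instanceof Error && err.message.includes("NOSCRIPT")) {
      return redis.eval(script, numKeys, ...args);
    }
    throw err; // genuine errors still propagate
  }
}
```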
Key Structure
Keys follow the pattern: ratelimit:{key}:{window}
ratelimit:user:123:1699999999 # Current window
ratelimit:user:123:1699999998 # Previous window

Keys automatically expire after 2 intervals via PEXPIRE.
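One plausible derivation for these keys — an assumption for illustration, not the library's exact scheme — is to use the interval-aligned window number as the suffix, so every server computes the same key for the same moment in time:

```typescript
// Hypothetical key derivation: the window id is the number of whole
// intervals elapsed since the epoch, so consecutive windows differ by 1
// (matching the adjacent keys shown above).
function windowKey(key: string, intervalMs: number, nowMs: number): string {
  const windowId = Math.floor(nowMs / intervalMs);
  return `ratelimit:${key}:${windowId}`;
}

// The previous window's key is simply the same call with windowId - 1,
// i.e. nowMs shifted back by one interval.
```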
Why Not Bottleneck?
| Feature | bottleneck | nano-limit-redis |
|---------|------------|------------------|
| Algorithm | Token bucket | Sliding window |
| Accuracy | Good | Better (no boundary issues) |
| Queued jobs | Local (lost on crash) | N/A - stateless |
| Lifecycle | disconnect() required | None needed |
| Lua scripts | Breaks on SCRIPT FLUSH | Auto-reloads |
| Last updated | 2019 | Active |
| TypeScript | @types package | Built-in |
License
MIT
