# ai-rate-queue

v0.1.2
Redis-backed requests-per-minute (RPM) limiter for AI/LLM API calls. Works across multiple workers/processes by sharing counters in Redis.
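To illustrate the idea, here is a minimal in-memory sketch of fixed-window RPM counting. This is a hypothetical stand-in, not the library's code: it assumes a fixed 60-second window, and the `FixedWindowLimiter` class exists only for this example. ai-rate-queue keeps the equivalent counters in Redis so that every worker process sees the same count.

```typescript
// Hypothetical sketch: fixed-window request counting in a single process.
// ai-rate-queue shares this kind of counter via Redis instead (assumption:
// a fixed 60-second window; the real library may differ in detail).
class FixedWindowLimiter {
  private count = 0;
  private windowStart = 0;

  constructor(private limit: number, private windowMs = 60_000) {}

  // Returns true if a request is allowed in the current window.
  tryAcquire(now: number = Date.now()): boolean {
    if (now - this.windowStart >= this.windowMs) {
      // A new window has begun: reset the counter. In Redis, the same
      // effect is typically achieved by letting the counter key expire.
      this.windowStart = now;
      this.count = 0;
    }
    if (this.count < this.limit) {
      this.count++;
      return true;
    }
    return false;
  }
}

const limiter = new FixedWindowLimiter(3);
// First three requests in the window pass; the fourth is rejected.
const results = [1, 2, 3, 4].map(() => limiter.tryAcquire(0));
console.log(results); // three true, then false
// Once the window has elapsed, requests are allowed again.
console.log(limiter.tryAcquire(60_000));
```

A single shared counter like this is why the limit holds across processes: each worker increments the same Redis key, rather than tracking its own local count.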
## Install

Requires Node.js >= 22.

```sh
npm install ai-rate-queue
```

Bring your own Redis client (for example, ioredis):

```sh
npm install ioredis
```

## Usage
```ts
import Redis from "ioredis";
import { createRateLimitQueue } from "ai-rate-queue";

const redis = new Redis(process.env.REDIS_URL!);

const queue = createRateLimitQueue({
  redis,
  requestsPerMinute: 60,
  keyPrefix: "my-app:openai"
});

const result = await queue.enqueue(async () => {
  // call your LLM provider here
  return "ok";
});
```

## Docs
- docs/quickstart.md
- docs/usage.md
- docs/redis.md
- docs/troubleshooting.md
