
limico

v2.1.2

Zero-dependency TypeScript rate limiter (interval & quota) with pluggable KV and optional distributed mutex.

Readme

limico

Limico is a tiny, zero-dependency set of rate-limit primitives that covers two real-world needs: an interval limiter that allows something to happen at most once per window per key, and a quota limiter that implements a proper token bucket with continuous refill, optional burst capacity, and per-call cost. It works in a single process with an in-memory store, scales across workers with a mutex-capable store, and ships a first-class Redis adapter that can update buckets atomically in one round trip via Lua. The API is small, the math is predictable, and the TypeScript types are strict.

Install

pnpm add limico
# for distributed/cross-process use
pnpm add ioredis            # or: pnpm add redis

How it works

There are two modes. The interval limiter is a per-key “once per window” gate—perfect for OTP or password reset throttling—implemented purely in memory with no persistence or store calls. The quota limiter is a token bucket that refills continuously at limit / windowMs, caps at burst (or limit if not set), and deducts cost per check. In one process it uses a minimal memory KV. In distributed setups you get correctness by providing a store that can serialize updates with a lock or, better, perform the entire refill/consume/write in one atomic Redis script.

Windows are strings with units: "ms", "s", "m", "h", "d".

Quick start

Interval

import { createLimiter } from "limico";

// one attempt every 500ms per user (purely in-memory)
const login = createLimiter({ kind: "interval", interval: "500ms" });

const r = await login.check("user:42");
if (!r.allowed) return tooMany(r.retryAfterMs);

// r: { allowed: boolean, remaining: 0|1, retryAfterMs: number }

Quota

import { createLimiter } from "limico";

const api = createLimiter({
  kind: "quota",
  limit: 100,   // tokens per window
  window: "1m",
  burst: 150,   // optional capacity cap (defaults to limit)
});

const res = await api.check("ip:1.2.3.4", 3); // cost=3
if (!res.allowed) return retryAfter(res.retryAfterMs);

Redis in distributed setups

For multiple workers or hosts, plug in the Redis adapter. The adapter speaks both node-redis v4 and ioredis without extra deps or types.

import { createClient } from "redis";     // or: import Redis from "ioredis"
import { createLimiter, RedisKvStore } from "limico";

const client = createClient();            // for ioredis: new Redis()
await (client as any).connect?.();

const store = new RedisKvStore(client);

const rl = createLimiter({
  kind: "quota",
  limit: 20,
  window: "10s",
  burst: 20,
  store,
  keyPrefix: "rl:api",
  lockTtlMs: 500,
  lockRetryDelayMs: 8,
  lockRetries: 3,
});

With Redis there are two safety paths. If the client exposes Lua scripting (EVAL/eval), limico uses an atomic fast path: a single Lua script reads, refills, consumes, and writes the bucket with an appropriate TTL, returning { allowed, tokens, retryAfterMs } in one round trip, with no lock contention and no races. If scripting is unavailable, it falls back to a lock-based path using SET NX PX and a Lua compare-and-delete unlock.
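
The lock-based fallback follows the standard Redis locking pattern. As a rough sketch of that pattern with ioredis (an illustration, not the adapter’s actual source):

import Redis from "ioredis";

const redis = new Redis();

// Acquire: SET key token NX PX ttl — "OK" only if the key didn't already exist.
async function tryLock(key: string, token: string, ttlMs: number) {
  const ok = await redis.set(key, token, "PX", ttlMs, "NX");
  return ok === "OK";
}

// Release: compare-and-delete in Lua so we never delete someone else's lock.
const UNLOCK = `
if redis.call("GET", KEYS[1]) == ARGV[1] then
  return redis.call("DEL", KEYS[1])
end
return 0`;

async function unlock(key: string, token: string) {
  await redis.eval(UNLOCK, 1, key, token);
}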

Example Usage

Here’s a thin express middleware using the quota limiter and the Redis adapter:

import type { Request, Response, NextFunction } from "express";
import { createLimiter, RedisKvStore } from "limico";
import { createClient } from "redis";

const client = createClient();
await (client as any).connect?.();
const store = new RedisKvStore(client);

const rl = createLimiter({
  kind: "quota",
  limit: 120,
  window: "1m",
  burst: 180,
  store,
  keyPrefix: "rl:express",
  onError: (err, op, id) => console.error("[rl]", op, id, err),
});

export function rateLimitByIp(cost = 1) {
  return async function rlMw(req: Request, res: Response, next: NextFunction) {
    const ip =
      req.ip || req.headers["x-forwarded-for"]?.toString() || "unknown";
    const r = await rl.check(`ip:${ip}`, cost);
    if (r.allowed) return next();
    res.setHeader("Retry-After", Math.ceil(r.retryAfterMs / 1000));
    return res
      .status(429)
      .json({ error: "rate_limited", retryAfterMs: r.retryAfterMs });
  };
}
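
Wiring it into an app (assuming a standard Express setup; the module path is illustrative):

import express from "express";
import { rateLimitByIp } from "./rate-limit"; // the middleware module above (illustrative path)

const app = express();

app.use("/api", rateLimitByIp());          // default cost of 1 per request
app.post("/api/search", rateLimitByIp(5), (_req, res) => {
  res.json({ ok: true });                  // heavier endpoint charged 5 tokens per hit
});

app.listen(3000);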

API

You create a limiter with createLimiter(cfg) where cfg.kind selects the mode. The function is overloaded so TypeScript returns the correct API for each mode.

function createLimiter(config: IntervalCfg): IntervalLimiterApi
function createLimiter(config: QuotaCfg): QuotaLimiterApi

Interval configuration

type RlWindow = `${number}ms` | `${number}s` | `${number}m` | `${number}h` | `${number}d`;

interface IntervalCfg {
  kind: "interval";
  interval: RlWindow;   // no store, no prefix—purely in-memory
}

Quota configuration

type Encoding = "json" | "packed";

interface QuotaCfg {
  kind: "quota";
  limit: number;
  window: RlWindow;
  burst?: number;             // capacity clamp, default = limit
  store?: KvStore;            // MemoryKvStore by default; use RedisKvStore for distributed
  keyPrefix?: string;         // default "rl:quota"
  lockTtlMs?: number;         // default 500
  lockRetryDelayMs?: number;  // default 8
  lockRetries?: number;       // default 3
  ttlBufferMs?: number;       // default 5000; TTL = ceil(time_to_full)+buffer, min 1000
  encoding?: Encoding;        // "json" (default) or compact "packed" -> "tokens|updatedAt"
  failOpen?: boolean;         // default true; on store error allow with remaining=0
  onError?: (err: unknown, op: StoreOp, id: string) => void;
}

Interval API

interface IntervalLimiterApi {
  check(id: string): Promise<CheckResult>;
  update(opts: { interval?: RlWindow }): this;
  getLast(id: string): number | undefined;  // ms epoch
  setLast(id: string, ts: number): void;
}
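
getLast and setLast make the interval limiter easy to drive in tests; a sketch (the expected values follow from the once-per-window semantics):

import { createLimiter } from "limico";

const otp = createLimiter({ kind: "interval", interval: "30s" });

// Pretend this user last requested an OTP 29 seconds ago.
otp.setLast("user:42", Date.now() - 29_000);

const r = await otp.check("user:42");
// Expected: r.allowed === false, r.retryAfterMs ≈ 1000

// Loosen the window at runtime; update() returns the limiter for chaining.
otp.update({ interval: "10s" });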

Quota API

interface QuotaLimiterApi {
  check(id: string, cost?: number): Promise<CheckResult>;
  inspect(id: string): Promise<{ tokens: number; updatedAt: number } | null>;
  setRecord(id: string, tokens: number, updatedAt?: number): Promise<void>;
  update(opts: {
    limit?: number;
    window?: RlWindow;
    burst?: number;
    store?: KvStore;
    keyPrefix?: string;
    lockTtlMs?: number;
    lockRetryDelayMs?: number;
    lockRetries?: number;
    ttlBufferMs?: number;
    encoding?: Encoding;
    failOpen?: boolean;
    onError?: (err: unknown, op: StoreOp, id: string) => void;
  }): this;
}
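
inspect and setRecord are useful in tests and for operational tooling; for example, a sketch of pre-seeding a bucket:

import { createLimiter } from "limico";

const rl = createLimiter({ kind: "quota", limit: 10, window: "1m" });

// Seed this key with a single token as of now.
await rl.setRecord("tenant:acme", 1, Date.now());

const first = await rl.check("tenant:acme");   // allowed, spends the seeded token
const second = await rl.check("tenant:acme");  // expected: denied until refill catches up

console.log(await rl.inspect("tenant:acme"));  // { tokens: ~0, updatedAt: <ms epoch> }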

Result type

interface CheckResult {
  allowed: boolean;
  remaining: number;     // whole tokens left for quota; 0|1 for interval
  retryAfterMs: number;  // zero on success
}

Stores

There are three capabilities. At the bottom is a plain KvStore (get/set). If your store also supports tryLock and unlock, it’s a MutexKvStore and the quota limiter will serialize updates across workers. If it supports an atomicConsume that refills, consumes, sets TTL and returns a result in one command, it’s an AtomicBucketStore and you get a single Redis round trip for each check.

export interface KvStore {
  get(key: string): string | null | Promise<string | null>;
  set(key: string, value: string, ttlMs?: number): void | Promise<void>;
}

export interface MutexKvStore extends KvStore {
  tryLock(key: string, ttlMs: number): string | null | Promise<string | null>;
  unlock(key: string, token: string): void | Promise<void>;
}

export interface AtomicBucketStore extends KvStore {
  atomicConsume(
    key: string,
    nowMs: number,
    capacity: number,
    ratePerMs: number,
    cost: number,
    ttlBufferMs: number,
    usePacked: boolean
  ): Promise<{ allowed: boolean; tokens: number; retryAfterMs: number }>;
}

MemoryKvStore implements a simple in-process KV and a basic mutex good enough for a single process. RedisKvStore implements all three, including the atomic Lua path. You import both from the package root.
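
Passing the default store explicitly (assuming MemoryKvStore takes no constructor arguments) is equivalent to omitting store:

import { createLimiter, MemoryKvStore } from "limico";

// Same behavior as leaving `store` unset; shown only to make the default explicit.
const rl = createLimiter({
  kind: "quota",
  limit: 100,
  window: "1m",
  store: new MemoryKvStore(),
});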

Error policy

Quota supports failOpen and an onError hook. When a store operation (get, set, tryLock, unlock, atomic, parse) throws and failOpen is true (the default), the limiter permits the call and reports remaining: 0. Set failOpen: false to fail closed and surface the exception. The hook lets you log or count errors with the operation name and the logical id.
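
If you’d rather fail closed, a sketch (reusing the Redis store setup from above) might look like this:

import { createClient } from "redis";
import { createLimiter, RedisKvStore } from "limico";

const client = createClient();
await (client as any).connect?.();
const store = new RedisKvStore(client);

const strict = createLimiter({
  kind: "quota",
  limit: 50,
  window: "1m",
  store,
  failOpen: false,            // store errors now reject instead of allowing
  onError: (err, op, id) => console.error("[rl]", op, id, err),
});

try {
  const r = await strict.check("ip:1.2.3.4");
  if (!r.allowed) {
    // over quota: respond with 429 / Retry-After as usual
  }
} catch (err) {
  // With failOpen: false, store failures surface here; decide whether to serve or reject.
}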

Math

The bucket uses continuous refill. If your last state was { tokens, updatedAt } and the current time is now, the refill is min(capacity, tokens + (now - updatedAt) * ratePerMs). A check with cost succeeds when refilled >= cost and stores refilled - cost with updatedAt = now. If it fails, retryAfterMs is ceil((cost - refilled) / ratePerMs). Keys get a TTL so Redis can evict quiet buckets. TTL is max(1000, ceil((capacity - currentTokens)/ratePerMs) + ttlBufferMs). The buffer (default 5000ms) avoids churn when buckets oscillate around “full”.
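
To make the arithmetic concrete, here is a small worked example in plain TypeScript (not library code): limit: 60 over "1m" gives ratePerMs = 60 / 60000 = 0.001 tokens per millisecond.

const capacity = 60;              // limit (no burst set)
const ratePerMs = 60 / 60_000;    // 0.001 tokens/ms

// Last state: 2 tokens, updated 10 seconds ago.
const tokens = 2;
const elapsedMs = 10_000;
const refilled = Math.min(capacity, tokens + elapsedMs * ratePerMs);          // 12

const cost = 20;
const allowed = refilled >= cost;                                             // false
const retryAfterMs = allowed ? 0 : Math.ceil((cost - refilled) / ratePerMs);  // 8000

// TTL written alongside: max(1000, ceil((capacity - refilled) / ratePerMs) + 5000) = 53000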

Custom store sketch

If you aren’t on Redis, implement a minimal store. For cross-process correctness without scripting, add the mutex methods; for best performance on Redis, implement the atomic method instead.

import type { KvStore, MutexKvStore, AtomicBucketStore } from "limico";

class MyStore implements MutexKvStore /* or AtomicBucketStore */ {
  async get(k: string) { /* ... */ return null; }
  async set(k: string, v: string, ttl?: number) { /* ... */ }
  async tryLock(k: string, ttlMs: number) { /* ... */ return "token"; }
  async unlock(k: string, token: string) { /* ... */ }
  // For atomic mode:
  // async atomicConsume(...) { /* one-shot refill/consume/write */ return { allowed: true, tokens: 0, retryAfterMs: 0 }; }
}

Updating at runtime

Both limiter types expose update(), which returns this, so calls can be chained. For quota limiters you can tune limits, windows, burst, store, prefixes, and policies without rebuilding the limiter; interval limiters can update their interval:

const rl = createLimiter({ kind: "quota", limit: 100, window: "1m" })
  .update({ burst: 150 })
  .update({ keyPrefix: "rl:prod", ttlBufferMs: 3000, encoding: "packed" });

License

MIT