

nano-limit


Tiny concurrency and rate limiter with priorities, AbortSignal, and zero dependencies.

  • Zero dependencies
  • TypeScript-first with full type inference
  • ESM and CommonJS support

The Problem

You need to limit API calls. You reach for p-limit but:

```js
// p-limit: only concurrency, no rate limiting
// p-limit: can't do "max 60 requests per minute"
// p-limit: no priority queue, no AbortSignal
```

You try bottleneck but:

```js
// bottleneck: complex API with reservoir, highWater, strategy...
// bottleneck: must call disconnect() or memory leaks
// bottleneck: breaks with mock timers in tests
// bottleneck: last major release was 2019
```

The Fix

```js
import { createLimit } from "@npclfg/nano-limit";

// Concurrency + rate limiting in one
const limit = createLimit({
  concurrency: 5,     // max 5 parallel
  rate: 60,           // max 60 per minute
  interval: 60000,
});

await limit(() => openai.chat.completions.create({ model: "gpt-4", messages }));

// Priority queue: important requests go first
await limit(() => criticalOperation(), { priority: 10 });

// Cancel pending operations
await limit(() => fetch(url), { signal: controller.signal });
```

That's it. Concurrency limiting, rate limiting, priorities, and cancellation in one tiny package.
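For intuition, the concurrency half of this can be sketched in a few lines. This is an illustrative sketch, not the package's actual source; `createConcurrencyLimit` is a hypothetical name:

```typescript
// Illustrative sketch of a concurrency-only limiter (not the actual
// @npclfg/nano-limit source). Tasks beyond `max` wait in a FIFO queue
// and start as running tasks finish.
type Task<T> = () => Promise<T>;

function createConcurrencyLimit(max: number) {
  let active = 0;
  const queue: Array<() => void> = [];

  const startNext = () => {
    if (active < max && queue.length > 0) {
      queue.shift()!();
    }
  };

  return <T>(task: Task<T>): Promise<T> =>
    new Promise<T>((resolve, reject) => {
      const run = () => {
        active++;
        // Promise.resolve().then(task) also catches synchronous throws
        Promise.resolve()
          .then(task)
          .then(resolve, reject)
          .finally(() => {
            active--;
            startNext();
          });
      };
      if (active < max) run();
      else queue.push(run);
    });
}
```

Rate limiting adds a second gate on top of this: a task may start only when both a concurrency slot and the rate window allow it.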

Why nano-limit?

| Feature | p-limit | bottleneck | nano-limit |
|---------|---------|------------|------------|
| Dependencies | 2 | 0 | 0 |
| Concurrency limiting | ✅ | ✅ | ✅ |
| Rate limiting | ❌ | ✅ | ✅ |
| Priority queue | ❌ | ✅ | ✅ |
| AbortSignal | ❌ | ❌ | ✅ |
| Per-key limiting | ❌ | Manual | ✅ Built-in |
| Queue size limit | ❌ | ✅ | ✅ |
| onIdle() | ❌ | ✅ | ✅ |
| Memory leak risk | Low | disconnect() required | None |
| ESM + CJS | ESM-only (v4+) | ✅ | ✅ |
| Last updated | Active | 2019 | Active |

Installation

```bash
npm install @npclfg/nano-limit
```

Requirements: Node.js 16+ or modern browsers (ES2020)

Quick Start

```js
import { createLimit } from "@npclfg/nano-limit";

// Concurrency only (like p-limit)
const limit = createLimit({ concurrency: 5 });
await Promise.all(urls.map(url => limit(() => fetch(url))));

// Rate limiting (max 10 per second)
const rateLimited = createLimit({ rate: 10, interval: 1000 });

// Both together
const apiLimit = createLimit({
  concurrency: 5,
  rate: 60,
  interval: 60000,
});
```

API Reference

createLimit(options?): Limiter

Create a concurrency and/or rate limiter.

```js
const limit = createLimit({
  concurrency: 5,    // max concurrent (default: Infinity)
  rate: 100,         // max per interval
  interval: 1000,    // interval in ms (default: 1000)
  maxQueueSize: 1000 // max queued operations (default: Infinity)
});

const result = await limit(() => fetchData());
```

Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| concurrency | number | Infinity | Maximum concurrent operations |
| rate | number | - | Maximum operations per interval |
| interval | number | 1000 | Interval in ms for rate limiting |
| maxQueueSize | number | Infinity | Max queued operations (throws QueueFullError) |
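One plausible reading of the rate/interval pair is a sliding window: an operation may start only if fewer than `rate` operations started in the last `interval` milliseconds. This is an assumption about the semantics, not verified against the package source; `createRateGate` is a hypothetical helper sketched with an injectable clock so its behavior is deterministic:

```typescript
// Hypothetical sliding-window rate check; the package's internal
// scheduling may differ. The clock is injected for determinism.
function createRateGate(rate: number, interval: number, now: () => number) {
  const starts: number[] = []; // timestamps of recent operation starts

  return {
    // Returns true (and records the start) if an operation may begin.
    tryStart(): boolean {
      const t = now();
      // Drop timestamps that have aged out of the window.
      while (starts.length > 0 && t - starts[0] >= interval) starts.shift();
      if (starts.length >= rate) return false;
      starts.push(t);
      return true;
    },
  };
}
```

A real limiter would queue the operation and retry when the oldest timestamp expires, rather than returning false.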

Limiter Properties

| Property | Type | Description |
|----------|------|-------------|
| activeCount | number | Currently running operations |
| pendingCount | number | Operations waiting in queue |
| clearQueue(reject?) | function | Clear pending operations |
| onIdle() | Promise<void> | Resolves when queue is empty and all done |

Operation Options

```js
await limit(() => fn(), {
  priority: 10,              // higher = sooner (default: 0)
  signal: controller.signal  // AbortSignal for cancellation
});
```

createKeyedLimit(options?): KeyedLimiter

Per-key rate limiting for multi-tenant systems.

```js
const userLimit = createKeyedLimit({
  concurrency: 2,
  rate: 10,
  interval: 1000
});

// Each user gets their own limit
await userLimit("user-123", () => fetchUserData("123"));
await userLimit("user-456", () => fetchUserData("456"));

// Manage keys
userLimit.get("user-123");     // Get limiter for key
userLimit.delete("user-123");  // Remove key
userLimit.clear();             // Remove all keys
userLimit.size;                // Number of active keys
```

limited(fn, options?): WrappedFunction

Create a pre-configured limited function.

```ts
const fetchWithLimit = limited(
  (url: string) => fetch(url),
  { concurrency: 5, rate: 10, interval: 1000 }
);

await fetchWithLimit("/api/data");
```

Error Types

```js
import { AbortError, QueueClearedError, QueueFullError } from "@npclfg/nano-limit";

try {
  await limit(() => fn(), { signal });
} catch (error) {
  if (error instanceof AbortError) {
    // Operation was aborted
  }
  if (error instanceof QueueClearedError) {
    // clearQueue() was called
  }
  if (error instanceof QueueFullError) {
    // maxQueueSize exceeded
  }
}
```

Patterns & Recipes

OpenAI/Anthropic Rate Limiting

```js
const aiLimit = createLimit({
  concurrency: 5,      // parallel requests
  rate: 60,            // 60 RPM
  interval: 60000,
});

const response = await aiLimit(() =>
  openai.chat.completions.create({
    model: "gpt-4",
    messages,
  })
);
```

Priority Queue

```js
const limit = createLimit({ concurrency: 1 });

// High priority runs first
limit(() => lowPriorityTask());
limit(() => criticalTask(), { priority: 10 });  // Runs first
```
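Under the hood, "higher priority runs first" just means the queue hands out the waiting task with the largest priority, breaking ties in insertion order. A sketch of that selection (illustrative only; `Queued` and `dequeueNext` are hypothetical names, not the package's actual queue):

```typescript
// Stable highest-priority-first dequeue: pick the entry with the
// largest priority; on ties, the lowest insertion sequence wins (FIFO).
interface Queued {
  priority: number;
  seq: number; // insertion order, used to keep equal priorities FIFO
  run: () => void;
}

function dequeueNext(queue: Queued[]): Queued | undefined {
  if (queue.length === 0) return undefined;
  let best = 0;
  for (let i = 1; i < queue.length; i++) {
    const q = queue[i];
    const b = queue[best];
    if (q.priority > b.priority || (q.priority === b.priority && q.seq < b.seq)) {
      best = i;
    }
  }
  return queue.splice(best, 1)[0];
}
```

A linear scan is fine for modest queues; a binary heap would do the same job in O(log n) per dequeue.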

Graceful Shutdown

```js
const limit = createLimit({ concurrency: 10 });

// On shutdown
process.on("SIGTERM", async () => {
  limit.clearQueue();     // Cancel pending
  await limit.onIdle();   // Wait for active to finish
  process.exit(0);
});
```

With Timeout

```js
const controller = new AbortController();
setTimeout(() => controller.abort(), 5000);

try {
  await limit(() => slowOperation(), { signal: controller.signal });
} catch (error) {
  if (error instanceof AbortError) {
    console.log("Timed out");
  }
}
```

Backpressure / Queue Overflow

```js
const limit = createLimit({
  concurrency: 5,
  maxQueueSize: 100,  // Reject if queue grows too large
});

try {
  await limit(() => processRequest());
} catch (error) {
  if (error instanceof QueueFullError) {
    // Return 503 Service Unavailable
  }
}
```

Multi-Tenant API

```js
const userLimit = createKeyedLimit({
  concurrency: 2,
  rate: 100,
  interval: 60000,
});

app.use(async (req, res, next) => {
  try {
    await userLimit(req.user.id, () => next());
  } catch (error) {
    if (error instanceof QueueFullError) {
      res.status(429).send("Too Many Requests");
    }
  }
});
```

TypeScript Usage

Full type inference is supported:

```ts
import { createLimit } from "@npclfg/nano-limit";

interface ApiResponse {
  data: string[];
}

const limit = createLimit({ concurrency: 5 });

// Return type is inferred as Promise<ApiResponse>
const result = await limit(async (): Promise<ApiResponse> => {
  const res = await fetch("/api");
  return res.json();
});
```

License

MIT