
@nxtedition/scheduler

v4.1.8

A high-performance, priority-based task scheduler for Node.js with support for concurrency limiting, byte-rate throttling, and multi-worker coordination via SharedArrayBuffer.

Install

npm install @nxtedition/scheduler

Usage

Basic

import { Scheduler } from '@nxtedition/scheduler'

const scheduler = new Scheduler({ concurrency: 4 })

const result = await scheduler.run(async () => {
  const res = await fetch('https://example.com')
  return res.json()
})

Priority

Seven priority levels are available:

| Name    | Value |
| ------- | ----- |
| lowest  | -3    |
| lower   | -2    |
| low     | -1    |
| normal  | 0     |
| high    | 1     |
| higher  | 2     |
| highest | 3     |

// Using string priority
await scheduler.run(() => importantWork(), 'high')

// Using static constants
await scheduler.run(() => backgroundWork(), Scheduler.LOW)

Per-Priority Concurrency

Concurrency can be configured per priority to reserve capacity for higher-priority work. A task at priority p is admitted on the fast path only when total running tasks are below that priority's cap; otherwise the task queues. The overall max is always a hard ceiling.

// At most 100 concurrent tasks total. Background work (low/lowest) is held
// to a small fraction so interactive (normal/high) traffic isn't queued behind
// a flood of batch jobs.
const scheduler = new Scheduler({
  concurrency: { max: 100, low: 20, lowest: 5 },
})

Unspecified priorities inherit the cap from the priority just below them (caps propagate downward, not upward). Priorities above the topmost explicit cap fall back to max, and an unspecified lowest seeds from the bottommost explicit cap. The example above resolves to lowest=lower=5, low=20, normal..highest=100. Each per-priority value is itself clamped to max.
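The resolution rule above can be sketched as a small standalone function (an illustrative reimplementation, not the library's code; `resolveCaps` is a hypothetical name):

```javascript
// Resolve per-priority caps per the inheritance rule described above:
// unspecified priorities inherit from the priority just below, priorities
// above the topmost explicit cap fall back to max, and every cap is
// clamped to max.
const ORDER = ['lowest', 'lower', 'low', 'normal', 'high', 'higher', 'highest']

function resolveCaps(opts) {
  const max = opts.max ?? Infinity
  const explicit = ORDER.filter((name) => opts[name] !== undefined)
  const topmost = explicit.length ? ORDER.indexOf(explicit[explicit.length - 1]) : -1
  const caps = {}
  // Seed: an unspecified `lowest` takes the bottommost explicit cap (or max).
  let current = explicit.length ? Math.min(opts[explicit[0]], max) : max
  for (let i = 0; i < ORDER.length; i++) {
    const name = ORDER[i]
    if (opts[name] !== undefined) current = Math.min(opts[name], max)
    else if (topmost >= 0 && i > topmost) current = max // above topmost explicit
    caps[name] = current
  }
  return caps
}
```

With `{ max: 100, low: 20, lowest: 5 }` this yields lowest=lower=5, low=20, normal..highest=100, matching the worked example above.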

Starvation prevention

Per-priority caps act as backpressure: when running tasks exceed a priority's cap, new tasks at that priority queue. If higher-priority tasks keep arriving, the capped queue could be starved indefinitely. To prevent that, the dispatch loop's fairness lottery (already used to give lower priorities a turn under uniform concurrency) bypasses per-priority caps when it picks a queued priority — so a fully-capped queue still drains slowly. The overall max is still respected.
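The lottery idea can be illustrated with a toy queue picker (illustration only; the library's actual dispatch loop is more involved):

```javascript
// Pick the next queue to drain. Normally the highest non-empty priority wins,
// but with probability `lotteryChance` a random lower non-empty queue is
// chosen instead (its per-priority cap bypassed), so capped queues still
// drain slowly instead of starving.
function pickQueue(queues, lotteryChance = 0.1, random = Math.random) {
  const nonEmpty = []
  for (let p = queues.length - 1; p >= 0; p--) {
    if (queues[p].length > 0) nonEmpty.push(p) // highest priority first
  }
  if (nonEmpty.length === 0) return -1
  if (nonEmpty.length > 1 && random() < lotteryChance) {
    // Fairness lottery: pick among the remaining non-empty queues.
    return nonEmpty[1 + Math.floor(random() * (nonEmpty.length - 1))]
  }
  return nonEmpty[0] // fast path: highest non-empty priority
}
```

The injectable `random` parameter here is just to make the sketch deterministic to test.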

In SharedArrayBuffer mode, the second constructor argument carries per-priority limits (the buffer carries max across workers):

const sharedState = Scheduler.makeSharedState(100) // global max=100
const scheduler = new Scheduler(sharedState, {
  concurrency: { low: 20, lowest: 5 }, // per-worker caps
})

Low-Level API

For more control, use acquire / release directly:

scheduler.acquire(
  (opaque) => {
    try {
      doWork(opaque)
    } finally {
      scheduler.release()
    }
  },
  Scheduler.NORMAL,
  opaqueData,
)

Multi-Worker Coordination

Share a concurrency limit across worker threads using SharedArrayBuffer:

// Main thread
import { Scheduler } from '@nxtedition/scheduler'
import { Worker } from 'node:worker_threads'

const sharedState = Scheduler.makeSharedState(8)
const worker = new Worker('./worker.js', { workerData: sharedState })

// Worker thread
import { Scheduler } from '@nxtedition/scheduler'
import { workerData } from 'node:worker_threads'

const scheduler = new Scheduler(workerData)
await scheduler.run(() => work())

The global max is a hard cap across all workers — a worker that finds the global counter at the limit queues its task locally and waits (via Atomics.waitAsync on the shared counter). When another worker releases a slot, Atomics.notify wakes the waiter, which retries dispatch.
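The wait/notify handshake can be seen in a single-thread toy (real usage spans worker threads; this only demonstrates the primitives involved):

```javascript
// Single-thread demo of Atomics.waitAsync + Atomics.notify (Node >= 16).
const i32 = new Int32Array(new SharedArrayBuffer(4))

// Wait asynchronously while i32[0] is still 0 ("no slot released yet").
const { value } = Atomics.waitAsync(i32, 0, 0)

setTimeout(() => {
  Atomics.add(i32, 0, 1) // another worker releases a slot...
  Atomics.notify(i32, 0) // ...and wakes the waiter, which retries dispatch
}, 10)

value.then((result) => console.log(result)) // resolves with 'ok'
```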

Atomics.waitAsync does not keep the Node event loop alive. If a worker is fully idle (running === 0) with tasks queued waiting for capacity, you need something else holding the loop open — usually trivial in real applications (HTTP servers, intervals, etc.), but standalone scripts may need a setInterval(() => {}, …) until their work is done.
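One simple pattern for standalone scripts, sketched under the assumption of the stats shape shown in the Monitoring section (`keepLoopAliveUntilIdle` is a hypothetical helper, not part of the package):

```javascript
// Hold the event loop open with a timer until the scheduler reports idle.
// `getStats` stands in for `() => scheduler.stats`.
function keepLoopAliveUntilIdle(getStats, intervalMs = 50) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      const { running, pending } = getStats()
      if (running === 0 && pending === 0) {
        clearInterval(timer) // nothing left; let the process exit naturally
        resolve()
      }
    }, intervalMs)
  })
}
```

A script would then `await keepLoopAliveUntilIdle(() => scheduler.stats)` after submitting its work.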

UV Thread Pool Scheduling

A practical use case is coordinating access to the libuv thread pool (UV_THREADPOOL_SIZE) across multiple worker threads. For example, several HTTP file-serving workers can share a scheduler so that file-system operations (which consume UV thread pool slots) are prioritized and throttled globally:

// main.js
import { Scheduler } from '@nxtedition/scheduler'
import { Worker } from 'node:worker_threads'

const UV_THREADPOOL_SIZE = parseInt(process.env.UV_THREADPOOL_SIZE || '4', 10)
const sharedState = Scheduler.makeSharedState(UV_THREADPOOL_SIZE)

for (let i = 0; i < 4; i++) {
  new Worker('./http-worker.js', { workerData: { sharedState } })
}

// http-worker.js
import { Scheduler, parsePriority } from '@nxtedition/scheduler'
import { workerData } from 'node:worker_threads'
import fs from 'node:fs/promises'

const scheduler = new Scheduler(workerData.sharedState)

async function handleRequest(req, res) {
  // Derive priority from a request header, e.g. "X-Priority: high"
  const priority = parsePriority(req.headers['x-priority'] || 'normal')

  // Resolve the requested file path (illustrative; sanitize in real code)
  const filePath = '.' + new URL(req.url, 'http://localhost').pathname

  const data = await scheduler.run(() => fs.readFile(filePath), priority)
  res.end(data)
}

This ensures that high-priority requests get file-system access first, while low-priority background work (thumbnails, transcoding, etc.) yields thread pool capacity without starving entirely — thanks to the built-in starvation prevention.

Monitoring

const { running, pending, queues } = scheduler.stats

  • running — currently executing tasks
  • pending — tasks waiting in queues
  • queues — per-priority queue counts

API

new Scheduler(opts, options?)

  • opts.concurrency — concurrency configuration (default: Infinity). Either a number (the overall max) or an object with max and per-priority caps:

    type ConcurrencyOptions =
      | number
      | {
          max?: number
          highest?: number
          higher?: number
          high?: number
          normal?: number
          low?: number
          lower?: number
          lowest?: number
        }
  • opts may also be a SharedArrayBuffer created by Scheduler.makeSharedState(). In that case, an optional second argument { concurrency } may carry per-priority caps (the buffer's max is always the global ceiling).

scheduler.run(fn, priority?, opaque?): Promise<T>

Execute fn within the scheduler. Returns a promise that resolves with the return value of fn.

scheduler.acquire(fn, priority?, opaque?): void

Low-level task acquisition. You must call scheduler.release() when done.

scheduler.release(): void

Signal task completion. Dequeues the next pending task if concurrency allows.

Scheduler.makeSharedState(concurrency): SharedArrayBuffer

Create shared state for cross-worker scheduling.

scheduler[Symbol.dispose]()

Drops the reference to any pending shared-mode wait. Use when abandoning a Scheduler with tasks still queued so the orphan promise can be GC'd.

parsePriority(value): number

Parse a string or number into a normalized priority value.
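Based on the priority table above, such a helper might behave roughly as follows (a sketch only; the real parsePriority may differ, and the clamping and case handling here are assumptions):

```javascript
// Sketch of string/number priority normalization (hypothetical `toPriority`;
// not the library's contract).
const PRIORITY_NAMES = { lowest: -3, lower: -2, low: -1, normal: 0, high: 1, higher: 2, highest: 3 }

function toPriority(value) {
  if (typeof value === 'number') {
    return Math.max(-3, Math.min(3, Math.round(value))) // clamp into [-3, 3]
  }
  const p = PRIORITY_NAMES[String(value).toLowerCase()]
  if (p === undefined) throw new TypeError(`unknown priority: ${value}`)
  return p
}
```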


Throttle

Throttle is a token-bucket rate limiter that controls how many bytes per second are processed. It shares the same priority system as Scheduler.

Basic

import { Throttle } from '@nxtedition/scheduler'

// Allow 1 MB/s
const throttle = new Throttle({ bytesPerSecond: 1_000_000 })

// Refill tokens every 10ms
setInterval(() => throttle.refill(), 10)
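The underlying token-bucket mechanics look roughly like this (an illustrative standalone sketch, not the package's implementation):

```javascript
// Minimal token bucket: refill() adds tokens proportional to elapsed time,
// capped at one second's worth; tryConsume() spends them or reports "queue me".
class TokenBucket {
  constructor(bytesPerSecond) {
    this.rate = bytesPerSecond
    this.tokens = bytesPerSecond // start with a full one-second budget
    this.last = Date.now()
  }
  refill(now = Date.now()) {
    const elapsedSeconds = (now - this.last) / 1000
    this.last = now
    this.tokens = Math.min(this.rate, this.tokens + elapsedSeconds * this.rate)
  }
  tryConsume(bytes) {
    if (this.tokens < bytes) return false // not enough budget: caller queues
    this.tokens -= bytes
    return true
  }
}
```

Capping the balance at one second of budget bounds the burst a long-idle consumer can release all at once.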

Multi-Worker Coordination

Like Scheduler, a Throttle can share its token budget across worker threads by passing a SharedArrayBuffer created with Throttle.makeSharedState() to the constructor (see the API section below).

Streaming

throttle.stream() returns a Node.js Transform stream that enforces backpressure, holding each chunk until enough tokens are available:

import { createReadStream, createWriteStream } from 'node:fs'
import { pipeline } from 'node:stream/promises'

const throttle = new Throttle({ bytesPerSecond: 1_000_000 }) // 1 MB/s
setInterval(() => throttle.refill(), 10)

await pipeline(createReadStream('input.mp4'), throttle.stream(), createWriteStream('output.mp4'))
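The backpressure mechanism can be sketched with a plain Transform whose callback is deferred until a bucket grants tokens (illustration only; `bucket` here is any stand-in object with a tryConsume(bytes) method, not the library's Throttle):

```javascript
import { Transform } from 'node:stream'

// Throttling via backpressure: the transform callback is deferred until the
// bucket grants tokens, so upstream writes pause automatically.
function makeThrottleStream(bucket, retryMs = 10) {
  return new Transform({
    transform(chunk, _encoding, callback) {
      const tryPass = () => {
        if (bucket.tryConsume(chunk.length)) callback(null, chunk)
        else setTimeout(tryPass, retryMs) // wait for a refill, then retry
      }
      tryPass()
    },
  })
}
```

Because the callback is withheld, pipeline() upstream stops reading once the stream's internal buffer fills, without any explicit pause/resume logic.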

Priority

Both run() and stream() accept a priority. Higher-priority work drains first when tokens are available:

// Low-priority stream (yields to higher-priority work)
const stream = throttle.stream('low')

Low-Level API

throttle.acquire(
  () => {
    sendPacket(data)
  },
  data.byteLength,
  'normal',
)

acquire returns true if the callback ran immediately (tokens were available), or false if it was queued.

API

new Throttle(opts)

  • opts.bytesPerSecond — bytes per second (default: Infinity)
  • opts may also be a SharedArrayBuffer created by Throttle.makeSharedState()

throttle.acquire(fn, bytes, priority?): boolean

Low-level acquisition. Returns true if fn ran immediately, false if queued.

throttle.stream(priority?): Transform

Returns a Transform stream that rate-limits data passing through it. Each chunk consumes chunk.length tokens.


License

MIT