
@dmop/puru

v0.1.15


puru (プール) — A thread pool with Go-style concurrency primitives for JavaScript


puru (プール)


Go-style concurrency for JavaScript. Run CPU-heavy or I/O-heavy work off the main thread with channels, WaitGroup, ErrGroup, select, and context — zero dependencies, no worker files, no boilerplate.

import { spawn } from '@dmop/puru'

const { result } = spawn(() => {
  let sum = 0
  for (let i = 0; i < 100_000_000; i++) sum += i
  return sum
})

console.log(await result) // runs off the main thread

Before / After

// Before: raw worker_threads — two files and manual message plumbing
const { Worker } = require('worker_threads')
const worker = new Worker('./worker.js')
worker.postMessage({ n: 40 })
worker.on('message', (result) => {
  console.log(result)
  worker.terminate()
})
worker.on('error', (err) => console.error(err))

// worker.js (separate file)
const { parentPort } = require('worker_threads')
parentPort.on('message', ({ n }) => {
  parentPort.postMessage(fibonacci(n))
})

// After: with puru (single file)
import { spawn } from '@dmop/puru'

const { result } = spawn(() => {
  function fibonacci(n: number): number {
    if (n <= 1) return n
    return fibonacci(n - 1) + fibonacci(n - 2)
  }
  return fibonacci(40)
})

try {
  console.log(await result)
} catch (err) {
  console.error(err)
}

One file. No message plumbing. Automatic pooling.

Install

Zero runtime dependencies — just the library itself.

npm install @dmop/puru
# or
bun add @dmop/puru

Quick Start

import { spawn, WaitGroup, chan, task } from '@dmop/puru'

// CPU work on a dedicated worker
const { result } = spawn(() => {
  function fibonacci(n: number): number {
    if (n <= 1) return n
    return fibonacci(n - 1) + fibonacci(n - 2)
  }
  return fibonacci(40)
})

// Reusable worker logic with explicit arguments
const crunch = task((n: number) => {
  let sum = 0
  for (let i = 0; i < n; i++) sum += i
  return sum
})

// Parallel batch — wait for all
const wg = new WaitGroup()
wg.spawn(() => 21 * 2)
wg.spawn(() => 6 * 7)
const [a, b] = await wg.wait()

const bigNumber = await result
const heavySum = await crunch(1_000_000)
console.log({ a, b, bigNumber, heavySum })

// Cross-thread channels
const ch = chan<number>(10)
spawn(async ({ ch }) => {
  for (let i = 0; i < 10; i++) await ch.send(i)
  ch.close()
}, { channels: { ch } })

for await (const item of ch) console.log(item)

Performance

Measured on Apple M1 Pro (8 cores). Full results in BENCHMARKS.md.

| Benchmark | Single-threaded | puru | Speedup |
| --- | --: | --: | --: |
| Fibonacci (fib(38) x8) | 4,345 ms | 2,131 ms | 2.0x |
| Prime counting (2M range) | 335 ms | 77 ms | 4.4x |
| 100 concurrent async tasks | 1,140 ms | 16 ms | 73x |
| Fan-out pipeline (4 workers) | 176 ms | 51 ms | 3.4x |

Spawn overhead is roughly 0.1–0.5 ms per call, so reserve it for tasks that take longer than about 5 ms.

Two Modes

| Mode | Use it for | What happens |
| --- | --- | --- |
| spawn(fn) | CPU-bound work | Dedicated worker thread |
| spawn(fn, { concurrent: true }) | Async / I/O work | Shares a worker's event loop |

When To Use What

| Situation | Tool |
| --- | --- |
| One heavy CPU task | spawn(fn) |
| Same logic, many inputs | task(fn) |
| Wait for all tasks | WaitGroup |
| Fail-fast, cancel the rest | ErrGroup (with setLimit() for throttling) |
| Timeouts and cancellation | context + spawn(fn, { ctx }) |
| Producer/consumer pipelines | chan() + select() |

The Big Rule

Functions passed to spawn() cannot capture outer variables. They are serialized as text and sent to a worker — closures don't survive.

const x = 42
spawn(() => x + 1) // ReferenceError at runtime: x is not sent along

spawn(() => {
  const x = 42     // define inside instead
  return x + 1
})                 // works

Use task(fn) to pass arguments to reusable worker functions.

What's Included

Coordination: chan() · WaitGroup · ErrGroup · select() · context

Synchronization: Mutex · RWMutex · Once · Cond

Timing: after() · ticker() · Timer

Ergonomics: task() · configure() · stats() · directional channels · channel len/cap

All modeled after Go's concurrency primitives. Full API in docs/API.md.

Why Not Just Use...

Promise.all() — Great for cheap async work. Use puru when work is CPU-heavy or you need the main thread to stay responsive.

worker_threads — Powerful but low-level: separate files, manual messaging, manual pooling, no channels/WaitGroup/select. puru keeps the power, removes the ceremony.

cluster — Adds processes for request throughput, while puru offloads heavy work inside each process. The two compose well together.

Runtimes

| Runtime | Status |
| --- | --- |
| Node.js >= 20 | Full support |
| Bun | Full support |
| Deno | Planned |

Testing

import { configure } from '@dmop/puru'
configure({ adapter: 'inline' }) // runs on main thread, no real workers


Limitations

  • spawn() functions cannot capture outer variables (see The Big Rule)
  • Channel values must be structured-cloneable (no functions, symbols, WeakRefs)
  • null is reserved as the channel-closed sentinel
  • task() arguments must be JSON-serializable

License

MIT