
clusterwork

v0.1.7


node-clusterwork


Process arbitrary incoming data on multiple worker processes. This is a helper module that sets up Node's cluster feature for tasks other than network servers.

Why?

The built-in cluster module automatically spreads incoming requests among workers when the main use case is a Node.js server. But there are other scenarios where no server is involved, and work from an arbitrary source should instead be distributed among workers for parallel processing on all CPU cores. This module makes it easy to set up a worker cluster for such scenarios.

Installation

npm install clusterwork --save

Quick Start

const clusterwork = require('clusterwork');

// Runs in the master process; dispatch() sends data to a worker.
function producer(dispatch) {
    setInterval(() => {
        const data = Math.random();
        console.info(`master process ${process.pid} sending ${data}`);
        dispatch(data);
    }, 1000);
}

// Runs in a worker process for each dispatched payload.
function consumer(data) {
    console.info(`worker process ${process.pid} got ${data}`);
}

clusterwork.init(producer, consumer);

When the above script runs, multiple worker processes (one per CPU core) are spawned automatically alongside the main (master) process.

The master process runs the producer function, which can call dispatch() to send data to the worker processes.

The consumer function is invoked in a worker process to handle/process the dispatched data. Workers are selected in a cyclic fashion to evenly distribute processing load.
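The cyclic selection described above is round-robin indexing. A minimal standalone sketch of the idea (not clusterwork's actual code):

```javascript
// Round-robin selection: each call returns the next worker in order,
// wrapping back to the first -- a sketch of the cyclic dispatch above.
function makeRoundRobin(workers) {
  let i = 0;
  return function next() {
    const chosen = workers[i];
    i = (i + 1) % workers.length; // wrap around after the last worker
    return chosen;
  };
}

// Hypothetical usage with placeholder worker names:
const next = makeRoundRobin(['w1', 'w2', 'w3']);
console.log([next(), next(), next(), next()].join(' '));
// w1 w2 w3 w1
```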

The above example yields:

master process 14530 sending 0.5805814887272316
worker process 14541 got 0.5805814887272316
master process 14530 sending 0.3958888451685434
worker process 14542 got 0.3958888451685434
master process 14530 sending 0.1878265613269785
worker process 14547 got 0.1878265613269785
master process 14530 sending 0.2966814774610549
worker process 14536 got 0.2966814774610549

Usage

The main script should call the init() function:

clusterwork.init(producer, consumer, respawn);

The parameters are explained as follows:

  1. producer - Function called once in the master process; it is expected to set up data production. It receives a dispatch function that can be used to send data objects to workers.

  2. consumer - Function called in a worker process to handle each dispatched data object.

  3. respawn - Optional boolean flag; when true, a new worker is automatically spawned whenever an existing worker terminates.

Notes

  1. If all worker processes have terminated, the master process throws an error on the next attempt to dispatch data.

Licence

The source code is published under the MIT licence.