
id-promise

v0.3.0

A cluster friendly, identity based, Promise resolver

The goal of this module is to hold on to the very first promise created for a specific task, and resolve it once for all concurrent promises that ask for that very same task while it is still pending.

In a nutshell

In the following example, after the first samePromise() call, every other samePromise() call simply waits until the first one resolves. For the 300 ms dictated in this example by setTimeout, no extra timer is set and ++samePromiseCalls won't be incremented more than once.

import idPromise from 'id-promise';
// const idPromise = require('id-promise');

let samePromiseCalls = 0;
const samePromise = () => idPromise(
  'some-unique-id:samePromise',
  (resolve, reject) => {
    setTimeout(resolve, 300, ++samePromiseCalls);
  }
);

// ask for the same task as many times as you want:
// the executor runs once, so all four calls log 1
samePromise().then(console.log);
samePromise().then(console.log);
samePromise().then(console.log);
samePromise().then(console.log);

Cluster Friendly

If the callback is executed within a forked worker, the same id is put on hold for all workers that might ask for the same operation in the meantime.

This is especially useful when a single fork needs to perform a potentially very expensive operation, either DB or file system related, but that operation shouldn't be performed more than once, as both the DB and the file system are shared across all workers.

import idPromise from 'id-promise';
// const idPromise = require('id-promise');

const optimizePath = path => idPromise(
  // ⚠ use strong identifiers, not just path!
  `my-project:optimizePath:${path}`,
  (resolve, reject) => {
    performSomethingVeryExpensive(path)
      .then(resolve, reject);
  }
);

// invoke it as many times as you need
optimizePath(__filename).then(console.log);
optimizePath(__filename).then(console.log);
optimizePath(__filename).then(console.log);

How does it work?

In the master process, each unique id is stored once and removed from the Map based cache once resolved or rejected. Each call with the same unique id returns the very same promise that is in charge of resolving or rejecting.

In workers, each unique id passes through the master process to find out whether other workers already asked for it, or whether it should be executed as a task. The task is then executed within a single worker and, once resolved, the result is propagated to every other worker that asked for the same task in the meantime.
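As an illustration only, here is a minimal sketch of the single-process bookkeeping described above (the cross-worker messaging is omitted, and this is not the module's actual source):

// minimal sketch, for illustration only: the first caller for an id
// creates and caches the promise, later callers receive the cached one,
// and the entry is dropped once the promise settles
const cache = new Map();

const idPromiseSketch = (id, executor) => {
  if (!cache.has(id)) {
    const promise = new Promise(executor);
    // remove the id on either resolution or rejection
    promise.then(() => cache.delete(id), () => cache.delete(id));
    cache.set(id, promise);
  }
  return cache.get(id);
};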

Caveats

This module has been created to solve a very specific use case, and it's important to understand where it easily fails.

There are 3 kinds of caveats to consider with this module:

  • name clashes, where weak unique identifiers will easily cause trouble. Use your project/module namespace as prefix, plus the functionality, plus any other static information that, summed to the previous details, makes the operation really unique (i.e. a fully resolved file path); see the sketch after this list
  • serialization, meaning you cannot resolve values that cannot be serialized and passed around workers; stick to JSON compatible values only
  • different parameters, so that if a promise is cached but the next call internally refers to different values, the result might be unexpected
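For the first caveat, a hypothetical helper along these lines (the my-project prefix and the strongId name are made up for illustration) shows what a strong identifier could look like:

import { resolve } from 'path';

// hypothetical: namespace + functionality + fully resolved path makes
// the id very unlikely to clash with any other module's ids
const strongId = file => `my-project:optimizePath:${resolve(file)}`;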

While the first caveat is quite easy to understand, the last one is subtle:

import fs from 'fs';
import idPromise from 'id-promise';
// const fs = require('fs');
// const idPromise = require('id-promise');

const writeOnce = (where, what) => idPromise(
  `my-project:writeOnce:${where}`,
  (resolve, reject) => {
    fs.writeFile(where, what, err => {
      if (err) reject(err);
      else resolve(what);
    });
  }
);

// concurrent writes
writeOnce('/tmp/test.txt', 'a').then(console.log);
writeOnce('/tmp/test.txt', 'b').then(console.log);
writeOnce('/tmp/test.txt', 'c').then(console.log);

The concurrent writeOnce(where, what) calls above use the same id with different values to write. Depending on how fast the writing operation is, the outcome might be unpredictable, but in the worst case scenario, where the operation is very expensive, all 3 calls will resolve with the string "a".

The rule of thumb here is first come, first served, so this module might not be the solution specifically for writing files.

Use cases

  • expensive operations that don't need to be performed frequently, such as recursive asynchronous folder crawling or scanning; see the sketch after this list
  • expensive file operations such as compression, archive zipping, archive extraction, and so on, where the source path is unique and the operation would always grant the same outcome
  • any expensive operation that always accepts a unique entry point and should always grant the same outcome
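As an illustration of the first use case, a hypothetical recursive folder scan might be wrapped like this (crawl and the my-project prefix are made up for this sketch):

import idPromise from 'id-promise';
import { readdir } from 'fs/promises';
import { join, resolve } from 'path';

// hypothetical: recursively collect all file paths under a directory;
// concurrent calls for the same resolved root share a single crawl, and
// the resolved value (an array of strings) is JSON compatible
const crawl = root => idPromise(
  `my-project:crawl:${resolve(root)}`,
  (res, rej) => {
    const walk = async dir => {
      const entries = await readdir(dir, { withFileTypes: true });
      const nested = await Promise.all(entries.map(entry => {
        const full = join(dir, entry.name);
        return entry.isDirectory() ? walk(full) : full;
      }));
      return nested.flat();
    };
    walk(root).then(res, rej);
  }
);

// concurrent scans of the same folder resolve with the same list
crawl('.').then(files => console.log(files.length));
crawl('.').then(files => console.log(files.length));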