
@ahwayakchih/pete

v1.0.0


A simple Performance Tester (benchmark runner) for Node.js and Bun.sh.


PeTe

PeTe is a performance tester, a benchmark runner for JavaScript functions (or whole scripts). Compatible with both Node.js and Bun.

Its primary goal is to make benchmarking as easy as possible. Instead of introducing its own specialized (and overly complicated) API, it offers three simple functions that should feel familiar from simpler test modules (like tape or Node's built-in test runner):

  • test: runs a function in two "phases" ("warmup" and "regular"),
  • skip: does not run the function at all, it just marks the result as skipped,
  • todo: runs the function without the "warmup" phase, marks the result as "todo" and does not stop tests if it errors out.

Each of them accepts the same number and types of parameters. They differ only in how the results are interpreted.

  • name of a test is optional; if not provided, PeTe uses the benchmarked function's "id" (filename and line number),
  • fn is the function to be benchmarked (required),
  • options is an optional object that may contain any of the supported options,
  • cb is an optional function called after benchmarking of fn ends.

Installation

To install PeTe, as with most other Node.js modules, use the following command:

npm install @ahwayakchih/pete

or install it globally:

npm install -g @ahwayakchih/pete

Usage (API)

The simplest possible way to run a test is:

const test = require('pete/test');

test(myCustomFunction);

/**
 * This is the function that's being tested above.
 */
function myCustomFunction () {
  return "done";
}

Of course, you can pass any type of function to it. PeTe tries to automatically recognize whether the function is synchronous (as in the example above), "callback"-style (the last of its parameters is named "cb", "callback", "done" or "next"), async, or returns a Promise. If a string is passed in place of a function, PeTe will try to run a forked process, assuming the string is a path to a script file.
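The idea behind that auto-detection can be sketched roughly as follows. This is a hypothetical illustration only, not PeTe's actual implementation (see lib/runs/auto.js for the real list of supported run types):

```javascript
// Hypothetical sketch of run-type detection, based on the rules described
// above. Names ('fork', 'async', etc.) are illustrative, not PeTe's real code.
function guessRunType (fn) {
  if (typeof fn === 'string') {
    return 'fork'; // assume the string is a path to a script file
  }
  if (fn.constructor.name === 'AsyncFunction') {
    return 'async';
  }
  // Extract declared parameter names from the function's source text.
  const src = fn.toString();
  const params = src.slice(src.indexOf('(') + 1, src.indexOf(')'))
    .split(',')
    .map(p => p.trim())
    .filter(Boolean);
  const last = params[params.length - 1];
  if (['cb', 'callback', 'done', 'next'].includes(last)) {
    return 'callback';
  }
  return 'sync'; // a "sync" function may still return a Promise at runtime
}
```

Note that a Promise-returning function cannot be recognized statically, only once it runs, which is one reason the runType override below exists.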

If that fails for some reason, you can always enforce your choice by passing an additional runType option:

const test = require('pete/test');

test(myCustomFunction, {
  // @see: lib/runs/auto.js for current list of supported/"built-in" `runTypes`
  runType: 'sync'
});

function myCustomFunction () {
  return "done";
}

You can even create your own "runner" module. PeTe will first try its own runners and, if none matches the specified name, it will try to require a module with that name (which means that module should export a single function). Check how pete/lib/runs/skip and the other runners are implemented.
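A custom runner, then, is just a module that exports one function. The exact arguments PeTe passes to a runner are not spelled out here, so the following is only a sketch under the assumption that a runner receives the tested function and a callback for reporting a result; consult pete/lib/runs/*.js for the real contract:

```javascript
// Hypothetical custom runner sketch. The assumed signature (fn, done) is an
// illustration only; check pete/lib/runs/*.js for the actual contract.
function timeOnce (fn, done) {
  const start = process.hrtime.bigint();
  fn();
  const elapsed = process.hrtime.bigint() - start; // nanoseconds, as a BigInt
  done(null, elapsed);
}

// In a real runner module, this function would be the single export:
// module.exports = timeOnce;
```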

Usage (CLI)

If you installed PeTe globally, or you are using it from your project's package.json script(s), you should be able to run it like this:

pete test-file.js

It supports the following options:

  • -c, --maxCpuTime : limit the total time (in milliseconds) that a target test can take when run multiple times
  • -r, --maxRealTime : limit the total time (in milliseconds) that a single test run can take
  • -s, --maxSamples : limit how many times the target function is called when tested
  • -w, --maxWarmup : same as --maxSamples, but for the "warmup" phase of a test run
  • -p, --percentile : specify which percentiles should be reported
  • -o, --output, --report : specify reporter(s)
  • -b, --bail : set to false (or use the --no- prefix: --no-bail) to continue to the next test when one errors out
  • -j, --just, --only : specify the ID of the test that should be the only one run from the file(s)
  • -a, --arg : optionally specify one or more arguments to be passed to the tested function(s)

You can also pass a JSON file with previous report data, for example to "convert" it to another supported format:

pete examples/index.js -o json?out=reports/examples.json
pete reports/examples.json -o tap

Of course, any test file that uses PeTe can also be run directly and supports the same CLI options. For example, to run the tests for a maximum of 3 seconds and output results in TAP format:

node examples/example1.js --maxRealTime 3000 --report tap

Or the same, but using Bun:

bun --expose-gc examples/example1.js --maxRealTime 3000 --report tap

To get results in a format that is a bit more human-readable, use the mico reporter:

node examples/example1.js --maxRealTime 3000 --report mico

That will output something similar to:

   s name                         p99         max           ops           samples      *p99   
  --------------------------------------------------------------------------------------------
   ✔ sync1                     🕛  121ns    97,279ns   ~23,053,627/s   |11,252,924|   × 1.00  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜶘𜴦🮔
   ✔ callback3                 🕐  130ns   122,623ns   ~21,624,682/s   |12,667,593|   × 1.07  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜶘𜶘𜴦🮔
   ✔ generator1                🕐  140ns    32,351ns   ~16,652,797/s   | 9,625,101|   × 1.16  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜶘𜴦🮔🮔
   ✔ async1                    🕗  220ns   166,015ns   ~ 7,103,987/s   | 7,584,502|   × 1.82  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶘𜶫🮔🮔
   ✗ callback1ToDo             Error: Intentional Test Error
  🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔
   ✗ callback2ToDoException    Error: Intentional Test Exception
  🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔

To keep the difference column ("*p99") but skip the horizontal bars, use:

node examples/example3.js -o 'mico?cols=status,name,p99,ops,samples,*p99&sort=p99' --no-bail

That will output something similar to:

   s name             p99           ops           samples      *p99   
  --------------------------------------------------------------------
   ✔ sync3         🕛  121ns   ~22,766,507/s   |11,043,876|   × 1.00  
   ✔ callback3     🕐  130ns   ~21,984,471/s   |12,243,370|   × 1.07  
   ✔ async3        🕘  231ns   ~ 6,471,422/s   | 7,071,764|   × 1.91  
   ✗ sync3error    Error: Intentional Test Error

With the bar rows included:

node examples/example3.js -o 'mico?cols=status,name,p99,ops,samples,*p99,=p99.95&sort=p99' --no-bail

will output something similar to:

   s name             p99           ops           samples      *p99   
  --------------------------------------------------------------------
   ✔ sync3         🕛  130ns   ~21,853,117/s   |10,993,846|   × 1.00  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜶘𜴦🮔
   ✔ callback3     🕛  130ns   ~21,662,141/s   |11,708,079|   × 1.00  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴦🮔
   ✔ async3        🕗  231ns   ~ 6,457,818/s   | 7,105,622|   × 1.78  
  🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂🮂𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜴢𜶘𜶘𜶫🮔🮔
   ✗ sync3error    Error: Intentional Test Error
  🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔🮔