
@that-one-tool/performer

v1.0.1

Published

A lightweight tool to evaluate your JS/TS code performance

Downloads

3

Readme

Performer

Objective

Performer aims to catch performance regressions introduced by code changes, as well as to check performance in the first place. By using Performer in your unit tests, you can create performance regression tests that help you catch code changes that might quickly harm your code's performance.

Performer is meant to benchmark code you control: to get the most reliable results, make sure to properly mock any external service your code uses (e.g. database calls).

When benchmarking a function, it is instrumented to track execution time in ms (also converted to operations per second) and memory usage in MB. Statistics are computed from these metrics: samples, min, max, sum, avg and stdDev (standard deviation). You can then use these stats in your tests to ensure your code's performance is watched and validated.
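These statistics can be derived from raw samples roughly as follows (a minimal sketch; computeStats is an illustrative name, not part of Performer's API):

```javascript
// Sketch: compute the Stats members (samples, min, max, sum, avg, stdDev)
// from an array of raw measurements, e.g. execution times in ms.
function computeStats(samples) {
	const sum = samples.reduce((acc, v) => acc + v, 0);
	const avg = sum / samples.length;
	// Population variance: mean of squared deviations from the average
	const variance = samples.reduce((acc, v) => acc + (v - avg) ** 2, 0) / samples.length;
	return {
		samples: samples.length,
		min: Math.min(...samples),
		max: Math.max(...samples),
		sum,
		avg,
		stdDev: Math.sqrt(variance),
	};
}

// Example: execution times (ms) from 4 iterations
const stats = computeStats([2, 4, 4, 6]);
console.log(stats.avg); // 4
```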

Performer has no dependencies and relies solely on the Performance and Process APIs. It is designed to run in Node.js unit tests, and is not recommended for use in production code or in the browser.

Usage

  1. Install Performer (as a dev dependency)
npm i -D @that-one-tool/performer
  2. Import and create a Performer instance
// ESM
import { Performer } from '@that-one-tool/performer';

// CJS
const { Performer } = require('@that-one-tool/performer');

...
// Before the test suites
const performer = new Performer();

Note: A Performer instance is required because it retains the recorded metrics across benchmarks.

  3. Benchmark a function
// Benchmark a synchronous function (add), running it 10 times max (default) in a 1000ms max (default) time frame
const results = performer.benchmarkFunction(() => add(1, 2));

// Benchmark an asynchronous class (foo) method (bar), running it 50 times in a 2000ms max time frame
const results = performer.benchmarkAsyncFunction(() => foo.bar(), 50, 2000);
  4. Use benchmark results
// Check the code is executing under 10ms on average
assert.ok(results.getExecutionTimeStats().avg < 10);
  5. Clear Performer after a test suite
// Between test suites
performer.clear();

Benchmark: Arguments and Results

Arguments

Both benchmarkFunction and benchmarkAsyncFunction take the same input arguments:

  • func [required] the function to run (for a class method, wrap the function call in an anonymous function to preserve this binding)
  • maxIterations [optional - default 10] the maximum number of iterations of the benchmark before stopping it
  • maxTotalDurationMs [optional - default 1000] the maximum total benchmark duration before stopping it (note: this limit can be exceeded, depending on the tested function's execution time)
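The two limits interact as a stopping rule: whichever budget runs out first ends the benchmark, and an in-flight iteration is never interrupted (which is why the time budget can be overshot). A sketch of such a loop, with runBenchmark as an illustrative name rather than Performer's internal code:

```javascript
// Sketch: iterate until either maxIterations is reached or the total
// duration budget is spent; the budget is only checked between iterations,
// so a slow function can exceed maxTotalDurationMs.
function runBenchmark(func, maxIterations = 10, maxTotalDurationMs = 1000) {
	const samples = [];
	const start = Date.now();
	for (let i = 0; i < maxIterations; i++) {
		if (Date.now() - start >= maxTotalDurationMs) break; // budget spent
		const t0 = Date.now();
		func();
		samples.push(Date.now() - t0); // execution time of this iteration
	}
	return samples;
}

console.log(runBenchmark(() => {}, 5).length); // 5
```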

Note: Trying to use benchmarkFunction with an asynchronous function will throw an error.
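For reference, one common way to tell an async function apart from a synchronous one in JavaScript is to inspect its constructor name; whether Performer uses this exact check is an assumption:

```javascript
// Sketch: detect functions declared with the async keyword.
// Note: a synchronous function that merely returns a Promise would
// slip past this check.
function isAsyncFunction(fn) {
	return fn.constructor.name === 'AsyncFunction';
}

console.log(isAsyncFunction(async () => {})); // true
console.log(isAsyncFunction(() => {})); // false
```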

Results

The benchmark results are returned as an instance of the BenchmarkResults class, which exposes the following three methods:

  • getExecutionTimeStats()
  • getOperationsPerSecondStats()
  • getUsedMemoryStats()

Each of these methods returns a Stats object, with the following members:

  • samples (the number of samples used to compute the stats, equal to the number of iterations run)
  • min (the minimum value among the samples)
  • max (the maximum value among the samples)
  • sum (the sum of all the samples)
  • avg (the average value of the samples)
  • stdDev (the standard deviation of the samples)

BenchmarkResults also exposes the method getErrors(), from which you can get an array of the errors thrown during the benchmark.

Utilities

Create arrays for use in benchmarks

The Performer class exposes 3 static methods to create arrays that can then be used in benchmarks:

// To create an array of a given size (size is optional, defaults to 1000 items) whose values are created by the generator function you provide
createCustomArray(generatorFunction, size): T[]
// Example (note the parentheses, so the arrow function returns an object literal)
const arr = Performer.createCustomArray(() => ({ id: randomUUID() }));

// To create an array of random numbers of a given size (size is optional, defaults to 1000 items)
createRandomNumberArray(size): number[]

// To create an array of random strings (UUIDv4) of a given size (size is optional, defaults to 1000 items)
createRandomStringArray(size): string[]

Contribute

Please feel free to suggest improvements, features or bug fixes through the repository's issue tracker. Pull requests are also more than welcome.

Keywords

performance speed memory cpu benchmark measurement validation test regression