Sanctuary Benchmark

Sanctuary Benchmark is a small wrapper over Benchmark.js that enables a consistent style of benchmarking across all Sanctuary projects. It makes it easy to define comparative benchmarks and outputs the results in a standardized format for sharing.

Usage

Create a file in the bench directory, for example old-vs-new.js:

const sb = require ('sanctuary-benchmark');

// Imagine these are libs. Normally they would be require()'d.
const oldVersion = (f, xs) => xs.map (f);
const newVersion = (f, xs) => {
  const len = xs.length;
  const out = new Array (len);
  for (let idx = 0; idx < len; idx += 1) out[idx] = f (xs[idx]);
  return out;
};

const small = Array.from ({length: 1}, (_, i) => i);
const large = Array.from ({length: 1000}, (_, i) => i);

// Export a runner: left library, right library, default options, and a
// mapping of named benchmark specs.
module.exports = sb (oldVersion, newVersion, {}, {
  'map/small': [{}, map => map (x => x + 1, small)],
  'map/large': [{}, map => map (x => x + 1, large)],
});

Run the sanctuary-benchmark command. Pass --help for options.

$ ./node_modules/.bin/sanctuary-benchmark

Alternatively, you can use the value assigned to module.exports programmatically. Consult the API documentation below.
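
For example, a minimal runner script might look like this; the file path matches the Usage example above, and the option values are illustrative:

// Load the function exported by the benchmark definition file.
const runBenchmarks = require ('./bench/old-vs-new.js');

// The exported value takes an options object, runs the benchmarks, and
// prints the results table to stdout.
runBenchmarks ({match: 'map/*', colors: false});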

Reading the output

Running the benchmarks will print a table to the terminal with the following columns:

  • suite: The name of the test suite.
  • left: The operations per second (hertz) and the standard deviation measured over the rounds run for the library passed as the first argument.
  • right: The operations per second (hertz) and the standard deviation measured over the rounds run for the library passed as the second argument.
  • diff: A percentage representing the difference between left and right, where 0 means "makes no difference" and 100 means "makes all the difference". You can use this number to tweak the significantDifference option, which determines whether a line is highlighted.
  • change: The increase or decrease from left to right. You can use this to show your friends how well you've optimized a feature.
  • α: Whether the difference is significant. Possible values are "✓" for a significant increase and "✗" for a significant decrease. Nothing is rendered if the difference was insignificant.

API Documentation

benchmark :: (a, b, Options, StrMap (Spec a b)) -> Options -> Undefined

Spec a b :: [Object, (a | b) -> Any]
          | [Object, a -> Any, b -> Any]

Options :: { callback :: Function?
           , colors :: Boolean?
           , config :: Object?
           , leftHeader :: String?
           , match :: String?
           , rightHeader :: String?
           , significantDifference :: Number? }

This module exports a single function. It takes four arguments and returns another function. The four arguments are:

  1. The left-hand benchmarking input: this could be an older version of the library you're testing, or a competing library.
  2. The right-hand benchmarking input: usually the current version of the library you're testing, required directly from the working directory.
  3. An object containing defaults for the options that are later passed to the returned function. Refer to the documentation of the returned function below to see which options are available.
  4. A mapping of benchmarks, where each key is a benchmark's name and each value describes the work being benchmarked. The names can later be used to filter benchmarks with a glob, so it's recommended to use the forward slash character as a separator, as shown in Usage. Each value is a tuple of two or three items. The first item must always be an Object and is used for per-test configuration overrides. The remaining items are the functions to run: given a single function, it's used to test both libraries; given two functions, they are used for the left and right library respectively (see the sketch after this list).
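
A minimal sketch of the three-item form, using two hypothetical libraries whose map functions take their arguments in different orders:

const sb = require ('sanctuary-benchmark');

// Hypothetical libraries with differing argument orders.
const left = {map: (xs, f) => xs.map (f)};   // array first
const right = {map: (f, xs) => xs.map (f)};  // function first

module.exports = sb (left, right, {}, {
  'map/arg-order': [
    {},                                  // per-test configuration overrides
    l => l.map ([1, 2, 3], x => x + 1),  // receives the left library
    r => r.map (x => x + 1, [1, 2, 3]),  // receives the right library
  ],
});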

Once these inputs are provided, a function is returned. When called, it runs the benchmarks and prints the results to stdout. It takes as input an object of options to customize this process:

  • callback (() => {}): Called when the benchmarks have completed.
  • colors (true): Set to false to disable terminal colors.
  • config ({}): Default Benchmark.js options to use for every benchmark. These can be overridden per benchmark.
  • leftHeader ('left'): Header describing the library on the left.
  • match ('**'): A glob used to filter benchmarks by name.
  • rightHeader ('right'): Header describing the library on the right.
  • significantDifference (0.1): The difference (between 0 and 1) required for the output table to draw attention to the results; the diff column shows this value as a percentage.
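
Putting it together, defaults can be baked in via the third argument at definition time and overridden at call time. A sketch, reusing the oldVersion, newVersion, and spec definitions from Usage; the header values are illustrative:

// In bench/old-vs-new.js: bake in defaults via the third argument.
const run = sb (oldVersion, newVersion, {colors: false}, {
  'map/small': [{}, map => map (x => x + 1, small)],
});
module.exports = run;

// In a runner script: call-time options override the baked-in defaults.
run ({
  leftHeader: 'v1.0.0',
  rightHeader: 'v2.0.0',
  significantDifference: 0.05,
  callback: () => console.log ('Benchmarks complete.'),
});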