
@jsperf.dev/core

v0.3.0

Core API for the jsperf.dev ecosystem

API

Class: Benchmark

Extends: EventEmitter

new Benchmark<Context>(options)

Arguments:

  • options - object - optional - Default: {}
    • warmup - boolean - optional - Default: true
    • samples - number - optional - Default: 10
    • meta - Meta - optional - Default: {}
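A minimal construction sketch, assuming Benchmark is the package's named export (the meta values are illustrative):

import { Benchmark } from "@jsperf.dev/core";

const benchmark = new Benchmark({
  warmup: true,
  samples: 25,
  meta: { title: "array iteration", description: "for vs. forEach" },
});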

Instance Properties

Benchmark.context
  • Context

A property that is passed as the first argument to each lifecycle method and run script.

Benchmark.meta
  • Meta

A property that represents general information about the benchmark instance. The title defaults to process.argv[1] (generally the path of the benchmark script).

Benchmark.results
  • Map<string, Array<PerformanceEntry>>

This property is updated after the benchmark suite executes.
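Once the suite has executed (see benchmark.start() below), the map can be read like any other Map; each PerformanceEntry exposes a duration in milliseconds. A sketch:

for (const [id, entries] of benchmark.results) {
  const total = entries.reduce((sum, entry) => sum + entry.duration, 0);
  console.log(`${id}: ${(total / entries.length).toFixed(3)} ms average over ${entries.length} samples`);
}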

Benchmark.samples
  • number

The number of samples the benchmark suite will run. This property can be modified at any point prior to calling benchmark.start().

Benchmark.warmup
  • boolean

When set to true, the warmup step will run. This property can be modified at any point prior to calling benchmark.start().
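For example, both properties can be tuned on an existing instance, as long as it happens before the suite starts:

benchmark.samples = 50; // take more samples for a steadier average
benchmark.warmup = false; // skip the warmup step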

Instance Methods

Benchmark.afterAll(func)

Arguments:

  • func - FunctionWithContext<Context> - required

Lifecycle method for adding a function that executes after all run scripts have executed.

Benchmark.afterEach(func)

Arguments:

  • func - FunctionWithContext<Context> - required

Lifecycle method for adding a function that executes after each run script is executed.

Benchmark.beforeAll(func)

Arguments:

  • func - FunctionWithContext<Context> - required

Lifecycle method for adding a function that executes before all scripts are executed.

Benchmark.beforeEach(func)

Arguments:

  • func - FunctionWithContext<Context> - required

Lifecycle method for adding a function that executes before each run script is executed.
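A sketch of registering all four lifecycle hooks; each receives the context described above, and the setup and timing logic here is purely illustrative:

benchmark.beforeAll(async (context) => {
  context.fixtures = { input: new Array(1_000).fill(0) }; // hypothetical shared setup
});

benchmark.beforeEach((context) => {
  context.startedAt = Date.now();
});

benchmark.afterEach((context) => {
  console.log(`run finished after ${Date.now() - context.startedAt} ms`);
});

benchmark.afterAll((context) => {
  context.fixtures = undefined; // hypothetical cleanup
});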

Benchmark.run(id, file)

Arguments:

  • id - string - required
  • file - string - required

Add a run to the benchmark instance. The id must be unique and file must be the absolute path to the script.
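For example, two competing implementations might be registered as runs; fileURLToPath is used here only to build the required absolute paths, and the run scripts themselves are hypothetical:

import path from "node:path";
import { fileURLToPath } from "node:url";

const dir = path.dirname(fileURLToPath(import.meta.url));

benchmark.run("for-loop", path.join(dir, "runs/for-loop.js"));
benchmark.run("for-each", path.join(dir, "runs/for-each.js"));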

Benchmark.start()

Returns: Promise<void>

Executes the benchmark suite. Resolves once the suite completes. Errors thrown during execution are not rethrown; they are emitted through the error event.
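Since failures surface as events rather than rejections, a listener should be attached before starting. A minimal sketch:

benchmark.on("error", (error) => {
  console.error("benchmark failed:", error);
});

await benchmark.start();
console.log("suite complete; see benchmark.results");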

Instance Events

start

Emitted at the beginning of the microtask queued during the constructor. It will not be emitted if no runs have been added.

end

Emitted at the end of the microtask after all runs have been executed.
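Because Benchmark extends EventEmitter, the usual listener APIs apply; for example, events.once can be used to await the end of the suite:

import { once } from "node:events";

benchmark.on("start", () => console.log("suite starting"));
benchmark.start();
await once(benchmark, "end");
console.log("all runs executed");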

error

Emitted whenever an error is thrown. The thrown error is passed to the listener:

import { once } from "node:events";
const [error] = await once(benchmark, "error");

Types

Type: FunctionWithContext<Context>

  • (context: Context, ...extraArgs: unknown[]) => void | Promise<void>
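Any function matching this shape can be passed to the lifecycle methods, for example:

const logTimings = (context, ...extraArgs) => {
  console.log("context:", context, "extra arguments:", extraArgs);
};

benchmark.afterEach(logTimings);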

Interface: Meta

  • title - string - optional
  • description - string - optional

Testing

Execute tests using pnpm test