
playwright-performance-reporter v3.6.0 · 236 downloads

playwright-performance-reporter


Collect performance metrics from the browser dev tools during playwright test execution


Install

npm install playwright-performance-reporter --save-dev

or

yarn add playwright-performance-reporter --dev

Usage

The reporter requires sequential test execution, so disable parallelism in your Playwright config:

export default defineConfig({
  ...
  fullyParallel: false,
  workers: 1,
  ...
})

Setup Reporter

To register the reporter, include the code below in your Playwright config. See the subsections for details about browser-specific cases and advanced configurations. For a runnable setup, see example/playwright.config.ts.

import type { CDP, Options, Metric } from 'playwright-performance-reporter';
import { nativeChromiumObservers } from 'playwright-performance-reporter';

const PlaywrightPerformanceReporterOptions: Options = {
  deleteOnFailure: false,
  browsers: {
    chromium: {
      onTestStep: {
        metrics: [new nativeChromiumObservers.allPerformanceMetrics()],
      }
    }
  }
}

export default defineConfig({
  ...
  reporter: [
    ['playwright-performance-reporter', PlaywrightPerformanceReporterOptions]
  ],
  ...
});

Chromium

The following metrics are supported out of the box:

  • usedJsHeapSize
  • totalJsHeapSize
  • allPerformanceMetrics
  • heapDump
  • heapProfilerSampling

The MetricsEngine relies on the Chrome DevTools Protocol (CDP), which can be accessed over HTTP and WebSocket. To allow a connection, make sure to expose a remote-debugging port; the reporter will try to extract that port during start-up.

Setup Browser

{
  name: 'chromium',
  use: {
      ...devices['Desktop Chrome'],
    launchOptions: {
      args: [
        '--remote-debugging-port=9222'
      ]
    }
  }
},

Advanced Configurations

Sampling

Relying solely on the start and stop metrics of a long-running step leads to inaccuracies and requires a large number of runs to collect a meaningful amount of data. By registering a metric to be collected every samplingTimeoutInMilliseconds, the sampling output is written to samplingMetrics, alongside the start and stop metrics.

import { nativeChromiumObservers } from 'playwright-performance-reporter';

const PlaywrightPerformanceReporterOptions: Options = {
  ...
  browsers: {
    chromium: {
      onTestStep: {
        metrics: [new nativeChromiumObservers.usedJsHeapSize(), new nativeChromiumObservers.totalJsHeapSize()],
      },
      sampling: {
        metrics: [
          {
            samplingTimeoutInMilliseconds: 1000,
            metric: new nativeChromiumObservers.totalJsHeapSize()
          }
        ]
      }
    }
  }
}

Custom Metric Observer

If you want to extend the reporter with custom metrics, create a new class that implements the MetricObserver interface. See the example below for how to use it, or check out the allPerformanceMetrics implementation.

For ease of implementation, the passed object can implement the ChromiumMetricObserver, WebkitMetricObserver, or FirefoxMetricObserver interface. Custom metrics also make it possible to keep observers stateful, e.g. to make the next output dependent on the previous one.
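As a sketch of the stateful-observer idea, the class below reports the delta between consecutive samples instead of absolute values. The interface here is a hypothetical structural stand-in; the real ChromiumMetricObserver interface from the library has its own shape.

```typescript
// Hypothetical stand-in for the observer contract; the library's actual
// ChromiumMetricObserver interface differs.
interface ObserverLike {
  name: string;
  process(sample: number): number;
}

// A stateful observer: each output is the delta against the previous sample,
// so e.g. heap growth per step can be tracked instead of absolute sizes.
class HeapGrowthObserver implements ObserverLike {
  name = 'heapGrowth';
  private previous?: number;

  process(sample: number): number {
    const delta = this.previous === undefined ? 0 : sample - this.previous;
    this.previous = sample;
    return delta;
  }
}
```

Because the observer instance lives across calls, each step's output depends on the one before it, which is exactly what plain start/stop metrics cannot express.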

import type { ChromiumMetricObserver, Options } from 'playwright-performance-reporter';

class NewMetric implements ChromiumMetricObserver {
  ...
}

const PlaywrightPerformanceReporterOptions: Options = {
  outputDir: '/your/path/to/dir',
  outputFile: 'output.json',
  deleteOnFailure: false,
  browsers: {
    chromium: {
      onTestStep: {
        metrics: [new NewMetric()]
      }
    }
  }
}

Presenters

Presenters allow multiple output formats to be generated simultaneously from the same test data. Each presenter receives the same data and can transform it into a different format.

In the example project, two presenters are configured, each generating its own output format.

How Presenters Work

  • Multiple presenters can be registered in the presenters array
  • Each presenter is initialized with the same output configuration
  • Every metric write is broadcast to all presenters
  • Each presenter handles its own file writing, closing, and deletion
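
The broadcast behavior described above can be sketched as follows. This is a simplified conceptual model, not the reporter's internal implementation; `MemoryWriter` and `broadcast` are illustrative names.

```typescript
// Minimal structural model of a presenter (mirrors the write() part of the
// PresenterWriter shape shown later in this README).
interface WriterLike {
  written: string[];
  write(chunk: string): Promise<boolean>;
}

// A presenter that keeps chunks in memory instead of writing files.
class MemoryWriter implements WriterLike {
  written: string[] = [];
  async write(chunk: string): Promise<boolean> {
    this.written.push(chunk);
    return true;
  }
}

// Every metric chunk is broadcast to all registered presenters.
async function broadcast(presenters: WriterLike[], chunk: string): Promise<void> {
  await Promise.all(presenters.map(p => p.write(chunk)));
}
```

Each presenter receives the same chunk, so two registered presenters produce two independent outputs from one test run.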

Using Predefined Presenters

The library provides two built-in presenters:

import { presenters } from 'playwright-performance-reporter';

const options: Options = {
  presenters: [
    new presenters.jsonChunkWriter(...),
    new presenters.chartPresenter(...)
  ],
  ...
}

Custom Presenters

Output is sent in chunks to the presenter(s) defined in the options. If you need a custom writer, a custom presenter lets you control how the chunks are handled: every new entry is passed to the write function, close is called once the test is complete, and if the test failed and deleteOnFailure === true, the delete function is called.

import type { PresenterWriter, ResultAccumulator } from 'playwright-performance-reporter';

class CustomJsonWriter implements PresenterWriter {
  async write(content: ResultAccumulator): Promise<boolean> {
    // Write content
    return true;
  }

  async close(): Promise<boolean> {
    // Close the writer
    return true;
  }

  async delete(): Promise<boolean> {
    // Delete the created file
    return true;
  }
}

const PlaywrightPerformanceReporterOptions: Options = {
  deleteOnFailure: true,
  presenters: [new CustomJsonWriter()],
  ...
}

Output

Check example/ for a real-world setup. If you run the example simulation (cd example && npm run test), output is written to the configured outputDir/outputFile.

The top level is hooked into test().

{
  ...
  "4dde6239d9ac8c9468f3-82e3094b06379c51b729": {
    "TEST_CASE_PARENT": {
      "name": " > chromium > scenarios/profile.spec.ts > Profile",
      ...
    }
    ...
  }
  ...
}

The content consists of the steps of the test suite. Keep in mind that the metric request is asynchronous and is not awaited by Playwright, so the browser might still be collecting metrics even though Playwright has already instructed it to continue to the next step. This can lead to incorrect output. To check whether the output is valid, the values startMeasurementOffset and endMeasurementOffset are provided; they measure the time delta in milliseconds between the request and the browser delivering all metrics.
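
For example, a post-processing pass over the output could flag entries whose offsets are large. The field names come from the description above; the nesting of the real output file may differ, and the 50 ms threshold is an arbitrary illustrative choice.

```typescript
// A step's measurement metadata: the offsets record how long the browser took
// to deliver metrics after each request, in milliseconds.
interface StepMeasurement {
  startMeasurementOffset: number;
  endMeasurementOffset: number;
}

// Flag steps whose metrics arrived long after Playwright had already moved on,
// since those values may overlap with the following step.
function isSuspicious(step: StepMeasurement, thresholdMs = 50): boolean {
  return step.startMeasurementOffset > thresholdMs
    || step.endMeasurementOffset > thresholdMs;
}
```

Filtering out (or re-running) suspicious steps keeps slow metric deliveries from skewing the aggregated results.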