
tjq-http-replay v0.1.3

Puppeteer HTTP recorder and playback module.

http-replay

Installation

npm i -D @tjq/http-replay

How It Works

(Flowchart diagram omitted.)
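In lieu of the diagram, the general technique can be sketched with Puppeteer's request-interception API. This is a simplified illustration of how a recorder/replayer like this can work, not the module's actual source; `attach`, `mocks`, and the option names are invented for the sketch.

```javascript
// Sketch of record/replay via Puppeteer's request-interception API.
// `mocks` stands in for the JSON file written to <dir>/<id>.json;
// `attach` and its options are illustrative names, not the module's API.
async function attach(page, { urls, replay, mocks }) {
  await page.setRequestInterception(true);

  page.on("request", (request) => {
    const intercepted = urls.some((u) => request.url().includes(u));

    if (replay && intercepted) {
      // Replay: answer from the recorded mock instead of the network.
      const hit = mocks[request.url()];
      if (hit) {
        request.respond(hit);
        return;
      }
    }
    request.continue();
  });

  if (!replay) {
    // Record: capture response status, headers, and body for matching URLs.
    page.on("response", async (response) => {
      if (urls.some((u) => response.url().includes(u))) {
        mocks[response.url()] = {
          status: response.status(),
          headers: response.headers(),
          body: await response.text(),
        };
      }
    });
  }
}
```

In record mode the populated mocks object would then be serialized to dir/id.json; in replay mode it would be loaded from that file before the test runs.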

Usage

const setup = require("@tjq/http-replay");
const puppeteer = require("puppeteer");

describe("Browser test", () => {
  it("Does something", async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    await setup({
      page,
      dir: "/path/to/mock",
      urls: ["http://localhost:8000/"],
      replay: true,
      id: "mocks",
    });

    await page.goto("http://example.com");
    await browser.close();
  });
});

Configuration

page (puppeteer.Page)

Required

The Puppeteer page instance whose requests will be recorded or replayed during the test.

dir (string)

Required

The directory in which the generated JSON file will be stored. The path should not include a trailing slash.

urls (string[])

Required

The host names and/or any applicable slugs that should be intercepted and written to the generated JSON file.

id (string)

Default mocks

Name of the JSON file that will be written to the dir.

replay (boolean)

Default false

If false, http-replay records all XHR and Fetch requests made by the browser page instance. The request headers and data, the response headers and data, and the URL components are written to a JSON file whose location and name are determined by the dir and id options, respectively.

If true, requests made by the browser page instance will be matched against a JSON file identified by the id property and responded to accordingly.

Motivating Example

In this example, we have a frontend project and a decoupled backend API, each served on a different localhost port. While developing and running tests locally, there are no problems as long as the two projects are configured to communicate with each other.

This breaks down in a CI environment, where the API will not necessarily be available. It is complicated further when performing visual regression tests in CI, which may depend on asynchronous data affecting the render.

To solve this, http-replay can be run locally in record mode, where the developer has ensured access to the API, saving intercepted requests to a local JSON file whose entries are later used as responses when the API is unavailable.

File structure

Initial files required include a Jest configuration as well as a test to run.

jest.record.config.js
jest.replay.config.js
package.json
replay.test.js

Installing dependencies

npm i -D jest puppeteer

Modifying test runner

Jest does not currently allow custom command-line arguments to be passed to tests in the form of flags. To work around this and reuse the same tests for both recording and replaying, create two separate configuration files that each supply a global variable of the same name.

// jest.record.config.js

module.exports = {
  testRegex: "./*\\.test\\.js$",
  globals: {
    __REPLAY__: false,
  },
};
// jest.replay.config.js

module.exports = {
  testRegex: "./*\\.test\\.js$",
  globals: {
    __REPLAY__: true,
  },
};

Create npm scripts in the package.json to target the two configuration files based on the intent of the command.

// package.json

{
  "scripts": {
    "test:record": "jest --config jest.record.config.js",
    "test:replay": "jest --config jest.replay.config.js"
  }
}

The above scripts can be run with npm run test:record and npm run test:replay, respectively.

Writing a test

// replay.test.js

const puppeteer = require("puppeteer");
const setup = require("@tjq/http-replay");

describe("Example", function () {
  it("renders with or without a server", async () => {
    const browser = await puppeteer.launch({
      headless: true,
      args: ["--no-sandbox"],
    });

    const page = await browser.newPage();

    await setup({
      page,
      replay: __REPLAY__,
      dir: __dirname,
      id: "example",
      urls: ["localhost:8000/api/"],
    });

    await page.goto("http://localhost:9000/", {
      waitUntil: "networkidle0",
    });

    await browser.close();
  });
});

Running the initial test

To generate responses for the requests made during the previous example, run the following:

npm run test:record

This will create a file named example.json in the test's directory (dir was set to __dirname) containing the requests and responses made by the browser to the specified urls, including their headers and data.
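The exact contents of the generated file are an implementation detail of the module, but based on the description above a single recorded entry can be expected to look roughly like this (all field names here are hypothetical, for illustration only):

```json
{
  "example": [
    {
      "url": "http://localhost:8000/api/items",
      "method": "GET",
      "requestHeaders": { "accept": "application/json" },
      "responseHeaders": { "content-type": "application/json" },
      "status": 200,
      "body": "[{\"id\":1}]"
    }
  ]
}
```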

Running in CI

To ensure that all requests will have a proper response in CI, commit the generated JSON files to source control. The following GitHub Actions workflow file shows the aforementioned npm scripts being used to run the tests in replay mode.

# run_tests.yml
name: Browser Tests
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Install dependencies
        run: npm install
      - name: Build application
        run: npm run build
      - name: Start and serve application
        run: npm run serve &
      - name: Run tests with recorded responses
        run: npm run test:replay