@flakiness/flakiness-report v0.17.0

Flakiness JSON Report Specification

Official specification for the Flakiness.io report format.

The Flakiness Report format was inspired by the Playwright Test report format and extends it to support comprehensive test execution analysis across multiple environments.

In a nutshell, a Flakiness Report is a JSON file that follows this specification. Often, the JSON file is accompanied by a set of files, called attachments; the format defines a standardized file system layout for storing these test artifacts.

This repository contains the TypeScript type definitions that specify the format, along with a small validation utility (see the NPM Package section below).

Features

The Flakiness Report format supports:

  • Test Tags - Categorize tests using custom tags
  • Test Annotations - Attach metadata and annotations to test runs
  • Multiple Errors per Test - Capture all errors, including soft errors that don't fail the test
  • System Monitoring - Track CPU and RAM utilization during test execution with time-series sampling
  • Multiple Execution Environments - Run the same tests across different configurations (OS, browser, project settings) in a single report
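
To see how these features fit together, here is a hedged sketch of a report excerpt that exercises them. The environments/attempts structure follows the minimal example below, but the tags, annotations, and errors field names are assumptions for illustration; consult the type definitions in flakinessReport.ts for the authoritative names.

// Hypothetical report excerpt (the tags/annotations/errors field names are
// assumptions; see flakinessReport.ts for the real definitions).
const excerpt = {
  environments: [
    { name: "Linux / Chromium" },  // environmentIdx 0
    { name: "macOS / WebKit" }     // environmentIdx 1
  ],
  tests: [
    {
      title: "checkout flow",
      location: { file: "tests/checkout.spec.ts", line: 10, column: 1 },
      tags: ["smoke", "e2e"],      // static, case-insensitive markers
      attempts: [
        {
          environmentIdx: 0,
          expectedStatus: "passed",
          status: "failed",
          startTimestamp: 1703001600000,
          duration: 4200,
          annotations: [{ type: "owner", description: "payments-team" }],
          errors: [                // multiple errors, including soft errors
            { message: "expected checkout button to be visible" },
            { message: "soft assertion failed: price mismatch" }
          ]
        },
        // The same test, executed in a second environment.
        {
          environmentIdx: 1,
          expectedStatus: "passed",
          status: "passed",
          startTimestamp: 1703001610000,
          duration: 1800
        }
      ]
    }
  ]
};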

Usage

Once you have a flakiness-report on the file system, you can view & upload it using the Flakiness CLI Tool:

# view report
flakiness show ./flakiness-report
# upload report to flakiness.io
flakiness upload ./flakiness-report

Learn more in the official documentation.

Specification

JSON Format

💡 Tip: The TypeScript type definitions include extensive inline comments that describe each entity and field in detail. Be sure to read through the comments in flakinessReport.ts for a comprehensive understanding of the report format structure.
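
As a rough orientation, the top-level shape implied by the minimal example below can be sketched as a simplified TypeScript interface. This is an illustrative approximation, not the authoritative definition in flakinessReport.ts:

// Simplified sketch of the report shape, inferred from the minimal example
// below; the authoritative types live in flakinessReport.ts.
interface ReportSketch {
  category: string;                 // e.g. "pytest"
  commitId: string;                 // commit SHA the tests ran against
  environments: { name: string }[]; // plus other key-value context
  tests: {
    title: string;
    location: { file: string; line: number; column: number };
    attempts: {
      environmentIdx: number;       // index into environments
      expectedStatus: string;       // typically "passed"
      status: string;               // actual outcome
      startTimestamp: number;       // epoch milliseconds
      duration: number;             // milliseconds
      attachments?: { name: string; contentType: string; id: string }[];
    }[];
  }[];
  startTimestamp: number;           // epoch milliseconds
  duration: number;                 // milliseconds
}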

Minimal Example

Here's a minimal Flakiness Report with one environment, one test, and one attachment. The directory layout:

flakiness-report/
├── report.json
└── attachments/
    └── 5d41402abc4b2a76b9719d911017c592

The corresponding report.json:

{
  "category": "pytest",
  "commitId": "a1b2c3d4e5f6789012345678901234567890abcd",
  "environments": [
    {
      "name": "Python Tests"
    }
  ],
  "tests": [
    {
      "title": "should pass basic test",
      "location": {
        "file": "tests/test_example.py",
        "line": 3,
        "column": 1
      },
      "attempts": [
        {
          "environmentIdx": 0,
          "expectedStatus": "passed",
          "status": "passed",
          "startTimestamp": 1703001600000,
          "duration": 1500,
          "attachments": [
            {
              "name": "screenshot.png",
              "contentType": "image/png",
              "id": "5d41402abc4b2a76b9719d911017c592"
            }
          ]
        }
      ]
    }
  ],
  "startTimestamp": 1703001600000,
  "duration": 2000
}

Report Concepts

  1. File Paths: All file paths within the report are POSIX-formatted paths relative to the repository root, regardless of the platform on which the tests are executed.

  2. Test: A test represents a specific location in the source code.

  3. Suite: A suite is a logical grouping of tests. Suites can be of various types and may have associated file locations.

  4. Environment: An environment is a set of key-value pairs that describe the execution context. Environments capture information about the operating system, browser, and other testing properties.

  5. Run Attempts: A run attempt represents a single execution of a test within a specific environment. When test runners automatically retry failed tests, each retry is recorded as a separate run attempt for the same test in the same environment (see the sketch after this list).

  6. Test Statuses: Each run attempt has both an actual status and an expected status. The expected status is typically passed, but some test runners allow marking tests as expected to fail, in which case the expected status is failed.

    The Flakiness Report viewer supports filtering reports by status.

  7. Test Tags: Test tags are case-insensitive markers assigned to tests. Tags are static and cannot be attached dynamically during test execution; they typically change only when the source code changes. Common examples include smoke, e2e, and regression.

    The Flakiness Report viewer supports filtering reports by tags.

  8. Annotations: Annotations are metadata attached to individual run attempts, unlike tags, which are attached to tests themselves. Annotations are dynamic and can vary across test executions.

    Each annotation has a type and a description. Common use cases include:

    • skip annotations to mark tests as skipped (without additional description)
    • owner annotations to assign ownership of specific tests

    The Flakiness Report viewer supports filtering by annotations.

  9. Attachments: Each run attempt may also have attachments: a screenshot, a video, a log, or any other piece of debugging information, referenced by ID. The actual attachment contents are stored on the file system, following the directory layout explained in the "Attachments" section.
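
To make run attempts and statuses concrete, here is a hedged sketch of a flaky test entry: the runner automatically retried a failure, so the same test records two run attempts in the same environment.

// Hypothetical test entry: the first attempt failed and the automatic retry
// passed, so one test has two run attempts in environment 0.
const flakyTest = {
  title: "uploads a file",
  location: { file: "tests/upload.spec.ts", line: 7, column: 1 },
  attempts: [
    {
      environmentIdx: 0,
      expectedStatus: "passed",
      status: "failed",            // initial run
      startTimestamp: 1703001600000,
      duration: 5000
    },
    {
      environmentIdx: 0,
      expectedStatus: "passed",
      status: "passed",            // automatic retry
      startTimestamp: 1703001606000,
      duration: 1400
    }
  ]
};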

Attachments

Attachments (screenshots, videos, logs, etc.) are referenced in the report by ID rather than embedded directly.

Each attachment in a RunAttempt contains:

  • name - The attachment filename
  • contentType - MIME type of the attachment
  • id - Unique identifier used to retrieve the actual attachment content. It is recommended to use the MD5 hash of the attachment content as the identifier.

The actual attachment files are stored in the attachments/ directory alongside the report.json, with their id as the filename (without extension).
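
For example, a report generator might derive the id from the file contents as recommended above. This is a minimal Node.js sketch; the function name and paths are illustrative:

import { createHash } from 'node:crypto';
import { copyFileSync, mkdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Illustrative helper: store a file under attachments/<md5-of-contents>
// and return the id to reference from the report's attachment entry.
function storeAttachment(reportDir: string, sourcePath: string): string {
  const id = createHash('md5').update(readFileSync(sourcePath)).digest('hex');
  const dir = join(reportDir, 'attachments');
  mkdirSync(dir, { recursive: true });
  copyFileSync(sourcePath, join(dir, id)); // filename is the id, no extension
  return id;
}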

The report JSON and its attachments should be organized as follows:

flakiness-report/
├── report.json
└── attachments/
    ├── 5d41402abc4b2a76b9719d911017c592
    ├── 7d865e959b2466918c9863afca942d0f
    └── 9bb58f26192e4ba00f01e2e7b136bbd8

Important: Report generators shall not compress attachments manually. The Flakiness.io CLI tool automatically applies optimal compression for different file types during upload.

NPM Package

The package is published to npm and is compatible with both Node.js and browser environments:

# Install this type definition
npm install @flakiness/flakiness-report

The package provides a simple validation utility for the reports.

import { FlakinessReport, FlakinessSchema } from '@flakiness/flakiness-report';
import { z } from 'zod';

// Type-safe report handling
const report: FlakinessReport.Report = { /* ... */ };

// Validate the report against the bundled schema
const validation = FlakinessSchema.validate(report);
if (!validation.success)
  console.error(`Validation failed:`, z.prettifyError(validation.error));
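
For instance, to validate a report that is already on disk in the layout described above (a usage sketch, assuming FlakinessSchema is exported as used in the snippet above):

import { readFileSync } from 'node:fs';
import { FlakinessSchema } from '@flakiness/flakiness-report';

// Hedged usage sketch: load report.json from a flakiness-report directory
// and validate it. FlakinessSchema.validate is assumed per the snippet above.
const raw = JSON.parse(readFileSync('./flakiness-report/report.json', 'utf8'));
const result = FlakinessSchema.validate(raw);
console.log(result.success ? 'valid flakiness report' : 'invalid report');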

Development

For information on building, watching, and contributing to this project, see CONTRIBUTING.md.