

@testkase/reporter


CLI tool to report automated test results to TestKase — the modern test management platform.

Parse results from popular test frameworks, map them to test cases via automation IDs, and push execution data to TestKase in a single command.

Documentation · Website


Installation

npm install -g @testkase/reporter

Or run directly with npx:

npx @testkase/reporter report --token <PAT> --project-id <PROJECT_ID> --org-id <ORG_ID> --format junit --results-file results.xml

Quick Start

# 1. Generate a PAT at https://www.testkase.com/api-keys
# 2. Find your Org ID at https://www.testkase.com/profile

# 3. Run your tests (output a results file)
pytest --junitxml=results.xml

# 4. Report results
testkase-reporter report \
  --token <PAT> \
  --project-id PRJ-1 \
  --org-id 1 \
  --format junit \
  --results-file results.xml

Supported Frameworks

| Framework  | Format flag | Results file type  | Attachments |
| ---------- | ----------- | ------------------ | ----------- |
| JUnit      | junit       | XML                | Yes (via [[ATTACHMENT\|path]] in system-out) |
| Playwright | playwright  | JSON               | Yes (from result.attachments) |
| Cypress    | cypress     | JSON (Mochawesome) | No |
| TestNG     | testng      | XML                | No |
| NUnit      | nunit       | XML                | Yes (from <attachments> nodes) |
| Cucumber   | cucumber    | JSON               | Yes (from embeddings) |

Commands

report

Parse test results and report them to TestKase.

testkase-reporter report --token <PAT> --project-id <ID> --org-id <ID> --format <type> --results-file <path> [options]

Required options:

| Option | Description |
| ------ | ----------- |
| --token <pat> | Personal access token — generate at API Keys. Falls back to $TESTKASE_PAT env var |
| --project-id <id> | Project ID (e.g. PRJ-1) |
| --org-id <id> | Organization ID — found on your Profile |
| --format <type> | Result format: junit, playwright, cypress, testng, nunit, cucumber |
| --results-file <path> | Path to the results file |

Optional options:

| Option | Default | Description |
| ------ | ------- | ----------- |
| --api-url <url> | https://api.testkase.com | API base URL |
| --cycle-id <id> | TCYCLE-1 | Test cycle ID (e.g. TCYCLE-5) |
| --automation-id-format <regex> | \[(\d{5})\] | Regex to extract automation ID from test name |
| --missing-id-in-script <action> | skip | When test has no automation ID: skip, fail, or create |
| --unknown-id-in-testkase <action> | skip | When automation ID not found in TestKase: skip or fail |
| --report-skipped-as <status> | blocked | Report skipped tests as: blocked, not-executed, or ignore |
| --attachments-dir <path> | — | Directory containing test artifacts (screenshots, videos) |
| --build-id <id> | config | CI build identifier |
| --timezone <tz> | — | Timezone for timestamps |
| --dry-run | false | Parse and display mapping without sending results |
| --verbose | false | Show detailed logs |
| --silent | false | Only show errors |

create-run

Create a new test cycle in TestKase.

testkase-reporter create-run --token <PAT> --project-id <ID> --org-id <ID> --title "Regression Suite — Build #42" [options]

Required options:

| Option | Description |
| ------ | ----------- |
| --token <pat> | Personal access token — generate at API Keys. Falls back to $TESTKASE_PAT env var |
| --project-id <id> | Project ID (e.g. PRJ-1) |
| --org-id <id> | Organization ID — found on your Profile |
| --title <title> | Title for the new test cycle |

Optional options:

| Option | Default | Description |
| ------ | ------- | ----------- |
| --api-url <url> | https://api.testkase.com | API base URL |

list-projects

List projects accessible with the provided token.

testkase-reporter list-projects --token <PAT> --org-id <ID> [options]

Required options:

| Option | Description |
| ------ | ----------- |
| --token <pat> | Personal access token — generate at API Keys. Falls back to $TESTKASE_PAT env var |
| --org-id <id> | Organization ID — found on your Profile |

Optional options:

| Option | Default | Description |
| ------ | ------- | ----------- |
| --api-url <url> | https://api.testkase.com | API base URL |

Automation ID Mapping

The reporter maps test results to TestKase test cases using automation IDs embedded in test names.

How it works

  1. Each test name is matched against --automation-id-format (default: \[(\d{5})\])
  2. The first capture group is used as the automation ID
  3. The reporter resolves automation IDs to test case IDs via the TestKase API
  4. Results are reported against the matched test cases

Example

Test name:  "User can log in with valid credentials [10042]"
Regex:      \[(\d{5})\]
Extracted:  10042  →  resolves to TC-87 in TestKase
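The extraction step above can be sketched in shell with grep; the test name and the default pattern \[(\d{5})\] come from the example, while the resolution to TC-87 happens server-side via the TestKase API:

```shell
# Extract the bracketed 5-digit automation ID from a test name,
# mirroring the default --automation-id-format of \[(\d{5})\]
name="User can log in with valid credentials [10042]"
echo "$name" | grep -oE '\[[0-9]{5}\]' | tr -d '[]'
# prints: 10042
```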

Handling missing or unknown IDs

| Scenario | Option | Values |
| -------- | ------ | ------ |
| Test has no automation ID | --missing-id-in-script | skip (default) — warn and skip · fail — exit with code 4 · create — auto-create test case and mapping |
| Automation ID not found in TestKase | --unknown-id-in-testkase | skip (default) — warn and skip · fail — exit with code 4 |

Attachments

The reporter can upload screenshots, videos, and other artifacts alongside test results.

Framework-parsed attachments

Attachments discovered by parsers (Playwright's result.attachments, JUnit's [[ATTACHMENT|path]] in system-out, NUnit's <attachments> nodes, Cucumber's embeddings) are automatically associated with the corresponding test case.
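As an illustration, a JUnit result file using the [[ATTACHMENT|path]] convention might look like the fragment below; the test name and file path are made up for the example:

```xml
<testcase name="User can log in with valid credentials [10042]" time="1.2">
  <!-- the reporter picks this marker up from system-out -->
  <system-out>[[ATTACHMENT|screenshots/login-10042.png]]</system-out>
</testcase>
```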

Directory-based attachments

Use --attachments-dir to upload files from a directory:

testkase-reporter report \
  --token <PAT> \
  --project-id <ID> \
  --org-id <ID> \
  --format playwright \
  --results-file results.json \
  --attachments-dir ./test-artifacts

  • Files whose names contain an automation ID are mapped to the corresponding test case(s)
  • Files without an ID are uploaded to all executed test cases in the run
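That mapping rule can be sketched in shell; the file names are invented for illustration, and the reporter's real matching uses the configured --automation-id-format rather than this bare 5-digit pattern:

```shell
# Classify artifact files by whether their name contains a 5-digit automation ID
for f in login-failure-10042.png trace-10087.zip run-summary.html; do
  id=$(printf '%s\n' "$f" | grep -oE '[0-9]{5}' | head -n1)
  if [ -n "$id" ]; then
    echo "$f -> mapped to automation ID $id"
  else
    echo "$f -> uploaded to all executed test cases"
  fi
done
```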

Limits

  • Maximum file size: 50 MB per file
  • Supported types: PNG, JPG, GIF, WebP, SVG, MP4, WebM, AVI, PDF, ZIP, JSON, TXT, HTML, XML, CSV, LOG

CI/CD Integration

Works with any CI/CD platform that can run npx. Set TESTKASE_PAT as a secret and add the report step after your tests.
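As a sketch, a GitHub Actions job might wire this up as follows; the step names and the secret name TESTKASE_PAT are placeholders for your own setup, and --token is omitted because the CLI falls back to the $TESTKASE_PAT env var:

```yaml
# Illustrative GitHub Actions steps; adjust to your workflow
- name: Run tests
  run: pytest --junitxml=results.xml

- name: Report results to TestKase
  if: always()   # report even when tests fail
  env:
    TESTKASE_PAT: ${{ secrets.TESTKASE_PAT }}
  run: |
    npx @testkase/reporter report \
      --project-id PRJ-1 \
      --org-id 1 \
      --format junit \
      --results-file results.xml
```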

| Platform | Docs |
| -------- | ---- |
| GitHub Actions | Guide |
| GitLab CI | Guide |
| Jenkins | Guide |
| Azure DevOps | Guide |
| Bitbucket Pipelines | Guide |
| CircleCI | Guide |

For full CLI reference and CI/CD setup instructions, see the documentation.

Exit Codes

| Code | Meaning |
| ---- | ------- |
| 0 | Success |
| 1 | Test failures reported (one or more tests failed) |
| 2 | API error (authentication, network, server) |
| 3 | File or format validation error (file not found, invalid format) |
| 4 | Missing or unknown automation IDs (when fail mode is enabled) |
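In a CI script you might branch on these codes; in this sketch, (exit 1) stands in for a real reporter invocation that reported test failures:

```shell
# Map the reporter's exit code to a human-readable message.
# `(exit 1)` simulates a run that reported test failures.
(exit 1)
code=$?
case "$code" in
  0) echo "success" ;;
  1) echo "test failures reported" ;;
  2) echo "API error" ;;
  3) echo "file or format validation error" ;;
  4) echo "automation ID error (fail mode)" ;;
  *) echo "unexpected exit code: $code" ;;
esac
# prints: test failures reported
```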

Links

License

MIT