
@zenai/playwright-coding-agent-reporter


A specialized Playwright reporter designed for AI/LLM coding agents that provides minimal, structured test failure reporting to maximize context efficiency and actionable insights. Works well with coding agents such as Claude Code, Codex, Aider, Roo Code, and Cursor.

Features

  • 🎯 Error-Focused: Captures complete failure context including exact line numbers, stack traces, and page state
  • 📸 Rich Context: Includes console errors, network failures, and screenshots
  • 💚 Smart Selector Suggestions: Uses Levenshtein distance to suggest similar selectors when elements aren't found
  • 📝 Markdown Reports: Clean, structured markdown output for easy parsing by LLMs
  • Performance Optimized: Minimal overhead, async file operations
  • 🔧 Highly Configurable: Customize what data to capture and report
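To illustrate the selector-suggestion idea, here is a minimal sketch (not the reporter's actual implementation) of ranking candidate selectors by Levenshtein distance to a selector that failed to match:

```typescript
// Classic dynamic-programming Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Rank the selectors found on the page by similarity to the failed one.
function suggestSelectors(failed: string, available: string[], limit = 3): string[] {
  return [...available]
    .sort((x, y) => levenshtein(failed, x) - levenshtein(failed, y))
    .slice(0, limit);
}

suggestSelectors('#submit-button', ['#submit-btn', '#cancel-btn', '.nav']);
// '#submit-btn' ranks first, since it is only a few edits away
```

Ranking by edit distance is what makes near-miss typos like `#submit-button` vs `#submit-btn` surface immediately.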

Installation

npm install --save-dev @zenai/playwright-coding-agent-reporter

Usage

Basic Configuration

Add the reporter to your Playwright configuration:

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    [
      '@zenai/playwright-coding-agent-reporter',
      {
        outputDir: 'test-report-for-coding-agents',
        includeScreenshots: true, // Include screenshots in reports when available
        silent: false, // Show helpful console output
        singleReportFile: true, // All errors in one file
      },
    ],
  ],
  use: {
    // IMPORTANT: Configure Playwright to take screenshots on failure
    screenshot: 'only-on-failure', // This tells Playwright WHEN to take screenshots
    video: 'off', // Turn off video by default for efficiency
  },
});

Screenshot Configuration

Important: Screenshot capture is controlled at two levels:

  1. Playwright Level (use.screenshot): Controls WHEN screenshots are taken

    • 'off' - No screenshots
    • 'on' - Always take screenshots
    • 'only-on-failure' - Only on test failure (recommended)
  2. Reporter Level (includeScreenshots): Controls whether captured screenshots are included in reports

    • true - Include screenshots in error reports when they exist (default)
    • false - Don't include screenshots in reports, even if Playwright captured them

For optimal debugging, use:

  • screenshot: 'only-on-failure' in Playwright config (to capture screenshots)
  • includeScreenshots: true in reporter config (to include them in reports)

Configuration Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| outputDir | string | 'test-report-for-coding-agents' | Directory for report output |
| includeScreenshots | boolean | true | Include screenshots in error reports when available (see note below) |
| includeConsoleErrors | boolean | true | Capture console errors and warnings |
| includeNetworkErrors | boolean | true | Capture network request failures |
| includeVideo | boolean | false | Include video references in reports when available (Playwright must have video enabled) |
| silent | boolean | false | Suppress per-test pass output; still shows summary |
| maxErrorLength | number | 5000 | Maximum error message length |
| singleReportFile | boolean | true | Generate single consolidated error-context.md file |
| capturePageState | boolean | true | Capture page state on failure (URL, title, available selectors, visible text) |
| verboseErrors | boolean | true | Show detailed error list after summary; set to false for only the concise summary |
| maxInlineErrors | number | 5 | Maximum number of errors to show in console output |
| showCodeSnippet | boolean | true | Show code snippet at error location |
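Taken together, a configuration exercising most of these options might look like this (the values are illustrative, not recommendations):

```typescript
// playwright.config.ts — illustrative values only
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    [
      '@zenai/playwright-coding-agent-reporter',
      {
        outputDir: 'test-report-for-coding-agents',
        includeScreenshots: true,
        includeConsoleErrors: true,
        includeNetworkErrors: true,
        includeVideo: false,
        silent: false,
        maxErrorLength: 5000,
        singleReportFile: true,
        capturePageState: true,
        verboseErrors: false, // concise summary only
        maxInlineErrors: 3,
        showCodeSnippet: true,
      },
    ],
  ],
});
```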

Output Structure

When tests fail, the reporter generates a consolidated report and per-test artifacts:

test-report-for-coding-agents/
├── all-failures.md  # Consolidated failure report (all failures)
├── basic-reporter-features-failing-test-element-not-found-7/
│   ├── report.md  # Detailed error report for this test
│   └── screenshot.png  # Screenshot at failure (if enabled)
├── basic-reporter-features-failing-test-assertion-failure-6/
│   ├── report.md  # Detailed error report for this test
│   └── screenshot.png  # Screenshot at failure (if enabled)
├── timeout-handling-timeout-waiting-for-element-shows-enhanced-context-7/
│   ├── report.md  # Detailed error report with timeout context
│   └── screenshot.png  # Screenshot at timeout
└── ...

Notes:

  • all-failures.md - Consolidated report containing all test failures with summaries and links to individual reports
  • report.md - Per-test detailed error report with full context, stack traces, and debugging information
  • screenshot.png - Visual state captured at the moment of failure (when screenshots are enabled)
  • Folder names are generated from suite and test names with test index suffix for uniqueness
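The folder naming described in the last note can be sketched roughly as follows (a hypothetical slug function; the reporter's actual rules may differ):

```typescript
// Hypothetical sketch of deriving a per-test folder name from suite and
// test names plus an index suffix for uniqueness.
function testFolderName(suite: string, testName: string, index: number): string {
  const slug = `${suite} ${testName}`
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // runs of non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
  return `${slug}-${index}`;
}

testFolderName('Basic Reporter Features', 'failing test - element not found', 7);
// → 'basic-reporter-features-failing-test-element-not-found-7'
```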

Report Contents

Each failure report includes:

  • Test Location: Exact file path and line number
  • Error Details: Complete error message and stack trace with enhanced timeout context
  • Page Context: Current URL, page title, screenshot reference
  • Available Selectors: Sorted by relevance when element not found
  • Action History: Recent test actions before failure
  • Console Output: Captured JavaScript errors and warnings
  • Network Errors: Failed network requests
  • Screenshots: Visual state at failure with direct links
  • HTML Context: Relevant HTML around failed selectors
  • Quick Links: Navigation to individual test folders (in consolidated report)

Console Output Example

The reporter shows a concise summary at the end of test execution:

Running 16 tests using 4 workers

·F·F·F-FFFF·FFFFF

E2E Test Run: 1/16 passed (15 failed/skipped) in 3.1s

  FAILED (15):
    ✗ reporter-demo.spec.ts:16 - Reporter Core Features - assertion failure with context - Expected "Expected Title", got "Actual Title"
    ✗ reporter-demo.spec.ts:23 - Reporter Core Features - console errors capture - Uncaught exception
    ✗ reporter-demo.spec.ts:9 - Reporter Core Features - element not found - suggestions - Element not found: [.non-existent-selector]

  See for failed test details: ./test-report-for-coding-agents/

When verboseErrors: true (default), detailed error information follows:

### Detailed Failures

  ## 1) test/e2e/reporter-demo.spec.ts:20:7 › Reporter Core Features › assertion failure with context
     Duration: 583ms
  ### Error
  Error: expect(received).toBe(expected) // Object.is equality

  Expected: "Expected Title"
  Received: "Actual Title"

  ### Code Location (TypeScript)
    18 |
    19 |     const title = await page.locator('h1').textContent();
  > 20 |     expect(title).toBe('Expected Title');
       |                   ^
    21 |   });

  ### 🔍 Page State When Failed
  **URL:** data:text/html,<h1>Actual Title</h1>
  **Title:** unknown
  **Screenshot:** Saved to screenshot.png

  📝 **Full Error Context:** test-report-for-coding-agents/reporter-core-features-assertion-failure-with-context-4/report.md

Summary Format Features

The new concise summary format provides:

  • One-line overview: Shows passed/failed/skipped counts and total duration
  • Failed tests only: Lists only failed tests with file:line, test name, and brief error
  • Skipped tests: Shows skipped tests when present
  • Report directory: Points to detailed reports using your configured outputDir

To show only the summary without detailed errors, set verboseErrors: false in your configuration.

📝 Detailed error report: test-report-for-coding-agents/all-failures.md


Integration with AI/LLM Agents

This reporter is optimized for AI coding assistants (Claude Code, Codex, Aider, Roo Code, Cursor, etc.). When tests fail:

  1. Single File Context: The AI reads one all-failures.md file containing all failures
  2. Structured Information: Each failure includes exact line numbers, error messages, and stack traces
  3. Visual Context: Screenshots and smart selector suggestions provide debugging insights
  4. Immediate Debugging: Console and network errors are captured inline
  5. Quick Reproduction: Ready-to-run commands for each failing test

The consolidated format minimizes token usage while maximizing debugging information.
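As an illustration, an agent harness could pull the consolidated report into its prompt context like this (the file name matches the Output Structure above; the helper itself is hypothetical):

```typescript
// Illustrative sketch: load the consolidated failure report, if one exists,
// so it can be prepended to an agent prompt. A missing file means the last
// run passed cleanly.
import { readFileSync, existsSync } from 'node:fs';
import { join } from 'node:path';

function loadFailureContext(reportDir: string = 'test-report-for-coding-agents'): string | null {
  const consolidated = join(reportDir, 'all-failures.md');
  if (!existsSync(consolidated)) return null; // clean run: nothing to debug
  return readFileSync(consolidated, 'utf8');
}
```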

Example Test with Enhanced Context

import { test, expect } from '@playwright/test';

test('user can complete checkout', async ({ page }) => {
  // The reporter will capture all of this context on failure
  await page.goto('/shop');

  // Console errors are automatically captured
  await page.evaluate(() => {
    console.error('Payment processing failed');
  });

  // Network failures are tracked
  await page.route('**/api/checkout', (route) => route.abort());

  // Screenshots and available selectors captured on failure
  await expect(page.locator('.checkout-success')).toBeVisible();
});

Development

Building

npm run build

Testing

# Run unit tests (Vitest - no browser required)
npm run test:unit

# Run E2E demo tests (Playwright - demonstrates reporter output)
npm run test:example

# Run all tests
npm test          # Runs unit tests only (used in CI)

Watch Mode

npm run watch

Why Use This Reporter?

The default Playwright reporter surfaces the error, but often lacks enough surrounding context for a coding model to understand what actually went wrong and what the page state was at failure time. It's hard for coding agents to debug with just the error text.

This reporter focuses on actionable context for agents:

  • Dot progress output: Concise dot progress with immediate failed test listing, detailed sections only for failures
  • Page state snapshot: URL, title, visible text, nearby/available selectors, recent actions
  • Structured errors: Consistent formatting with code snippets and stack traces
  • Repro commands: Ready-to-run commands per failing test
  • Markdown reports: Single consolidated file plus per-test reports for targeted review

Comparison: Standard vs Coding Agent Reporter

Here's the same failing test with both reporters; notice how much more actionable context our reporter provides:

Standard Playwright Line Reporter

Error: expect(locator).toBeVisible() failed

Locator:  locator('#submit-button')
Expected: visible
Received: <element(s) not found>
Timeout:  2000ms

Call log:
  - Expect "toBeVisible" with timeout 2000ms
  - waiting for locator('#submit-button')

Our Coding Agent Reporter

Console Output:

## 1) element not found - selector suggestions
   Duration: 2219ms

### Error
Error: expect(locator).toBeVisible() failed

Locator:  locator('#submit-button')
Expected: visible
Received: <element(s) not found>
Timeout:  2000ms

### Code Location
  11 |
  12 |     // Reporter should suggest similar selectors
> 13 |     await expect(page.locator('#submit-button')).toBeVisible();
     |                                                  ^
  14 |   });

### 🔍 Page State When Failed
**URL:** data:text/html,<button id="submit-btn">Submit</button>
**Screenshot:** Saved to screenshot.png

### 📜 Recent Actions
2025-09-08T17:53:07.848Z - → Navigating to: data:text/html,<button...>
2025-09-08T17:53:07.859Z - ✓ DOM ready
2025-09-08T17:53:07.860Z - ✓ Page loaded

### 🎯 Available Selectors (sorted by relevance)
#submit-btn
button:has-text("Submit")

### 📄 Visible Text (first 500 chars)
Submit

📝 **Full Error Context:** /path/to/detailed-report.md

Key Differences:

  • Exact code location with context lines
  • Available selectors - shows #submit-btn is available (typo fix!)
  • Action history - what happened before the failure
  • Page context - URL and visible content
  • Structured markdown reports - for detailed analysis

The Result: AI agents can immediately see the typo (#submit-button vs #submit-btn) and suggest the fix!

Contributing

This project uses semantic-release for automated releases.

  • Prefer squash merges. The pull request title should follow Conventional Commits; individual commit messages do not need to.
  • The PR title drives the release notes and version bump.

Release Process

Releases are fully automated via GitHub Actions:

  1. Merge to main: Use squash merge; ensure the PR title follows Conventional Commits
  2. Automatic versioning: semantic-release analyzes the PR title and determines version bump
  3. NPM publish: Package is automatically published to NPM
  4. GitHub Release: Creates GitHub release with changelog
  5. Git tags: Creates appropriate version tags

Setup Requirements

To enable automated publishing:

  1. NPM Token: Add NPM_TOKEN secret to your GitHub repository

    • Get token from npmjs.com → Account Settings → Access Tokens
    • Create "Automation" token with publish permissions
    • Add to GitHub: Settings → Secrets → Actions → New repository secret
  2. GitHub Token: GITHUB_TOKEN is automatically provided by GitHub Actions

  3. Branch Protection (optional but recommended):

    • Protect main branch
    • Require PR reviews
    • Ensure commit messages follow conventions

License

MIT