
@bughunters/vision

v1.2.0


✅ BugHunters Vision

Last updated: 2026-03-07

InfoSec-friendly AI Visual Testing for Playwright.

Stop fighting 1-pixel false positives. BugHunters Vision uses Multimodal AI (Claude) to evaluate visual regression tests like a human QA — while keeping all your screenshots 100% local.

Images are never stored on our servers. They are Base64-encoded, evaluated in-memory for a single request, and immediately discarded.


How it works

  1. A test takes a screenshot of the current page.
  2. BugHunters Vision looks for a baseline in ./bhv-snapshots/.
  3. No baseline → saves it as the new baseline, test passes ✅
  4. Baseline found → Fast Pixel Match → pixel-perfect = instant PASS, no AI call needed ⚡
  5. Pixels differ → both images are sent (Base64, HTTPS) to the BugHunters Vision API.
  6. The API calls Anthropic Claude (multimodal vision) to evaluate the diff like a human QA.
  7. Returns { status: "PASS" | "FAIL", reason: "..." } — test passes or fails accordingly.
  8. After all tests finish, a standalone HTML report is generated locally.
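The decision flow in steps 3–5 can be sketched as a small pure function. This is our own illustration of the logic described above, not the library's actual internals; the `Action` names are assumptions:

```typescript
// Hypothetical sketch of the check flow — not the package's source code.
type Action = 'create-baseline' | 'pass-pixel' | 'send-to-ai';

function nextAction(baselineExists: boolean, pixelsIdentical: boolean): Action {
  if (!baselineExists) return 'create-baseline'; // step 3: first run, save & pass
  if (pixelsIdentical) return 'pass-pixel';      // step 4: instant PASS, no AI call
  return 'send-to-ai';                           // step 5: Base64 diff → API → Claude
}

console.log(nextAction(false, false)); // "create-baseline"
console.log(nextAction(true, true));   // "pass-pixel"
console.log(nextAction(true, false));  // "send-to-ai"
```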

Requirements

| Requirement | Minimum |
|-------------|---------|
| @playwright/test | ≥ 1.20.0 (Nov 2021) |
| Node.js | ≥ 18.0.0 |

All Playwright APIs used by BugHunters Vision (testInfo.attach, screenshot({ mask }), testInfo.project.use) are available from Playwright 1.20 onwards. Every API call is defensively guarded so that running on an older version degrades gracefully rather than throwing.
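The "defensively guarded" pattern might look like the sketch below. This is our illustration of the idea, not the package's source; the helper name and the minimal `MaybeTestInfo` shape are assumptions:

```typescript
// Sketch: call a Playwright API only if it exists on this version,
// otherwise degrade gracefully instead of throwing.
type MaybeTestInfo = {
  attach?: (name: string, opts: { body: Uint8Array; contentType: string }) => Promise<void>;
};

async function safeAttach(testInfo: MaybeTestInfo, name: string, png: Uint8Array): Promise<boolean> {
  if (typeof testInfo.attach !== 'function') return false; // older Playwright: skip attachment
  await testInfo.attach(name, { body: png, contentType: 'image/png' });
  return true; // attachment succeeded
}
```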


Quick start

1. Install

npm install @bughunters/vision --save-dev

2. Set your API token

Get a token at bughunters.dev.

# .env or your CI secrets manager
BUGHUNTERS_VISION_TOKEN=bhv_your_token_here

3. Configure playwright.config.ts

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  timeout: 90_000, // AI evaluation can take 20–40s on cold start
  reporter: [
    ['list'],
    ['@bughunters/vision/reporter', { snapshotsDir: './bhv-snapshots', reportDir: './bhv-report' }],
  ],
  use: {
    screenshot: 'off', // BHV handles screenshots manually

    // BugHunters Vision options — can also be set via env variables
    bvMode:           'ai',        // 'ai' | 'strict' | 'off'
    bvFailMode:       'hard',      // 'hard' (throw) | 'soft' (collect all failures)
    bvDataTolerance:  'tolerant',  // 'tolerant' | 'strict' — see below
    bvSnapshotsDir:   './bhv-snapshots',
    bvUpdateBaseline: false,
  } as Record<string, unknown>,
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
  ],
});

4. Write a visual test

import { test } from '@playwright/test';
import { vision } from '@bughunters/vision';

test('homepage looks correct', async ({ page }) => {
  await page.goto('https://example.com');
  await page.waitForLoadState('networkidle');
  await vision.check(page, 'homepage');
});

You can pass an optional AI hint as the third argument:

await vision.check(page, 'dashboard', 'Ignore the live timestamp in the top-right corner.');

vision.check() accepts a Page or a Locator — useful for scoped element checks:

await vision.check(page.locator('#sidebar'), 'sidebar');

Masking dynamic regions

Use mask to exclude areas that change between runs (ads, live clocks, user avatars). Masked regions are painted solid grey before comparison — both in Fast Pixel Match and in the AI evaluation.

await vision.check(page, 'dashboard', {
  prompt: 'Ignore the notification counter.',
  mask: [
    page.locator('.live-clock'),
    page.locator('[data-testid="user-avatar"]'),
    page.locator('#ad-banner'),
  ],
});

mask accepts Locator[] — the same API as Playwright's native screenshot({ mask }).
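What "painted solid grey before comparison" means for a raw RGBA buffer can be sketched as follows. This is purely illustrative (the function name, `Rect` shape, and grey value 128 are our assumptions, not the library's internals):

```typescript
// Sketch: overwrite every pixel inside each mask rectangle with a fixed grey,
// so masked regions compare equal in both pixel and AI evaluation.
interface Rect { x: number; y: number; width: number; height: number }

function paintMasks(rgba: Uint8Array, imgWidth: number, rects: Rect[], grey = 128): void {
  for (const r of rects) {
    for (let y = r.y; y < r.y + r.height; y++) {
      for (let x = r.x; x < r.x + r.width; x++) {
        const i = (y * imgWidth + x) * 4; // 4 bytes per pixel: R, G, B, A
        rgba[i] = grey; rgba[i + 1] = grey; rgba[i + 2] = grey; rgba[i + 3] = 255;
      }
    }
  }
}
```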

Multi-step visual checks

Call vision.check() multiple times within one test — each step gets its own baseline file, its own row in the HTML report, and its own annotation in the native Playwright report.

test('checkout flow looks correct', async ({ page }) => {
  await page.goto('https://shop.example.com/cart');
  await vision.check(page, 'cart-page', 'Cart summary should be visible.');

  await page.click('#checkout-btn');
  await vision.check(page, 'shipping-form', 'Shipping form should be shown.');

  await page.click('#next-btn');
  await vision.check(page, 'payment-page', 'Payment options should be visible.');
});

Baseline files are named <project>--<test-title>--<step-name>.baseline.png, ensuring steps never collide even when reusing step names like "header" across multiple tests or projects.
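A sketch of that naming scheme, as we read it. The exact sanitization rules are an assumption on our part (lowercase, whitespace collapsed to hyphens), inferred from the `chromium--checkout-flow--cart-page.baseline.png` example used later in this README:

```typescript
// Hypothetical helper — the real package may sanitize names differently.
function baselineFileName(project: string, testTitle: string, step: string): string {
  const slug = (s: string) => s.trim().toLowerCase().replace(/\s+/g, '-');
  return `${slug(project)}--${slug(testTitle)}--${slug(step)}.baseline.png`;
}

console.log(baselineFileName('chromium', 'checkout flow', 'cart-page'));
// "chromium--checkout-flow--cart-page.baseline.png"
```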

5. Run your tests

# First run — baselines are created automatically, all tests pass
BUGHUNTERS_VISION_TOKEN=bhv_... npx playwright test

# Second run — Fast Pixel Match first; AI only called if pixels differ
npx playwright test

# Force-update all baselines after an intentional UI change
BHV_UPDATE_BASELINE=true npx playwright test

# Update a single baseline — delete it manually, then run once
rm bhv-snapshots/chromium--checkout-flow--cart-page.baseline.png
npx playwright test --grep "checkout flow"

6. View the HTML report

open bhv-report/index.html      # macOS
start bhv-report/index.html     # Windows
xdg-open bhv-report/index.html  # Linux

Terminal output

BugHunters Vision keeps terminal output compact and linear — one line per step, no duplicates:

📸 [BugHunters Vision] Baseline created: cart-page
✅ [BugHunters Vision] FAST PIXEL MATCH: shipping-form
🔍 [BugHunters Vision] AI evaluating: payment-page (12 pixel(s) differ)…
🤖 [BugHunters Vision] AI PASS: payment-page
⚠️  [BugHunters Vision] AI error: header — Claude AI is temporarily overloaded.
⏭  [BugHunters Vision] off: footer

When a check fails, a single clear error is thrown (no repeated messages):

Error: [BugHunters Vision] Visual regression detected in step "payment-page": Button label changed from "Pay now" to "Subscribe".

At the end of every run, the summary shows both test and step counts:

✅ BugHunters Vision Report generated:
   📄 /path/to/bhv-report/index.html
   ✅ 3 passed  ❌ 1 failed  (4 tests · 12 checks total)

IDE & VS Code integration

BugHunters Vision is fully compatible with the VS Code Playwright extension and Playwright UI mode.

  • Tests appear normally in the VS Code Test Explorer sidebar
  • npx playwright test --list works as expected
  • npx playwright test --ui works as expected

This works because vision is a standalone object — not a custom fixture — so the native import { test } from '@playwright/test' is preserved.


Multi-project support

When the same test runs across multiple Playwright projects (e.g. chromium, firefox, android, iOS), BugHunters Vision automatically handles each project independently:

  • Separate baselines per project — widget-chromium--checkout-flow--cart-page.baseline.png, widget-firefox--checkout-flow--cart-page.baseline.png, etc. Mobile and desktop layouts never share a baseline.
  • Separate rows in the HTML report — 1 test × 6 projects = 6 rows, not 1.
  • Project badge on each row — [widget-chromium], [widget-firefox], etc. are shown as badges in the report.
[widget-chromium]  checkout flow    [PASS]  ›  4 visual checks · all passed
[widget-firefox]   checkout flow    [PASS]  ›  4 visual checks · all passed
[widget-android]   checkout flow    [FAIL]  ›  4 visual checks · 1 failed
[widget-iOS]       checkout flow    [PASS]  ›  4 visual checks · all passed

No configuration needed — project isolation is automatic from the Playwright project name.


Configuration reference

Options can be set in playwright.config.ts under use: or via environment variables. The use: block takes precedence; env vars are the fallback.
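That precedence rule reduces to a one-line lookup. The sketch below is ours (the helper name is an assumption), shown only to make the resolution order concrete:

```typescript
// Sketch: use:-block value wins, env var is the fallback, then the default.
function resolveOption(
  useValue: string | undefined,
  envValue: string | undefined,
  defaultValue: string,
): string {
  return useValue ?? envValue ?? defaultValue;
}

console.log(resolveOption('strict', 'ai', 'ai'));       // "strict" — use: wins
console.log(resolveOption(undefined, 'off', 'ai'));     // "off" — env var fallback
console.log(resolveOption(undefined, undefined, 'ai')); // "ai" — documented default
```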

Credit consumption

| Scenario | Credits consumed |
|---|---|
| Screenshots are pixel-identical (Fast Pixel Match) | 0 — no AI call made |
| Pixels differ → AI evaluates | 1 credit per vision.check() call |
| First run — baseline created | 0 — no comparison performed |

Each token comes with a credit balance shown after every run. When credits run low (10% remaining), you receive an automatic email alert. Top up or upgrade at bughunters.dev.
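Per the table above, only checks that actually reach the AI cost a credit — a rule simple enough to sketch (the type and function names here are our own, not part of the package's API):

```typescript
// Sketch: credits consumed by one run = number of AI-evaluated checks.
type CheckOutcome = 'baseline-created' | 'pixel-match' | 'ai-evaluated';

function creditsForRun(outcomes: CheckOutcome[]): number {
  return outcomes.filter(o => o === 'ai-evaluated').length;
}

console.log(creditsForRun(['baseline-created', 'pixel-match', 'ai-evaluated', 'ai-evaluated'])); // 2
```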


| Option in use: | Environment variable | Default | Description |
|---|---|---|---|
| bvMode | BHV_MODE | 'ai' | Comparison mode: ai · strict · off |
| bvFailMode | BHV_FAIL_MODE | 'hard' | Fail mode: hard · soft (see below) |
| bvDataTolerance | BHV_DATA_TOLERANCE | 'tolerant' | AI data tolerance: tolerant · strict (see below) |
| bvSnapshotsDir | BHV_SNAPSHOTS_DIR | './bhv-snapshots' | Where to store PNG snapshots |
| bvUpdateBaseline | BHV_UPDATE_BASELINE | false | Set to true to force-overwrite all baselines |
| bvApiUrl | BUGHUNTERS_VISION_API_URL | (production) | Override API endpoint (e.g. for self-hosted) |
| bvToken | BUGHUNTERS_VISION_TOKEN | — | Your API token — always use env var, never commit |

Note: Because vision is a standalone object (not a Playwright fixture), the use: options are not TypeScript-typed. Add as Record<string, unknown> to the use block to suppress TS errors.

Comparison modes

| Mode | Behaviour |
|---|---|
| ai (default) | Fast Pixel Match first → AI evaluation only if pixels differ |
| strict | Fail immediately on any pixel difference — no AI call |
| off | Skip all visual checks — every test passes instantly |

BHV_MODE=strict npx playwright test
BHV_MODE=off    npx playwright test

Soft assertions (bvFailMode)

By default (hard), a visual regression immediately stops the test — just like a standard Playwright assertion. Switch to soft to let long E2E tests run to completion even when a visual check fails; Playwright will still mark the test as failed at the end.

| Mode | Behaviour |
|---|---|
| hard (default) | Stops the test on first visual failure |
| soft | Queues the failure via expect.soft() — test continues, marked FAIL at the end |

// playwright.config.ts
use: {
  bvFailMode: 'soft',
} as Record<string, unknown>,
BHV_FAIL_MODE=soft npx playwright test

When soft mode is active, each failed step prints a yellow line in the terminal:

🟡 [BugHunters Vision] SOFT FAIL: payment-page — Button label changed from "Pay now" to "Subscribe".
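The hard/soft distinction boils down to throw-now versus collect-and-continue. The sketch below is our illustration of that contract (names assumed); in the real package the soft path defers via Playwright's expect.soft() rather than returning an array:

```typescript
// Sketch: hard mode throws on the first failure; soft mode queues failures.
function runChecks(results: { step: string; pass: boolean }[], failMode: 'hard' | 'soft'): string[] {
  const failures: string[] = [];
  for (const r of results) {
    if (r.pass) continue;
    if (failMode === 'hard') {
      throw new Error(`Visual regression detected in step "${r.step}"`); // stops the test
    }
    failures.push(r.step); // soft: remember it, keep running remaining steps
  }
  return failures;
}
```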

AI data tolerance (bvDataTolerance)

The most powerful option for teams testing against shared staging environments where live data changes between runs.

| Value | AI behaviour |
|---|---|
| tolerant (default) | A wide range of real-world variation is ignored automatically — see tables below. Only structural regressions get flagged. |
| strict | Flag all differences including numeric values. Use when testing against a controlled environment with deterministic data. |

# Override in CI without changing config:
BHV_DATA_TOLERANCE=strict npx playwright test

What tolerant mode ignores automatically (no custom prompt needed):

| Category | Examples |
|---|---|
| Dynamic metrics | Chart bar heights, counters, percentages, timestamps, usernames |
| GDPR / cookie consent | Cookie banners that appear on fresh CI sessions (cookies cleared) |
| Maintenance banners | "Scheduled downtime Saturday 02:00 UTC" bars at top/bottom of page |
| 3rd-party widgets | Chatbot agents (Intercom, Zendesk), rotating support avatars, live-chat status |
| Global layout shifts | Entire page pushed down by a new informational banner — content intact |

What the AI always catches regardless of mode (FAIL even without a custom prompt):

| Category | Examples |
|---|---|
| Technical artifacts | €NaN, undefined, [Object object], broken image placeholders |
| Action button icon changes | Deploy arrow → Trash/Delete icon (semantic regression — dangerous) |
| Missing core structure | Header absent, primary CTA missing, main content card gone |

Why this matters: Most visual testing tools break on shared staging environments because counters tick, charts refresh, and CI browsers have no cookies — triggering false failures on every GDPR banner and maintenance notice. With tolerant mode, the AI ignores all expected environmental variation and catches only real regressions. In practice, vision.check(page, 'name') with no custom prompt is sufficient for ~70% of production use cases.


HTML Report

After every run a report is generated at bhv-report/index.html. No upload, no cloud — everything is local.

The report groups results by Playwright test (not by individual visual checks), matching the hierarchy you know from the native Playwright HTML report.

Report structure

✅ checkout flow  [widget-chromium]     3 visual checks · all passed    ›
   ✅ cart-page          PIXEL · PASS
   ✅ shipping-form      AI    · PASS
   ✅ payment-page       AI    · PASS

❌ checkout flow  [widget-android]      3 visual checks · 1 failed      ›
   ✅ cart-page          PIXEL · PASS
   ✅ shipping-form      PIXEL · PASS
   ❌ payment-page       AI    · FAIL   ›
      "Button label changed from 'Pay now' to 'Subscribe'."
      [Baseline]  [Current]  [⚡ Diff]

Features

  • Summary bar — counts Playwright tests (not individual checks): Total · Passed · Failed · Errors · Baselines
  • Filter — All / Passed / Failed / Errors — operates at the test level
  • Two-level accordion — click a test row to expand its visual checks; click a check to expand verdict + images
  • Project badge — [widget-chromium], [widget-android], etc. shown per test row when multiple projects are used
  • Baseline / Current / Diff tab view per check — client-side pixel diff rendered in browser, click any image for fullscreen zoom
  • AI verdict with human-readable explanation for every AI-evaluated check
  • Method badge — AI · PIXEL · NEW — shows how each check was evaluated
  • "📸 Pass screenshots" toggle — by default, passed checks show only a verdict note; click the toggle in the report header to reveal baseline (and current, for AI-evaluated) screenshots for passed steps
  • Light / Dark / System theme toggle, persisted in localStorage

Security model

  • Zero cloud storage — images live only in RAM for the duration of one API request and are never written to disk on our side.
  • Local baselines — all baseline PNGs stay on your machine or CI runner. We never see your UI.
  • Token isolation — each customer token is scoped to their own credit bucket. Our Anthropic key never leaves our infrastructure.
  • InfoSec approved — suitable for banking, healthcare, and enterprise environments where screenshots cannot leave the perimeter.

Automated Security Scanners (Socket.dev / Snyk)

Automated supply-chain scanners may flag this package for 'Network Access', 'File System Access', and 'Environment Variable Access'. This is fully intended behavior:

  • Network: We send Base64 image diffs to bugvision-backend.vercel.app for AI evaluation.
  • File System: We read/write baseline images to ./bhv-snapshots and generate HTML reports to ./bhv-report.
  • Environment: We securely read BUGHUNTERS_VISION_TOKEN from your environment to authenticate the API.

Native Playwright integration

BugHunters Vision results appear directly in the native Playwright HTML report as annotations and attachments:

  • BugHunters Vision - <step>: PASS/FAIL/ERROR — <reason> annotation per visual check
  • bhv-baseline [<step>] and bhv-current [<step>] screenshot attachments per visual check

CI/CD integration

BugHunters Vision works in any CI/CD system that runs Playwright — GitHub Actions, GitLab CI, Bitbucket Pipelines, CircleCI, Jenkins, and more.

Baseline strategy

Commit your bhv-snapshots/ directory to Git. Every CI runner checks out the same baselines automatically — no cache, no external storage, no flaky state.

# On your machine, create baselines once:
BHV_UPDATE_BASELINE=true npx playwright test

# Commit them:
git add bhv-snapshots/ && git commit -m "chore: update BHV baselines"

GitHub Actions — Step Summary

After each run, bhv-report/summary.md is generated. One line publishes it as a rich Markdown summary directly in the workflow:

- name: Publish BHV summary
  if: always()
  run: cat bhv-report/summary.md >> $GITHUB_STEP_SUMMARY

- name: Upload BHV report
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: bhv-report
    path: bhv-report/

The summary shows: total checks · pixel match passes (free) · AI passes · failures with AI reasons · method badges.

Correct exit codes

Use continue-on-error: true on the test step so the report is always uploaded, then explicitly fail the job if regressions were detected:

- name: Run visual tests
  id: tests
  continue-on-error: true
  run: npx playwright test
  env:
    BUGHUNTERS_VISION_TOKEN: ${{ secrets.BUGHUNTERS_VISION_TOKEN }}

# ... upload artifact ...

- name: Fail if regressions detected
  if: steps.tests.outcome == 'failure'
  run: exit 1

Cross-OS baselines

By default, baselines are named <platform>--<test>--<step>.png (e.g. linux--checkout--cart-page.png). This prevents Fast Pixel Match failures when Mac developers commit baselines that differ from Linux CI runners due to font rendering.

To disable and use the old format: set bvLegacyNaming: true in playwright.config.ts or BHV_LEGACY_NAMING=true in your environment.
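A minimal sketch of the platform-prefixed scheme (the helper name is our assumption; the real package may derive the key differently): keying the baseline by `os.platform()` means macOS-generated and Linux-generated baselines live side by side instead of overwriting each other.

```typescript
// Sketch: prefix the baseline key with the current platform,
// so font-rendering differences across OSes never collide.
import * as os from 'node:os';

function crossOsBaselineName(test: string, step: string): string {
  return `${os.platform()}--${test}--${step}.png`; // e.g. linux--checkout--cart-page.png
}
```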


Links