
@mr_ozio/playwright-stealth v1.0.0 (published)

Playwright-native stealth with stronger modern detector coverage than the legacy plugin stack.

Downloads: 140

Playwright Stealth

@mr_ozio/playwright-stealth is a stealth plugin for Playwright. It patches common automation leaks in Chromium, applies fixes before the first navigation, and helps Playwright sessions look less synthetic on public bot detectors and real sites.

It focuses on the parts that usually matter in practice: request headers and client hints, worker contexts, Playwright-specific bindings, and browser surfaces that still light up on modern detection pages.

Requirements

  • Node.js >=18
  • playwright ^1.59.1
  • ESM-only package surface: use import; CommonJS require() is not supported
  • Chromium-focused stealth coverage; non-Chromium browsers can still use the wrapper, but CDP-backed evasions only apply on Chromium

Benchmarks

Checked-in benchmark snapshot from this repo, last updated on 2026-04-15:

| Detector | Baseline | @mr_ozio/playwright-stealth |
| --- | --- | --- |
| Rebrowser bot detector | vanilla Playwright: 5 red / 4 green / 1 neutral | 0 red / 9 green / 1 neutral |
| AreYouHeadless | fresh headless Chromium is detected | You are not Chrome headless |
| Device & Browser Info | baseline not tracked | You are human! (0 signals) |
| Pixelscan bot check | baseline not tracked | You're Definitely a Human (0 signals) |
| CreepJS headless line | previous reference: 50% like headless | 0% headless, 6% like headless, 100% stealth |
| Sannysoft key rows | baseline passes key rows | current package also passes key rows |

What that means in practice:

  • It closes the obvious fresh-Playwright leaks that public detectors still catch.
  • It stays clean on the important Sannysoft and AreYouHeadless checks.
  • It performs strongly on the current Rebrowser and CreepJS routes.

Treat this table as a reproducible snapshot, not a standing promise that every detector will stay unchanged over time.

Full checked-in comparison: benchmark docs

What This Package Covers

The package is built around the leaks that still show up in real Playwright runs.

  • First-request correctness: the package fixes the very first document request, modern UA-CH headers, fullVersionList, and Accept-Language, so pages do not see a patched JS surface with stale network headers. Basis: tests/network.spec.ts
  • Real Playwright lifecycle coverage: stealth is bound through browser.newPage(), browser.newContext(), launchPersistentContext(), connect(), and connectOverCDP() paths, instead of assuming a Puppeteer-style plugin lifecycle. Basis: src/add-extra.ts, tests/network.spec.ts, tests/stealth.spec.ts
  • Worker coverage: dedicated workers, shared workers, service workers, and worker blobs inherit the stealth payload instead of leaking a cleaner browser surface only on the main page. Basis: src/stealth/workers.ts, tests/workers.spec.ts
  • Playwright binding hardening: it hides obvious Playwright globals and makes context.exposeFunction() bindings look much less synthetic, including non-enumerable bindings and native-looking toString(). Basis: src/stealth/expose-function.ts, tests/stealth.spec.ts
  • Cleaner stack traces: it strips UtilityScript and related evaluation frames from page-visible errors, which are specific to Playwright's execution model rather than old Puppeteer fingerprints. Basis: src/stealth.ts, tests/stealth.spec.ts
  • Updated headless-adjacent surfaces: beyond classic webdriver and navigator.plugins, it also normalizes modern surfaces like speech voices, device memory, screen and outer window dimensions, PDF viewer exposure, media state, and more. Basis: tests/stealth.spec.ts
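The lifecycle coverage above can be illustrated with a minimal sketch of one way a wrapper can intercept page-creating calls so a stealth hook runs before the caller ever sees the page. This is an illustration of the pattern only, not the package's actual implementation: the `FakeBrowser` shape and the `applyStealth` hook are stand-in assumptions.

```typescript
// Sketch: wrap a browser-like object so that page-creating methods run a
// stealth hook before the caller sees the page. `applyStealth` is a
// placeholder for the real per-page CDP/init-script patching.
type FakePage = { patched: boolean }
type FakeBrowser = { newPage(): Promise<FakePage> }

async function applyStealth(page: FakePage): Promise<void> {
  page.patched = true // stand-in for the real patching work
}

function withStealth<T extends FakeBrowser>(browser: T): T {
  return new Proxy(browser, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver)
      if (prop === 'newPage' && typeof value === 'function') {
        return async (...args: unknown[]) => {
          const page = await value.apply(target, args)
          await applyStealth(page) // hook lands before the page is returned
          return page
        }
      }
      return value
    }
  })
}
```

The same interception idea extends to newContext(), launchPersistentContext(), and the connect paths listed above; the real package binds all of them.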

Install

npm install playwright @mr_ozio/playwright-stealth

Usage

import { chromium } from 'playwright'
import { stealth } from '@mr_ozio/playwright-stealth'

const browser = await stealth(chromium).launch({
  headless: true
})

const context = await browser.newContext()
const page = await context.newPage()
await page.goto('https://example.com')

With playwright-extra:

import { chromium } from 'playwright-extra'
import { StealthPlugin } from '@mr_ozio/playwright-stealth'

chromium.use(StealthPlugin())

const browser = await chromium.launch({
  headless: true
})

Notes

  • stealth(chromium, options) is the native Playwright wrapper API.
  • StealthPlugin(options) is the explicit plugin factory for playwright-extra and chromium.use(...).
  • sourceurl is kept as a no-op for config compatibility because Playwright does not expose Puppeteer's old identifying sourceURL path.
  • The wrapper makes context.newPage() wait for page-level stealth hooks so CDP-based patches apply before the first navigation.

Tuning

import { chromium } from 'playwright'
import { stealth } from '@mr_ozio/playwright-stealth'

const browser = await stealth(chromium, {
  disabledEvasions: ['navigator.plugins'],
  evasionOptions: {
    'navigator.languages': {
      languages: ['de-DE', 'de']
    },
    'webgl.vendor': {
      vendor: 'Intel Inc.',
      renderer: 'Intel Iris OpenGL Engine'
    }
  }
}).launch()

Verification

Quick local regression suite:

npm test

Full verification loop:

npm run verify

External detector run with screenshots and JSON report:

npm run detectors

Artifacts are written to experiments/artifacts/detectors/<timestamp>/. The checked-in summary lives in benchmark docs. Each scenario bundle also includes route-plan.json and checkpoints.json so seeded route changes and cumulative score deltas are inspectable after the run.
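Folding the per-checkpoint data into cumulative scores takes only a few lines; the `{ label, scoreDelta }` shape below is an assumed illustration, not the runner's documented checkpoints.json schema.

```typescript
// Sketch: fold per-checkpoint score deltas into a running cumulative
// score per checkpoint label. The { label, scoreDelta } shape is an
// assumption for illustration, not the runner's documented schema.
interface Checkpoint {
  label: string
  scoreDelta: number
}

function cumulativeScores(checkpoints: Checkpoint[]): Array<[string, number]> {
  let total = 0
  return checkpoints.map((cp) => {
    total += cp.scoreDelta
    return [cp.label, total] as [string, number]
  })
}
```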

The detector runner supports matrix inputs through environment variables:

DETECTOR_HEADLESS=true,false \
DETECTOR_CHANNEL=default,chrome \
DETECTOR_PERSISTENT=false,true \
node experiments/scripts/run-detectors.mjs
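Each comma-separated variable expands the run matrix multiplicatively; a hedged sketch of that cartesian expansion (the runner's actual parsing may differ):

```typescript
// Sketch: expand comma-separated env values into a cartesian run matrix,
// mirroring how the DETECTOR_* variables combine. Parsing details are
// an assumption; only the multiplicative expansion is the point.
function expandMatrix(vars: Record<string, string>): Array<Record<string, string>> {
  return Object.entries(vars).reduce<Array<Record<string, string>>>(
    (combos, [key, csv]) =>
      csv.split(',').flatMap((value) =>
        combos.map((combo) => ({ ...combo, [key]: value.trim() }))
      ),
    [{}]
  )
}
```

With the three two-value variables shown above, this yields 2 × 2 × 2 = 8 detector runs.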

Seeded route planning and additive wait jitter for experiment runs:

node experiments/scripts/run-detectors.mjs \
  --seed exp-004-a \
  --route-mode seeded \
  --wait-jitter-ms 750

Explicit detector ordering for route A/B experiments:

node experiments/scripts/run-detectors.mjs \
  --detector-order deviceandbrowserinfo,areyouheadless,creepjs,sannysoft,pixelscan,rebrowser

Small seed sweep with an aggregate summary:

npm run benchmark:seeds -- --count 5 --seed-prefix exp-002 --wait-jitter-ms 750

Local behavioral lab with per-action checkpoints:

npm run benchmark:behavior

That local route now walks through dedicated worker, shared worker, same-origin iframe, cross-origin iframe, and popup probes so action-level context transitions can be inspected without re-running the public detector suite after every click.

Opt-in external challenge lane bootstrap:

npm run benchmark:external:init

That bootstrap now creates both experiments/scripts/external-benchmark-targets.local.json and .env.local if they do not already exist.

Planning the external lane with the local catalog:

npm run benchmark:external:proxy -- --server http://127.0.0.1:8080 --cohort proxy-cohort-a
npm run benchmark:external -- --list --include-disabled
npm run benchmark:external -- --dry-run --include-disabled
npm run benchmark:external:validate
npm run benchmark:external:validate -- --target managed-challenge-lab --strict

Updating one local target without hand-editing JSON:

npm run benchmark:external:target -- \
  --target managed-challenge-lab \
  --url https://example.com/approved-route \
  --enable true \
  --approval approved \
  --owner maintainer-team \
  --proxy-cohort proxy-cohort-a

If you want the external lane to run the new popup plus cross-origin iframe validation after load, add:

npm run benchmark:external:target -- \
  --target managed-challenge-lab \
  --interaction-profile context-probes-v1

When the first approved target is ready, compare load-only versus context-probes-v1 in one run:

npm run benchmark:external:compare -- --target managed-challenge-lab

For challenge routes that need to survive the interstitial and only then run popup plus iframe probes in the same session:

npm run benchmark:external:clearance -- \
  --target managed-challenge-lab \
  --clearance-wait-ms 20000

That runner keeps one browser context alive, polls the page until challenge indicators disappear or the timeout expires, and only then executes the configured post-clearance interaction profile.
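That poll-until-cleared behavior can be sketched as a generic helper; `looksLikeChallenge` and its indicator strings are assumptions standing in for the runner's real checks, and `readTitle` abstracts over the live page.

```typescript
// Sketch: poll a probe until challenge indicators disappear or the
// timeout expires. The indicator strings are illustrative assumptions,
// not the runner's actual heuristics.
function looksLikeChallenge(pageTitle: string): boolean {
  return /just a moment|checking your browser/i.test(pageTitle)
}

async function waitForClearance(
  readTitle: () => Promise<string>,
  timeoutMs: number,
  intervalMs = 500
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    if (!looksLikeChallenge(await readTitle())) return true // cleared
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  return false // timeout expired while still on the challenge
}
```

Only after this returns true would the configured post-clearance interaction profile run, inside the same still-alive context.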

If you want to preserve a manually cleared session and replay it later:

npm run benchmark:external:clearance -- \
  --target managed-challenge-lab \
  --headless false \
  --clearance-wait-ms 120000 \
  --save-storage-state experiments/artifacts/external-benchmarks/manual-clearance-state.json

Then reuse that state on the next run:

npm run benchmark:external:clearance -- \
  --target managed-challenge-lab \
  --storage-state experiments/artifacts/external-benchmarks/manual-clearance-state.json \
  --clearance-wait-ms 5000

To compare a real cleared-browser manual report against one automation artifact:

npm run benchmark:external:manual-compare -- \
  --manual-report https://challenge.oyaebu.ru/api/manual-reports/<report-id> \
  --automation-result /absolute/path/to/probe/result.json

The compare summary also reports persona mismatch count, which excludes expected host-only drift like page.href and page.title.
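A hedged sketch of that mismatch counting; the flat key/value report shape and the exact host-only key list are assumptions for illustration, not the compare tool's real schema.

```typescript
// Sketch: count persona mismatches between a manual report and an
// automation artifact, skipping keys that are expected to drift with
// the host. The flat record shape and ignore list are assumptions.
const HOST_ONLY_KEYS = new Set(['page.href', 'page.title'])

function countPersonaMismatches(
  manual: Record<string, unknown>,
  automation: Record<string, unknown>
): number {
  let mismatches = 0
  for (const key of Object.keys(manual)) {
    if (HOST_ONLY_KEYS.has(key)) continue // expected host-only drift
    if (manual[key] !== automation[key]) mismatches++
  }
  return mismatches
}
```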

To summarize several real manual reports as a cohort and see whether automation still fits inside that real-world spread:

npm run benchmark:external:manual-cohort -- \
  --manual-report https://challenge.oyaebu.ru/api/manual-reports/<id-1>,https://challenge.oyaebu.ru/api/manual-reports/<id-2> \
  --automation-result /absolute/path/to/probe/result.json

That summary is useful once you start collecting reports from different real browsers, devices, or languages. It tells you where the cohort naturally varies and which reports, if any, still yield persona mismatch count: 0 against the current automation artifact.

Current scope decision:

  • Treat the default stealth persona as a desktop-chromium baseline, not as a universal browser impersonation layer.
  • Do not treat Android, Safari, or Firefox reports as short-term parity targets for this baseline.
  • If we ever pursue those families, they should be explicit opt-in research branches with their own artifacts and acceptance criteria.
  • Measure browser-surface stealth separately from edge or server-side decisions: TLS/JA4, HTTP/2 fingerprints, IP or ASN reputation, and vendor threat intelligence are outside this package's direct control and should be evaluated through the external benchmark lane.

For the oyaebu stand, there is now also a same-host automation target:

npm run benchmark:external -- \
  --target oyaebu-challenge-post-clearance-probes \
  --no-proxy \
  --config experiments/scripts/external-benchmark-targets.local.json

That route lands directly on https://challenge.oyaebu.ru/post-clearance-probes-v1, so comparing it against a manual post-clearance report can eliminate even the expected host-only drift.

If challenge clearance only works in your real browser, use the handoff helper instead:

npm run benchmark:external:manual-handoff -- \
  --open-url https://challenge.oyaebu.ru/clearance-gate-v1 \
  --hostname challenge.oyaebu.ru \
  --route-id post-clearance-probes-v1 \
  --wait-ms 180000 \
  --compare-automation-result /absolute/path/to/probe/result.json

That command can open the route in your default browser, use the latest matching report as a baseline watermark, wait for the next fresh manual report from the stand, and immediately emit a compare bundle in experiments/artifacts/external-benchmarks.

If .env.local has a proxy configured but you want to hit an owned target directly from the current network, add --no-proxy.

To replay a previously saved browser storage state through the non-browser request-control lane on an owned route:

npm run benchmark:external:request-control -- \
  --target managed-challenge-lab \
  --storage-state experiments/artifacts/external-benchmarks/manual-clearance-state.json \
  --no-proxy

That request-control replay only seeds cookies from the Playwright storage-state file. It is meant to answer a narrower continuity question than browser replay: whether the route still progresses when the browser session cookie is reduced to plain request cookies.
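In that reduced mode the storage-state cookies effectively collapse into a plain Cookie header. A sketch of that reduction, assuming the standard Playwright storage-state cookie fields (name, value, domain, expires); this mirrors the idea, not the lane's exact implementation.

```typescript
// Sketch: reduce Playwright storage-state cookies to a plain request
// Cookie header for one hostname, dropping expired and off-host cookies.
// Only the fields used here are modeled; real storage state has more.
interface StorageStateCookie {
  name: string
  value: string
  domain: string
  expires: number // unix seconds; -1 means a session cookie
}

function cookieHeaderFor(
  cookies: StorageStateCookie[],
  hostname: string,
  nowSeconds = Date.now() / 1000
): string {
  return cookies
    .filter((c) => c.expires === -1 || c.expires > nowSeconds)
    .filter((c) => {
      const domain = c.domain.replace(/^\./, '')
      return hostname === domain || hostname.endsWith('.' + domain)
    })
    .map((c) => `${c.name}=${c.value}`)
    .join('; ')
}
```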

If the route only clears in your normal browser, export just the target cookies from DevTools or a cookie extension and turn them into a minimal Playwright storage-state file:

npm run benchmark:external:cookie-import -- \
  --input ~/Downloads/challenge-oyaebu-cookies.json \
  --hostname challenge.oyaebu.ru \
  --output experiments/artifacts/external-benchmarks/oyaebu-imported-state.json

Then replay only that host-relevant state through request-control:

npm run benchmark:external:request-control -- \
  --target oyaebu-challenge-clearance-gate \
  --storage-state experiments/artifacts/external-benchmarks/oyaebu-imported-state.json \
  --no-proxy

The importer accepts either a Playwright-style storage-state JSON, a plain JSON cookie array copied from browser tooling, or a Netscape cookies.txt export. It filters the result down to cookies actually relevant to the target hostname so you do not have to drag an entire browser profile into the repo.
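The cookies.txt half of that conversion can be sketched in a few lines. The tab-separated field order below is the standard Netscape format (domain, include-subdomains flag, path, secure flag, expiry, name, value); the output shape loosely targets Playwright's storage-state cookie fields and is a simplification.

```typescript
// Sketch: parse one Netscape cookies.txt line into a Playwright-style
// storage-state cookie. Field order per the Netscape format:
// domain \t includeSubdomains \t path \t secure \t expires \t name \t value
interface ParsedCookie {
  name: string
  value: string
  domain: string
  path: string
  expires: number
  secure: boolean
}

function parseNetscapeLine(line: string): ParsedCookie | null {
  if (line.startsWith('#') || line.trim() === '') return null // comment/blank
  const fields = line.split('\t')
  if (fields.length < 7) return null // not a cookie record
  const [domain, , path, secure, expires, name, value] = fields
  return {
    name,
    value,
    domain,
    path,
    expires: Number(expires),
    secure: secure === 'TRUE'
  }
}
```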

If you want the fuller live browser session state instead of a cookie export, launch a normal Chrome with remote debugging enabled, clear the route manually, and capture the matching tab context through CDP:

open -na "Google Chrome" --args --remote-debugging-port=9222

npm run benchmark:external:cdp-state -- \
  --cdp-url http://127.0.0.1:9222 \
  --hostname challenge.oyaebu.ru \
  --url-contains post-clearance-probes-v1 \
  --output experiments/artifacts/external-benchmarks/oyaebu-cdp-state.json \
  --summary experiments/artifacts/external-benchmarks/oyaebu-cdp-state-summary.md

That capture still writes a reduced host-relevant storage-state bundle, but it starts from the live browser context after manual clearance instead of from a manual cookie export.

Promoting one target to live-ready and failing fast if anything is still missing:

npm run benchmark:external:target -- \
  --target managed-challenge-lab \
  --url https://owned-or-approved.example \
  --owner maintainer-team \
  --proxy-cohort proxy-cohort-a \
  --ready \
  --strict

First live external run, after replacing local placeholder URLs with owned or approved targets:

BENCHMARK_PROXY_SERVER=http://127.0.0.1:8080 \
BENCHMARK_PROXY_USERNAME=my-user \
BENCHMARK_PROXY_PASSWORD=my-pass \
BENCHMARK_PROXY_COHORT=proxy-cohort-a \
npm run benchmark:external -- --target managed-challenge-lab

The runner prefers experiments/scripts/external-benchmark-targets.local.json when it exists and refuses live runs against disabled or placeholder example.invalid targets. The checked-in .env.example shows the proxy variable names used by the external lane. benchmark:external:validate now also tells you whether the current proxy config and proxy cohort make any live-ready target actually runnable.

Roadmap

This repo is also set up for repeatable stealth experiments, not just one-off fixes.

Quick corpus checks:

npm run check:heuristics
npm run verify:heuristics
npm run benchmark:external:challenge-corpus:update
npm run benchmark:external:challenge-corpus:replay:update
npm run benchmark:external:challenge-corpus:check

The GitHub Actions workflow now runs verify:heuristics as a dedicated heuristics job alongside the broader verify job.

CI Signals

  • heuristics: lightweight safety net for challenge heuristics, corpus replay, and negative-fixture false-positive checks.
  • verify: broader project validation path for typecheck, Playwright tests, package smoke checks, and tarball inspection.

License

MIT. See LICENSE.