Playwright Stealth
Playwright-native stealth with stronger modern detector coverage than the legacy plugin stack.
@mr_ozio/playwright-stealth is a stealth plugin for Playwright. It patches common automation leaks in Chromium, applies fixes before the first navigation, and helps Playwright sessions look less synthetic on public bot detectors and real sites.
It focuses on the parts that usually matter in practice: request headers and client hints, worker contexts, Playwright-specific bindings, and browser surfaces that still light up on modern detection pages.
Requirements
- Node.js >= 18
- playwright ^1.59.1
- ESM-only package surface via `import`, not CommonJS `require()`
- Chromium-focused stealth coverage; non-Chromium browsers can still use the wrapper, but CDP-backed evasions only apply on Chromium
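Because the package is ESM-only, CommonJS projects cannot `require()` it directly; the standard Node workaround is a dynamic `import()`. A minimal sketch:
```js
// Inside a CommonJS module: require() fails for ESM-only packages,
// but dynamic import() works.
async function main() {
  const { chromium } = await import('playwright')
  const { stealth } = await import('@mr_ozio/playwright-stealth')
  const browser = await stealth(chromium).launch({ headless: true })
  await browser.close()
}

main().catch(console.error)
```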
Benchmarks
Checked-in benchmark snapshot from this repo, last updated on 2026-04-15:
| Detector | Baseline | @mr_ozio/playwright-stealth |
| --- | --- | --- |
| Rebrowser bot detector | vanilla Playwright: 5 red / 4 green / 1 neutral | 0 red / 9 green / 1 neutral |
| AreYouHeadless | fresh headless Chromium is detected | You are not Chrome headless |
| Device & Browser Info | baseline not tracked | You are human! (0 signals) |
| Pixelscan bot check | baseline not tracked | You're Definitely a Human (0 signals) |
| CreepJS headless line | previous reference: 50% like headless | 0% headless, 6% like headless, 100% stealth |
| Sannysoft key rows | baseline passes key rows | current package also passes key rows |
What that means in practice:
- It closes the obvious fresh-Playwright leaks that public detectors still catch.
- It stays clean on the important Sannysoft and AreYouHeadless checks.
- It performs strongly on the current Rebrowser and CreepJS routes.
Treat this table as a reproducible snapshot, not a standing promise that every detector will stay unchanged over time.
Full checked-in comparison: benchmark docs
What This Package Covers
The package is built around the leaks that still show up in real Playwright runs.
- First-request correctness: the package fixes the very first document request, modern UA-CH headers, `fullVersionList`, and `Accept-Language`, so pages do not see a patched JS surface with stale network headers. Basis: `tests/network.spec.ts`
- Real Playwright lifecycle coverage: stealth is bound through `browser.newPage()`, `browser.newContext()`, `launchPersistentContext()`, `connect()`, and `connectOverCDP()` paths, instead of assuming a Puppeteer-style plugin lifecycle. Basis: `src/add-extra.ts`, `tests/network.spec.ts`, `tests/stealth.spec.ts`
- Worker coverage: dedicated workers, shared workers, service workers, and worker blobs inherit the stealth payload instead of leaking a cleaner browser surface only on the main page. Basis: `src/stealth/workers.ts`, `tests/workers.spec.ts`
- Playwright binding hardening: it hides obvious Playwright globals and makes `context.exposeFunction()` bindings look much less synthetic, including non-enumerable bindings and native-looking `toString()` (see the probe sketch after this list). Basis: `src/stealth/expose-function.ts`, `tests/stealth.spec.ts`
- Cleaner stack traces: it strips `UtilityScript` and related evaluation frames from page-visible errors, which are specific to Playwright's execution model rather than old Puppeteer fingerprints. Basis: `src/stealth.ts`, `tests/stealth.spec.ts`
- Updated headless-adjacent surfaces: beyond classic `webdriver` and `navigator.plugins`, it also normalizes modern surfaces like speech voices, device memory, screen and outer window dimensions, PDF viewer exposure, media state, and more. Basis: `tests/stealth.spec.ts`
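A quick way to see the binding hardening from the page's side is an illustrative probe; this is not part of the package API, and the `fetchToken` binding name is hypothetical:
```js
import { chromium } from 'playwright'
import { stealth } from '@mr_ozio/playwright-stealth'

const browser = await stealth(chromium).launch({ headless: true })
const context = await browser.newContext()
// Hypothetical binding name, purely for illustration.
await context.exposeFunction('fetchToken', () => 'ok')

const page = await context.newPage()
await page.goto('https://example.com')
const probe = await page.evaluate(() => ({
  // A hardened binding should not show up as an enumerable window property...
  enumerable: Object.keys(window).includes('fetchToken'),
  // ...and its source should read like native code,
  // e.g. 'function fetchToken() { [native code] }'.
  source: window.fetchToken.toString()
}))
console.log(probe)
await browser.close()
```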
Install
```bash
npm install playwright @mr_ozio/playwright-stealth
```
Usage
```js
import { chromium } from 'playwright'
import { stealth } from '@mr_ozio/playwright-stealth'
const browser = await stealth(chromium).launch({
headless: true
})
const context = await browser.newContext()
const page = await context.newPage()
await page.goto('https://example.com')
```
With playwright-extra:
```js
import { chromium } from 'playwright-extra'
import { StealthPlugin } from '@mr_ozio/playwright-stealth'
chromium.use(StealthPlugin())
const browser = await chromium.launch({
headless: true
})
```
Notes
- `stealth(chromium, options)` is the native Playwright wrapper API.
- `StealthPlugin(options)` is the explicit plugin factory for `playwright-extra` and `chromium.use(...)`.
- `sourceurl` is kept as a no-op for config compatibility because Playwright does not expose Puppeteer's old identifying sourceURL path.
- The wrapper makes `context.newPage()` wait for page-level stealth hooks so CDP-based patches apply before the first navigation.
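Since the `connect()` and `connectOverCDP()` paths are covered too, the same entry point should work for remote sessions. A minimal sketch, assuming the wrapper proxies `connectOverCDP()` the same way it proxies `launch()`, and that a Chromium instance is already listening with remote debugging on port 9222:
```js
import { chromium } from 'playwright'
import { stealth } from '@mr_ozio/playwright-stealth'

// Hypothetical endpoint; point this at your own DevTools port.
const browser = await stealth(chromium).connectOverCDP('http://127.0.0.1:9222')
const context = browser.contexts()[0] ?? await browser.newContext()
const page = await context.newPage() // stealth hooks are awaited before the first navigation
await page.goto('https://example.com')
```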
Tuning
```js
import { chromium } from 'playwright'
import { stealth } from '@mr_ozio/playwright-stealth'
const browser = await stealth(chromium, {
disabledEvasions: ['navigator.plugins'],
evasionOptions: {
'navigator.languages': {
languages: ['de-DE', 'de']
},
'webgl.vendor': {
vendor: 'Intel Inc.',
renderer: 'Intel Iris OpenGL Engine'
}
}
}).launch()
```
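The same tuning should also be reachable through the plugin factory; a sketch assuming `StealthPlugin(options)` accepts the same options shape as `stealth(chromium, options)`:
```js
import { chromium } from 'playwright-extra'
import { StealthPlugin } from '@mr_ozio/playwright-stealth'

// Assumption: StealthPlugin(options) mirrors stealth(chromium, options).
chromium.use(StealthPlugin({
  disabledEvasions: ['navigator.plugins'],
  evasionOptions: {
    'navigator.languages': { languages: ['de-DE', 'de'] }
  }
}))

const browser = await chromium.launch({ headless: true })
```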
Verification
Quick local regression suite:
```bash
npm test
```
Full verification loop:
```bash
npm run verify
```
External detector run with screenshots and JSON report:
```bash
npm run detectors
```
Artifacts are written to `experiments/artifacts/detectors/<timestamp>/`.
The checked-in summary lives in benchmark docs.
Each scenario bundle also includes `route-plan.json` and `checkpoints.json` so
seeded route changes and cumulative score deltas are inspectable after the run.
The detector runner supports matrix inputs through environment variables:
```bash
DETECTOR_HEADLESS=true,false \
DETECTOR_CHANNEL=default,chrome \
DETECTOR_PERSISTENT=false,true \
node experiments/scripts/run-detectors.mjs
```
Seeded route planning and additive wait jitter for experiment runs:
```bash
node experiments/scripts/run-detectors.mjs \
--seed exp-004-a \
--route-mode seeded \
--wait-jitter-ms 750
```
Explicit detector ordering for route A/B experiments:
```bash
node experiments/scripts/run-detectors.mjs \
--detector-order deviceandbrowserinfo,areyouheadless,creepjs,sannysoft,pixelscan,rebrowser
```
Small seed sweep with an aggregate summary:
```bash
npm run benchmark:seeds -- --count 5 --seed-prefix exp-002 --wait-jitter-ms 750
```
Local behavioral lab with per-action checkpoints:
```bash
npm run benchmark:behavior
```
That local route now walks through dedicated worker, shared worker, same-origin iframe, cross-origin iframe, and popup probes so action-level context transitions can be inspected without re-running the public detector suite after every click.
Opt-in external challenge lane bootstrap:
```bash
npm run benchmark:external:init
```
That bootstrap now creates both `experiments/scripts/external-benchmark-targets.local.json` and `.env.local` if they do not already exist.
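A hypothetical `.env.local` sketch using the proxy variable names from the live-run example further down (the checked-in `.env.example` documents the real names):
```bash
# Hypothetical values; see .env.example for the variables the lane actually reads.
BENCHMARK_PROXY_SERVER=http://127.0.0.1:8080
BENCHMARK_PROXY_USERNAME=my-user
BENCHMARK_PROXY_PASSWORD=my-pass
BENCHMARK_PROXY_COHORT=proxy-cohort-a
```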
Planning the external lane with the local catalog:
```bash
npm run benchmark:external:proxy -- --server http://127.0.0.1:8080 --cohort proxy-cohort-a
npm run benchmark:external -- --list --include-disabled
npm run benchmark:external -- --dry-run --include-disabled
npm run benchmark:external:validate
npm run benchmark:external:validate -- --target managed-challenge-lab --strict
```
Updating one local target without hand-editing JSON:
```bash
npm run benchmark:external:target -- \
--target managed-challenge-lab \
--url https://example.com/approved-route \
--enable true \
--approval approved \
--owner maintainer-team \
--proxy-cohort proxy-cohort-a
```
If you want the external lane to run the new popup plus cross-origin iframe validation after load, add:
```bash
npm run benchmark:external:target -- \
--target managed-challenge-lab \
--interaction-profile context-probes-v1
```
When the first approved target is ready, compare load-only versus context-probes-v1 in one run:
```bash
npm run benchmark:external:compare -- --target managed-challenge-lab
```
For challenge routes that need to survive the interstitial and only then run popup plus iframe probes in the same session:
```bash
npm run benchmark:external:clearance -- \
--target managed-challenge-lab \
--clearance-wait-ms 20000
```
That runner keeps one browser context alive, polls the page until challenge indicators disappear or the timeout expires, and only then executes the configured post-clearance interaction profile.
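Conceptually, the clearance wait is a bounded poll. A simplified sketch of that loop, with a hypothetical challenge indicator (the real runner checks its own configured signals):
```js
// Simplified sketch of the runner's clearance poll, not the actual implementation.
// `page` is a Playwright Page already sitting on the challenge route.
async function waitForClearance(page, clearanceWaitMs) {
  const deadline = Date.now() + clearanceWaitMs
  while (Date.now() < deadline) {
    // Hypothetical indicator; substitute the challenge signals you care about.
    const stillChallenged = await page.evaluate(
      () => document.title.toLowerCase().includes('checking your browser')
    )
    if (!stillChallenged) return true // cleared: run the post-clearance profile next
    await page.waitForTimeout(1000)
  }
  return false // timeout expired while still challenged
}
```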
If you want to preserve a manually cleared session and replay it later:
```bash
npm run benchmark:external:clearance -- \
--target managed-challenge-lab \
--headless false \
--clearance-wait-ms 120000 \
--save-storage-state experiments/artifacts/external-benchmarks/manual-clearance-state.json
```
Then reuse that state on the next run:
```bash
npm run benchmark:external:clearance -- \
--target managed-challenge-lab \
--storage-state experiments/artifacts/external-benchmarks/manual-clearance-state.json \
--clearance-wait-ms 5000
```
To compare a real cleared-browser manual report against one automation artifact:
```bash
npm run benchmark:external:manual-compare -- \
--manual-report https://challenge.oyaebu.ru/api/manual-reports/<report-id> \
--automation-result /absolute/path/to/probe/result.json
```
The compare summary also reports a persona mismatch count, which excludes expected host-only drift like `page.href` and `page.title`.
To summarize several real manual reports as a cohort and see whether automation still fits inside that real-world spread:
```bash
npm run benchmark:external:manual-cohort -- \
--manual-report https://challenge.oyaebu.ru/api/manual-reports/<id-1>,https://challenge.oyaebu.ru/api/manual-reports/<id-2> \
--automation-result /absolute/path/to/probe/result.json
```
That summary is useful once you start collecting reports from different real browsers, devices, or languages. It tells you where the cohort naturally varies and which reports, if any, still yield a persona mismatch count of 0 against the current automation artifact.
Current scope decision:
- Treat the default stealth persona as a `desktop-chromium` baseline, not as a universal browser impersonation layer.
- Do not treat Android, Safari, or Firefox reports as short-term parity targets for this baseline.
- If we ever pursue those families, they should be explicit opt-in research branches with their own artifacts and acceptance criteria.
- Measure browser-surface stealth separately from edge or server-side decisions: TLS/JA4, HTTP/2 fingerprints, IP or ASN reputation, and vendor threat intelligence are outside this package's direct control and should be evaluated through the external benchmark lane.
For the oyaebu stand, there is now also a same-host automation target:
```bash
npm run benchmark:external -- \
--target oyaebu-challenge-post-clearance-probes \
--no-proxy \
--config experiments/scripts/external-benchmark-targets.local.json
```
That route lands directly on https://challenge.oyaebu.ru/post-clearance-probes-v1, so comparing it against a manual post-clearance report can eliminate even the expected host-only drift.
If challenge clearance only works in your real browser, use the handoff helper instead:
```bash
npm run benchmark:external:manual-handoff -- \
--open-url https://challenge.oyaebu.ru/clearance-gate-v1 \
--hostname challenge.oyaebu.ru \
--route-id post-clearance-probes-v1 \
--wait-ms 180000 \
--compare-automation-result /absolute/path/to/probe/result.json
```
That command can open the route in your default browser, use the latest matching report as a baseline watermark, wait for the next fresh manual report from the stand, and immediately emit a compare bundle in `experiments/artifacts/external-benchmarks`.
If `.env.local` has a proxy configured but you want to hit an owned target directly from the current network, add `--no-proxy`.
To replay a previously saved browser storage state through the non-browser request-control lane on an owned route:
```bash
npm run benchmark:external:request-control -- \
--target managed-challenge-lab \
--storage-state experiments/artifacts/external-benchmarks/manual-clearance-state.json \
--no-proxy
```
That request-control replay only seeds cookies from the Playwright storage-state file. It is meant to answer a narrower continuity question than browser replay: whether the route still progresses when the browser session is reduced to plain request cookies.
If the route only clears in your normal browser, export just the target cookies from DevTools or a cookie extension and turn them into a minimal Playwright storage-state file:
```bash
npm run benchmark:external:cookie-import -- \
--input ~/Downloads/challenge-oyaebu-cookies.json \
--hostname challenge.oyaebu.ru \
--output experiments/artifacts/external-benchmarks/oyaebu-imported-state.json
```
Then replay only that host-relevant state through request-control:
```bash
npm run benchmark:external:request-control -- \
--target oyaebu-challenge-clearance-gate \
--storage-state experiments/artifacts/external-benchmarks/oyaebu-imported-state.json \
--no-proxy
```
The importer accepts either a Playwright-style storage-state JSON, a plain JSON cookie array copied from browser tooling, or a Netscape cookies.txt export. It filters the result down to cookies actually relevant to the target hostname so you do not have to drag an entire browser profile into the repo.
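For reference, the output follows Playwright's standard storage-state shape; an illustrative example with a hypothetical cookie name and value:
```json
{
  "cookies": [
    {
      "name": "challenge_session",
      "value": "<cookie-value>",
      "domain": "challenge.oyaebu.ru",
      "path": "/",
      "expires": 1767225600,
      "httpOnly": true,
      "secure": true,
      "sameSite": "Lax"
    }
  ],
  "origins": []
}
```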
If you want the fuller live browser session state instead of a cookie export, launch a normal Chrome with remote debugging enabled, clear the route manually, and capture the matching tab context through CDP:
open -na "Google Chrome" --args --remote-debugging-port=9222
npm run benchmark:external:cdp-state -- \
--cdp-url http://127.0.0.1:9222 \
--hostname challenge.oyaebu.ru \
--url-contains post-clearance-probes-v1 \
--output experiments/artifacts/external-benchmarks/oyaebu-cdp-state.json \
--summary experiments/artifacts/external-benchmarks/oyaebu-cdp-state-summary.md
```
That capture still writes a reduced host-relevant storage-state bundle, but it starts from the live browser context after manual clearance instead of from a manual cookie export.
Promoting one target to live-ready and failing fast if anything is still missing:
```bash
npm run benchmark:external:target -- \
--target managed-challenge-lab \
--url https://owned-or-approved.example \
--owner maintainer-team \
--proxy-cohort proxy-cohort-a \
--ready \
--strict
```
First live external run, after replacing local placeholder URLs with owned or approved targets:
```bash
BENCHMARK_PROXY_SERVER=http://127.0.0.1:8080 \
BENCHMARK_PROXY_USERNAME=my-user \
BENCHMARK_PROXY_PASSWORD=my-pass \
BENCHMARK_PROXY_COHORT=proxy-cohort-a \
npm run benchmark:external -- --target managed-challenge-lab
```
The runner prefers `experiments/scripts/external-benchmark-targets.local.json` when it exists and refuses live runs against disabled or placeholder `example.invalid` targets.
The checked-in `.env.example` shows the proxy variable names used by the external lane.
`benchmark:external:validate` now also tells you whether the current proxy config and proxy cohort make any live-ready target actually runnable.
Roadmap
This repo is also set up for repeatable stealth experiments, not just one-off fixes.
- `experiments/docs/research-loop.md`: the default workflow for focused hypothesis-driven runs; it now also lists the lightweight `check:heuristics` and `verify:heuristics` verification paths.
- `experiments/docs/hypotheses.md`: the active queue and compact experiment log.
- `experiments/docs/benchmark-radar.md`: what is already measured and what should be added next.
- `experiments/docs/benchmark-experiment-roadmap.md`: the broader experiment backlog, ordered by what should build trust and insight next.
- `experiments/docs/external-evaluation-ledger.md`: a practical template for external run families, blind-spot checklists, and honeysite-style evaluation notes.
- `experiments/docs/external-evaluation-ledger-oyaebu-example.md`: a worked example based on the current oyaebu challenge and post-clearance runs.
- `experiments/docs/external-challenge-corpus.md`: the checked-in snapshot of current challenge fixtures by vendor, locale, and page class.
- `experiments/docs/external-challenge-corpus-replay.md`: the checked-in replay snapshot comparing fixture `expectedIndicators` against the current detection heuristics.
- `experiments/docs/request-control-lane.md`: the protocol for adding a non-browser control client to external target families.
- `experiments/docs/oyaebu-run-note-template.md`: a copyable run note template for the next oyaebu family.
Quick corpus checks:
```bash
npm run check:heuristics
npm run verify:heuristics
npm run benchmark:external:challenge-corpus:update
npm run benchmark:external:challenge-corpus:replay:update
npm run benchmark:external:challenge-corpus:check
```
The GitHub Actions workflow now runs `verify:heuristics` as a dedicated heuristics job alongside the broader `verify` job.
CI Signals
- `heuristics`: lightweight safety net for challenge heuristics, corpus replay, and negative-fixture false-positive checks.
- `verify`: broader project validation path for typecheck, Playwright tests, package smoke checks, and tarball inspection.
License
MIT. See LICENSE.
