
synthetic-test-fabric v0.3.0

Generic engine for autonomous synthetic test loops: simulation → browser flows → scoring → feedback

Synthetic Test Fabric


Self-improving QA infrastructure. No test maintenance. Coverage grows every run.


Synthetic Test Fabric is a TypeScript framework that replaces hand-written test maintenance with a closed loop: generate synthetic users → simulate their behavior → extract observed paths → generate and execute browser flows → score results → feed findings into the next iteration.

You write adapters for your app once. The framework does the rest.


See it run in 30 seconds

git clone https://github.com/kaneshir/synthetic-test-fabric
cd synthetic-test-fabric
npm install
npx playwright install chromium
npx tsx demo/run.ts

No external services. No API keys. Full loop against a static HTML taskboard app — completes in under 30 seconds and produces a scored report.


The problem it solves

Playwright is an executor. It runs specs you wrote against state you set up manually. That model breaks when you have hundreds of flows, a changing product, and no time to write tests for every new path.

Synthetic Test Fabric inverts this: synthetic users navigate your app autonomously, their paths become the test corpus, and the corpus grows automatically. Coverage is a function of runtime, not headcount.

SEED → VERIFY → RUN → ANALYZE → GENERATE_FLOWS → TEST → SCORE → FEEDBACK → repeat

Each iteration, the system finds new paths, generates new specs, scores what it has, and uses that score to steer the next iteration toward gaps.
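The loop above can be pictured as a small driver. This is an illustrative sketch only: the phase names are the real ones from the sequence, but the `executePhase`/`collectResult` helpers and the result shape are assumptions made for this example, not the framework's actual orchestrator.

```typescript
// Illustrative driver for the phase sequence above. Phase names are real;
// executePhase, collectResult, and IterationResult are assumed for the sketch.
type Phase =
  | "SEED" | "VERIFY" | "RUN" | "ANALYZE"
  | "GENERATE_FLOWS" | "TEST" | "SCORE" | "FEEDBACK";

const PHASES: Phase[] = [
  "SEED", "VERIFY", "RUN", "ANALYZE",
  "GENERATE_FLOWS", "TEST", "SCORE", "FEEDBACK",
];

interface IterationResult {
  score: number;        // aggregate across the six scoring dimensions
  newPaths: string[];   // screen paths first observed this iteration
}

// Stubs standing in for the framework's adapter calls.
async function executePhase(phase: Phase, findings: string[]): Promise<void> {
  void phase; void findings; // each phase reads the previous phase's artifacts
}
async function collectResult(): Promise<IterationResult> {
  return { score: 0, newPaths: [] };
}

async function runLoop(iterations: number): Promise<IterationResult[]> {
  const results: IterationResult[] = [];
  let findings: string[] = []; // FEEDBACK output, fed into the next SEED
  for (let i = 0; i < iterations; i++) {
    for (const phase of PHASES) {
      await executePhase(phase, findings);
    }
    const result = await collectResult();
    findings = result.newPaths; // discoveries steer the next iteration
    results.push(result);
  }
  return results;
}
```

The key property is that `findings` flows across iterations: FEEDBACK's output becomes input to the next SEED, which is what makes coverage grow with runtime.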


Install

npm install synthetic-test-fabric

Node.js 20+ · TypeScript 5+

The packaged demo uses Playwright directly. Install @playwright/test when you want to run demo/run.ts from the npm tarball.

Optionally add the Lisa MCP server for LLM-driven element inference in the flow generation step:

npm install @kaneshir/lisa-mcp

Your first loop in 5 minutes

1. Implement the eight adapter interfaces:

import {
  AppAdapter, SimulationAdapter, ScoringAdapter, FeedbackAdapter,
  BrowserAdapter, MemoryAdapter, Reporter, ScenarioPlanner,
} from 'synthetic-test-fabric';

// Start with stubs — the orchestrator surfaces errors at each step
export class MyAppAdapter implements AppAdapter {
  async seed(iterRoot, config)       { /* write mini-sim-export.json + lisa.db */ }
  async verify(iterRoot)             { /* throw if required aliases missing */ }
  async reset(iterRoot)              {}
  async validateEnvironment()        { return { healthy: true, errors: [], warnings: [] }; }
  async importRun(iterRoot, dbUrl)   {}
}

See demo/adapters.ts for a complete working reference — every method implemented, no external dependencies.
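One habit that pays off early is making `validateEnvironment()` do real work before the loop starts. A minimal sketch, assuming the report shape from the stub above; the fetch-based reachability check and the base-URL parameter are illustrative assumptions, not part of the contract:

```typescript
// Sketch: a validateEnvironment() body that checks the app is reachable
// before the loop starts. The report shape matches the stub above; the
// HEAD-request check and baseUrl parameter are assumptions for illustration.
interface EnvironmentReport {
  healthy: boolean;
  errors: string[];
  warnings: string[];
}

async function checkAppReachable(baseUrl: string): Promise<EnvironmentReport> {
  const errors: string[] = [];
  const warnings: string[] = [];
  try {
    const res = await fetch(baseUrl, { method: "HEAD" });
    if (!res.ok) errors.push(`app responded ${res.status} at ${baseUrl}`);
  } catch (e) {
    errors.push(`app unreachable at ${baseUrl}: ${(e as Error).message}`);
  }
  return { healthy: errors.length === 0, errors, warnings };
}
```

Failing here, before SEED, is much cheaper than failing five phases into an iteration.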

2. Wire the orchestrator:

import { FabricOrchestrator, makeLoopId } from 'synthetic-test-fabric';

const orchestrator = new FabricOrchestrator({
  app:        new MyAppAdapter(),
  simulation: new MySimulationAdapter(),
  scoring:    new MyScoringAdapter(),
  feedback:   new MyFeedbackAdapter(),
  memory:     new MyMemoryAdapter(),
  browser:    new MyBrowserAdapter(),
  reporters:  [new ConsoleReporter()],
  planner:    new MyScenarioPlanner(),
});

await orchestrator.run({
  loopId:                 makeLoopId(),
  iterations:             1,
  ticks:                  5,
  liveLlm:                false,
  allowRegressionFailures: false,
  seekers:                1,
  employers:              1,
  employees:              0,
});

Or use the fab CLI with a config file:

// fabric.config.ts
import { FabricConfig } from 'synthetic-test-fabric';
export default {
  adapters: { app: new MyAppAdapter(), /* ... */ },
  defaults:  { iterations: 3, ticks: 10 },
} satisfies FabricConfig;

npx fab orchestrate          # run the full loop
npx fab smoke                # single-iteration smoke check
npx fab check --root /tmp/fabric-loop/iter-001 --threshold 8  # CI score gate

What you get out of the loop

After each iteration the framework produces a six-dimension score:

| Dimension | What it measures |
|-----------|------------------|
| persona_realism | Did agents hit their stated goals? |
| coverage_delta | New screen paths found vs previous run |
| fixture_health | Seeded relationships all resolve cleanly |
| discovery_yield | New error outcomes discovered |
| regression_health | Previously passing flows still pass |
| flow_coverage | Playwright pass rate across all executed flows |

The score drives the next iteration: a low coverage_delta or discovery_yield steers the planner toward unexplored scenarios, while a low regression_health flags regressions immediately.
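A sketch of how the six dimensions feed a CI gate, in the spirit of the assertScoreThreshold helper: the dimension names come from the table above, while aggregation by mean and the exact function signature are assumptions for this example.

```typescript
// Sketch of a CI score gate. Dimension names are real; aggregating by mean
// and this assertScoreThreshold signature are assumptions, not the real API.
type Score = Record<
  | "persona_realism" | "coverage_delta" | "fixture_health"
  | "discovery_yield" | "regression_health" | "flow_coverage",
  number
>;

function aggregate(score: Score): number {
  const values = Object.values(score);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function assertScoreThreshold(score: Score, threshold: number): void {
  const total = aggregate(score);
  if (total < threshold) {
    // Throwing here is what makes the gate fail the CI job.
    throw new Error(`score ${total.toFixed(2)} below threshold ${threshold}`);
  }
}
```

A single weak dimension drags the aggregate down, so a regression in one area fails the gate even when everything else is healthy.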


Advanced features

| Feature | How to use |
|---------|------------|
| Flakiness tracking | FlakinessTracker persists per-flow failure rates; failing flows get quarantined automatically |
| Adversarial personas | Set adversarial: true in persona YAML; the agent probes validation gaps and unauthorized routes |
| CI score gate | fab check --threshold 8.0 or assertScoreThreshold(score, 8.0) in your pipeline |
| Slack reporting | SlackReporter posts a score summary + dimension breakdown to any webhook |
| Visual regression | VisualRegression.capture/compare with pixelmatch; baselines managed via fab baseline |
| HTML trend report | HtmlReporter generates a self-contained report with Chart.js trend across the last 30 iterations |
| Headless HTTP | ApiExecutor records behavior events without a browser — 80× faster than Playwright for simulation |
| LLM element inference | @kaneshir/lisa-mcp peer gives BrowserAdapter AI-driven key discovery via the Lisa MCP server |
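Reporters are the gentlest extension point. A minimal custom reporter sketch, assuming a single `onIterationComplete` hook and summary payload; the hook name and shape are guesses for illustration, not the documented Reporter contract:

```typescript
// Minimal custom reporter sketch. The onIterationComplete hook name and its
// payload shape are assumptions; see docs/adapter-contract.md for the real
// Reporter contract.
interface IterationSummary {
  iteration: number;
  score: number;
  regressions: string[]; // flow ids that regressed this iteration
}

class ThresholdWarnReporter {
  constructor(private readonly warnBelow: number) {}

  onIterationComplete(summary: IterationSummary): string[] {
    const lines = [`iter ${summary.iteration}: score ${summary.score}`];
    if (summary.score < this.warnBelow) {
      lines.push(`warning: score below ${this.warnBelow}`);
    }
    for (const flow of summary.regressions) {
      lines.push(`regression: ${flow}`);
    }
    return lines; // a SlackReporter would post these to a webhook instead
  }
}
```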


How it relates to @kaneshir/lisa-mcp

@kaneshir/lisa-mcp is an optional peer package that ships a precompiled Lisa MCP server binary. It enables the LLM-driven element inference path in the GENERATE_FLOWS step — the Lisa MCP server navigates your live app, discovers interactive elements by their stable identifiers, and produces Playwright locators without hand-labeling selectors. It works with any web app; see docs/lisa-mcp.md for the full integration guide.

Without it: your BrowserAdapter supplies its own selectors (data-testid, hard-coded, etc.) — fully supported.

With it: your BrowserAdapter.runSpecs() implementation can call buildLisaMcpCommand() to get the binary path, spawn the MCP server, and let an LLM use lisa_explore_screen / lisa_tap_key tools to navigate the live app and generate spec steps from actual observations before executing them.

npm install @kaneshir/lisa-mcp
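Inside runSpecs(), the wiring might look like the following sketch. buildLisaMcpCommand() is the documented entry point, but the shape of its return value and the stdio plumbing here are assumptions; docs/lisa-mcp.md has the real contract.

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Sketch of spawning the Lisa MCP server from BrowserAdapter.runSpecs().
// The { command, args } return shape of buildLisaMcpCommand() is assumed;
// see docs/lisa-mcp.md for the real contract.
interface McpCommand {
  command: string;
  args: string[];
}

function startLisaMcp(buildCommand: () => McpCommand): ChildProcess {
  const { command, args } = buildCommand();
  // The MCP server speaks over stdio; the LLM client drives the
  // lisa_explore_screen / lisa_tap_key tools through these pipes.
  return spawn(command, args, { stdio: ["pipe", "pipe", "inherit"] });
}
```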

Documentation

| Doc | Audience | What's covered |
|-----|----------|----------------|
| docs/prerequisites.md | Everyone | Start here — what STF actually requires to be effective, honest cost estimates, realistic timeline |
| docs/testability-standard.md | Everyone | Required self-assessment — pass/fail checklist for all 8 adapters; determines whether your product is ready for integration |
| docs/overview.md | Everyone | Framework model, loop phases, adapters, lisa-mcp, scoring |
| docs/example-walkthrough.md | Everyone | One full iteration, file by file — what actually gets written and why |
| docs/quickstart.md | Engineers | Step-by-step wiring guide — zero to working loop |
| docs/architecture.md | Architects | Full call chain, lisa.db schema, MCP integration, feedback loop design |
| docs/adapter-contract.md | Engineers | Every interface, every method, with inline guidance |
| docs/run-root-contract.md | Engineers | Artifact layout and environment variable contract |
| docs/persona-yaml-reference.md | QA engineers | Persona schema, pressure model, adversarial personas, examples |
| docs/lisa-mcp.md | Engineers | Lisa MCP binary, MCP tools reference, showKeys, troubleshooting |
| docs/for-qa-engineers.md | QA engineers | What your job becomes, how to steer the system, writing personas |
| docs/executive-brief.md | VPs / Directors | Offshore transcendence, ROI, strategic positioning, decision criteria |
| docs/value-proposition.md | VPs / Directors | Business case, Gen 3 QA framing |
| CONTRIBUTING.md | Contributors | How to contribute to the framework itself |


License

MIT — see LICENSE.