# OpenCheck

AI-powered end-to-end browser testing, designed to be simple.

By Salfati Group | opencheck.ai
Write test cases in plain English. An AI agent executes them in a real browser or via API calls. Successful steps are cached for instant replay on subsequent runs.
```yaml
# tests.yaml
baseUrl: "http://localhost:3000"
sessionMode: "isolated"
tests:
  - name: "#login"
    case: "check login is working"
  - case: "#login, then verify dashboard loads after login"
  - case: "check logout redirects to login page"
  - case: "GET /api/health returns 200"
```

```shell
opencheck --config tests.yaml
```

## Why OpenCheck?
Unlike tools like browser-use or raw Playwright MCP, OpenCheck is built from the ground up as a testing framework — not just a browser automation tool.
The key difference: the cache layer.
| Feature | OpenCheck | browser-use | Playwright MCP |
|---------|:---------:|:-----------:|:--------------:|
| Test framework (pass/fail, exit codes) | ✅ | ❌ | ❌ |
| Step caching (instant replay) | ✅ | ❌ | ❌ |
| Self-healing (auto-updates on UI change) | ✅ | ❌ | ❌ |
| Natural language test cases | ✅ | ✅ | ❌ |
| YAML config, zero code | ✅ | ❌ | ❌ |
| CI/CD ready (exit codes + summary) | ✅ | ❌ | ❌ |
| Browser + API testing (auto-detected) | ✅ | ❌ | ❌ |
### The cache makes E2E tests viable for CI/CD
Without caching, every AI-driven test takes ~10 seconds (LLM reasoning + browser interaction). That's fine for 3 tests, but not for 30.
OpenCheck solves this: on the first run, the AI figures out the steps. On every subsequent run, those steps replay in ~50ms — no AI, no LLM calls, no cost. If the UI changes and the cached steps fail, the AI automatically re-executes and updates the cache.
```
First run:   AI agent → ~10s per test (learns the steps)
Second run:  Cache replay → ~50ms per test (instant, deterministic)
UI changed:  Cache fails → AI re-runs → ~10s → cache updated
```

This is what makes OpenCheck a testing framework rather than a demo tool.
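As an illustration, a cached entry might look something like the following. This is a hypothetical shape for demonstration only; the actual files under `.opencheck-cache/` may use a different schema.

```yaml
# .opencheck-cache/check-login-is-working.yaml (illustrative)
test: "check login is working"
steps:
  - action: "goto"
    url: "http://localhost:3000/login"
  - action: "fill"
    selector: "input[name=email]"
    value: "user@example.com"
  - action: "click"
    selector: "button[type=submit]"
  - action: "expect"
    selector: "text=Welcome"
```

On a later run these steps replay directly against the browser; only when a step fails does the AI agent step back in and regenerate the entry.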
## Quickstart

### Prerequisites

- Bun
- An LLM provider API key (Anthropic by default; see Providers below)

### Install

Run directly with no install:

```shell
bunx opencheck --config tests.yaml
```

Or install globally:

```shell
bun install -g opencheck
```

### Configure

Create a `tests.yaml`:
```yaml
baseUrl: "http://localhost:3000"
tests:
  - case: "check login is working"
  - case: "verify the homepage loads correctly"
```

### Run

```shell
# Make sure your app is running at baseUrl
opencheck --config tests.yaml
```

### What happens
- OpenCheck loads and validates your `tests.yaml`
- For each test case:
  - Cached? Replay stored steps instantly (no AI needed)
  - No cache / cache stale? AI agent drives the browser
  - Named references? The agent can resolve `#login`-style references through an internal lookup tool
  - AI passes? Steps saved to `.opencheck-cache/`
  - AI fails after retries? Test marked failed, stale cache deleted
- Summary printed with pass/fail/cached counts and timing
- Exit code `0` if all pass, `1` if any fail
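The named-reference flow maps onto the `name`/`case` keys shown in the example at the top of this README: define a step once with `name`, then mention it in later cases.

```yaml
tests:
  - name: "#login"
    case: "check login is working"
  # later cases can build on the named step
  - case: "#login, then verify dashboard loads after login"
```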
## Test Recordings

Every test run records a Playwright trace and video by default:

```yaml
# tests.yaml
baseUrl: "http://localhost:3000"
tests:
  - case: "check login is working"
  - case: "verify dashboard loads after login"
```

Each test saves a Playwright trace and video to `.opencheck-recordings/<test-name>/`. Traces capture DOM snapshots, screenshots, network, and console output at every step — ideal for debugging failed tests (expected vs. actual).

To disable recordings, set `recording: false` in your `tests.yaml`.
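For example, assuming `recording` is a top-level key as described above:

```yaml
# tests.yaml
baseUrl: "http://localhost:3000"
recording: false  # skip trace/video capture for faster local runs
tests:
  - case: "check login is working"
```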
### Viewing Recordings

Locally:

```shell
bunx playwright show-trace .opencheck-recordings/check-login-is-working/trace.zip
```

Online (no install): drag the `trace.zip` into trace.playwright.dev.

Videos: open `.opencheck-recordings/<test-name>/video.webm` in any browser or media player.
## CI/CD (GitHub Actions)
Use the built-in reusable workflow. It runs OpenCheck, uploads recordings as artifacts, and posts results as a PR comment updated on each re-run:
```yaml
# .github/workflows/e2e.yml
name: E2E Tests
on:
  pull_request:
jobs:
  opencheck:
    uses: salfatigroup/opencheck/.github/workflows/opencheck.yml@main
    with:
      config: tests.yaml
    secrets: inherit
```

Because recordings are enabled by default, trace and video artifacts are uploaded automatically unless you set `recording: false`.
That's it. On each PR, you'll get:
- A PR comment with the full test results summary
- Downloadable recording artifacts (traces + videos) in the Actions run summary
- A direct link to view traces at trace.playwright.dev
### Manual setup

If you prefer not to use the reusable workflow, add these steps directly:

```yaml
- name: Run OpenCheck
  run: bunx opencheck --config tests.yaml
  continue-on-error: true

- name: Upload test recordings
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: opencheck-recordings
    path: .opencheck-recordings/
    retention-days: 30
```

After the workflow completes, download the `opencheck-recordings` artifact from the GitHub Actions run summary. Extract it and view traces with:

```shell
bunx playwright show-trace trace.zip
```

Or drag `trace.zip` into trace.playwright.dev for browser-based viewing.
## Documentation

- Configuration Reference — all `tests.yaml` options
- How It Works — architecture and AI strategy
- Cache System — how step caching works
- CLI Reference — flags, exit codes, environment variables
## Providers

OpenCheck supports multiple LLM providers via LangChain's universal model interface. Set `model` and optionally `modelProvider` in your `tests.yaml`.

### Anthropic (default)

No extra config needed — the provider is auto-inferred from the model name.
```yaml
model: "claude-sonnet-4-5-20250929"
tests:
  - case: "check login is working"
```

```shell
export ANTHROPIC_API_KEY="sk-ant-..."
opencheck --config tests.yaml
```

### AWS Bedrock
Requires `@langchain/aws` and AWS credentials.

```shell
bun add @langchain/aws
```

```yaml
model: "us.anthropic.claude-sonnet-4-5-20250929-v1:0"
modelProvider: "bedrock"
tests:
  - case: "check login is working"
```

```shell
# Standard AWS credential chain (env vars, ~/.aws/credentials, IAM role, etc.)
export AWS_DEFAULT_REGION="us-east-1"
opencheck --config tests.yaml
```

### Google Vertex AI
Requires `@langchain/google-vertexai` and GCP credentials.

```shell
bun add @langchain/google-vertexai
```

```yaml
model: "gemini-1.5-pro"
modelProvider: "google-vertexai"
tests:
  - case: "check login is working"
```

```shell
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
opencheck --config tests.yaml
```

### Other Providers
Any provider supported by LangChain's `initChatModel` works. Install the provider package and set `modelProvider` accordingly (e.g., `openai`, `fireworks`, `mistralai`).
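For example, OpenAI could be wired up like this (a sketch; the model name is just an illustration, and you would install the matching provider package first, e.g. `bun add @langchain/openai`, and set that provider's API key environment variable):

```yaml
model: "gpt-4o"
modelProvider: "openai"
tests:
  - case: "check login is working"
```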
## Tech Stack

| Component | Technology |
|-----------|------------|
| Runtime | Bun |
| Language | TypeScript (strict mode) |
| AI Agent | LangChain + LangGraph |
| LLM | Multi-provider via LangChain (Anthropic, Bedrock, Vertex AI, and more) |
| Browser | Playwright MCP (`@playwright/mcp`) |
| API | curl MCP (`@mcp-get-community/server-curl`) |
| Config | Zod + YAML |
| CLI | Commander.js |
## Releasing

- Bump `version` in `package.json`
- Commit and push to `main`
- Tag and push: `git tag <version> && git push origin <version>`
- The `publish-npm.yml` workflow runs tests and publishes to npm via trusted publishing (OIDC — no token needed)
- Optionally create a GitHub Release for visibility: `gh release create <version> --title "v<version>" --generate-notes`
## License
Business Source License 1.1 (BUSL-1.1).
Copyright (c) Salfati Group GmbH, Zug, Switzerland.
See LICENSE for full terms.
Built and maintained by Salfati Group
