harness-review
v1.0.0
AI Engineering Review Workbench — interactive HTML reports for AI agent output
harness-review
AI Engineering Review Workbench — make sure you got everything you need after vibe coding.
When AI agents complete engineering tasks, humans need to verify quality without reading every line. This tool generates an interactive HTML review report that lets reviewers trace requirements → design → code → tests, all in one self-contained page.

Install
# Use directly with npx (no install needed)
npx harness-review report-data.json
# Or install globally
npm install -g harness-review
How It Works
AI Agent                     Build Script                 Reviewer
    │                             │                           │
    │── outputs JSON (DSL) ─────▶│                           │
    │                             │── reads source files      │
    │                             │── runs git diff           │
    │                             │── assembles HTML          │
    │                             │── harness-report.html ───▶│
    │                             │                           │── reviews, annotates
    │                             │                           │── clicks "Copy as Next"
    │◀── receives feedback ──────│◀── markdown feedback ─────│

Token efficiency: AI outputs ~100 lines of JSON instead of ~900 lines of HTML. The build script handles file embedding, diffing, syntax highlighting, and interaction.
Features
- Three-column layout — outline nav / traceability + checklist / source viewer with syntax highlighting
- Source & Diff viewer — click any file link to view source; toggle to see git diff
- Code annotations — select code → add severity + comment → auto-appends to checklist
- Verification checklist — three-state verdict (pass / issue / blocker) per item with live progress
- Copy as Next — one-click export of structured Markdown feedback to paste back to the AI
- 4 luxury themes — Hermès / Prada / LV / Dior, persisted via localStorage
- Zero dependencies — Node.js only, no native modules or bundlers
Quick Start
1. Initialize in your project
npx harness-review init
This copies the HTML template, example JSON, and SKILL.md to your current directory.
2. AI outputs report-data.json
After completing a task, the AI outputs structured JSON following the DSL spec (see SKILL.md for full schema).
3. Build & review
npx harness-review report-data.json --open
The --open flag automatically opens the report in your browser.
CLI Usage
# Build report from DSL JSON
npx harness-review report-data.json
# Build and open in browser
npx harness-review report-data.json --open
# Copy template + example to current directory
npx harness-review init
# Help
npx harness-review --help
DSL Overview
{
  "meta": { "title": "...", "branch": "...", "base": "main", "agent": "...", "date": "..." },
  "repo_path": ".",
  "figures": [{ "value": "2,400", "label": "Lines Added", "sub": "15 files" }],
  "docs": ["docs/design.md"],
  "trace": [{ "layer": "...", "sections": [{ "num": "§1", "name": "...", "status": "ok", "impl": [...], "tests": [...] }] }],
  "insights": [{ "tag": "blocker", "title": "...", "body": "..." }],
  "verify": [{ "group": "...", "items": [{ "id": "v1", "title": "...", "priority": "P0", "how": "..." }] }]
}
See SKILL.md for the complete DSL specification, status values, insight tags, and a full self-review example.
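Before running the build, a quick sanity check on the JSON can catch malformed agent output early. A minimal sketch, assuming only the top-level shape shown above — `validateDsl` is a hypothetical helper, not part of the CLI, and SKILL.md remains the authoritative schema:

```javascript
// Hypothetical pre-flight check for report-data.json. Only validates the
// top-level keys shown in the overview; SKILL.md defines the full schema.
function validateDsl(dsl) {
  const errors = [];
  if (!dsl.meta || !dsl.meta.title) errors.push("meta.title is required");
  for (const layer of dsl.trace || []) {
    for (const section of layer.sections || []) {
      if (!section.num) errors.push(`section "${section.name}" has no num`);
    }
  }
  for (const group of dsl.verify || []) {
    for (const item of group.items || []) {
      if (!item.id) errors.push(`verify item "${item.title}" has no id`);
    }
  }
  return errors; // empty array means the basic shape looks OK
}
```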
Claude Code Skill
This also works as a Claude Code skill. Copy SKILL.md to your skills directory:
cp SKILL.md ~/.claude/skills/harness-review.md
Then add to your CLAUDE.md:
After completing implementation tasks, output a `report-data.json`
following the harness-review DSL spec, then run:
npx harness-review report-data.json --open
Example
The example/ directory contains a self-review: the harness-review skill reviewing its own implementation.
npx harness-review example/report-data.json --open
License
MIT
