@agentlair/agent-report
v0.2.0
# agent-report

Behavioral audit for agent workspaces. Makes "feels trustworthy" into a number.
```sh
npx @agentlair/agent-report
```

Zero config. Just run it in any Git repository.
## What it does
Scans your repo and produces a trust score (0-100) based on:
- Git author analysis — detects agent vs human commits (Claude, Copilot, Devin, Dependabot, etc.)
- Churn hotspots — files agents keep modifying = instability signals
- Test coverage — are agent-written files tested?
- Dependency risk — known vulnerable packages, typosquatting detection
- Trust score — weighted composite of all signals
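As an illustration of the churn-hotspot signal, it can be approximated as a per-file tally over the commit log: count how often each file changes and what fraction of those changes came from agent-authored commits. A minimal sketch (the `Commit` shape and the thresholds are illustrative assumptions, not the package's internals):

```typescript
// Hypothetical commit shape; the real tool derives this from `git log`.
interface Commit {
  isAgent: boolean;
  files: string[];
}

interface Hotspot {
  file: string;
  changes: number;
  agentRatio: number; // fraction of changes from agent commits, 0..1
}

// Flag files that change often and are mostly touched by agents.
function churnHotspots(
  commits: Commit[],
  minChanges = 5,
  minAgentRatio = 0.8,
): Hotspot[] {
  const stats = new Map<string, { changes: number; agent: number }>();
  for (const commit of commits) {
    for (const file of commit.files) {
      const s = stats.get(file) ?? { changes: 0, agent: 0 };
      s.changes += 1;
      if (commit.isAgent) s.agent += 1;
      stats.set(file, s);
    }
  }
  return [...stats.entries()]
    .map(([file, s]) => ({ file, changes: s.changes, agentRatio: s.agent / s.changes }))
    .filter((h) => h.changes >= minChanges && h.agentRatio >= minAgentRatio)
    .sort((a, b) => b.changes - a.changes);
}
```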
## Usage

```sh
# Scan current directory
npx @agentlair/agent-report

# Scan a specific directory
npx @agentlair/agent-report ./my-project

# Output as JSON
npx @agentlair/agent-report --json

# Limit commit history
npx @agentlair/agent-report --max-commits 1000
```

## Example output
Running on vercel/ai (44% of recent commits from release bots):
```
╔══════════════════════════════════════╗
║          agent-report v0.1.0         ║
╚══════════════════════════════════════╝

Directory: ./vercel-ai
Scanned:   2026-04-12

TRUST SCORE  ⚠ 66

── Score Breakdown ───────────────────────────────────
Git Hygiene      ██████████████████░░  90
Churn Stability  ██████████░░░░░░░░░░  50
Test Coverage    ██████░░░░░░░░░░░░░░  30
Dependency Risk  ████████████████████ 100

── Git Summary ───────────────────────────────────────
Total commits: 50
Agent commits: 22 (44%)
Human commits: 28

Authors:
  🤖 agent  vercel-ai-sdk[bot]  22 commits (95% confidence)
  👤 human  Aayush Kapoor       10 commits
  👤 human  Felix Arntz          6 commits
  👤 human  Walter Korman        4 commits
  ...

── Churn Hotspots ────────────────────────────────────
Files agents keep modifying — potential instability signals
  ⚡ .changeset/pre.json                19 changes, 100% agent
  ⚡ pnpm-lock.yaml                     19 changes,  95% agent
  ⚡ examples/ai-e2e-next/package.json  18 changes, 100% agent
  ...

── Test Coverage ─────────────────────────────────────
Test files in repo:   498
Agent-written files:    1
With matching tests:    0
Without tests:          1

Untested agent-written files:
  ✗ packages/gateway/src/gateway-language-model-settings.ts
```

Running on cline/cline (0% agent commits, 14 contributors):
```
TRUST SCORE  ✓ 90

Git Hygiene      ████████████████████ 100
Churn Stability  ████████████████████ 100
Test Coverage    ████████████████░░░░  80
Dependency Risk  ████████████████░░░░  81

Total commits: 50 | Agent: 0 (0%) | Human: 50
```

Running on anthropic-sdk-python (74% auto-generated by the stainless-app bot):
```
TRUST SCORE  ⚠ 65

Git Hygiene      ██████████████░░░░░░  70
Churn Stability  ███████████████░░░░░  75
Test Coverage    ██████░░░░░░░░░░░░░░  30
Dependency Risk  ████████████████████ 100

Agent commits: 37/50 (74%)
  🤖 stainless-app[bot]  37 commits (95% confidence)

Churn hotspots: .stats.yml, CHANGELOG.md, _version.py (all 100% agent)
250 agent-written files with 0 directly matched tests
```

## Programmatic API
```ts
import { scan, renderTerminal } from "@agentlair/agent-report";

const report = await scan({ dir: "./my-repo" });
console.log(report.trustScore);       // 0-100
console.log(renderTerminal(report));  // Pretty terminal output
```

## Score breakdown

| Component | Weight | What it measures |
|-----------|--------|------------------|
| Git Hygiene | 30% | Agent ratio, commit quality, author diversity |
| Churn Stability | 20% | Files agents keep churning = instability |
| Test Coverage | 30% | Agent-written code with matching tests |
| Dependency Health | 20% | Known risky deps, typosquatting, dep count |
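Reading the table as a formula, the trust score is a plain weighted average of the four sub-scores. A minimal sketch of that arithmetic (the field names and rounding to the nearest integer are assumptions, not the package's internals):

```typescript
// Hypothetical sub-score shape; each value is on a 0-100 scale.
interface SubScores {
  gitHygiene: number;
  churnStability: number;
  testCoverage: number;
  dependencyHealth: number; // shown as "Dependency Risk" in the terminal output
}

// Weighted composite per the table above: 30% / 20% / 30% / 20%.
function trustScore(s: SubScores): number {
  return Math.round(
    0.3 * s.gitHygiene +
      0.2 * s.churnStability +
      0.3 * s.testCoverage +
      0.2 * s.dependencyHealth,
  );
}
```

Plugging in the vercel/ai breakdown above gives 0.3·90 + 0.2·50 + 0.3·30 + 0.2·100 = 66, and the cline/cline and anthropic-sdk-python breakdowns give 90 and 65, matching the example outputs.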
## Agent detection

Detects agents by:

- Email patterns (`[email protected]`, `[bot]@users.noreply.github.com`, etc.)
- Name patterns (`Claude`, `Copilot`, `Devin`, `[bot]` suffix, etc.)
- Co-author tags (`Co-Authored-By: Claude`)
- GitHub bot conventions
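Those heuristics amount to a few pattern checks per commit author. A hedged sketch (the `Author` shape and the exact pattern lists are illustrative assumptions; the real detector also assigns a confidence score):

```typescript
// Hypothetical author record extracted from `git log`.
interface Author {
  name: string;
  email: string;
  message?: string; // commit message, checked for co-author trailers
}

// Known agent names or a "[bot]" suffix in the author name.
const AGENT_NAMES = /\b(claude|copilot|devin|dependabot)\b|\[bot\]$/i;
// GitHub's noreply convention for bot accounts.
const AGENT_EMAILS = /\[bot\]@users\.noreply\.github\.com$/i;
// Co-author trailer added by coding agents.
const CO_AUTHOR = /^Co-Authored-By:\s*(claude|copilot|devin)\b/im;

function looksLikeAgent(a: Author): boolean {
  return (
    AGENT_NAMES.test(a.name) ||
    AGENT_EMAILS.test(a.email) ||
    (a.message !== undefined && CO_AUTHOR.test(a.message))
  );
}
```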
## Why this exists

> "I've never opened any of these CLI scripts it's written."
Most developers using AI agents never review what the agents produce. This tool makes the invisible visible — a behavioral audit you can run before merging, deploying, or trusting agent-written code.
This is the open-source entry point to AgentLair — trust infrastructure for the agentic economy.
## From snapshot to continuous monitoring
agent-report gives you a point-in-time audit. AgentLair provides continuous behavioral trust scoring across your entire agent fleet — identity, governance, and cross-organization trust data.
## License
MIT
