@aiready/testability
AIReady Spoke: Measures the "Verify" score of the codebase—how easily an AI agent can write tests and verify its own changes autonomously.
Overview
The "Verify" loop is the most expensive part of the AI agent workflow. Codebases with high global state, missing dependency injection, or poor test coverage force agents into long, expensive retry loops. The Testability Index quantifies these frictions.
🏛️ Architecture
```
                        🎯 USER
                           │
                           ▼
              🎛️ @aiready/cli (orchestrator)
   │     │     │     │     │     │     │     │     │
   ▼     ▼     ▼     ▼     ▼     ▼     ▼     ▼     ▼
 [PAT] [CTX] [CON] [AMP] [DEP] [DOC] [SIG] [AGT] [TST]
   │     │     │     │     │     │     │     │     │
   └─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘
                           │
                           ▼
                    🏢 @aiready/core

Legend:
PAT = pattern-detect       CTX = context-analyzer
CON = consistency          AMP = change-amplification
DEP = deps-health          DOC = doc-drift
SIG = ai-signal-clarity    AGT = agent-grounding
TST = testability ★        ★ = YOU ARE HERE
```

Features
- Purity analysis: Detects global state and side effects that make isolated unit testing difficult for agents.
- Dependency Map: Evaluates the use of dependency injection (DI) patterns that let agents mock boundaries easily (see the sketch after this list).
- Verification Ratio: Measures the presence and quality of existing tests relative to source files.
- Retry Risk: Flags patterns that trap AI agents in endless "fix-test-fail" loops.
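To illustrate the pattern the purity and Dependency Map checks reward, here is a hedged TypeScript sketch (the interface and function names are invented for this example) of the earlier global-state logic refactored so the boundary is injected and therefore trivially mockable:

```typescript
// Hypothetical refactor toward an injected boundary: the discount source
// arrives as a parameter, so a test (or an AI agent verifying its own
// change) can substitute a deterministic fake instead of shared state.
export interface DiscountSource {
  rateFor(sku: string, base: number): number;
}

export function priceWithDiscount(
  sku: string,
  base: number,
  discounts: DiscountSource, // injected boundary, trivially mockable
): number {
  return base * discounts.rateFor(sku, base);
}

// In a test, a stub makes the result fully deterministic:
const fixedRate: DiscountSource = { rateFor: () => 0.9 };
// priceWithDiscount("sku-1", 200, fixedRate) === 180
```

With the dependency injected, the Verify step collapses to a single deterministic assertion instead of a retry loop.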
Installation
```bash
pnpm add @aiready/testability
```

Usage
This tool is designed to be run through the unified AIReady CLI.
```bash
# Scan for testability and verification risk
aiready scan . --tools testability
```

License
MIT
