api-tests-coverage
v1.0.25
CLI and library to measure how thoroughly your test suite exercises your API surface area
API Test Coverage Analyzer
A CLI tool that measures how thoroughly your test suite exercises your API surface area. Rather than simply counting passing tests, it asks:
- Are all endpoints reachable via at least one test?
- Are parameters tested with valid, boundary, missing, and invalid values?
- Are business rules (discount logic, rate limiting, etc.) explicitly validated?
- Do integration flows (multi-step user journeys) run end-to-end?
- Are security scenarios (auth bypass, injection, IDOR) covered?
- Are error paths (4xx/5xx) handled correctly?
- Is there performance and resilience evidence (JMeter/k6 data)?
- Does the API remain compatible between versions?
The answers appear in rich HTML, JSON, CSV, and JUnit reports that can be enforced as pass/fail gates in any CI pipeline.
Coverage Intelligence
Beyond raw percentages, the Coverage Intelligence engine answers:
What is missing? What matters most? What should be tested next?
It identifies functional findings, links them to missing test recommendations, assigns risk scores (0–100), and prioritises work as P0/P1/P2/P3. Outputs are AI-friendly markdown — ready for LLM consumption or CI gating.
Table of Contents
- Prerequisites
- Installation
- Quickstart
- Using as a Library
- GitHub Action
- Commands overview
- Configuration
- AST Analysis
- UI Dashboard
- Documentation
- Contributing
- License
Prerequisites
- Node.js ≥ 18 LTS
- npm ≥ 9
node --version # v20.x
npm --version   # 10.x
Installation
Option 1 – Install globally from npm (recommended)
npm install -g api-test-coverage-analyzer
api-coverage --help
Option 2 – Run without installing (npx)
npx api-test-coverage-analyzer endpoint-coverage \
--spec openapi.yaml \
--tests 'tests/**/*.ts'
Option 3 – Clone the repository (development)
git clone https://github.com/q-intel/apiTestsCoverageAnalyzer.git
cd apiTestsCoverageAnalyzer
# Install dependencies
npm install
# Compile TypeScript
npm run build
# Verify
node dist/src/index.js --help
Alternatively, use ts-node to skip the build step:
node -r ts-node/register src/index.ts --help
Quickstart
Run all coverage types against the included sample project:
# Endpoint coverage (using globally installed CLI)
api-coverage endpoint-coverage \
--spec sample/openapi.yaml \
--tests "sample/tests/**/*.ts" \
--format json,html \
--threshold-endpoint 80
# Business rule coverage
api-coverage business-coverage \
--rules sample/business-rules.yaml \
--tests "sample/tests/**/*.ts" \
--format json,html
# Integration flow coverage
api-coverage integration-coverage \
--flows sample/integration-flows.yaml \
--tests "sample/tests/**/*.ts" \
--format json,html
# Security coverage
api-coverage security-coverage \
--spec sample/openapi-security.yaml \
--tests "sample/tests/**/*.ts" \
--format json,html
# Unit-analysis enrichment (coverage reports, smells, slow tests, mutation score)
api-coverage unit-analysis \
--root . \
--reports-dir reports \
--format json,html
# Compatibility check (breaking changes between v1 and v2)
api-coverage compatibility-check \
--old-spec sample/v1.yaml \
--new-spec sample/v2.yaml \
--contracts "sample/contracts/**/*.json" \
--format json,html
Reports are written to the reports/ directory.
Using as a Library
Install as a project dependency:
npm install api-test-coverage-analyzer
Then import the analysis functions in your own scripts:
const {
analyzeEndpoints,
analyzeParameters,
analyzeBusinessRules,
analyzeIntegrationFlows,
analyzeErrorHandling,
analyzeSecurityControls,
analyzePerfResilience,
analyzeCompatibility,
checkThresholds,
} = require('api-test-coverage-analyzer');
async function runCoverage() {
// Endpoint coverage
const endpointResult = await analyzeEndpoints({
spec: 'openapi.yaml',
tests: 'tests/**/*.ts',
format: 'json,html',
thresholdEndpoint: 80,
});
console.log(`Endpoint coverage: ${endpointResult.coveragePercent}%`);
// Business rule coverage
const businessResult = await analyzeBusinessRules({
rules: 'business-rules.yaml',
tests: 'tests/**/*.ts',
});
console.log(`Business coverage: ${businessResult.coveragePercent}%`);
// Check thresholds
const failures = checkThresholds(
[endpointResult, businessResult],
{ endpoint: 80, business: 60 }
);
if (failures.length > 0) {
console.error('Threshold failures:', failures);
process.exitCode = 1;
}
}
runCoverage();
TypeScript users get full type definitions out of the box.
GitHub Action
Add API coverage analysis to your CI/CD pipeline with zero setup:
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Run API coverage analysis
id: coverage
uses: q-intel/apiTestsCoverageAnalyzer/action@v1
with:
spec: 'sample/openapi.yaml'
tests: 'tests/**/*.ts'
format: 'json,html'
coverage-types: 'endpoint,error,security'
threshold-endpoint: '80'
- name: Print endpoint coverage
run: echo "Endpoint coverage ${{ steps.coverage.outputs.endpoint-coverage }}%"
- name: Upload reports
uses: actions/upload-artifact@v4
with:
name: coverage-reports
path: reports/
Action inputs
| Input | Description | Default |
|-------|-------------|---------|
| spec | Path to OpenAPI/Swagger spec file | sample/openapi.yaml |
| tests | Glob pattern for test files | tests/**/*.ts |
| format | Comma-separated report formats (json,html,csv,junit) | json,html |
| coverage-types | Coverage types to run (endpoint,parameter,business,integration,error,security) | endpoint |
| rules | Path to business rules YAML (required for business type) | — |
| flows | Path to integration flows YAML (required for integration type) | — |
| language | Test language(s) (auto,typescript,javascript,java,python,ruby,cucumber) | auto |
| threshold-endpoint | Minimum required endpoint coverage % | 0 |
| threshold-parameter | Minimum required parameter coverage % | 0 |
| threshold-business | Minimum required business rule coverage % | 0 |
| threshold-integration | Minimum required integration flow coverage % | 0 |
| threshold-error | Minimum required error handling coverage % | 0 |
| threshold-security | Minimum required security coverage % | 0 |
| reports-dir | Directory to write reports into | reports |
Action outputs
| Output | Description |
|--------|-------------|
| endpoint-coverage | Endpoint coverage percentage |
| parameter-coverage | Parameter coverage percentage |
| business-coverage | Business rule coverage percentage |
| integration-coverage | Integration flow coverage percentage |
| error-coverage | Error handling coverage percentage |
| security-coverage | Security coverage percentage |
| reports-dir | Absolute path to the generated reports directory |
| overallStatus | "passed" or "failed" |
| overallCoverage | Average coverage % across all analyzed categories |
| failedGates | Comma-separated list of failed category names (empty when all pass) |
| summaryPath | Path to the generated build-summary.md |
The action fails (non-zero exit code) when any coverage threshold is not met. Summary files are always generated – even when the gate fails.
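For instance, a follow-up step can report on these outputs when the gate fails (a sketch; the step id `coverage` matches the workflow example above):

```yaml
- name: Report failed gates
  if: steps.coverage.outputs.overallStatus == 'failed'
  run: |
    echo "Overall coverage: ${{ steps.coverage.outputs.overallCoverage }}%"
    echo "Failed gates: ${{ steps.coverage.outputs.failedGates }}"
    cat "${{ steps.coverage.outputs.summaryPath }}"
```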
Commands overview
| Command | What it measures | Key flag |
|---------|-----------------|---------|
| endpoint-coverage | % of spec endpoints hit by tests | --spec, --tests |
| parameter-coverage | valid/boundary/missing/invalid param testing | --spec, --tests |
| business-coverage | % of business rules covered (@businessRule) | --rules, --tests |
| integration-coverage | % of integration flows covered (@flow) | --flows, --tests |
| error-coverage | 4xx/5xx/validation/timeout scenarios | --spec, --tests |
| security-coverage | OWASP Top-10 security scenarios | --spec, --tests |
| perf-resilience-coverage | Load-test SLA + resilience patterns | --spec, --load-results |
| compatibility-check | Breaking changes + Pact contract violations | --old-spec, --new-spec |
| generate-md-report | Markdown summary from JSON reports | --reports, --output |
| coverage-intelligence | Identify findings, missing tests, risk scores, P0–P3 priorities | --reports-dir, --out-dir |
All commands accept --format json,html,csv,junit and --threshold-* flags.
Coverage Intelligence (coverage-intelligence)
The intelligence command ingests all coverage reports and produces prioritised, AI-friendly outputs:
# Generate intelligence reports after running other coverage commands
api-coverage coverage-intelligence \
--reports-dir reports \
--out-dir reports \
--project-name my-api \
--languages typescript \
--frameworks jest
Generated files:
| File | Description |
|------|-------------|
| reports/coverage-intelligence.json | Full intelligence report (findings + recommendations) |
| reports/coverage-intelligence.md | AI-friendly summary with top 10 findings and recommendations |
| reports/missing-tests-recommendations.json | Prioritised missing test recommendations |
| reports/missing-tests-recommendations.md | Markdown recommendations per recommendation |
| reports/risk-prioritization.json | Risk breakdown by score, category, and endpoint |
| reports/risk-prioritization.md | Risk prioritisation narrative |
Functional Findings map gaps in coverage to specific root causes (e.g. "no auth test on DELETE /users/{id}").
Missing Test Recommendations are prioritised P0–P3 by a risk formula:
Risk Score =
0.30 × SeverityWeight +
0.20 × ExposureWeight +
0.15 × CriticalityWeight +
0.15 × MissingCoverageWeight +
0.10 × SecuritySignalWeight +
0.05 × FlowImpactWeight +
0.05 × ChangeVolatilityWeight

| Score | Risk Band | Priority |
|-------|-----------|----------|
| 85–100 | Critical | P0 — immediate action |
| 70–84 | Critical | P1 — high urgency |
| 50–69 | High | P2 — address soon |
| 0–49 | Moderate/Low | P3 — backlog |
Security findings, money-movement endpoints, and auth/authz gaps are never rated below P1 regardless of formula score.
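As an illustrative sketch (not the tool's actual implementation), the formula and its priority bands can be expressed in JavaScript; the assumption here is that each weight input is already normalised to a 0–100 value for its dimension:

```javascript
// Weighted risk score per the documented formula. Each field of `w`
// is assumed to be a 0–100 value for that risk dimension.
function riskScore(w) {
  return (
    0.30 * w.severity +
    0.20 * w.exposure +
    0.15 * w.criticality +
    0.15 * w.missingCoverage +
    0.10 * w.securitySignal +
    0.05 * w.flowImpact +
    0.05 * w.changeVolatility
  );
}

// Map a score to the documented priority bands. `isFloored` models the
// rule that security, money-movement, and auth/authz findings are never
// rated below P1, whatever the formula says.
function priorityFor(score, isFloored = false) {
  let p;
  if (score >= 85) p = 'P0';
  else if (score >= 70) p = 'P1';
  else if (score >= 50) p = 'P2';
  else p = 'P3';
  if (isFloored && p !== 'P0') p = 'P1';
  return p;
}
```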
Configuration
Create a config.yaml at your project root. Running analyze with no arguments discovers this
file automatically. If it is absent, the analyzer runs the full default profile and emits a warning.
version: 1
project:
name: my-api
analysis:
defaultMode: full
ast:
enabled: true # master switch for AST analysis
fallbackHeuristics: true # run regex when AST returns 0 results
maxCallDepth: 4 # wrapper/helper tracing depth
assertionAware: true # link HTTP calls to response assertions
languages: # per-language enable/disable toggles
javascript: { enabled: true }
typescript: { enabled: true }
java: { enabled: true }
kotlin: { enabled: true }
python: { enabled: true }
ruby: { enabled: true }
cucumber: { enabled: true }
scans:
coverage:
enabled: true
types:
- endpoint
- parameter
- business
- integration
- error
- security
- performance
- compatibility
security:
enabled: true
scanners:
- semgrep
- trivy
- zap
- gitleaks
intelligence:
enabled: true
types:
- ai-summary
- risk-prioritization
- recommendations
- scanner-interpretation
thresholds:
global: 80
endpoint: 90
qualityGate:
enabled: true
mode: warn
reports:
outputDir: reports
formats:
- json
- html
security:
pipeline:
trivyReport: reports/trivy.json
semgrepReport: reports/semgrep.json
gitleaksReport: reports/gitleaks.json
zapReport: reports/zap.json
autoLoadReports: true
unitAnalysis:
enabled: true
codeCoverage:
enabled: true
autoDiscover: true
thresholds:
line: 80
branch: 70
method: 80
mutationTesting:
enabled: false
tool: auto
threshold: 70
scope: changed-files
smellDetection:
enabled: true
failOn: CRITICAL
slowTestThreshold:
unit: 200
integration: 5000
independenceCheck:
enabled: true
minScore: 80
mcp:
enabled: false
Use --config <path> to load an arbitrary YAML file:
analyze --config ./configs/staging.yaml
See docs/guides/configuration.md for the full field reference.
If you are migrating from coverage.config.json, see
docs/guides/migration-to-config-yaml.md.
CLI threshold flags (--threshold-endpoint, etc.) still work but are deprecated. Migrate
values to the thresholds block in config.yaml.
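For example, a deprecated invocation such as `--threshold-endpoint 80 --threshold-business 60` maps to this minimal sketch of the thresholds block (assuming only these two gates are wanted):

```yaml
thresholds:
  endpoint: 80
  business: 60
```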
AST Analysis
The analyzer uses a multi-language, AST-based engine to detect HTTP calls with far higher accuracy than regex scanning. Analysis happens through a three-tier fallback cascade:
| Tier | Condition | Result |
|------|-----------|--------|
| 1 | AST parse succeeds | confidence: high or medium |
| 2 | AST yields 0 results + fallbackHeuristics: true | regex run, confidence: low |
| 3 | AST disabled or parse error | existing regex pipeline |
Supported languages
| Language | Parser | Notes |
|----------|--------|-------|
| JavaScript | @typescript-eslint/typescript-estree | axios, fetch, supertest, got |
| TypeScript | @typescript-eslint/typescript-estree | full type annotation support |
| Java | tree-sitter-java | RestAssured, MockMvc, WebTestClient |
| Kotlin | tree-sitter-kotlin → tree-sitter-java → regex | Ktor DSL, Spring Boot |
| Python | tree-sitter-python | requests, httpx, Django/Flask test clients |
| Ruby | tree-sitter-ruby | Rails request specs, HTTParty, Faraday |
| Cucumber | (step dispatch) | @Given/@When/@Then annotations |
Resolution types
Each covered endpoint carries a resolutionType indicating how the URL was found:
| Type | Description | Confidence |
|------|-------------|------------|
| direct | String literal URL in source | high |
| constant | Named constant resolved to URL | high/medium |
| enum | Enum member resolved to URL | high/medium |
| string-template | Template literal / f-string / interpolation | medium |
| wrapper-method | HTTP call traced through a helper function | medium |
| request-builder | Builder object (RequestEntity, etc.) | medium |
| client-mapping | Explicit client ↔ HTTP mapping | high |
| interpolated-path | URL with run-time segment interpolation | medium |
| cucumber-step | HTTP call inside a Cucumber step definition | medium |
| heuristic | Regex fallback | low |
Disable AST for a specific language
analysis:
ast:
enabled: true
languages:
kotlin:
enabled: false # use regex fallback for Kotlin only
Debugging resolution
Add --format json to any command and inspect the matches[].resolutionType field
in the endpoint coverage JSON report.
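A small sketch of that inspection, assuming the report has been parsed and each match carries a resolutionType field as described above (the report layout beyond that field is an assumption, not documented API):

```javascript
// Tally resolution types across endpoint matches, e.g. to see how many
// URLs were only recovered by the low-confidence regex heuristic.
function tallyResolutionTypes(matches) {
  const counts = {};
  for (const m of matches) {
    counts[m.resolutionType] = (counts[m.resolutionType] || 0) + 1;
  }
  return counts;
}

// Hypothetical usage against a generated JSON report:
// const report = JSON.parse(
//   require('fs').readFileSync('reports/endpoint-report.json', 'utf8'));
// console.log(tallyResolutionTypes(report.matches));
```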
Built-in Summary Engine
The library owns summary generation. No custom scripting is needed.
Generated files
When reports are generated (via CLI or GitHub Action), the following summary files
are automatically written to the configured --reports-dir:
| File | Description |
|------|-------------|
| reports/build-summary.md | Full CI/build summary (Markdown) |
| reports/pr-summary.md | Concise PR comment summary (Markdown) |
| reports/summary.json | Machine-readable summary data |
| reports/ai-summary.md | AI-optimized Markdown for agents |
| reports/ai-summary.json | AI-optimized JSON for agents |
Gate-aware inclusion
Sections are included only when the analyzer ran or a threshold was configured.
Analyzers that did not run are silently omitted — no empty sections appear.
Control enabled scan types via scans.coverage.types in config.yaml.
Public API
import { generateBuildSummary, generatePrSummary } from 'api-test-coverage-analyzer';
const { markdown, sections, json } = await generateBuildSummary({
results, // CoverageResult[] from any analyze* call
qualityGate, // QualityGateResult (optional)
thresholds, // Record<string, number> (optional)
projectName: 'my-api',
branch: 'main',
}, 'reports'); // optional output directory
type SummaryResult = {
markdown: string;
sections: Array<{
id: string; // e.g. "endpoint", "security-scan"
title: string; // human-readable heading
included: boolean;
gateEvaluated: boolean;
passed?: boolean;
markdown: string;
}>;
json: unknown; // machine-readable summary object
};
GitHub Action outputs
| Output | Description |
|--------|-------------|
| overallStatus | "passed" or "failed" |
| overallCoverage | Average coverage % across all analyzed categories |
| failedGates | Comma-separated list of failed category names |
| summaryPath | Path to build-summary.md |
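Given the SummaryResult shape above, a consumer can, for example, pick out the sections whose quality gate was evaluated and failed (a sketch, not part of the published API):

```javascript
// Return titles of sections that were included, gate-evaluated, and failed.
function failedSections(summary) {
  return summary.sections
    .filter((s) => s.included && s.gateEvaluated && s.passed === false)
    .map((s) => s.title);
}
```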
UI Dashboard
A Vite + React dashboard is included for visualising reports:
cd dashboard
npm install
npm run dev   # http://localhost:5173
Load any JSON report from reports/ and explore:
- Overview, Endpoints, Parameters, Business Rules, Integration Flows
- Security, Errors, Performance/Resilience, Trends
Documentation
Full documentation is available in the docs/ directory and can be served locally:
npm run docs:dev # start dev server at http://localhost:5174
npm run docs:build # build static site → docs/.vitepress/dist/
npm run docs:check # validate sidebar, links, and assets, then build
npm run docs:preview # serve the built site (production preview)
npm run docs:test     # run Cypress navigation/link tests
Documentation sections:
| Section | Description |
|---------|-------------|
| Getting Started | First-run walkthrough |
| Installation | Detailed setup steps |
| CLI Reference | All commands and options |
| Multi-Language Support | Java, Kotlin, Python, Ruby, Cucumber test suites |
| Coverage Intelligence | Findings, risk scoring, missing test recommendations |
| Architecture | Module design and data flow |
| CI/CD Integration | GitHub Actions & Jenkins |
| Interpreting Reports | Reading each report type |
| Writing Effective Tests | Test best practices |
| Extending via Plugins | Custom coverage types |
| Configuration Reference | config.yaml field reference |
| Troubleshooting | Common issues & FAQ |
| Glossary | Key terms |
| Contributing | How to contribute |
TypeScript Example Project
A complete end-to-end example is available under examples/typescript/.
This realistic Wallets / Payments API demonstrates the analyzer in a full project context.
Domain
| Concept | Description |
|---------|-------------|
| Wallets | Create, fund, debit, transfer, freeze/unfreeze, close |
| Payments | Create, process, refund, track status |
| Transactions | Ledger-style history |
| Risk / Limits | Daily limits, currency checks, idempotency |
| External deps | Payment processor + Fraud engine (nock-mocked) |
Test layers
| Layer | Location | What it tests |
|-------|----------|---------------|
| Unit | tests/unit/ | Service logic, risk rules, validation |
| Integration | tests/integration/ | Routes, auth, supertest end-to-end |
| Blackbox | tests/blackbox/ | Positive/negative/boundary/idempotency via HTTP |
| WireMock/nock | tests/wiremock/ | External dependency healthy / failed / timeout |
Running the example
cd examples/typescript
npm install
npm test # all 63 tests
npm run analyze # run the analyzer + generate reports
npm run screenshots   # capture Playwright screenshots
CI/CD demonstrations
| CI System | Location | What it does |
|-----------|----------|-------------|
| GitHub Actions | .github/workflows/ci.yml | Install, test, analyze, screenshots, upload artifacts |
| Jenkins | ci/jenkins/Jenkinsfile | Install, test, analyze, archive reports, surface gate failures |
Observability
cd examples/typescript/observability
docker-compose up # starts Prometheus + Grafana
# Grafana at http://localhost:3000 — dashboards pre-configured
Intentional coverage gaps
The example intentionally omits some test scenarios so the intelligence engine generates meaningful findings:
- Frozen wallet debit scenario (not tested)
- Daily $10,000 limit enforcement (not tested)
- Currency mismatch in transfer (not tested)
- Refund after 30-day window (not tested)
- Payment processor failure fallback (not tested)
Self-Analysis
The analyzer is self-analyzing: on every build it runs all implemented metric types against its own codebase, enforces 100% thresholds, and fails the build if any metric falls below 100%.
Quick start
make install # Install dependencies
make build # Compile TypeScript
make self-analysis-all   # Run all 8 metric types + intelligence engine
Or run the full CI pipeline:
make ci   # install → build → test → self-analysis-all → summary
Self-analysis input artifacts
| Artifact | Path | Purpose |
|---|---|---|
| OpenAPI spec | openapi.self-analysis.yaml | Analyzer CLI/library API surface |
| Business rules | business-rules.self-analysis.yaml | One rule per documented capability (19 rules) |
| Integration flows | integration-flows.self-analysis.yaml | Key usage sequences (5 flows) |
| Perf data | load-results.self-analysis.json | Reference data for performance metric |
| Config | coverage.self-analysis.json | 100% thresholds across all metrics |
Reports
All reports are written to reports/ after each run:
| File | Contents |
|---|---|
| reports/endpoint-report.json/html | Endpoint coverage |
| reports/parameter-report.json/html | Parameter coverage |
| reports/business-report.json/html | Business rule coverage |
| reports/integration-report.json/html | Integration flow coverage |
| reports/error-report.json/html | Error scenario coverage |
| reports/security-report.json/html | Security control coverage |
| reports/perf-resilience-report.json/html | Performance/resilience coverage |
| reports/coverage-intelligence.json | Intelligence findings + risk scores |
| reports/pr-summary.md | PR comment summary |
| reports/build-summary.md | Build log summary |
Thresholds
All self-analysis thresholds default to 100%. Override via environment variables for development:
THRESHOLD_ENDPOINT=80 make self-analysis-endpoint
See docs/guides/thresholds.md for full threshold documentation.
CI integration
The .github/workflows/self-analysis.yml workflow runs on every push and pull request.
All steps call Makefile targets. Pass/fail is governed by the analyzer's process exit code only.
See docs/guides/self-analysis.md for the full self-analysis guide.
Contributing
See CONTRIBUTING.md and the full contributing guide.
Please read our Code of Conduct before participating.
License
MIT © q-intel
