convergence-eval
One command. Find out if your AI agrees with itself.
Convergence measures whether multiple AI models produce consistent outputs on your evaluation tasks. It tells you which items your AI agrees on and which need human review — using the same psychometric methods trusted in clinical research.
The Problem
You have a golden dataset — "correct" answers your AI is measured against. But how reliable is it? If you ran the labeling again, would you get the same results?
Convergence answers that question with statistical rigor.
Quick Start
```shell
# Install
npm install -g convergence-eval

# Initialize from a template
convergence-eval init --template classification

# Edit data.json with your items, then run
convergence-eval run --schema schema.json --data data.json --config config.json --output results.json

# Read the results
convergence-eval report --input results.json
```

Total cost for a typical run (30 items, 4 raters): under $1 USD via AWS Bedrock.
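An item file might look like the sketch below. The field names here are hypothetical, shown only to convey the idea — the actual shape is dictated by your schema.json and the template you initialized from:

```json
{
  "items": [
    {
      "id": "item-001",
      "input": "Customer reports the package arrived damaged.",
      "label": "refund"
    },
    {
      "id": "item-002",
      "input": "Customer asks how to reset their password.",
      "label": "support"
    }
  ]
}
```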
What You Get
- Agreement scores per field — know exactly where your AI is reliable and where it isn't
- Per-item classification — high confidence, moderate, contested, or diagnostic alert
- Actionable guidance — not just scores, but what to do about them
- Bootstrap confidence intervals — know how certain the estimates are
- JSON + Markdown output — machine-readable for pipelines, human-readable for review
Templates
Start with a template, customize for your domain:
| Template | Use Case |
|----------|----------|
| classification | Binary or multi-class classification |
| extraction | Structured field extraction accuracy |
| summarization | Summary quality assessment |
| evidence-evaluation | Evidence relevance and limitations |
```shell
convergence-eval init --template summarization
```

CLI Reference

```shell
convergence-eval run       # Run convergence analysis
convergence-eval report    # Generate Markdown report
convergence-eval estimate  # Estimate cost (no API calls)
convergence-eval compare   # Compare two runs (before/after)
convergence-eval finalize  # Merge expert decisions into golden dataset
convergence-eval init      # Initialize from template
```

Run any command with --help for full options and examples.
Requirements
- Node.js >= 20
- AWS credentials with Bedrock access (`bedrock:InvokeModel`)
- At least 3 raters recommended (minimum 2)
License
Copyright (c) 2026 nquiry. All rights reserved. See LICENSE.
