# evalsense
v0.4.2
JS-native LLM evaluation framework with Jest-like API and statistical assertions
Jest for LLM Evaluation. Pass/fail quality gates for your LLM-powered code.
evalsense runs your code across many inputs, measures quality statistically, and gives you a clear pass / fail result — locally or in CI.
```sh
npm install --save-dev evalsense
```

## Quick Start
Create `quality.eval.js`:

```js
import { describe, evalTest, expectStats } from "evalsense";
import { readFileSync } from "fs";

describe("test answer quality", async () => {
  evalTest("toxicity detection", async () => {
    const answers = await generateAnswersDataset(testQuestions);
    const toxicityScore = await toxicity(answers);
    expectStats(toxicityScore).field("score").percentageBelow(0.5).toBeAtLeast(0.5);
  });

  evalTest("correctness score", async () => {
    const answers = await generateAnswersDataset(testQuestions);
    const groundTruth = JSON.parse(readFileSync("truth-dataset.json", "utf-8"));
    expectStats(answers, groundTruth)
      .field("label")
      .accuracy.toBeAtLeast(0.9)
      .precision("positive").toBeAtLeast(0.7)
      .recall("positive").toBeAtLeast(0.7)
      .displayConfusionMatrix();
  });
});
```

Run it:
```sh
npx evalsense run quality.eval.js
```

Output:

```
test answer quality
  ✓ toxicity detection (1ms)
    ✓ 50.0% of 'score' values are below or equal to 0.5 (expected >= 50.0%)
        Expected: 50.0%
        Actual:   50.0%
  ✓ correctness score (1ms)
    Field: label | Accuracy: 100.0% | F1: 100.0%
      negative: P=100.0% R=100.0% F1=100.0% (n=5)
      positive: P=100.0% R=100.0% F1=100.0% (n=5)
    Confusion Matrix: label
      Predicted →  negative  positive
      Actual ↓
      negative          5         0
      positive          0         5
    ✓ Accuracy 100.0% >= 90.0%
    ✓ Precision for 'positive' 100.0% >= 70.0%
    ✓ Recall for 'positive' 100.0% >= 70.0%
    ✓ Confusion matrix recorded for field "label"

All tests passed.
```

## Key Features
- Jest-like API — `describe`, `evalTest`, `expectStats` feel familiar
- Statistical assertions — accuracy, precision, recall, F1, MAE, RMSE, R²
- Confusion matrices — built-in display with `.displayConfusionMatrix()`
- Distribution monitoring — `percentageAbove`/`percentageBelow` without ground truth
- LLM-as-judge — built-in hallucination, relevance, faithfulness, toxicity metrics
- CI/CD ready — structured exit codes, JSON reporter, bail mode
- Zero config — works with any JS data loading and model execution
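The regression metrics in the list above (MAE, RMSE, R²) are the standard error statistics; the exact matcher names are in the API Reference. As background, a minimal plain-JS sketch of what these statistics compute (illustrative math, not evalsense's internals):

```js
// Illustrative plain-JS versions of the regression statistics the
// assertions check (MAE, RMSE, R²) — not the library's implementation.
function regressionStats(predicted, actual) {
  const n = predicted.length;
  const mean = actual.reduce((sum, y) => sum + y, 0) / n;
  let absErr = 0, sqErr = 0, totalVar = 0;
  for (let i = 0; i < n; i++) {
    const e = predicted[i] - actual[i];
    absErr += Math.abs(e);               // accumulates |error| for MAE
    sqErr += e * e;                      // accumulates squared error for RMSE and R²
    totalVar += (actual[i] - mean) ** 2; // total variance of the ground truth
  }
  return {
    mae: absErr / n,            // mean absolute error
    rmse: Math.sqrt(sqErr / n), // root mean squared error
    r2: 1 - sqErr / totalVar,   // coefficient of determination
  };
}

console.log(regressionStats([2.5, 0.0, 2.1], [3.0, -0.5, 2.0]));
```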
## LLM-Based Metrics
```js
import { setLLMClient, createAnthropicAdapter } from "evalsense/metrics";
import { hallucination, relevance } from "evalsense/metrics/opinionated";

setLLMClient(
  createAnthropicAdapter(process.env.ANTHROPIC_API_KEY, {
    model: "claude-haiku-4-5-20251001",
  })
);

const scores = await hallucination({
  outputs: [{ id: "1", output: "Paris has 50 million people." }],
  context: ["Paris has approximately 2.1 million residents."],
});
// scores[0].score → 0.9 (high hallucination)
// scores[0].reasoning → "Output claims 50M, context says 2.1M"
```

Built-in providers: OpenAI, Anthropic, OpenRouter, or bring your own adapter. See the LLM Metrics Guide and the Adapters Guide.
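One use of a custom adapter is deterministic, offline testing of your eval suite. The actual adapter contract is defined in the Adapters Guide; the shape below is purely hypothetical and only illustrates the idea of swapping the LLM client for a mock:

```js
// Hypothetical mock adapter — the real evalsense adapter contract is in
// the Adapters Guide; this sketch only shows the mocking idea.
function createMockAdapter(cannedReplies) {
  let call = 0;
  return {
    name: "mock",
    // Assumed signature: the judge sends a prompt, the adapter returns text.
    async complete(prompt) {
      return cannedReplies[Math.min(call++, cannedReplies.length - 1)];
    },
  };
}

const mock = createMockAdapter(['{"score": 0.9, "reasoning": "contradicts context"}']);
// setLLMClient(mock); // would route judge calls through the mock instead of a provider
```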
## Using with Claude Code (Vibe Check)
evalsense includes an example Claude Code skill that acts as an automated LLM quality gate. To set it up in your project:
- Install evalsense as a dev dependency
- Copy `skill.md` into your project at `.claude/skills/llm-quality-gate/SKILL.md`
- After building any LLM feature, run `/llm-quality-gate` in Claude Code
Claude will automatically create a `.eval.js` file with a real dataset and meaningful thresholds, run `npx evalsense run`, and give you a ship / no-ship decision.
## Documentation

| Guide | Description |
| --- | --- |
| API Reference | Full API — all assertions, matchers, metrics |
| CLI Reference | All CLI flags, exit codes, CI integration |
| LLM Metrics | Hallucination, relevance, faithfulness, toxicity |
| LLM Adapters | OpenAI, Anthropic, OpenRouter, custom adapters |
| Custom Metrics | Pattern and keyword metrics |
| Agent Judges | Design patterns for evaluating agent systems |
| Regression Metrics | MAE, RMSE, R² usage |
| Examples | Working code examples |
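The Custom Metrics guide covers pattern and keyword metrics. As a rough, library-independent sketch of the idea (the function name and record shape are illustrative, not evalsense's API), a keyword metric can score each output by flagged-term hits:

```js
// Illustrative keyword metric: fraction of flagged terms found in each
// output. Names and shapes are hypothetical, not evalsense's actual API.
function keywordMetric(flaggedTerms) {
  return (records) =>
    records.map((record) => {
      const text = record.output.toLowerCase();
      const hits = flaggedTerms.filter((term) => text.includes(term));
      return {
        id: record.id,
        score: hits.length / flaggedTerms.length, // 0 = clean, 1 = all terms present
        hits,
      };
    });
}

const scoreProfanity = keywordMetric(["idiot", "stupid"]);
console.log(scoreProfanity([{ id: "1", output: "That was a stupid answer." }]));
// → [{ id: "1", score: 0.5, hits: ["stupid"] }]
```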
## Dataset Format

Records must have an `id` or `_id` field:

```json
[
  { "id": "1", "text": "sample input", "label": "positive" },
  { "id": "2", "text": "another input", "label": "negative" }
]
```

## Exit Codes
| Code | Meaning |
| ---- | ------------------------- |
| 0 | All tests passed |
| 1 | Assertion failure |
| 2 | Dataset integrity failure |
| 3 | Execution error |
| 4 | Configuration error |
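In CI, these distinct codes let a wrapper separate quality failures from infrastructure problems. A minimal Node sketch (the code table is from the docs above; the wrapper pattern itself is illustrative, not part of evalsense):

```js
// Maps evalsense exit codes to messages; illustrative CI wrapper pattern.
const EXIT_MEANINGS = {
  0: "All tests passed",
  1: "Assertion failure",
  2: "Dataset integrity failure",
  3: "Execution error",
  4: "Configuration error",
};

function describeExit(code) {
  return EXIT_MEANINGS[code] ?? `Unknown exit code: ${code}`;
}

// In CI you would run the suite and propagate the code, e.g. via child_process:
//   const { status } = spawnSync("npx", ["evalsense", "run", "quality.eval.js"], { stdio: "inherit" });
//   console.error(describeExit(status));
//   process.exit(status);
console.log(describeExit(2));
// → Dataset integrity failure
```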
## Contributing
Contributions are welcome. See CONTRIBUTING.md for setup, coding standards, and the PR process.

