# @open-evals/metrics
Pre-built evaluation metrics for RAG and LLM applications.
## Installation

```bash
pnpm add @open-evals/metrics
```

## Features
- **Faithfulness**: Measures how grounded responses are in the provided context
- **Factual Correctness**: Evaluates factual accuracy against reference answers
- **Answer Similarity**: Measures semantic similarity between generated and reference answers
- **Context Recall**: Measures retrieval quality against reference answers
- **Noise Sensitivity**: Tests robustness against irrelevant context
- **Evalite Integration**: Converts metrics to Evalite scorers (see the sketch below)

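## Usage

A minimal sketch of how a metric and the Evalite integration might fit together. The import names (`Faithfulness`, `toEvaliteScorer`), the `model` option, and the `score` call shape are assumptions for illustration, not the package's confirmed API; check the package's type definitions for the actual exports.

```ts
// Hypothetical usage sketch: the names below (`Faithfulness`,
// `toEvaliteScorer`, the `model` option, `score`) are assumed,
// not confirmed against the real @open-evals/metrics API.
import { Faithfulness, toEvaliteScorer } from "@open-evals/metrics";

// Score how grounded a generated answer is in the retrieved context.
const faithfulness = new Faithfulness({ model: "gpt-4o-mini" });

const result = await faithfulness.score({
  question: "What is the capital of France?",
  answer: "Paris is the capital of France.",
  context: ["Paris is the capital and largest city of France."],
});
console.log(result.score); // e.g. a value in [0, 1]

// Wrap the metric as an Evalite scorer for use in an Evalite eval suite.
const scorer = toEvaliteScorer(faithfulness);
```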