# @orq-ai/evaluators
v1.3.1
Reusable evaluators for AI evaluation frameworks. This package provides a collection of pre-built evaluators that can be imported and used in your .eval files.
## Installation
```shell
npm install @orq-ai/evaluators
```

## Usage
### String Contains Evaluator
Checks whether the output contains the expected output (case-insensitive by default):
```typescript
import { stringContainsEvaluator } from "@orq-ai/evaluators";

// Default: case-insensitive matching
const evaluator = stringContainsEvaluator();

// Case-sensitive matching
const strictEvaluator = stringContainsEvaluator({
  caseInsensitive: false
});

// Custom name
const namedEvaluator = stringContainsEvaluator({
  name: "contains-capital-city"
});
```

The evaluator compares the output against `data.expectedOutput` from the dataset and returns:
- `value: 1.0` and `pass: true` if the output contains the expected output
- `value: 0.0` and `pass: false` otherwise
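For reference, the scoring behavior described above can be sketched as a standalone function. This is a hypothetical illustration of the documented semantics, not the package's actual source:

```typescript
// Hypothetical sketch of the "string contains" scoring described above.
// Case-insensitive by default, matching the documented behavior.
interface ContainsOptions {
  caseInsensitive?: boolean; // defaults to true
}

function scoreStringContains(
  output: string,
  expectedOutput: string,
  options: ContainsOptions = {}
): { value: number; pass: boolean } {
  const { caseInsensitive = true } = options;
  const haystack = caseInsensitive ? output.toLowerCase() : output;
  const needle = caseInsensitive ? expectedOutput.toLowerCase() : expectedOutput;
  const pass = haystack.includes(needle);
  return { value: pass ? 1.0 : 0.0, pass };
}

// scoreStringContains("The capital of France is Paris", "paris")
// → { value: 1, pass: true }
```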
### Cosine Similarity Evaluator
Compares the semantic similarity of the output and the expected text using OpenAI embeddings:
```typescript
import {
  cosineSimilarityEvaluator,
  cosineSimilarityThresholdEvaluator,
  simpleCosineSimilarity
} from "@orq-ai/evaluators";

// Simple usage - returns a similarity score (0-1)
const evaluator = simpleCosineSimilarity("The capital of France is Paris");

// With threshold - returns a boolean based on the threshold
const thresholdEvaluator = cosineSimilarityThresholdEvaluator({
  expectedText: "The capital of France is Paris",
  threshold: 0.8,
  name: "semantic-match"
});

// Advanced configuration
const customEvaluator = cosineSimilarityEvaluator({
  expectedText: "Expected output text",
  model: "text-embedding-3-large", // optional: custom embedding model
  name: "custom-similarity"
});
```

### Environment Variables
The cosine similarity evaluators require one of the following environment variables:
- `OPENAI_API_KEY` - for direct OpenAI API access
- `ORQ_API_KEY` - for Orq proxy access (automatically uses `https://api.orq.ai/v2/proxy`)
When using the Orq proxy, prefix model names with `openai/` (e.g., `openai/text-embedding-3-small`).
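Under the hood, cosine similarity scores the angle between the two embedding vectors: `sim(a, b) = (a · b) / (|a| · |b|)`. A self-contained sketch of that formula (the evaluator itself obtains the vectors from the embedding model, which this illustration leaves out):

```typescript
// Cosine similarity between two embedding vectors.
// Returns dot(a, b) / (|a| * |b|); identical directions score 1,
// orthogonal vectors score 0.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Vector length mismatch");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// cosineSimilarity([1, 0], [1, 0]) → 1
// cosineSimilarity([1, 0], [0, 1]) → 0
```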
