@ai-test-harness/core
v1.0.0
Core framework for AI quality testing with AWS Bedrock.
Installation
npm install @ai-test-harness/core
Usage
Programmatic API
import { runTests, LLMClientFactory, MockAdapter } from '@ai-test-harness/core';

// Run tests for a project
const result = await runTests({
  projectPath: './my-tests',
  configFile: 'config.yaml',
});

// Use LLM client directly
const client = LLMClientFactory.create({
  provider: 'bedrock',
  region: 'us-east-2',
});

const response = await client.chat({
  model: 'us.anthropic.claude-3-5-sonnet-20240620-v1:0',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,
  maxTokens: 1000,
});

// Unit testing with mock adapter
const mockClient = new MockAdapter();
mockClient.setDefaultResponse('Test response');
Exports
Test Runners
- TestRunner - For text-generation tests
- AgentTestRunner - For agent scenario tests
- AgentSimulationRunner - For dynamic agent simulation
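As an illustration only, the core loop a text-generation runner performs (generate an output per test case, judge it, collect verdicts) can be sketched like this. `TestCase`, `Verdict`, and `runSuite` are hypothetical stand-ins for this sketch, not the package's API:

```typescript
// Illustrative test-runner loop: generate, judge, aggregate.
// NOTE: these types and names are hypothetical, not @ai-test-harness/core exports.
interface TestCase {
  name: string;
  prompt: string;
  criterion: string; // quality criterion the judge checks
}

interface Verdict {
  name: string;
  passed: boolean;
}

type Generate = (prompt: string) => Promise<string>;
type Judge = (output: string, criterion: string) => Promise<boolean>;

async function runSuite(
  cases: TestCase[],
  generate: Generate,
  judge: Judge,
): Promise<Verdict[]> {
  const verdicts: Verdict[] = [];
  for (const c of cases) {
    const output = await generate(c.prompt);        // produce candidate text
    const passed = await judge(output, c.criterion); // evaluate against criterion
    verdicts.push({ name: c.name, passed });
  }
  return verdicts;
}
```

In the real runners, `generate` and `judge` would be backed by an LLM client; in unit tests they can be plain async functions.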
LLM Components
- LLMJudge - Evaluates outputs against quality criteria
- LLMGenerator - Generates text from prompts
- UserSimulator - Simulates user responses
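A judge of this kind typically asks a model whether an output meets a criterion and parses a verdict from the reply. The sketch below shows that flow in a self-contained way; the prompt wording and PASS/FAIL parsing are assumptions for illustration, not LLMJudge's actual behavior:

```typescript
// Illustrative judge flow: build a judging prompt, call a chat function,
// parse a PASS/FAIL verdict. Prompt format and parsing are assumed,
// not taken from LLMJudge.
type Chat = (prompt: string) => Promise<string>;

async function judgeOutput(
  chat: Chat,
  output: string,
  criterion: string,
): Promise<boolean> {
  const prompt =
    `Criterion: ${criterion}\n` +
    `Output: ${output}\n` +
    `Answer PASS if the output meets the criterion, otherwise FAIL.`;
  const answer = await chat(prompt);
  // Treat anything starting with PASS (case-insensitive) as a pass.
  return answer.trim().toUpperCase().startsWith('PASS');
}
```

Passing `chat` in as a function makes the judge trivial to unit-test with a canned reply.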
LLM Client Abstraction
- ILLMClient - Interface for LLM clients
- LLMClientFactory - Factory for creating clients
- BedrockAdapter - AWS Bedrock implementation
- LiteLLMAdapter - LiteLLM HTTP implementation
- MockAdapter - Testing mock
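Because every adapter implements one shared interface, a test double can stand in wherever a real client is used. This self-contained sketch shows the pattern; the interface shape here is an assumption for illustration and is not copied from ILLMClient:

```typescript
// Sketch of the client-abstraction pattern: one chat interface,
// swappable adapters. The shapes below are assumed, not ILLMClient itself.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

interface ChatClient {
  chat(req: ChatRequest): Promise<{ content: string }>;
}

// A mock adapter records requests and returns a canned response,
// mirroring what MockAdapter.setDefaultResponse enables in unit tests.
class RecordingMock implements ChatClient {
  requests: ChatRequest[] = [];
  constructor(private response: string) {}

  async chat(req: ChatRequest): Promise<{ content: string }> {
    this.requests.push(req);
    return { content: this.response };
  }
}
```

Code written against `ChatClient` runs unchanged against a Bedrock-backed adapter or the mock.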
Quality Library
- registerAttribute() - Register custom quality attributes
- generateAssessmentPrompt() - Generate evaluation prompts
- buildQualitySchema() - Build Zod schemas for validation
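The registry-plus-prompt-builder idea can be sketched in a few lines. The helpers below only approximate what `registerAttribute()` and `generateAssessmentPrompt()` do; their signatures and the prompt layout are assumptions, not the package's:

```typescript
// Illustrative attribute registry and prompt builder. Names and shapes
// are assumptions, not the actual signatures in @ai-test-harness/core.
type QualityAttribute = { name: string; description: string };

const registry = new Map<string, QualityAttribute>();

function registerAttribute(attr: QualityAttribute): void {
  registry.set(attr.name, attr);
}

function generateAssessmentPrompt(output: string, names: string[]): string {
  const lines = names.map((n) => {
    const attr = registry.get(n);
    if (!attr) throw new Error(`Unknown attribute: ${n}`);
    return `- ${attr.name}: ${attr.description}`;
  });
  return (
    `Rate the following output on each attribute:\n` +
    `${lines.join('\n')}\n\nOutput:\n${output}`
  );
}
```

Registering attributes once and referencing them by name keeps evaluation prompts consistent across test suites.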
Utilities
- Logger - Structured logging
- ConfigLoader - Load YAML configurations
- retryWithBackoff() - Resilient API calls
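A retry-with-backoff helper typically retries a failing async call with exponentially growing delays. This is a self-contained sketch of that behavior; the signature, attempt count, and delays are assumptions, not the package's actual implementation:

```typescript
// Illustrative exponential-backoff retry helper. Signature and defaults
// are assumed for this sketch, not taken from retryWithBackoff().
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError; // all attempts failed
}
```

Wrapping throttled Bedrock or LiteLLM calls this way absorbs transient rate-limit errors instead of failing the whole run.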
Environment Variables
# LLM Provider (default: bedrock)
LLM_PROVIDER=bedrock
# For LiteLLM integration
LITELLM_URL=https://litellm.company.com
LITELLM_API_KEY=sk-xxx
# AWS configuration
AWS_REGION=us-east-2
Documentation
For complete documentation, see the main repository.
Contributing
See CONTRIBUTING.md in the main repository.
License
MIT © AI Test Harness Contributors
