# @traceai/fi-core
Core OpenTelemetry instrumentation library for TypeScript applications with advanced evaluation capabilities for AI systems.
## Overview
@traceai/fi-core provides a comprehensive tracing solution built on OpenTelemetry that's specifically designed for AI applications. It offers custom span exporters, evaluation tags, and seamless integration with the TraceAI platform for observability and performance monitoring.
## Features
- OpenTelemetry Integration: Built on top of OpenTelemetry APIs with custom implementations
- Custom Span Exporter: HTTP-based span exporter with configurable endpoints
- AI Evaluation Tags: Comprehensive evaluation system for AI applications with 50+ built-in evaluators
- Project Management: Support for project versioning, sessions, and metadata
- Flexible Configuration: Environment variable and programmatic configuration support
- TypeScript Support: Full TypeScript support with comprehensive type definitions
## Installation
```shell
npm install @traceai/fi-core
# or
pnpm add @traceai/fi-core
# or
yarn add @traceai/fi-core
```

## Module System Support
This package supports both CommonJS and ESM module systems for maximum compatibility.
### ESM (ES Modules)
```typescript
import { register, ProjectType, EvalTag } from '@traceai/fi-core';
```

### CommonJS
```javascript
const { register, ProjectType, EvalTag } = require('@traceai/fi-core');
```

### TypeScript Configuration
For optimal compatibility, ensure your `tsconfig.json` includes:
```json
{
  "compilerOptions": {
    "moduleResolution": "node",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true
  }
}
```

The `module` setting can be `"commonjs"`, `"esnext"`, or any other module system your project requires.
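For example, an ESM-first project might pair the same interop flags with an ES module output (a sketch only; adjust to your own build setup):

```json
{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true
  }
}
```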
## Quick Start

### Basic Setup
```typescript
import { register, ProjectType } from '@traceai/fi-core';

// Initialize tracing with minimal configuration
const tracerProvider = register({
  projectName: 'my-ai-project',
  projectType: ProjectType.EXPERIMENT,
});
```

### Advanced Configuration
```typescript
import {
  register,
  ProjectType,
  EvalTag,
  EvalTagType,
  EvalName,
  EvalSpanKind,
  ModelChoices,
} from '@traceai/fi-core';

// Create evaluation tags for AI model monitoring
const evalTags = [
  new EvalTag({
    type: EvalTagType.OBSERVATION_SPAN,
    value: EvalSpanKind.LLM,
    eval_name: EvalName.CONTEXT_ADHERENCE,
    custom_eval_name: 'custom_context_check',
    mapping: {
      context: 'raw.input',
      output: 'raw.output',
    },
    model: ModelChoices.TURING_SMALL,
  }),
];

// Register with comprehensive configuration
const tracerProvider = register({
  projectName: 'advanced-ai-project',
  projectType: ProjectType.EXPERIMENT,
  projectVersionName: 'v1.0.0',
  evalTags: evalTags,
  sessionName: 'experiment-session-1',
  verbose: true,
});
```

## Core Components
### TraceAI Provider
The `FITracerProvider` extends OpenTelemetry's `BasicTracerProvider` with custom functionality:
- Custom HTTP span exporter
- Automatic resource detection and configuration
- Built-in UUID generation for trace and span IDs
- Configurable batch or simple span processing
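To make the ID generation concrete: one way to derive OpenTelemetry-shaped IDs (a 32-hex-char trace ID, a 16-hex-char span ID) from UUIDs is sketched below. This is an illustration of the idea, not the library's actual implementation:

```typescript
import { randomUUID } from 'crypto';

// Illustrative sketch: derive OTel-shaped IDs from UUIDs.
// (Not the library's internal code.)
function generateTraceId(): string {
  // 128-bit trace id: 32 lowercase hex characters.
  return randomUUID().replace(/-/g, '');
}

function generateSpanId(): string {
  // 64-bit span id: 16 lowercase hex characters.
  return randomUUID().replace(/-/g, '').slice(0, 16);
}

console.log(generateTraceId().length); // 32
console.log(generateSpanId().length);  // 16
```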
### Evaluation System
The evaluation system provides comprehensive AI model assessment capabilities:
#### Built-in Evaluators
- Content Quality: Context adherence, completeness, groundedness
- Safety & Moderation: Toxicity, PII detection, content moderation
- Technical Validation: JSON validation, regex matching, length checks
- AI-Specific: Conversation coherence, prompt injection detection
- Custom Evaluations: API calls, custom code evaluation, agent-as-judge
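Evaluators receive their inputs through the `mapping` field shown in the Quick Start examples: each key names an evaluator input, and each value is a dotted path (such as `"raw.input"`) resolved against the span's payload. A rough illustration of that resolution, not the library's internals:

```typescript
// Resolve a dotted path like "raw.input" against a payload object.
function resolvePath(payload: unknown, path: string): unknown {
  return path.split('.').reduce<unknown>(
    (obj, key) => (obj as Record<string, unknown> | undefined)?.[key],
    payload,
  );
}

const payload = {
  raw: { input: 'What is the capital of France?', output: 'Paris.' },
};

const mapping = { context: 'raw.input', output: 'raw.output' };

// Build the evaluator's inputs from the mapping.
const evalInputs: Record<string, unknown> = Object.fromEntries(
  Object.entries(mapping).map(([name, path]) => [name, resolvePath(payload, path)]),
);

console.log(evalInputs);
// → { context: 'What is the capital of France?', output: 'Paris.' }
```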
#### Span Types

- `LLM`: Large Language Model operations
- `AGENT`: AI agent executions
- `TOOL`: Tool usage and function calls
- `RETRIEVER`: Information retrieval operations
- `EMBEDDING`: Vector embedding operations
- `RERANKER`: Result reranking operations
#### Project Types

- `EXPERIMENT`: For experimental AI development and testing
- `OBSERVE`: For production monitoring and observability
## API Reference

### Environment Variables

- `FI_BASE_URL`: Base URL for the TraceAI collector
- `FI_API_KEY`: API key for authentication
- `FI_SECRET_KEY`: Secret key for authentication
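These can be exported in the shell before starting your application. The URL below is a placeholder, not a real endpoint:

```shell
# Placeholder values -- substitute your own collector URL and credentials.
export FI_BASE_URL="https://collector.example.com"
export FI_API_KEY="your-api-key"
export FI_SECRET_KEY="your-secret-key"
```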
## Examples

### Simple LLM Tracing
```typescript
import { trace } from '@opentelemetry/api';
import {
  register,
  ProjectType,
  EvalTag,
  EvalTagType,
  EvalSpanKind,
  EvalName,
  ModelChoices,
} from '@traceai/fi-core';

// Initialize
register({
  projectName: 'llm-chat-bot',
  projectType: ProjectType.EXPERIMENT,
  evalTags: [
    new EvalTag({
      type: EvalTagType.OBSERVATION_SPAN,
      value: EvalSpanKind.LLM,
      eval_name: EvalName.CHUNK_ATTRIBUTION,
      config: {},
      custom_eval_name: 'Chunk_Attribution_5',
      mapping: {
        context: 'raw.input',
        output: 'raw.output',
      },
      model: ModelChoices.TURING_SMALL,
    }),
  ],
});

// Create traces
const tracer = trace.getTracer('my-app');
const span = tracer.startSpan('llm-completion');
span.setAttributes({
  'llm.model': 'gpt-4o-mini',
  'llm.prompt': 'What is the capital of France?',
  'llm.response': 'The capital of France is Paris.',
});
span.end();
```

## Development
### Building

```shell
pnpm build
```

### Testing

```shell
pnpm test
```

### Linting

```shell
pnpm lint
```

## Contributing
This package is part of the TraceAI project. Please refer to the main repository for contribution guidelines.
