@brizz/sdk v0.1.8
Brizz SDK
OpenTelemetry-based observability SDK for AI applications. Automatically instruments popular AI libraries including OpenAI, Anthropic, Vercel AI SDK, and more.
Table of Contents
- Features
- Installation
- Quick Start
- Module System Support
- Supported Libraries
- PII Protection & Data Masking
- Session Tracking
- Custom Events & Logging
- Environment Variables
- Advanced Configuration
- Testing & Development
- Package.json Examples
- Examples
- Troubleshooting
- Contributing
- License
Features
- 🔍 Automatic Instrumentation - Zero-code setup for popular AI libraries
- 📊 OpenTelemetry Native - Standards-compliant tracing, metrics, and logs
- 🛡️ PII Protection - Built-in masking for sensitive data
- 🔄 Session Tracking - Group related operations and traces
- 📦 Dual Module Support - Works with both ESM and CommonJS
- ⚡ Multiple Initialization Methods - Preload, ESM loader, or manual setup
Installation
npm install @brizz/sdk
# or
yarn add @brizz/sdk
# or
pnpm add @brizz/sdk
Quick Start
First, set up your environment variables:
BRIZZ_API_KEY=your-api-key
BRIZZ_BASE_URL=https://telemetry.brizz.dev # Optional
BRIZZ_APP_NAME=my-app # Optional
BRIZZ_LOG_LEVEL=info # Optional: debug, info, warn, error, none
CommonJS Projects
Option 1: Preload Only (Zero Config)
node --require @brizz/sdk/preload your-app.js
// your-app.js - No SDK setup needed!
const { generateText } = require('ai');
const { openai } = require('@ai-sdk/openai');
// Libraries are automatically instrumented
generateText({
model: openai('gpt-3.5-turbo'),
prompt: 'Hello, world!',
experimental_telemetry: { isEnabled: true },
}).then((result) => console.log(result.text));
Option 2: Preload + Initialize (Custom Config)
node --require @brizz/sdk/preload your-app.js
// your-app.js - Custom configuration
const { Brizz } = require('@brizz/sdk');
Brizz.initialize({
apiKey: 'your-api-key',
appName: 'my-app',
// custom config here
});
const { generateText } = require('ai');
// ... rest of your code
Option 3: Manual Import + Initialize
// Must be first import
const { Brizz } = require('@brizz/sdk');
Brizz.initialize({
apiKey: 'your-api-key',
appName: 'my-app',
});
// Import AI libraries after initialization
const { generateText } = require('ai');
const { openai } = require('@ai-sdk/openai');
ESM Projects
⚠️ ESM Requirement: ESM projects must use the --import @brizz/sdk/loader flag for instrumentation to work. Importing the SDK manually without the loader will not instrument AI libraries.
Loader + Initialize (Required for ESM)
node --import @brizz/sdk/loader your-app.mjs
// your-app.mjs
import { Brizz } from '@brizz/sdk';
// Must initialize SDK manually in ESM
Brizz.initialize({
apiKey: 'your-api-key',
appName: 'my-app',
});
// Import AI libraries after initialization
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = await generateText({
model: openai('gpt-3.5-turbo'),
prompt: 'Hello, world!',
experimental_telemetry: { isEnabled: true },
});
Important: Initialize Brizz before importing any libraries you want to instrument. If you use dotenv, add import "dotenv/config" before importing @brizz/sdk.
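For example, an ESM entrypoint that relies on dotenv would order its imports like this (a sketch only; modules evaluate in import order, and the SDK is assumed to fall back to BRIZZ_API_KEY from the environment, as described under Environment Variables):

```javascript
// your-app.mjs (run with: node --import @brizz/sdk/loader your-app.mjs)
import 'dotenv/config'; // must come first: populates process.env.BRIZZ_API_KEY
import { Brizz } from '@brizz/sdk';

Brizz.initialize({ appName: 'my-app' }); // apiKey assumed to be read from BRIZZ_API_KEY
```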
Module System Support
CommonJS
- Preload: node --require @brizz/sdk/preload app.js ⭐ (with optional Brizz.initialize())
- Manual: Require @brizz/sdk first, then Brizz.initialize(), then AI libraries
ESM (ES Modules)
- Loader: node --import @brizz/sdk/loader app.mjs + Brizz.initialize() ⭐
- Manual: Import @brizz/sdk first, then Brizz.initialize(), then AI libraries
For Next.js and Webpack environments that don't support automatic instrumentation, use manual instrumentation:
// For problematic bundlers
const aiModule = await import('ai');
Brizz.initialize({
apiKey: 'your-api-key',
instrumentModules: {
vercelAI: aiModule, // Manual instrumentation
},
});
Supported Libraries
The SDK automatically instruments:
- OpenAI - openai package
- Anthropic - @anthropic-ai/sdk package
- Vercel AI SDK - ai package (generateText, streamText, etc.) - requires experimental_telemetry: { isEnabled: true } in function calls
- LangChain - langchain and @langchain/* packages
- LlamaIndex - llamaindex package
- AWS Bedrock - @aws-sdk/client-bedrock-runtime
- Google Vertex AI - @google-cloud/vertexai
- Vector Databases - Pinecone, Qdrant, ChromaDB
- HTTP/Fetch - Automatic network request tracing
Vercel AI SDK Integration
For Vercel AI SDK instrumentation, you need to enable telemetry in your function calls:
import { generateText, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// For generateText
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Hello, world!',
experimental_telemetry: { isEnabled: true }, // Required for instrumentation
});
// For streamText
const stream = streamText({
model: openai('gpt-4'),
messages: [{ role: 'user', content: 'Hello!' }],
experimental_telemetry: { isEnabled: true }, // Required for instrumentation
});
This enables automatic tracing of:
- Model calls and responses
- Token usage and costs
- Tool calls and executions
- Streaming data
PII Protection & Data Masking
Automatically protects sensitive data in traces and logs:
Brizz.initialize({
apiKey: 'your-api-key',
masking: {
spanMasking: {
rules: [
{
attributePattern: 'gen_ai\\.(prompt|completion)',
mode: 'partial', // 'partial' or 'full'
patterns: ['sk-[a-zA-Z0-9]{48}'], // Custom API key pattern
},
],
},
logMasking: {
rules: [
{
attributePattern: 'message',
mode: 'full',
patterns: ['\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b'],
},
],
},
},
});
Built-in PII patterns automatically detect and mask emails, phone numbers, SSNs, credit cards, API keys, crypto addresses, IPs, and more.
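Conceptually, partial mode redacts only the substrings matched by a rule's patterns, while full mode replaces the whole attribute value. The sketch below illustrates that difference with a hypothetical maskValue helper; it is not the SDK's actual implementation:

```javascript
// Illustrative sketch of 'partial' vs 'full' masking modes (hypothetical helper,
// not the SDK's real internals). The email pattern mirrors the rule config above.
const EMAIL = /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g;

function maskValue(value, mode, patterns = [EMAIL]) {
  if (mode === 'full') return '***'; // redact the entire attribute value
  // partial: replace only the substrings that match a PII pattern
  return patterns.reduce((masked, re) => masked.replace(re, '***'), value);
}

console.log(maskValue('contact alice@example.com for access', 'partial'));
// → contact *** for access
console.log(maskValue('sk-abc123-secret', 'full'));
// → ***
```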
Session Tracking
Group related operations under a session context:
Function Wrapper Pattern
import { withSessionId, emitEvent } from '@brizz/sdk';
async function processUserWorkflow(userId: string) {
// All traces within this function will include the session ID
const result = await generateText({
model: openai('gpt-4'),
messages: [{ role: 'user', content: 'Hello' }],
experimental_telemetry: { isEnabled: true },
});
return result;
}
// Create a wrapped function that always executes with session context
const sessionedWorkflow = withSessionId('session-123', processUserWorkflow);
// Call multiple times, each with the same session context
await sessionedWorkflow('user-456');
await sessionedWorkflow('user-789');
Immediate Execution Pattern
import { callWithSessionId } from '@brizz/sdk';
// Execute function immediately with session context
await callWithSessionId('session-123', processUserWorkflow, null, 'user-456');
Custom Properties
Add custom properties to telemetry context:
import { withProperties, callWithProperties } from '@brizz/sdk';
// Wrapper pattern - properties applied to all calls
const taggedFn = withProperties({ env: 'prod', feature: 'chat' }, myFunction);
await taggedFn(args);
// Immediate execution - properties applied once
await callWithProperties({ userId: 'user-123' }, myFunction, null, args);
Handling Method Context
When wrapping methods that use this, you have several options:
Option 1: Arrow Function (Recommended)
class ChatService {
async processMessage(userId: string, message: string) {
// This method uses 'this' context
return `Processed by ${this.serviceName}: ${message}`;
}
}
const service = new ChatService();
// Wrap with arrow function to preserve 'this' context
const sessionedProcess = withSessionId('session-123', (userId: string, message: string) =>
service.processMessage(userId, message),
);
Option 2: Using bind()
// Pre-bind the method to preserve 'this' context
const sessionedProcess = withSessionId('session-123', service.processMessage.bind(service));
Option 3: Explicit thisArg Parameter
// Pass 'this' context explicitly as third parameter
const sessionedProcess = withSessionId(
'session-123',
service.processMessage,
service, // explicit 'this' context
);
Note: The arrow function approach (Option 1) is recommended as it's more explicit, avoids lint warnings, and is less prone to this-binding issues.
Deployment Environment
Optionally specify the deployment environment for better filtering and organization:
Brizz.initialize({
apiKey: 'your-api-key',
appName: 'my-app',
environment: 'production', // Optional: 'dev', 'staging', 'production', etc.
});
Custom Events & Logging
Emit custom events and structured logs:
import { emitEvent, logger } from '@brizz/sdk';
// Emit custom events
emitEvent('user.signup', {
userId: '123',
plan: 'pro',
source: 'website',
});
emitEvent('ai.request.completed', {
model: 'gpt-4',
tokens: 150,
latency: 1200,
});
// Structured logging
logger.info('Processing user request', { userId: '123', requestId: 'req-456' });
logger.error('AI request failed', { error: err.message, model: 'gpt-4' });
Environment Variables
# Required
BRIZZ_API_KEY=your-api-key
# Optional Configuration
BRIZZ_BASE_URL=https://telemetry.brizz.dev # Default telemetry endpoint
BRIZZ_APP_NAME=my-app # Application name
BRIZZ_APP_VERSION=1.0.0 # Application version
BRIZZ_ENVIRONMENT=production # Environment (dev, staging, prod)
BRIZZ_LOG_LEVEL=info # SDK log level
# OpenTelemetry Standard Variables
OTEL_SERVICE_NAME=my-service # Service name
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true # Capture AI content
Advanced Configuration
Brizz.initialize({
apiKey: 'your-api-key',
appName: 'my-app',
baseUrl: 'https://telemetry.brizz.dev',
// Custom headers for authentication
headers: {
'X-API-Version': '2024-01',
'X-Environment': 'production',
},
// Disable batching for immediate export (testing)
disableBatch: false,
// Custom exporters for testing
customSpanExporter: new InMemorySpanExporter(),
customLogExporter: new InMemoryLogExporter(),
// Disable internal NodeSDK (advanced usage)
disableNodeSdk: false,
// Log level for SDK diagnostics
logLevel: 'info', // debug, info, warn, error, none
});
Testing & Development
For testing and development, you can use in-memory exporters:
import { InMemorySpanExporter } from '@opentelemetry/sdk-trace-base';
const spanExporter = new InMemorySpanExporter();
Brizz.initialize({
apiKey: 'test-key',
customSpanExporter: spanExporter,
logLevel: 'debug',
});
// Later in tests
const spans = spanExporter.getFinishedSpans();
expect(spans).toHaveLength(1);
expect(spans[0].name).toBe('openai.chat');
Package.json Examples
CommonJS Projects:
{
"scripts": {
"start": "node --require @brizz/sdk/preload src/index.js",
"dev": "node --require @brizz/sdk/preload --watch src/index.js",
"debug": "node --require @brizz/sdk/preload --inspect src/index.js"
}
}
ESM Projects:
{
"type": "module",
"scripts": {
"start": "node --import @brizz/sdk/loader src/index.js",
"dev": "node --import @brizz/sdk/loader --watch src/index.js"
}
}
Examples
Check out the examples directory for complete working examples:
- Basic Usage - Simple AI application setup
- Vercel AI SDK - Integration with Vercel's AI SDK
- Session Tracking - Grouping related operations
- Custom Events - Emitting business metrics
- PII Masking - Data protection configuration
Troubleshooting
Common Issues
"Could not find declaration file"
- Make sure to build the SDK: pnpm build
- Check that dist/ contains .d.ts files
Instrumentation not working
- Ensure SDK is initialized before importing AI libraries
- For ESM, use the loader plus Brizz.initialize()
- For CommonJS, use preload (with optional Brizz.initialize())
- Check that BRIZZ_API_KEY is set
- For Vercel AI SDK: add experimental_telemetry: { isEnabled: true } to function calls
CJS/ESM compatibility issues
- Use dynamic imports in CommonJS: const ai = await import('ai')
- For bundlers, use manual instrumentation with instrumentModules
Debug Mode
Enable debug logging to troubleshoot issues:
BRIZZ_LOG_LEVEL=debug node --require @brizz/sdk/preload your-app.js
Contributing
See our Contributing Guide for development setup and guidelines.
License
Apache-2.0 - see LICENSE file.
