
@brizz/sdk

v0.1.8

OpenTelemetry-based observability SDK for AI applications

Brizz SDK

OpenTelemetry-based observability SDK for AI applications. Automatically instruments popular AI libraries including OpenAI, Anthropic, Vercel AI SDK, and more.

Features

  • 🔍 Automatic Instrumentation - Zero-code setup for popular AI libraries
  • 📊 OpenTelemetry Native - Standards-compliant tracing, metrics, and logs
  • 🛡️ PII Protection - Built-in masking for sensitive data
  • 🔄 Session Tracking - Group related operations and traces
  • 📦 Dual Module Support - Works with both ESM and CommonJS
  • ⚡ Multiple Initialization Methods - Preload, ESM loader, or manual setup

Installation

npm install @brizz/sdk
# or
yarn add @brizz/sdk
# or
pnpm add @brizz/sdk

Quick Start

First, set up your environment variables:

BRIZZ_API_KEY=your-api-key
BRIZZ_BASE_URL=https://telemetry.brizz.dev  # Optional
BRIZZ_APP_NAME=my-app                       # Optional
BRIZZ_LOG_LEVEL=info                        # Optional: debug, info, warn, error, none

CommonJS Projects

Option 1: Preload Only (Zero Config)

node --require @brizz/sdk/preload your-app.js

// your-app.js - No SDK setup needed!
const { generateText } = require('ai');
const { openai } = require('@ai-sdk/openai');

// Libraries are automatically instrumented
generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: 'Hello, world!',
  experimental_telemetry: { isEnabled: true },
}).then((result) => console.log(result.text));

Option 2: Preload + Initialize (Custom Config)

node --require @brizz/sdk/preload your-app.js

// your-app.js - Custom configuration
const { Brizz } = require('@brizz/sdk');

Brizz.initialize({
  apiKey: 'your-api-key',
  appName: 'my-app',
  // custom config here
});

const { generateText } = require('ai');
// ... rest of your code

Option 3: Manual Import + Initialize

// Must be first import
const { Brizz } = require('@brizz/sdk');

Brizz.initialize({
  apiKey: 'your-api-key',
  appName: 'my-app',
});

// Import AI libraries after initialization
const { generateText } = require('ai');
const { openai } = require('@ai-sdk/openai');

ESM Projects

⚠️ ESM Requirement: ESM projects must use the --import @brizz/sdk/loader flag for instrumentation to work. Manual import without the loader will not instrument AI libraries.

Loader + Initialize (Required for ESM)

node --import @brizz/sdk/loader your-app.mjs

// your-app.mjs
import { Brizz } from '@brizz/sdk';

// Must initialize SDK manually in ESM
Brizz.initialize({
  apiKey: 'your-api-key',
  appName: 'my-app',
});

// Import AI libraries after initialization
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: 'Hello, world!',
  experimental_telemetry: { isEnabled: true },
});

Important: Initialize Brizz before importing any libraries you want to instrument. If using dotenv, use import "dotenv/config" before importing @brizz/sdk.

Module System Support

CommonJS

  • Preload: node --require @brizz/sdk/preload app.js ⭐ (with optional Brizz.initialize())
  • Manual: Require @brizz/sdk first, then Brizz.initialize(), then AI libraries

ESM (ES Modules)

  • Loader: node --import @brizz/sdk/loader app.mjs + Brizz.initialize()
  • Manual: Import @brizz/sdk first, then Brizz.initialize(), then AI libraries

For Next.js and Webpack environments that don't support automatic instrumentation, use manual instrumentation:

// For problematic bundlers
const aiModule = await import('ai');

Brizz.initialize({
  apiKey: 'your-api-key',
  instrumentModules: {
    vercelAI: aiModule, // Manual instrumentation
  },
});

Supported Libraries

The SDK automatically instruments:

  • OpenAI - openai package
  • Anthropic - @anthropic-ai/sdk package
  • Vercel AI SDK - ai package (generateText, streamText, etc.)
    • Requires experimental_telemetry: { isEnabled: true } in function calls
  • LangChain - langchain and @langchain/* packages
  • LlamaIndex - llamaindex package
  • AWS Bedrock - @aws-sdk/client-bedrock-runtime
  • Google Vertex AI - @google-cloud/vertexai
  • Vector Databases - Pinecone, Qdrant, ChromaDB
  • HTTP/Fetch - Automatic network request tracing

Vercel AI SDK Integration

For Vercel AI SDK instrumentation, you need to enable telemetry in your function calls:

import { generateText, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// For generateText
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello, world!',
  experimental_telemetry: { isEnabled: true }, // Required for instrumentation
});

// For streamText
const stream = streamText({
  model: openai('gpt-4'),
  messages: [{ role: 'user', content: 'Hello!' }],
  experimental_telemetry: { isEnabled: true }, // Required for instrumentation
});

This enables automatic tracing of:

  • Model calls and responses
  • Token usage and costs
  • Tool calls and executions
  • Streaming data

PII Protection & Data Masking

Automatically protects sensitive data in traces and logs:

Brizz.initialize({
  apiKey: 'your-api-key',
  masking: {
    spanMasking: {
      rules: [
        {
          attributePattern: 'gen_ai\\.(prompt|completion)',
          mode: 'partial', // 'partial' or 'full'
          patterns: ['sk-[a-zA-Z0-9]{48}'], // Custom API key pattern
        },
      ],
    },
    logMasking: {
      rules: [
        {
          attributePattern: 'message',
          mode: 'full',
          patterns: ['\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}\\b'],
        },
      ],
    },
  },
});

Built-in PII patterns automatically detect and mask emails, phone numbers, SSNs, credit cards, API keys, crypto addresses, IPs, and more.
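To illustrate what 'partial' mode means in the rules above, here is a hypothetical sketch of regex-based partial masking (maskPartial is an illustrative name, not an SDK export, and this is not the SDK's actual implementation):

```typescript
// Hypothetical sketch of 'partial' masking: keep a short prefix and suffix
// of each regex match and star out the middle. Illustrative only.
function maskPartial(value: string, patterns: RegExp[]): string {
  let out = value;
  for (const pattern of patterns) {
    out = out.replace(pattern, (match) =>
      match.length <= 4
        ? '*'.repeat(match.length)
        : match.slice(0, 2) + '*'.repeat(match.length - 4) + match.slice(-2),
    );
  }
  return out;
}

// The custom API-key pattern from the config above
const apiKeyPattern = /sk-[a-zA-Z0-9]{48}/g;
const masked = maskPartial('token=sk-' + 'a'.repeat(48), [apiKeyPattern]);
// Only the first and last two characters of the matched key remain visible
```

A 'full' rule would simply replace the entire match with stars instead of preserving the edges.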

Session Tracking

Group related operations under a session context:

Function Wrapper Pattern

import { withSessionId } from '@brizz/sdk';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function processUserWorkflow(userId: string) {
  // All traces within this function will include the session ID
  const result = await generateText({
    model: openai('gpt-4'),
    messages: [{ role: 'user', content: 'Hello' }],
    experimental_telemetry: { isEnabled: true },
  });

  return result;
}

// Create a wrapped function that always executes with session context
const sessionedWorkflow = withSessionId('session-123', processUserWorkflow);

// Call multiple times, each with the same session context
await sessionedWorkflow('user-456');
await sessionedWorkflow('user-789');

Immediate Execution Pattern

import { callWithSessionId } from '@brizz/sdk';

// Execute function immediately with session context
await callWithSessionId('session-123', processUserWorkflow, null, 'user-456');
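Context propagation of this kind is typically built on Node's AsyncLocalStorage. A minimal standalone sketch of the idea (withSession and currentSession are illustrative names, not the SDK's internals):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Minimal sketch of withSessionId-style propagation; illustrative only.
const sessionStore = new AsyncLocalStorage<string>();

// Wrap a function so every call runs with the given session ID in scope
function withSession<A extends unknown[], R>(
  sessionId: string,
  fn: (...args: A) => R,
): (...args: A) => R {
  return (...args: A) => sessionStore.run(sessionId, () => fn(...args));
}

// Read the session ID for the current async context, if any
function currentSession(): string | undefined {
  return sessionStore.getStore();
}

const wrapped = withSession('session-123', () => currentSession());
// Inside wrapped() the session ID is visible; outside it is undefined
```

Anything called synchronously or asynchronously from inside the wrapped function sees the same store value, which is what lets every trace in the workflow pick up the session ID.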

Custom Properties

Add custom properties to telemetry context:

import { withProperties, callWithProperties } from '@brizz/sdk';

// Wrapper pattern - properties applied to all calls
const taggedFn = withProperties({ env: 'prod', feature: 'chat' }, myFunction);
await taggedFn(args);

// Immediate execution - properties applied once
await callWithProperties({ userId: 'user-123' }, myFunction, null, args);

Handling Method Context

When wrapping methods that use this, you have several options:

Option 1: Arrow Function (Recommended)

class ChatService {
  serviceName = 'chat';

  async processMessage(userId: string, message: string) {
    // This method uses 'this' context
    return `Processed by ${this.serviceName}: ${message}`;
  }
}

const service = new ChatService();
// Wrap with arrow function to preserve 'this' context
const sessionedProcess = withSessionId('session-123', (userId: string, message: string) =>
  service.processMessage(userId, message),
);

Option 2: Using bind()

// Pre-bind the method to preserve 'this' context
const sessionedProcess = withSessionId('session-123', service.processMessage.bind(service));

Option 3: Explicit thisArg Parameter

// Pass 'this' context explicitly as third parameter
const sessionedProcess = withSessionId(
  'session-123',
  service.processMessage,
  service, // explicit 'this' context
);

Note: The arrow function approach (Option 1) is recommended as it's more explicit, avoids lint warnings, and is less prone to this binding issues.
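The underlying pitfall can be demonstrated without the SDK at all; a standalone sketch:

```typescript
// Standalone illustration of the 'this' binding pitfall the options above avoid.
class Greeter {
  name = 'svc';
  greet(msg: string) {
    return `${this.name}: ${msg}`;
  }
}

const g = new Greeter();

const bound = g.greet.bind(g);                  // Option 2: 'this' fixed to g
const viaArrow = (msg: string) => g.greet(msg); // Option 1: call through g

// A bare reference (const detached = g.greet) would lose 'this'
// and fail when invoked, which is exactly what the wrappers must avoid.
```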

Deployment Environment

Optionally specify the deployment environment for better filtering and organization:

Brizz.initialize({
  apiKey: 'your-api-key',
  appName: 'my-app',
  environment: 'production', // Optional: 'dev', 'staging', 'production', etc.
});

Custom Events & Logging

Emit custom events and structured logs:

import { emitEvent, logger } from '@brizz/sdk';

// Emit custom events
emitEvent('user.signup', {
  userId: '123',
  plan: 'pro',
  source: 'website',
});

emitEvent('ai.request.completed', {
  model: 'gpt-4',
  tokens: 150,
  latency: 1200,
});

// Structured logging
logger.info('Processing user request', { userId: '123', requestId: 'req-456' });
logger.error('AI request failed', { error: err.message, model: 'gpt-4' });

Environment Variables

# Required
BRIZZ_API_KEY=your-api-key

# Optional Configuration
BRIZZ_BASE_URL=https://telemetry.brizz.dev    # Default telemetry endpoint
BRIZZ_APP_NAME=my-app                         # Application name
BRIZZ_APP_VERSION=1.0.0                       # Application version
BRIZZ_ENVIRONMENT=production                  # Environment (dev, staging, prod)
BRIZZ_LOG_LEVEL=info                         # SDK log level

# OpenTelemetry Standard Variables
OTEL_SERVICE_NAME=my-service                               # Service name
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true    # Capture AI content

Advanced Configuration

Brizz.initialize({
  apiKey: 'your-api-key',
  appName: 'my-app',
  baseUrl: 'https://telemetry.brizz.dev',

  // Custom headers for authentication
  headers: {
    'X-API-Version': '2024-01',
    'X-Environment': 'production',
  },

  // Disable batching for immediate export (testing)
  disableBatch: false,

  // Custom exporters for testing
  customSpanExporter: new InMemorySpanExporter(),
  customLogExporter: new InMemoryLogExporter(),

  // Disable internal NodeSDK (advanced usage)
  disableNodeSdk: false,

  // Log level for SDK diagnostics
  logLevel: 'info', // debug, info, warn, error, none
});

Testing & Development

For testing and development, you can use in-memory exporters:

import { InMemorySpanExporter } from '@opentelemetry/sdk-trace-base';

const spanExporter = new InMemorySpanExporter();

Brizz.initialize({
  apiKey: 'test-key',
  customSpanExporter: spanExporter,
  logLevel: 'debug',
});

// Later in tests
const spans = spanExporter.getFinishedSpans();
expect(spans).toHaveLength(1);
expect(spans[0].name).toBe('openai.chat');

Package.json Examples

CommonJS Projects:

{
  "scripts": {
    "start": "node --require @brizz/sdk/preload src/index.js",
    "dev": "node --require @brizz/sdk/preload --watch src/index.js",
    "debug": "node --require @brizz/sdk/preload --inspect src/index.js"
  }
}

ESM Projects:

{
  "type": "module",
  "scripts": {
    "start": "node --import @brizz/sdk/loader src/index.js",
    "dev": "node --import @brizz/sdk/loader --watch src/index.js"
  }
}

Examples

Check out the examples directory for complete working examples:

  • Basic Usage - Simple AI application setup
  • Vercel AI SDK - Integration with Vercel's AI SDK
  • Session Tracking - Grouping related operations
  • Custom Events - Emitting business metrics
  • PII Masking - Data protection configuration

Troubleshooting

Common Issues

"Could not find declaration file"

  • Make sure to build the SDK: pnpm build
  • Check that dist/ contains .d.ts files

Instrumentation not working

  • Ensure SDK is initialized before importing AI libraries
  • For ESM, use loader + Brizz.initialize()
  • For CommonJS, use preload (with optional Brizz.initialize())
  • Check that BRIZZ_API_KEY is set
  • For Vercel AI SDK: Add experimental_telemetry: { isEnabled: true } to function calls

CJS/ESM compatibility issues

  • Use dynamic imports in CommonJS: const ai = await import('ai')
  • For bundlers, use manual instrumentation with instrumentModules

Debug Mode

Enable debug logging to troubleshoot issues:

BRIZZ_LOG_LEVEL=debug node --require @brizz/sdk/preload your-app.js

Contributing

See our Contributing Guide for development setup and guidelines.

License

Apache-2.0 - see LICENSE file.