moda-ai
v1.15.1
Official TypeScript/Node.js SDK for Moda LLM observability with automatic conversation threading.
Features
- Automatic Instrumentation: Zero-config tracing for OpenAI and Anthropic clients
- Vercel AI SDK Integration: First-class support via experimental_telemetry with tool call tracking
- Conversation Threading: Groups multi-turn conversations together
- Streaming Support: Full support for streaming responses
- User Tracking: Associate LLM calls with specific users
- OpenTelemetry Native: Built on OpenTelemetry for standard-compliant telemetry
Installation
npm install moda-ai
Quick Start
import { Moda } from 'moda-ai';
import OpenAI from 'openai';
// Initialize once at application startup
Moda.init('moda_your_api_key');
// Set conversation ID for your session (recommended)
Moda.conversationId = 'session_' + sessionId;
// All OpenAI calls are now automatically tracked
const openai = new OpenAI();
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }],
});
// Flush before exit
await Moda.flush();
Conversation Tracking
Setting Conversation ID (Recommended)
For production use, explicitly set a conversation ID to group related LLM calls:
// Property-style API (recommended)
Moda.conversationId = 'support_ticket_123';
await openai.chat.completions.create({ ... });
await openai.chat.completions.create({ ... });
// Both calls share the same conversation_id
Moda.conversationId = null; // clear when done
// Method-style API (also supported)
Moda.setConversationId('support_ticket_123');
await openai.chat.completions.create({ ... });
Moda.clearConversationId();
Setting User ID
Associate LLM calls with specific users:
Moda.userId = 'user_12345';
await openai.chat.completions.create({ ... });
Moda.userId = null; // clear when done
// Or use method-style
Moda.setUserId('user_12345');
await openai.chat.completions.create({ ... });
Moda.clearUserId();
Scoped Context
For callback-based scoping (useful in async contexts):
import { withConversationId, withUserId, withContext } from 'moda-ai';
// Scoped conversation ID
await withConversationId('my_session_123', async () => {
await openai.chat.completions.create({ ... });
await openai.chat.completions.create({ ... });
// Both calls use 'my_session_123'
});
// Scoped user ID
await withUserId('user_456', async () => {
await openai.chat.completions.create({ ... });
});
// Both at once
await withContext('conv_123', 'user_456', async () => {
// ...
});
Automatic Fallback (Simple Chatbots Only)
If you don't set a conversation ID, the SDK automatically computes one by hashing the first user message and system prompt. This only works for simple chatbots where you pass the full message history with each API call:
// Turn 1
let messages = [{ role: 'user', content: 'Hi, help with TypeScript' }];
const r1 = await openai.chat.completions.create({ model: 'gpt-4', messages });
// Turn 2 - automatically linked to Turn 1
messages.push({ role: 'assistant', content: r1.choices[0].message.content });
messages.push({ role: 'user', content: 'How do I read a file?' });
const r2 = await openai.chat.completions.create({ model: 'gpt-4', messages });
// Both turns have the SAME conversation_id because "Hi, help with TypeScript"
// is still the first user message in both calls
Why This Works
LLM APIs are stateless. Each API call must include the full conversation history. The SDK extracts the first user message from the messages array and hashes it to create a stable conversation ID across turns.
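The idea can be sketched in a few lines. Note this is an illustrative approximation only, not moda-ai's actual algorithm (the SDK exposes its real implementation as computeConversationId); the hash function and ID format below are assumptions:

```typescript
import { createHash } from 'crypto';

type Message = { role: string; content: string };

// Illustrative sketch: hash the system prompt plus the first user message
// into a stable conversation ID. The real moda-ai algorithm may differ.
function sketchConversationId(messages: Message[], systemPrompt = ''): string {
  const firstUser = messages.find((m) => m.role === 'user')?.content ?? '';
  return 'conv_' + createHash('sha256')
    .update(systemPrompt + '\0' + firstUser)
    .digest('hex')
    .slice(0, 12);
}

// Turn 1
const turn1: Message[] = [{ role: 'user', content: 'Hi, help with TypeScript' }];
// Turn 2: full history is resent, so the first user message is unchanged
const turn2: Message[] = [
  ...turn1,
  { role: 'assistant', content: 'Sure! What do you need?' },
  { role: 'user', content: 'How do I read a file?' },
];
console.log(sketchConversationId(turn1) === sketchConversationId(turn2)); // true
```

Because turn 2 still begins with the same first user message, both turns map to the same ID; the moment the first user message changes (as in agent loops below), the ID changes too.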
When Automatic Detection Does NOT Work
Agent frameworks (LangChain, Claude Agent SDK, CrewAI, AutoGPT, etc.) do NOT pass full message history. Each agent iteration typically passes only:
- System prompt (with context baked in)
- Tool results from the previous step
- A continuation prompt
This means each iteration has a different first user message, resulting in different conversation IDs:
// Agent iteration 1
messages = [{ role: 'user', content: 'What are my top clusters?' }] // conv_abc123
// Agent iteration 2 (tool result)
messages = [{ role: 'user', content: 'Tool returned: ...' }] // conv_xyz789 - DIFFERENT!
// Agent iteration 3
messages = [{ role: 'user', content: 'Based on the data...' }] // conv_def456 - DIFFERENT!
For agent-based applications, you MUST use explicit conversation IDs:
// Wrap your entire agent execution
Moda.conversationId = 'agent_session_' + sessionId;
const agent = new LangChainAgent();
await agent.run('What are my top clusters?'); // All internal LLM calls share same ID
Moda.conversationId = null;
Anthropic Support
Works the same way with Anthropic's Claude:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
Moda.conversationId = 'claude_session_123';
const response = await anthropic.messages.create({
model: 'claude-3-haiku-20240307',
max_tokens: 1024,
system: 'You are a helpful assistant.',
messages: [{ role: 'user', content: 'Hello!' }],
});
OpenClaw Integration
OpenClaw can export OpenTelemetry data through its diagnostics-otel plugin.
Use Moda helpers to generate OpenClaw config and OTEL environment variables.
import { Moda } from 'moda-ai';
Moda.init(process.env.MODA_API_KEY!);
const openclawConfig = Moda.getOpenClawTelemetryConfig({
serviceName: 'openclaw-gateway',
});
const openclawEnv = Moda.getOpenClawEnvironment();
// Optionally trace gateway/CLI lifecycle operations
await Moda.withOpenClawOperation({ operation: 'gateway.request' }, async ({ span }) => {
span.setAttribute('openclaw.route', '/chat');
// your OpenClaw call here
});
Vercel AI SDK Support
The Moda SDK integrates with the Vercel AI SDK (ai package) via its built-in OpenTelemetry telemetry. Pass Moda.getVercelAITelemetry() to the experimental_telemetry option on any AI SDK function.
Basic Usage
import { Moda } from 'moda-ai';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
Moda.init('moda_your_api_key');
Moda.conversationId = 'session_123';
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Hello!',
experimental_telemetry: Moda.getVercelAITelemetry(),
});
Streaming
import { streamText } from 'ai';
const result = streamText({
model: openai('gpt-4o'),
prompt: 'Write a poem about coding',
experimental_telemetry: Moda.getVercelAITelemetry(),
});
for await (const chunk of result.textStream) {
process.stdout.write(chunk);
}
Tool Calls
Tool calls and results are automatically captured in telemetry spans. The Vercel AI SDK creates ai.toolCall child spans with tool name, arguments, and results.
import { generateText, tool } from 'ai';
import { z } from 'zod';
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'What is the weather in San Francisco?',
experimental_telemetry: Moda.getVercelAITelemetry(),
tools: {
getWeather: tool({
description: 'Get the current weather',
parameters: z.object({
location: z.string(),
}),
execute: async ({ location }) => {
return { temperature: 72, condition: 'sunny', location };
},
}),
},
maxSteps: 3,
});
Structured Output (generateObject)
import { generateObject } from 'ai';
import { z } from 'zod';
const result = await generateObject({
model: openai('gpt-4o'),
prompt: 'Generate a recipe for pasta',
schema: z.object({
name: z.string(),
ingredients: z.array(z.string()),
steps: z.array(z.string()),
}),
experimental_telemetry: Moda.getVercelAITelemetry(),
});
Options
Moda.getVercelAITelemetry({
// Don't record prompts (for sensitive data)
recordInputs: false,
// Don't record completions (for sensitive data)
recordOutputs: false,
// Group telemetry by function (shown in dashboard)
functionId: 'my-chat-handler',
// Custom metadata (merged with Moda's conversation_id/user_id)
metadata: {
environment: 'production',
feature: 'customer-support',
},
});
What Gets Captured
The Vercel AI SDK emits rich telemetry spans that Moda captures automatically:
| Span | Attributes |
|------|------------|
| ai.generateText / ai.streamText | Model, provider, prompt, response text, finish reason, usage tokens |
| ai.generateText.doGenerate / ai.streamText.doStream | Per-step details: messages, tools, tool choice, response, provider metadata |
| ai.toolCall | Tool name, tool call ID, arguments, result |
| ai.generateObject / ai.streamObject | Schema, structured output, validation |
| ai.embed / ai.embedMany | Embedding model, input values, dimensions |
Works With Any AI SDK Provider
Since the telemetry is handled by the AI SDK core (not the provider), it works with any @ai-sdk/* provider:
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';
// All of these work with the same telemetry config
const telemetry = Moda.getVercelAITelemetry();
await generateText({ model: anthropic('claude-3-5-sonnet-20241022'), ..., experimental_telemetry: telemetry });
await generateText({ model: google('gemini-1.5-pro'), ..., experimental_telemetry: telemetry });
await generateText({ model: mistral('mistral-large-latest'), ..., experimental_telemetry: telemetry });
Streaming Support
The SDK fully supports streaming responses:
const stream = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Count to 5' }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Streaming responses are automatically tracked
Using with Sentry (or other OpenTelemetry SDKs)
The Moda SDK automatically detects and coexists with other OpenTelemetry-based SDKs like Sentry. When an existing TracerProvider is detected, Moda adds its SpanProcessor to the existing provider instead of creating a new one.
Sentry v8+ Integration
Sentry v8+ uses OpenTelemetry internally for tracing. Initialize Sentry first, then Moda:
import * as Sentry from '@sentry/node';
import { Moda } from 'moda-ai';
import OpenAI from 'openai';
// 1. Initialize Sentry FIRST (sets up OpenTelemetry TracerProvider)
Sentry.init({
dsn: 'https://[email protected]/xxx',
tracesSampleRate: 1.0,
});
// 2. Initialize Moda SECOND (detects Sentry's provider automatically)
await Moda.init('moda_your_api_key', {
debug: true, // Shows: "[Moda] Detected existing TracerProvider, adding Moda SpanProcessor to it"
});
// 3. Use OpenAI normally - spans go to BOTH Sentry and Moda
const openai = new OpenAI();
const response = await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }],
});
// 4. Cleanup - Moda shutdown preserves Sentry
await Moda.flush();
await Moda.shutdown(); // Only shuts down Moda's processor, Sentry continues working
How It Works
When Moda detects an existing TracerProvider (e.g., from Sentry):
- Moda adds its SpanProcessor to the existing provider
- Both SDKs receive the same spans with identical trace IDs
- Moda.shutdown() only removes Moda's processor, preserving the other SDK
- You can re-initialize Moda after shutdown
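The coexistence behavior can be illustrated with a toy model. This is not OpenTelemetry's or Moda's real API, just a minimal sketch of one provider fanning spans out to multiple span processors, where shutting down one SDK removes only its own processor:

```typescript
// Toy model (not the real OpenTelemetry API) of a shared tracer provider:
// each SDK registers a processor; both see every span; removing one
// processor leaves the other untouched.
type Processor = { name: string; onSpan(span: string): void };

class ToyProvider {
  private processors: Processor[] = [];
  add(p: Processor) { this.processors.push(p); }
  remove(name: string) {
    this.processors = this.processors.filter((p) => p.name !== name);
  }
  emit(span: string) { this.processors.forEach((p) => p.onSpan(span)); }
}

const seen: Record<string, string[]> = { sentry: [], moda: [] };
const provider = new ToyProvider();
provider.add({ name: 'sentry', onSpan: (s) => seen.sentry.push(s) });
provider.add({ name: 'moda', onSpan: (s) => seen.moda.push(s) });

provider.emit('llm.call');  // both SDKs receive the span
provider.remove('moda');    // like Moda.shutdown(): only Moda's processor goes
provider.emit('http.get');  // Sentry still receives spans afterwards
```

This is why Sentry keeps working after Moda.shutdown(): both SDKs share one provider, but each owns only its own processor.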
Expected Behavior
With debug: true, you should see:
[Moda] Detected existing TracerProvider, adding Moda SpanProcessor to it
You should NOT see:
Error: Attempted duplicate registration of tracer provider
Supported SDKs
This coexistence works with any SDK that uses OpenTelemetry's TracerProvider:
- Sentry v8+
- Datadog APM
- New Relic
- Honeycomb
- Custom OpenTelemetry setups
Advanced: Standalone Provider (bypasses Sentry sampling)
If Sentry filters out LLM spans (only shows HTTP/DB spans), use Moda.createModaProvider to create a separate provider that bypasses Sentry's sampling:
// instrument.js - load AFTER Sentry.init()
import { Moda } from 'moda-ai';
if (process.env.MODA_API_KEY) {
// Create Moda's own provider (doesn't affect Sentry)
Moda.createModaProvider({ apiKey: process.env.MODA_API_KEY });
// Register OpenAI/Anthropic instrumentations
Moda.registerInstrumentations();
}
This approach:
- ✅ Bypasses Sentry's span sampling/filtering
- ✅ Sentry continues working normally for HTTP/DB/errors
- ✅ Moda receives all LLM spans independently
- ✅ Two separate pipelines, no interference
Configuration Options
Moda.init('moda_api_key', {
// Base URL for telemetry ingestion
baseUrl: 'https://ingest.moda.so/v1/traces',
// Environment name (shown in dashboard)
environment: 'production',
// Enable/disable the SDK
enabled: true,
// Enable debug logging
debug: false,
// Batch size for telemetry export
batchSize: 100,
// Flush interval in milliseconds
flushInterval: 5000,
});
API Reference
Moda Object
// Initialize the SDK
Moda.init(apiKey: string, options?: ModaInitOptions): void
// Force flush pending telemetry
Moda.flush(): Promise<void>
// Shutdown and release resources
Moda.shutdown(): Promise<void>
// Check initialization status
Moda.isInitialized(): boolean
// Vercel AI SDK integration
Moda.getVercelAITelemetry(options?: GetVercelAITelemetryOptions): VercelAITelemetryConfig
// Property-style context (recommended)
Moda.conversationId: string | null // get/set
Moda.userId: string | null // get/set
// Method-style context (also supported)
Moda.setConversationId(id: string): void
Moda.clearConversationId(): void
Moda.setUserId(id: string): void
Moda.clearUserId(): void
Context Functions
import { withConversationId, withUserId, withContext } from 'moda-ai';
// Scoped conversation ID
await withConversationId('conv_123', async () => {
// All LLM calls here use 'conv_123'
});
// Scoped user ID
await withUserId('user_456', async () => {
// All LLM calls here are associated with 'user_456'
});
// Both at once
await withContext('conv_123', 'user_456', async () => {
// ...
});
Utility Functions
import { computeConversationId, generateRandomConversationId } from 'moda-ai';
// Compute conversation ID from messages (same algorithm SDK uses)
const id = computeConversationId(messages, systemPrompt);
// Generate a random conversation ID
const randomId = generateRandomConversationId();
Graceful Shutdown
Always flush before your application exits:
process.on('SIGTERM', async () => {
await Moda.flush();
await Moda.shutdown();
process.exit(0);
});
Requirements
- Node.js >= 18.0.0
- TypeScript >= 5.0 (for type definitions)
Peer Dependencies
Install the LLM clients you want to use:
# For OpenAI (auto-instrumented)
npm install openai
# For Anthropic (auto-instrumented)
npm install @anthropic-ai/sdk
# For Vercel AI SDK (use with Moda.getVercelAITelemetry())
npm install ai @ai-sdk/openai # or @ai-sdk/anthropic, @ai-sdk/google, etc.
License
MIT
