MCP Observability SDK
Official SDK for instrumenting your applications with MCP (Model Context Protocol) observability.
Instrument your app in 5 steps
1. Create a tenant + token (dashboard API Tokens or pnpm --filter @mcp-obs/api run bootstrap:tenant) and note the tenant_id + ingestion token.
2. Set env vars in your app: MCP_OBS_ENDPOINT=<ingestion-url>, MCP_OBS_API_KEY=<token>, MCP_OBS_SOURCE=<your-service>, MCP_OBS_TENANT=<tenant_id>, MCP_OBS_ENVIRONMENT=production|staging|development.
3. Install the SDK: pnpm add mcp-obs-sdk (or npm/yarn).
4. Wrap LLM calls with MCPTracer.trace or wrapLLMCall; call await client.flush() for instant visibility.
5. Verify in the dashboard: traces appear in seconds; charts populate once the metrics worker is consuming the trace-events stream (already live on hosted; on self-host, start it via pnpm --filter @mcp-obs/metrics dev alongside the ingestion worker).
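A minimal sketch of steps 2–4 in one place (assuming the env vars from step 2 are set; the stub callFn stands in for a real provider call):

import { initializeObservability, MCPTracer, LLMProvider } from 'mcp-obs-sdk';

// Step 2's env vars drive the client config
const client = initializeObservability({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  sourceMCP: process.env.MCP_OBS_SOURCE ?? 'my-service',
});
const tracer = new MCPTracer({ client, source: process.env.MCP_OBS_SOURCE ?? 'my-service' });

// Step 4: wrap a call, then flush for instant visibility
const { traceId } = await tracer.trace({
  provider: LLMProvider.OPENAI,
  model: 'gpt-4o-mini',
  prompt: 'ping',
  callFn: async () => ({ ok: true }), // stand-in for a real LLM call
});
await client.flush();
console.log('Sent trace', traceId);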
Installation
npm install mcp-obs-sdk

Quick Start
Option 1: CLI Setup (Recommended)
Let the CLI install the SDK, detect your framework, and wire up tracing/logging boilerplate for you:
npx mcp-obs quickstart
# or with pnpm
pnpm dlx mcp-obs quickstart

During the quickstart the CLI will:
- 📦 Install mcp-obs-sdk (using npm / pnpm / yarn automatically)
- 🧭 Detect your framework + LLM provider
- 🧰 Drop in tracing + logging middleware
- 🔑 Configure .env with endpoint/API key placeholders
- 🧪 Verify connectivity via npx mcp-obs health

Prefer to run the steps yourself? The same flow, command by command:
# Install SDK
npm install mcp-obs-sdk
# Auto-detect your environment
npx mcp-obs detect
# Initialize with interactive setup
npx mcp-obs init
# Verify setup
npx mcp-obs health

The CLI will:
- ✅ Auto-detect your framework (Express, Next.js, Fastify, etc.)
- ✅ Auto-detect LLM providers (OpenAI, Anthropic, etc.)
- ✅ Generate instrumentation code
- ✅ Create environment configuration
- ✅ Set up middleware
See CLI.md for complete CLI documentation.
Option 2: Manual Setup
import OpenAI from 'openai';
import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const tracer = new MCPTracer({
  endpoint: 'https://your-ingestion-url.com/v1/traces',
  source: 'my-app',
  defaultTags: { service: 'my-app', environment: 'production' },
});
// Wrap your LLM calls
const { response, traceId } = await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4o-mini',
prompt: 'Hello, world!',
callFn: async () => {
return await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello, world!' }],
});
},
});
console.log('Trace ID:', traceId);
console.log('Response:', response);

Capturing Logs, Session Events, and Tool Runs
The SDK ships the emitters you need for holistic observability—no extra packages required:
import { initializeObservability } from 'mcp-obs-sdk';
const client = initializeObservability({
endpoint: process.env.MCP_OBS_ENDPOINT!,
sourceMCP: 'my-app',
enableLogging: true,
});
// Structured application logs (ships via `/v1/logs`)
client.log({
level: 'info',
message: 'LLM request queued',
tags: { feature: 'summaries', 'session.id': 'session-42' },
attributes: { job: 'daily-delta' },
});
// Session timeline events (tools, execution steps, server runs)
client.emitSessionEvent({
session_id: 'session-42',
event_type: 'tool_invocation',
payload: {
invocation_id: crypto.randomUUID(),
name: 'calendar.lookup',
status: 'success',
started_at: new Date().toISOString(),
},
});
// Tool analytics feed (powers the dashboard Tool Runs view)
client.recordToolRun({
session_id: 'session-42',
tool_name: 'salesforce.contact.search',
status: 'completed',
});

✅ initializeObservability wires the trace ingester and the log/session/tool run emitters, so once the SDK is installed you already have every transport needed for the dashboard’s Logs, Traces, Sessions, and Tool Runs tabs.
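To tie traced LLM calls into the same session timeline, hand this client to MCPTracer (mirroring the CRM example below) and pass the matching session_id when tracing; a minimal sketch with a stub callFn:

import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';

const tracer = new MCPTracer({ client, source: 'my-app' });

// This trace joins the 'session-42' timeline alongside the log,
// session event, and tool run emitted above.
await tracer.trace({
  sessionId: 'session-42',
  provider: LLMProvider.OPENAI,
  model: 'gpt-4o-mini',
  prompt: 'Summarize the calendar lookup',
  callFn: async () => ({ ok: true }), // stand-in for a real LLM call
});

// In cron jobs or CLI scripts, flush batched events before the process exits.
await client.flush();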
CRM Revenue Context (NEW!)
The SDK now keeps CRM context close to every trace, log, session event, and tool run so dashboards can pivot on rep/deal/pipeline questions without extra plumbing.
import { initializeObservability, MCPTracer, LLMProvider } from 'mcp-obs-sdk';
const client = initializeObservability({ /* ... */ });
const tracer = new MCPTracer({ client, source: 'rev-agent' });
// 1) Seed org-wide defaults or pull from /v1/crm/config
client.setDefaultCRMContext({
rep: { rep_id: 'rep-ashley', name: 'Ashley Gomez' },
deal: { pipeline_id: 'enterprise', stage: 'Proposal' },
});
// 2) Teach the SDK about your GTM pipelines (probabilities, stages, etc.)
client.registerCrmPipelines([
{
pipelineId: 'enterprise',
name: 'Enterprise New Biz',
stages: [
{ name: 'Discovery', sequence: 1, probability: 0.2 },
{ name: 'Proposal', sequence: 3, probability: 0.55 },
],
},
]);
// 3) Attach CRM context to a conversation once instead of on every trace
client.setSessionCRMContext('session-42', {
rep: { rep_id: 'rep-ashley', email: '[email protected]' },
deal: { opportunity_id: 'opp-9821', stage: 'Proposal' },
});
// 4) All traces/session events/tool runs emitted inside this session inherit CRM metadata
await tracer.trace({
sessionId: 'session-42',
provider: LLMProvider.OPENAI,
model: 'gpt-4o-mini',
  prompt: 'Summarize latest forecast risk notes',
  callFn: () => openai.chat.completions.create({ /* ... */ }),
});

💡 RevOps can update the same defaults and pipeline catalog without code under Settings → CRM in the dashboard (backed by the /v1/crm/config API). The SDK automatically hydrates those definitions at startup if you load them from the API.

🪄 initializeObservability() now attempts to fetch /v1/crm/config on boot (using your SDK API key or Supabase auth), so the defaults/pipelines managed in the dashboard stay synced without manual glue. You can also call obs.refreshCrmConfigFromServer() at runtime to pick up changes immediately.
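For example, a long-running service could re-sync periodically so dashboard edits land without a restart (a sketch; the interval is illustrative and assumes refreshCrmConfigFromServer returns a Promise):

// Pick up dashboard-managed CRM defaults/pipelines every 5 minutes.
setInterval(() => {
  client.refreshCrmConfigFromServer().catch((err) => {
    console.warn('CRM config refresh failed; keeping last-known config', err);
  });
}, 5 * 60 * 1000);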
MCP Server Auto-Instrumentation
Wrap any McpServer once and the SDK will stream spans, logs, session events, and tool runs for every request (tools, prompts, fallbacks, etc.). The CLI’s init/quickstart commands scaffold this for you via .mcp-obs/instrument.ts.
// .mcp-obs/instrument.ts (generated)
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp';
import { initializeObservability, LLMProvider } from 'mcp-obs-sdk';
import { instrumentMcpServer } from 'mcp-obs-sdk/integrations/mcp';
export const obs = initializeObservability({ /* ... */ });
export function attachObservability(server: McpServer) {
instrumentMcpServer(server, {
client: obs,
provider: LLMProvider.OPENAI,
model: 'gpt-4o-mini',
sessionResolver: (request, extra) => extra?.sessionId ?? request?.session_id,
crmContextResolver: (request) => lookupCrmContext(request), // optional
});
return server;
}

Use it wherever you create servers:
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp';
import { attachObservability } from './.mcp-obs/instrument';
const server = attachObservability(
new McpServer({
name: 'rev-assistant',
version: '1.0.0',
})
);

instrumentMcpServer automatically:
- traces every request/tool call with model/provider metadata
- emits structured logs at start/finish/error (with trace/session correlation)
- publishes session timeline events (start/finish/error tool invocations)
- records tool runs for the dashboard’s Agent Analytics view
Features
Core Tracing
- 🚀 Zero-config tracing - Wrap any LLM call with automatic instrumentation
- 📊 Multi-provider support - OpenAI, Anthropic, Google, Mistral, and more
- 💰 Cost tracking - Automatic token counting and cost estimation
- ⚡ Performance monitoring - Latency tracking and error detection
- 🏷️ Custom tagging - Add metadata and tags to your traces
- 🔄 Session tracking - Group related calls together
- 🎯 Type-safe - Full TypeScript support
Enterprise Features (NEW!)
- 🔄 Retry Logic - Exponential backoff with jitter for transient failures
- 🛡️ Circuit Breaker - Prevent cascading failures with automatic recovery
- 🗜️ Gzip Compression - 70-90% bandwidth reduction automatically
- 💾 Disk Buffering - Offline resilience with automatic replay
- 🔗 W3C Trace Context - Distributed tracing across services
- 📤 OTLP Export - OpenTelemetry compatibility (Jaeger, Zipkin, etc.)
- 🔒 PII Sanitization - GDPR/HIPAA compliance with 20+ patterns
- 🌟 Graceful Shutdown - Automatic flush on SIGTERM/SIGINT
- ☁️ Cloud Metadata - Canonical infra/runtime metadata with privacy & compliance controls
📖 See ENTERPRISE_FEATURES.md for detailed documentation
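For intuition, here is the retry pattern named above (exponential backoff with full jitter) as a standalone sketch; this illustrates the technique and is not the SDK's internal code:

// Illustration only: retry a flaky async operation with exponential backoff.
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const cap = 1000 * 2 ** attempt;   // exponential ceiling: 1s, 2s, 4s, ...
      const delay = Math.random() * cap; // full jitter spreads out retry storms
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}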
Runtime Context & Auto-Tuning
- 🔍 Runtime Fingerprinting – Automatically detects AWS Lambda, Vercel Functions/Edge, Cloudflare Workers, Netlify Functions, Google Cloud Functions/Run, Azure Functions, Docker/Kubernetes, Fly.io, GitHub Actions, GitLab CI, CircleCI, and local environments.
- 🧠 RuntimeContext API – Call getRuntimeContext() (or client.getRuntimeContext()) to read the normalized provider/platform/mode, filesystem hints, CI metadata, recommended batch/flush/disk-buffer/retry tuning, and feature flags.
- ⚙️ Behavior Switching – The SDK auto-adjusts batching, flush cadence, retries, and disk buffering for serverless/edge runtimes, containers, and CI; it also surfaces ingestion endpoint hints (https://<region>.ingest.mcp-obs.com).
- 🔌 Custom Detectors – Use registerRuntimeDetector() to plug in proprietary platforms.
import { getRuntimeContext, initializeObservability } from 'mcp-obs-sdk';
const runtime = getRuntimeContext();
console.log(runtime.provider, runtime.mode, runtime.featureFlags);
const client = initializeObservability({
endpoint: process.env.MCP_OBS_ENDPOINT!,
sourceMCP: 'my-mcp',
});
if (client.getRuntimeFeatureFlags()['runtime.edge']) {
console.log('Using fast flush mode for edge runtimes');
}

Configuration Options
interface MCPTracerConfig {
endpoint: string; // Ingestion endpoint URL
source: string; // Source identifier for your app
apiKey?: string; // Optional API key for authentication
sessionId?: string; // Optional session ID
tenantId?: string; // Optional tenant ID for multi-tenancy
defaultTags?: Record<string, string>; // Default tags for all traces
timeout?: number; // Request timeout in ms (default: 5000)
retries?: number; // Number of retries (default: 3)
enableLogging?: boolean; // Enable debug logging (default: false)
}
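For instance, a tracer exercising most of these options (all values are illustrative):

const tracer = new MCPTracer({
  endpoint: 'https://your-ingestion-url.com/v1/traces',
  source: 'checkout-service',
  apiKey: process.env.MCP_OBS_API_KEY,
  tenantId: 'tenant-123',
  defaultTags: { environment: 'production', team: 'payments' },
  timeout: 10_000, // wait up to 10s per ingestion request
  retries: 5,      // retry transient failures five times
  enableLogging: process.env.NODE_ENV !== 'production',
});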
Supported Providers

enum LLMProvider {
OPENAI = 'openai',
ANTHROPIC = 'anthropic',
GOOGLE = 'google',
MISTRAL = 'mistral',
COHERE = 'cohere',
HUGGINGFACE = 'huggingface',
REPLICATE = 'replicate',
CUSTOM = 'custom',
}
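Anything not on this list can still be traced via LLMProvider.CUSTOM (a sketch; myLocalModel is a hypothetical stand-in for your own client):

await tracer.trace({
  provider: LLMProvider.CUSTOM,
  model: 'my-local-model',
  prompt: 'Hello',
  callFn: () => myLocalModel.generate('Hello'), // any async call can be wrapped
});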
Advanced Usage

Session Tracking
const sessionId = 'user-session-123';
await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
prompt: 'First message',
sessionId,
callFn: () => openai.chat.completions.create({...}),
});
await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
prompt: 'Follow-up message',
sessionId, // Same session
callFn: () => openai.chat.completions.create({...}),
});

Custom Tags and Metadata
await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
prompt: 'Analyze this document',
tags: {
feature: 'document-analysis',
user: 'user-123',
priority: 'high',
},
metadata: {
documentId: 'doc-456',
pageCount: 10,
},
callFn: () => openai.chat.completions.create({...}),
});

Error Handling
try {
const { response } = await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
prompt: 'Hello',
callFn: () => openai.chat.completions.create({...}),
});
} catch (error) {
// The trace is automatically recorded with error status
console.error('LLM call failed:', error);
}

Multi-Provider Example
// OpenAI
const openaiResponse = await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
callFn: () => openai.chat.completions.create({...}),
});
// Anthropic
const anthropicResponse = await tracer.trace({
provider: LLMProvider.ANTHROPIC,
model: 'claude-3-opus-20240229',
callFn: () => anthropic.messages.create({...}),
});
// Google
const googleResponse = await tracer.trace({
provider: LLMProvider.GOOGLE,
model: 'gemini-pro',
callFn: () => google.generateContent({...}),
});

Cloud Metadata & Identity
Every trace now includes a canonical metadata payload under metadata.infra with schema_version, cloud provider/region, runtime details, and workload identity. You can merge overrides or enforce privacy policies globally:
const tracer = new MCPTracer({
endpoint: process.env.MCP_OBS_ENDPOINT!,
source: 'payments-api',
metadata: {
overrides: {
service: {
name: 'payments-api',
environment: process.env.RUNTIME_ENV ?? 'production',
version: process.env.GIT_SHA,
},
workload: {
deployment: process.env.DEPLOYMENT_ID,
commit_sha: process.env.GIT_SHA,
build_number: process.env.BUILD_ID,
},
},
allowlist: ['cloud.provider', 'cloud.region', 'service.*', 'workload.deployment'],
redaction: {
redactInstanceIds: true,
fields: ['workload.commit_sha'],
},
tags: {
'team.name': 'observability',
},
},
});

- Automatic detectors collect AWS (EC2/ECS/EKS/Lambda), GCP (GCE/GKE/Cloud Run), Azure (VM/App Service/Functions), Vercel, and container metadata with graceful timeouts.
- metadata.infra is versioned for backwards-compatible evolution and can be overridden via config or environment variables (MCP_OBS_METADATA_SERVICE_NAME, MCP_OBS_METADATA_ALLOWLIST, etc.).
- Privacy controls include redaction of hostnames/instance IDs, regex-based scrubbing, and allow/deny lists for compliance (SOC 2, GDPR, GxP).
Best Practices
- Reuse tracer instances - Create one tracer per application/service (see the sketch after this list)
- Use session IDs - Group related calls for better insights
- Add meaningful tags - Make traces searchable and filterable
- Set source identifier - Use unique names per service
- Handle errors gracefully - Traces are recorded even on failures
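A shared module keeps one tracer per service and the source identifier consistent (a minimal sketch; the file name and env-var wiring are illustrative):

// tracer.ts – create once, import everywhere
import { MCPTracer } from 'mcp-obs-sdk';

export const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: process.env.MCP_OBS_SOURCE ?? 'my-app',
  apiKey: process.env.MCP_OBS_API_KEY,
  defaultTags: { environment: process.env.MCP_OBS_ENVIRONMENT ?? 'development' },
});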
Integration with Popular Frameworks
Express.js
import express from 'express';
import OpenAI from 'openai';
import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';

const app = express();
app.use(express.json()); // required so req.body is populated
const openai = new OpenAI();
const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'express-api',
});
app.post('/chat', async (req, res) => {
const { message } = req.body;
const { response } = await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
prompt: message,
    sessionId: req.session.id, // assumes express-session (or similar) middleware
tags: { endpoint: '/chat' },
callFn: () => openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: message }],
}),
});
res.json({ response });
});

Next.js API Route
import OpenAI from 'openai';
import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';
import { NextRequest, NextResponse } from 'next/server';

const openai = new OpenAI();
const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'nextjs-app',
});
export async function POST(req: NextRequest) {
const { message } = await req.json();
const { response } = await tracer.trace({
provider: LLMProvider.OPENAI,
model: 'gpt-4',
prompt: message,
callFn: () => openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: message }],
}),
});
return NextResponse.json({ response });
}

Environment Variables
# .env
MCP_OBS_ENDPOINT=https://your-ingestion-url.com/v1/traces
MCP_OBS_API_KEY=your-api-key-here
MCP_OBS_SOURCE=my-app

Dashboard
View your traces in the MCP Observability Dashboard:
- Real-time metrics and analytics
- Request volume and latency charts
- Cost tracking by provider
- Error rate monitoring
- Session timelines
- Search and filter traces
Troubleshooting
"Cannot read properties of null (reading 'matches')" when installing
If you encounter this error when running npm install mcp-obs-sdk in a monorepo:
npm error Cannot read properties of null (reading 'matches')

Cause: You're trying to install the SDK inside a pnpm/yarn workspace directory that has workspace configuration files.
Solution: Install in a clean project directory outside the monorepo:
# Create a new directory
mkdir my-app && cd my-app
# Initialize a new project
npm init -y
# Install the SDK
npm install mcp-obs-sdk

Or if you need to test within the monorepo, use pnpm:
pnpm add mcp-obs-sdk

SDK imports successfully but configuration doesn't work
Make sure you're passing configuration to the MCPTracer constructor:
// ❌ Wrong
const tracer = new MCPTracer();
// ✅ Correct
const tracer = new MCPTracer({
endpoint: 'https://your-api.com/v1/traces',
source: 'my-app',
});

Contributing
Contributions are welcome! Please see CONTRIBUTING.md for details.
License
MIT
