MCP Observability SDK

Official SDK for instrumenting your applications with MCP (Model Context Protocol) observability.

Instrument your app in 5 steps

  1. Create a tenant and token (via the dashboard's API Tokens page or pnpm --filter @mcp-obs/api run bootstrap:tenant) and note the tenant_id and ingestion token.
  2. Set env vars in your app (see the .env sketch after this list): MCP_OBS_ENDPOINT=<ingestion-url>, MCP_OBS_API_KEY=<token>, MCP_OBS_SOURCE=<your-service>, MCP_OBS_TENANT=<tenant_id>, MCP_OBS_ENVIRONMENT=production|staging|development.
  3. Install the SDK: pnpm add mcp-obs-sdk (or use npm/yarn).
  4. Wrap LLM calls with MCPTracer.trace or wrapLLMCall, and call await client.flush() for instant visibility.
  5. Verify in the dashboard: traces appear in seconds. Charts populate once the metrics worker is consuming the trace-events stream (already live on hosted; for self-hosting, start it alongside the ingestion worker via pnpm --filter @mcp-obs/metrics dev).
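
For step 2, a minimal .env sketch using the five variables listed above (the values are placeholders to swap for your own tenant and token):

# .env (placeholders)
MCP_OBS_ENDPOINT=https://your-ingestion-url.com/v1/traces
MCP_OBS_API_KEY=<ingestion-token>
MCP_OBS_SOURCE=my-service
MCP_OBS_TENANT=<tenant_id>
MCP_OBS_ENVIRONMENT=production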

Installation

npm install mcp-obs-sdk

Quick Start

Option 1: CLI Setup (Recommended)

Let the CLI install the SDK, detect your framework, and wire up tracing/logging boilerplate for you:

npx mcp-obs quickstart
# or with pnpm
pnpm dlx mcp-obs quickstart

During the quickstart the CLI will:

  • 📦 Install mcp-obs-sdk (using npm / pnpm / yarn automatically)
  • 🧭 Detect your framework + LLM provider
  • 🧰 Drop in tracing + logging middleware
  • 🔑 Configure .env with endpoint/API key placeholders
  • 🧪 Verify connectivity via npx mcp-obs health

Prefer to run the steps yourself? The equivalent commands:

# Install SDK
npm install mcp-obs-sdk

# Auto-detect your environment
npx mcp-obs detect

# Initialize with interactive setup
npx mcp-obs init

# Verify setup
npx mcp-obs health

The CLI will:

  • ✅ Auto-detect your framework (Express, Next.js, Fastify, etc.)
  • ✅ Auto-detect LLM providers (OpenAI, Anthropic, etc.)
  • ✅ Generate instrumentation code
  • ✅ Create environment configuration
  • ✅ Set up middleware

See CLI.md for complete CLI documentation.

Option 2: Manual Setup

import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';

const tracer = new MCPTracer({
  endpoint: 'https://your-ingestion-url.com/v1/traces',
  source: 'my-app',
  defaultTags: { service: 'my-app', environment: 'production' },
});

// Wrap your LLM calls
const { response, traceId } = await tracer.trace({
  provider: LLMProvider.OPENAI,
  model: 'gpt-4o-mini',
  prompt: 'Hello, world!',
  callFn: async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: 'Hello, world!' }],
    });
  },
});

console.log('Trace ID:', traceId);
console.log('Response:', response);

Capturing Logs, Session Events, and Tool Runs

The SDK ships the emitters you need for holistic observability—no extra packages required:

import { initializeObservability } from 'mcp-obs-sdk';

const client = initializeObservability({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  sourceMCP: 'my-app',
  enableLogging: true,
});

// Structured application logs (ships via `/v1/logs`)
client.log({
  level: 'info',
  message: 'LLM request queued',
  tags: { feature: 'summaries', 'session.id': 'session-42' },
  attributes: { job: 'daily-delta' },
});

// Session timeline events (tools, execution steps, server runs)
client.emitSessionEvent({
  session_id: 'session-42',
  event_type: 'tool_invocation',
  payload: {
    invocation_id: crypto.randomUUID(),
    name: 'calendar.lookup',
    status: 'success',
    started_at: new Date().toISOString(),
  },
});

// Tool analytics feed (powers the dashboard Tool Runs view)
client.recordToolRun({
  session_id: 'session-42',
  tool_name: 'salesforce.contact.search',
  status: 'completed',
});

initializeObservability wires the trace ingester and the log/session/tool run emitters, so once the SDK is installed you already have every transport needed for the dashboard’s Logs, Traces, Sessions, and Tool Runs tabs.
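
In short-lived processes (scripts, serverless handlers, CI jobs), events can still be buffered when the process exits. The SDK advertises graceful shutdown on SIGTERM/SIGINT, but you can also drain explicitly with the client's flush() from step 4 above; a minimal sketch:

import { initializeObservability } from 'mcp-obs-sdk';

const client = initializeObservability({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  sourceMCP: 'my-app',
});

async function main() {
  // ... emit traces, logs, session events, tool runs ...
}

main().finally(async () => {
  // Drain any buffered events before the process exits
  await client.flush();
});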

CRM Revenue Context (NEW!)

The SDK now keeps CRM context close to every trace, log, session event, and tool run so dashboards can pivot on rep/deal/pipeline questions without extra plumbing.

import { initializeObservability, MCPTracer, LLMProvider } from 'mcp-obs-sdk';

const client = initializeObservability({ /* ... */ });
const tracer = new MCPTracer({ client, source: 'rev-agent' });

// 1) Seed org-wide defaults or pull from /v1/crm/config
client.setDefaultCRMContext({
  rep: { rep_id: 'rep-ashley', name: 'Ashley Gomez' },
  deal: { pipeline_id: 'enterprise', stage: 'Proposal' },
});

// 2) Teach the SDK about your GTM pipelines (probabilities, stages, etc.)
client.registerCrmPipelines([
  {
    pipelineId: 'enterprise',
    name: 'Enterprise New Biz',
    stages: [
      { name: 'Discovery', sequence: 1, probability: 0.2 },
      { name: 'Proposal', sequence: 3, probability: 0.55 },
    ],
  },
]);

// 3) Attach CRM context to a conversation once instead of on every trace
client.setSessionCRMContext('session-42', {
  rep: { rep_id: 'rep-ashley', email: '[email protected]' },
  deal: { opportunity_id: 'opp-9821', stage: 'Proposal' },
});

// 4) All traces/session events/tool runs emitted inside this session inherit CRM metadata
await tracer.trace({
  sessionId: 'session-42',
  provider: LLMProvider.OPENAI,
  model: 'gpt-4o-mini',
  prompt: 'Summarize latest forecast risk notes',
});

💡 RevOps can update the same defaults and pipeline catalog without code under Settings → CRM in the dashboard (backed by the /v1/crm/config API). The SDK automatically hydrates those definitions at startup if you load them from the API.

🪄 initializeObservability() now attempts to fetch /v1/crm/config on boot (using your SDK API key or Supabase auth) so the defaults/pipelines managed in the dashboard stay synced without manual glue. You can also call obs.refreshCrmConfigFromServer() at runtime to pick up changes immediately.
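
If dashboard-managed settings change while a long-lived process is running, you can re-sync on your own schedule with the refreshCrmConfigFromServer() call mentioned above. A sketch, assuming the call returns a Promise (the interval and error handling are illustrative):

// Re-pull dashboard-managed CRM defaults/pipelines every 5 minutes (interval is arbitrary)
setInterval(() => {
  obs.refreshCrmConfigFromServer().catch((err) => {
    // Keep the last-known config if a refresh fails
    console.warn('CRM config refresh failed', err);
  });
}, 5 * 60 * 1000);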

MCP Server Auto-Instrumentation

Wrap any McpServer once and the SDK will stream spans, logs, session events, and tool runs for every request (tools, prompts, fallbacks, etc.). The CLI’s init/quickstart commands scaffold this for you via .mcp-obs/instrument.ts.

// .mcp-obs/instrument.ts (generated)
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp';
import { initializeObservability, LLMProvider } from 'mcp-obs-sdk';
import { instrumentMcpServer } from 'mcp-obs-sdk/integrations/mcp';

export const obs = initializeObservability({ /* ... */ });

export function attachObservability(server: McpServer) {
  instrumentMcpServer(server, {
    client: obs,
    provider: LLMProvider.OPENAI,
    model: 'gpt-4o-mini',
    sessionResolver: (request, extra) => extra?.sessionId ?? request?.session_id,
    crmContextResolver: (request) => lookupCrmContext(request), // optional
  });
  return server;
}

Use it wherever you create servers:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp';
import { attachObservability } from './.mcp-obs/instrument';

const server = attachObservability(
  new McpServer({
    name: 'rev-assistant',
    version: '1.0.0',
  })
);

instrumentMcpServer automatically:

  • traces every request/tool call with model/provider metadata
  • emits structured logs at start/finish/error (with trace/session correlation)
  • publishes session timeline events (start/finish/error tool invocations)
  • records tool runs for the dashboard’s Agent Analytics view

Features

Core Tracing

  • 🚀 Zero-config tracing - Wrap any LLM call with automatic instrumentation
  • 📊 Multi-provider support - OpenAI, Anthropic, Google, Mistral, and more
  • 💰 Cost tracking - Automatic token counting and cost estimation
  • ⚡ Performance monitoring - Latency tracking and error detection
  • 🏷️ Custom tagging - Add metadata and tags to your traces
  • 🔄 Session tracking - Group related calls together
  • 🎯 Type-safe - Full TypeScript support

Enterprise Features (NEW!)

  • 🔄 Retry Logic - Exponential backoff with jitter for transient failures
  • 🛡️ Circuit Breaker - Prevent cascading failures with automatic recovery
  • 🗜️ Gzip Compression - 70-90% bandwidth reduction automatically
  • 💾 Disk Buffering - Offline resilience with automatic replay
  • 🔗 W3C Trace Context - Distributed tracing across services
  • 📤 OTLP Export - OpenTelemetry compatibility (Jaeger, Zipkin, etc.)
  • 🔒 PII Sanitization - GDPR/HIPAA compliance with 20+ patterns
  • 🌟 Graceful Shutdown - Automatic flush on SIGTERM/SIGINT
  • ☁️ Cloud Metadata - Canonical infra/runtime metadata with privacy & compliance controls

📖 See ENTERPRISE_FEATURES.md for detailed documentation

Runtime Context & Auto-Tuning

  • 🔍 Runtime Fingerprinting – Automatically detects AWS Lambda, Vercel Functions/Edge, Cloudflare Workers, Netlify Functions, Google Cloud Functions/Run, Azure Functions, Docker/Kubernetes, Fly.io, GitHub Actions, GitLab CI, CircleCI, and local environments.
  • 🧠 RuntimeContext API – Call getRuntimeContext() (or client.getRuntimeContext()) to read the normalized provider/platform/mode, filesystem hints, CI metadata, recommended batch/flush/disk-buffer/retry tuning, and feature flags.
  • ⚙️ Behavior Switching – The SDK auto-adjusts batching, flush cadence, retries, and disk buffering for serverless/edge runtimes, containers, and CI; also surfaces ingestion endpoint hints (https://<region>.ingest.mcp-obs.com).
  • 🔌 Custom Detectors – Use registerRuntimeDetector() to plug in proprietary platforms (a sketch follows the example below).

import { getRuntimeContext, initializeObservability } from 'mcp-obs-sdk';

const runtime = getRuntimeContext();

console.log(runtime.provider, runtime.mode, runtime.featureFlags);

const client = initializeObservability({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  sourceMCP: 'my-mcp',
});

if (client.getRuntimeFeatureFlags()['runtime.edge']) {
  console.log('Using fast flush mode for edge runtimes');
}
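
The detector contract isn't documented in this README, so the shape below is a hypothetical illustration of registerRuntimeDetector(): a function that inspects the environment and returns runtime hints, or nothing to defer to the built-in detectors.

import { registerRuntimeDetector } from 'mcp-obs-sdk';

// Hypothetical detector shape (assumption): return runtime hints for a
// proprietary platform, or undefined to let the built-in detectors decide.
registerRuntimeDetector(() => {
  if (process.env.ACME_PAAS_REGION) {
    return {
      provider: 'acme-paas',                 // hypothetical platform name
      mode: 'serverless',
      region: process.env.ACME_PAAS_REGION,
    };
  }
  return undefined;
});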

Configuration Options

interface MCPTracerConfig {
  endpoint: string;              // Ingestion endpoint URL
  source: string;                // Source identifier for your app
  apiKey?: string;               // Optional API key for authentication
  sessionId?: string;            // Optional session ID
  tenantId?: string;             // Optional tenant ID for multi-tenancy
  defaultTags?: Record<string, string>; // Default tags for all traces
  timeout?: number;              // Request timeout in ms (default: 5000)
  retries?: number;              // Number of retries (default: 3)
  enableLogging?: boolean;       // Enable debug logging (default: false)
}
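
Putting the options together, a tracer tuned for an unreliable network might look like this (values are illustrative, using only the fields from the interface above):

const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'checkout-service',
  apiKey: process.env.MCP_OBS_API_KEY,
  tenantId: process.env.MCP_OBS_TENANT,
  defaultTags: { environment: 'production' },
  timeout: 10_000, // allow slower ingestion responses than the 5000 ms default
  retries: 5,      // retry transient failures a few more times than the default 3
  enableLogging: process.env.NODE_ENV !== 'production', // debug logs outside prod
});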

Supported Providers

enum LLMProvider {
  OPENAI = 'openai',
  ANTHROPIC = 'anthropic',
  GOOGLE = 'google',
  MISTRAL = 'mistral',
  COHERE = 'cohere',
  HUGGINGFACE = 'huggingface',
  REPLICATE = 'replicate',
  CUSTOM = 'custom',
}
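
For a model behind your own gateway, LLMProvider.CUSTOM keeps tracing available without a first-class integration. A sketch (callMyGateway is a placeholder for your own client, not part of the SDK):

const { response, traceId } = await tracer.trace({
  provider: LLMProvider.CUSTOM,
  model: 'in-house-llm-v2', // free-form label that shows up in your dashboards
  prompt: 'Classify this support ticket',
  callFn: () => callMyGateway('Classify this support ticket'), // placeholder
});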

Advanced Usage

Session Tracking

const sessionId = 'user-session-123';

await tracer.trace({
  provider: LLMProvider.OPENAI,
  model: 'gpt-4',
  prompt: 'First message',
  sessionId,
  callFn: () => openai.chat.completions.create({...}),
});

await tracer.trace({
  provider: LLMProvider.OPENAI,
  model: 'gpt-4',
  prompt: 'Follow-up message',
  sessionId, // Same session
  callFn: () => openai.chat.completions.create({...}),
});

Custom Tags and Metadata

await tracer.trace({
  provider: LLMProvider.OPENAI,
  model: 'gpt-4',
  prompt: 'Analyze this document',
  tags: {
    feature: 'document-analysis',
    user: 'user-123',
    priority: 'high',
  },
  metadata: {
    documentId: 'doc-456',
    pageCount: 10,
  },
  callFn: () => openai.chat.completions.create({...}),
});

Error Handling

try {
  const { response } = await tracer.trace({
    provider: LLMProvider.OPENAI,
    model: 'gpt-4',
    prompt: 'Hello',
    callFn: () => openai.chat.completions.create({...}),
  });
} catch (error) {
  // The trace is automatically recorded with error status
  console.error('LLM call failed:', error);
}

Multi-Provider Example

// OpenAI
const openaiResponse = await tracer.trace({
  provider: LLMProvider.OPENAI,
  model: 'gpt-4',
  callFn: () => openai.chat.completions.create({...}),
});

// Anthropic
const anthropicResponse = await tracer.trace({
  provider: LLMProvider.ANTHROPIC,
  model: 'claude-3-opus-20240229',
  callFn: () => anthropic.messages.create({...}),
});

// Google
const googleResponse = await tracer.trace({
  provider: LLMProvider.GOOGLE,
  model: 'gemini-pro',
  callFn: () => google.generateContent({...}),
});

Cloud Metadata & Identity

Every trace now includes a canonical metadata payload under metadata.infra with schema_version, cloud provider/region, runtime details, and workload identity. You can merge overrides or enforce privacy policies globally:

const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'payments-api',
  metadata: {
    overrides: {
      service: {
        name: 'payments-api',
        environment: process.env.RUNTIME_ENV ?? 'production',
        version: process.env.GIT_SHA,
      },
      workload: {
        deployment: process.env.DEPLOYMENT_ID,
        commit_sha: process.env.GIT_SHA,
        build_number: process.env.BUILD_ID,
      },
    },
    allowlist: ['cloud.provider', 'cloud.region', 'service.*', 'workload.deployment'],
    redaction: {
      redactInstanceIds: true,
      fields: ['workload.commit_sha'],
    },
    tags: {
      'team.name': 'observability',
    },
  },
});

  • Automatic detectors collect AWS (EC2/ECS/EKS/Lambda), GCP (GCE/GKE/Cloud Run), Azure (VM/App Service/Functions), Vercel, and container metadata with graceful timeouts.
  • metadata.infra is versioned for backwards-compatible evolution and can be overridden via config or environment variables (MCP_OBS_METADATA_SERVICE_NAME, MCP_OBS_METADATA_ALLOWLIST, etc.).
  • Privacy controls include redaction of hostnames/instance IDs, regex-based scrubbing, and allow/deny lists for compliance (SOC2, GDPR, GxP).
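
For example, the service name and allowlist can come from the environment instead of code (variable names from the list above; the comma-separated allowlist format is an assumption):

# .env (illustrative; allowlist format is assumed)
MCP_OBS_METADATA_SERVICE_NAME=payments-api
MCP_OBS_METADATA_ALLOWLIST=cloud.provider,cloud.region,service.*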

Best Practices

  1. Reuse tracer instances - Create one tracer per application/service (see the sketch after this list)
  2. Use session IDs - Group related calls for better insights
  3. Add meaningful tags - Make traces searchable and filterable
  4. Set source identifier - Use unique names per service
  5. Handle errors gracefully - Traces are recorded even on failures
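
For practice 1, a common pattern is a small module that constructs the tracer once and exports it everywhere (a sketch, not an official helper):

// tracer.ts - construct the tracer once at module load
import { MCPTracer } from 'mcp-obs-sdk';

export const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'my-service', // unique per service (practice 4)
});

// elsewhere in the app:
// import { tracer } from './tracer';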

Integration with Popular Frameworks

Express.js

import express from 'express';
import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';

const app = express();
const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'express-api',
});

app.post('/chat', async (req, res) => {
  const { message } = req.body;

  const { response } = await tracer.trace({
    provider: LLMProvider.OPENAI,
    model: 'gpt-4',
    prompt: message,
    sessionId: req.session.id,
    tags: { endpoint: '/chat' },
    callFn: () => openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: message }],
    }),
  });

  res.json({ response });
});

Next.js API Route

import { MCPTracer, LLMProvider } from 'mcp-obs-sdk';
import { NextRequest, NextResponse } from 'next/server';

const tracer = new MCPTracer({
  endpoint: process.env.MCP_OBS_ENDPOINT!,
  source: 'nextjs-app',
});

export async function POST(req: NextRequest) {
  const { message } = await req.json();

  const { response } = await tracer.trace({
    provider: LLMProvider.OPENAI,
    model: 'gpt-4',
    prompt: message,
    callFn: () => openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: message }],
    }),
  });

  return NextResponse.json({ response });
}

Environment Variables

# .env
MCP_OBS_ENDPOINT=https://your-ingestion-url.com/v1/traces
MCP_OBS_API_KEY=your-api-key-here
MCP_OBS_SOURCE=my-app

Dashboard

View your traces in the MCP Observability Dashboard:

  • Real-time metrics and analytics
  • Request volume and latency charts
  • Cost tracking by provider
  • Error rate monitoring
  • Session timelines
  • Search and filter traces

Troubleshooting

"Cannot read properties of null (reading 'matches')" when installing

If you encounter this error when running npm install mcp-obs-sdk in a monorepo:

npm error Cannot read properties of null (reading 'matches')

Cause: You're trying to install the SDK inside a pnpm/yarn workspace directory that has workspace configuration files.

Solution: Install in a clean project directory outside the monorepo:

# Create a new directory
mkdir my-app && cd my-app

# Initialize a new project
npm init -y

# Install the SDK
npm install mcp-obs-sdk

Or if you need to test within the monorepo, use pnpm:

pnpm add mcp-obs-sdk

SDK imports successfully but configuration doesn't work

Make sure you're passing configuration to the MCPTracer constructor:

// ❌ Wrong
const tracer = new MCPTracer();

// ✅ Correct
const tracer = new MCPTracer({
  endpoint: 'https://your-api.com/v1/traces',
  source: 'my-app',
});

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for details.

License

MIT
