
autotel

v2.18.0

Published

Write Once, Observe Anywhere

Readme

🔭 autotel


Write once, observe everywhere. Instrument your Node.js code a single time, keep the DX you love, and stream traces, metrics, logs, and product events to any observability stack without vendor lock-in.

  • Drop-in DX – one init() and ergonomic helpers like trace(), span(), withTracing(), decorators, and batch instrumentation.
  • Platform freedom – OTLP-first design plus subscribers for PostHog, Mixpanel, Amplitude, and anything else via custom exporters/readers.
  • Production hardening – adaptive sampling (10% baseline, 100% errors/slow paths), rate limiting, circuit breakers, payload validation, and automatic sensitive-field redaction.
  • Auto enrichment – service metadata, deployment info, and AsyncLocalStorage-powered correlation IDs automatically flow into spans, metrics, logs, and events.

Raw OpenTelemetry is verbose, and vendor SDKs create lock-in. Autotel gives you the best parts of both: clean ergonomics and total ownership of your telemetry.

Migrating from OpenTelemetry?

Migration Guide - Pattern-by-pattern migration walkthrough with side-by-side comparisons and deployment checklist.

Replace NODE_OPTIONS and 30+ lines of SDK boilerplate with init(), and wrap functions with trace() instead of managing span.start()/span.end() by hand.
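
As a rough before/after sketch (the raw-OpenTelemetry side uses the standard @opentelemetry/api calls; db is an illustrative placeholder, as in the examples below):

// Before: manual span lifecycle with the OpenTelemetry API
import { trace as otelTrace, SpanStatusCode } from '@opentelemetry/api';

const tracer = otelTrace.getTracer('checkout-api');

export async function createUserManual(data: { email: string }) {
  return tracer.startActiveSpan('createUser', async (span) => {
    try {
      return await db.users.insert(data);
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

// After: the same behavior with autotel
import { trace } from 'autotel';

export const createUser = trace(async function createUser(data: { email: string }) {
  return db.users.insert(data);
});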



Why Autotel

| Challenge | With autotel |
| --- | --- |
| Writing raw OpenTelemetry spans/metrics takes dozens of lines and manual lifecycle management. | Wrap any function in trace() or span() and get automatic span lifecycle, error capture, attributes, and adaptive sampling. |
| Vendor SDKs simplify setup but trap your data in a single platform. | Autotel is OTLP-native and works with Grafana Cloud, Datadog, New Relic, Tempo, Honeycomb, Elasticsearch, or your own collector. |
| Teams need both observability and product events. | Ship technical telemetry and funnel/behavior events through the same API with contextual enrichment. |
| Production readiness requires redaction, rate limiting, and circuit breakers. | Those guardrails are on by default so you can safely enable telemetry everywhere. |

Quick Start

Want to follow along in code? This repo ships with apps/example-basic (which mirrors the steps below) and apps/example-http for an Express server; you can run either with pnpm start after pnpm install && pnpm build at the root.

1. Install

npm install autotel
# or
pnpm add autotel

2. Initialize once at startup

import { init } from 'autotel';

init({
  service: 'checkout-api',
  environment: process.env.NODE_ENV,
});

Defaults:

  • OTLP endpoint: process.env.OTLP_ENDPOINT || http://localhost:4318
  • Metrics: on in every environment
  • Sampler: adaptive (10% baseline, 100% for errors/slow spans)
  • Version: auto-detected from package.json
  • Events auto-flush when the root span finishes

3. Instrument code with trace()

import { trace } from 'autotel';

export const createUser = trace(async function createUser(
  data: CreateUserData,
) {
  const user = await db.users.insert(data);
  return user;
});
  • Named function expressions automatically become span names (code.function).
  • Errors are recorded, spans are ended, and status is set automatically.

4. See the value everywhere

import { init, track } from 'autotel';

init({
  service: 'checkout-api',
  endpoint: 'https://otlp-gateway-prod.grafana.net/otlp',
  subscribers: [new PostHogSubscriber({ apiKey: process.env.POSTHOG_KEY! })],
});

export const processOrder = trace(async function processOrder(order) {
  track('order.completed', { amount: order.total });
  return charge(order);
});

Every span, metric, log line, and event includes traceId, spanId, operation.name, service.version, and deployment.environment automatically.

Choose Any Destination

import { init } from 'autotel';

init({
  service: 'my-app',
  // Grafana / Tempo / OTLP collector
  endpoint: 'https://otlp-gateway-prod.grafana.net/otlp',
});

init({
  service: 'my-app',
  // Datadog (traces + metrics + logs via OTLP)
  endpoint: 'https://otlp.datadoghq.com',
  headers: 'dd-api-key=...',
});

init({
  service: 'my-app',
  // Honeycomb (gRPC protocol)
  protocol: 'grpc',
  endpoint: 'api.honeycomb.io:443',
  headers: {
    'x-honeycomb-team': process.env.HONEYCOMB_API_KEY!,
  },
});

init({
  service: 'my-app',
  // Custom pipeline with your own exporters/readers
  spanProcessor: new BatchSpanProcessor(
    new JaegerExporter({ endpoint: 'http://otel:14268/api/traces' }),
  ),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: 'https://metrics.example.com/v1/metrics',
    }),
  }),
  logRecordProcessors: [
    new BatchLogRecordProcessor(
      new OTLPLogExporter({ url: 'https://logs.example.com/v1/logs' }),
    ),
  ],
  instrumentations: [new HttpInstrumentation()],
});

init({
  service: 'my-app',
  // Product events subscribers (ship alongside OTLP)
  subscribers: [
    new PostHogSubscriber({ apiKey: process.env.POSTHOG_KEY! }),
    new MixpanelSubscriber({ projectToken: process.env.MIXPANEL_TOKEN! }),
  ],
});

init({
  service: 'my-app',
  // OpenLLMetry integration for LLM observability
  openllmetry: {
    enabled: true,
    options: {
      disableBatch: process.env.NODE_ENV !== 'production',
      apiKey: process.env.TRACELOOP_API_KEY,
    },
  },
});

Autotel never owns your data; it's a thin layer over OpenTelemetry with optional adapters.

LLM Observability with OpenLLMetry

Autotel integrates seamlessly with OpenLLMetry to provide comprehensive observability for LLM applications. OpenLLMetry automatically instruments LLM providers (OpenAI, Anthropic, etc.), vector databases, and frameworks (LangChain, LlamaIndex, etc.).

Installation

Install the OpenLLMetry SDK as an optional peer dependency:

pnpm add @traceloop/node-server-sdk
# or
npm install @traceloop/node-server-sdk

Usage

Enable OpenLLMetry in your autotel configuration:

import { init } from 'autotel';

init({
  service: 'my-llm-app',
  endpoint: process.env.OTLP_ENDPOINT,
  openllmetry: {
    enabled: true,
    options: {
      // Disable batching in development for immediate traces
      disableBatch: process.env.NODE_ENV !== 'production',
      // Optional: Traceloop API key if using Traceloop backend
      apiKey: process.env.TRACELOOP_API_KEY,
    },
  },
});

OpenLLMetry will automatically:

  • Instrument LLM calls (OpenAI, Anthropic, Cohere, etc.)
  • Track vector database operations (Pinecone, Chroma, Qdrant, etc.)
  • Monitor LLM frameworks (LangChain, LlamaIndex, LangGraph, etc.)
  • Reuse autotel's OpenTelemetry tracer provider for unified traces

All LLM spans will appear alongside your application traces in your observability backend.
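
For example, a minimal sketch (assuming the official openai client package; the model name is illustrative):

import OpenAI from 'openai';
import { trace } from 'autotel';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// With openllmetry.enabled, the chat completion call below is auto-instrumented;
// its LLM spans nest under the summarizeTicket span created by trace().
export const summarizeTicket = trace(async function summarizeTicket(text: string) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative
    messages: [{ role: 'user', content: `Summarize this support ticket:\n${text}` }],
  });
  return completion.choices[0]?.message?.content ?? '';
});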

AI Workflow Patterns: See AI/LLM Workflow Documentation for comprehensive patterns including:

  • Multi-agent workflows (orchestration and handoffs)
  • RAG pipelines (embeddings, search, generation)
  • Streaming responses
  • Evaluation loops
  • Working examples in apps/example-ai-agent

Core Building Blocks

trace()

Wrap any sync/async function to create spans automatically.

import { trace } from 'autotel';

export const updateUser = trace(async function updateUser(
  id: string,
  data: UserInput,
) {
  return db.users.update(id, data);
});

// Explicit name (useful for anonymous/arrow functions)
export const deleteUser = trace('user.delete', async (id: string) => {
  return db.users.delete(id);
});

// Factory form exposes the `ctx` helper (see below)
export const createOrder = trace((ctx) => async (order: Order) => {
  ctx.setAttribute('order.id', order.id);
  return submit(order);
});

// Immediate execution - wraps and executes instantly (for middleware/wrappers)
function timed<T>(operation: string, fn: () => Promise<T>): Promise<T> {
  return trace(operation, async (ctx) => {
    ctx.setAttribute('operation', operation);
    return await fn();
  });
}
// Executes immediately, returns Promise<T> directly

Two patterns supported:

  1. Factory pattern trace(ctx => (...args) => result) – Returns a wrapped function for reuse
  2. Immediate execution trace(ctx => result) – Executes once immediately, returns the result directly
  • Automatic span lifecycle (start, end, status, and error recording).
  • Function names feed operation.name, code.function, and events enrichment.
  • Works with promises, async/await, or sync functions (see the sketch below).
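
For instance, a synchronous function gets the same treatment (a minimal sketch):

import { trace } from 'autotel';

// The span ends when the function returns or throws - no await required.
export const parseConfig = trace(function parseConfig(raw: string) {
  return JSON.parse(raw) as Record<string, unknown>;
});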

span()

Create nested spans for individual code blocks without wrapping entire functions.

import { span, trace } from 'autotel';

export const rollDice = trace(async function rollDice(rolls: number) {
  const results: number[] = [];

  for (let i = 0; i < rolls; i++) {
    await span(
      { name: 'roll.once', attributes: { roll: i + 1 } },
      async (span) => {
        span.setAttribute('range', '1-6');
        span.addEvent('dice.rolled', { value: rollOnce() });
        results.push(rollOnce());
      },
    );
  }

  return results;
});

Nested spans automatically inherit context and correlation IDs.

Trace Context (ctx)

Every trace((ctx) => ...) factory receives a type-safe helper backed by AsyncLocalStorage.

export const createUser = trace((ctx) => async (input: CreateUserData) => {
  logger.info({ traceId: ctx.traceId }, 'Handling request');
  ctx.setAttributes({ 'user.id': input.id, 'user.plan': input.plan });

  try {
    const user = await db.users.create(input);
    ctx.setStatus({ code: SpanStatusCode.OK });
    return user;
  } catch (error) {
    ctx.recordException(error as Error);
    ctx.setStatus({
      code: SpanStatusCode.ERROR,
      message: 'Failed to create user',
    });
    throw error;
  }
});

Available helpers: traceId, spanId, correlationId, setAttribute, setAttributes, setStatus, recordException, getBaggage, setBaggage, deleteBaggage, getAllBaggage.

Baggage (Context Propagation)

Baggage allows you to propagate custom key-value pairs across distributed traces. Baggage is automatically included in HTTP headers when using injectTraceContext() from autotel/http.

import { trace, withBaggage } from 'autotel';
import { injectTraceContext } from 'autotel/http';

// Set baggage for downstream services
export const createOrder = trace((ctx) => async (order: Order) => {
  return await withBaggage({
    baggage: {
      'tenant.id': order.tenantId,
      'user.id': order.userId,
    },
    fn: async () => {
      // Baggage is available to all child spans and HTTP calls
      const tenantId = ctx.getBaggage('tenant.id');
      ctx.setAttribute('tenant.id', tenantId || 'unknown');

      // HTTP headers automatically include baggage
      const headers = injectTraceContext();
      await fetch('/api/charge', { headers, body: JSON.stringify(order) });
    },
  });
});

Typed Baggage (Optional):

For type-safe baggage operations, use defineBaggageSchema():

import { trace, defineBaggageSchema } from 'autotel';

type TenantBaggage = { tenantId: string; region?: string };
const tenantBaggage = defineBaggageSchema<TenantBaggage>('tenant');

export const handler = trace<TenantBaggage>((ctx) => async () => {
  // Type-safe get
  const tenant = tenantBaggage.get(ctx);
  if (tenant?.tenantId) {
    console.log('Tenant:', tenant.tenantId);
  }

  // Type-safe set with proper scoping
  return await tenantBaggage.with(ctx, { tenantId: 't1' }, async () => {
    // Baggage is available here and in child spans
  });
});

Automatic Baggage → Span Attributes:

Enable baggage: true in init() to automatically copy all baggage entries to span attributes, making them visible in trace UIs without manual ctx.setAttribute() calls:

import { init, trace, withBaggage } from 'autotel';

init({
  service: 'my-app',
  baggage: true, // Auto-copy baggage to span attributes
});

export const processOrder = trace((ctx) => async (order: Order) => {
  return await withBaggage({
    baggage: {
      'tenant.id': order.tenantId,
      'user.id': order.userId,
    },
    fn: async () => {
      // Span automatically has baggage.tenant.id and baggage.user.id attributes!
      // No need for: ctx.setAttribute('tenant.id', ctx.getBaggage('tenant.id'))
      await chargeCustomer(order);
    },
  });
});

Custom prefix:

init({
  service: 'my-app',
  baggage: 'ctx', // Creates ctx.tenant.id, ctx.user.id
  // Or use '' for no prefix: tenant.id, user.id
});

Extracting Baggage from Incoming Requests:

import { extractTraceContext, trace, context } from 'autotel';

// In Express middleware
app.use((req, res, next) => {
  const extractedContext = extractTraceContext(req.headers);
  context.with(extractedContext, () => {
    next();
  });
});

Key Points:

  • Typed baggage is completely optional - existing untyped baggage code continues to work without changes
  • baggage: true in init() eliminates manual attribute setting for baggage
  • Baggage values are strings (convert numbers/objects before setting - see the sketch after this list)
  • Never put PII in baggage - it propagates in HTTP headers across services!
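
A minimal sketch of that conversion (the order and flags objects are illustrative):

import { withBaggage } from 'autotel';

const order = { id: 'o-1', total: 4999 };
const flags = { betaCheckout: true };

// Baggage values must be strings, so serialize numbers/objects before setting them.
await withBaggage({
  baggage: {
    'order.total': String(order.total),     // number → string
    'feature.flags': JSON.stringify(flags), // object → string
  },
  fn: async () => {
    // child spans and outgoing HTTP calls see the serialized values
  },
});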

Reusable Middleware Helpers

  • withTracing(options) – create a preconfigured wrapper (service name, default attributes, skip rules).
  • instrument(object, options) – batch-wrap entire modules while skipping helpers or private functions.
import { withTracing, instrument } from 'autotel';

const traceFn = withTracing({ serviceName: 'user' });

export const create = traceFn((ctx) => async (payload) => {
  /* ... */
});
export const update = traceFn((ctx) => async (id, payload) => {
  /* ... */
});

export const repository = instrument(
  {
    createUser: async () => {
      /* ... */
    },
    updateUser: async () => {
      /* ... */
    },
    _internal: async () => {
      /* skipped */
    },
  },
  { serviceName: 'repository', skip: ['_internal'] },
);

Decorators (TypeScript 5+)

Prefer classes or NestJS-style services? Use the @Trace decorator.

import { Trace } from 'autotel/decorators';

class OrderService {
  @Trace('order.create', { withMetrics: true })
  async createOrder(data: OrderInput) {
    return db.orders.create(data);
  }

  // No arguments → method name becomes the span name
  @Trace()
  async processPayment(orderId: string) {
    return charge(orderId);
  }

  @Trace()
  async refund(orderId: string) {
    const ctx = (this as any).ctx;
    ctx.setAttribute('order.id', orderId);
    return refund(orderId);
  }
}

Decorators are optional; everything also works with plain functions.

Database Instrumentation

Turn on query tracing in one line.

import { instrumentDatabase } from 'autotel/db';

const db = drizzle(pool);

instrumentDatabase(db, {
  dbSystem: 'postgresql',
  database: 'myapp',
});

await db.select().from(users); // queries emit spans automatically

Type-Safe Attributes

Autotel provides type-safe attribute builders following OpenTelemetry semantic conventions. These helpers give you autocomplete, compile-time validation, and automatic PII redaction.

Pattern A: Key Builders

Build individual attributes with full autocomplete:

import { attrs, mergeAttrs } from 'autotel/attributes';

// Single attribute
ctx.setAttributes(attrs.user.id('user-123'));
// → { 'user.id': 'user-123' }

ctx.setAttributes(attrs.http.request.method('GET'));
// → { 'http.request.method': 'GET' }

ctx.setAttributes(attrs.db.client.system('postgresql'));
// → { 'db.system.name': 'postgresql' }

// Combine multiple attributes
ctx.setAttributes(
  mergeAttrs(
    attrs.user.id('user-123'),
    attrs.session.id('sess-456'),
    attrs.http.response.statusCode(200),
  ),
);

Pattern B: Object Builders

Pass an object to set multiple related attributes at once:

import { attrs } from 'autotel/attributes';

// User attributes
ctx.setAttributes(
  attrs.user.data({
    id: 'user-123',
    email: '[email protected]',
    roles: ['admin', 'editor'],
  }),
);
// → { 'user.id': 'user-123', 'user.email': '[email protected]', 'user.roles': ['admin', 'editor'] }

// HTTP server attributes
ctx.setAttributes(
  attrs.http.server({
    method: 'POST',
    route: '/api/users/:id',
    statusCode: 201,
  }),
);
// → { 'http.request.method': 'POST', 'http.route': '/api/users/:id', 'http.response.status_code': 201 }

// Database attributes
ctx.setAttributes(
  attrs.db.client.data({
    system: 'postgresql',
    name: 'myapp_db', // Maps to db.namespace
    operation: 'SELECT',
    collectionName: 'users',
  }),
);

Attachers (Signal Helpers)

Attachers know WHERE to attach attributes - they write to spans or resources and apply guardrails automatically:

import { setUser, httpServer, identify, dbClient } from 'autotel/attributes';

// Set user attributes with automatic PII redaction
export const handleRequest = trace((ctx) => async (req) => {
  setUser(ctx, {
    id: req.userId,
    email: req.userEmail, // Automatically redacted by default
  });

  // HTTP attributes + automatic span name update
  httpServer(ctx, {
    method: req.method,
    route: req.route,
    statusCode: 200,
  });
  // Span name becomes: "HTTP GET /api/users"
});

// Bundle user, session, and device attributes together
export const identifyUser = trace((ctx) => async (data) => {
  identify(ctx, {
    user: { id: data.userId, name: data.userName },
    session: { id: data.sessionId },
    device: { id: data.deviceId, manufacturer: 'Apple' },
  });
});

// Database client attributes
export const queryUsers = trace((ctx) => async () => {
  dbClient(ctx, {
    system: 'postgresql',
    operation: 'SELECT',
    collectionName: 'users',
  });
  return await db.query('SELECT * FROM users');
});

PII Guardrails

safeSetAttributes() applies automatic PII detection and configurable guardrails:

import { safeSetAttributes, attrs } from 'autotel/attributes';

export const processUser = trace((ctx) => async (user) => {
  // Default: PII is redacted automatically
  safeSetAttributes(ctx, attrs.user.data({ email: '[email protected]' }));
  // → { 'user.email': '[REDACTED]' }

  // Allow PII (use with caution)
  safeSetAttributes(ctx, attrs.user.data({ email: '[email protected]' }), {
    guardrails: { pii: 'allow' },
  });
  // → { 'user.email': '[email protected]' }

  // Hash PII for correlation without exposing raw values
  safeSetAttributes(ctx, attrs.user.data({ email: '[email protected]' }), {
    guardrails: { pii: 'hash' },
  });
  // → { 'user.email': 'hash_a1b2c3d4...' }

  // Truncate long values
  safeSetAttributes(ctx, attrs.user.data({ id: 'a'.repeat(500) }), {
    guardrails: { maxLength: 255 },
  });
  // → { 'user.id': 'aaaa...aaa...' } (truncated with ellipsis)

  // Warn on deprecated attributes
  safeSetAttributes(
    ctx,
    { 'http.method': 'GET' }, // Deprecated!
    { guardrails: { warnDeprecated: true } },
  );
  // Console: [autotel/attributes] Attribute "http.method" is deprecated. Use "http.request.method" instead.
});

Guardrail Options:

| Option | Values | Default | Description |
| --- | --- | --- | --- |
| pii | 'allow', 'redact', 'hash', 'block' | 'redact' | How to handle PII in attribute values |
| maxLength | number | 255 | Maximum string length before truncation |
| validateEnum | boolean | true | Normalize enum values (e.g., HTTP methods) |
| warnDeprecated | boolean | true | Log warnings for deprecated attributes |

Domain Helpers

Domain helpers bundle multiple attribute groups for common scenarios:

import { transaction } from 'autotel/attributes';

// Bundle HTTP request with user context
export const handleRequest = trace((ctx) => async (req) => {
  transaction(ctx, {
    user: { id: req.userId },
    session: { id: req.sessionId },
    method: req.method,
    route: req.route,
    statusCode: 200,
    clientIp: req.ip,
  });
  // Sets: user.id, session.id, http.request.method, http.route,
  //       http.response.status_code, network.peer.address
  // Also updates span name to "HTTP GET /api/users"
});

Available Attribute Domains

| Domain | Key Builders | Object Builder |
| --- | --- | --- |
| user | id, email, name, fullName, hash, roles | attrs.user.data() |
| session | id, previousId | attrs.session.data() |
| device | id, manufacturer, modelIdentifier, modelName | attrs.device.data() |
| http | request.*, response.*, route | attrs.http.server(), attrs.http.client() |
| db | client.system, client.operation, etc. | attrs.db.client.data() |
| service | name, instance, version | attrs.service.data() |
| network | peerAddress, peerPort, transport, etc. | attrs.network.data() |
| error | type, message, stackTrace, code | attrs.error.data() |
| exception | escaped, message, stackTrace, type | attrs.exception.data() |
| cloud | provider, accountId, region, etc. | attrs.cloud.data() |
| messaging | system, destination, operation, etc. | attrs.messaging.data() |
| genAI | system, requestModel, responseModel, etc. | - |
| rpc | system, service, method | - |
| graphql | document, operationName, operationType | - |

Resource Merging

For enriching OpenTelemetry Resources with service attributes (Resource.attributes is readonly), use mergeServiceResource:

import { mergeServiceResource } from 'autotel/attributes';
import { Resource } from '@opentelemetry/resources';

// Create enriched resource for custom SDK configurations
const baseResource = Resource.default();
const enrichedResource = mergeServiceResource(baseResource, {
  name: 'my-service',
  version: '1.0.0',
  instance: 'instance-1',
});

// Use with custom TracerProvider
const provider = new NodeTracerProvider({ resource: enrichedResource });

Event-Driven Architectures

Autotel provides first-class support for tracing message-based systems like Kafka, SQS, and RabbitMQ. The traceProducer and traceConsumer helpers automatically set semantic attributes, handle context propagation, and create proper span links.

Message Producers (Kafka, SQS, RabbitMQ)

Use traceProducer to wrap message publishing functions with automatic tracing:

import { traceProducer, type ProducerContext } from 'autotel';

// Kafka producer
export const publishUserEvent = traceProducer({
  system: 'kafka',
  destination: 'user-events',
  messageIdFrom: (args) => args[0].eventId, // Extract message ID from args
})((ctx) => async (event: UserEvent) => {
  // Get W3C trace headers to inject into message
  const headers = ctx.getTraceHeaders();

  await producer.send({
    topic: 'user-events',
    messages: [
      {
        key: event.userId,
        value: JSON.stringify(event),
        headers, // Trace context propagates to consumers
      },
    ],
  });
});

// SQS producer with custom attributes
export const publishOrder = traceProducer({
  system: 'sqs',
  destination: 'orders-queue',
  attributes: { 'custom.priority': 'high' },
})((ctx) => async (order: Order) => {
  ctx.setAttribute('order.total', order.total);

  await sqs.sendMessage({
    QueueUrl: QUEUE_URL,
    MessageBody: JSON.stringify(order),
    MessageAttributes: {
      traceparent: {
        DataType: 'String',
        StringValue: ctx.getTraceHeaders().traceparent,
      },
    },
  });
});

Automatic Span Attributes (OTel Semantic Conventions):

  • messaging.system - The messaging system (kafka, sqs, rabbitmq, etc.)
  • messaging.operation - Always "publish" for producers
  • messaging.destination.name - Topic/queue name
  • messaging.message.id - Extracted message ID (if configured)
  • messaging.kafka.destination.partition - Partition number (Kafka-specific)

Message Consumers

Use traceConsumer to wrap message handlers with automatic link extraction and DLQ support:

import { traceConsumer, extractLinksFromBatch } from 'autotel';

// Single message consumer
export const processUserEvent = traceConsumer({
  system: 'kafka',
  destination: 'user-events',
  consumerGroup: 'event-processor',
  headersFrom: (msg) => msg.headers, // Extract trace headers
})((ctx) => async (message: KafkaMessage) => {
  // Links to producer span are automatically created
  const event = JSON.parse(message.value);
  await processEvent(event);
});

// Batch consumer with automatic link extraction
export const processBatch = traceConsumer({
  system: 'kafka',
  destination: 'user-events',
  consumerGroup: 'batch-processor',
  batchMode: true, // Extract links from all messages
  headersFrom: (msg) => msg.headers,
})((ctx) => async (messages: KafkaMessage[]) => {
  // ctx.links contains SpanContext from each message's traceparent
  for (const msg of messages) {
    await processMessage(msg);
  }
});

// Consumer with DLQ handling
export const processWithDLQ = traceConsumer({
  system: 'sqs',
  destination: 'orders-queue',
  headersFrom: (msg) => msg.MessageAttributes,
})((ctx) => async (message: SQSMessage) => {
  try {
    await processOrder(JSON.parse(message.Body));
  } catch (error) {
    if (message.ApproximateReceiveCount > 3) {
      // Record DLQ routing
      ctx.recordDLQ('orders-dlq', error.message);
      throw error; // Let SQS move to DLQ
    }
    throw error; // Retry
  }
});

Consumer-Specific Attributes:

  • messaging.consumer.group - Consumer group name
  • messaging.batch.message_count - Batch size (if batch mode)
  • messaging.operation - "receive" or "process"

Consumer Lag Metrics

Track consumer lag for performance monitoring:

import { traceConsumer } from 'autotel';

export const processWithLag = traceConsumer({
  system: 'kafka',
  destination: 'events',
  consumerGroup: 'processor',
  lagMetrics: {
    getCurrentOffset: (msg) => Number(msg.offset),
    getEndOffset: async () => {
      const offsets = await admin.fetchTopicOffsets('events');
      return Number(offsets[0].high);
    },
    partition: 0,
  },
})((ctx) => async (message) => {
  // Lag attributes automatically added:
  // - messaging.kafka.consumer_lag
  // - messaging.kafka.message_offset
  await processMessage(message);
});

Custom Messaging System Adapters

For messaging systems not directly supported (NATS, Temporal, Cloudflare Queues, etc.), use pre-built adapters or create your own:

import { traceConsumer, traceProducer } from 'autotel/messaging';
import {
  natsAdapter,
  temporalAdapter,
  cloudflareQueuesAdapter,
  datadogContextExtractor,
  b3ContextExtractor,
} from 'autotel/messaging/adapters';

// NATS JetStream consumer with automatic attribute extraction
const processNatsMessage = traceConsumer({
  system: 'nats',
  destination: 'orders.created',
  consumerGroup: 'order-processor',
  ...natsAdapter.consumer, // Adds nats.subject, nats.stream, nats.consumer
})((ctx) => async (msg) => {
  await handleOrder(msg.data);
  msg.ack();
});

// Temporal activity with workflow context
const processActivity = traceConsumer({
  system: 'temporal',
  destination: 'order-activities',
  ...temporalAdapter.consumer, // Adds temporal.workflow_id, temporal.run_id, temporal.attempt
})((ctx) => async (info, input) => {
  return processOrder(input);
});

// Consume messages with Datadog trace context (non-W3C format)
const processFromDatadog = traceConsumer({
  system: 'kafka',
  destination: 'events',
  customContextExtractor: datadogContextExtractor, // Converts Datadog decimal IDs to OTel hex
})((ctx) => async (msg) => {
  // Links to parent Datadog span automatically
});

Available Adapters:

| Adapter | Captures |
| --- | --- |
| natsAdapter | subject, stream, consumer, pending, redelivery_count |
| temporalAdapter | workflow_id, run_id, activity_id, task_queue, attempt |
| cloudflareQueuesAdapter | message_id, timestamp, attempts |
| datadogContextExtractor | Converts Datadog decimal trace IDs to OTel hex |
| b3ContextExtractor | Parses B3/Zipkin single or multi-header format |
| xrayContextExtractor | Parses AWS X-Ray trace header |

Building Custom Adapters:

See Bring Your Own System Guide for step-by-step instructions on creating adapters for any messaging system.

Safe Baggage Propagation

Baggage allows key-value pairs to propagate across service boundaries. Autotel provides safe baggage schemas with built-in guardrails for PII detection, size limits, and high-cardinality value hashing.

BusinessBaggage (Pre-built Schema)

Use the pre-built BusinessBaggage schema for common business context:

import { BusinessBaggage, trace } from 'autotel';

export const processOrder = trace((ctx) => async (order: Order) => {
  // Set business context (propagates to downstream services)
  BusinessBaggage.set(ctx, {
    tenantId: order.tenantId,
    userId: order.userId, // Auto-hashed for privacy
    priority: 'high', // Validated against enum
    correlationId: order.id,
  });

  // Make downstream call - baggage propagates automatically
  await fetch('/api/charge', {
    headers: ctx.getTraceHeaders(), // Includes baggage header
  });
});

// In downstream service
export const chargeOrder = trace((ctx) => async () => {
  // Read business context
  const { tenantId, userId, priority } = BusinessBaggage.get(ctx);

  // Use for routing, logging, access control, etc.
  logger.info({ tenantId, priority }, 'Processing charge');
});

Pre-defined Fields:

  • tenantId - String, max 64 chars
  • userId - String, auto-hashed for privacy
  • correlationId - String, for request correlation
  • workflowId - String, for saga/workflow tracking
  • priority - Enum: 'low', 'normal', 'high', 'critical'
  • region - String, deployment region
  • channel - String (web, mobile, api, etc.)

Custom Baggage Schemas

Create type-safe baggage schemas with validation and guardrails:

import { createSafeBaggageSchema } from 'autotel';

// Define custom schema
const OrderBaggage = createSafeBaggageSchema(
  {
    orderId: { type: 'string', maxLength: 36 },
    customerId: { type: 'string', hash: true }, // Auto-hash for privacy
    tier: { type: 'enum', values: ['free', 'pro', 'enterprise'] as const },
    amount: { type: 'number' },
    isVip: { type: 'boolean' },
  },
  {
    prefix: 'order', // Baggage keys: order.orderId, order.tier, etc.
    maxKeyLength: 64, // Validate key length
    maxValueLength: 256, // Validate value length
    redactPII: true, // Auto-detect and redact PII patterns
    hashHighCardinality: true, // Hash values that look high-cardinality
  },
);

// Use in traced functions
export const processOrder = trace((ctx) => async (order: Order) => {
  // Type-safe set (TypeScript validates fields)
  OrderBaggage.set(ctx, {
    orderId: order.id,
    customerId: order.customerId, // Will be hashed
    tier: order.tier, // Must be 'free' | 'pro' | 'enterprise'
    amount: order.total,
    isVip: order.customer.isVip,
  });

  // Type-safe get
  const { orderId, tier, isVip } = OrderBaggage.get(ctx);

  // Check if specific field is set
  if (OrderBaggage.has(ctx, 'customerId')) {
    // ...
  }

  // Delete specific field
  OrderBaggage.delete(ctx, 'amount');

  // Clear all fields
  OrderBaggage.clear(ctx);
});

Guardrails:

  • Size Limits - Prevents baggage from growing unbounded
  • PII Detection - Auto-redacts email, phone, SSN patterns
  • High-Cardinality Hashing - Hashes UUIDs, timestamps to reduce cardinality
  • Enum Validation - Rejects invalid enum values
  • Type Coercion - Numbers/booleans serialized correctly

Workflow & Saga Tracing

Track distributed workflows and sagas with compensation support. Each step creates a linked span, and failed steps can trigger automatic compensation.

Basic Workflows

Use traceWorkflow and traceStep for multi-step processes:

import { traceWorkflow, traceStep } from 'autotel';

// Define workflow with unique ID
export const processOrder = traceWorkflow({
  name: 'OrderFulfillment',
  workflowId: (order) => order.id, // Generate from first arg
})((ctx) => async (order: Order) => {
  // Step 1: Validate order
  await traceStep({ name: 'ValidateOrder' })((ctx) => async () => {
    await validateOrder(order);
  })();

  // Step 2: Reserve inventory (links to previous step)
  await traceStep({
    name: 'ReserveInventory',
    linkToPrevious: true,
  })((ctx) => async () => {
    await inventoryService.reserve(order.items);
  })();

  // Step 3: Process payment
  await traceStep({
    name: 'ProcessPayment',
    linkToPrevious: true,
  })((ctx) => async () => {
    await paymentService.charge(order);
  })();

  return { success: true };
});

Workflow Attributes:

  • workflow.name - Workflow type name
  • workflow.id - Unique instance ID
  • workflow.version - Optional version
  • workflow.step.name - Current step name
  • workflow.step.index - Step sequence number
  • workflow.step.status - completed, failed, compensated

Saga Pattern with Compensation

Define compensating actions for rollback on failure:

import { traceWorkflow, traceStep } from 'autotel';

export const orderSaga = traceWorkflow({
  name: 'OrderSaga',
  workflowId: (order) => order.id,
})((ctx) => async (order: Order) => {
  // Step 1: Reserve inventory (with compensation)
  await traceStep({
    name: 'ReserveInventory',
    compensate: async (stepCtx, error) => {
      // Called if later step fails
      await inventoryService.release(order.items);
      stepCtx.setAttribute('compensation.reason', error.message);
    },
  })((ctx) => async () => {
    await inventoryService.reserve(order.items);
  })();

  // Step 2: Charge payment (with compensation)
  await traceStep({
    name: 'ChargePayment',
    linkToPrevious: true,
    compensate: async (stepCtx, error) => {
      await paymentService.refund(order.id);
    },
  })((ctx) => async () => {
    await paymentService.charge(order);
  })();

  // Step 3: Ship order (no compensation - point of no return)
  await traceStep({
    name: 'ShipOrder',
    linkToPrevious: true,
  })((ctx) => async () => {
    await shippingService.ship(order);
  })();
});

// If ShipOrder fails, compensations run in reverse:
// 1. ChargePayment.compensate (refund)
// 2. ReserveInventory.compensate (release)

WorkflowContext Methods (see the sketch after this list):

  • ctx.getWorkflowId() - Get current workflow instance ID
  • ctx.getWorkflowName() - Get workflow type name
  • ctx.getStepIndex() - Current step number
  • ctx.getPreviousStepContext() - SpanContext for linking
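
A hedged sketch of reading that metadata (it assumes the ctx passed to the traceWorkflow factory exposes the helpers above and that getPreviousStepContext() returns an OpenTelemetry SpanContext; the Order type and validator are illustrative):

import { traceWorkflow, traceStep } from 'autotel';

type Order = { id: string };
const validateOrder = async (_order: Order) => {}; // illustrative

export const fulfillOrder = traceWorkflow({
  name: 'OrderFulfillment',
  workflowId: (order: Order) => order.id,
})((ctx) => async (order: Order) => {
  // Tag the root span with the workflow instance metadata.
  ctx.setAttribute('workflow.instance.id', ctx.getWorkflowId());
  ctx.setAttribute('workflow.instance.name', ctx.getWorkflowName());

  await traceStep({ name: 'ValidateOrder' })((stepCtx) => async () => {
    stepCtx.setAttribute('order.id', order.id);
    await validateOrder(order);
  })();

  // After a step has run, its span context is available for linking the next one.
  const prev = ctx.getPreviousStepContext();
  if (prev) ctx.setAttribute('workflow.previous.trace_id', prev.traceId);
});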

Compensation Attributes:

  • workflow.step.compensated - Boolean, true if compensation ran
  • workflow.compensation.executed - Number of compensations executed
  • compensation.reason - Why compensation was triggered

Business Metrics & Product Events

Autotel treats metrics and events as first-class citizens so engineers and product teams share the same context.

OpenTelemetry Metrics (Metric class + helpers)

import { Metric, createHistogram } from 'autotel';

const metrics = new Metric('checkout');
const revenue = createHistogram('checkout.revenue');

export const processOrder = trace((ctx) => async (order) => {
  metrics.trackEvent('order.completed', {
    orderId: order.id,
    amount: order.total,
  });
  metrics.trackValue('revenue', order.total, { currency: order.currency });
  revenue.record(order.total, { currency: order.currency });
});
  • Emits OpenTelemetry counters/histograms via the OTLP endpoint configured in init().
  • Infrastructure metrics are enabled by default in every environment.

Product Events (PostHog, Mixpanel, Amplitude, …)

Track user behavior, conversion funnels, and business outcomes alongside your OpenTelemetry traces.

Recommended: Configure subscribers in init(), use global track() function:

import { init, track, trace } from 'autotel';
import { PostHogSubscriber } from 'autotel-subscribers/posthog';

init({
  service: 'checkout',
  subscribers: [new PostHogSubscriber({ apiKey: process.env.POSTHOG_KEY! })],
});

export const signup = trace('user.signup', async (user) => {
  // All events use subscribers from init() automatically
  track('user.signup', { userId: user.id, plan: user.plan });
  track.funnelStep('checkout', 'completed', { cartValue: user.cartTotal });
  track.value('lifetimeValue', user.cartTotal, { currency: 'USD' });
  track.outcome('user.signup', 'success', { cohort: user.cohort });
});

Event instance (inherits subscribers from init()):

import { Event } from 'autotel/event';

// Uses subscribers configured in init() - no need to pass them again
const events = new Event('checkout');

events.trackEvent('order.completed', { amount: 99.99 });
events.trackFunnelStep('checkout', 'started', { cartValue: 99.99 });

Override subscribers for specific Event instance:

import { Event } from 'autotel/event';
import { MixpanelSubscriber } from 'autotel-subscribers/mixpanel';

// Override: use different subscribers for this instance (multi-tenant, A/B testing, etc.)
const marketingEvents = new Event('marketing', {
  subscribers: [new MixpanelSubscriber({ token: process.env.MIXPANEL_TOKEN! })],
});

marketingEvents.trackEvent('campaign.viewed', { campaignId: '123' });

Subscriber Resolution:

  • If subscribers passed to Event constructor → uses those (instance override)
  • If no subscribers passed → falls back to init() subscribers (global config)
  • If neither configured → events logged only (graceful degradation)

Auto-enrichment adds traceId, spanId, correlationId, operation.name, service.version, and deployment.environment to every event payload without manual wiring.

Logging with Trace Context

Bring your own logger (Pino, Winston, Bunyan, etc.) and autotel automatically instruments it to:

  • Inject trace context (traceId, spanId, correlationId) into every log record
  • Record errors in the active OpenTelemetry span
  • Bridge logs to the OpenTelemetry Logs API for OTLP export to Grafana, Datadog, etc.

Using Pino (recommended)

Note: While @opentelemetry/auto-instrumentations-node includes Pino instrumentation, you may need to install @opentelemetry/instrumentation-pino separately for trace context injection to work reliably.

npm install pino
# Optional but recommended:
npm install @opentelemetry/instrumentation-pino

import pino from 'pino';
import { init, trace } from 'autotel';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
});

init({
  service: 'user-service',
  logger,
  autoInstrumentations: ['pino'], // Enable Pino instrumentation for trace context
});

export const createUser = trace(async (data: UserData) => {
  logger.info({ userId: data.id }, 'Creating user');
  try {
    const user = await db.users.create(data);
    logger.info({ userId: user.id }, 'User created');
    return user;
  } catch (error) {
    logger.error({ err: error, userId: data.id }, 'Create failed');
    throw error;
  }
});

Using Winston

Note: While @opentelemetry/auto-instrumentations-node includes Winston instrumentation, you must install @opentelemetry/instrumentation-winston separately for trace context injection to work.

npm install winston @opentelemetry/instrumentation-winston

import winston from 'winston';
import { init } from 'autotel';

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

init({
  service: 'user-service',
  logger,
  autoInstrumentations: ['winston'], // Enable Winston instrumentation for trace context
});

Using Bunyan (or other loggers)

Note: While @opentelemetry/auto-instrumentations-node includes Bunyan instrumentation, you must install @opentelemetry/instrumentation-bunyan separately for trace context injection to work.

npm install bunyan @opentelemetry/instrumentation-bunyan

import bunyan from 'bunyan';
import { init } from 'autotel';

const logger = bunyan.createLogger({ name: 'user-service' });

init({
  service: 'user-service',
  logger,
  autoInstrumentations: ['bunyan'], // Enable Bunyan instrumentation for trace context
});

Note: For manual instrumentation configuration, you can also use:

import { BunyanInstrumentation } from '@opentelemetry/instrumentation-bunyan';

init({
  service: 'user-service',
  logger,
  instrumentations: [new BunyanInstrumentation()], // Manual instrumentation with custom config
});

Can't find your logger? Check OpenTelemetry JS Contrib for available instrumentations, or open an issue to request official support!

What you get automatically

  • ✅ Logs include traceId, spanId, correlationId for correlation with traces
  • ✅ Errors are automatically recorded in the active span
  • ✅ Logs export via OTLP to your observability backend (Grafana, Datadog, etc.)
  • ✅ Simple setup - install the instrumentation package and enable it in autoInstrumentations

Canonical Log Lines (Wide Events)

Canonical log lines implement the "wide events" pattern: one comprehensive log line per request with ALL context. This makes logs queryable as structured data instead of requiring string search.

Key Benefits:

  • One log line per request with all context (user, cart, payment, errors, etc.)
  • High-cardinality, high-dimensionality data for powerful queries
  • Automatic - no manual logging needed, just use trace() and ctx.setAttribute()
  • Queryable - WHERE user.id = 'user-123' AND error.code IS NOT NULL

Basic Usage

import { init, trace, setUser, httpServer } from 'autotel';
import pino from 'pino';

const logger = pino();
init({
  service: 'checkout-api',
  logger,
  canonicalLogLines: {
    enabled: true,
    rootSpansOnly: true, // One canonical log line per request
    logger, // Use Pino for canonical log lines
  },
});

export const processCheckout = trace((ctx) => async (order: Order) => {
  setUser(ctx, {
    id: order.userId,
    subscription: order.plan,
    accountAgeDays: daysSince(order.userCreatedAt),
  });

  httpServer(ctx, {
    method: 'POST',
    route: '/api/checkout',
    statusCode: 200,
  });

  ctx.setAttributes({
    'cart.total_cents': order.total,
    'payment.method': order.paymentMethod,
    'payment.provider': 'stripe',
  });

  // When this span ends, a canonical log line is automatically emitted
  // with ALL attributes: user.id, user.subscription, cart.total_cents, etc.
});

What You Get

When a span ends, a canonical log line is automatically emitted with:

  • Core fields: operation, traceId, spanId, correlationId, duration_ms, status_code
  • ALL span attributes: Every attribute you set with ctx.setAttribute()
  • Resource attributes: service.name, service.version, deployment.environment
  • Timestamp: ISO 8601 format

Example canonical log line:

{
  "level": "info",
  "msg": "[processCheckout] Request completed",
  "operation": "processCheckout",
  "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
  "spanId": "00f067aa0ba902b7",
  "correlationId": "4bf92f3577b34da",
  "duration_ms": 124.7,
  "status_code": 1,
  "user.id": "user-123",
  "user.subscription": "premium",
  "user.account_age_days": 847,
  "cart.total_cents": 15999,
  "payment.method": "card",
  "payment.provider": "stripe",
  "service.name": "checkout-api",
  "timestamp": "2024-01-15T10:23:45.612Z"
}

Query Examples

With canonical log lines, you can run powerful queries:

-- Find all checkout failures for premium users
SELECT * FROM logs
WHERE user.subscription = 'premium'
  AND error.code IS NOT NULL;

-- Group errors by code
SELECT error.code, COUNT(*)
FROM logs
WHERE error.code IS NOT NULL
GROUP BY error.code;

-- Find slow checkouts with coupons
SELECT * FROM logs
WHERE duration_ms > 200
  AND cart.coupon_applied IS NOT NULL;

Configuration Options

init({
  service: 'my-app',
  canonicalLogLines: {
    enabled: true,
    rootSpansOnly: true, // Only log root spans (one per request)
    minLevel: 'info', // Minimum log level ('debug' | 'info' | 'warn' | 'error')
    logger: pino(), // Custom logger (defaults to OTel Logs API)
    messageFormat: (span) => {
      // Custom message format
      const status = span.status.code === 2 ? 'ERROR' : 'SUCCESS';
      return `${span.name} [${status}]`;
    },
    includeResourceAttributes: true, // Include service.name, service.version, etc.
  },
});

Auto Instrumentation & Advanced Configuration

  • autoInstrumentations – Enable OpenTelemetry auto-instrumentations (HTTP, Express, Fastify, Prisma, Pino…). Requires @opentelemetry/auto-instrumentations-node.
  • instrumentations – Provide manual instrumentation instances, e.g., new HttpInstrumentation().
  • resource / resourceAttributes – Declare cluster/region/tenant metadata once and it flows everywhere.
  • spanProcessor, metricReader, logRecordProcessors – Plug in any OpenTelemetry exporter or your in-house pipeline.
  • headers – Attach vendor auth headers when using the built-in OTLP HTTP exporters.
  • sdkFactory – Receive the Autotel defaults and return a fully customized NodeSDK for the rare cases you need complete control.
import { init } from 'autotel';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';

init({
  service: 'checkout',
  environment: 'production',
  resourceAttributes: {
    'cloud.region': 'us-east-1',
    'deployment.environment': 'production',
  },
  autoInstrumentations: ['http', 'express', 'pino'],
  instrumentations: [new HttpInstrumentation()],
  headers: 'Authorization=Basic ...',
  subscribers: [new PostHogSubscriber({ apiKey: 'phc_xxx' })],
});

⚠️ autoInstrumentations vs. Manual Instrumentations

When using both autoInstrumentations and instrumentations, manual instrumentations always take precedence. If you need custom configs (like requireParentSpan: false for standalone scripts), use one or the other:

Option A: Auto-instrumentations only (all defaults)

init({
  service: 'my-app',
  autoInstrumentations: true, // All libraries with default configs
});

Option B: Manual instrumentations with custom configs

import { MongoDBInstrumentation } from '@opentelemetry/instrumentation-mongodb';
import { MongooseInstrumentation } from '@opentelemetry/instrumentation-mongoose';

init({
  service: 'my-app',
  autoInstrumentations: false, // Must be false to avoid conflicts
  instrumentations: [
    new MongoDBInstrumentation({
      requireParentSpan: false, // Custom config for scripts/cron jobs
    }),
    new MongooseInstrumentation({
      requireParentSpan: false,
    }),
  ],
});

Option C: Mix auto + manual (best of both)

import { MongoDBInstrumentation } from '@opentelemetry/instrumentation-mongodb';

init({
  service: 'my-app',
  autoInstrumentations: ['http', 'express'], // Auto for most libraries
  instrumentations: [
    // Manual config only for libraries that need custom settings
    new MongoDBInstrumentation({
      requireParentSpan: false,
    }),
  ],
});

Why requireParentSpan matters: Many instrumentations default to requireParentSpan: true, which prevents spans from being created in standalone scripts, cron jobs, or background workers without an active parent span. Set it to false for these use cases.

⚠️ Auto-Instrumentation Setup Requirements

OpenTelemetry's auto-instrumentation packages require special setup depending on your module system:

ESM Setup (Recommended for Node 18.19+)

Use autotel/register for clean ESM instrumentation without complex NODE_OPTIONS:

// instrumentation.mjs (or .ts)
import 'autotel/register'; // MUST be first import!
import { init } from 'autotel';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

init({
  service: 'my-app',
  instrumentations: getNodeAutoInstrumentations({
    '@opentelemetry/instrumentation-pino': { enabled: true },
  }),
});

# Run with --import flag
tsx --import ./instrumentation.mjs src/server.ts
# or with Node
node --import ./instrumentation.mjs src/server.js

Requirements for ESM instrumentation:

  • Install @opentelemetry/auto-instrumentations-node as a direct dependency in your app
  • Import autotel/register before any other imports
  • Use --import flag (not --require)

CommonJS Setup

No special flags required. Just use --require:

// package.json
{
  "type": "commonjs" // or remove "type" field
}

node --require ./instrumentation.js src/server.js

Zero-Config ESM (reads from env vars):

OTEL_SERVICE_NAME=my-app tsx --import autotel/auto src/index.ts

Legacy ESM (Node 18.0-18.18)

If you can't use autotel/register, use the --experimental-loader flag:

NODE_OPTIONS="--experimental-loader=@opentelemetry/instrumentation/hook.mjs --import ./instrumentation.ts" tsx src/server.ts

Note: The loader hook is an OpenTelemetry upstream requirement for ESM, not an autotel limitation. See OpenTelemetry ESM docs for details.

Operational Safety & Runtime Controls

  • Adaptive sampling – 10% baseline, 100% for errors/slow spans by default (override via sampler; see the sketch below).
  • Rate limiting & circuit breakers – Prevent telemetry storms when backends misbehave.
  • Validation – Configurable attribute/event name lengths, maximum counts, and nesting depth.
  • Sensitive data redaction – Passwords, tokens, API keys, and any custom regex you provide are automatically masked before export.
  • Auto-flush – Events buffers drain when root spans end (disable with flushOnRootSpanEnd: false).
  • Runtime flags – Toggle metrics or swap endpoints via env vars without code edits.
# Disable metrics without touching code (metrics are ON by default)
AUTOTEL_METRICS=off node server.js

# Point at a different collector
OTLP_ENDPOINT=https://otel.mycompany.com node server.js
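
The adaptive sampler can also be replaced with any OpenTelemetry Sampler via the sampler option (a minimal sketch using the standard SDK samplers; it assumes the sampler option accepts them, as the Configuration Reference below suggests):

import { init } from 'autotel';
import { ParentBasedSampler, TraceIdRatioBasedSampler } from '@opentelemetry/sdk-trace-base';

// Swap the adaptive default for a fixed, parent-respecting 25% sample rate.
init({
  service: 'checkout',
  sampler: new ParentBasedSampler({
    root: new TraceIdRatioBasedSampler(0.25),
  }),
});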

Configuration Reference

init({
  service: string; // required
  subscribers?: EventSubscriber[];
  endpoint?: string;
  protocol?: 'http' | 'grpc'; // OTLP protocol (default: 'http')
  metrics?: boolean | 'auto';
  sampler?: Sampler;
  version?: string;
  environment?: string;
  baggage?: boolean | string; // Auto-copy baggage to span attributes
  flushOnRootSpanEnd?: boolean;  // Auto-flush events (default: true)
  forceFlushOnShutdown?: boolean;  // Force-flush spans on shutdown (default: false)
  autoInstrumentations?: string[] | boolean | Record<string, { enabled?: boolean }>;
  instrumentations?: NodeSDKConfiguration['instrumentations'];
  spanProcessor?: SpanProcessor;
  metricReader?: MetricReader;
  logRecordProcessors?: LogRecordProcessor[];
  resource?: Resource;
  resourceAttributes?: Record<string, string>;
  headers?: Record<string, string> | string;
  sdkFactory?: (defaults: NodeSDK) => NodeSDK;
  validation?: Partial<ValidationConfig>;
  logger?: Logger; // created via createLogger() or bring your own
  openllmetry?: {
    enabled: boolean;
    options?: Record<string, unknown>; // Passed to @traceloop/node-server-sdk
  };
});

Event Subscribers:

Configure event subscribers globally to send product events to PostHog, Mixpanel, Amplitude, etc.:

import { init } from 'autotel';
import { PostHogSubscriber } from 'autotel-subscribers/posthog';

init({
  service: 'my-app',
  subscribers: [new PostHogSubscriber({ apiKey: process.env.POSTHOG_KEY! })],
});

Event instances automatically inherit these subscribers unless you explicitly override them. See Product Events for details.

Baggage Configuration:

Enable automatic copying of baggage entries to span attributes:

init({
  service: 'my-app',
  baggage: true, // Copies baggage to span attributes with 'baggage.' prefix
});

// With custom prefix
init({
  service: 'my-app',
  baggage: 'ctx', // Copies with 'ctx.' prefix → ctx.tenant.id
});

// No prefix
init({
  service: 'my-app',
  baggage: '', // Copies directly → tenant.id
});

This eliminates the need to manually call ctx.setAttribute() for baggage values. See Baggage (Context Propagation) for usage examples.

Protocol Configuration:

Use the protocol parameter to switch between HTTP/protobuf (default) and gRPC:

// HTTP (default) - uses port 4318
init({
  service: 'my-app',
  protocol: 'http', // or omit (defaults to http)
  endpoint: 'http://localhost:4318',
});

// gRPC - uses port 4317, better performance
init({
  service: 'my-app',
  protocol: 'grpc',
  endpoint: 'localhost:4317',
});

Vendor Backend Configurations:

For simplified setup with popular observability platforms, see autotel-backends:

npm install autotel-backends

import { init } from 'autotel';
import { createDatadogConfig } from 'autotel-backends/datadog';
import { createHoneycombConfig } from 'autotel-backends/honeycomb';

// Datadog
init(
  createDatadogConfig({
    apiKey: process.env.DATADOG_API_KEY!,
    service: 'my-app',
    environment: 'production',
  }),
);

// Honeycomb (automatically uses gRPC)
init(
  createHoneycombConfig({
    apiKey: process.env.HONEYCOMB_API_KEY!,
    service: 'my-app',
    environment: 'production',
    dataset: 'production', // optional, for classic accounts
  }),
);

Environment Variables:

Autotel supports standard OpenTelemetry environment variables for zero-code configuration across environments:

# Service configuration
export OTEL_SERVICE_NAME=my-app

# OTLP collector endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# Protocol: 'http' or 'grpc' (default: 'http')
export OTEL_EXPORTER_OTLP_PROTOCOL=http

# Authentication headers (comma-separated key=value pairs)
export OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=YOUR_API_KEY

# Resource attributes (comma-separated key=value pairs)
export OTEL_RESOURCE_ATTRIBUTES=service.version=1.2.3,deployment.environment=production,team=backend

Configuration Precedence: Explicit init() config > env vars > defaults

Example: Honeycomb with env vars

export OTEL_SERVICE_NAME=my-app
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=YOUR_API_KEY
export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production

Example: Datadog with env vars

export OTEL_SERVICE_NAME=my-app
export OTEL_EXPORTER_OTLP_ENDPOINT=https://http-intake.logs.datadoghq.com
export OTEL_EXPORTER_OTLP_HEADERS=DD-API-KEY=YOUR_API_KEY
export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production

See packages/autotel/.env.example for a complete template.

Validation tuning example:

init({
  service: 'checkout',
  validation: {
    sensitivePatterns: [/password/i, /secret/i, /creditCard/i],
    maxAttributeValueLength: 5_000,
    maxAttributeCount: 100,
    maxNestingDepth: 5,
  },
});

Building Custom Instrumentation

Autotel is designed as an enabler - it provides composable primitives that let you instrument anything in your codebase. Here's how to use the building blocks to create custom instrumentation for queues, cron jobs, and other patterns.

Instrumenting Queue Consumers

import { trace, span, track } from 'autotel';

// Wrap your consumer handler with trace()
export const processMessage = trace(async function processMessage(
  message: Message,
) {
  // Use span() to break down processing stages
  await span({ name: 'parse.message' }, async (ctx) => {
    ctx.setAttribute('message.id', message.id);
    ctx.setAttribute('message.type', message.type);
    return parseMessage(message);
  });

  await span({ name: 'validate.message' }, async () => {
    return validateMessage(message);
  });

  await span({ name: 'process.business.logic' }, async () => {
    return handleMessage(message);
  });

  // Track events
  track('message.processed', {
    messageType: message.type,
    processingTime: Date.now() - message.timestamp,
  });
});

// Use i