@arizeai/phoenix-otel

v1.0.2

Phoenix OpenTelemetry tracing SDK for Node.js — register tracing, manual span helpers, and context attributes for LLM observability

Downloads

42,423

Readme

A lightweight wrapper around OpenTelemetry for Node.js applications that simplifies sending traces to Arize Phoenix. @arizeai/phoenix-otel handles provider registration and OTLP export, then re-exports the full @arizeai/openinference-core helper surface from the same package path so you can register tracing and author manual spans from one import.

Note: This package is under active development and APIs may change.

Features

  • Simple Setup - One-line configuration with sensible defaults
  • Environment Variables - Automatic configuration from environment variables
  • Batch Processing - Built-in batch span processing for production use
  • OpenInference Helpers Included - Re-exports withSpan, traceChain, traceAgent, traceTool, observe, context setters, attribute builders, OITracer, and utility helpers
  • Provider-Swap Safe Wrappers - The re-exported OpenInference helpers resolve the default tracer when the wrapped function executes, so module-scoped wrappers continue following global provider changes
  • Agent-Friendly Local Docs - Ships curated docs and source in node_modules/@arizeai/phoenix-otel/

Installation

npm install @arizeai/phoenix-otel

Quick Start

Basic Usage

The simplest way to get started is to use register() together with the built-in tracing helpers:

import { register, traceChain } from "@arizeai/phoenix-otel";

const provider = register({
  projectName: "my-app",
});

const answerQuestion = traceChain(
  async (question: string) => `Handled: ${question}`,
  { name: "answer-question" }
);

await answerQuestion("What is Phoenix?");
await provider.shutdown();

register() sets up the Phoenix exporter. The tracing helpers come from @arizeai/openinference-core, re-exported through @arizeai/phoenix-otel.

Production Setup

For production use with Phoenix Cloud:

import { register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-app",
  url: "https://app.phoenix.arize.com",
  apiKey: process.env.PHOENIX_API_KEY,
});
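
In long-running services, flushing buffered spans before the process exits avoids losing the tail of a batch. A minimal sketch using the provider returned by register() and the shutdown() call from the Quick Start above:

import { register } from "@arizeai/phoenix-otel";

const provider = register({
  projectName: "my-app",
  url: "https://app.phoenix.arize.com",
  apiKey: process.env.PHOENIX_API_KEY,
});

// Flush any spans still held by the batch processor, then exit.
process.on("SIGTERM", async () => {
  await provider.shutdown();
  process.exit(0);
});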

Configuration

Environment Variables

The register function automatically reads from environment variables:

# For local Phoenix server (default)
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"

# For Phoenix Cloud
export PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com"
export PHOENIX_API_KEY="your-api-key"
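
With those variables set, a sketch of the call site needs little more than a project name, since the endpoint and API key are read from the environment:

import { register } from "@arizeai/phoenix-otel";

// url and apiKey are resolved from PHOENIX_COLLECTOR_ENDPOINT and
// PHOENIX_API_KEY when they are not passed explicitly.
register({
  projectName: "my-app",
});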

Configuration Options

The register function accepts the following parameters:

| Parameter | Type | Default | Description |
| ---------------- | ---------------------- | ----------------------- | ------------------------------------------------------ |
| projectName | string | "default" | The project name for organizing traces in Phoenix |
| url | string | "http://localhost:6006" | The URL to your Phoenix instance |
| apiKey | string | undefined | API key for Phoenix authentication |
| headers | Record<string, string> | {} | Custom headers for OTLP requests |
| batch | boolean | true | Use batch span processing (recommended for production) |
| instrumentations | Instrumentation[] | undefined | Array of OpenTelemetry instrumentations to register |
| global | boolean | true | Register the tracer provider globally |
| diagLogLevel | DiagLogLevel | undefined | Diagnostic logging level for debugging |
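
As a rough sketch, a call exercising most of these options could look like the following; the values shown are placeholders rather than recommendations:

import { DiagLogLevel, register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-app",
  url: "https://app.phoenix.arize.com",
  apiKey: process.env.PHOENIX_API_KEY,
  headers: { "X-Environment": "staging" },
  batch: true, // batch span processing, recommended for production
  global: true, // register the tracer provider globally
  diagLogLevel: DiagLogLevel.INFO, // OpenTelemetry diagnostic logging
});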

Usage Examples

With Auto-Instrumentation

Automatically instrument common libraries (works best with CommonJS):

import { register } from "@arizeai/phoenix-otel";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";

register({
  projectName: "my-express-app",
  instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});

Note: Auto-instrumentation via the instrumentations parameter works best with CommonJS projects. ESM projects require manual instrumentation.

With OpenAI (ESM)

For ESM projects, manually instrument libraries:

// instrumentation.ts
import { register, registerInstrumentations } from "@arizeai/phoenix-otel";
import OpenAI from "openai";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

register({
  projectName: "openai-app",
});

// Manual instrumentation for ESM
const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);

registerInstrumentations({
  instrumentations: [instrumentation],
});

// main.ts
import "./instrumentation.ts";
import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

Tracing Helpers

The package includes withSpan, traceChain, traceAgent, and traceTool for wrapping functions with OpenInference spans. Each helper automatically records inputs, outputs, errors, and span kind.

import {
  register,
  traceAgent,
  traceChain,
  traceTool,
  withSpan,
} from "@arizeai/phoenix-otel";

register({ projectName: "my-app" });

// traceTool — for tool calls and API lookups
const searchDocs = traceTool(
  async (query: string) => {
    const response = await fetch(`/api/search?q=${query}`);
    return response.json();
  },
  { name: "search-docs" }
);

// traceChain — for pipeline steps and orchestration
const summarize = traceChain(
  async (text: string) => `Summary of ${text.length} chars`,
  { name: "summarize" }
);

// traceAgent — for autonomous agent entry points
const supportAgent = traceAgent(
  async (question: string) => {
    const docs = await searchDocs(question);
    return summarize(JSON.stringify(docs));
  },
  { name: "support-agent" }
);

// withSpan — general purpose, specify kind explicitly
const retrieveDocs = withSpan(
  async (query: string) =>
    fetch(`/api/search?q=${query}`).then((r) => r.json()),
  { name: "retrieve-docs", kind: "RETRIEVER" }
);

These helpers resolve the default tracer when the wrapped function runs, so traced functions defined at module scope keep following global provider changes.

Custom Input And Output Processing

Use processInput and processOutput when you want richer OpenInference attributes than the default JSON-serialized input.value and output.value.

import {
  OpenInferenceSpanKind,
  getInputAttributes,
  getRetrieverAttributes,
  withSpan,
} from "@arizeai/phoenix-otel";

const retrieveDocs = withSpan(
  async (query: string) => [`Doc A for ${query}`, `Doc B for ${query}`],
  {
    name: "retrieve-docs",
    kind: OpenInferenceSpanKind.RETRIEVER,
    processInput: (query) => getInputAttributes(query),
    processOutput: (documents) =>
      getRetrieverAttributes({
        documents: documents.map((content, index) => ({
          id: `doc-${index}`,
          content,
        })),
      }),
  }
);

Context Attributes

Propagate session IDs, user IDs, metadata, and tags to all child spans using context setters:

import {
  context,
  register,
  setMetadata,
  setSession,
  setUser,
  traceChain,
} from "@arizeai/phoenix-otel";

register({ projectName: "my-app" });

const handleQuery = traceChain(async (query: string) => `Handled: ${query}`, {
  name: "handle-query",
});

// All spans inside context.with() inherit session, user, and metadata
await context.with(
  setMetadata(
    setUser(setSession(context.active(), { sessionId: "sess-123" }), {
      userId: "user-456",
    }),
    { environment: "production" }
  ),
  () => handleQuery("Hello")
);

Available setters: setSession, setUser, setMetadata, setTags, setAttributes, setPromptTemplate.
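
The setters not demonstrated above compose the same way by deriving a new context. The sketch below assumes setTags accepts an array of strings and setPromptTemplate accepts a template with its variables; treat those argument shapes as assumptions rather than documented signatures:

import {
  context,
  setPromptTemplate,
  setTags,
  traceChain,
} from "@arizeai/phoenix-otel";

const answer = traceChain(async (q: string) => `Answer to ${q}`, {
  name: "answer",
});

await context.with(
  // Assumed shapes: string[] for tags, { template, variables } for the prompt.
  setPromptTemplate(setTags(context.active(), ["beta", "faq"]), {
    template: "Answer the question: {{question}}",
    variables: { question: "What is Phoenix?" },
  }),
  () => answer("What is Phoenix?")
);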

If you create spans manually with a plain OpenTelemetry tracer, copy the propagated context attributes onto the span explicitly:

import {
  context,
  getAttributesFromContext,
  register,
  trace,
} from "@arizeai/phoenix-otel";

register({ projectName: "my-app" });

const tracer = trace.getTracer("manual-tracer");
const span = tracer.startSpan("manual-span");
span.setAttributes(getAttributesFromContext(context.active()));
span.end();

Decorators

observe wraps class methods with tracing while preserving the method's this context. It targets TypeScript 5+ standard decorators.

import { OpenInferenceSpanKind, observe } from "@arizeai/phoenix-otel";

class ChatService {
  @observe({ kind: OpenInferenceSpanKind.CHAIN })
  async runWorkflow(message: string) {
    return `processed: ${message}`;
  }

  @observe({ name: "llm-call", kind: OpenInferenceSpanKind.LLM })
  async callModel(prompt: string) {
    return `model output for: ${prompt}`;
  }
}

Attribute Helper APIs

Use the attribute helpers when you want to build OpenInference-compatible span attributes directly:

import { getLLMAttributes, trace } from "@arizeai/phoenix-otel";

const tracer = trace.getTracer("llm-service");

tracer.startActiveSpan("llm-inference", (span) => {
  span.setAttributes(
    getLLMAttributes({
      provider: "openai",
      modelName: "gpt-4o-mini",
      inputMessages: [{ role: "user", content: "What is Phoenix?" }],
      outputMessages: [{ role: "assistant", content: "Phoenix is..." }],
      tokenCount: { prompt: 12, completion: 44, total: 56 },
      invocationParameters: { temperature: 0.2 },
    })
  );
  span.end();
});

Available helpers include:

  • getLLMAttributes
  • getEmbeddingAttributes
  • getRetrieverAttributes
  • getToolAttributes
  • getMetadataAttributes
  • getInputAttributes / getOutputAttributes
  • defaultProcessInput / defaultProcessOutput
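
getInputAttributes is shown above taking a single value; as a sketch, and under the assumption that getOutputAttributes and getMetadataAttributes mirror that single-argument shape, the builders can also be combined on a hand-made span:

import {
  getInputAttributes,
  getMetadataAttributes,
  getOutputAttributes,
  trace,
} from "@arizeai/phoenix-otel";

const tracer = trace.getTracer("tool-service");

tracer.startActiveSpan("lookup-weather", (span) => {
  span.setAttributes(getInputAttributes({ city: "Berlin" }));
  // ... perform the actual lookup here ...
  span.setAttributes(getOutputAttributes({ tempC: 21 }));
  span.setAttributes(getMetadataAttributes({ source: "demo" })); // assumed shape
  span.end();
});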

Trace Config And Redaction

OITracer wraps an OpenTelemetry tracer and can redact or drop sensitive OpenInference attributes before spans are written:

import {
  OITracer,
  OpenInferenceSpanKind,
  trace,
  withSpan,
} from "@arizeai/phoenix-otel";

const tracer = new OITracer({
  tracer: trace.getTracer("my-service"),
  traceConfig: {
    hideInputs: true,
    hideOutputText: true,
    hideEmbeddingVectors: true,
    hideLLMTools: true,
    base64ImageMaxLength: 8_000,
  },
});

const safeLLMCall = withSpan(
  async (prompt: string) => `model response for ${prompt}`,
  {
    tracer,
    kind: OpenInferenceSpanKind.LLM,
    name: "safe-llm-call",
  }
);

Supported environment variables include:

  • OPENINFERENCE_HIDE_INPUTS
  • OPENINFERENCE_HIDE_OUTPUTS
  • OPENINFERENCE_HIDE_INPUT_MESSAGES
  • OPENINFERENCE_HIDE_OUTPUT_MESSAGES
  • OPENINFERENCE_HIDE_INPUT_IMAGES
  • OPENINFERENCE_HIDE_INPUT_TEXT
  • OPENINFERENCE_HIDE_OUTPUT_TEXT
  • OPENINFERENCE_HIDE_EMBEDDING_VECTORS
  • OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH
  • OPENINFERENCE_HIDE_PROMPTS
  • OPENINFERENCE_HIDE_LLM_TOOLS

Raw OpenTelemetry Spans

For full control over attributes and timing, use the OpenTelemetry API directly:

import { register, trace, SpanStatusCode } from "@arizeai/phoenix-otel";

register({ projectName: "my-app" });

const tracer = trace.getTracer("my-service");

async function processOrder(orderId: string) {
  return tracer.startActiveSpan("process-order", async (span) => {
    try {
      span.setAttribute("order.id", orderId);
      const result = await fetchOrderDetails(orderId);
      span.setAttribute("order.status", result.status);
      return result;
    } catch (error) {
      span.recordException(error as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw error;
    } finally {
      span.end();
    }
  });
}

Utility Helpers

The package also re-exports small utilities from @arizeai/openinference-core:

  • withSafety({ fn, onError? }) wraps a function so that it returns null instead of throwing when an error occurs
  • safelyJSONStringify(value) and safelyJSONParse(value) guard JSON operations against exceptions
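
A short sketch of both; it assumes the wrapped function forwards its arguments, that onError receives the caught error, and that the JSON helpers return null on failure:

import { safelyJSONParse, withSafety } from "@arizeai/phoenix-otel";

// The wrapper returns null on error instead of throwing.
const safeDivide = withSafety({
  fn: (a: number, b: number) => {
    if (b === 0) throw new Error("division by zero");
    return a / b;
  },
  onError: (error) => console.warn("divide failed", error),
});

safeDivide(10, 2); // 5
safeDivide(1, 0); // null, with the error passed to onError

safelyJSONParse("{ not valid json"); // null rather than a thrown SyntaxError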

Development vs Production

Development (with debug logging):

import { DiagLogLevel, register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-app-dev",
  url: "http://localhost:6006",
  batch: false, // Immediate span delivery for faster feedback
  diagLogLevel: DiagLogLevel.DEBUG,
});

Production (optimized for performance):

import { register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-app-prod",
  url: "https://app.phoenix.arize.com",
  apiKey: process.env.PHOENIX_API_KEY,
  batch: true, // Batch processing for better performance
});

Custom Headers

Add custom headers to OTLP requests:

import { register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-app",
  url: "https://app.phoenix.arize.com",
  headers: {
    "X-Custom-Header": "custom-value",
    "X-Environment": process.env.NODE_ENV || "development",
  },
});

Non-Global Provider

Use the provider explicitly without registering globally:

import { register } from "@arizeai/phoenix-otel";

const provider = register({
  projectName: "my-app",
  global: false,
});

// Use the provider explicitly
const tracer = provider.getTracer("my-tracer");

Docs And Source Code In node_modules

After install, a coding agent can inspect the exact versioned docs and implementation that shipped with the package:

node_modules/@arizeai/phoenix-otel/docs/
node_modules/@arizeai/phoenix-otel/src/

Because @arizeai/phoenix-otel re-exports @arizeai/openinference-core, the dependency docs are also useful local references:

node_modules/@arizeai/openinference-core/docs/
node_modules/@arizeai/openinference-core/src/

Coding Agent Skill

The Phoenix repo includes a phoenix-tracing skill that teaches coding agents (Claude Code, Cursor, etc.) how to instrument LLM applications with OpenInference tracing. Install it with:

npx skills add Arize-ai/phoenix --skill phoenix-tracing

Tracing helpers:

import {
  observe,
  traceAgent,
  traceChain,
  traceTool,
  withSpan,
} from "@arizeai/phoenix-otel";

Context attribute setters:

import {
  setAttributes,
  setMetadata,
  setPromptTemplate,
  setSession,
  setTags,
  setUser,
} from "@arizeai/phoenix-otel";

Attribute builders for rich span data:

import {
  defaultProcessInput,
  defaultProcessOutput,
  getEmbeddingAttributes,
  getLLMAttributes,
  getRetrieverAttributes,
  getToolAttributes,
} from "@arizeai/phoenix-otel";

Redaction and utility helpers:

import {
  OITracer,
  safelyJSONParse,
  safelyJSONStringify,
  withSafety,
} from "@arizeai/phoenix-otel";

The tracing helper wrappers resolve the default tracer when they run. That keeps spans attached to the current provider in experiments and in any workflow that swaps providers during process lifetime.
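
As a small illustration (a sketch, not a prescribed pattern), a helper wrapped at module scope before register() is called still emits spans through whichever provider is current when it runs:

import { register, traceTool } from "@arizeai/phoenix-otel";

// Wrapped at module load, before any provider has been registered.
const lookup = traceTool(async (id: string) => ({ id, found: true }), {
  name: "lookup",
});

// Registering (or later swapping) the provider is fine: the wrapper
// resolves the default tracer at the moment lookup() actually runs.
register({ projectName: "swap-demo" });

await lookup("abc-123");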

Documentation

Community

Join our community to connect with thousands of AI builders.