OpenInference TanStack AI

This package provides OpenInference middleware for TanStack AI. It emits OpenTelemetry spans that conform to the OpenInference specification, so TanStack AI runs can be visualized in systems like Arize and Phoenix.

Installation

npm install --save @arizeai/openinference-tanstack-ai @tanstack/ai

You will also need an OpenTelemetry setup in your application. For example:

npm install --save @arizeai/phoenix-otel

or:

npm install --save @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-proto

Also install the provider adapter you plan to use with TanStack AI, for example:

npm install --save @tanstack/ai-openai

Usage

@arizeai/openinference-tanstack-ai exports openInferenceMiddleware, which plugs directly into TanStack AI's middleware option.

import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

import { openInferenceMiddleware } from "@arizeai/openinference-tanstack-ai";

const stream = chat({
  adapter: openaiText("gpt-4o-mini"),
  messages: [{ role: "user", content: "What is OpenInference?" }],
  middleware: [openInferenceMiddleware()],
});

The middleware works for both streaming and non-streaming TanStack AI calls.

const text = await chat({
  adapter: openaiText("gpt-4o-mini"),
  stream: false,
  systemPrompts: ["You are a concise technical explainer."],
  messages: [{ role: "user", content: "Explain OpenInference in one sentence." }],
  middleware: [openInferenceMiddleware()],
});

Tracer Setup

This package uses your application's existing OpenTelemetry tracer provider and exporters. It does not export spans by itself.

Note: your instrumentation code should run before the middleware is applied. This ensures that the tracer provider is properly configured before the middleware starts emitting spans.

The recommended quick start is to pair it with @arizeai/phoenix-otel.

import { register } from "@arizeai/phoenix-otel";

register({
  projectName: "my-tanstack-ai-app",
  endpoint: process.env["PHOENIX_COLLECTOR_ENDPOINT"] ?? "http://localhost:6006/v1/traces",
  apiKey: process.env["PHOENIX_API_KEY"],
});
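
One way to guarantee that ordering (an application-layout suggestion, not something this package requires) is to keep the register() call in a dedicated module and import it first in your entry point, since ESM evaluates imports in order:

// instrumentation.ts — configures the tracer provider as a side effect
import { register } from "@arizeai/phoenix-otel";

register({ projectName: "my-tanstack-ai-app" });

// main.ts — the side-effect import runs before the rest of the app
import "./instrumentation";
import { chat } from "@tanstack/ai";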

If you already have a standard OpenTelemetry setup, that works as well. For example, with a local Phoenix collector, a minimal manual setup looks like this:

import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { Resource } from "@opentelemetry/resources";
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";

const tracerProvider = new NodeTracerProvider({
  resource: new Resource({
    [SEMRESATTRS_PROJECT_NAME]: "my-tanstack-ai-app",
  }),
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: process.env["PHOENIX_COLLECTOR_ENDPOINT"] ?? "http://localhost:6006/v1/traces",
        headers:
          process.env["PHOENIX_API_KEY"] == null
            ? undefined
            : {
                Authorization: `Bearer ${process.env["PHOENIX_API_KEY"]}`,
              },
      }),
    ),
  ],
});

tracerProvider.register();
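
If the process is short-lived (a script or CLI rather than a server), flushing the provider before exit is standard OpenTelemetry practice, and becomes important if you swap the SimpleSpanProcessor above for a BatchSpanProcessor:

// Flush any buffered spans and release exporter resources before exit.
await tracerProvider.shutdown();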

Custom Tracer

By default, the middleware obtains a tracer for this package from the global tracer provider. If your application already has a request-scoped or custom tracer, pass it explicitly.

import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("tanstack-ai-request");

const middleware = openInferenceMiddleware({ tracer });

This is useful when you want the middleware to participate in a specific tracer setup without relying on the global default.
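
Putting it together, the configured middleware is passed to chat() exactly as in the Usage section:

import { trace } from "@opentelemetry/api";
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

import { openInferenceMiddleware } from "@arizeai/openinference-tanstack-ai";

// Spans from this middleware are created by the request-scoped tracer.
const tracer = trace.getTracer("tanstack-ai-request");

const stream = chat({
  adapter: openaiText("gpt-4o-mini"),
  messages: [{ role: "user", content: "What is OpenInference?" }],
  middleware: [openInferenceMiddleware({ tracer })],
});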

What Gets Traced

The middleware emits the following span structure for a TanStack AI run:

  • One AGENT span for the overall chat() invocation
  • One LLM span for each model turn
  • One TOOL span for each executed tool call

For a tool loop, the trace will typically look like:

  • AGENT
      • LLM 1
      • TOOL
      • LLM 2

The AGENT span captures the top-level request and final response. The LLM spans capture provider/model metadata, input messages, output messages, tool definitions, and token counts. The TOOL spans capture tool names, arguments, outputs, and errors.
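
To eyeball this span structure locally without running a collector, you can temporarily register a console exporter (a debugging sketch built from standard OpenTelemetry APIs, not a feature of this package):

import { ConsoleSpanExporter, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

// Print every finished span (AGENT, LLM, TOOL) to stdout for inspection.
const tracerProvider = new NodeTracerProvider({
  spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())],
});
tracerProvider.register();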

Examples

This package includes example files in examples/:

  • examples/chat-with-tools.ts - OpenAI example with one tool call
  • examples/anthropic-multi-tool.ts - Anthropic example with multiple tool calls
  • examples/non-streaming-chat.ts - Anthropic non-streaming example with a system prompt

See examples/README.md for setup and run commands.
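
A typical invocation looks like the following (assuming tsx is available and the relevant provider API key is set; the examples README is the authoritative reference):

OPENAI_API_KEY=sk-... npx tsx examples/chat-with-tools.ts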

Notes

  • This package is ESM-only because TanStack AI is ESM-only (see the CommonJS interop sketch below).
  • The middleware works in both server and client environments, but client/server trace stitching depends on your application's context propagation setup.
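
If your application is CommonJS, the standard Node.js interop for ESM-only dependencies (generic Node.js behavior, nothing specific to this package) is a dynamic import:

// CommonJS cannot require() an ESM-only package, but it can import() one.
async function loadMiddleware() {
  const { openInferenceMiddleware } = await import("@arizeai/openinference-tanstack-ai");
  return openInferenceMiddleware();
}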