
@struct-ai/sdk

v0.1.2

Struct agent observability SDK — auto-instruments AI agent frameworks with OpenTelemetry

@struct-ai/sdk

Struct agent observability SDK for TypeScript/Node.js. Auto-instruments AI agent frameworks — the Anthropic SDK and LangChain.js — and emits OpenTelemetry traces + logs to struct.ai with zero config.

This is the TypeScript port of struct-sdk (Python). Span names, attribute keys, and log event shapes are identical across the two SDKs so the server processes both uniformly.

Install

npm install @struct-ai/sdk
# optional — the SDK auto-instruments these if present
npm install @anthropic-ai/sdk @langchain/core @langchain/langgraph

Requires Node 18+.

Quickstart

Get an ingest key from app.struct.ai/settings?tab=ingest-keys, then:

import { struct } from "@struct-ai/sdk";
// Initialize once, as early as possible in your process
struct.init({
  ingestKey: process.env.STRUCT_INGEST_KEY!, // or pass the string directly
  serviceName: "my-agent",
  environment: "production",
});

// Use your agent code as normal — spans + log events are emitted automatically.
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();

await struct.agent({ name: "checkout" }, async () => {
  const msg = await client.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages: [{ role: "user", content: "plan my checkout flow" }],
  });

  // tool_call_id is auto-filled from the preceding Anthropic response
  await struct.tool({ name: "search" }, async () => {
    return await search(msg);
  });
});

What gets instrumented

| Library | Hook | Span type | Notes |
|---|---|---|---|
| @anthropic-ai/sdk | Messages.prototype.create, .stream | chat {model} | Cache-token accounting; streaming with tool-use reconstruction |
| @anthropic-ai/bedrock-sdk, @anthropic-ai/vertex-sdk | Messages.prototype.* | chat {model} | Best-effort, if installed |
| @langchain/core BaseChatModel | .invoke, .stream | chat {model} | Skipped when a provider-direct instrumentor is active (e.g. ChatAnthropic + Anthropic patch → single span) |
| @langchain/core StructuredTool | .invoke | execute_tool {name} | Extracts tool_call_id from LangChain ToolCall input or pending queue |
| @langchain/core BaseRetriever | .invoke | retrieval {name} | |
| @langchain/langgraph Pregel | .invoke, .stream | invoke_agent {name} | Covers createReactAgent and custom graphs; thread_id → gen_ai.conversation.id |

Framework integration

struct.init() takes the same options regardless of which framework you're instrumenting. Required: ingestKey (get one at app.struct.ai/settings?tab=ingest-keys). Recommended: serviceName, environment.

What you need to do beyond init() depends on whether you're using an agent framework (which has built-in concepts of agents and tools) or an LLM SDK directly (which only knows about chat completions). The SDK auto-instruments both, but only agent frameworks get full agent + tool spans for free — when you call an LLM SDK directly, you have to tell the SDK where the agent and tool boundaries are.

Call init() once, as early as possible, before the instrumented libraries are imported, so their prototypes are patched before any instance is constructed.
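One way to guarantee that ordering (a sketch; the file names here are illustrative, not an SDK convention) is to put init() in a module of its own and import it first:

```typescript
// instrumentation.ts — runs struct.init() as a side effect of being imported
import { struct } from "@struct-ai/sdk";

struct.init({
  ingestKey: process.env.STRUCT_INGEST_KEY!,
  serviceName: "my-agent",
});
```

```typescript
// index.ts — the instrumentation import must come before any instrumented library,
// so the library prototypes are already patched when instances are constructed
import "./instrumentation";
import Anthropic from "@anthropic-ai/sdk";
```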

Agent frameworks — fully auto-instrumented

For these, calling struct.init() is the only setup. Agent, tool, chat, and retrieval spans all emit automatically.

LangChain / LangGraph (with an agent or graph)

import { struct } from "@struct-ai/sdk";
struct.init({ ingestKey: "pk-...", serviceName: "my-graph" });

import { createReactAgent } from "@langchain/langgraph/prebuilt";
// Pregel invocations get invoke_agent spans. BaseChatModel calls get
// chat spans. StructuredTool.invoke gets execute_tool spans.
// BaseRetriever.invoke gets retrieval spans.

LLM SDKs used directly — manual agent + tool scopes required

When you call an LLM SDK directly (no agent framework wrapping it), only chat spans emit automatically. You need to wrap your agent loop in struct.agent() and each tool execution in struct.tool() so the SDK knows where to put the agent and tool boundaries — otherwise you'll see free-floating chat spans with no agent or tool context around them.

Anthropic SDK (raw)

import { struct } from "@struct-ai/sdk";
struct.init({ ingestKey: "pk-...", serviceName: "checkout-agent" });

import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();

// Required: wrap the agent loop yourself.
await struct.agent({ name: "checkout" }, async () => {
  const msg = await client.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages: [...],
  });

  // Required: wrap each tool execution.
  // tool_call_id is auto-filled from the preceding Anthropic response.
  await struct.tool({ name: "search" }, async () => {
    return await search(...);
  });
});

@anthropic-ai/sdk, @anthropic-ai/bedrock-sdk, and @anthropic-ai/vertex-sdk are all auto-instrumented for chat spans.

LangChain BaseChatModel (no agent/graph)

If you call ChatAnthropic.invoke(...) (or any other BaseChatModel) without wrapping it in AgentExecutor or a LangGraph graph, only the chat span emits automatically. Same rule as raw Anthropic — wrap your agent loop in struct.agent() and tool execution in struct.tool().

import { struct } from "@struct-ai/sdk";
struct.init({ ingestKey: "pk-...", serviceName: "my-agent" });

import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({ model: "claude-3-5-sonnet-20241022" });

await struct.agent({ name: "my-agent" }, async () => {
  const response = await llm.invoke([["user", "..."]]);
  await struct.tool({ name: "search" }, async () => {
    // ...
  });
});

When you do use ChatAnthropic and have @anthropic-ai/sdk installed, the chat span comes from the Anthropic patch (single span); the LangChain layer suppresses its duplicate.

Content capture

The SDK supports four capture modes controlling how prompt/response content is emitted.

import { struct, ContentCaptureMode } from "@struct-ai/sdk";
struct.init({
  ingestKey: ...,
  contentCapture: ContentCaptureMode.EventOnly, // default
  // or ContentCaptureMode.None, SpanOnly, SpanAndEvent
});
  • EventOnly (default): per-message content lands on OTel log records (gen_ai.{user,assistant,system,tool}.message, gen_ai.choice). Spans carry metadata only.
  • SpanOnly: content on span attributes (gen_ai.input.messages, gen_ai.output.messages).
  • SpanAndEvent: both.
  • None: no content captured. Token counts, tool call IDs, finish reasons, and other metadata still flow.

The legacy boolean API is still supported: captureContent: false is equivalent to ContentCaptureMode.None.
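The mode-to-channel mapping above can be summarized in a small self-contained sketch (contentChannels is a hypothetical helper for illustration, not part of the SDK):

```typescript
// Which channels carry prompt/response content under each capture mode.
type CaptureMode = "None" | "EventOnly" | "SpanOnly" | "SpanAndEvent";

function contentChannels(mode: CaptureMode): {
  spanAttributes: boolean; // gen_ai.input.messages / gen_ai.output.messages
  logEvents: boolean;      // gen_ai.*.message / gen_ai.choice log records
} {
  return {
    spanAttributes: mode === "SpanOnly" || mode === "SpanAndEvent",
    logEvents: mode === "EventOnly" || mode === "SpanAndEvent",
  };
}

// Default (EventOnly): content on log records only, spans carry metadata.
contentChannels("EventOnly"); // spanAttributes: false, logEvents: true
```

In every mode, token counts and other metadata still flow; only the content channels change.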

Manual scopes

struct.agent() and struct.tool() create invoke_agent and execute_tool spans. These are optional — LangChain's Pregel patch creates agent spans automatically when your graph has a thread_id in the config.

await struct.agent(
  { name: "onboarding", sessionId: conversationId, metadata: { tenant: "acme" } },
  async () => {
    await struct.tool({ name: "fetch-profile" }, async () => {
      return fetchProfile();
    });
  }
);

Nested agents set struct.agent.parent_session_id on the inner span, linking subagents back to the parent.

Semantic conventions

Emits attributes per the OTel GenAI semantic conventions:

  • gen_ai.operation.name: chat, execute_tool, invoke_agent, retrieval
  • gen_ai.provider.name: anthropic, openai, langchain, struct, …
  • gen_ai.request.{model, max_tokens, temperature, top_p, top_k, stop_sequences}
  • gen_ai.response.{model, id, finish_reasons}
  • gen_ai.usage.{input_tokens, output_tokens, cache_read.input_tokens, cache_creation.input_tokens}
  • gen_ai.conversation.id
  • gen_ai.tool.{name, call.id, call.arguments, call.result}
  • error.type + StatusCode.ERROR on failures

Note: for Anthropic, gen_ai.usage.input_tokens is the true total: the SDK adds back cache_read_input_tokens + cache_creation_input_tokens, which Anthropic's raw usage.input_tokens excludes. This matches the Python SDK.
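As a worked example of that accounting (trueInputTokens is a hypothetical helper mirroring the behavior described above, not an SDK export):

```typescript
// Anthropic's raw usage.input_tokens counts only uncached input; the SDK
// adds cache reads and cache creations back in before emitting the span.
interface AnthropicUsage {
  input_tokens: number;
  output_tokens: number;
  cache_read_input_tokens?: number;
  cache_creation_input_tokens?: number;
}

function trueInputTokens(usage: AnthropicUsage): number {
  return (
    usage.input_tokens +
    (usage.cache_read_input_tokens ?? 0) +
    (usage.cache_creation_input_tokens ?? 0)
  );
}

// A response that read 2000 cached tokens and sent 150 uncached ones:
trueInputTokens({
  input_tokens: 150,
  output_tokens: 80,
  cache_read_input_tokens: 2000,
}); // → 2150, the value emitted as gen_ai.usage.input_tokens
```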

Troubleshooting

  • Spans missing after instrumenting: Import @struct-ai/sdk (or struct.init()) before the instrumented libraries, so their class prototypes are patched before any instance is constructed. In most Node setups this is automatic, but some bundlers tree-shake aggressively.
  • No logs appearing: LogRecords only emit when sdk.emitEvents is true (EventOnly or SpanAndEvent capture mode, which is the default). If you set captureContent: false you disable them.
  • Duplicate chat spans: The LangChain integration suppresses its own chat span when a provider-direct patch is active (e.g. ChatAnthropic calls through to @anthropic-ai/sdk, which emits its own chat span). If you see doubles, confirm both integrations are auto-instrumenting (check struct.initialized).
  • Subagent in a different trace / missing from parent's "Subagents" list: If you invoke a nested agent (subagent.invoke(...)) from inside a tool body, define the outer tool with tool(func, { name, description, schema }) from @langchain/core/tools — not new DynamicTool({...}). The tool() factory wraps your function in AsyncLocalStorageProviderSingleton.runWithConfig(...), which is what lets the nested invoke inherit the tool's callback chain and share the parent's trace_id. DynamicTool skips that wrap, so the subagent starts a new trace and parent↔subagent linkage breaks. The struct.agent.parent_session_id attribute is still set, so "Spawned by" on the child side still renders — but the parent's forward link to the subagent won't appear.
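The mechanism behind that last point can be shown without LangChain at all. This is a self-contained sketch of the underlying pattern using Node's AsyncLocalStorage (which is what runWithConfig builds on); the function names are illustrative:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Ambient trace context, analogous to LangChain's callback-chain config.
const traceContext = new AsyncLocalStorage<{ traceId: string }>();

// Stand-in for subagent.invoke(...): it inherits whatever context is ambient,
// or starts a fresh trace when none is set.
function nestedInvoke(): string {
  return traceContext.getStore()?.traceId ?? "new-trace";
}

// tool()-factory style: establish context around the user function, so
// nested invocations share the parent's trace.
function toolFactoryStyle(fn: () => string): string {
  return traceContext.run({ traceId: "parent-trace" }, fn);
}

// DynamicTool style: call the function directly, no context wrap, so the
// nested invocation cannot see the parent trace.
function dynamicToolStyle(fn: () => string): string {
  return fn();
}

toolFactoryStyle(nestedInvoke); // → "parent-trace" (linkage preserved)
dynamicToolStyle(nestedInvoke); // → "new-trace"    (linkage broken)
```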

License

Apache-2.0