# @struct-ai/sdk
Struct agent observability SDK for TypeScript/Node.js. Auto-instruments AI agent frameworks — the Anthropic SDK and LangChain.js — and emits OpenTelemetry traces + logs to struct.ai with zero config.
This is the TypeScript port of struct-sdk
(Python). Span names, attribute keys, and log event shapes are identical across
the two SDKs so the server processes both uniformly.
## Install
```sh
npm install @struct-ai/sdk

# optional — the SDK auto-instruments these if present
npm install @anthropic-ai/sdk @langchain/core @langchain/langgraph
```

Requires Node 18+.
## Quickstart
Get an ingest key from app.struct.ai/settings?tab=ingest-keys, then:
```ts
import { struct } from "@struct-ai/sdk";

// Initialize once, as early as possible in your process
struct.init({
  ingestKey: process.env.STRUCT_INGEST_KEY!, // or pass the string directly
  serviceName: "my-agent",
  environment: "production",
});

// Use your agent code as normal — spans + log events are emitted automatically.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

await struct.agent({ name: "checkout" }, async () => {
  const msg = await client.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages: [{ role: "user", content: "plan my checkout flow" }],
  });

  // tool_call_id is auto-filled from the preceding Anthropic response
  await struct.tool({ name: "search" }, async () => {
    return await search(msg);
  });
});
```

## What gets instrumented
| Library | Hook | Span type | Notes |
|---|---|---|---|
| `@anthropic-ai/sdk` | `Messages.prototype.create`, `.stream` | `chat {model}` | Cache-token accounting, streaming with tool-use reconstruction |
| `@anthropic-ai/bedrock-sdk`, `@anthropic-ai/vertex-sdk` | `Messages.prototype.*` | `chat {model}` | Best-effort, if installed |
| `@langchain/core` `BaseChatModel` | `.invoke`, `.stream` | `chat {model}` | Skipped when a provider-direct instrumentor is active (e.g. `ChatAnthropic` + Anthropic patch → single span) |
| `@langchain/core` `StructuredTool` | `.invoke` | `execute_tool {name}` | Extracts `tool_call_id` from LangChain `ToolCall` input or pending queue |
| `@langchain/core` `BaseRetriever` | `.invoke` | `retrieval {name}` | |
| `@langchain/langgraph` `Pregel` | `.invoke`, `.stream` | `invoke_agent {name}` | Covers `createReactAgent` and custom graphs. `thread_id` → `gen_ai.conversation.id` (example below) |
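A minimal sketch of the `thread_id` mapping (the graph setup and IDs are illustrative):

```ts
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20241022" }),
  tools: [],
});

// The Pregel patch emits an invoke_agent span for this call and maps
// the thread_id below to gen_ai.conversation.id.
await agent.invoke(
  { messages: [["user", "plan my checkout flow"]] },
  { configurable: { thread_id: "conv-42" } }
);
```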
## Framework integration
`struct.init()` takes the same options regardless of which framework
you're instrumenting. Required: `ingestKey` (get one at
app.struct.ai/settings?tab=ingest-keys).
Recommended: `serviceName`, `environment`.
What you need to do beyond init() depends on whether you're using an
agent framework (which has built-in concepts of agents and tools) or
an LLM SDK directly (which only knows about chat completions). The
SDK auto-instruments both, but only agent frameworks get full agent +
tool spans for free — when you call an LLM SDK directly, you have to
tell the SDK where the agent and tool boundaries are.
Call `init()` once, as early as possible, before the instrumented
libraries are imported, so their prototypes are patched before any
instance is constructed.
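One way to guarantee that ordering is a dedicated instrumentation module imported first — a sketch, with illustrative file names:

```ts
// instrumentation.ts — runs struct.init() before anything else loads
import { struct } from "@struct-ai/sdk";

struct.init({
  ingestKey: process.env.STRUCT_INGEST_KEY!,
  serviceName: "my-agent",
  environment: "production",
});
```

```ts
// index.ts
import "./instrumentation"; // must stay the first import
import Anthropic from "@anthropic-ai/sdk"; // prototypes already patched
```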
### Agent frameworks — fully auto-instrumented

For these, calling `struct.init()` is the only setup. Agent, tool, chat,
and retrieval spans all emit automatically.
#### LangChain / LangGraph (with an agent or graph)
```ts
import { struct } from "@struct-ai/sdk";

struct.init({ ingestKey: "pk-...", serviceName: "my-graph" });

import { createReactAgent } from "@langchain/langgraph/prebuilt";

// Pregel invocations get invoke_agent spans. BaseChatModel calls get
// chat spans. StructuredTool.invoke gets execute_tool spans.
// BaseRetriever.invoke gets retrieval spans.
```

### LLM SDKs used directly — manual agent + tool scopes required
When you call an LLM SDK directly (no agent framework wrapping it), only
chat spans emit automatically. You need to wrap your agent loop in
`struct.agent()` and each tool execution in `struct.tool()` so the SDK
knows where to put the agent and tool boundaries — otherwise you'll see
free-floating chat spans with no agent or tool context around them.
#### Anthropic SDK (raw)
```ts
import { struct } from "@struct-ai/sdk";

struct.init({ ingestKey: "pk-...", serviceName: "checkout-agent" });

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Required: wrap the agent loop yourself.
await struct.agent({ name: "checkout" }, async () => {
  const msg = await client.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages: [...],
  });

  // Required: wrap each tool execution.
  // tool_call_id is auto-filled from the preceding Anthropic response.
  await struct.tool({ name: "search" }, async () => {
    return await search(...);
  });
});
```

`@anthropic-ai/sdk`, `@anthropic-ai/bedrock-sdk`, and `@anthropic-ai/vertex-sdk`
are all auto-instrumented for chat spans.
#### LangChain BaseChatModel (no agent/graph)
If you call `ChatAnthropic.invoke(...)` (or any other `BaseChatModel`)
without wrapping it in `AgentExecutor` or a LangGraph graph, only the chat
span emits automatically. Same rule as raw Anthropic — wrap your agent
loop in `struct.agent()` and tool execution in `struct.tool()`.
```ts
import { struct } from "@struct-ai/sdk";

struct.init({ ingestKey: "pk-...", serviceName: "my-agent" });

import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({ model: "claude-3-5-sonnet-20241022" });

await struct.agent({ name: "my-agent" }, async () => {
  const response = await llm.invoke([["user", "..."]]);

  await struct.tool({ name: "search" }, async () => {
    // ...
  });
});
```

When you do use `ChatAnthropic` and have `@anthropic-ai/sdk` installed,
the chat span comes from the Anthropic patch (single span); the LangChain
layer suppresses its duplicate.
## Content capture
The SDK supports four capture modes controlling how prompt/response content is emitted.
```ts
import { struct, ContentCaptureMode } from "@struct-ai/sdk";

struct.init({
  ingestKey: ...,
  contentCapture: ContentCaptureMode.EventOnly, // default
  // or ContentCaptureMode.None, SpanOnly, SpanAndEvent
});
```

- `EventOnly` (default): per-message content lands on OTel log records (`gen_ai.{user,assistant,system,tool}.message`, `gen_ai.choice`). Spans carry metadata only.
- `SpanOnly`: content on span attributes (`gen_ai.input.messages`, `gen_ai.output.messages`).
- `SpanAndEvent`: both.
- `None`: no content captured. Token counts, tool call IDs, finish reasons, and other metadata still flow.
Set `captureContent: false` for the legacy bool API (equivalent to
`ContentCaptureMode.None`).
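For example:

```ts
// Equivalent to contentCapture: ContentCaptureMode.None
struct.init({
  ingestKey: process.env.STRUCT_INGEST_KEY!,
  captureContent: false,
});
```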
## Manual scopes
`struct.agent()` and `struct.tool()` create `invoke_agent` and `execute_tool`
spans. These are optional — LangChain's `Pregel` patch creates agent spans
automatically when your graph has a `thread_id` in the config.
```ts
await struct.agent(
  { name: "onboarding", sessionId: conversationId, metadata: { tenant: "acme" } },
  async () => {
    await struct.tool({ name: "fetch-profile" }, async () => {
      return fetchProfile();
    });
  }
);
```

Nested agents set `struct.agent.parent_session_id` on the inner span, linking
subagents back to the parent.
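A sketch of nested scopes (names and IDs are illustrative):

```ts
await struct.agent({ name: "planner", sessionId: "sess-1" }, async () => {
  // The inner agent span carries struct.agent.parent_session_id,
  // linking this subagent back to the enclosing "planner" session.
  await struct.agent({ name: "researcher" }, async () => {
    // ... subagent work ...
  });
});
```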
## Semantic conventions
The SDK emits attributes per the OTel GenAI semantic conventions:
- `gen_ai.operation.name` — `chat`, `execute_tool`, `invoke_agent`, `retrieval`
- `gen_ai.provider.name` — `anthropic`, `openai`, `langchain`, `struct`, …
- `gen_ai.request.{model, max_tokens, temperature, top_p, top_k, stop_sequences}`
- `gen_ai.response.{model, id, finish_reasons}`
- `gen_ai.usage.{input_tokens, output_tokens, cache_read.input_tokens, cache_creation.input_tokens}`
- `gen_ai.conversation.id`
- `gen_ai.tool.{name, call.id, call.arguments, call.result}`
- `error.type` + `StatusCode.ERROR` on failures
Note: `gen_ai.usage.input_tokens` for Anthropic is the true total — we add back
`cache_read_input_tokens` + `cache_creation_input_tokens` (which Anthropic's raw
response excludes). Matches the Python SDK.
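Concretely, on a cache-hit turn (numbers are hypothetical):

```ts
// Raw Anthropic usage block:
const usage = {
  input_tokens: 12,
  cache_read_input_tokens: 2048,
  cache_creation_input_tokens: 0,
};

// What the SDK emits as gen_ai.usage.input_tokens — the true total:
const trueInputTokens =
  usage.input_tokens +
  usage.cache_read_input_tokens +
  usage.cache_creation_input_tokens; // 2060
```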
## Troubleshooting
- **Spans missing after instrumenting:** Import `@struct-ai/sdk` (or call `struct.init()`) before the instrumented libraries, so their class prototypes are patched before any instance is constructed. In most Node setups this is automatic, but some bundlers tree-shake aggressively.
- **No logs appearing:** Log records only emit when `sdk.emitEvents` is true, i.e. in `EventOnly` (the default) or `SpanAndEvent` capture mode. Setting `captureContent: false` disables them.
- **Duplicate chat spans:** The LangChain integration suppresses its own chat span when a provider-direct patch is active (e.g. `ChatAnthropic` calls through to `@anthropic-ai/sdk`, which emits its own `chat` span). If you see doubles, confirm both integrations are auto-instrumenting (check `struct.initialized`).
- **Subagent in a different trace / missing from parent's "Subagents" list:** If you invoke a nested agent (`subagent.invoke(...)`) from inside a tool body, define the outer tool with `tool(func, { name, description, schema })` from `@langchain/core/tools` — not `new DynamicTool({...})`. The `tool()` factory wraps your function in `AsyncLocalStorageProviderSingleton.runWithConfig(...)`, which is what lets the nested invoke inherit the tool's callback chain and share the parent's trace_id. `DynamicTool` skips that wrap, so the subagent starts a new trace and parent↔subagent linkage breaks. The `struct.agent.parent_session_id` attribute is still set, so "Spawned by" on the child side still renders — but the parent's forward link to the subagent won't appear. See the sketch after this list.
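A sketch of the working pattern (`subagent` stands in for any compiled LangGraph graph; the zod schema is illustrative):

```ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// tool() wraps the function in runWithConfig(...), so the nested invoke
// inherits this tool's callback chain and shares the parent's trace_id.
const delegate = tool(
  async ({ task }) => {
    const result = await subagent.invoke({ messages: [["user", task]] });
    return JSON.stringify(result.messages.at(-1)?.content);
  },
  {
    name: "delegate",
    description: "Hand a task off to the research subagent",
    schema: z.object({ task: z.string() }),
  }
);
```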
## License

Apache-2.0
