impact-ai v0.3.3: Impact AI Observability SDK for JS/TS (OpenTelemetry-native, GenAI semantic conventions, OTLP to Impact).
# Impact JS/TS SDK

OpenTelemetry-native LLM observability SDK for sending traces/logs to Impact.

Design goals:

- Align with `impact-sdk` (Python) runtime and schema semantics.
- Keep the public API small and predictable.
- Be safe in BYO-OTel environments (`auto|bootstrap|attach`).
- Never hard-fail customer apps due to optional instrumentation.
## Install

```bash
npm i impact-ai
```

LangChain instrumentation is opt-in (to avoid changing app dependency resolution for `@langchain/core`):

```bash
npm i @traceloop/instrumentation-langchain
```

## Quickstart

```ts
import impact from "impact-ai";

impact.init({
  apiKey: process.env.IMPACT_API_KEY,
  endpoint: process.env.IMPACT_BASE_URL, // optional when apiKey is impact_<region>_*
  serviceName: "my-app",
  mode: "auto",
  captureContent: true,
});

impact.context({
  userId: "user_123",
  interactionId: "interaction_456",
  versionId: "v1.0.0",
  attributes: { team: "growth" },
});

const run = impact.trace("checkout", async (orderId: string) => {
  return { ok: true, orderId };
});

await run("order_abc");
await impact.shutdown();
```

## Startup Model
Use one startup path:

```ts
import impact from "impact-ai";

impact.init({ ... });
```

Provider usage model:

- Import `impact-ai` and call `impact.init(...)` first.
- Import provider SDKs directly from vendor packages (`openai`, `@openai/agents`, `@google/genai`, etc.).
- Keep `impact.init(...)` at process startup so instrumentations attach before first provider usage.
## SDK Contract
Principles:
- Keep the public surface small and explicit.
- Match Python SDK semantics where possible.
- Emit canonical Impact attributes only.
- Never crash customer code due to optional instrumentation.
## Canonical Schema

Context attributes:

- `userId` -> `impact.context.user_id`
- `interactionId` -> `impact.context.interaction_id`
- `versionId` -> `impact.context.version_id`
- `attributes` -> `impact.context.<key>`

Manual span attributes:

- `impact.trace.type`
- `impact.trace.name`
- `impact.trace.path`
- `impact.trace.input`
- `impact.trace.output`
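The context-attribute mapping above can be sketched as a pure function. The helper name `toCanonicalAttributes` is hypothetical and only illustrates the key scheme; the SDK applies these keys internally.

```ts
// Illustrative sketch of the canonical context-attribute mapping.
// Field names and attribute keys come from the schema above; the
// helper itself is not part of the impact-ai public API.
interface ImpactContext {
  userId?: string;
  interactionId?: string;
  versionId?: string;
  attributes?: Record<string, string>;
}

function toCanonicalAttributes(ctx: ImpactContext): Record<string, string> {
  const out: Record<string, string> = {};
  if (ctx.userId) out["impact.context.user_id"] = ctx.userId;
  if (ctx.interactionId) out["impact.context.interaction_id"] = ctx.interactionId;
  if (ctx.versionId) out["impact.context.version_id"] = ctx.versionId;
  // Custom attributes are namespaced under impact.context.<key>.
  for (const [k, v] of Object.entries(ctx.attributes ?? {})) {
    out[`impact.context.${k}`] = v;
  }
  return out;
}
```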
## Public API

- `init(options?)`
- `context(ctx)`
- `trace(...)`
- `flush()`
- `shutdown()`
- `instrumentationResults` (getter)
- `vercelAITelemetry(options?)`

Notes:

- `context(ctx)` is the Python-aligned context entrypoint.
## Runtime Modes

- `auto` (default): attach to existing tracer provider when possible, otherwise bootstrap.
- `bootstrap`: always create/register Impact providers.
- `attach`: require an existing tracer provider and never replace it.
Endpoint resolution order:

1. `init({ endpoint })`
2. `IMPACT_BASE_URL`
3. Derive from an `impact_<region>_*` API key (`https://api.<region>.<domain>`)
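The resolution order can be sketched as a small helper. `resolveEndpoint` and its `defaultDomain` parameter are hypothetical stand-ins; the real SDK resolves the domain internally.

```ts
// Illustrative sketch of the endpoint resolution order above:
// explicit init option, then IMPACT_BASE_URL, then derivation from an
// impact_<region>_* API key. Not the SDK's actual implementation.
function resolveEndpoint(
  initEndpoint: string | undefined,
  envBaseUrl: string | undefined,
  apiKey: string | undefined,
  defaultDomain: string,
): string | undefined {
  if (initEndpoint) return initEndpoint;             // 1. init({ endpoint })
  if (envBaseUrl) return envBaseUrl;                 // 2. IMPACT_BASE_URL
  const m = apiKey?.match(/^impact_([a-z0-9-]+)_/);  // 3. impact_<region>_* key
  if (m) return `https://api.${m[1]}.${defaultDomain}`;
  return undefined;
}
```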
Runtime contract:

- `mode: auto|bootstrap|attach` controls provider setup.
- In `auto`, the SDK attaches if possible, otherwise bootstraps.
- In `attach`, the SDK never replaces user providers.
- Optional instrumentations are best-effort.
- In `bootstrap` mode, the SDK registers a default W3C tracecontext propagator.
- In `attach` mode, the SDK does not override caller propagators.
- Instrumentation outcomes are deterministic and available on `impact.instrumentationResults`.
- Diagnostics level can be set with `diagLogLevel` or `IMPACT_DIAG_LOG_LEVEL`.
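The mode rules above can be condensed into a small decision function. This is a hypothetical sketch of the contract, not the SDK's actual code.

```ts
// Illustrative decision logic for the three runtime modes:
// attach never replaces providers, bootstrap always registers Impact
// providers, and auto attaches when possible, otherwise bootstraps.
type Mode = "auto" | "bootstrap" | "attach";

function decideProviderSetup(
  mode: Mode,
  hasCompatibleGlobalProvider: boolean,
): "attach" | "bootstrap" {
  switch (mode) {
    case "attach":
      // attach requires an existing tracer provider and never replaces it.
      if (!hasCompatibleGlobalProvider) {
        throw new Error("attach mode requires an existing tracer provider");
      }
      return "attach";
    case "bootstrap":
      return "bootstrap"; // always create/register Impact providers
    case "auto":
      return hasCompatibleGlobalProvider ? "attach" : "bootstrap";
  }
}
```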
## Next.js and Vercel AI SDK

```ts
// instrumentation.ts
import impact from "impact-ai";

export async function register() {
  impact.init({
    apiKey: process.env.IMPACT_API_KEY,
    endpoint: process.env.IMPACT_BASE_URL,
  });
}
```

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import impact from "impact-ai";

await generateText({
  model: openai("gpt-4.1-mini"),
  prompt: "hello",
  experimental_telemetry: impact.vercelAITelemetry(),
});
```

Vercel AI SDK requirements:

- `impact.init(...)` must run before first AI SDK usage.
- Each AI SDK call that you want traced must set `experimental_telemetry: impact.vercelAITelemetry()`.
## Diagnostics

Set the diagnostics level via either:

- `init({ diagLogLevel: "warn" })`
- `IMPACT_DIAG_LOG_LEVEL=warn`

Supported levels:

- `none`
- `error`
- `warn`
- `info`
- `debug`
- `verbose`
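Level resolution could look like the hypothetical helper below. It assumes the explicit init option takes precedence over the environment variable and that unset means `none`; neither assumption is stated in the contract above.

```ts
// Illustrative diagnostics-level resolution: init option first, then
// IMPACT_DIAG_LOG_LEVEL, falling back to "none" (assumed default).
const DIAG_LEVELS = ["none", "error", "warn", "info", "debug", "verbose"] as const;
type DiagLevel = (typeof DIAG_LEVELS)[number];

function resolveDiagLevel(initLevel?: string, envLevel?: string): DiagLevel {
  for (const candidate of [initLevel, envLevel]) {
    if (candidate && (DIAG_LEVELS as readonly string[]).includes(candidate)) {
      return candidate as DiagLevel;
    }
  }
  return "none";
}
```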
What to check:

- Tracer attach/bootstrap decisions (`mode=auto|attach|bootstrap`).
- Instrumentation load outcomes (`impact.instrumentationResults`).
- Logger provider attach failures (non-fatal).
- Detached fallback warning when auto-attach fails against an incompatible global provider.
Recommended debug flow:

1. Start with `IMPACT_DIAG_LOG_LEVEL=info`.
2. Confirm `impact.init()` succeeds and inspect `impact.instrumentationResults`.
3. If spans are missing, increase to `debug` and validate provider attach messages.
4. For serverless, call `impact.flush()` before process exit.
## Coverage

Scope: Node.js runtime auto-instrumentation coverage.

Status legend:

- Covered: `impact.init()` can auto-enable instrumentation (best effort) when the customer SDK + instrumentation package are installed.
- Model calls only: model SDK spans can be captured, but no dedicated orchestration instrumentation is currently wired.
- Not supported: no stable JS/TS instrumentation path is integrated today.
Covered:
| Provider / Framework | Layer | Source | Package / Strategy | Status |
|---|---|---|---|---|
| OpenAI | Model | OpenLLMetry + OpenTelemetry fallback | @traceloop/instrumentation-openai -> @opentelemetry/instrumentation-openai | Covered |
| OpenAI Agents SDK (JS) | Agent framework | Impact custom bridge over OpenAI tracing API | addTraceProcessor -> OTel span bridge (retained across setTraceProcessors) | Covered |
| Anthropic | Model | OpenLLMetry | @traceloop/instrumentation-anthropic | Covered |
| Azure OpenAI | Model | OpenLLMetry | @traceloop/instrumentation-azure | Covered |
| Azure Foundry Agents / Azure AI Agents SDKs | Agent framework | OpenTelemetry (Azure official, disabled by default) | @azure/opentelemetry-instrumentation-azure-sdk | Covered |
| AWS Bedrock | Model | OpenLLMetry | @traceloop/instrumentation-bedrock | Covered |
| Google GenAI (@google/genai) | Model | Impact custom wrapper + fetch fallback | GoogleGenAI.models.generateContent(*) wrapping + fetch patch for :generateContent | Covered |
| Google Vertex AI | Model | OpenLLMetry | @traceloop/instrumentation-vertexai | Covered |
| Cohere | Model | OpenLLMetry | @traceloop/instrumentation-cohere | Covered |
| Together AI | Model | OpenLLMetry | @traceloop/instrumentation-together | Covered |
| LangChain | Agent framework | OpenLLMetry | @traceloop/instrumentation-langchain | Covered (opt-in) |
| LlamaIndex | Agent framework | OpenLLMetry | @traceloop/instrumentation-llamaindex | Covered |
| MCP | Tooling | OpenLLMetry | @traceloop/instrumentation-mcp | Covered |
| Pinecone | Vector DB | OpenLLMetry | @traceloop/instrumentation-pinecone | Covered |
| Qdrant | Vector DB | OpenLLMetry | @traceloop/instrumentation-qdrant | Covered |
| ChromaDB | Vector DB | OpenLLMetry | @traceloop/instrumentation-chromadb | Covered |
| OpenAI-compatible providers (xAI, Fireworks, Cerebras, SambaNova, OpenRouter, etc.) | Model | OpenAI SDK path | Captured via OpenAI instrumentation when using OpenAI-compatible SDK clients | Covered |
Model calls only:
| Provider / Framework | Why |
|---|---|
| Microsoft Agent Framework (JS) | No stable first-party JS OTel package/API for generic MAF traces; SDK uses best-effort enable hook when present. |
| Orchestration code without framework instrumentation | Model/tool calls are traceable, orchestration spans require manual impact.trace(...) wrappers. |
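When orchestration code has no framework instrumentation, manual `impact.trace(name, fn)` wrappers (as shown in the Quickstart) supply the missing spans. The sketch below only illustrates the wrapper shape with a hypothetical in-memory span store; it is not the SDK implementation.

```ts
// Illustrative trace-style wrapper: records the span name plus the
// wrapped function's input and output, mirroring the impact.trace shape.
interface RecordedSpan {
  name: string;
  input: unknown[];
  output: unknown;
}

const spans: RecordedSpan[] = []; // stand-in for real span export

function trace<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R> | R,
) {
  return async (...args: A): Promise<R> => {
    const output = await fn(...args);
    spans.push({ name, input: args, output });
    return output;
  };
}
```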
Not supported:
| Provider / Framework | Notes |
|---|---|
| CrewAI / Agno JS | No integrated JS instrumentation path in the current registry. |
Coverage notes:

- Coverage is best-effort. Missing packages, dependency conflicts, and constructor/registration failures are non-fatal and reported in instrumentation result codes.
- `impact-sdk-js` exposes instrumentation load results via `impact.instrumentationResults`.
- OpenAI spans rely on upstream OpenTelemetry/OpenLLMetry package behavior for supported SDK versions.
- `@traceloop/instrumentation-ai-sdk` is not currently available on npm and is not used. Vercel AI SDK tracing is handled by the built-in Impact Vercel span processor and `impact.vercelAITelemetry()`.
- For app-owned SDKs (for example `@google/genai`, `@openai/agents`), module resolution is attempted from both the SDK package path and the consumer app working directory.
- LangChain instrumentation is not bundled by default; install `@traceloop/instrumentation-langchain` in the app and enable `instrumentations.langchain: true`.
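Assuming the `instrumentations.langchain` flag described in the note above, the opt-in would look roughly like this (a sketch, not verified against the full init options type):

```ts
import impact from "impact-ai";

// Opt in to LangChain instrumentation after installing
// @traceloop/instrumentation-langchain in the app.
impact.init({
  apiKey: process.env.IMPACT_API_KEY,
  instrumentations: { langchain: true },
});
```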
Latest package snapshot (as researched on 2026-02-24):

- `@opentelemetry/instrumentation-openai`: 0.10.0
- `@azure/opentelemetry-instrumentation-azure-sdk`: 1.0.0-beta.9
- `@traceloop/instrumentation-openai`: 0.22.5
- `@traceloop/instrumentation-anthropic`: 0.22.6
- `@traceloop/instrumentation-azure`: 0.14.0
- `@traceloop/instrumentation-bedrock`: 0.22.6
- `@traceloop/instrumentation-vertexai`: 0.22.5
- `@traceloop/instrumentation-langchain`: 0.22.6
- `@traceloop/instrumentation-llamaindex`: 0.22.6
- `@traceloop/instrumentation-cohere`: 0.22.6
- `@traceloop/instrumentation-together`: 0.22.5
- `@traceloop/instrumentation-mcp`: 0.22.6
- `@traceloop/instrumentation-pinecone`: 0.22.5
- `@traceloop/instrumentation-qdrant`: 0.22.6
- `@traceloop/instrumentation-chromadb`: 0.22.5
- OpenAI Agents JS: no dedicated `@opentelemetry/*` package published; uses the OpenAI tracing processor API.
- Google GenAI JS: no dedicated `@opentelemetry/*` package published; uses the Impact wrapper plus fetch fallback instrumentation.
- Microsoft Agent Framework JS: `@microsoft/[email protected]` currently ships without a valid `index.js` entrypoint, so auto-activation remains unavailable.
## Validation

Core package:

```bash
npm run lint
npm run test:contracts
npm run test
npm run build
```

`test:contracts` is the fast semantic guard suite. It validates canonical attribute flow for:
- OpenAI Agents span mapping
- Google GenAI wrappers (method + fetch patch paths)
- Microsoft Agent Framework idempotent activation
- Foundry registration fallback paths
## Release Checklist

Pre-release:

- Run `npm run lint`.
- Run `npm run typecheck:tests`.
- Run `npm run test:contracts`.
- Run `npm run test`.
- Run `npm run build`.
- Verify package entrypoints: `dist/index.mjs` (ESM), `dist/index.cjs` (CJS).
- Validate this README.md is aligned with current runtime behavior.
End-to-end validation:

- In `../demos-js`, run `npm run typecheck`.
- In `../demos-js`, run `npm run matrix:required`.
- In `../demos-js`, run `npm run matrix`.
- Confirm required scenarios pass and optional scenario failures are documented.
Publish readiness:

- Confirm the `package.json` version.
- Confirm the `files` whitelist only includes intended artifacts.
- Run `npm pack` and inspect tarball contents.
- Tag the release commit and publish.
End-to-end matrix quick run:

```bash
cd ../demos-js
npm run typecheck
npm run matrix:required
npm run matrix
```

## License

Apache-2.0
