
impact-ai v0.3.3: Impact AI Observability SDK for JS/TS (OpenTelemetry-native, GenAI semantic conventions, OTLP to Impact).
Impact JS/TS SDK

OpenTelemetry-native LLM observability SDK for sending traces/logs to Impact.

Design goals:

  1. Align with impact-sdk (Python) runtime and schema semantics.
  2. Keep the public API small and predictable.
  3. Be safe in BYO-OTel environments (auto|bootstrap|attach).
  4. Never hard-fail customer apps due to optional instrumentation.

Install

npm i impact-ai

LangChain instrumentation is opt-in (to avoid changing app dependency resolution for @langchain/core):

npm i @traceloop/instrumentation-langchain

Quickstart

import impact from "impact-ai";

impact.init({
  apiKey: process.env.IMPACT_API_KEY,
  endpoint: process.env.IMPACT_BASE_URL, // optional when apiKey is impact_<region>_*
  serviceName: "my-app",
  mode: "auto",
  captureContent: true,
});

impact.context({
  userId: "user_123",
  interactionId: "interaction_456",
  versionId: "v1.0.0",
  attributes: { team: "growth" },
});

const run = impact.trace("checkout", async (orderId: string) => {
  return { ok: true, orderId };
});

await run("order_abc");
await impact.shutdown();

Startup Model

Use one startup path:

  1. import impact from "impact-ai"
  2. impact.init({...})

Provider usage model:

  1. Import impact-ai and call impact.init(...) first.
  2. Import provider SDKs directly from vendor packages (openai, @openai/agents, @google/genai, etc.).
  3. Keep impact.init(...) at process startup so instrumentations attach before first provider usage.

SDK Contract

Principles:

  1. Keep the public surface small and explicit.
  2. Match Python SDK semantics where possible.
  3. Emit canonical Impact attributes only.
  4. Never crash customer code due to optional instrumentation.

Canonical Schema

Context attributes:

  1. userId -> impact.context.user_id
  2. interactionId -> impact.context.interaction_id
  3. versionId -> impact.context.version_id
  4. attributes -> impact.context.<key>

Manual span attributes:

  1. impact.trace.type
  2. impact.trace.name
  3. impact.trace.path
  4. impact.trace.input
  5. impact.trace.output
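As a sketch, the context-to-attribute mapping above can be modeled as a pure function. Note that `toContextAttributes` and the `ImpactContext` interface below are illustrative names, not the SDK's internal API; only the attribute keys come from the schema above.

```typescript
// Illustrative model of the documented context -> attribute mapping;
// not the SDK's actual internals.
interface ImpactContext {
  userId?: string;
  interactionId?: string;
  versionId?: string;
  attributes?: Record<string, string>;
}

function toContextAttributes(ctx: ImpactContext): Record<string, string> {
  const out: Record<string, string> = {};
  if (ctx.userId !== undefined) out["impact.context.user_id"] = ctx.userId;
  if (ctx.interactionId !== undefined) out["impact.context.interaction_id"] = ctx.interactionId;
  if (ctx.versionId !== undefined) out["impact.context.version_id"] = ctx.versionId;
  // Free-form attributes land under impact.context.<key>.
  for (const [key, value] of Object.entries(ctx.attributes ?? {})) {
    out[`impact.context.${key}`] = value;
  }
  return out;
}
```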

Public API

  1. init(options?)
  2. context(ctx)
  3. trace(...)
  4. flush()
  5. shutdown()
  6. instrumentationResults (getter)
  7. vercelAITelemetry(options?)

Notes:

  1. context(ctx) is the Python-aligned context entrypoint.

Runtime Modes

  1. auto (default): attach to existing tracer provider when possible, otherwise bootstrap.
  2. bootstrap: always create/register Impact providers.
  3. attach: require an existing tracer provider and never replace it.

Endpoint resolution order:

  1. init({ endpoint })
  2. IMPACT_BASE_URL
  3. derive from impact_<region>_* API key (https://api.<region>.<domain>)
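A minimal sketch of that resolution order follows. `resolveEndpoint` is illustrative, not the SDK's internal function, and the `domain` parameter stands in for the deployment-specific `<domain>` in `https://api.<region>.<domain>`.

```typescript
// Illustrative sketch of the documented endpoint resolution order.
function resolveEndpoint(
  opts: { endpoint?: string; apiKey?: string },
  env: Record<string, string | undefined>,
  domain: string,
): string | undefined {
  // 1. An explicit init({ endpoint }) wins.
  if (opts.endpoint) return opts.endpoint;
  // 2. Then the IMPACT_BASE_URL environment variable.
  if (env.IMPACT_BASE_URL) return env.IMPACT_BASE_URL;
  // 3. Finally, derive the region from an impact_<region>_* API key.
  const match = opts.apiKey?.match(/^impact_([a-z0-9-]+)_/);
  if (match) return `https://api.${match[1]}.${domain}`;
  return undefined;
}
```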

Runtime contract:

  1. mode: auto|bootstrap|attach controls provider setup.
  2. In auto, SDK attaches if possible, otherwise bootstraps.
  3. In attach, SDK never replaces user providers.
  4. Optional instrumentations are best-effort.
  5. In bootstrap mode, SDK registers a default W3C tracecontext propagator.
  6. In attach mode, SDK does not override caller propagators.
  7. Instrumentation outcomes are deterministic and available on impact.instrumentationResults.
  8. Diagnostics level can be set with diagLogLevel or IMPACT_DIAG_LOG_LEVEL.

Next.js and Vercel AI SDK

// instrumentation.ts
import impact from "impact-ai";

export async function register() {
  impact.init({
    apiKey: process.env.IMPACT_API_KEY,
    endpoint: process.env.IMPACT_BASE_URL,
  });
}

// Per-call usage with the Vercel AI SDK:
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import impact from "impact-ai";

await generateText({
  model: openai("gpt-4.1-mini"),
  prompt: "hello",
  experimental_telemetry: impact.vercelAITelemetry(),
});

Vercel AI SDK requirement:

  1. impact.init(...) must run before first AI SDK usage.
  2. Each AI SDK call that you want traced must set experimental_telemetry: impact.vercelAITelemetry().

Diagnostics

Set diagnostics level via either:

  1. init({ diagLogLevel: "warn" })
  2. IMPACT_DIAG_LOG_LEVEL=warn

Supported levels:

  1. none
  2. error
  3. warn
  4. info
  5. debug
  6. verbose
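The list above is ordered from least to most verbose. As an illustrative model only (`shouldLog` is not an SDK function), filtering works like this:

```typescript
// Ordering follows the documented level list: "none" suppresses all
// diagnostics; each later level includes everything before it.
const DIAG_LEVELS = ["none", "error", "warn", "info", "debug", "verbose"] as const;
type DiagLevel = (typeof DIAG_LEVELS)[number];

function shouldLog(configured: DiagLevel, msgLevel: Exclude<DiagLevel, "none">): boolean {
  // Emit a message only when the configured level is at least as verbose.
  return DIAG_LEVELS.indexOf(configured) >= DIAG_LEVELS.indexOf(msgLevel);
}
```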

What to check:

  1. Tracer attach/bootstrap decisions (mode=auto|attach|bootstrap).
  2. Instrumentation load outcomes (impact.instrumentationResults).
  3. Logger provider attach failures (non-fatal).
  4. Detached fallback warning when auto-attach fails against an incompatible global provider.

Recommended debug flow:

  1. Start with IMPACT_DIAG_LOG_LEVEL=info.
  2. Confirm impact.init() succeeds and inspect impact.instrumentationResults.
  3. If spans are missing, increase to debug and validate provider attach messages.
  4. For serverless, call impact.flush() before process exit.

Coverage

Scope: Node.js runtime auto-instrumentation coverage.

Status legend:

  1. Covered: impact.init() can auto-enable instrumentation (best effort) when customer SDK + instrumentation package are installed.
  2. Model calls only: model SDK spans can be captured, but no dedicated orchestration instrumentation is currently wired.
  3. Not supported: no stable JS/TS instrumentation path is integrated today.

Covered:

| Provider / Framework | Layer | Source | Package / Strategy | Status |
|---|---|---|---|---|
| OpenAI | Model | OpenLLMetry + OpenTelemetry fallback | @traceloop/instrumentation-openai -> @opentelemetry/instrumentation-openai | Covered |
| OpenAI Agents SDK (JS) | Agent framework | Impact custom bridge over OpenAI tracing API | addTraceProcessor -> OTel span bridge (retained across setTraceProcessors) | Covered |
| Anthropic | Model | OpenLLMetry | @traceloop/instrumentation-anthropic | Covered |
| Azure OpenAI | Model | OpenLLMetry | @traceloop/instrumentation-azure | Covered |
| Azure Foundry Agents / Azure AI Agents SDKs | Agent framework | OpenTelemetry (Azure official, disabled by default) | @azure/opentelemetry-instrumentation-azure-sdk | Covered |
| AWS Bedrock | Model | OpenLLMetry | @traceloop/instrumentation-bedrock | Covered |
| Google GenAI (@google/genai) | Model | Impact custom wrapper + fetch fallback | GoogleGenAI.models.generateContent(*) wrapping + fetch patch for :generateContent | Covered |
| Google Vertex AI | Model | OpenLLMetry | @traceloop/instrumentation-vertexai | Covered |
| Cohere | Model | OpenLLMetry | @traceloop/instrumentation-cohere | Covered |
| Together AI | Model | OpenLLMetry | @traceloop/instrumentation-together | Covered |
| LangChain | Agent framework | OpenLLMetry | @traceloop/instrumentation-langchain | Covered (opt-in) |
| LlamaIndex | Agent framework | OpenLLMetry | @traceloop/instrumentation-llamaindex | Covered |
| MCP | Tooling | OpenLLMetry | @traceloop/instrumentation-mcp | Covered |
| Pinecone | Vector DB | OpenLLMetry | @traceloop/instrumentation-pinecone | Covered |
| Qdrant | Vector DB | OpenLLMetry | @traceloop/instrumentation-qdrant | Covered |
| ChromaDB | Vector DB | OpenLLMetry | @traceloop/instrumentation-chromadb | Covered |
| OpenAI-compatible providers (xAI, Fireworks, Cerebras, SambaNova, OpenRouter, etc.) | Model | OpenAI SDK path | Captured via OpenAI instrumentation when using OpenAI-compatible SDK clients | Covered |

Model calls only:

| Provider / Framework | Why |
|---|---|
| Microsoft Agent Framework (JS) | No stable first-party JS OTel package/API for generic MAF traces; SDK uses a best-effort enable hook when present. |
| Orchestration code without framework instrumentation | Model/tool calls are traceable; orchestration spans require manual impact.trace(...) wrappers. |

Not supported:

| Provider / Framework | Notes |
|---|---|
| CrewAI / Agno JS | No integrated JS instrumentation path in the current registry. |

Coverage notes:

  1. Coverage is best-effort. Missing packages, dependency conflicts, and constructor/registration failures are non-fatal and reported in instrumentation result codes.
  2. The SDK exposes instrumentation load results via impact.instrumentationResults.
  3. OpenAI spans rely on upstream OpenTelemetry/OpenLLMetry package behavior for supported SDK versions.
  4. @traceloop/instrumentation-ai-sdk is not currently available in npm and is not used. Vercel AI SDK tracing is handled by the built-in Impact Vercel span processor and impact.vercelAITelemetry().
  5. For app-owned SDKs (for example @google/genai, @openai/agents) module resolution is attempted from both SDK package path and consumer app working directory.
  6. LangChain instrumentation is not bundled by default; install @traceloop/instrumentation-langchain in the app and enable instrumentations.langchain: true.

Latest package snapshot (as researched on 2026-02-24):

  1. @opentelemetry/instrumentation-openai: 0.10.0
  2. @azure/opentelemetry-instrumentation-azure-sdk: 1.0.0-beta.9
  3. @traceloop/instrumentation-openai: 0.22.5
  4. @traceloop/instrumentation-anthropic: 0.22.6
  5. @traceloop/instrumentation-azure: 0.14.0
  6. @traceloop/instrumentation-bedrock: 0.22.6
  7. @traceloop/instrumentation-vertexai: 0.22.5
  8. @traceloop/instrumentation-langchain: 0.22.6
  9. @traceloop/instrumentation-llamaindex: 0.22.6
  10. @traceloop/instrumentation-cohere: 0.22.6
  11. @traceloop/instrumentation-together: 0.22.5
  12. @traceloop/instrumentation-mcp: 0.22.6
  13. @traceloop/instrumentation-pinecone: 0.22.5
  14. @traceloop/instrumentation-qdrant: 0.22.6
  15. @traceloop/instrumentation-chromadb: 0.22.5
  16. OpenAI Agents JS: no dedicated @opentelemetry/* package published; uses OpenAI tracing processor API.
  17. Google GenAI JS: no dedicated @opentelemetry/* package published; uses Impact wrapper plus fetch fallback instrumentation.
  18. Microsoft Agent Framework JS: @microsoft/[email protected] currently ships without a valid index.js entrypoint, so auto-activation remains unavailable.

Validation

Core package:

npm run lint
npm run test:contracts
npm run test
npm run build

test:contracts is the fast semantic guard suite. It validates canonical attribute flow for:

  1. OpenAI Agents span mapping
  2. Google GenAI wrappers (method + fetch patch paths)
  3. Microsoft Agent Framework idempotent activation
  4. Foundry registration fallback paths

Release Checklist

Pre-release:

  1. Run npm run lint.
  2. Run npm run typecheck:tests.
  3. Run npm run test:contracts.
  4. Run npm run test.
  5. Run npm run build.
  6. Verify package entrypoints: dist/index.mjs (ESM) and dist/index.cjs (CJS).
  7. Validate that this README is aligned with current runtime behavior.
End-to-end validation:

  1. In ../demos-js, run npm run typecheck.
  2. In ../demos-js, run npm run matrix:required.
  3. In ../demos-js, run npm run matrix.
  4. Confirm required scenarios pass and optional scenario failures are documented.

Publish readiness:

  1. Confirm package.json version.
  2. Confirm files whitelist only includes intended artifacts.
  3. Run npm pack and inspect tarball contents.
  4. Tag release commit and publish.

End-to-end matrix quick run:

cd ../demos-js
npm run typecheck
npm run matrix:required
npm run matrix

License

Apache-2.0