
agentid-sdk

v0.1.26

AgentID JavaScript/TypeScript SDK for guard, ingest, tracing, and analytics.

agentid-sdk (Node.js / TypeScript)

npm version · Node >= 18 · MIT License

1. Introduction

agentid-sdk is the official Node.js/TypeScript SDK for AgentID, an AI security and compliance System of Record. It allows you to gate LLM traffic through guard checks, enforce policy before execution, and capture durable telemetry for audit and governance workflows.

The Mental Model

AgentID sits between your application and the LLM runtime:

User Input -> guard() -> [AgentID Policy] -> verdict
                              | allowed
                              v
                         LLM Provider
                              |
                              v
                           log() -> [Immutable Ledger]

  • guard(): evaluates prompt and context before model execution.
  • Model call: executes only if guard verdict is allowed.
  • log(): persists immutable telemetry (prompt, output, latency) for audit and compliance.

2. Installation

npm install agentid-sdk

3. Prerequisites

  1. Create an account at https://app.getagentid.com.
  2. Create an AI system and copy:
    • AGENTID_API_KEY (for example sk_live_...)
    • AGENTID_SYSTEM_ID (UUID)
  3. If using OpenAI/LangChain, set:
    • OPENAI_API_KEY
export AGENTID_API_KEY="sk_live_..."
export AGENTID_SYSTEM_ID="00000000-0000-0000-0000-000000000000"
export OPENAI_API_KEY="sk-proj-..."

Compatibility

  • Node.js: v18+ / Python: 3.9+ (cross-SDK matrix)
  • Thread Safety: AgentID clients are thread-safe and intended to be instantiated once and reused across concurrent requests.
  • Latency: async log() is non-blocking for model execution paths; sync guard() typically adds network latency (commonly ~50-100ms, environment-dependent).

4. Quickstart

import { AgentID } from "agentid-sdk";

const agent = new AgentID(); // auto-loads AGENTID_API_KEY
const systemId = process.env.AGENTID_SYSTEM_ID!;

const verdict = await agent.guard({
  system_id: systemId,
  input: "Summarize this ticket in one sentence.",
  model: "gpt-4o-mini",
  user_id: "quickstart-user",
});
if (!verdict.allowed) throw new Error(`Blocked: ${verdict.reason}`);

// Call your LLM provider here, then log the result:
await agent.log({
  system_id: systemId,
  event_id: verdict.client_event_id,
  model: "gpt-4o-mini",
  input: "Summarize this ticket in one sentence.",
  output: "Summary generated.",
  metadata: { agent_role: "support-assistant" },
});

5. Core Integrations

OpenAI Wrapper

npm install agentid-sdk openai

import OpenAI from "openai";
import { AgentID } from "agentid-sdk";

const agent = new AgentID({
  piiMasking: true,
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const secured = agent.wrapOpenAI(openai, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  user_id: "customer-123",
  expected_languages: ["en"],
});

const response = await secured.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What is the capital of the Czech Republic?" }],
});

console.log(response.choices[0]?.message?.content ?? "");

Scope note: AgentID compliance/risk controls apply to the specific SDK-wrapped LLM calls (guard(), wrapOpenAI(), LangChain callback-wrapped flows). They do not automatically classify unrelated code paths in your whole monolithic application.

LangChain Integration

npm install agentid-sdk openai @langchain/core @langchain/openai

import { AgentID } from "agentid-sdk";
import { AgentIDCallbackHandler } from "agentid-sdk/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const agent = new AgentID();
const handler = new AgentIDCallbackHandler(agent, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  expected_languages: ["en"],
});

const prompt = ChatPromptTemplate.fromTemplate("Answer in one sentence: {question}");
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke(
  { question: "What is the capital of the Czech Republic?" },
  { callbacks: [handler] }
);
console.log(result);

Raw Ingest API (Telemetry Only)

import { AgentID } from "agentid-sdk";

const agent = new AgentID();

await agent.log({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  event_type: "complete",
  severity: "info",
  model: "gpt-4o-mini",
  input: "Raw telemetry prompt",
  output: '{"ok": true}',
  metadata: { agent_role: "batch-worker", channel: "manual_ingest" },
});

Transparency Badge (Article 50 UI Evidence)

When rendering disclosure UI, log proof-of-render telemetry so you can demonstrate that the end user actually saw the badge.

import { AgentIDTransparencyBadge } from "agentid-sdk";

<AgentIDTransparencyBadge
  telemetry={{
    systemId: process.env.NEXT_PUBLIC_AGENTID_SYSTEM_ID!,
    // Prefer a backend relay endpoint so no secret key is exposed in browser code.
    ingestUrl: "/api/agentid/transparency-render",
    headers: { "x-agentid-system-id": process.env.NEXT_PUBLIC_AGENTID_SYSTEM_ID! },
    userId: "customer-123",
  }}
  placement="chat-header"
/>;

On mount, the component asynchronously emits event_type: "transparency_badge_rendered" to the AgentID ingest endpoint.
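Because the badge posts to a same-origin ingestUrl, you need a server-side relay that attaches the secret API key before forwarding to AgentID. The sketch below shows one way to build that forward request; the relay helper, INGEST_URL constant, and event shape are illustrative assumptions, not part of the SDK.

```typescript
// Hypothetical server-side relay helper for the badge's proof-of-render
// event. The browser never sees the API key; only this server-side code
// attaches it. INGEST_URL and the event shape are assumptions for this sketch.
const INGEST_URL = "https://app.getagentid.com/api/v1/ingest";

interface BadgeRenderEvent {
  system_id: string;
  event_type: "transparency_badge_rendered";
  user_id?: string;
  metadata?: Record<string, string>;
}

function buildRelayRequest(event: BadgeRenderEvent, apiKey: string) {
  return {
    url: INGEST_URL,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // secret stays server-side
      },
      body: JSON.stringify(event),
    },
  };
}
```

A route handler (for example, a Next.js API route at /api/agentid/transparency-render) would pass the parsed request body and the server-side AGENTID_API_KEY to this helper and then call fetch with the result.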

6. Advanced Configuration

Custom identity / role metadata

await agent.guard({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  input: "Process user request",
  user_id: "service:billing-agent",
  model: "gpt-4o-mini",
});

await agent.log({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  model: "gpt-4o-mini",
  input: "Process user request",
  output: "Done",
  metadata: { agent_role: "billing-agent", environment: "prod" },
});

Strict mode and timeout tuning

const agent = new AgentID({
  strictMode: true,      // fail-closed on guard connectivity/timeouts
  guardTimeoutMs: 10000, // default guard timeout is 10000ms
  ingestTimeoutMs: 10000 // default ingest timeout is 10000ms
});

Optional client-side fast fail

const agent = new AgentID({
  failureMode: "fail_close",
  clientFastFail: true, // opt-in local preflight before /guard
});

Error Handling & Strict Mode

By default, AgentID is designed to keep your application running if the AgentID API times out or is temporarily unreachable.

| Mode | Connectivity Failure | LLM Execution | Best For |
| :--- | :--- | :--- | :--- |
| Default (Strict Off) | API Timeout / Unreachable | Fail-Open (continues) | Standard SaaS, chatbots |
| Strict Mode (strictMode: true) | API Timeout / Unreachable | Direct guard() denies; wrapped flows can apply local fallback first | Healthcare, FinTech, high-risk |

  • guard() returns a verdict (allowed, reason); handle deny paths explicitly.
  • wrapOpenAI() and LangChain handlers throw SecurityBlockError when a prompt is blocked.
  • Backend /guard is the default authority for prompt injection, DB access, code execution, and PII leakage in SDK-wrapped flows.
  • clientFastFail / client_fast_fail is optional and disabled by default. Enable it only when you explicitly want local preflight before the backend call.
  • If backend guard is unreachable and the effective failure mode is fail_close, wrapped OpenAI/LangChain flows can run local fallback enforcement. Local hits still block; otherwise the request can continue with fallback telemetry attached.
  • If strictMode is not explicitly set in SDK code, runtime behavior follows the system configuration from AgentID (strict_security_mode / failure_mode).
  • Ingest retries transient failures (5xx/429) and logs warnings if persistence fails.
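The first bullet above says deny paths from a direct guard() call must be handled explicitly. A minimal sketch of that deny-path handling, using stand-in types matching the verdict shape shown in the Quickstart ({ allowed, reason }), not the SDK's own types:

```typescript
// Stand-in for the verdict shape returned by guard() (see Quickstart);
// the SDK's real type may carry additional fields.
interface GuardVerdict {
  allowed: boolean;
  reason?: string;
}

// Fail closed in the caller: never run the model on a denied verdict.
function assertAllowed(verdict: GuardVerdict): void {
  if (!verdict.allowed) {
    throw new Error(
      `Blocked by AgentID policy: ${verdict.reason ?? "unspecified"}`
    );
  }
}
```

Wrapped flows (wrapOpenAI(), LangChain handlers) do not need this check; they throw SecurityBlockError on a blocked prompt, which you catch around the model call instead.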

Event Identity Model

For consistent lifecycle correlation in Activity/Prompts, use this model:

  • client_event_id: external correlation ID for one end-to-end action.
  • guard_event_id: ID of the preflight guard event returned by guard().
  • event_id on log(): idempotency key for ingest. In the JS SDK it is canonicalized to client_event_id for stable one-row lifecycle updates.

SDK behavior:

  • guard() sends client_event_id and returns canonical client_event_id + guard_event_id.
  • log() sends:
    • event_id = canonical client_event_id
    • metadata.client_event_id
    • metadata.guard_event_id (when available from wrappers/callbacks)
    • x-correlation-id = client_event_id
  • after a successful primary ingest, SDK wrappers can call /ingest/finalize with the same client_event_id to attach sdk_ingest_ms.
  • SDK requests include x-agentid-sdk-version for telemetry/version diagnostics.

This keeps Guard + Complete linked under one correlation key while preserving internal event linkage in the dashboard.
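The canonicalization described above can be sketched as a pure helper: log() takes the canonical client_event_id as its idempotency key and carries the guard linkage in metadata and headers. The types and helper name here are illustrative, not SDK exports.

```typescript
// Stand-in for the identifiers guard() returns (field names from this README).
interface GuardIdentity {
  client_event_id: string;
  guard_event_id: string;
}

// Builds the identity fields log() sends, per the Event Identity Model:
// event_id is canonicalized to client_event_id for one-row lifecycle updates,
// and guard linkage travels in metadata plus the correlation header.
function buildLogIdentity(g: GuardIdentity) {
  return {
    event_id: g.client_event_id, // idempotency key = canonical client_event_id
    metadata: {
      client_event_id: g.client_event_id,
      guard_event_id: g.guard_event_id,
    },
    headers: { "x-correlation-id": g.client_event_id },
  };
}
```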

SDK Timing Telemetry

SDK-managed metadata can include:

  • sdk_config_fetch_ms: capability/config fetch time before dispatch.
  • sdk_local_scan_ms: optional local enforcement time (clientFastFail or fail-close fallback path).
  • sdk_guard_ms: backend /guard round-trip time observed by the SDK wrapper.
  • sdk_ingest_ms: post-ingest transport timing finalized by the SDK through /ingest/finalize after a successful primary /ingest.
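When reading these fields back from event metadata, any of them may be absent depending on the code path (for example, sdk_local_scan_ms only appears on the clientFastFail or fail-close fallback path). A small illustrative reader, assuming the field names above; the interface and function are not SDK types:

```typescript
// Illustrative shape for SDK-managed timing metadata; all fields optional
// because each only appears on the code path that produced it.
interface SdkTimings {
  sdk_config_fetch_ms?: number;
  sdk_local_scan_ms?: number;
  sdk_guard_ms?: number;
  sdk_ingest_ms?: number;
}

// Sums whichever timing fields are present, treating missing ones as zero.
function totalSdkOverheadMs(t: SdkTimings): number {
  return (
    (t.sdk_config_fetch_ms ?? 0) +
    (t.sdk_local_scan_ms ?? 0) +
    (t.sdk_guard_ms ?? 0) +
    (t.sdk_ingest_ms ?? 0)
  );
}
```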

Policy-Pack Runtime Telemetry

When the backend uses compiled policy packs, runtime metadata includes:

  • policy_pack_version: active compiled artifact version.
  • policy_pack_fallback: true means fallback detector path was used.
  • policy_pack_details: optional diagnostic detail for fallback/decision trace.

Latency interpretation:

  • Activity Latency (ms) maps to synchronous processing (processing_time_ms).
  • Async AI audit time is separate (ai_audit_duration_ms) and can be higher.
  • First request after warm-up boundaries can be slower than steady-state requests.

Monorepo QA Commands (Maintainers)

If you are validating runtime in the AgentID monorepo:

npm run qa:policy-pack-bootstrap -- --base-url=http://127.0.0.1:3000/api/v1 --system-id=<SYSTEM_UUID>
npm run bench:policy-pack-hotpath

PowerShell diagnostics:

powershell -ExecutionPolicy Bypass -File .\scripts\qa\run-guard-diagnostic.ps1 -BaseUrl http://127.0.0.1:3000/api/v1 -ApiKey $env:AGENTID_API_KEY -SystemId $env:AGENTID_SYSTEM_ID -SkipBenchmark
powershell -ExecutionPolicy Bypass -File .\scripts\qa\run-ai-label-audit-check.ps1 -BaseUrl http://127.0.0.1:3000/api/v1 -ApiKey $env:AGENTID_API_KEY -SystemId $env:AGENTID_SYSTEM_ID -Model gpt-4o-mini

7. Security & Compliance

  • Backend /guard remains the primary enforcement authority by default.
  • Optional local PII masking and opt-in clientFastFail are available for edge cases.
  • Guard checks run pre-execution; ingest + finalize telemetry captures prompt/output lifecycle and SDK timing breakdowns.
  • Safe for server and serverless runtimes (including async completion flows).
  • Supports compliance and forensics workflows with durable event records.

8. Support

  • Dashboard: https://app.getagentid.com
  • Repository: https://github.com/ondrejsukac-rgb/agentid/tree/main/js-sdk
  • Issues: https://github.com/ondrejsukac-rgb/agentid/issues

9. Publishing Notes (NPM)

npm includes README.md from the package root in the published tarball, and the npm registry renders it on the package page.

  • File location: next to package.json in js-sdk/.
  • No additional NPM config is required for README rendering.