# agentid-sdk (Node.js / TypeScript)

AgentID JavaScript/TypeScript SDK for guard, ingest, tracing, and analytics. Current version: v0.1.26.
## 1. Introduction

agentid-sdk is the official Node.js/TypeScript SDK for AgentID, an AI security and compliance System of Record. It lets you gate LLM traffic through guard checks, enforce policy before execution, and capture durable telemetry for audit and governance workflows.

### The Mental Model

AgentID sits between your application and the LLM runtime:

```
User Input -> guard() -> [AgentID Policy] -> verdict
                              | allowed
                              v
                         LLM Provider
                              v
                log() -> [Immutable Ledger]
```

- `guard()`: evaluates the prompt and context before model execution.
- Model call: executes only if the guard verdict is allowed.
- `log()`: persists immutable telemetry (prompt, output, latency) for audit and compliance.
## 2. Installation

```shell
npm install agentid-sdk
```

## 3. Prerequisites

- Create an account at https://app.getagentid.com.
- Create an AI system and copy:
  - `AGENTID_API_KEY` (for example `sk_live_...`)
  - `AGENTID_SYSTEM_ID` (a UUID)
- If using OpenAI/LangChain, set `OPENAI_API_KEY`.

```shell
export AGENTID_API_KEY="sk_live_..."
export AGENTID_SYSTEM_ID="00000000-0000-0000-0000-000000000000"
export OPENAI_API_KEY="sk-proj-..."
```

### Compatibility

- Node.js: v18+ / Python: 3.9+ (cross-SDK matrix)
- Thread safety: AgentID clients are thread-safe and intended to be instantiated once and reused across concurrent requests.
- Latency: async `log()` is non-blocking for model execution paths; sync `guard()` typically adds network latency (commonly ~50-100 ms, environment-dependent).
## 4. Quickstart

```typescript
import { AgentID } from "agentid-sdk";

const agent = new AgentID(); // auto-loads AGENTID_API_KEY
const systemId = process.env.AGENTID_SYSTEM_ID!;

const verdict = await agent.guard({
  system_id: systemId,
  input: "Summarize this ticket in one sentence.",
  model: "gpt-4o-mini",
  user_id: "quickstart-user",
});

if (!verdict.allowed) throw new Error(`Blocked: ${verdict.reason}`);

await agent.log({
  system_id: systemId,
  event_id: verdict.client_event_id,
  model: "gpt-4o-mini",
  input: "Summarize this ticket in one sentence.",
  output: "Summary generated.",
  metadata: { agent_role: "support-assistant" },
});
```

## 5. Core Integrations
### OpenAI Wrapper

```shell
npm install agentid-sdk openai
```

```typescript
import OpenAI from "openai";
import { AgentID } from "agentid-sdk";

const agent = new AgentID({
  piiMasking: true,
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

const secured = agent.wrapOpenAI(openai, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  user_id: "customer-123",
  expected_languages: ["en"],
});

const response = await secured.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What is the capital of the Czech Republic?" }],
});

console.log(response.choices[0]?.message?.content ?? "");
```

Scope note: AgentID compliance/risk controls apply to the specific SDK-wrapped LLM calls (`guard()`, `wrapOpenAI()`, LangChain callback-wrapped flows). They do not automatically cover unrelated code paths elsewhere in your application.
### LangChain Integration

```shell
npm install agentid-sdk openai @langchain/core @langchain/openai
```

```typescript
import { AgentID } from "agentid-sdk";
import { AgentIDCallbackHandler } from "agentid-sdk/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const agent = new AgentID();
const handler = new AgentIDCallbackHandler(agent, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  expected_languages: ["en"],
});

const prompt = ChatPromptTemplate.fromTemplate("Answer in one sentence: {question}");
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const chain = prompt.pipe(model).pipe(new StringOutputParser());
const result = await chain.invoke(
  { question: "What is the capital of the Czech Republic?" },
  { callbacks: [handler] }
);

console.log(result);
```

### Raw Ingest API (Telemetry Only)
```typescript
import { AgentID } from "agentid-sdk";

const agent = new AgentID();

await agent.log({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  event_type: "complete",
  severity: "info",
  model: "gpt-4o-mini",
  input: "Raw telemetry prompt",
  output: '{"ok": true}',
  metadata: { agent_role: "batch-worker", channel: "manual_ingest" },
});
```

### Transparency Badge (Article 50 UI Evidence)
When rendering disclosure UI, log proof-of-render telemetry so you can demonstrate the end-user actually saw the badge.
```tsx
import { AgentIDTransparencyBadge } from "agentid-sdk";

<AgentIDTransparencyBadge
  telemetry={{
    systemId: process.env.NEXT_PUBLIC_AGENTID_SYSTEM_ID!,
    // Prefer a backend relay endpoint so no secret key is exposed in browser code.
    ingestUrl: "/api/agentid/transparency-render",
    headers: { "x-agentid-system-id": process.env.NEXT_PUBLIC_AGENTID_SYSTEM_ID! },
    userId: "customer-123",
  }}
  placement="chat-header"
/>;
```

On mount, the component asynchronously emits `event_type: "transparency_badge_rendered"` to the AgentID ingest endpoint.
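The `ingestUrl` above points at your own backend, and the SDK does not ship a relay for it. Below is a minimal sketch of what such a relay could look like. The ingest URL, header names, and `relayTransparencyEvent` helper are assumptions for illustration, not documented AgentID API surface; the `fetchImpl` parameter is injected only so the sketch stays self-contained and testable.

```typescript
// Hypothetical server-side relay: forwards a browser-side transparency event to
// an ingest endpoint while keeping AGENTID_API_KEY on the server.
// The default ingestUrl is an ASSUMED placeholder, not a documented endpoint.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ status: number }>;

async function relayTransparencyEvent(
  event: { systemId: string; userId?: string },
  apiKey: string,
  fetchImpl: FetchLike,
  ingestUrl = "https://app.getagentid.com/api/v1/ingest" // placeholder assumption
): Promise<number> {
  const res = await fetchImpl(ingestUrl, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${apiKey}`, // secret never reaches the browser
    },
    body: JSON.stringify({
      system_id: event.systemId,
      event_type: "transparency_badge_rendered",
      user_id: event.userId,
    }),
  });
  return res.status;
}
```

Wiring this behind a route like `/api/agentid/transparency-render` keeps the browser payload limited to non-secret fields.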
## 6. Advanced Configuration

### Custom identity / role metadata

```typescript
await agent.guard({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  input: "Process user request",
  user_id: "service:billing-agent",
  model: "gpt-4o-mini",
});

await agent.log({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  model: "gpt-4o-mini",
  input: "Process user request",
  output: "Done",
  metadata: { agent_role: "billing-agent", environment: "prod" },
});
```

### Strict mode and timeout tuning
```typescript
const agent = new AgentID({
  strictMode: true,      // fail-closed on guard connectivity/timeouts
  guardTimeoutMs: 10000, // default guard timeout is 10000 ms
  ingestTimeoutMs: 10000 // default ingest timeout is 10000 ms
});
```

### Optional client-side fast fail

```typescript
const agent = new AgentID({
  failureMode: "fail_close",
  clientFastFail: true, // opt-in local preflight before /guard
});
```

### Error Handling & Strict Mode
By default, AgentID fails open: if the AgentID API times out or is temporarily unreachable, your application keeps running.
| Mode | Connectivity Failure | LLM Execution | Best For |
| :--- | :--- | :--- | :--- |
| Default (strict off) | API timeout / unreachable | Fail-open (continues) | Standard SaaS, chatbots |
| Strict mode (`strictMode: true`) | API timeout / unreachable | Direct `guard()` denies; wrapped flows can apply local fallback first | Healthcare, FinTech, high-risk |
- `guard()` returns a verdict (`allowed`, `reason`); handle deny paths explicitly. `wrapOpenAI()` and the LangChain handler throw `SecurityBlockError` when a prompt is blocked.
- Backend `/guard` is the default authority for prompt injection, DB access, code execution, and PII leakage in SDK-wrapped flows.
- `clientFastFail` / `client_fast_fail` is optional and disabled by default. Enable it only when you explicitly want local preflight before the backend call.
- If backend guard is unreachable and the effective failure mode is `fail_close`, wrapped OpenAI/LangChain flows can run local fallback enforcement. Local hits still block; otherwise the request can continue with fallback telemetry attached.
- If `strictMode` is not explicitly set in SDK code, runtime behavior follows the system configuration from AgentID (`strict_security_mode` / `failure_mode`).
- Ingest retries transient failures (5xx/429) and logs warnings if persistence fails.
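To make the retry policy in the last bullet concrete, here is a simplified stand-in, not the SDK's actual implementation: retry on 5xx/429 with exponential backoff, return immediately on any other status, and warn if persistence ultimately fails. `ingestWithRetry`, its signature, and the backoff constants are invented for this sketch.

```typescript
// Simplified sketch of retry-on-transient-failure for ingest calls.
// send() stands in for one HTTP attempt and resolves to a status code.
async function ingestWithRetry(
  send: () => Promise<number>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<number> {
  let status = 0;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    status = await send();
    const transient = status === 429 || status >= 500; // 5xx/429 are retryable
    if (!transient) return status;
    if (attempt < maxAttempts) {
      // Exponential backoff: base, 2x base, 4x base, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  console.warn(`ingest failed after ${maxAttempts} attempts (status ${status})`);
  return status;
}
```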
### Event Identity Model

For consistent lifecycle correlation in Activity/Prompts, use this model:

- `client_event_id`: external correlation ID for one end-to-end action.
- `guard_event_id`: ID of the preflight guard event returned by `guard()`.
- `event_id` on `log()`: idempotency key for ingest. In the JS SDK it is canonicalized to `client_event_id` for stable one-row lifecycle updates.

SDK behavior:

- `guard()` sends `client_event_id` and returns the canonical `client_event_id` + `guard_event_id`.
- `log()` sends:
  - `event_id = canonical client_event_id`
  - `metadata.client_event_id`
  - `metadata.guard_event_id` (when available from wrappers/callbacks)
  - `x-correlation-id = client_event_id`
- After a successful primary ingest, SDK wrappers can call `/ingest/finalize` with the same `client_event_id` to attach `sdk_ingest_ms`.
- SDK requests include `x-agentid-sdk-version` for telemetry/version diagnostics.

This keeps Guard + Complete linked under one correlation key while preserving internal event linkage in the dashboard.
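The canonicalization rules can be made concrete with a small sketch. `buildLogPayload` is a hypothetical helper, not SDK code; the field and header names mirror the list above.

```typescript
// Illustrative only: derives a log() body and request headers from a guard
// result under the event-identity model described above. Not actual SDK code.
interface GuardResult {
  client_event_id: string;
  guard_event_id?: string;
}

function buildLogPayload(
  guard: GuardResult,
  fields: { [key: string]: unknown; metadata?: Record<string, unknown> }
) {
  return {
    body: {
      ...fields,
      event_id: guard.client_event_id, // idempotency key, canonicalized to client_event_id
      metadata: {
        ...fields.metadata,
        client_event_id: guard.client_event_id,
        guard_event_id: guard.guard_event_id, // undefined when no preflight guard ran
      },
    },
    headers: { "x-correlation-id": guard.client_event_id }, // correlation header
  };
}
```

Because `event_id` and `x-correlation-id` both collapse to the same `client_event_id`, Guard and Complete events land on one correlation key.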
### SDK Timing Telemetry

SDK-managed metadata can include:

- `sdk_config_fetch_ms`: capability/config fetch time before dispatch.
- `sdk_local_scan_ms`: optional local enforcement time (`clientFastFail` or fail-close fallback path).
- `sdk_guard_ms`: backend `/guard` round-trip time observed by the SDK wrapper.
- `sdk_ingest_ms`: post-ingest transport timing finalized by the SDK through `/ingest/finalize` after a successful primary `/ingest`.
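The SDK populates these fields itself, but if you want comparable timings around your own calls, the measurement pattern is plain wall-clock wrapping. `timed` below is a generic illustration, not an SDK API.

```typescript
// Generic sketch of how a round-trip field like sdk_guard_ms can be captured:
// wrap the async call and record wall-clock duration in milliseconds.
async function timed<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  return { result, ms: Date.now() - start };
}
```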
### Policy-Pack Runtime Telemetry

When the backend uses compiled policy packs, runtime metadata includes:

- `policy_pack_version`: active compiled artifact version.
- `policy_pack_fallback`: `true` means the fallback detector path was used.
- `policy_pack_details`: optional diagnostic detail for the fallback/decision trace.

Latency interpretation:

- Activity `Latency (ms)` maps to synchronous processing (`processing_time_ms`).
- Async AI audit time is separate (`ai_audit_duration_ms`) and can be higher.
- The first request after a warm-up boundary can be slower than steady-state requests.
### Monorepo QA Commands (Maintainers)

If you are validating runtime in the AgentID monorepo:

```shell
npm run qa:policy-pack-bootstrap -- --base-url=http://127.0.0.1:3000/api/v1 --system-id=<SYSTEM_UUID>
npm run bench:policy-pack-hotpath
```

PowerShell diagnostics:

```shell
powershell -ExecutionPolicy Bypass -File .\scripts\qa\run-guard-diagnostic.ps1 -BaseUrl http://127.0.0.1:3000/api/v1 -ApiKey $env:AGENTID_API_KEY -SystemId $env:AGENTID_SYSTEM_ID -SkipBenchmark
powershell -ExecutionPolicy Bypass -File .\scripts\qa\run-ai-label-audit-check.ps1 -BaseUrl http://127.0.0.1:3000/api/v1 -ApiKey $env:AGENTID_API_KEY -SystemId $env:AGENTID_SYSTEM_ID -Model gpt-4o-mini
```

## 7. Security & Compliance
- Backend `/guard` remains the primary enforcement authority by default.
- Optional local PII masking and opt-in `clientFastFail` are available for edge cases.
- Guard checks run pre-execution; ingest + finalize telemetry captures the prompt/output lifecycle and SDK timing breakdowns.
- Safe for server and serverless runtimes (including async completion flows).
- Supports compliance and forensics workflows with durable event records.
## 8. Support

- Dashboard: https://app.getagentid.com
- Repository: https://github.com/ondrejsukac-rgb/agentid/tree/main/js-sdk
- Issues: https://github.com/ondrejsukac-rgb/agentid/issues

## 9. Publishing Notes (NPM)

npm automatically renders the `README.md` found in the package root of a published package.

- File location: next to `package.json` in `js-sdk/`.
- No additional npm config is required for README rendering.