agentid-sdk
v0.1.41
AgentID JavaScript/TypeScript SDK for guardrails, masking, workflow telemetry, and audit logging.

agentid-sdk (Node.js / TypeScript)
1. Introduction
agentid-sdk is the official Node.js/TypeScript SDK for AgentID, an AI security and compliance System of Record. It allows you to gate LLM traffic through guard checks, enforce policy before execution, and capture durable telemetry for audit and governance workflows.
The Mental Model
AgentID sits between your application and the LLM runtime:
```
User Input -> guard() -> [AgentID Policy] -> verdict
                             | allowed
                             v
                       LLM Provider
                             |
                             v
                log() -> [Immutable Ledger]
```

- guard(): evaluates prompt and context before model execution.
- Model call: executes only if the guard verdict is allowed.
- log(): persists immutable telemetry (prompt, output, latency) for audit and compliance.
2. Installation
```shell
npm install agentid-sdk
```

3. Prerequisites
- Create an account at https://app.getagentid.com.
- Create an AI system and copy:
  - AGENTID_API_KEY (for example sk_live_...)
  - AGENTID_SYSTEM_ID (UUID)
- If using OpenAI/LangChain, set OPENAI_API_KEY.

```shell
export AGENTID_API_KEY="sk_live_..."
export AGENTID_SYSTEM_ID="00000000-0000-0000-0000-000000000000"
export OPENAI_API_KEY="sk-proj-..."
```

Compatibility
- Node.js: v18+ / Python: 3.9+ (cross-SDK matrix)
- Thread Safety: AgentID clients are thread-safe and intended to be instantiated once and reused across concurrent requests.
- Latency: async log() is non-blocking for model execution paths; a synchronous guard() call typically adds network latency (commonly ~50-100 ms, environment-dependent).
4. Quickstart
```typescript
import { AgentID } from "agentid-sdk";

const agent = new AgentID(); // auto-loads AGENTID_API_KEY
const systemId = process.env.AGENTID_SYSTEM_ID!;

const verdict = await agent.guard({
  system_id: systemId,
  input: "Summarize this ticket in one sentence.",
  model: "gpt-4o-mini",
  user_id: "quickstart-user",
});
if (!verdict.allowed) throw new Error(`Blocked: ${verdict.reason}`);

await agent.log({
  system_id: systemId,
  event_id: verdict.client_event_id,
  model: "gpt-4o-mini",
  input: "Summarize this ticket in one sentence.",
  output: "Summary generated.",
  metadata: { agent_role: "support-assistant" },
});
```

5. Core Integrations
OpenAI Wrapper
```shell
npm install agentid-sdk openai
```

```typescript
import OpenAI from "openai";
import { AgentID } from "agentid-sdk";

const agent = new AgentID();
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

const secured = agent.wrapOpenAI(openai, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  user_id: "customer-123",
  expected_languages: ["en"],
});

const response = await secured.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What is the capital of the Czech Republic?" }],
});
console.log(response.choices[0]?.message?.content ?? "");
```

By default, official AgentID SDK integrations inherit enable_sdk_pii_masking from the dashboard/runtime config. You only need to set piiMasking: true in code if you want to force local masking on even when the dashboard policy is off.
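Forcing local masking on, per the paragraph above, is a one-line wrapper option. A minimal sketch (the surrounding setup mirrors the integration example; the exact option surface may vary by SDK version):

```typescript
import OpenAI from "openai";
import { AgentID } from "agentid-sdk";

const agent = new AgentID();
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// piiMasking: true forces local masking even when
// enable_sdk_pii_masking is off in the dashboard/runtime config.
const secured = agent.wrapOpenAI(openai, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  piiMasking: true,
});
```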
Starting with [email protected], fail-open dependency fallback keeps local
deterministic PII and secret masking enabled when /agent/config or /guard
is unreachable. Fail-open can preserve availability, but official wrappers must
not treat it as permission to send raw sensitive text to the provider.
When SDK-side masking is enabled, the wrapper now masks both classic PII and high-confidence secret material before the request leaves your process:
- emails, phones, card numbers, IBANs, national IDs, person names
- OpenAI / Anthropic / Google / AWS / GitHub / Slack / Stripe credentials
- bearer tokens, JWTs, x-api-key headers
- password / credential assignments, PEM private keys, Azure connection strings and SAS tokens
The masked form is what gets sent to /guard, logged to AgentID ingest, and
forwarded to the model provider. The wrapper also protects returned completion
text before it is logged or returned from the wrapped call when SDK-side masking
is enabled.
Important: this applies only to the wrapped call. If your app sends raw prompt or raw chat history through a separate direct provider call, AgentID cannot protect that bypass.
Correct:

```typescript
const secured = agent.wrapOpenAI(openai, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
});
await secured.chat.completions.create({
  model: "gpt-4o-mini",
  messages: fullConversationHistory,
});
```

Incorrect:

```typescript
// Raw history reaches the provider.
await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: rawConversationHistory,
});
// Logging a masked copy later does not protect the model call above.
await agent.log({ system_id: systemId, input: maskedInput, output: maskedOutput });
```

For chat apps and agent workflows, protect the full message history, not just the latest text field. If a previous user/assistant/tool/memory message contains raw PII, the model can still repeat it later.
If you cannot use wrapOpenAI() and need a manual integration, call
protectMessageHistory() on the exact history that will be sent to the
provider. Then pass protected.messages to the provider, not the raw
body.messages.
```typescript
import { AgentID, protectMessageHistory } from "agentid-sdk";

const agent = new AgentID();
const protectedHistory = protectMessageHistory(body.messages, {
  pii: true,
  secrets: true,
});
const latestUserInput = extractLatestUserInput(protectedHistory.messages);

const verdict = await agent.guard({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  input: latestUserInput,
  model: "gpt-4o-mini",
  metadata: {
    runtime_surface: "manual_provider_integration",
    full_history_protected: true,
    messages_count: Array.isArray(body.messages)
      ? body.messages.length
      : undefined,
    protected_messages_count: Array.isArray(protectedHistory.messages)
      ? protectedHistory.messages.length
      : undefined,
    prompt_text_parts_count: protectedHistory.textPartsCount,
    transformed_prompt_text_parts_count:
      protectedHistory.transformedTextPartsCount,
  },
});
if (!verdict.allowed) throw new Error(`Blocked: ${verdict.reason}`);

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: protectedHistory.messages,
});
```

Wrapped OpenAI calls persist telemetry for both regular and streamed completions. For stream: true, logging happens when the stream finishes.
Scope note: AgentID compliance/risk controls apply to the specific SDK-wrapped LLM calls (guard(), wrapOpenAI(), LangChain callback-wrapped flows). They do not automatically classify unrelated code paths elsewhere in your application.
Vercel AI SDK Wrapper
If your app already uses Vercel AI SDK primitives such as generateText() or streamText(), prefer the dedicated wrapper package instead of rebuilding the lifecycle manually:
```shell
npm install ai agentid-vercel-sdk @ai-sdk/openai
```

agentid-vercel-sdk keeps AgentID backend-first by default, blocks before provider billing on denied prompts, and finalizes telemetry after completion or stream close.
LangChain Integration
```shell
npm install agentid-sdk openai @langchain/core @langchain/openai
```

```typescript
import {
  AgentID,
  createAgentIdCorrelationId,
  createAgentIdTelemetryContext,
} from "agentid-sdk";
import { AgentIDCallbackHandler } from "agentid-sdk/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const agent = new AgentID();
const workflowRunId = createAgentIdCorrelationId();

const handler = new AgentIDCallbackHandler(agent, {
  system_id: process.env.AGENTID_SYSTEM_ID!,
  expected_languages: ["en"],
  telemetry: createAgentIdTelemetryContext({
    workflowRunId,
    workflowStepName: "answer_question",
    toolName: "langchain.chat",
    toolTargetType: "conversation",
    eventCategory: "ai",
    eventSubtype: "answer_generated",
  }),
});

const prompt = ChatPromptTemplate.fromTemplate("Answer in one sentence: {question}");
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke(
  { question: "What is the capital of the Czech Republic?" },
  { callbacks: [handler] }
);
console.log(result);
```

LangChain callbacks log on run completion. Constructor-level telemetry is copied
to the guard request, local policy telemetry, and final ingest log. You can
override or extend it per invocation with LangChain metadata:
{ metadata: { agentid_telemetry: { workflowStepName: "..." } } }.
Token/cost telemetry for streamed chains depends on the provider exposing usage
in the final LangChain result.
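The per-invocation override described above can be sketched like this (reusing the chain and handler from the integration example; the agentid_telemetry keys mirror createAgentIdTelemetryContext fields):

```typescript
// Override constructor-level telemetry for a single invocation.
const followup = await chain.invoke(
  { question: "Any follow-up risks?" },
  {
    callbacks: [handler],
    metadata: {
      agentid_telemetry: { workflowStepName: "answer_followup" },
    },
  }
);
```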
Raw Ingest API (Telemetry Only)
```typescript
import { AgentID } from "agentid-sdk";

const agent = new AgentID();
await agent.log({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  event_type: "complete",
  severity: "info",
  model: "gpt-4o-mini",
  input: "Raw telemetry prompt",
  output: '{"ok": true}',
  usage: {
    prompt_tokens: 33,
    completion_tokens: 9,
    total_tokens: 42,
  },
  latency: 1450,
  metadata: { agent_role: "batch-worker", channel: "manual_ingest" },
});
```

For manual integrations, preserve provider usage. Without usage or
normalized tokens, AgentID can store Activity but cannot compute token totals,
cost_usd, Total Spend, or ROI. ROI also requires the system business context
fields human_hourly_rate and human_time_per_task_min.
Agent workflow and tool events
Use logOperation() when an agent calls tools or performs operational work outside the wrapped LLM call. Reuse the same workflowRunId across steps.
```typescript
import {
  AgentID,
  createAgentIdCorrelationId,
  createAgentIdTelemetryContext,
} from "agentid-sdk";

const agent = new AgentID();
const workflowRunId = createAgentIdCorrelationId();

await agent.logOperation({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  telemetry: createAgentIdTelemetryContext({
    workflowRunId,
    workflowStepName: "screen_candidate",
    toolName: "hr.cv_screen",
    toolTargetType: "candidate",
  }),
  event_category: "tool",
  event_status: "completed",
});

await agent.logOperation({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  telemetry: createAgentIdTelemetryContext({
    workflowRunId,
    workflowStepName: "send_followup",
    toolName: "email.send",
    toolTargetType: "email",
  }),
  event_category: "delivery",
  event_status: "completed",
});
```

Tool, delivery, inbox, workflow, guard, and operational events are logged as separate audit rows. They are grouped in the dashboard by workflow_run_id and do not count as model-used or spend-bearing unless you explicitly provide model/usage data. Do not reuse one client_event_id for the whole workflow; use workflowRunId for grouping and let each event keep its own idempotency key.
Dashboard behavior:
- prompt/guard checks remain visible as standalone Activity rows with View Details and View Prompt
- workflow summary rows open the grouped timeline with tools, delivery, inbox, workflow lifecycle, guard checks, and LLM steps
- the workflow timeline is operational context; the standalone prompt row is the forensic prompt inspection surface
- non-model workflow/tool/delivery rows show Model: Not applicable and are not spend-bearing unless model/cost metadata is explicitly present
For full agent runs, prefer the workflow trail helper so each step gets a shared
workflow_step_id, plus automatic started/completed/failed rows:
```typescript
import {
  AgentID,
  createAgentIdCorrelationId,
  createAgentIdTelemetryContext,
  createAgentIdWorkflowTrail,
} from "agentid-sdk";

const agent = new AgentID({ apiKey: process.env.AGENTID_API_KEY! });
const workflowRunId = createAgentIdCorrelationId();

const trail = createAgentIdWorkflowTrail({
  agent,
  system_id: process.env.AGENTID_SYSTEM_ID!,
  telemetry: createAgentIdTelemetryContext({
    workflowRunId,
    workflowName: "Candidate intake",
  }),
});

await trail.runStep(
  {
    telemetry: createAgentIdTelemetryContext({
      workflowStepName: "screen_candidate",
      toolName: "hr.cv_screen",
      toolTargetType: "candidate",
      eventCategory: "tool",
    }),
  },
  async () => screenCandidate(),
  {
    complete: {
      metadata: { result_count: 4 },
    },
  }
);
```

Transparency Badge (Article 50 UI Evidence)
When rendering disclosure UI, log proof-of-render telemetry so you can demonstrate the end-user actually saw the badge.
```tsx
import { AgentIDTransparencyBadge } from "agentid-sdk/transparency-badge";

<AgentIDTransparencyBadge
  telemetry={{
    systemId: process.env.NEXT_PUBLIC_AGENTID_SYSTEM_ID!,
    // Prefer a backend relay endpoint so no secret key is exposed in browser code.
    ingestUrl: "/api/agentid/transparency-render",
    headers: { "x-agentid-system-id": process.env.NEXT_PUBLIC_AGENTID_SYSTEM_ID! },
    userId: "customer-123",
  }}
  placement="chat-header"
/>;
```

On mount, the component asynchronously emits event_type: "transparency_badge_rendered" to the AgentID ingest endpoint.
6. Advanced Configuration
Custom identity / role metadata
```typescript
await agent.guard({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  input: "Process user request",
  user_id: "service:billing-agent",
  model: "gpt-4o-mini",
});

await agent.log({
  system_id: process.env.AGENTID_SYSTEM_ID!,
  model: "gpt-4o-mini",
  input: "Process user request",
  output: "Done",
  metadata: { agent_role: "billing-agent", environment: "prod" },
});
```

Strict mode and timeout tuning
```typescript
const agent = new AgentID({
  strictMode: true,      // fail-closed on guard connectivity/timeouts
  guardTimeoutMs: 10000, // default guard timeout is 10000ms
  ingestTimeoutMs: 10000 // default ingest timeout is 10000ms
});
```

Optional client-side fast fail

```typescript
const agent = new AgentID({
  failureMode: "fail_close",
  clientFastFail: true, // opt-in local preflight before /guard
});
```

Error Handling & Strict Mode
By default, AgentID is designed to keep your application running if the AgentID API has a timeout or is temporarily unreachable.
| Mode | Connectivity Failure | LLM Execution | Best For |
| :--- | :--- | :--- | :--- |
| Default (Strict Off) | API Timeout / Unreachable | Fail-Open (continues) | Standard SaaS, chatbots |
| Strict Mode (strictMode: true) | API Timeout / Unreachable | Direct guard() denies; wrapped flows can apply local fallback first | Healthcare, FinTech, high-risk |
- guard() returns a verdict (allowed, reason); handle deny paths explicitly. wrapOpenAI() and LangChain handlers throw SecurityBlockError when a prompt is blocked.
- Backend /guard is the default authority for prompt injection, DB access, code execution, and PII leakage in SDK-wrapped flows.
- clientFastFail / client_fast_fail is optional and disabled by default. Enable it only when you explicitly want local preflight before the backend call.
- If backend guard is unreachable and the effective failure mode is fail_close, wrapped OpenAI/LangChain flows can run local fallback enforcement. Local hits still block; otherwise the request can continue with fallback telemetry attached.
- If strictMode is not explicitly set in SDK code, runtime behavior follows the system configuration from AgentID (strict_security_mode / failure_mode).
- Ingest retries transient failures (5xx/429) and logs warnings if persistence fails.
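A minimal deny-path sketch for wrapped calls. It matches SecurityBlockError by error name so the helper needs no SDK import; classifyGuardError and callSecured are illustrative names, not SDK API:

```typescript
// Pure helper: distinguish a policy block (expected; show a safe message)
// from a transport/availability failure (retry or alert instead).
export function classifyGuardError(err: unknown): "blocked" | "transport" {
  return err instanceof Error && err.name === "SecurityBlockError"
    ? "blocked"
    : "transport";
}

// Illustrative wrapper around a call to a secured client from wrapOpenAI().
export async function callSecured<T>(
  create: () => Promise<T>
): Promise<T | { blocked: true }> {
  try {
    return await create();
  } catch (err) {
    if (classifyGuardError(err) === "blocked") {
      return { blocked: true }; // policy denial: do not retry
    }
    throw err; // transport failure: surface to your retry/alerting layer
  }
}
```

Keeping the classification pure makes the deny path unit-testable without network access.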
SDK-side masking scope
If enable_sdk_pii_masking=true in AgentID runtime config, or if you force
piiMasking: true in code, masking happens locally before /guard and before
provider dispatch.
- Default mode: backend-first enforcement, optional local masking
- clientFastFail=false: no local prompt/code/db blocker, but local masking can still rewrite prompt text before network dispatch
- clientFastFail=true: local prompt-injection scan and strict local enforcement can run before /guard
This means SDK masking is useful even when you keep backend guard as the main policy authority: it reduces raw data exposure on the wire without changing the server-side decision model.
Event Identity Model
For consistent lifecycle correlation in Activity/Prompts, use this model:
- client_event_id: external correlation ID for one end-to-end action.
- guard_event_id: ID of the preflight guard event returned by guard().
- event_id on log(): idempotency key for ingest. In agentid-sdk it is canonicalized to client_event_id for stable one-row lifecycle updates.
SDK behavior:
- guard() sends client_event_id and returns canonical client_event_id + guard_event_id.
- log() sends: event_id = canonical client_event_id, metadata.client_event_id, metadata.guard_event_id (when available from wrappers/callbacks), and x-correlation-id = client_event_id.
- after a successful primary ingest, SDK wrappers can call /ingest/finalize with the same client_event_id to attach sdk_ingest_ms
- SDK requests include x-agentid-sdk-version for telemetry/version diagnostics.
This keeps Guard + Complete linked under one correlation key while preserving internal event linkage in the dashboard.
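The identity model above can be sketched as a payload builder. GuardVerdict here is a hand-written shape for illustration, not the SDK's exported type:

```typescript
// Illustrative shape only; the real SDK verdict may carry more fields.
interface GuardVerdict {
  allowed: boolean;
  reason?: string;
  client_event_id: string;
  guard_event_id?: string;
}

// One client_event_id links the guard row and the completion row;
// guard_event_id rides along in metadata for internal event linkage.
export function buildLogPayload(
  systemId: string,
  verdict: GuardVerdict,
  io: { input: string; output: string }
) {
  return {
    system_id: systemId,
    event_id: verdict.client_event_id, // idempotency key = canonical correlation ID
    input: io.input,
    output: io.output,
    metadata: {
      client_event_id: verdict.client_event_id,
      guard_event_id: verdict.guard_event_id, // present when guard() ran first
    },
  };
}
```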
SDK Timing Telemetry
SDK-managed metadata can include:
- sdk_config_fetch_ms: capability/config fetch time before dispatch.
- sdk_local_scan_ms: optional local enforcement time (clientFastFail or fail-close fallback path).
- sdk_guard_ms: backend /guard round-trip time observed by the SDK wrapper.
- sdk_ingest_ms: post-ingest transport timing finalized by the SDK through /ingest/finalize after a successful primary /ingest.
Policy-Pack Runtime Telemetry
When the backend uses compiled policy packs, runtime metadata includes:
- policy_pack_version: active compiled artifact version.
- policy_pack_fallback: true means the fallback detector path was used.
- policy_pack_details: optional diagnostic detail for fallback/decision trace.
Latency interpretation:
- Activity Latency (ms) maps to synchronous processing (processing_time_ms).
- Async AI audit time is separate (ai_audit_duration_ms) and can be higher.
- First request after warm-up boundaries can be slower than steady-state requests.
Secret and PII Masking Edge Cases
SDK-side masking and the backend scanner include regression coverage for common boundary failures:
- multiline PEM, certificate, and PGP private key blocks
- natural-language password disclosures such as "my Password is Passwordk123"
- environment-style assignments such as DB_PASSWORD=...
- secret values with suffix punctuation such as #
- high-entropy base64-like values with = / == padding
- security-question answers where the value appears after "answer is", "is", or localized equivalents
When local masking is enabled, these values are replaced before provider dispatch and before AgentID ingest. Placeholder mappings stay local to the SDK for reversible deanonymization.
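To make the replace-before-dispatch idea concrete, here is an illustrative masker for just one edge case listed above (environment-style assignments). This is not the SDK's implementation; the key-name pattern and placeholder format are assumptions for the sketch:

```typescript
// Illustrative pattern: SCREAMING_SNAKE names containing a credential keyword,
// followed by "=" and a non-whitespace value.
const ENV_ASSIGNMENT =
  /\b([A-Z][A-Z0-9_]*(?:PASSWORD|SECRET|TOKEN|KEY)[A-Z0-9_]*)=(\S+)/g;

export function maskEnvAssignments(
  text: string
): { masked: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let i = 0;
  const masked = text.replace(ENV_ASSIGNMENT, (_m, name: string, value: string) => {
    const placeholder = `[SECRET_${++i}]`;
    map.set(placeholder, value); // mapping stays local for reversible deanonymization
    return `${name}=${placeholder}`;
  });
  return { masked, map };
}
```

The real SDK covers many more patterns (PEM blocks, high-entropy values, natural-language disclosures), but the structure is the same: rewrite locally, keep the placeholder map in-process, and only ship the masked form.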
Monorepo QA Commands (Maintainers)
If you are validating runtime in the AgentID monorepo:
```shell
npm run qa:policy-pack-bootstrap -- --base-url=http://127.0.0.1:3000/api/v1 --system-id=<SYSTEM_UUID>
npm run bench:policy-pack-hotpath
```

PowerShell diagnostics:

```shell
powershell -ExecutionPolicy Bypass -File .\scripts\qa\run-guard-diagnostic.ps1 -BaseUrl http://127.0.0.1:3000/api/v1 -ApiKey $env:AGENTID_API_KEY -SystemId $env:AGENTID_SYSTEM_ID -SkipBenchmark
powershell -ExecutionPolicy Bypass -File .\scripts\qa\run-ai-label-audit-check.ps1 -BaseUrl http://127.0.0.1:3000/api/v1 -ApiKey $env:AGENTID_API_KEY -SystemId $env:AGENTID_SYSTEM_ID -Model gpt-4o-mini
```

7. Security & Compliance
- Backend /guard remains the primary enforcement authority by default.
- Optional local masking and opt-in clientFastFail are available for edge cases.
- SDK-side masking can cover both structured PII and high-confidence leaked secrets before provider dispatch.
- Guard checks run pre-execution; ingest + finalize telemetry captures prompt/output lifecycle and SDK timing breakdowns.
- Safe for server and serverless runtimes (including async completion flows).
- Supports compliance and forensics workflows with durable event records.
8. Support
- Dashboard: https://app.getagentid.com
- Documentation: https://docs.getagentid.com/docs/node-typescript-sdk
- Repository: https://github.com/ondrejsukac-rgb/agentid/tree/main/agentid-sdk
- Issues: https://github.com/ondrejsukac-rgb/agentid/issues
9. Publishing Notes (NPM)
NPM automatically picks up README.md from the package root during npm publish and renders it on the package page.
- File location: next to package.json in agentid-sdk/.
- No additional NPM config is required for README rendering.
- Before publishing from the monorepo, run npm run audit:all and npm run qa:production-gate from the repository root.
- The production gate audits the root app, agentid-sdk, agentid-vercel-sdk, and the browser extension package so package-local lockfile issues are not missed.
