# @agent-ledger/sdk-ts
Official TypeScript client for Agent Ledger. Use it to instrument any Node.js/Edge agent with structured telemetry, stream session events, and receive immediate feedback when budget guardrails block spending.
## Table of contents
- Features
- Installation
- Runtime requirements
- Getting started
- Session lifecycle
- Event reference
- API reference
- Error handling
- Configuration & environments
- Recipes
- Testing & local dev
- License
## Features

- Minimal, dependency-free client that speaks directly to the Agent Ledger REST API (`/v1/sessions` and `/v1/events`).
- First-class TypeScript typings for every event structure (`LlmCallEvent`, `ToolCallEvent`, `ToolResultEvent`).
- Built-in budget guardrail awareness through `BudgetGuardrailError`, so you can halt expensive runs immediately.
- Works anywhere `fetch` is available (Node.js 18+, Bun, Deno, Edge runtimes, or browsers talking to your own proxy).
- Simple abstractions so you can reuse the same instrumentation across CLI scripts, background workers, or serverless functions.
## Installation

```bash
pnpm add @agent-ledger/sdk-ts
# or
npm install @agent-ledger/sdk-ts
# or
yarn add @agent-ledger/sdk-ts
```

## Runtime requirements
- Node.js 18 or newer (for the built-in `fetch` implementation). If you run older Node versions, polyfill `fetch` before importing the SDK; see the sketch after this list.
- An Agent Ledger API key generated from the dashboard (Settings → API Keys).
- Outbound HTTPS access to `https://agent-ledger-api.azurewebsites.net` (or your self-hosted instance).
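A minimal sketch of such a polyfill, assuming you install the `undici` package (which ships a spec-compliant `fetch`); run it before the first SDK import:

```ts
// polyfill-fetch.ts — a sketch for Node.js < 18; assumes `undici` is installed.
import { fetch } from "undici";

// Only patch the global when the runtime does not already provide fetch.
if (typeof globalThis.fetch === "undefined") {
  (globalThis as any).fetch = fetch;
}
```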
## Getting started

```ts
import { AgentLedgerClient, BudgetGuardrailError } from "@agent-ledger/sdk-ts";
const ledger = new AgentLedgerClient({
apiKey: process.env.AGENT_LEDGER_API_KEY!,
});
export async function runSupportAgent(prompt: string) {
const sessionId = await ledger.startSession("support-bot");
try {
// 1. Run your own LLM/tool logic
const response = await callModel(prompt);
// 2. Log the LLM call (Agent Ledger auto-computes spend from provider/model/tokens)
await ledger.logLLMCall(sessionId, {
stepIndex: 0,
provider: "openai",
model: "gpt-4o-mini",
prompt,
response: response.text,
tokensIn: response.usage.inputTokens,
tokensOut: response.usage.outputTokens,
latencyMs: response.latencyMs,
});
await ledger.endSession(sessionId, "success");
return response.text;
} catch (err) {
if (err instanceof BudgetGuardrailError) {
console.warn("Budget exceeded", err.details);
}
await ledger.endSession(sessionId, "error", { errorMessage: (err as Error).message });
throw err;
}
}
```

## Session lifecycle
- Start sessions early with `startSession(agentName)` to capture every downstream event.
- Log events whenever you call an LLM or tool:
  - `logLLMCall` for prompts/responses.
  - `logToolCall` for tool invocations (store the inputs).
  - `logToolResult` for tool responses (store outputs/latency).
  - `logEvents` if you need to batch arbitrary event objects.
- End sessions with `endSession(sessionId, "success" | "error", { errorMessage? })` so the dashboard knows whether the run finished cleanly.
Tip: keep a simple helper that wraps this flow so every agent in your repo emits consistent telemetry.
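For example, a minimal wrapper might look like this (`withSession` is our own name, not part of the SDK; it only uses the calls documented above):

```ts
import { AgentLedgerClient } from "@agent-ledger/sdk-ts";

const ledger = new AgentLedgerClient({ apiKey: process.env.AGENT_LEDGER_API_KEY! });

// Hypothetical helper: opens a session, runs your callback with the id, and
// guarantees endSession is called with the right status either way.
export async function withSession<T>(
  agentName: string,
  run: (sessionId: string) => Promise<T>,
): Promise<T> {
  const sessionId = await ledger.startSession(agentName);
  try {
    const result = await run(sessionId);
    await ledger.endSession(sessionId, "success");
    return result;
  } catch (err) {
    await ledger.endSession(sessionId, "error", { errorMessage: (err as Error).message });
    throw err;
  }
}
```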
## Event reference
| Event | Required fields | Optional fields | Notes |
| --- | --- | --- | --- |
| `LlmCallEvent` | `stepIndex`, `model`, `provider`, `prompt`, `response`, `tokensIn`, `tokensOut`, `latencyMs` | — | `logLLMCall` automatically sets `type` to `llm_call` and lets the backend price the call based on provider/model. |
| `ToolCallEvent` | `stepIndex`, `toolName`, `toolInput` | — | Capture the structured input you sent to an internal or external tool. |
| `ToolResultEvent` | `stepIndex`, `toolName`, `toolOutput`, `latencyMs` | — | Use together with `ToolCallEvent` to understand tool latency and result size. |
| Custom | Whatever your workflow needs plus `type` | — | Supply via `logEvents` if you want to store derived signals (examples: `session_start`, `session_end`, `guardrail_trigger`). |
Conventions:
- `stepIndex` is a zero-based counter that makes it easy to diff runs. Increment it in the order events happen, even if multiple tools share the same LLM output.
- Keep prompts/responses under 64 KB per event so they render nicely in the dashboard diff view.
- All numeric values are stored as numbers (no strings) so the API can aggregate cost statistics.
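As an illustration of these conventions, here is how a custom event could be sent through `logEvents` (the `guardrail_trigger` type and `reason` field are our own; the API only requires a `type` string per event):

```ts
// Numeric fields stay numbers, stepIndex is zero-based, and `type` is required.
await ledger.logEvents(sessionId, [
  {
    type: "guardrail_trigger",   // illustrative custom event type
    stepIndex: 4,                // next slot in the run's zero-based counter
    reason: "retrieval loop exceeded 5 iterations",
  },
]);
```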
## API reference

### `new AgentLedgerClient(options)`

| Option | Type | Description |
| --- | --- | --- |
| `apiKey` | `string` (required) | Workspace API key from the dashboard. |
### `startSession(agentName: string): Promise<string>`

Creates a session row and returns its UUID. `agentName` should match how you identify the workflow in the dashboard (e.g., `support-bot`, `retrieval-worker`).
### `endSession(sessionId, status, opts?)`

Marks the session closed. Pass `{ errorMessage }` for failures so the UI shows context next to the run.
### `logEvents(sessionId, events)`

Lowest-level ingestion helper. Accepts an array of plain objects, so you can batch multiple events into a single network call. Events must include a `type` string (e.g., `llm_call`).
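A sketch of batching a tool call and its result in one round trip (the `tool_call`/`tool_result` type strings assume tool events follow the same snake_case convention as `llm_call`):

```ts
// One network call carries both events; each object supplies its own `type`.
await ledger.logEvents(sessionId, [
  { type: "tool_call", stepIndex: 1, toolName: "search", toolInput: { q: "refund policy" } },
  { type: "tool_result", stepIndex: 1, toolName: "search", toolOutput: { hits: 3 }, latencyMs: 182 },
]);
```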
### `logLLMCall(sessionId, event)` / `logToolCall` / `logToolResult`

Typed helpers that:

- Fill the `type` automatically.
- Validate required fields at compile time.
- Call `logEvents` under the hood.
### Types exported

`AgentLedgerClient`, `AgentLedgerClientOptions`, `BudgetGuardrailError`, `BudgetGuardrailDetails`, `LlmCallEvent`, `ToolCallEvent`, `ToolResultEvent`, `AnyEvent`, `EventType`.
## Error handling

- `BudgetGuardrailError` (HTTP 429): thrown when the backend refuses the event because the agent exceeded its daily limit. Inspect `error.details`:

  ```ts
  {
    agentName: string;
    dailyLimitUsd: number;
    spentTodayUsd: number;
    attemptedCostUsd: number;
    projectedCostUsd: number;
    remainingBudgetUsd: number;
  }
  ```

- Generic `Error`: wraps any other non-2xx response (`startSession`, `endSession`, `logEvents`). The `.message` contains the server-provided text when available.
Recommended practice: catch errors where you call `logEvents` so your business logic can continue (or at least emit a structured failure) even when the telemetry call is rejected.
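A minimal sketch of that pattern (`safeLog` is our own wrapper name):

```ts
import { BudgetGuardrailError, type AnyEvent } from "@agent-ledger/sdk-ts";

// Best-effort telemetry: guardrail blocks still halt the run, but any other
// telemetry failure is logged and swallowed so business logic continues.
async function safeLog(sessionId: string, events: AnyEvent[]): Promise<void> {
  try {
    await ledger.logEvents(sessionId, events);
  } catch (err) {
    if (err instanceof BudgetGuardrailError) throw err;
    console.error("Telemetry dropped:", (err as Error).message);
  }
}
```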
## Configuration & environments

- Provide `AGENT_LEDGER_API_KEY` (or load it from your preferred secrets manager) and the SDK connects to the hosted API automatically.
- Default endpoint → `https://agent-ledger-api.azurewebsites.net`.
- For local API experiments, keep the SDK untouched and proxy traffic through your own tooling (MSW, mock servers, etc.).
Because the client is stateless, you can instantiate one per agent type or share a singleton across the entire app.
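A common pattern is a small module that exports the shared instance:

```ts
// ledger.ts — a sketch of a shared singleton; the client holds no per-session
// state, so one instance can serve every agent in the process.
import { AgentLedgerClient } from "@agent-ledger/sdk-ts";

export const ledger = new AgentLedgerClient({
  apiKey: process.env.AGENT_LEDGER_API_KEY!,
});
```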
## Recipes

### Streaming agents / multi-step workflows
Reuse a monotonically increasing `stepIndex` while you stream partial responses. You can emit interim tool calls before the final LLM response lands to visualize branching logic, as in the sketch below.
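A sketch of that pattern (placeholder values throughout; a tool call and its result share an index, as in the recipe that follows, while the final LLM call takes the next one):

```ts
let step = 0; // one counter per run

// Interim tool call emitted before the final LLM response lands.
const searchStep = step++;
await ledger.logToolCall(sessionId, { stepIndex: searchStep, toolName: "search", toolInput: { q: prompt } });
await ledger.logToolResult(sessionId, { stepIndex: searchStep, toolName: "search", toolOutput: docs, latencyMs: 95 });

// The final LLM call gets the next index.
await ledger.logLLMCall(sessionId, {
  stepIndex: step++,
  provider: "openai",
  model: "gpt-4o-mini",
  prompt,
  response: answer.text,
  tokensIn: answer.usage.inputTokens,
  tokensOut: answer.usage.outputTokens,
  latencyMs: answer.latencyMs,
});
```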
### Custom tool instrumentation

```ts
async function callWeather(sessionId: string, city: string, stepIndex: number) {
await ledger.logToolCall(sessionId, {
stepIndex,
toolName: "weather",
toolInput: { city },
});
const result = await fetchWeather(city);
await ledger.logToolResult(sessionId, {
stepIndex,
toolName: "weather",
toolOutput: result,
latencyMs: result.latencyMs,
});
}
```

### Handling guardrail blocks

```ts
try {
await ledger.logLLMCall(sessionId, event);
} catch (err) {
if (err instanceof BudgetGuardrailError) {
await ledger.endSession(sessionId, "error", {
errorMessage: `Budget exceeded: remaining ${err.details.remainingBudgetUsd}`,
});
return;
}
throw err;
}
```

## Testing & local dev
- The SDK performs real HTTP requests. For unit tests, stub `global.fetch` or intercept calls with tools like MSW; see the sketch after this list.
- When running the Agent Ledger API locally, ensure your test key exists in the development database and export it via `AGENT_LEDGER_API_KEY`.
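For example, with Vitest (a sketch: the mocked response body shape `{ id: ... }` is a guess, and we assume the SDK calls `fetch(url, init)` with a string URL — adapt both to your server and SDK version):

```ts
import { vi, test, expect } from "vitest";
import { AgentLedgerClient } from "@agent-ledger/sdk-ts";

test("startSession posts to /v1/sessions", async () => {
  // Stub global.fetch so no real HTTP request leaves the test process.
  const fetchMock = vi.fn().mockResolvedValue(
    new Response(JSON.stringify({ id: "11111111-1111-4111-8111-111111111111" }), { status: 200 }),
  );
  vi.stubGlobal("fetch", fetchMock);

  const ledger = new AgentLedgerClient({ apiKey: "test-key" });
  await ledger.startSession("support-bot");

  expect(fetchMock).toHaveBeenCalledWith(
    expect.stringContaining("/v1/sessions"),
    expect.anything(),
  );

  vi.unstubAllGlobals();
});
```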
## License
MIT
