
@agent-ledger/sdk-ts

v0.0.4

Official TypeScript client for Agent Ledger. Use it to instrument any Node.js/Edge agent with structured telemetry, stream session events, and receive immediate feedback when budget guardrails block spending.


Table of contents

  1. Features
  2. Installation
  3. Runtime requirements
  4. Getting started
  5. Session lifecycle
  6. Event reference
  7. API reference
  8. Error handling
  9. Configuration & environments
  10. Recipes
  11. Testing & local dev
  12. License

Features

  • Minimal, dependency-free client that speaks directly to the Agent Ledger REST API (/v1/sessions and /v1/events).
  • First-class TypeScript typings for every event structure (LlmCallEvent, ToolCallEvent, ToolResultEvent).
  • Built-in budget guardrail awareness through BudgetGuardrailError so you can halt expensive runs immediately.
  • Works anywhere fetch is available (Node.js 18+, Bun, Deno, Edge runtimes, or browsers talking to your own proxy).
  • Simple abstractions so you can reuse the same instrumentation across CLI scripts, background workers, or serverless functions.

Installation

pnpm add @agent-ledger/sdk-ts
# or
npm install @agent-ledger/sdk-ts
# or
yarn add @agent-ledger/sdk-ts

Runtime requirements

  • Node.js 18 or newer (for the built-in fetch implementation). If you run older Node versions, polyfill fetch before importing the SDK.
  • An Agent Ledger API key generated from the dashboard (Settings → API Keys).
  • Outbound HTTPS access to https://agent-ledger-api.azurewebsites.net (or your self-hosted instance).

Getting started

import { AgentLedgerClient, BudgetGuardrailError } from "@agent-ledger/sdk-ts";

const ledger = new AgentLedgerClient({
  apiKey: process.env.AGENT_LEDGER_API_KEY!,
});

export async function runSupportAgent(prompt: string) {
  const sessionId = await ledger.startSession("support-bot");

  try {
    // 1. Run your own LLM/tool logic
    const response = await callModel(prompt);

    // 2. Log the LLM call (Agent Ledger auto-computes spend from provider/model/tokens)
    await ledger.logLLMCall(sessionId, {
      stepIndex: 0,
      provider: "openai",
      model: "gpt-4o-mini",
      prompt,
      response: response.text,
      tokensIn: response.usage.inputTokens,
      tokensOut: response.usage.outputTokens,
      latencyMs: response.latencyMs,
    });

    await ledger.endSession(sessionId, "success");
    return response.text;
  } catch (err) {
    if (err instanceof BudgetGuardrailError) {
      console.warn("Budget exceeded", err.details);
    }
    await ledger.endSession(sessionId, "error", { errorMessage: (err as Error).message });
    throw err;
  }
}

Session lifecycle

  1. Start sessions early with startSession(agentName) to capture every downstream event.
  2. Log events whenever you call an LLM or tool:
    • logLLMCall for prompts/responses.
    • logToolCall for tool invocations (store the inputs).
    • logToolResult for tool responses (store outputs/latency).
    • logEvents if you need to batch arbitrary event objects.
  3. End sessions with endSession(sessionId, "success" | "error", { errorMessage? }) so the dashboard knows whether the run finished cleanly.

Tip: keep a simple helper that wraps this flow so every agent in your repo emits consistent telemetry.
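One such helper could look like this. It is a hypothetical sketch, not part of the SDK: `LedgerLike` just narrows the `AgentLedgerClient` from the getting-started example down to the two lifecycle methods used here.

```typescript
// Hypothetical lifecycle wrapper: every agent run gets a session that
// is always closed with the right status, no matter how the run ends.
type LedgerLike = {
  startSession(agentName: string): Promise<string>;
  endSession(
    sessionId: string,
    status: "success" | "error",
    opts?: { errorMessage?: string }
  ): Promise<void>;
};

async function withSession<T>(
  ledger: LedgerLike,
  agentName: string,
  run: (sessionId: string) => Promise<T>
): Promise<T> {
  const sessionId = await ledger.startSession(agentName);
  try {
    const result = await run(sessionId);
    await ledger.endSession(sessionId, "success");
    return result;
  } catch (err) {
    await ledger.endSession(sessionId, "error", {
      errorMessage: (err as Error).message,
    });
    throw err;
  }
}
```

Usage mirrors the getting-started example: `const text = await withSession(ledger, "support-bot", (id) => runSteps(id));`.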

Event reference

| Event | Required fields | Optional fields | Notes |
| --- | --- | --- | --- |
| LlmCallEvent | stepIndex, model, provider, prompt, response, tokensIn, tokensOut, latencyMs | — | logLLMCall automatically sets type to llm_call and lets the backend price the call based on provider/model. |
| ToolCallEvent | stepIndex, toolName, toolInput | — | Capture the structured input you sent to an internal or external tool. |
| ToolResultEvent | stepIndex, toolName, toolOutput, latencyMs | — | Use together with ToolCallEvent to understand tool latency and result size. |
| Custom | Whatever your workflow needs plus type | — | Supply via logEvents if you want to store derived signals (examples: session_start, session_end, guardrail_trigger). |

Conventions:

  • stepIndex is a zero-based counter that makes it easy to diff runs. Increment it in the order events happen, even if multiple tools share the same LLM output.
  • Keep prompts/responses under 64 KB per event so they render nicely in the dashboard diff view.
  • All numeric values are stored as numbers (no strings) so the API can aggregate cost statistics.
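A tiny counter keeps the zero-based convention honest across mixed LLM and tool events (a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper: hand out zero-based step indexes in event order,
// so llm_call / tool_call / tool_result events stay diffable across runs.
function makeStepCounter(): () => number {
  let next = 0;
  return () => next++;
}

const nextStep = makeStepCounter();
// nextStep() → 0 for the first event, 1 for the second, and so on.
```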

API reference

new AgentLedgerClient(options)

| Option | Type | Description |
| --- | --- | --- |
| apiKey | string (required) | Workspace API key from the dashboard. |

startSession(agentName: string): Promise<string>

Creates a session row and returns its UUID. agentName should match how you identify the workflow in the dashboard (e.g., support-bot, retrieval-worker).

endSession(sessionId, status, opts?)

Marks the session closed. Pass { errorMessage } for failures so the UI shows context next to the run.

logEvents(sessionId, events)

Lowest-level ingestion helper. Accepts an array of plain objects, so you can batch multiple events into a single network call. Events must include a type string (e.g., llm_call).
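For example, a tool call and its result can travel in one batch. The field shapes follow the Event reference above; `ledger` and `sessionId` are assumed from the earlier examples, and the tool name and values are illustrative:

```typescript
// One batched ingestion call instead of two round-trips. Each plain
// object only needs a `type` string; the rest follows the event tables.
const events = [
  {
    type: "tool_call",
    stepIndex: 1,
    toolName: "search",
    toolInput: { query: "npm stats" },
  },
  {
    type: "tool_result",
    stepIndex: 1,
    toolName: "search",
    toolOutput: { hits: 3 },
    latencyMs: 120,
  },
];

// await ledger.logEvents(sessionId, events);
```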

logLLMCall(sessionId, event) / logToolCall / logToolResult

Typed helpers that:

  • Fill the type automatically.
  • Validate required fields at compile time.
  • Call logEvents under the hood.

Types exported

AgentLedgerClient, AgentLedgerClientOptions, BudgetGuardrailError, BudgetGuardrailDetails, LlmCallEvent, ToolCallEvent, ToolResultEvent, AnyEvent, EventType.

Error handling

  • BudgetGuardrailError (HTTP 429): thrown when the backend refuses the event because the agent exceeded its daily limit. Inspect error.details:

    {
      agentName: string;
      dailyLimitUsd: number;
      spentTodayUsd: number;
      attemptedCostUsd: number;
      projectedCostUsd: number;
      remainingBudgetUsd: number;
    }
  • Generic Error: wraps any other non-2xx response (startSession, endSession, logEvents). The .message contains the server-provided text when available.

Recommended practice: catch errors where you call logEvents so your business logic can continue (or at least emit a structured failure) even when the telemetry call is rejected.
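That practice can be sketched as a small wrapper. Note this is a self-contained sketch: with the real SDK you would test `err instanceof BudgetGuardrailError`; the name check here only avoids importing the package.

```typescript
// Hedged sketch: keep business logic alive when telemetry is rejected,
// while still letting guardrail blocks halt the run. With the real SDK,
// use `err instanceof BudgetGuardrailError` instead of the name check.
async function safeLog(
  send: () => Promise<void>,
  onGuardrail: (err: Error) => void
): Promise<void> {
  try {
    await send();
  } catch (err) {
    if (err instanceof Error && err.name === "BudgetGuardrailError") {
      onGuardrail(err); // budget exceeded: the caller decides how to stop
      return;
    }
    // Other telemetry failures are logged and swallowed so the agent
    // keeps running even when the ingestion endpoint rejects the event.
    console.warn("telemetry failed:", err instanceof Error ? err.message : err);
  }
}
```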

Configuration & environments

  • Provide AGENT_LEDGER_API_KEY (or load it from your preferred secrets manager) and the SDK connects to the hosted API automatically.
  • Default endpoint → https://agent-ledger-api.azurewebsites.net.
  • For local API experiments, keep the SDK untouched and proxy traffic through your own tooling (MSW, mock servers, etc.).

Because the client is stateless, you can instantiate one per agent type or share a singleton across the entire app.

Recipes

Streaming agents / multi-step workflows

Reuse a monotonically increasing stepIndex while you stream partial responses. You can emit interim tool calls before the final LLM response lands to visualize branching logic.

Custom tool instrumentation

async function callWeather(sessionId: string, city: string, stepIndex: number) {
  await ledger.logToolCall(sessionId, {
    stepIndex,
    toolName: "weather",
    toolInput: { city },
  });

  const result = await fetchWeather(city);

  await ledger.logToolResult(sessionId, {
    stepIndex,
    toolName: "weather",
    toolOutput: result,
    latencyMs: result.latencyMs,
  });
}

Handling guardrail blocks

try {
  await ledger.logLLMCall(sessionId, event);
} catch (err) {
  if (err instanceof BudgetGuardrailError) {
    await ledger.endSession(sessionId, "error", {
      errorMessage: `Budget exceeded: remaining ${err.details.remainingBudgetUsd}`,
    });
    return;
  }
  throw err;
}

Testing & local dev

  • The SDK performs real HTTP requests. For unit tests, stub global.fetch or intercept calls with tools like MSW.
  • When running the Agent Ledger API locally, ensure your test key exists in the development database and export it via AGENT_LEDGER_API_KEY.
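A minimal fetch stub along these lines works for unit tests (assuming Node.js 18+ so `Response` is global; the URL and canned payload are illustrative, not the API's real response shape):

```typescript
// Hedged sketch for unit tests: swap global fetch for a stub that records
// requests and returns a canned JSON body, so nothing hits the network.
const recorded: { url: string; method?: string }[] = [];
const realFetch = globalThis.fetch;

globalThis.fetch = (async (input: RequestInfo | URL, init?: RequestInit) => {
  recorded.push({ url: String(input), method: init?.method });
  return new Response(JSON.stringify({ id: "session-123" }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}) as typeof fetch;

// Anything that calls fetch now gets the canned response (e.g. the SDK's
// startSession). Shown here with a direct call for illustration:
const res = await fetch(
  "https://agent-ledger-api.azurewebsites.net/v1/sessions",
  { method: "POST" }
);
const body = (await res.json()) as { id: string };

globalThis.fetch = realFetch; // always restore after the test
```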

License

MIT