
@useleash/agent

v0.1.4


Keep your AI agents on a leash. Real-time spending limits in 3 lines of code.


@useleash/agent

Client-side spending guardrails for LLM agents. Wrap your OpenAI or Anthropic client, set a dollar cap, and Leash tracks usage and stops calls once the limit is hit.

Zero runtime dependencies — bring your own openai or @anthropic-ai/sdk (or any compatible client shape).

Requirements

  • Node.js 18+ (uses global fetch)

Install

npm install @useleash/agent

Create an API key in the Leash dashboard and pass it as apiKey.

Quick start (OpenAI)

import OpenAI from 'openai'
import { Leash, LeashError } from '@useleash/agent'

const leash = new Leash({
  apiKey: process.env.LEASH_API_KEY!,
  maxCostUsd: 0.5,
})

const openai = leash.wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY! }))

try {
  const res = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }],
  })
  console.log(res.choices[0]?.message?.content)
  console.log('Spent so far:', leash.totalCost)
} catch (err) {
  if (err instanceof LeashError) {
    console.error('Limit reached:', err.current, '/', err.limit)
  } else {
    throw err
  }
}

Anthropic

import Anthropic from '@anthropic-ai/sdk'
import { Leash } from '@useleash/agent'

const leash = new Leash({
  apiKey: process.env.LEASH_API_KEY!,
  maxCostUsd: 1,
})

const anthropic = leash.wrapAnthropic(
  new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! })
)

const msg = await anthropic.messages.create({
  model: 'claude-sonnet-4-6',
  max_tokens: 256,
  messages: [{ role: 'user', content: 'Hi.' }],
})
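Conceptually, both wrappers intercept the client's create call and run a spend check before the request goes out. The sketch below is a hypothetical illustration of that Proxy pattern, not Leash's actual implementation; `wrapCreate` and `guard` are invented names for this example.

```typescript
// Hypothetical sketch: wrap an API object so a guard runs before each
// create() call. If the guard throws, the provider is never called.
type CreateFn = (...args: any[]) => Promise<unknown>

function wrapCreate<T extends { create: CreateFn }>(api: T, guard: () => void): T {
  return new Proxy(api, {
    get(target, prop, receiver) {
      if (prop === 'create') {
        return (...args: any[]) => {
          guard() // may throw before any tokens are spent
          return target.create(...args)
        }
      }
      return Reflect.get(target, prop, receiver)
    },
  })
}
```

A real wrapper would also read the response's usage field after the call to update the running cost total.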

Options

| Option | Type | Description |
|--------|------|-------------|
| apiKey | string | Leash API key (sent as x-api-key to the ingest service). |
| maxCostUsd | number | Maximum estimated spend (USD) for this process before calls throw. |
| ingestUrl | string (optional) | Base URL for event ingestion. Defaults to the hosted Leash app; override for self-hosted deployments. |

API

  • new Leash(opts) — Configure limits and tracking.
  • leash.wrapOpenAI(client, runId?) — Proxies client.chat.completions.create.
  • leash.wrapAnthropic(client, runId?) — Proxies client.messages.create.
  • await leash.completeRun(runId?) — Marks the run completed in the dashboard (only if it is still active, so it will not override a stopped status). Prefer passing runId explicitly (the same value you passed to wrapOpenAI(client, runId)); if omitted, the last tracked runId is used. Call it in a finally block when your agent exits normally. Throws if the HTTP request fails (wrong key, unreachable ingest service, etc.).
  • Limit hit — When spend reaches maxCostUsd, the SDK sets the run to stopped before throwing LeashError.
  • leash.totalCost / leash.maxCost — Current estimated spend and cap.
  • LeashError — Thrown when the cap is reached (before or after a call, depending on usage). Exposes current and limit (USD).
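The cap-then-throw behavior described above can be sketched as a small tracker. This is a hypothetical re-implementation for illustration; `SpendTracker`, `CapError`, and `track` are invented names, not part of the `@useleash/agent` API.

```typescript
// Minimal sketch of the limit behavior: accumulate estimated cost per
// call and throw once the cap is reached (not Leash's actual source).
class CapError extends Error {
  constructor(public current: number, public limit: number) {
    super(`Spend ${current} reached limit ${limit}`)
  }
}

class SpendTracker {
  totalCost = 0
  constructor(public maxCost: number) {}

  // Record one call's estimated cost; throw once the cap is crossed.
  track(costUsd: number): void {
    this.totalCost += costUsd
    if (this.totalCost >= this.maxCost) {
      throw new CapError(this.totalCost, this.maxCost)
    }
  }
}
```

Note the check happens after the cost is added, so the final call that crosses the cap still counts toward totalCost.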

Costs are estimated from built-in per-token rates for a fixed set of models; unknown models fall back to Sonnet 4-6-style pricing.
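A per-token estimator of this shape can be sketched as follows. The rates and the `estimateCostUsd` helper here are illustrative assumptions, not Leash's actual pricing table.

```typescript
// Hypothetical cost estimator: look up per-million-token rates by model
// id, falling back to a default rate for unknown models.
type Rate = { inputPerMTok: number; outputPerMTok: number }

const RATES: Record<string, Rate> = {
  // Example rate only; check the provider's pricing page for real values.
  'gpt-4o-mini': { inputPerMTok: 0.15, outputPerMTok: 0.6 },
}

// Unknown models fall back to a Sonnet-style rate (illustrative numbers).
const FALLBACK: Rate = { inputPerMTok: 3, outputPerMTok: 15 }

function estimateCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model] ?? FALLBACK
  return (inputTokens / 1e6) * rate.inputPerMTok + (outputTokens / 1e6) * rate.outputPerMTok
}
```

Because estimates come from a static table, actual billed amounts can drift from leash.totalCost when providers change prices.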

Try a multi-step agent (OpenAI)

From the packages/sdk directory, with OPENAI_API_KEY and LEASH_API_KEY set in .env (the demo also reads LEASH_TEST_API_KEY for older local setups):

npm run demo:agent

Optional environment variables: LEASH_INGEST_URL (your deployed app origin) and LEASH_MAX_USD (spend cap). After the demo runs, confirm the events under its runId in the Leash dashboard.

License

MIT