
agentstower

v0.1.2


Package to measure AI agent performance and spending.

Readme

🏰 AgentsTower

npm v0.1.2 · License: ISC · TypeScript

AgentsTower is a powerful Node.js library designed to track and monitor AI agent performance, spending, and response times. It provides seamless integration with OpenAI API models, making it easy to monitor your AI applications' efficiency and costs. Optimize your AI operations and track your progress directly on AgentsTower.com!

Features

  • 📊 Performance Tracking: Monitor response times and execution metrics
  • 💰 Cost Monitoring: Track spending across different AI providers
  • 🔒 API Key Validation: Built-in security with API key validation
  • 🛡️ Error Handling: Graceful error handling and logging
  • 📝 Flexible Prompt Tracking: Support for various prompt formats
  • 🔄 Provider Agnostic: Designed to work with OpenAI API models
  • 🎯 TypeScript Support: Full TypeScript support with type definitions
  • 📈 Real-time Analytics: Monitor your AI usage in real-time

Installation

npm install agentstower
# or
yarn add agentstower
# or
pnpm add agentstower

Quick Start

import { AgentTower } from 'agentstower';
import OpenAI from 'openai';

// Initialize your AI provider
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Initialize AgentTower
const agentTower = new AgentTower({
  apiKey: process.env.AGENTSTOWER_API_KEY,
});

Tracking OpenAI Usage with agentTower.track

1. When using chat.completions.create

Use track() by wrapping the API call in a function:

const chat = await agentTower.track(
  () => openai.chat.completions.create({
    model: 'gpt-4',
    messages: messages,
  })
);
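To make the wrapper pattern concrete, here is a minimal, self-contained sketch of what a track()-style wrapper conceptually does: run the provided function, time it, and pass the result through unchanged. `trackSketch` and the fake response shape are illustrative assumptions, not AgentTower's actual implementation.

```typescript
// Illustrative sketch only: times an async call and returns its result untouched.
async function trackSketch<T>(fn: () => Promise<T> | T): Promise<{ result: T; durationMs: number }> {
  const start = Date.now();
  const result = await fn(); // run the wrapped provider call
  const durationMs = Date.now() - start;
  // A real tracker would report durationMs (and usage) to its backend here.
  return { result, durationMs };
}

// Stand-in for a provider call, instead of a real OpenAI request:
const tracked = await trackSketch(async () => ({ choices: [{ text: 'ok' }] }));
```

Because the result is passed through, the tracked call can be used exactly like the untracked one.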

2. When using beta.threads.runs.retrieve

Track the usage only after the run status is "completed":

// Wait for the run to complete
const runStatus = await openai.beta.threads.runs.retrieve(threadId, runId);
// Make sure runStatus.status === 'completed' before tracking usage
await agentTower.track(() => runStatus);

3. When using beta.threads.runs.stream

Track usage on the 'runStepDone' event using the snapshot:

stream.on('runStepDone', (runStep, snapshot) => {
  agentTower.track(
    () => snapshot,
    'gpt-4o' // model (required for streaming responses)
  );
});

⚠️ Important: For streaming responses, you must provide the model name manually (e.g., 'gpt-4o'), as it can't be inferred from the snapshot.

Tracking Gemini UsageMetadata with agentTower.track

1. When using chat.sendMessage (non-streaming)

Use track() by passing the usageMetadata after the call completes:

const response = await chat.sendMessage({
  message: [contentMessage]
});

// Track the usage with AgentTower
await agentTower.track(
  () => response.usageMetadata,
  'gemini-2.5-flash-preview-04-17' // specify model name manually
);

2. When using streaming responses

Track usage after the final chunk with available usageMetadata:

let usageMetadata = null;

for await (const chunk of response) {
  // Your logic for handling the response...

  // Save the latest usage metadata
  if (chunk.usageMetadata) {
    usageMetadata = chunk.usageMetadata;
  }

  // The rest of your code...
}

// Track the usage with AgentTower
await agentTower.track(
  () => usageMetadata,
  'gemini-2.5-flash-preview-04-17' // model is required for Gemini streaming
);

⚠️ Important: Just like with OpenAI streaming, for Gemini you must provide the model name manually (e.g., 'gemini-2.5-flash-preview-04-17'), since it cannot be inferred from the usage metadata.
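The "keep the last usageMetadata seen" pattern above can be exercised end to end with a fake stream. The `Chunk` shape and `fakeStream` below are stand-ins for a real Gemini streaming response (`totalTokenCount` mirrors a Gemini usage field, but the values here are invented).

```typescript
// Stand-in for a Gemini streaming response: only some chunks carry usageMetadata.
interface Chunk {
  text: string;
  usageMetadata?: { totalTokenCount: number };
}

async function* fakeStream(): AsyncGenerator<Chunk> {
  yield { text: 'Hel' };
  yield { text: 'lo', usageMetadata: { totalTokenCount: 12 } };
}

let lastUsage: Chunk['usageMetadata'] | null = null;
let fullText = '';
for await (const chunk of fakeStream()) {
  fullText += chunk.text; // your logic for handling the response
  if (chunk.usageMetadata) {
    lastUsage = chunk.usageMetadata; // keep the most recent usage metadata
  }
}
```

After the loop, `lastUsage` holds the final chunk's metadata, which is what you would hand to `agentTower.track()` along with the model name.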

Usage Control with Limits using agentTower.checkLimit()

try {
  // Check the limit first; if it has been exceeded, this line throws.
  await agentTower.checkLimit();
  // This code only runs if the limit has NOT been reached.
  const response = await llm.apiCall(); // e.g., call your LLM provider
} catch (error) {
  // Note: this block also catches errors thrown by the provider call itself.
  throw new Error(`You've reached your usage limit. Visit agentstower.com to reset your limit or upgrade your plan.`);
}
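The guard pattern above can be sketched with a purely local limit checker. `LimitGuardSketch` is a hypothetical stand-in for AgentTower's server-side limit check; it only demonstrates the throw-before-work control flow.

```typescript
// Hypothetical local sketch of a checkLimit()-style guard (not AgentTower's implementation):
// throw before doing any work once a usage counter reaches its limit.
class LimitGuardSketch {
  constructor(private used = 0, private readonly limit = 2) {}

  async checkLimit(): Promise<void> {
    if (this.used >= this.limit) {
      throw new Error('Usage limit reached');
    }
    this.used += 1; // record this call against the limit
  }
}

const guard = new LimitGuardSketch();
let blocked = false;
for (let i = 0; i < 3; i++) {
  try {
    await guard.checkLimit();
    // ...the provider call would go here...
  } catch {
    blocked = true; // the third call exceeds the limit of 2
  }
}
```

Checking before the provider call means a blocked request costs nothing: the expensive call is never issued.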

🔧 API Reference

AgentTower Constructor

| Parameter | Type | Description |
| --- | --- | --- |
| apiKey | string | Your AgentTower API key for authentication |

track Method

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| fn | () => Promise<T> | Yes | The async function to track (your AI provider call). |
| model | string | Only for streaming | The model name (e.g., "gpt-4", "gemini-pro"). |

Returns: Promise<T> - The original function's response with tracking data
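The table above implies a generic signature that preserves the provider's response type. The following type-level sketch is an assumption drawn from the table, with a trivial passthrough implementation just to make it runnable; it is not the library's code.

```typescript
// Sketch of the track() signature described above: the generic parameter
// preserves the provider's response type, so the tracked call is typed
// exactly like the untracked one. `Track` and `trackPassthrough` are illustrative.
type Track = <T>(fn: () => Promise<T> | T, model?: string) => Promise<T>;

// Trivial passthrough satisfying the signature (no real tracking performed):
const trackPassthrough: Track = async (fn, _model) => fn();

const value = await trackPassthrough(() => Promise.resolve(42), 'gpt-4');
```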

Tracked Metrics

| Metric | Description | Example |
| --- | --- | --- |
| 📓 Number of Tokens Used | Tokens consumed by the AI call | { tokens: 100 } |
| ⏱️ Execution Time | Start, end, and duration of the AI call | { start: 1234567890, end: 1234567990, duration: 100 } |
| 🔄 Provider Info | Provider and model information | { provider: "openai", model: "gpt-4" } |
| ❌ Error Info | Error details if any | { error: "Error message" } |
| 💰 Cost Metrics | Usage and cost information | { tokens: 100, cost: 0.002 } |

Security Features

🔑 API Key Validation

  • Pre-execution validation
  • Secure key storage
  • Automatic key rotation

🔒 Data Protection

  • HTTPS encryption
  • No sensitive data visibility or logging
  • Secure communication

🛡️ Error Handling

  • Graceful degradation
  • Detailed error logging
  • Fallback mechanisms

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the ISC License - see the LICENSE file for details.

Support

🐛 Issue Tracker

💬 Community


Built with ❤️ by Vista Platforms