agentstower
v0.1.2
Package to measure AI agent performance and spending.
# 🏰 AgentsTower
AgentsTower is a Node.js library for tracking and monitoring AI agent performance, spending, and response times. It integrates with OpenAI API models, making it easy to monitor your AI applications' efficiency and costs. Optimize your AI operations and track your progress directly on AgentsTower.com!
## Features

- 📊 Performance Tracking: Monitor response times and execution metrics
- 💰 Cost Monitoring: Track spending across different AI providers
- 🔒 API Key Validation: Built-in security with API key validation
- 🛡️ Error Handling: Graceful error handling and logging
- 📝 Flexible Prompt Tracking: Support for various prompt formats
- 🔄 Provider Agnostic: Designed to work with OpenAI API models
- 🎯 TypeScript Support: Full TypeScript support with type definitions
- 📈 Real-time Analytics: Monitor your AI usage in real-time
## Installation

```bash
npm install agentstower
# or
yarn add agentstower
# or
pnpm add agentstower
```

## Quick Start
```typescript
import { AgentTower } from 'agentstower';
import OpenAI from 'openai';

// Initialize your AI provider
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Initialize AgentTower
const agentTower = new AgentTower({
  apiKey: process.env.AGENTSTOWER_API_KEY,
});
```

## Tracking OpenAI Usage with agentTower.track
### 1. When using chat.completions.create

Use track() by wrapping the API call in a function:

```typescript
const chat = await agentTower.track(
  () => openai.chat.completions.create({
    model: 'gpt-4',
    messages: messages,
  })
);
```

### 2. When using beta.threads.runs.retrieve
Track the usage only after the run status is "completed":

```typescript
// Wait for the run to complete
const runStatus = await openai.beta.threads.runs.retrieve(threadId, runId);

// Make sure runStatus.status === 'completed' before tracking usage
await agentTower.track(() => runStatus);
```

### 3. When using beta.threads.runs.stream
Track usage on the 'runStepDone' event using the snapshot:

```typescript
stream.on('runStepDone', (runStep, snapshot) => {
  agentTower.track(
    () => snapshot,
    'gpt-4o' // model (required for streaming responses)
  );
});
```

> ⚠️ Important: For streaming responses, you must provide the model name manually (e.g., 'gpt-4o'), as it can't be inferred from the snapshot.
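The `stream` above comes from the Assistants streaming API (e.g. `openai.beta.threads.runs.stream(...)`). The call order can be sketched without any network access using a stand-in event emitter and a stubbed tracker (`FakeTracker` is hypothetical, for illustration only):

```typescript
import { EventEmitter } from 'node:events';

// Hypothetical stand-in for agentTower with the same track(fn, model) shape.
// The real track is async; this sketch is synchronous for clarity.
class FakeTracker {
  calls: Array<{ snapshot: unknown; model?: string }> = [];
  track<T>(fn: () => T, model?: string): T {
    const snapshot = fn();
    this.calls.push({ snapshot, model });
    return snapshot;
  }
}

const tracker = new FakeTracker();
const stream = new EventEmitter(); // stand-in for openai.beta.threads.runs.stream(...)

stream.on('runStepDone', (runStep, snapshot) => {
  // The model name is passed explicitly, since it can't be read off the snapshot.
  tracker.track(() => snapshot, 'gpt-4o');
});

// Simulate the SDK emitting the event at the end of a run step.
stream.emit('runStepDone', { id: 'step_1' }, { usage: { total_tokens: 42 } });
```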
## Tracking Gemini usageMetadata with agentTower.track
### 1. When using chat.sendMessage (non-streaming)

Use track() by passing the usageMetadata after the call completes:

```typescript
const response = await this.chat.sendMessage({
  message: [contentMessage]
});

// Track the usage with AgentTower
await agentTower.track(
  () => response.usageMetadata,
  'gemini-2.5-flash-preview-04-17' // specify model name manually
);
```

### 2. When using streaming responses
Track usage after the final chunk that carries usageMetadata:

```typescript
let usageMetadata = null;

for await (const chunk of response) {
  // Your logic for handling the response...

  // Save the latest usage metadata
  if (chunk.usageMetadata) {
    usageMetadata = chunk.usageMetadata;
  }

  // The rest of your code...
}

// Track the usage with AgentTower
await agentTower.track(
  () => usageMetadata,
  'gemini-2.5-flash-preview-04-17' // model is required for Gemini streaming
);
```

> ⚠️ Important: Just like with OpenAI streaming, for Gemini you must provide the model name manually (e.g., 'gemini-2.5-flash-preview-04-17'), since it cannot be inferred from the usage metadata.
## Usage Control with Limits using agentTower.checkLimit()

```typescript
try {
  // First, check the limit. If it's exceeded, this line will throw an error
  await agentTower.checkLimit();

  // Your logic here. This code only runs if the limit has NOT been reached.
  const response = await llm.apiCall(); // e.g., call your LLM provider
} catch (error) {
  // This block catches the error thrown by checkLimit().
  // Note: it also catches errors thrown by the API call itself.
  throw new Error(`You've reached your usage limit. Visit agentstower.com to reset your limit or upgrade your plan.`);
}
```

## 🔧 API Reference
### AgentTower Constructor
| Parameter | Type | Description |
| --- | --- | --- |
| apiKey | string | Your AgentTower API key for authentication |
### track Method
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| fn | () => Promise<T> | Yes | The async function to track (your AI provider call). |
| model | string | Only for streaming | The model name (e.g., "gpt-4", "gemini-pro"). |
**Returns:** `Promise<T>` - the original function's response, with tracking data recorded
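Because `track` is generic over the wrapped call, the response type passes through unchanged. That pass-through shape can be sketched with a minimal timing wrapper (`trackSketch` is a hypothetical illustration of the pattern, not the library's internals):

```typescript
// Sketch of the track(fn, model) shape: run the call, time it, return the result unchanged.
async function trackSketch<T>(fn: () => Promise<T>, model?: string): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    const duration = Date.now() - start;
    console.log({ model, duration }); // where metrics would be reported
    return result;
  } catch (error) {
    console.log({ model, error: String(error) }); // errors are recorded, then rethrown
    throw error;
  }
}

// The caller gets the original response back, same as calling fn() directly.
const chat = await trackSketch(() => Promise.resolve({ text: 'hello' }), 'gpt-4');
```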
### Tracked Metrics
| Metric | Description | Example |
| --- | --- | --- |
| 📓 Number of Tokens Used | Token usage for the AI call | { tokens: 100 } |
| ⏱️ Execution Time | Start, end, and duration of the AI call | { start: 1234567890, end: 1234567990, duration: 100 } |
| 🔄 Provider Info | Provider and model information | { provider: "openai", model: "gpt-4" } |
| ❌ Error Info | Error details if any | { error: "Error message" } |
| 💰 Cost Metrics | Usage and cost information | { tokens: 100, cost: 0.002 } |
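The cost figure in the last row is the token count times a per-token rate. As an illustration (the $0.00002-per-token rate below is assumed for the example, not an actual provider price):

```typescript
// Illustrative pricing only; real rates vary by provider and model.
const PRICE_PER_TOKEN = 0.00002; // assumed $0.02 per 1K tokens

function estimateCost(tokens: number): number {
  return tokens * PRICE_PER_TOKEN;
}

// 100 tokens at the assumed rate is about $0.002, matching the table's example.
console.log(estimateCost(100));
```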
## Security Features

### 🔑 API Key Validation
- Pre-execution validation
- Secure key storage
- Automatic key rotation

### 🔒 Data Protection
- HTTPS encryption
- No sensitive data visibility or logging
- Secure communication

### 🛡️ Error Handling
- Graceful degradation
- Detailed error logging
- Fallback mechanisms
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

This project is licensed under the ISC License - see the LICENSE file for details.

## Support

- 🐛 Issue Tracker
- 💬 Community

Built with ❤️ by Vista Platforms
