

Metrx MCP Server


Your AI agents are wasting money. Metrx finds out how much, and fixes it.

The official MCP server for Metrx — the AI Agent Cost Intelligence Platform. Give any MCP-compatible agent (Claude, GPT, Gemini, Cursor, Windsurf) the ability to track its own costs, detect waste, optimize model selection, and prove ROI.

Why Metrx?

| Problem | What Metrx Does |
|---------|-----------------|
| No visibility into agent spend | Real-time cost dashboards per agent, model, and provider |
| Overpaying for LLM calls | Provider arbitrage finds cheaper models for the same task |
| Runaway costs | Budget enforcement with auto-pause when limits are hit |
| Wasted tokens | Cost leak scanner detects retry storms, context bloat, and model mismatch |
| Can't prove AI ROI | Revenue attribution links agent actions to business outcomes |

Quick Start

One-command install (Claude Desktop, Cursor, Windsurf)

```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"],
      "env": {
        "METRX_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```

Get your free API key at app.metrxbot.com/sign-up.

Verify your setup

After installing, verify your API key works:

```bash
METRX_API_KEY=sk_live_your_key_here npx @metrxbot/mcp-server --test
```

You should see ✓ Connection successful! and a ready-to-paste MCP client config.

Remote HTTP endpoint

For remote agents (no local install needed):

```http
POST https://metrxbot.com/api/mcp
Authorization: Bearer sk_live_your_key_here
Content-Type: application/json
```
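MCP over HTTP speaks JSON-RPC 2.0, so a tool invocation against this endpoint is a `tools/call` request. A minimal sketch of building that request body — the tool name and the `period_days` argument mirror the examples later in this README; the exact argument schema for each tool is defined by the server:

```typescript
// Sketch: build a JSON-RPC 2.0 `tools/call` request body for the remote
// MCP endpoint. `tools/call` is the standard MCP method name.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  name: string,
  args: Record<string, unknown>,
  id = 1
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const body = buildToolCall("metrx_get_cost_summary", { period_days: 7 });

// Send it with fetch (requires a valid METRX_API_KEY):
//   fetch("https://metrxbot.com/api/mcp", {
//     method: "POST",
//     headers: {
//       Authorization: `Bearer ${process.env.METRX_API_KEY}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify(body),
//   });
console.log(JSON.stringify(body));
```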

From npm

```bash
npm install @metrxbot/mcp-server
```

23 Tools Across 10 Domains

Dashboard (2 tools)

| Tool | Description |
|------|-------------|
| `metrx_get_cost_summary` | Total spend, call counts, error rates, agent breakdown, and optimization opportunities |
| `metrx_list_agents` | All agents with status, category, cost metrics, and health indicators |

Optimization (4 tools)

| Tool | Description |
|------|-------------|
| `metrx_get_provider_arbitrage` | Compare costs across providers and find cheaper alternatives |
| `metrx_get_revenue_intelligence` | Revenue per agent with confidence scores and ROI metrics |
| `metrx_get_token_guardrails` | Token limit recommendations and overflow detection |
| `metrx_get_model_recommendations` | Model switching recommendations based on cost, latency, and quality |

Budgets (4 tools)

| Tool | Description |
|------|-------------|
| `metrx_create_budget` | Create monthly/daily budgets with hard, soft, or monitor enforcement |
| `metrx_update_budget` | Update limits, frequency, or enforcement mode |
| `metrx_list_budgets` | All budgets with current spend vs. limits |
| `metrx_delete_budget` | Remove a budget (historical data preserved) |
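The README states that budgets support monthly/daily frequency and hard, soft, or monitor enforcement. A sketch of validating a budget options object before passing it to `metrx_create_budget` — note the field names (`agent_id`, `limit_usd`, `frequency`, `enforcement`) are hypothetical illustrations, not the server's documented schema:

```typescript
// Sketch with hypothetical field names: validate budget options against the
// frequencies and enforcement modes this README describes.
type Frequency = "monthly" | "daily";
type Enforcement = "hard" | "soft" | "monitor";

interface BudgetOptions {
  agent_id: string;       // hypothetical field name
  limit_usd: number;      // hypothetical field name
  frequency: Frequency;
  enforcement: Enforcement;
}

function validateBudget(opts: BudgetOptions): string[] {
  const errors: string[] = [];
  if (opts.limit_usd <= 0) errors.push("limit_usd must be positive");
  if (!["monthly", "daily"].includes(opts.frequency)) errors.push("invalid frequency");
  if (!["hard", "soft", "monitor"].includes(opts.enforcement)) errors.push("invalid enforcement");
  return errors;
}

// A well-formed budget produces no errors:
console.log(
  validateBudget({ agent_id: "agent_123", limit_usd: 100, frequency: "monthly", enforcement: "hard" })
); // → []
```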

Alerts (3 tools)

| Tool | Description |
|------|-------------|
| `metrx_create_alert_policy` | Alert on cost overages, error rates, latency spikes, and anomalies |
| `metrx_update_alert_policy` | Update thresholds, channels, enable/disable |
| `metrx_list_alerts` | Active alerts and current status per agent |

Experiments (2 tools)

| Tool | Description |
|------|-------------|
| `metrx_start_experiment` | A/B test comparing two LLM models with traffic splitting |
| `metrx_get_experiment_results` | Statistical significance, cost delta, and recommended action |

Cost Leak Detector (2 tools)

| Tool | Description |
|------|-------------|
| `metrx_scan_cost_leaks` | Find cost anomalies and waste across your fleet |
| `metrx_analyze_cost_leak` | Deep-dive into a specific anomaly with timeline and root cause |

Attribution (2 tools)

| Tool | Description |
|------|-------------|
| `metrx_attribute_task` | Link agent actions to business outcomes for ROI tracking |
| `metrx_get_attribution_report` | Multi-source attribution report with confidence scores |

ROI & Reporting (2 tools)

| Tool | Description |
|------|-------------|
| `metrx_get_upgrade_justification` | ROI report for tier upgrades based on usage patterns |
| `metrx_generate_roi_audit` | Board-ready ROI audit report |

Alert Configuration (2 tools)

| Tool | Description |
|------|-------------|
| `metrx_configure_alert_threshold` | Set cost/operational thresholds with email, webhook, or auto-pause |
| `metrx_get_failure_predictions` | Predictive analysis that identifies agents likely to fail before it happens |

Prompts

Pre-built prompt templates for common workflows:

| Prompt | Description |
|--------|-------------|
| `analyze-costs` | Comprehensive cost overview: spend breakdown, top agents, optimization opportunities |
| `find-savings` | Discover optimization opportunities: model downgrades, caching, routing |
| `cost-leak-scan` | Scan for waste patterns: retry storms, oversized contexts, model mismatch |

Examples

"How much am I spending?"

```text
User: What was my AI cost this week?
→ metrx_get_cost_summary(period_days=7)

Total Spend: $234.56 | Calls: 2,450 | Error Rate: 0.2%
├── customer-support: $156.23 (1,800 calls)
└── code-generator: $78.33 (650 calls)

💡 Switch customer-support from GPT-4 to Claude Sonnet: Save $42/week
```

"Find me savings"

```text
User: Am I overpaying for my agents?
→ metrx_get_provider_arbitrage(agent_id="agent_123")

Current: GPT-4 @ $15.20/1K calls
Alternative: Gemini 1.5 @ $6.80/1K calls (-55%)
Estimated Savings: $420/month
```

"Test a cheaper model"

```text
User: Test Claude 3.5 Sonnet against my GPT-4 setup
→ metrx_start_experiment(name="Claude Trial", agent_id="agent_123",
    model_a="gpt-4", model_b="claude-3-5-sonnet", traffic_split=10)

Experiment started: 90% GPT-4, 10% Claude 3.5 Sonnet
Check back in 14 days for statistical significance.
```
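In the example above, `traffic_split=10` routes 10% of traffic to `model_b`. How Metrx actually routes traffic is not specified in this README, but one common implementation is a stable hash-based split, which keeps each session pinned to the same arm for the life of the experiment — a sketch:

```typescript
// Sketch (illustrative, not Metrx's documented routing logic): hash the
// session id into a bucket in [0, 100) and compare against the split
// percentage, so a given session always lands in the same experiment arm.
function pickModel(
  sessionId: string,
  modelA: string,
  modelB: string,
  splitPercentB: number
): string {
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  const bucket = hash % 100; // stable bucket in [0, 100)
  return bucket < splitPercentB ? modelB : modelA;
}

// The same session always gets the same arm:
const first = pickModel("sess-42", "gpt-4", "claude-3-5-sonnet", 10);
const second = pickModel("sess-42", "gpt-4", "claude-3-5-sonnet", 10);
console.log(first === second); // true
```

Pinning by session (rather than randomizing per call) matters for A/B validity: mixing models within one conversation would confound the cost and quality comparison.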

Configuration

| Variable | Required | Description |
|----------|----------|-------------|
| `METRX_API_KEY` | Yes | Your Metrx API key (get one free) |
| `METRX_API_URL` | No | Override API base URL (default: https://metrxbot.com/api/v1) |

Rate Limiting

60 requests per minute per tool. For higher limits, contact [email protected].
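At 60 requests per minute per tool, a simple client-side strategy is to space calls at least a second apart and back off exponentially when the server signals rate limiting. A sketch of the delay schedule (the base, cap, and use of HTTP 429 here are conventional choices, not documented Metrx behavior):

```typescript
// Sketch: exponential backoff delays for retrying after a rate-limit
// response. Schedule: 1s, 2s, 4s, 8s, ... capped at 30s.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

console.log(backoffDelayMs(0)); // 1000
console.log(backoffDelayMs(1)); // 2000
console.log(backoffDelayMs(5)); // 30000 (capped)
```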

Development

```bash
git clone https://github.com/metrxbots/mcp-server.git
cd mcp-server
npm install
npm run typecheck
npm test
```

Contributing

See CONTRIBUTING.md for guidelines.

License

MIT — see LICENSE.

💬 Feedback

Did Metrx work for you? We'd love to hear it — good or bad.

If you installed but hit a snag, tell us what happened — we read every report.