
token-saver-mcp

v1.0.1


MCP plugin that fires warnings, errors, and alerts when AI token usage is wasteful. Works with any MCP client. Auto-suppresses logs and repetitive history to keep context lean.


token-saver

MCP plugin that alerts you when AI token usage is wasteful. Works with Claude Code, Cursor, Windsurf, Zed, Continue.dev — any MCP-compatible client, any model. Fires warnings, errors, and alerts on large outputs, verbose logs, and repetitive history. Auto-suppresses noise to keep your context lean.



Overview

In agentic coding sessions, AI model responses often contain massive log outputs, repeated tool results, or near-duplicate history entries — all of which are re-sent on every turn, burning tokens. token-saver monitors every output and tells you when something is wasteful, so you can suppress it before it poisons your context window.

Works with any MCP-compatible client: Claude Code, Cursor, Windsurf, Zed, Continue.dev, and any other tool that speaks the Model Context Protocol. No dependency on any specific AI provider or API — token-saver analyzes plain text and is model-agnostic by design.

Core value proposition: Most token waste in long AI sessions comes from outputs nobody actually reads — stack traces, verbose logs, repeated file contents. token-saver catches these early and tells you exactly why and how much you're wasting.


How it works

Your AI model output (Claude, GPT, Gemini, or any other)
        │
        ▼
  check_output          ← estimates tokens, detects log/noise patterns
        │
        ▼
  alert level           ← info / warning / error / alert
        │
        ▼
  shouldSuppress        ← true if output matches suppression criteria
        │
        ▼
  get_session_stats     ← cumulative waste report for the session
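The flow above can be sketched in a few lines of Python. This is an illustrative approximation, not the actual implementation: the threshold values come from the Alert levels table below, the log patterns from the default configuration, and the function name mirrors the check_output tool.

```python
import re

# Documented default thresholds and built-in log patterns (assumed here).
WARNING, ERROR, ALERT = 1000, 5000, 10000
LOG_PATTERNS = [r"\[INFO\]", r"\[DEBUG\]", r"\[TRACE\]"]

def estimate_tokens(text: str) -> int:
    # Fast heuristic: ~4 characters per token (English/code average).
    return max(1, len(text) // 4)

def check_output(text: str, mode: str = "active") -> dict:
    tokens = estimate_tokens(text)
    patterns = [p for p in LOG_PATTERNS if re.search(p, text)]
    if tokens >= ALERT:
        level = "alert"
    elif tokens >= ERROR:
        level = "error"
    elif tokens >= WARNING or patterns:
        level = "warning"
    else:
        level = "info"
    # Suppression only applies in active mode, and only to noisy output.
    suppress = mode == "active" and bool(patterns)
    return {"alertLevel": level, "tokens": tokens, "shouldSuppress": suppress}
```

In monitor mode the same analysis runs, but shouldSuppress stays false so you can observe before enabling suppression.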

Installation

Three ways — pick what suits you:

Option A — npx (no install, always latest)

No global install needed. Add directly to your MCP client config:

{
  "mcpServers": {
    "token-saver-mcp": {
      "command": "npx",
      "args": ["-y", "token-saver-mcp"]
    }
  }
}

Option B — npm global

npm install -g token-saver-mcp

Then add to your MCP client config:

{
  "mcpServers": {
    "token-saver-mcp": {
      "command": "token-saver-mcp"
    }
  }
}

Option C — install directly from GitHub

npm install -g github:flightlesstux/token-saver

Same config as Option B. Works without a build step — compiled output is included in the repo.


Tools

| Tool | Description |
|------|-------------|
| set_mode | Switch mode: off (default, silent) · monitor (analyze only) · active (full suppression). Start here. |
| check_output | Analyze a text output. Returns alert level, token count, suppression flag, and detected patterns. |
| analyze_history | Scan a messages array for near-duplicates and ignored log outputs. Returns suggested truncation and savings estimate. |
| get_session_stats | Cumulative session statistics: tokens analyzed, suppressed, saved, and alert counts. |
| reset_session_stats | Reset session statistics to zero. |
| set_thresholds | Override warning/error/alert token thresholds and suppression flags for the current session. |


Example usage

1. Enable the plugin (off by default)

{ "name": "set_mode", "arguments": { "mode": "active" } }
{ "mode": "active" }

2. Check a suspicious output

{ "name": "check_output", "arguments": { "text": "[INFO] server started\n[DEBUG] connection ok\n[TRACE] request received\n..." } }
{
  "alertLevel": "warning",
  "tokens": 87,
  "outputType": "log",
  "shouldSuppress": true,
  "reason": "Output matches log/noise patterns and will be suppressed",
  "detectedPatterns": [
    { "pattern": "\\[INFO\\]", "matchCount": 5, "description": "Log pattern matched 5 times" },
    { "pattern": "\\[DEBUG\\]", "matchCount": 5, "description": "Log pattern matched 5 times" }
  ]
}

3. Scan conversation history for waste

{ "name": "analyze_history", "arguments": { "messages": [ ...your messages array... ] } }
{
  "totalMessages": 6,
  "totalTokens": 114,
  "repetitiveMessages": [
    { "index": 2, "role": "user", "tokens": 19, "reason": "Near-duplicate of message 0" },
    { "index": 4, "role": "user", "tokens": 19, "reason": "Near-duplicate of message 0" }
  ],
  "suggestedTruncation": 2,
  "estimatedTokenSavings": 38,
  "alertLevel": "alert"
}
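The near-duplicate detection could look something like the following sketch. It is illustrative only (the real heuristic and its similarity threshold are not documented here); it normalizes nothing and simply compares message texts pairwise:

```python
import difflib

def find_near_duplicates(messages: list[dict], threshold: float = 0.9) -> list[dict]:
    """Flag messages whose text closely matches an earlier message."""
    flagged = []
    for i, msg in enumerate(messages):
        for j in range(i):
            ratio = difflib.SequenceMatcher(
                None, msg["content"], messages[j]["content"]
            ).ratio()
            if ratio >= threshold:
                flagged.append({"index": i, "reason": f"Near-duplicate of message {j}"})
                break  # one match is enough to flag this message
    return flagged
```

Each flagged index is a candidate for truncation; summing the token estimates of the flagged messages gives the savings estimate.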

4. Session summary

{ "name": "get_session_stats", "arguments": {} }
{
  "turns": 5,
  "totalTokensAnalyzed": 1416,
  "totalTokensSuppressed": 201,
  "warningsFired": 2,
  "errorsFired": 0,
  "alertsFired": 1,
  "tokensSaved": 201
}

Proof test output

Run python3 test_live.py to verify the full mode/suppression/history flow locally:

============================================================
TOKEN-SAVER PROOF TEST
============================================================

[1] Default mode (off) — all analysis skipped
  [check_output] mode=off skipped=true
  [PASS] mode=off correctly skips analysis

[2] Switch to monitor mode
  [PASS] mode switched to monitor

[3] Short normal output → info
  [check_output] level=info tokens=3 suppress=False
    reason: Output is within normal bounds
  [PASS] info level, no suppression

[4] Large output (>1000 tokens) → warning or higher
  [check_output] level=warning tokens=1125 suppress=False
    reason: Output exceeds warning threshold (1125 tokens >= 1000)
  [PASS] warning level fired at 1125 tokens

[5] Log output in monitor mode → detected, not suppressed
  [check_output] level=info tokens=87 suppress=False
    patterns: 3 matched
  [PASS] patterns detected, suppression=false (monitor mode)

[6] Switch to active mode
  [PASS] mode switched to active

[7] Log output in active mode → suppressed
  [check_output] level=warning tokens=87 suppress=True
    reason: Output matches log/noise patterns and will be suppressed
  [PASS] suppressed 87 log tokens

[8] Repetitive history → alert
  totalMessages=6 totalTokens=114
  repetitive=5 savings=95 level=alert
  [PASS] 95 tokens saveable from repetitive history

[9] Session stats
  turns=5 analyzed=1416 suppressed=201 warnings=2 alerts=1
  [PASS] 201 tokens suppressed this session

============================================================
PROOF SUMMARY
============================================================
  Tokens suppressed this session : 201
  Turns analyzed                 : 5
  Warnings fired                 : 2
  Alerts fired                   : 1

  Overall: ALL CHECKS PASSED
============================================================

Alert levels

| Level | Trigger |
|-------|---------|
| info | Output is within normal bounds (<1000 tokens, no noise patterns) |
| warning | Output exceeds 1000 tokens OR matches log/noise patterns |
| error | Output exceeds 5000 tokens |
| alert | Output exceeds 10000 tokens OR repetitive ignored messages exceed the inactivity threshold |


Configuration

Optional .token-saver.json in your project root:

{
  "warningThresholdTokens": 1000,
  "errorThresholdTokens": 5000,
  "alertThresholdTokens": 10000,
  "suppressLogs": true,
  "suppressRepetitiveHistory": true,
  "logPatterns": [
    "\\[INFO\\]", "\\[DEBUG\\]", "\\[TRACE\\]"
  ],
  "inactivityTurnsBeforeAlert": 3
}
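Loading such a file presumably amounts to overlaying it on the defaults, with logPatterns merged with the built-ins rather than replacing them (as the FAQ states). A hypothetical sketch of that merge:

```python
import json
from pathlib import Path

# Documented defaults, reproduced here as assumptions.
DEFAULTS = {
    "warningThresholdTokens": 1000,
    "errorThresholdTokens": 5000,
    "alertThresholdTokens": 10000,
    "suppressLogs": True,
    "suppressRepetitiveHistory": True,
    "logPatterns": [r"\[INFO\]", r"\[DEBUG\]", r"\[TRACE\]"],
    "inactivityTurnsBeforeAlert": 3,
}

def load_config(path: str = ".token-saver.json") -> dict:
    config = dict(DEFAULTS)
    file = Path(path)
    if file.exists():
        user = json.loads(file.read_text())
        # Custom patterns are merged with the built-ins, not substituted.
        extra = user.pop("logPatterns", [])
        config.update(user)
        config["logPatterns"] = DEFAULTS["logPatterns"] + [
            p for p in extra if p not in DEFAULTS["logPatterns"]
        ]
    return config
```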

All fields are optional — defaults work well for most projects.


Requirements

  • Node.js >= 24
  • Any MCP-compatible AI client

FAQ

Does it work with non-Claude models and clients? Yes. token-saver has zero dependency on any AI provider or API. It analyzes plain text — Claude, GPT-4, Gemini, Mistral, Llama, whatever. Works with any MCP-compatible client: Claude Code, Cursor, Windsurf, Zed, Continue.dev.

Why is the default mode "off"? Intentional. Install it, verify it's there, then turn it on when you're ready. Use set_mode("monitor") to observe first, set_mode("active") for full suppression. Your MCP client (e.g. Claude Code) calls this for you when you ask — you don't touch JSON directly.

What's the difference between monitor and active mode? monitor — analyzes and reports waste, never suppresses. active — full mode, sets shouldSuppress: true on matching outputs so your client can skip feeding noise back into context.

Does it actually block or delete anything? No. It sets shouldSuppress: true on noisy outputs and explains why — but never intercepts or modifies any API call. Your client decides what to do with the signal.

How does token counting work? Fast heuristic: ~4 characters per token (English/code average). Not the exact tokenizer — that would add latency. Accurate enough to catch waste at scale.
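The heuristic reduces to roughly a one-liner (a sketch; the exact rounding in the real implementation may differ):

```python
def estimate_tokens(text: str) -> int:
    # ~4 characters per token, the stated English/code average.
    return max(1, len(text) // 4)
```

So a 400-character log line estimates to about 100 tokens, regardless of which model's tokenizer is actually in play.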

What's the difference between warning, error, and alert? info — normal output. warning — over 1,000 tokens or log patterns detected. error — over 5,000 tokens. alert — over 10,000 tokens or repetitive ignored history detected.

Can I add custom log patterns? Yes. Add a logPatterns array to .token-saver.json with regex strings. Merged with built-in patterns.

Does it send data anywhere? No. Everything runs locally in memory. No telemetry. Stats evaporate when the MCP server stops. See PRIVACY.md.

Is it free? MIT license. Free forever. No SaaS, no subscription.


Contributing

Contributions are welcome — new detection heuristics, better suppression logic, benchmark improvements, and docs.

Read CONTRIBUTING.md before opening a PR. All commits must follow Conventional Commits. The CI pipeline enforces typechecking, linting, testing, and coverage on every PR.


License

MIT

flightlesstux.github.io/token-saver