
TokenShrinker - MCP Server


Model Context Protocol (MCP) Server - Compresses AI context to reduce token usage.

TokenShrinker provides AI context compression tools via the Model Context Protocol (MCP). It reduces token usage by intelligently summarizing text, files, and repositories for MCP-compatible AI assistants.

Architecture Overview

AI Agent (MCP host)  -->  MCP request  -->  TokenShrinker (MCP server)
       (chat text)                   (shrink / summarize / select)
                                          |
                                          V
                             compressed context (returned)
                                          |
                                          V
                       Agent forwards compressed payload to model backend
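
For reference, the "MCP request" in the diagram is a plain JSON-RPC 2.0 tools/call message from the host; a minimal sketch of what a shrink call looks like on the wire (illustrative, not captured from a real session):

// JSON-RPC request the MCP host sends to TokenShrinker
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "shrink",
    "arguments": { "text": "Your large text content here..." }
  }
}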

Installation

npm install -g token-shrinker
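
A global install puts a token-shrinker binary on your PATH, but you can also run it ad hoc with npx, which is exactly how the MCP client configs below launch it. A quick smoke test (the server communicates over stdio, so it will simply wait for an MCP client to connect):

# Start the MCP server over stdio (Ctrl+C to stop)
npx token-shrinker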

Environment Variables

TokenShrinker supports multiple AI providers! Create a .env file in your project directory:

# Choose your provider (default: openrouter)
echo "AI_PROVIDER=openrouter" >> .env  # Options: openrouter, openai, anthropic

# Provider-specific API keys (choose one based on your AI_PROVIDER)
echo "OPENROUTER_API_KEY=sk-or-v1-your-openrouter-key-here" >> .env
# OR
echo "OPENAI_API_KEY=sk-your-openai-key-here" >> .env
# OR
echo "ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here" >> .env

# Optional: Set your preferred model for your provider
echo "AI_MODEL=anthropic/claude-3.5-sonnet" >> .env

Environment Variables:

Provider Selection:

  • AI_PROVIDER - Choose your AI provider (openrouter, openai, anthropic)
    • Default: openrouter (free tier model)

API Keys (choose based on your provider):

  • OPENROUTER_API_KEY - your OpenRouter key
  • OPENAI_API_KEY - your OpenAI key
  • ANTHROPIC_API_KEY - your Anthropic key

Model Selection:

  • AI_MODEL - Generic model name that works across providers
  • or provider-specific: OPENROUTER_MODEL, OPENAI_MODEL, ANTHROPIC_MODEL
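
For instance, a .env that pins a provider-specific model instead of the generic AI_MODEL (assuming, as the variable names suggest, that the provider-specific form only applies when that provider is active):

AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini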

Examples by Provider:

OpenRouter (Recommended for Free Tier):

AI_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-...
AI_MODEL=meta-llama/llama-4-maverick:free

OpenAI:

AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
AI_MODEL=gpt-4o-mini

Anthropic:

AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
AI_MODEL=claude-3-haiku-20240307

MCP Client Configuration

Claude Desktop

Add to your claude_desktop_config.json:

For OpenRouter (default):

{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "openrouter",
        "OPENROUTER_API_KEY": "sk-or-v1-your-openrouter-key-here",
        "AI_MODEL": "meta-llama/llama-4-maverick:free"
      }
    }
  }
}

For OpenAI:

{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-your-openai-key-here",
        "AI_MODEL": "gpt-4o-mini"
      }
    }
  }
}

For Anthropic:

{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "anthropic",
        "ANTHROPIC_API_KEY": "sk-ant-your-anthropic-key-here",
        "AI_MODEL": "claude-3-haiku-20240307"
      }
    }
  }
}

Cursor/VS Code

Add similar configurations to your MCP settings. You can switch between providers by changing the AI_PROVIDER and corresponding API key environment variables.
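
As a sketch, assuming Cursor reads the same mcpServers shape from its ~/.cursor/mcp.json (check your editor's MCP documentation for the exact file location and key names):

{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "openrouter",
        "OPENROUTER_API_KEY": "sk-or-v1-your-openrouter-key-here"
      }
    }
  }
}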

Dynamic Provider Switching

Once connected, you can switch providers on-the-fly using MCP tools:

# Ask Claude/Cursor to switch providers
"I want to use OpenAI instead of OpenRouter for compression"

# Or switch models
"Use Claude 3.5 Sonnet for better compression quality"

The set-provider, set-api-key, and set-model tools allow you to configure TokenShrinker dynamically through natural language!
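
Those natural-language requests resolve to ordinary MCP tool calls under the hood. A hypothetical sketch of a set-provider input, assuming a shape parallel to set-model below (the README does not document this tool's exact schema):

// Input (hypothetical shape)
{
  "provider": "openai"
}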

Where Files Are Stored

All summaries are saved in a summaries/ directory in your project root:

your-project/
├── src/
│   ├── app.js
│   └── utils.js
├── summaries/
│   ├── src/
│   │   ├── app.js.summary.json
│   │   └── utils.js.summary.json
│   └── .cache.json
├── .env
└── package.json

File Structure:

  • summaries/ - Mirror of your source tree with .summary.json files
  • summaries/.cache.json - Cache metadata (file hashes and timestamps)
  • Summary files contain: compressed text, token counts, compression ratios, and timestamps
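
To make that last bullet concrete, here is a hedged sketch of a single summary file; the field names are illustrative, and only the categories (compressed text, token counts, compression ratio, timestamp) come from this README:

// summaries/src/app.js.summary.json (illustrative field names)
{
  "summary": "Entry point; wires up routes and middleware.",
  "originalTokens": 1840,
  "compressedTokens": 460,
  "compressionRatio": "75%",
  "timestamp": "2025-01-15T12:34:56Z"
}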

Available Tools

TokenShrinker provides the following MCP tools for AI assistants:

shrink

Compress text content to reduce token usage

// Input
{
  "text": "Your large text content here..."
}

// Output
{
  "compressedText": "Shortened version...",
  "compressionRatio": "75%",
  "success": true
}

summarize

Generate summaries for text, files, or entire repositories

// Input
{
  "content": "your content or file path",
  "type": "text" // or "file" or "repo"
}
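
The README doesn't show summarize's output; presumably it mirrors shrink's. A hypothetical sketch:

// Output (hypothetical; mirrors shrink's shape)
{
  "summary": "Condensed version...",
  "compressionRatio": "80%",
  "success": true
}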

fetch-summary

Retrieve cached repository summaries

// Input
{
  "repoPath": "/path/to/repo" // optional, uses current dir
}

set-model

Set your preferred model for the current provider

// Input
{
  "model": "anthropic/claude-3.5-sonnet"
}

// Output
{
  "message": "Model set to: anthropic/claude-3.5-sonnet",
  "model": "anthropic/claude-3.5-sonnet",
  "note": "This setting persists for the current session..."
}

get-config

View current configuration and available models

// Input
{}

// Output
{
  "openRouterApiKey": "configured",
  "currentModel": "meta-llama/llama-4-maverick:free",
  "availableModels": ["anthropic/claude-3.5-sonnet", "openai/gpt-4o", "..."]
}

Usage Examples

When connected to Claude Desktop or Cursor, you can use natural language:

"Can you compress this long code snippet for me?"
"Show me a summary of this entire codebase"
"What's the cached summary of our current repository?"

The MCP server handles everything automatically!

Repository

https://github.com/corbybender/token-shrinker