
mcpblox

v0.1.3


A programmable MCP proxy that takes any existing MCP server and a natural language transform prompt, and produces a new MCP server whose tools are reshaped by that prompt.

  • Rename tools, reformat outputs, change schemas
  • Hide tools you don't need
  • Compose multiple upstream tools into new synthetic tools
  • Chain instances via Unix pipes into multi-stage transform pipelines

All without modifying the original server.

Installation

# Run directly with npx
npx mcpblox --upstream "your-mcp-server" --prompt "your transform"

# Or install globally
npm install -g mcpblox

Quick Start

# 1. Proxy an MCP server unchanged (transparent pass-through)
mcpblox --upstream "npx @modelcontextprotocol/server-filesystem /tmp" --api-key $ANTHROPIC_API_KEY

# 2. Preview what transforms the LLM would apply (no server started)
mcpblox \
  --upstream "npx @modelcontextprotocol/server-filesystem /tmp" \
  --prompt "Rename read_file to cat and hide write_file" \
  --api-key $ANTHROPIC_API_KEY \
  --dry-run

# 3. Run with transforms applied
mcpblox \
  --upstream "npx @modelcontextprotocol/server-filesystem /tmp" \
  --prompt "Rename read_file to cat and hide write_file" \
  --api-key $ANTHROPIC_API_KEY

# 4. Point any MCP host at http://localhost:8000/mcp
curl http://localhost:8000/health

# 5. Chain transforms via Unix pipes
mcpblox --upstream "npx @modelcontextprotocol/server-filesystem /tmp" --prompt "Hide write_file" \
  | mcpblox --prompt "Rename read_file to cat" \
  | mcpblox --prompt "Format outputs as markdown"

How It Works

┌──────────┐      ┌─────────────────────────────────────────┐      ┌───────────┐
│          │      │                 mcpblox                 │      │           │
│   MCP    │◄────►│  ┌────────┐  ┌───────────┐  ┌─────────┐ │◄────►│ Upstream  │
│   Host   │ HTTP │  │Exposed │  │ Transform │  │Upstream │ │stdio/│ MCP       │
│          │      │  │Server  │──│ Engine    │──│Client   │ │ HTTP │ Server    │
└──────────┘      │  └────────┘  └─────┬─────┘  └─────────┘ │      └───────────┘
                  │                   │                     │
                  │             ┌─────▼─────┐               │
                  │             │    LLM    │               │
                  │             │ (startup  │               │
                  │             │  codegen) │               │
                  │             └───────────┘               │
                  └─────────────────────────────────────────┘

At startup, mcpblox:

  1. Connects to the upstream MCP server and discovers its tools
  2. Sends your transform prompt + tool definitions to an LLM
  3. The LLM produces a transform plan (which tools to modify, hide, pass through, or compose into new synthetic tools)
  4. For each modified tool, the LLM generates JavaScript transform functions
  5. Generated code runs in a sandboxed vm context (no filesystem/network access)
  6. Results are cached — subsequent startups with the same prompt skip the LLM entirely
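The plan itself is an internal format, but for a prompt like "Rename read_file to cat and hide write_file" it would conceptually look something like this (an illustrative sketch only; the field names here are assumptions, not mcpblox's actual schema):

```json
{
  "tools": [
    { "upstream": "read_file", "exposedName": "cat", "action": "modify" },
    { "upstream": "write_file", "action": "hide" },
    { "upstream": "list_directory", "action": "passthrough" }
  ]
}
```

Running with --dry-run prints the real plan as JSON, which is the easiest way to see the actual shape.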

At runtime, tool calls flow through the transform pipeline: input args are transformed, the upstream tool is called, and the output is transformed before returning to the host. Pass-through tools are proxied directly with no overhead.
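The runtime flow above can be sketched as a simple wrapper; this is an illustrative model, not mcpblox's actual internals, and makeTransformedTool and the fake upstream are hypothetical names:

```javascript
// Sketch of the per-call transform pipeline: a transformed tool wraps the
// upstream call between an input transform and an output transform;
// pass-through tools would simply skip both steps.
function makeTransformedTool(upstreamCall, transformInput, transformOutput) {
  return async function callTool(args) {
    const upstreamArgs = transformInput ? transformInput(args) : args;
    const result = await upstreamCall(upstreamArgs);
    return transformOutput ? transformOutput(result) : result;
  };
}

// Example: a hypothetical "cat" tool that renames its argument
// and uppercases the upstream result.
const fakeUpstream = async ({ path }) => `contents of ${path}`;
const cat = makeTransformedTool(
  fakeUpstream,
  (args) => ({ path: args.file }),  // input transform: file -> path
  (out) => out.toUpperCase()        // output transform
);

cat({ file: '/tmp/a.txt' }).then((out) => console.log(out));
// "CONTENTS OF /TMP/A.TXT"
```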

CLI Reference

mcpblox [options]

Upstream (required unless stdin is a pipe):
  --upstream <command>         Upstream MCP server as stdio command
                               e.g., "npx @modelcontextprotocol/server-filesystem /tmp"
  --upstream-url <url>         Upstream MCP server as HTTP/SSE URL
  --upstream-token <token>     Bearer token for HTTP upstream (env: MCP_UPSTREAM_TOKEN)

Transform:
  --prompt <text>              Transform prompt (inline)
  --prompt-file <path>         Transform prompt from file

LLM:
  --provider <name>            LLM provider: anthropic | openai (default: anthropic)
  --model <id>                 LLM model ID (default: claude-sonnet-4-20250514 / gpt-4o)
  --api-key <key>              LLM API key (env: ANTHROPIC_API_KEY | OPENAI_API_KEY)

Server:
  --port <number>              HTTP server port (default: 8000, or 0 for OS-assigned when piped)

Cache:
  --cache-dir <path>           Cache directory (default: .mcpblox-cache)
  --no-cache                   Disable caching, regenerate on every startup

Other:
  --dry-run                    Show the transform plan as JSON without starting the server
  --verbose                    Verbose logging (generated code, cache keys, tool call details)

Without --prompt, mcpblox runs as a transparent proxy — all tools pass through unchanged.

Examples

Rename and restructure tools:

mcpblox \
  --upstream "npx @mcp/server-github" \
  --prompt "Rename search_repositories to find_repos. For list_issues, add a max_results parameter (default 10) that truncates the output."

Format outputs:

mcpblox \
  --upstream "uvx mcp-server-yfinance" \
  --prompt "Format all numeric values in tool outputs with thousand separators and 2 decimal places. Prefix currency values with $."

Hide tools you don't need:

mcpblox \
  --upstream "npx @modelcontextprotocol/server-filesystem /tmp" \
  --prompt "Hide write_file, create_directory, and move_file. Only expose read-only tools."

Synthetic tools (compose upstream tools into new ones):

mcpblox \
  --upstream "uvx yfinance-mcp" \
  --prompt-file period-returns.txt \
  --port 18500

The prompt creates a get_period_returns tool that calls yfinance_get_price_history four times (for 1-month, 3-month, 6-month, and 12-month periods), parses the results, and returns calculated returns for a given stock ticker — all orchestrated in a single tool call.
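The referenced period-returns.txt is not shown in the README; a prompt along these lines (hypothetical content) would describe that composition:

```text
Create a new tool called get_period_returns that takes a stock ticker.
For each of the 1-month, 3-month, 6-month, and 12-month periods, call
yfinance_get_price_history for that ticker, parse the first and last
closing prices, and compute the percentage return. Return all four
returns in a single JSON object keyed by period.
```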

Connect to an HTTP/SSE upstream instead of stdio:

# Proxy an already-running MCP server over HTTP
mcpblox --upstream-url http://localhost:3000/mcp --api-key $ANTHROPIC_API_KEY

# With bearer token authentication
mcpblox --upstream-url http://localhost:3000/mcp \
  --upstream-token $MCP_TOKEN \
  --prompt "Hide admin tools" \
  --api-key $ANTHROPIC_API_KEY

Load a complex prompt from a file:

mcpblox \
  --upstream "uvx yfinance-mcp" \
  --prompt-file transforms.txt \
  --api-key $ANTHROPIC_API_KEY

Chain instances via Unix pipes:

# Each instance reads its upstream URL from stdin and writes its own URL to stdout.
# Only the first instance needs --upstream.
mcpblox --upstream "node stock-server.js" --prompt "Add a max_results param to search" \
  | mcpblox --prompt "Format prices as USD with commas" \
  | mcpblox --prompt "Add caching hints to descriptions"

# Or feed an upstream URL via echo:
echo "http://localhost:3000/mcp" \
  | mcpblox --prompt "Hide admin tools" \
  | mcpblox --prompt "Format outputs as markdown"

When stdout is a pipe, mcpblox binds to an OS-assigned port and writes its URL (e.g. http://localhost:57403/mcp) to stdout. The next instance reads that URL from stdin. Use --port to override the auto-assigned port.
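The port-selection rule can be modeled as a small pure function; choosePort is a hypothetical helper used only to illustrate the behavior described above, not part of mcpblox's API:

```javascript
// When stdout is a pipe, default to port 0 so the OS assigns a free port;
// an explicit --port always wins. A real process would pass
// process.stdout.isTTY as the second argument.
function choosePort(cliPort, stdoutIsTTY) {
  if (cliPort !== undefined) return cliPort; // explicit --port overrides
  return stdoutIsTTY ? 8000 : 0;             // 0 => OS-assigned when piped
}

console.log(choosePort(undefined, false)); // piped: 0 (OS-assigned)
console.log(choosePort(undefined, true));  // interactive: default 8000
console.log(choosePort(18500, false));     // explicit port wins: 18500
```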

Chain manually with explicit ports:

# First instance: modify tool schemas
mcpblox --upstream "node stock-server.js" --prompt "Add a max_results param to search" --port 8001 &

# Second instance: format the output of the first
mcpblox --upstream-url http://localhost:8001/mcp --prompt "Format prices as USD with commas" --port 8002

Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| /mcp | POST | MCP protocol (StreamableHTTP) |
| /health | GET | Health check — returns {"status":"ok","tools":<count>} |

Caching

Transforms are cached to disk in .mcpblox-cache/ (configurable with --cache-dir). The cache key is the hash of your transform prompt combined with the hash of the upstream tool schemas. If either changes, the cache auto-invalidates.

Use --no-cache to force regeneration. Use --dry-run to preview the plan without starting the server.

Security

LLM-generated transform code runs in a restricted Node.js vm context with no access to the filesystem, network, process environment, or module system. The sandbox provides only data-manipulation primitives (JSON, Math, String, Array, etc.) with a 5-second execution timeout for input/output transforms and a 30-second timeout for synthetic tool orchestration.

Synthetic tool orchestration code receives a callTool bridge function that restricts calls to only the upstream tools declared in the tool's plan — it cannot call arbitrary tools or access anything outside the sandbox.
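An allowlisted bridge like that can be sketched as follows; makeCallToolBridge is a hypothetical helper, since the README only states that the bridge restricts calls to declared tools:

```javascript
// Sketch: the bridge checks the requested tool name against the tools
// declared in the synthetic tool's plan before forwarding upstream.
function makeCallToolBridge(allowedTools, callUpstream) {
  return async function callTool(name, args) {
    if (!allowedTools.includes(name)) {
      throw new Error(`tool not declared in plan: ${name}`);
    }
    return callUpstream(name, args);
  };
}

const upstream = async (name, args) => ({ name, args });
const bridge = makeCallToolBridge(['yfinance_get_price_history'], upstream);

// Declared tool: forwarded upstream.
bridge('yfinance_get_price_history', { ticker: 'AAPL', period: '1mo' });

// Undeclared tool: rejected before it ever reaches the upstream server.
bridge('delete_everything', {}).catch((err) => console.log(err.message));
```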

Note: Node.js vm is not a full security boundary — it's sufficient for LLM-generated code in a trusted-user context, not for arbitrary untrusted input.