
pincer-mcp v0.1.5

The secure grip for your agent's secrets: a security-hardened MCP gateway with proxy token authentication.

Pincer MCP 🦀


Pincer-MCP is a security-hardened Model Context Protocol (MCP) gateway that eliminates the "Lethal Trifecta" vulnerability in agentic AI systems. By acting as a stateless intermediary, Pincer ensures agents never see your real API keys.

🔒 The Problem

Current AI agents store long-lived API keys in plain-text .env files or local databases. If an agent is compromised via prompt injection or host intrusion, attackers gain direct access to your:

  • Database passwords
  • Third-party API keys

✨ The Solution: Proxy Token Architecture

Pincer implements a "blindfold" security model:

  1. Agent knows: Only a unique proxy token (pxr_abc123...)
  2. Pincer knows: Mapping of proxy tokens → real API keys (encrypted in OS keychain)
  3. Agent never sees: The actual credentials

sequenceDiagram
    participant Agent
    participant Pincer
    participant Vault as Vault (OS Keychain)
    participant External API

    Agent->>Pincer: tools/call + proxy_token: pxr_abc123
    Pincer->>Vault: Decrypt real API key
    Vault-->>Pincer: gemini_api_key: AIzaSy...
    Pincer->>External API: API call with real key
    External API-->>Pincer: Response
    Pincer->>Pincer: Scrub key from memory
    Pincer-->>Agent: Response (no credentials)

📦 Available Tools

  • gemini_generate: Secure Google Gemini API calls.
  • openai_chat: Chat completions with OpenAI GPT models (gpt-4o, gpt-4-turbo, gpt-3.5-turbo, etc.).
  • openai_list_models: List all available OpenAI models.
  • openai_compatible_chat: Chat completions with any OpenAI-compatible API (Azure OpenAI, Ollama, vLLM, etc.).
  • openai_compatible_list_models: List models from custom OpenAI-compatible endpoints.
  • claude_chat: Chat completions with Anthropic Claude models (Claude 3.5 Sonnet, Opus, Haiku).
  • openrouter_chat: Unified API access to 100+ models from multiple providers (OpenAI, Anthropic, Google, Meta, etc.).
  • openrouter_list_models: List all available models across OpenRouter providers.
  • openwebui_chat: OpenAI-compatible interface for self-hosted LLMs.
  • openwebui_list_models: Discover available models on an OpenWebUI instance.

(More callers coming soon!)

🚀 Quick Start

Prerequisites

  • Node.js 18+
  • macOS, Windows, or Linux with native keychain support

Installation

Option 1: Global Installation (Recommended)

npm install -g pincer-mcp
# Now 'pincer' command is available system-wide

Option 2: Local Development

git clone https://github.com/VouchlyAI/Pincer-MCP.git
cd Pincer-MCP
npm install
npm run build
npm link  # Makes 'pincer' command available locally

Setup Vault

# 1. Initialize vault (creates master key in OS keychain)
pincer init

# 2. Store your real API keys (encrypted)
pincer set gemini_api_key "AIzaSyDpxPq..."
pincer set openai_api_key "sk-proj-..."

# 3. Register an agent and generate proxy token
pincer agent add openclaw
# Output: 🎫 Proxy Token: pxr_V1StGXR8_Z5jdHi6B-myT

# 4. Authorize the agent for specific tools
pincer agent authorize openclaw gemini_generate

Multi-Key Support

Store multiple keys for the same tool and assign them to different agents:

# Store two different Gemini API keys
pincer set gemini_api_key "AIzaSy_KEY_FOR_CLAWDBOT..." --label key1
pincer set gemini_api_key "AIzaSy_KEY_FOR_MYBOT..." --label key2

# View all stored keys
pincer list

# Assign specific keys to each agent
pincer agent add clawdbot
pincer agent authorize clawdbot gemini_generate --key key1

pincer agent add mybot  
pincer agent authorize mybot gemini_generate --key key2

# View agent permissions
pincer agent list

Result: clawdbot uses key1 and mybot uses key2, perfect for per-agent rate limiting or cost tracking!

Run the Server

npm run dev

Configure Your Agent

Give your agent the proxy token (not the real API key):

export PINCER_PROXY_TOKEN="pxr_V1StGXR8_Z5jdHi6B-myT"
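
A TypeScript agent might read the token at startup as in the sketch below; it assumes only that the variable exported above is set and that proxy tokens keep the pxr_ prefix shown earlier.

// Minimal sketch: the agent only ever holds the proxy token, never a real API key.
const proxyToken = process.env.PINCER_PROXY_TOKEN;

if (!proxyToken || !proxyToken.startsWith("pxr_")) {
  throw new Error("PINCER_PROXY_TOKEN is missing or is not a pxr_ proxy token");
}

// Attached to every tools/call request as params._meta.pincer_token
// (see the request format under "Make a Tool Call" below).
const meta = { pincer_token: proxyToken };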

Tool-to-Secret Name Mappings

When storing secrets, you must use the correct secret name for each tool. See the Tool Mappings Guide for a complete reference.

When you run pincer agent authorize myagent gemini_generate, Pincer will inject the gemini_api_key secret when that tool is called.
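
As an illustration only, the mapping implied by the examples in this README could be written as the small lookup below; anything beyond these two entries is not documented here, so treat the Tool Mappings Guide as the authoritative reference.

// Illustrative tool → secret-name mapping. Only these two pairs appear in this
// README; for the full, real list consult the Tool Mappings Guide.
const TOOL_SECRET_NAMES: Record<string, string> = {
  gemini_generate: "gemini_api_key", // stated above
  openai_chat: "openai_api_key",     // implied by the Setup Vault example
};

// The secret Pincer injects for a gemini_generate call:
const secretName = TOOL_SECRET_NAMES["gemini_generate"]; // "gemini_api_key"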

Make a Tool Call

Your agent sends requests with the proxy token in the body:

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "gemini_generate",
    "arguments": {
      "prompt": "Hello world",
      "model": "gemini-2.0-flash"
    },
    "_meta": {
      "pincer_token": "pxr_V1StGXR8_Z5jdHi6B-myT"
    }
  }
}

Pincer maps the proxy token to the real API key and executes the call securely.
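
If your agent is written against the official MCP TypeScript SDK, the same request can be issued programmatically. The sketch below is illustrative rather than definitive: it assumes the @modelcontextprotocol/sdk client API and launches the server with npm run dev from the cloned Pincer-MCP repository, so adjust the command to however you run Pincer.

// Illustrative client sketch — the server launch command and SDK usage are
// assumptions; adapt them to your setup.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const proxyToken = process.env.PINCER_PROXY_TOKEN;
  if (!proxyToken) throw new Error("PINCER_PROXY_TOKEN not set");

  // Assumes this script runs from the cloned Pincer-MCP directory.
  const transport = new StdioClientTransport({ command: "npm", args: ["run", "dev"] });

  const client = new Client({ name: "openclaw", version: "1.0.0" });
  await client.connect(transport);

  const result = await client.callTool({
    name: "gemini_generate",
    arguments: { prompt: "Hello world", model: "gemini-2.0-flash" },
    _meta: { pincer_token: proxyToken }, // proxy token only — never a real key
  });

  console.log(result);
  await client.close();
}

main().catch(console.error);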

🏗️ Architecture

Two-Tiered Vault System

Tier 1: Master Key (OS Keychain)

  • Stored in macOS Keychain, Windows Credential Manager, or GNOME Keyring
  • Never touches the filesystem
  • Accessed only for encryption/decryption

Tier 2: Encrypted Store (SQLite)

  • Database at ~/.pincer/vault.db
  • Three tables:
    • secrets: Real API keys (AES-256-GCM encrypted)
    • proxy_tokens: Proxy token → Agent ID mappings
    • agent_mappings: Agent ID → Tool authorization
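
The row shapes are not documented here, but conceptually the three tables might look like this TypeScript sketch (all field names are assumptions for illustration):

// Conceptual sketch of the vault tables; the actual columns in
// ~/.pincer/vault.db may differ — this is illustration only.
interface SecretRow {
  name: string;       // e.g. "gemini_api_key"
  label?: string;     // e.g. "key1" when several keys share a name
  ciphertext: Buffer; // AES-256-GCM encrypted API key
  iv: Buffer;         // per-record nonce
  authTag: Buffer;    // GCM authentication tag
}

interface ProxyTokenRow {
  token: string;      // "pxr_..." value handed to the agent
  agentId: string;    // e.g. "openclaw"
}

interface AgentMappingRow {
  agentId: string;    // e.g. "clawdbot"
  toolName: string;   // e.g. "gemini_generate"
  keyLabel?: string;  // e.g. "key1" (multi-key support)
}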

Authentication Flow

Request (_meta.pincer_token: pxr_xxx)
  ↓
Gatekeeper: Extract proxy token from body
  ↓
Vault: Resolve pxr_xxx → agent_id → tool_name → real_api_key
  ↓
Injector: JIT decrypt & inject real key
  ↓
Caller: Execute external API call
  ↓
Scrubber: Overwrite key in memory with zeros
  ↓
Audit: Log to tamper-evident chain
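
In code, that flow reduces to a small pipeline. The sketch below is not Pincer's implementation; it only illustrates the order of operations, and the vault, caller, and audit helpers it takes are assumed interfaces.

// Illustrative pipeline only — helper signatures are assumptions, not Pincer internals.
type ToolCall = {
  name: string;
  arguments: Record<string, unknown>;
  _meta?: { pincer_token?: string };
};

async function handleToolCall(
  req: ToolCall,
  vault: { resolve(token: string, tool: string): Promise<Buffer> }, // JIT-decrypts the real key
  callers: Record<string, (args: Record<string, unknown>, key: string) => Promise<unknown>>,
  audit: { append(entry: object): Promise<void> },
): Promise<unknown> {
  // Gatekeeper: extract and validate the proxy token from the request body.
  const token = req._meta?.pincer_token;
  if (!token || !token.startsWith("pxr_")) throw new Error("Missing or invalid pincer_token");

  // Vault: pxr_xxx → agent_id → tool authorization → decrypted real key.
  const keyBuf = await vault.resolve(token, req.name);
  const started = Date.now();
  try {
    // Injector + Caller: only the external API call ever sees the real key.
    return await callers[req.name](req.arguments, keyBuf.toString("utf8"));
  } finally {
    // Scrubber: zero the key buffer, then Audit: append to the tamper-evident chain.
    keyBuf.fill(0);
    await audit.append({ tool: req.name, duration: Date.now() - started });
  }
}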

🔐 Security & Compliance

Pincer is built for enterprise-grade security:

  • Hardware-Backed Cryptography: Master encryption keys never leave the OS-native keychain.
  • Proxy Token Isolation: Agents only handle ephemeral pxr_ tokens; they never touch real credentials.
  • JIT Decryption: Secrets are decrypted only for the duration of the API call.
  • Zero-Footprint Memory: Sensitive data is scrubbed (zeroed out) from memory immediately after use.
  • Fine-Grained Authorization: Strict per-agent, per-tool access control policies.
  • Tamper-Evident Audit Log: Append-only tool call history with SHA-256 chain-hashing.
  • Hardened Execution: Schema validation on all inputs and protected environment execution.
  • Stdio Compatible: Fully compatible with the standard Model Context Protocol transport.

🔍 Audit Logs

Every tool call is logged to ~/.pincer/audit.jsonl with both UTC and local timestamps, plus character counts and estimated token usage:

{
  "agentId": "openclaw",
  "tool": "gemini_generate",
  "duration": 234,
  "status": "success",
  "input_chars": 156,
  "output_chars": 423,
  "estimated_input_tokens": 39,
  "estimated_output_tokens": 106,
  "timestamp_utc": "2026-02-05T08:32:00.000Z",
  "timestamp_local": "2/5/2026, 2:02:45 PM",
  "chainHash": "a1b2c3d4e5f6g7h8",
  "prevHash": "0000000000000000"
}

Token Estimation: Pincer automatically estimates token usage using a 4:1 character-to-token ratio (~4 characters per token average). This provides consistent cost tracking across all AI providers without relying on provider-specific APIs.
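
In other words, the estimate is just the character count divided by four and rounded up (the exact rounding Pincer uses is not specified here); the 156-character input above maps to 39 tokens and the 423-character output to 106.

// 4:1 character-to-token heuristic behind the audit log estimates (rounding assumed).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

estimateTokens("x".repeat(156)); // 39  — matches estimated_input_tokens above
estimateTokens("x".repeat(423)); // 106 — matches estimated_output_tokens above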


Chain hashes provide tamper detection: any modification to a past entry breaks the SHA-256 chain.
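
A verifier can walk audit.jsonl and recompute each entry's hash from its predecessor. The exact fields, ordering, and truncation Pincer hashes are not documented here, so the sketch below assumes chainHash is the first 16 hex characters of SHA-256 over prevHash concatenated with the entry minus its own chainHash; adapt it to the real scheme.

// Hypothetical chain verifier — the hashed fields and truncation are assumptions;
// check the Pincer source for the actual scheme.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function verifyAuditChain(path = join(homedir(), ".pincer", "audit.jsonl")): boolean {
  let prevHash = "0000000000000000"; // genesis value, as in the first entry above

  for (const line of readFileSync(path, "utf8").split("\n").filter(Boolean)) {
    const { chainHash, ...rest } = JSON.parse(line);

    const expected = createHash("sha256")
      .update(prevHash + JSON.stringify(rest))
      .digest("hex")
      .slice(0, 16);

    if (rest.prevHash !== prevHash || chainHash !== expected) return false;
    prevHash = chainHash;
  }
  return true;
}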

🧪 Development

# Install dependencies
npm install

# Run tests
npm test

# Run with watch mode
npm run dev

# Build for production
npm run build
📚 Documentation

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

📄 License

Apache 2.0 - See LICENSE for details.


Built with ❤️ for a more secure AI future.