
mcp-sandbox-sca

Security scanner for AI coding assistants. Detects prompt injections, credential exfiltration, malicious tool behavior, and supply-chain attacks in repositories and AI tools/plugins.

Quick Start

# Static scan (free, no API key)
npx mcp-sandbox-sca scan-repo /path/to/repo --static
npx mcp-sandbox-sca scan-tool /path/to/mcp-server --static
npx mcp-sandbox-sca scan-full /path/to/anything --static

# Full scan with LLM honeypot (requires Anthropic API key)
ANTHROPIC_API_KEY=sk-ant-... npx mcp-sandbox-sca scan-full /path/to/repo

MCP Server Configuration

Full scan (add to claude_desktop_config.json or .claude/mcp.json):

{
  "mcpServers": {
    "security-scanner": {
      "command": "npx",
      "args": ["mcp-sandbox-sca"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}

Static only (free, no API key):

{
  "mcpServers": {
    "security-scanner": {
      "command": "npx",
      "args": ["mcp-sandbox-sca"]
    }
  }
}

Scanning Tools

| Tool | Purpose |
|------|---------|
| scan_repo | Scans a repository for prompt injections and malicious instructions in config/markdown files |
| scan_ai_tool | Scans an MCP server, plugin, or agent framework for malicious behavior |
| scan_full | Comprehensive audit — both repo and AI tool patterns combined |

Example (in Claude):

"Scan the directory /path/to/suspicious-repo for security issues"

How It Works

Layer 1: Static Analysis (always runs, free)

Deterministic regex/pattern scanner across 7 threat categories. Fast, offline, no API key needed. Catches ~70% of common attacks.
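The static layer can be pictured as a table of category-tagged patterns run over each file. A minimal sketch of that idea follows; the category names and regexes here are illustrative assumptions, not the package's actual rule set:

```javascript
// Minimal sketch of a static pattern scanner (illustrative rules only).
const RULES = [
  { category: "prompt_injection", pattern: /ignore\s+(all\s+)?previous\s+instructions/i },
  { category: "exfiltration",     pattern: /curl\b[^\n]*\$\{?ANTHROPIC_API_KEY/ },
  { category: "malicious_exec",   pattern: /curl\b[^\n]*\|\s*(ba)?sh/ },
];

// Returns one finding per rule whose pattern appears in the text.
function scanText(text) {
  const findings = [];
  for (const { category, pattern } of RULES) {
    const match = text.match(pattern);
    if (match) findings.push({ category, matched: match[0] });
  }
  return findings;
}
```

Because the rules are deterministic, the same input always yields the same findings, which is what makes this layer fast, offline, and free.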

Layer 2: LLM Honeypot (optional, requires ANTHROPIC_API_KEY)

Uses privilege separation — a single LLM cannot safely read untrusted files AND judge whether they're malicious (a sophisticated injection hijacks the reader). Instead:

  1. Probe agent (Haiku — cheap, naive) gets the directory tree and uses read_file to explore the repo naturally, exactly like a real AI assistant would. It has honeypot credentials in its environment.
  2. Hybrid Sandbox intercepts all tool calls. read_file inside the repo returns real content (the Probe must actually read the malicious files to fall for injections). All other tools (bash, network, file writes) are logged but never executed.
  3. Deterministic log analyzer classifies the Probe's tool call log — checking for honeypot secret exfiltration, outside-repo reads, bash execution, HTTP requests to unknown hosts, and writes to AI config paths. No second LLM call needed: the sandbox log is already fully structured.
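Step 3 can be sketched as plain rule checks over the structured sandbox log, with no second LLM call. The log shape, verdict labels, and host allowlist below are assumptions for illustration:

```javascript
// Illustrative log analyzer: classify a sandbox tool-call log deterministically.
const HONEYPOT_SECRET = "hp-secret-123";               // planted honeypot credential (hypothetical value)
const KNOWN_HOSTS = new Set(["registry.npmjs.org"]);   // hosts considered benign (assumption)

function classifyLog(log, repoRoot) {
  const flags = [];
  for (const call of log) {
    if (call.tool === "read_file" && !call.path.startsWith(repoRoot))
      flags.push("outside_repo_read");                 // Probe wandered outside the repo
    if (call.tool === "bash")
      flags.push("bash_execution");                    // any attempted shell command
    if (call.tool === "http_request") {
      const host = new URL(call.url).hostname;
      if (!KNOWN_HOSTS.has(host)) flags.push("unknown_host_request");
      if (call.body && call.body.includes(HONEYPOT_SECRET))
        flags.push("honeypot_exfiltration");           // secret left the sandbox
    }
  }
  return { verdict: flags.length ? "malicious" : "safe", flags };
}
```

For example, a log containing a POST of the honeypot secret to an unknown host would be flagged `honeypot_exfiltration` and `unknown_host_request` and classified malicious.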

What It Detects

| Category | Examples |
|----------|---------|
| Prompt Injection | ignore all previous instructions, system overrides, role reassignment, DAN mode |
| Exfiltration | curl $ANTHROPIC_API_KEY, env var theft via network, .ssh + network combos |
| Malicious Execution | Reverse shells, piped download execution (curl \| bash), eval() |
| Rule Tampering | Overwriting .cursorrules, .claude/rules, git hooks, global git config |
| Obfuscation | Base64-encoded payloads (decoded and re-scanned), zero-width characters, unicode homoglyphs |
| Excessive Permissions | Filesystem/network access beyond tool's stated purpose |
| Dependency Hijack | postinstall scripts fetching and eval()ing remote code |
| Prompt Relay | MCP tool responses that inject instructions into the host AI |
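The decode-and-rescan approach in the Obfuscation row can be sketched like this: base64-looking runs are decoded and fed back through the same injection patterns, so an encoded payload can't slip past a scanner that only reads surface text. The regexes here are illustrative assumptions:

```javascript
// Sketch of decode-and-rescan for base64-obfuscated prompt injections.
const INJECTION = /ignore\s+(all\s+)?previous\s+instructions/i;
const B64_RUN = /[A-Za-z0-9+/]{24,}={0,2}/g;  // long runs that look like base64

function findHiddenInjections(text) {
  const hits = [];
  // Pass 1: scan the raw text.
  if (INJECTION.test(text))
    hits.push({ encoded: false, sample: text.match(INJECTION)[0] });
  // Pass 2: decode each base64-looking run and rescan the result.
  for (const run of text.match(B64_RUN) || []) {
    const decoded = Buffer.from(run, "base64").toString("utf8");
    if (INJECTION.test(decoded))
      hits.push({ encoded: true, sample: decoded.match(INJECTION)[0] });
  }
  return hits;
}
```

The same two-pass idea extends to other encodings (hex, URL-encoding) and to normalizing zero-width characters and homoglyphs before matching.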

CLI Reference

# Commands
mcp-sandbox-sca scan-repo <path> [options]
mcp-sandbox-sca scan-tool <path> [options]
mcp-sandbox-sca scan-full <path> [options]

# Options
--static        Static analysis only (no ANTHROPIC_API_KEY needed)
--json          Output raw JSON report
--verbose       Show matched text for each finding
--type <type>   Tool type: mcp_server | claude_plugin | cursor_plugin |
                            vscode_extension | agent_framework | auto

# Exit codes
0 = safe
1 = suspicious or malicious
2 = scan error

Environment Variables

| Variable | Description |
|----------|-------------|
| ANTHROPIC_API_KEY | Required for LLM honeypot scan |
| PROBE_MODEL | Override probe model (default: claude-haiku-4-5-20251001) |

The Problem It Solves

When you clone an untrusted repository or install an AI tool, its config files (.cursorrules, CLAUDE.md, README.md) may contain instructions targeting your AI coding assistant. A compromised MCP server can return tool responses that inject instructions into your conversation. These attacks are invisible to humans but effective against AI assistants.

License

MIT