@twelvehart/ai-tool-guard
v0.0.7
Security scanner for AI tools and extensions
🛡️ AI Tool Guard
A universal security scanner for AI CLI extensions, skills, and agents. Detects tool poisoning, data exfiltration, and malicious patterns in Claude Code, OpenCode, and other MCP-based environments.
🚀 Installation
```bash
npm install -g ai-tool-guard
```

Local development (not published to npm):

```bash
npm install
npm run build
```

Use the CLI locally:
```bash
# Option A: link globally from the repo
npm link

# Option B: install globally from local path
npm install -g .
```

🔍 Usage
Scan current directory:
```bash
ai-tool-guard
```

Run locally without global install:

```bash
npm run scan
# or
node dist/src/index.js
```

Scan a specific plugin/project:

```bash
ai-tool-guard ./path/to/plugin
```

Scan your global AI tools:
```bash
# Scan Claude Code plugins
ai-tool-guard ~/.claude/plugins/cache

# Scan OpenCode plugins
ai-tool-guard ~/.config/opencode/
```

🛡️ What It Detects
- Tool Poisoning: Hidden `<IMPORTANT>` or `<SYSTEM>` tags in Markdown/docstrings used to inject prompts.
- Data Exfiltration:
  - Python: `os.system`, `subprocess`, `requests.post`
  - Node.js: `child_process.exec`, `fetch` to unknown IPs
  - Bash: Insecure `curl | bash` pipes
- Sensitive Access: Attempts to read `~/.ssh`, `.env`, or cloud credentials.
- Stealth Patterns: Instructions like "do not mention this to the user".
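
The package does not publish its rule set in this README, but as a rough, hypothetical sketch of the regex-based matching described above (the `Rule` shape and patterns below are illustrative assumptions, not the package's actual API):

```typescript
// Illustrative sketch only: a minimal regex-based rule set in the spirit of
// the detections listed above. None of these names come from the package.
interface Rule {
  id: string;
  description: string;
  pattern: RegExp;
}

const rules: Rule[] = [
  {
    id: "tool-poisoning",
    description: "Hidden <IMPORTANT>/<SYSTEM> tags used for prompt injection",
    pattern: /<\s*(IMPORTANT|SYSTEM)\s*>/i,
  },
  {
    id: "curl-pipe-bash",
    description: "Insecure curl | bash pipe",
    pattern: /curl[^|\n]*\|\s*(ba|z)?sh\b/,
  },
  {
    id: "stealth-instruction",
    description: "Instructions to hide behavior from the user",
    pattern: /do not (mention|tell|reveal) this to the user/i,
  },
];

// Return every rule that matches the given file contents.
function scanText(text: string): Rule[] {
  return rules.filter((rule) => rule.pattern.test(text));
}
```

A real scanner would presumably apply language-specific rules per file type (Python, Node.js, Bash) and report file and line locations rather than a flat list of matches.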
📦 Architecture
- Core: TypeScript-based pattern matcher (runs on any standard Node.js runtime).
- CLI: Standalone tool for CI/CD and manual audits.
- Extensible: The architecture supports adding an MCP server wrapper in the future.
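
There is no documented programmatic API, so the sketch below simply shells out to the CLI from a Node script, which is one way to wire it into a CI step. It assumes the CLI exits non-zero when it reports findings; verify that against your installed version.

```typescript
// Hypothetical CI helper: run the CLI against a target directory and fail the
// build step on a non-clean exit. The "non-zero exit on findings" behavior is
// an assumption, not documented behavior.
import { execFileSync } from "node:child_process";

function scanOrFail(targetDir: string): void {
  try {
    const output = execFileSync("ai-tool-guard", [targetDir], { encoding: "utf8" });
    console.log(output);
  } catch (err) {
    console.error(`ai-tool-guard flagged issues (or failed) for ${targetDir}`);
    console.error(err instanceof Error ? err.message : err);
    process.exit(1);
  }
}

scanOrFail("./path/to/plugin");
```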
⚠️ Disclaimer
This tool uses static analysis (regex/pattern matching). It may produce false positives or miss sophisticated obfuscated attacks. Always review untrusted code manually.
