@consulalialpric/llm-antivirus v0.1.1

# LLM Antivirus
A security layer for Claude Code that blocks dangerous operations before they execute. Protects against credential leakage, destructive commands, PII exposure, and prompt injection attacks.
## Why?
LLM-driven development tools can inadvertently:

- Expose API keys and credentials from training data
- Execute destructive shell commands (`rm -rf /`)
- Leak sensitive information like SSNs or credit cards
- Fall victim to prompt injection attacks
LLM Antivirus intercepts tool calls via Claude Code hooks and blocks threats before execution.
## Quick Start

```bash
npx llm-antivirus init
```

That's it. Zero configuration required.
## Requirements

- Node.js >= 20.0.0
- A Claude Code project (`.claude/` directory)
- `jq` for JSON parsing (`brew install jq` on macOS)
## What It Detects

### Layer 1: Sensitive Files

Blocks access to files like `.env`, `.aws/credentials`, `.ssh/id_rsa`, and `secrets.json`.
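As an illustration, this kind of path matching can be sketched with shell globs in a `case` statement. The pattern list below is a small illustrative subset, not the shipped list (which lives in the installed `security-check.sh`):

```shell
# Sketch of sensitive-path matching with shell globs (illustrative subset).
is_sensitive_file() {
  case "$1" in
    *.env|*/.aws/credentials|*/.ssh/id_rsa|*secrets.json)
      return 0 ;;  # matched a sensitive path: block
    *)
      return 1 ;;  # no match: allow
  esac
}
```

Inside a hook this would be used as a gate, e.g. `is_sensitive_file "$path" && exit 2`.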
### Layer 2: Credentials

Detects 9 credential patterns:

- AWS Access Keys (`AKIA...`)
- GitHub Tokens (`ghp_...`)
- OpenAI API Keys (`sk-...`)
- Slack Tokens (`xox[pboa]-...`)
- Stripe Keys (`sk_live_...`, `sk_test_...`)
- SendGrid, Twilio, Google API keys
- Bearer tokens
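A minimal sketch of what pattern-based credential scanning looks like, assuming simplified regexes for three of the prefixes above (the shipped patterns in `src/rules/default-rules.ts` are more precise):

```shell
# Simplified approximations of three of the documented credential patterns.
contains_credential() {
  printf '%s' "$1" | grep -Eq \
    -e 'AKIA[0-9A-Z]{16}' \
    -e 'ghp_[A-Za-z0-9]{36}' \
    -e 'sk-[A-Za-z0-9]{20,}'
}
```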
### Layer 3: Private Keys

Catches PEM-format private keys (RSA, DSA, EC, OpenSSH).
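The PEM variants share a common `BEGIN ... PRIVATE KEY` header, so one regex can cover all of them; a sketch with a hypothetical helper name:

```shell
# All PEM private keys start with a BEGIN marker; the optional algorithm
# word distinguishes RSA/DSA/EC/OpenSSH headers from the generic PKCS#8 form.
contains_private_key() {
  printf '%s' "$1" | grep -Eq -e '-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY-----'
}
```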
### Layer 4: Dangerous Commands

Blocks shell commands like:

- `rm -rf` with force+recursive flags
- `curl | bash` (remote code execution)
- `chmod 777` (overly permissive)
- `dd of=/dev/*` (disk writes)
- `mkfs` (filesystem formatting)
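A hedged sketch of this kind of command check. The substring matching here is deliberately naive; the real script must also handle flag reordering, quoting, and obfuscation:

```shell
# Naive substring checks for a few of the documented dangerous patterns.
is_dangerous_command() {
  case "$1" in
    *'rm -rf'*|*'rm -fr'*)          return 0 ;;  # force + recursive delete
    *curl*'| bash'*|*curl*'|bash'*) return 0 ;;  # piping a download into a shell
    *'chmod 777'*)                  return 0 ;;  # world-writable permissions
    *'dd of=/dev/'*)                return 0 ;;  # raw disk writes
    *mkfs*)                         return 0 ;;  # filesystem formatting
    *)                              return 1 ;;  # nothing matched
  esac
}
```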
### Layer 5: PII

Detects SSNs and credit card numbers with format validation.
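For card numbers, "format validation" typically means a Luhn checksum on top of a digit-count check, which filters out most random digit strings. A sketch in bash, with a hypothetical helper name (not the shipped implementation):

```shell
# Luhn checksum: from the rightmost digit, double every second digit,
# subtract 9 from doubles above 9, and require the sum to be divisible by 10.
luhn_valid() {
  local digits len parity sum i d
  digits="${1//[^0-9]/}"          # strip spaces/dashes
  len=${#digits}
  if [ "$len" -lt 13 ] || [ "$len" -gt 19 ]; then return 1; fi
  parity=$(( len % 2 ))
  sum=0
  for (( i = 0; i < len; i++ )); do
    d=${digits:i:1}
    if (( i % 2 == parity )); then    # every second digit from the right
      d=$(( d * 2 ))
      if [ "$d" -gt 9 ]; then d=$(( d - 9 )); fi
    fi
    sum=$(( sum + d ))
  done
  [ $(( sum % 10 )) -eq 0 ]
}
```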
### Layer 6: Prompt Injection (Warning)

Alerts on suspicious phrases without blocking:
- "ignore previous instructions"
- "disregard all prior"
- Jailbreak attempts ("DAN mode", "developer mode")
- System prompt leakage indicators
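Warning-only detection like this can be sketched as a case-insensitive phrase scan that always exits successfully, so it never blocks the tool call (phrase list illustrative):

```shell
# Warn on stderr but never block: the function always returns 0.
warn_prompt_injection() {
  if printf '%s' "$1" | grep -Eiq \
      -e 'ignore previous instructions' \
      -e 'disregard all prior' \
      -e 'DAN mode' \
      -e 'developer mode'; then
    echo 'WARNING: possible prompt injection detected' >&2
  fi
  return 0
}
```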
## Configuration

### Allowlist & Blocklist

Create config files to customize detection:

Global config (`~/.llm-av/config.json`):

```json
{
  "allowlist": {
    "paths": ["tests/fixtures/*"],
    "patterns": ["test_credential_[a-z]+"]
  },
  "blocklist": {
    "paths": ["production/secrets/*"],
    "patterns": ["CUSTOM_SECRET_[A-Z0-9]+"]
  }
}
```

Project config (`.llm-av/config.json`): same structure. Project settings extend global settings.
## Escape Hatch

For testing or emergencies:

```bash
LLMAV_SKIP=1 claude
```

This bypasses all checks (logged to the audit trail).
## Audit Trail

All blocked operations are logged to `.llm-av/audit.json` in JSON Lines format:

```json
{"timestamp":"2026-01-30T10:15:30Z","severity":"HIGH","layer":"credentials","pattern":"AWS Access Key","tool":"Write","blocked":true}
```

## How It Works
LLM Antivirus installs Claude Code hooks that intercept tool calls:
```
  Claude Code Tool Call
           ↓
     PreToolUse Hook
           ↓
  ┌───────────────────┐
  │ security-check.sh │
  │  Layer 1-5 check  │
  └───────┬───────────┘
          │
     ┌────┴────┐
     │         │
  Exit 0    Exit 2
  (allow)   (block)
     │         │
     ▼         ▼
  Execute   Show error
  tool      to user
```

Detection runs in < 10 ms using optimized Bash pattern matching.
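The exit-code contract above can be sketched as a tiny stand-in for `security-check.sh`. The field handling and block condition here are illustrative only; the real script parses the tool-call JSON with `jq` and runs the full layer stack:

```shell
# Minimal PreToolUse-style gate: return 0 allows the tool call,
# return 2 blocks it and surfaces a message to the user.
check_tool_call() {
  local payload="$1"
  if printf '%s' "$payload" | grep -q 'rm -rf'; then
    echo 'BLOCKED: dangerous command detected' >&2
    return 2
  fi
  return 0
}
```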
## OWASP LLM Vulnerabilities Addressed

| Vulnerability | Coverage |
|---------------|----------|
| LLM06: Sensitive Information Disclosure | Layers 1-3, 5 |
| LLM08: Excessive Agency | Layer 4 |
| LLM07: System Prompt Leakage | Layer 6 |
## Development

```bash
# Install dependencies
npm install

# Build
npm run build

# Run locally
npm run dev init
```

## Project Structure
```
src/
├── cli.ts                    # CLI entry point
├── commands/
│   └── init.ts               # Initialization logic
├── hooks/
│   ├── installer.ts          # Hook installation
│   └── templates/
│       └── security-check.sh # Detection script (944 LOC)
├── rules/
│   └── default-rules.ts      # Pattern definitions
└── utils/
    └── claude-detector.ts    # Project detection
```

## Limitations
- Pattern-based detection can be bypassed with obfuscation
- Does not prevent training-time attacks (poisoning)
- Novel attack patterns may not be detected
- Prompt injection defense is warning-only (high false positive risk)
This tool reduces attack surface but does not eliminate risk entirely.
## License
MIT
