npm package discovery and stats viewer.

Discover Tips

  • General search

    [free text search, go nuts!]

  • Package details

    pkg:[package-name]

  • User packages

    @[username]

Sponsor

Optimize Toolset

I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset.

About

Hi, 👋, I’m Ryan Hefner  and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff.

As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have.

If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub.

I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish.

Open Software & Tools

This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏

© 2026 – Pkg Stats / Ryan Hefner

@consulalialpric/llm-antivirus

v0.1.1

Published

Security layer for LLM-driven code agents - blocks dangerous operations before they execute

Readme

LLM Antivirus

A security layer for Claude Code that blocks dangerous operations before they execute. Protects against credential leakage, destructive commands, PII exposure, and prompt injection attacks.

Why?

LLM-driven development tools can inadvertently:

  • Expose API keys and credentials from training data
  • Execute destructive shell commands (rm -rf /)
  • Leak sensitive information like SSNs or credit cards
  • Fall victim to prompt injection attacks

LLM Antivirus intercepts tool calls via Claude Code hooks and blocks threats before execution.

Quick Start

npx llm-antivirus init

That's it. Zero configuration required.

Requirements

  • Node.js >= 20.0.0
  • Claude Code project (.claude/ directory)
  • jq for JSON parsing (brew install jq on macOS)

What It Detects

Layer 1: Sensitive Files

Blocks access to files like .env, .aws/credentials, .ssh/id_rsa, secrets.json

Layer 2: Credentials

Detects 9 credential patterns:

  • AWS Access Keys (AKIA...)
  • GitHub Tokens (ghp_...)
  • OpenAI API Keys (sk-...)
  • Slack Tokens (xox[pboa]-...)
  • Stripe Keys (sk_live_..., sk_test_...)
  • SendGrid, Twilio, Google API keys
  • Bearer tokens

Layer 3: Private Keys

Catches PEM-format private keys (RSA, DSA, EC, OpenSSH)

Layer 4: Dangerous Commands

Blocks shell commands like:

  • rm -rf with force+recursive flags
  • curl | bash (remote code execution)
  • chmod 777 (overly permissive)
  • dd of=/dev/* (disk writes)
  • mkfs (filesystem formatting)

Layer 5: PII

Detects SSNs and credit card numbers with format validation

Layer 6: Prompt Injection (Warning)

Alerts on suspicious phrases without blocking:

  • "ignore previous instructions"
  • "disregard all prior"
  • Jailbreak attempts ("DAN mode", "developer mode")
  • System prompt leakage indicators

Configuration

Allowlist & Blocklist

Create config files to customize detection:

Global config (~/.llm-av/config.json):

{
  "allowlist": {
    "paths": ["tests/fixtures/*"],
    "patterns": ["test_credential_[a-z]+"]
  },
  "blocklist": {
    "paths": ["production/secrets/*"],
    "patterns": ["CUSTOM_SECRET_[A-Z0-9]+"]
  }
}

Project config (.llm-av/config.json): Same structure. Project settings extend global settings.

Escape Hatch

For testing or emergencies:

LLMAV_SKIP=1 claude

This bypasses all checks (logged to audit trail).

Audit Trail

All blocked operations are logged to .llm-av/audit.json in JSON Lines format:

{"timestamp":"2026-01-30T10:15:30Z","severity":"HIGH","layer":"credentials","pattern":"AWS Access Key","tool":"Write","blocked":true}

How It Works

LLM Antivirus installs Claude Code hooks that intercept tool calls:

Claude Code Tool Call
        ↓
  PreToolUse Hook
        ↓
┌───────────────────┐
│ security-check.sh │
│  Layer 1-5 check  │
└───────┬───────────┘
        │
   ┌────┴────┐
   │         │
Exit 0    Exit 2
(allow)   (block)
   │         │
   ▼         ▼
Execute   Show error
  tool    to user

Detection runs in < 10ms using optimized Bash pattern matching.

OWASP LLM Vulnerabilities Addressed

| Vulnerability | Coverage | |---------------|----------| | LLM06: Sensitive Information Disclosure | Layers 1-3, 5 | | LLM08: Excessive Agency | Layer 4 | | LLM07: System Prompt Leakage | Layer 6 |

Development

# Install dependencies
npm install

# Build
npm run build

# Run locally
npm run dev init

Project Structure

src/
├── cli.ts                 # CLI entry point
├── commands/
│   └── init.ts           # Initialization logic
├── hooks/
│   ├── installer.ts      # Hook installation
│   └── templates/
│       └── security-check.sh  # Detection script (944 LOC)
├── rules/
│   └── default-rules.ts  # Pattern definitions
└── utils/
    └── claude-detector.ts # Project detection

Limitations

  • Pattern-based detection can be bypassed with obfuscation
  • Does not prevent training-time attacks (poisoning)
  • Novel attack patterns may not be detected
  • Prompt injection defense is warning-only (high false positive risk)

This tool reduces attack surface but does not eliminate risk entirely.

License

MIT