
vuln-monkey

v0.1.5


AI-powered API security fuzzer that uses LLMs to discover logic flaws in your endpoints


Quick Start

Install globally:

npm install -g vuln-monkey

Fuzz an API endpoint:

vuln-monkey "curl -X POST https://api.example.com/users \
  -H 'Authorization: Bearer tok_xxx' \
  -d '{\"name\":\"test\"}'"

That's it. It uses your Claude Code subscription by default. Zero configuration.

Outputs:

  • Terminal summary with severity colors.
  • Markdown report with payload details.
  • JSON export for CI/automation.
  • All written to ./reports/.
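The JSON export is what makes CI integration practical. As a hedged sketch (the field names findings and severity are assumptions about the report schema, not documented), a pipeline could fail the build whenever a critical finding appears:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical CI gate; "findings" and "severity" are assumed field
// names, not vuln-monkey's documented report schema.
function hasCriticalFindings(reportJson: string): boolean {
  const report = JSON.parse(reportJson) as { findings: { severity: string }[] };
  return report.findings.some((f) => f.severity === "critical");
}

const path = process.argv[2]; // e.g. a report from ./reports/
if (path && hasCriticalFindings(readFileSync(path, "utf8"))) {
  console.error("Critical findings detected; failing the build.");
  process.exit(1);
}
```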

Demo

$ vuln-monkey "curl -X POST https://api.example.com/users -H 'Authorization: Bearer tok_xxx' -d '{\"name\":\"test\"}'"

✔ Parsed 1 endpoint(s)
✔ Found 7 potential vulnerabilities
✔ Generated 56 payloads

[1/56]   200  23ms  IDOR - Access user 2's profile
[2/56]   200  31ms  IDOR - Access user 999
[3/56]   500  89ms  Injection - SQL in name field
[4/56]   401  12ms  Auth bypass - Missing token validation
[5/56]   200  28ms  Mass assignment - Set role to admin
[6/56]   400  15ms  Type juggling - Integer as name
[7/56]   429  8ms   Rate limiting bypass - Rapid requests
...

VULN MONKEY REPORT
Target:             https://api.example.com/users
Model:              claude-cli
Endpoints scanned:  1
Payloads fired:     56
Duration:           14.23s
Findings:           3

  CRITICAL  CRASH: Injection - SQL in name field — https://api.example.com/users
  HIGH      ERROR: Type juggling - Integer as name — https://api.example.com/users
  MEDIUM    SUSPICIOUS: IDOR - Access user 2's profile — https://api.example.com/users

Risk score: 67/100
Risk rating: Needs Attention

Reports written:
  Markdown: ./reports/vuln-monkey-2026-04-03T12-00-00.000Z-a3f2c1.md
  JSON:     ./reports/vuln-monkey-2026-04-03T12-00-00.000Z-a3f2c1.json

Models & Backends

8 LLM backends. Use what you have.

| Backend | Requires | Command |
|:--------|:---------|:--------|
| claude-cli (default) | Claude Code CLI | vuln-monkey "curl ..." |
| gemini-cli | Gemini CLI | vuln-monkey --model gemini-cli "curl ..." |
| codex-cli | Codex CLI | vuln-monkey --model codex-cli "curl ..." |

Zero config and no API keys: these backends pick up your existing CLI subscriptions automatically.

# Uses Claude Code (default)
vuln-monkey "curl https://api.example.com/users"

# Switch to Gemini
vuln-monkey --model gemini-cli "curl https://api.example.com/users"

# Or Codex
vuln-monkey --model codex-cli "curl https://api.example.com/users"

| Backend | API Provider | Env Var |
|:--------|:-------------|:--------|
| claude | Anthropic API | ANTHROPIC_API_KEY |
| gemini | Google Generative AI | GEMINI_API_KEY |
| openai | OpenAI (GPT-4o, etc.) | OPENAI_API_KEY |

Requires API keys. Useful for CI pipelines.

ANTHROPIC_API_KEY=sk-... vuln-monkey --model claude "curl https://api.example.com/users"
OPENAI_API_KEY=sk-... vuln-monkey --model openai "curl https://api.example.com/users"
GEMINI_API_KEY=... vuln-monkey --model gemini "curl https://api.example.com/users"

| Backend | Runs | Config |
|:--------|:-----|:-------|
| ollama | Ollama (localhost:11434) | Just ollama serve |
| local | Any OpenAI-compatible server | OPENAI_BASE_URL env var |

Compatible with Ollama, LM Studio, vLLM, llama.cpp, text-generation-webui, or anything serving /v1/chat/completions.

# Ollama — auto-connects to localhost:11434
ollama serve &
vuln-monkey --model ollama "curl https://api.example.com/users"

# LM Studio, vLLM, or custom OpenAI-compatible server
OPENAI_BASE_URL=http://localhost:1234/v1 vuln-monkey --model local "curl https://api.example.com/users"

How It Works

            ┌──────────────────────┐
            │ curl / OpenAPI spec  │
            └──────────┬───────────┘
                       │
            ┌──────────▼───────────┐
            │  Parse endpoints     │
            └──────────┬───────────┘
                       │
            ┌──────────▼───────────┐
            │ LLM analysis         │  ◄─ Identifies IDOR, SQL injection,
            └──────────┬───────────┘     auth bypass, mass assignment, etc.
                       │
            ┌──────────▼───────────┐
            │ Generate payloads    │  ◄─ Creates attack variants
            └──────────┬───────────┘     (8-10 per vulnerability)
                       │
            ┌──────────▼───────────┐
            │ Fire requests        │  ◄─ Concurrent + SSRF protection
            └──────────┬───────────┘
                       │
            ┌──────────▼───────────┐
            │ Classify responses   │  ◄─ pass / suspicious / error / crash
            └──────────┬───────────┘
                       │
            ┌──────────▼───────────┐
            │ Generate reports     │  ◄─ Terminal, Markdown, JSON
            └──────────────────────┘
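The "Classify responses" step above can be sketched as a status-code mapping. This is illustrative only; the actual classifier likely also inspects response bodies and timing, not just the status:

```typescript
type Verdict = "pass" | "suspicious" | "error" | "crash";

// Illustrative mapping for the "Classify responses" step; a sketch of
// the idea, not vuln-monkey's real classifier.
function classify(status: number, shouldHaveBeenDenied: boolean): Verdict {
  if (status >= 500) return "crash"; // server failed on the payload
  if (status >= 400) return "error"; // rejected, possibly with a leaky error body
  // A 2xx on a probe a correct API would deny (e.g. an IDOR payload)
  return shouldHaveBeenDenied ? "suspicious" : "pass";
}
```

These verdicts correspond to the CRASH, ERROR, and SUSPICIOUS labels in the demo report above.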

Usage

Input Modes

Curl command:

vuln-monkey "curl -X POST https://api.example.com/users -d '{\"name\":\"test\"}'"

OpenAPI specification:

vuln-monkey --spec https://api.example.com/openapi.json

Dry run (preview payloads without firing):

vuln-monkey --dry-run "curl https://api.example.com/users"

CLI Options

| Option | Description | Default |
|:-------|:------------|:--------|
| --spec <url> | OpenAPI/Swagger spec URL | — |
| --model <name> | LLM backend (see Models) | claude-cli |
| --output <dir> | Report output directory | ./reports |
| --concurrency <n> | Parallel requests | 5 |
| --timeout <ms> | Request timeout in milliseconds | 10000 |
| --dry-run | Generate payloads without firing | false |

Examples

Fuzz a protected endpoint:

vuln-monkey "curl -X GET https://api.example.com/users/42 \
  -H 'Authorization: Bearer token_xyz'"

Fuzz an entire API using OpenAPI spec:

vuln-monkey --spec https://api.example.com/v1/openapi.json \
  --model openai --concurrency 10

Fuzz with a local LLM, 20s timeout:

vuln-monkey --model ollama --timeout 20000 \
  "curl -X POST https://api.example.com/login -d '{\"password\":\"test\"}'"

Preview payloads before execution:

vuln-monkey --dry-run "curl https://api.example.com/users"

Risk Scoring

Findings are weighted by severity and summed into a 0-100 risk score.

| Severity | Weight | Risk 0-39 | Risk 40-69 | Risk 70-100 |
|:---------|:------:|:---------:|:----------:|:-----------:|
| Critical | 25 | — | — | Fail |
| High | 15 | — | Needs Attention | — |
| Medium | 5 | Acceptable | — | — |
| Low | 2 | Acceptable | — | — |

Scores aggregate across all findings. A single critical vulnerability = 25 points. Two critical + one high = 65 points (Needs Attention).
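As a minimal sketch of that arithmetic (illustrative names, not the package's actual internals):

```typescript
// Minimal sketch of the scoring described above; names are illustrative,
// not vuln-monkey's real implementation.
type Severity = "critical" | "high" | "medium" | "low";

const WEIGHTS: Record<Severity, number> = {
  critical: 25,
  high: 15,
  medium: 5,
  low: 2,
};

function riskScore(findings: Severity[]): number {
  // Sum the weights of all findings; the scale is 0-100, so cap at 100.
  return Math.min(findings.reduce((sum, s) => sum + WEIGHTS[s], 0), 100);
}

function riskRating(score: number): string {
  if (score >= 70) return "Fail";
  if (score >= 40) return "Needs Attention";
  return "Acceptable";
}
```

Two critical plus one high gives 25 + 25 + 15 = 65, which lands in the Needs Attention band.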

vuln-monkey identifies:

  • IDOR / BOLA - Insecure Direct Object References
  • Injection - SQL, NoSQL, command injection
  • Auth Bypass - Missing/weak authentication
  • Mass Assignment - Unintended field exposure
  • Type Juggling - Type coercion vulnerabilities
  • Rate Limiting Bypass - No/weak rate limits
  • Race Conditions - Concurrency issues
  • Overflow - Integer/buffer overflow
  • Data Exposure - Excessive information disclosure
  • CORS Misconfiguration - Broken CORS policies
  • Info Disclosure - Stack traces, version leaks

Safety & Guardrails

vuln-monkey is a security testing tool with built-in protections:

| Protection | What It Does |
|:-----------|:-------------|
| SSRF Guard | Blocks requests to localhost, private IPs, link-local, AWS metadata |
| Redirect Control | Does not follow HTTP redirects |
| Response Cap | 1 MB max response body to prevent memory exhaustion |
| Credential Redaction | Authorization headers masked in Markdown reports |
| Path Validation | Prevents report writes to sensitive system directories |
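The SSRF guard can be approximated like this. This is a hedged sketch of the idea only; the package's actual checks may differ (a robust guard would also resolve hostnames to IPs before testing them):

```typescript
import { isIP } from "node:net";

// Hedged sketch of an SSRF guard like the one described above; not
// vuln-monkey's actual implementation.
function isBlockedHost(host: string): boolean {
  if (host === "localhost") return true;
  if (isIP(host) === 4) {
    const [a, b] = host.split(".").map(Number);
    if (a === 127) return true;                       // loopback
    if (a === 10) return true;                        // private 10.0.0.0/8
    if (a === 172 && b >= 16 && b <= 31) return true; // private 172.16.0.0/12
    if (a === 192 && b === 168) return true;          // private 192.168.0.0/16
    if (a === 169 && b === 254) return true;          // link-local, incl. AWS metadata
  }
  return false;
}
```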

Legal notice: This tool is for authorized security testing only. Always get written permission before testing APIs you do not own or operate.

Tech Stack

Core: TypeScript, Node.js 20+, Zod validation

CLI: Commander, Chalk, Ora spinners

Testing: Vitest, 68 passing tests

LLM Support: Claude, Gemini, OpenAI, Ollama

Development

Clone and install:

git clone https://github.com/cdbkk/vuln-monkey.git
cd vuln-monkey
npm install

Run tests:

npm test              # run once
npm run test:watch    # watch mode

Type check:

npx tsc --noEmit

Try locally:

npm run dev -- --help
npm run dev -- "curl https://api.example.com/users"

Requirements

  • Node.js 20+
  • One of:
    • Claude Code CLI (claude command)
    • Gemini CLI (gemini command)
    • Codex CLI (codex command)
    • API key for Claude, Gemini, or OpenAI
    • Local LLM running on localhost (Ollama, LM Studio, etc.)

Contributing

Found a bug? Have a feature idea? Pull requests welcome.

See CONTRIBUTING.md for setup and guidelines.

License

MIT — Build what you want.