
llm-authz-audit

v0.1.4

Static security analyzer for LLM applications — eslint for LLM security


Scan your LLM-powered applications for authorization gaps, leaked credentials, missing rate limits, prompt injection risks, and other security issues — before they reach production.

  ╦   ╦   ╔╦╗   ╔═╗ ╦ ╦ ╔╦╗ ╦ ╦ ╔═╗   ╔═╗ ╦ ╦ ╦═╗  ╦  ╔╦╗
  ║   ║   ║║║   ╠═╣ ║ ║  ║  ╠═╣ ╔═╝   ╠═╣ ║ ║ ║ ║  ║   ║
  ╩═╝ ╩═╝ ╩ ╩   ╩ ╩ ╚═╝  ╩  ╩ ╩ ╚══   ╩ ╩ ╚═╝ ╩═╝  ╩   ╩

Quick Start

# With npx (no install needed)
npx llm-authz-audit scan .

# Or install via pip
pip install llm-authz-audit
llm-authz-audit scan .

How It Works

flowchart TD
    A["📂 Your Code"] --> B["File Discovery"]
    B --> C["Build Scan Context"]
    C --> D["Run 13 Analyzers\n(27 Rules)"]
    D --> E["Auth Context Analysis"]
    E --> F["Deduplicate"]
    F --> G["Apply Suppressions"]
    G --> H["Confidence Filter"]
    H --> I["Sort by Severity"]
    I --> J{"Findings ≥\nthreshold?"}
    J -- "Yes" --> K["Exit 1 ❌"]
    J -- "No" --> L["Exit 0 ✅"]

    style A fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style B fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style C fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style D fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style E fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style F fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style G fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style H fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style I fill:#1e293b,stroke:#10b981,color:#e2e8f0
    style J fill:#1e293b,stroke:#f59e0b,color:#e2e8f0
    style K fill:#7f1d1d,stroke:#ef4444,color:#fca5a5
    style L fill:#14532d,stroke:#10b981,color:#86efac

The scan pipeline runs in a single pass:

  1. File Discovery — Walks the target directory, applies --exclude globs and --diff filtering
  2. Scan Context — Loads file content, parses ASTs (Python/JS/TS), builds lazy-loaded FileEntry objects
  3. Analyzers — Each of the 13 analyzers matches files by type and runs its rule set against the AST/content
  4. Auth Context — Cross-file analysis detects project-wide auth patterns (FastAPI Depends(), Flask login_required, Express JWT/Passport) and downgrades endpoint findings accordingly
  5. Deduplication — Removes duplicate findings (same rule + file + line)
  6. Suppressions — Applies inline # nosec comments and YAML suppression file entries
  7. Confidence Filter — Drops findings below --min-confidence threshold
  8. Sort & Exit — Sorts by severity, returns exit code 1 if any finding meets --fail-on threshold
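The dedup, confidence-filter, and exit-code steps (5, 7, and 8 above) can be sketched roughly like this. This is illustrative Python with invented names, not the tool's actual internals:

```python
# Hypothetical sketch of pipeline steps 5, 7, and 8 -- illustrative only.
SEVERITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}
CONFIDENCE = {"low": 0, "medium": 1, "high": 2}

def deduplicate(findings):
    # Step 5: same rule + file + line counts as a duplicate.
    seen, unique = set(), []
    for f in findings:
        key = (f["rule"], f["file"], f["line"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique

def confidence_filter(findings, min_confidence):
    # Step 7: drop findings below the --min-confidence threshold.
    floor = CONFIDENCE[min_confidence]
    return [f for f in findings if CONFIDENCE[f["confidence"]] >= floor]

def exit_code(findings, fail_on):
    # Step 8: exit 1 if any finding meets the --fail-on severity.
    threshold = SEVERITY[fail_on]
    return 1 if any(SEVERITY[f["severity"]] <= threshold for f in findings) else 0
```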

What It Checks

llm-authz-audit ships with 13 analyzers and 27 rules covering the OWASP Top 10 for LLM Applications:

| Analyzer | ID Prefix | What It Detects | OWASP |
|---|---|---|---|
| PromptInjectionAnalyzer | PI | Unsanitized user input in prompts, string concat in prompts, missing delimiters | LLM01 |
| SecretsAnalyzer | SEC | Hardcoded API keys, tokens, and passwords in Python, JS, and TS files | LLM06 |
| EndpointAnalyzer | EP | Unauthenticated FastAPI/Flask endpoints serving LLM functionality | LLM06 |
| JSEndpointAnalyzer | EP | Unauthenticated Express/Node.js endpoints with LLM calls | LLM06 |
| ToolRBACAnalyzer | TR | LangChain/LlamaIndex tools without RBAC or permission checks | LLM06 |
| RAGACLAnalyzer | RAG | Vector store retrievals without document-level access controls | LLM06 |
| MCPPermissionAnalyzer | MCP | Over-permissioned MCP server configurations | LLM06 |
| SessionIsolationAnalyzer | SI | Shared conversation memory without user/session scoping | LLM06 |
| RateLimitingAnalyzer | RL | LLM endpoints without rate limiting middleware | LLM04 |
| OutputFilteringAnalyzer | OF | LLM output used without content filtering or PII redaction | LLM02 |
| CredentialForwardingAnalyzer | CF | Credentials forwarded to LLM via prompt templates | LLM06 |
| AuditLoggingAnalyzer | AL | LLM API calls without surrounding audit logging (per-call proximity detection) | LLM09 |
| InputValidationAnalyzer | IV | User input passed directly to LLM without validation | LLM01 |

Output Formats

Console (default) — Semgrep-style

╭──────────────────╮
│ 16 Code Findings │
╰──────────────────╯

    api/__init__.py
   ❯❯❱ EP001  [LLM06]
          Unauthenticated LLM endpoint
          29┆ @app.route('/api/v1/predict', methods=['POST'])
          fix: Add authentication dependency: Depends(get_current_user)

    api/model_service.py
   ❯❱ AL001  [LLM09]
          LLM API call without logging
          16┆ r = openai.Moderation.create(
          fix: Add logging around LLM API calls for audit purposes.

╭──────────────╮
│ Scan Summary │
╰──────────────╯
  ⚠ Findings: 16 (2 blocking)
  • Analyzers run: 8
  • Files scanned: 13
  • ❯❯❱ High: 2
  • ❯❱ Medium: 14

JSON

llm-authz-audit scan . --format json

SARIF (GitHub Code Scanning)

llm-authz-audit scan . --format sarif > results.sarif

Upload to GitHub Code Scanning for inline PR annotations — see CI/CD Integration.

Installation

npx (recommended for quick scans)

npx llm-authz-audit scan .

Requires Python >= 3.11 on your PATH. The npm wrapper automatically creates an isolated venv and installs the tool.

pip / pipx

# Install globally
pip install llm-authz-audit

# Or use pipx for isolation
pipx install llm-authz-audit

From source

git clone https://github.com/aiauthz/llm-authz-audit.git
cd llm-authz-audit
pip install -e ".[dev]"

Usage

scan — Analyze a project

llm-authz-audit scan [PATH] [OPTIONS]

| Option | Default | Description |
|---|---|---|
| --format | console | Output format: console, json, or sarif |
| --fail-on | high | Minimum severity for non-zero exit: critical, high, medium, low |
| --analyzers | all | Comma-separated list of analyzers to enable |
| --exclude | — | Comma-separated glob patterns to skip |
| --min-confidence | — | Minimum confidence to include: low, medium, high |
| --suppress | — | Path to suppression YAML file |
| --extra-rules | — | Comma-separated paths to custom rule YAML directories |
| --diff | — | Only scan files changed since this git ref (e.g. HEAD~1, main) |
| --ai | off | Enable LLM-powered deep analysis |
| --ai-provider | anthropic | AI provider: openai or anthropic |
| --ai-model | claude-sonnet-4-5-20250929 | AI model to use |
| --ai-max-findings | 20 | Max findings to send to AI (cost guardrail) |
| --config | — | Path to .llm-audit.yaml config file |
| -q, --quiet | off | Suppress the intro banner |
| -v, --verbose | off | Show debug output |

Examples:

# Scan current directory
llm-authz-audit scan .

# Scan with SARIF output, fail only on critical
llm-authz-audit scan ./my-app --format sarif --fail-on critical

# Scan with specific analyzers
llm-authz-audit scan . --analyzers SecretsAnalyzer,EndpointAnalyzer

# Exclude test files
llm-authz-audit scan . --exclude "tests/*,*.test.py"

# Filter out low-confidence noise
llm-authz-audit scan . --min-confidence medium

# Only scan files changed since main
llm-authz-audit scan . --diff main

# Suppress known findings
llm-authz-audit scan . --suppress .llm-audit-suppress.yaml

# Load custom rules
llm-authz-audit scan . --extra-rules ./my-rules,./team-rules

list-analyzers — Show available analyzers

llm-authz-audit list-analyzers

list-rules — Show all rules

llm-authz-audit list-rules

# Include custom rules
llm-authz-audit list-rules --extra-rules ./my-rules

init — Generate config template

llm-authz-audit init

Creates a .llm-audit.yaml in the current directory with sensible defaults.

Rules Reference

Prompt Injection (PI) — LLM01

| Rule | Severity | Description |
|---|---|---|
| PI001 | CRITICAL | Unsanitized user input in LLM prompt (f-string / .format()) |
| PI002 | HIGH | Direct string concatenation in LLM prompt |
| PI003 | MEDIUM | Missing prompt/input delimiter between system and user content |

Secrets (SEC) — LLM06

| Rule | Severity | Description |
|---|---|---|
| SEC001 | CRITICAL | Hardcoded OpenAI API key |
| SEC002 | CRITICAL | Hardcoded Anthropic API key |
| SEC003 | CRITICAL | Hardcoded HuggingFace API token |
| SEC004 | CRITICAL | Hardcoded AWS access key |
| SEC005 | HIGH | Hardcoded generic API key or secret |
| SEC006 | HIGH | Hardcoded password |

Secrets rules scan Python, JavaScript, and TypeScript files.
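As a rough illustration of how a rule like SEC001 might match, here is a minimal regex heuristic. The shipped patterns are certainly more thorough; this sketch only covers the common "sk-" OpenAI key prefix:

```python
import re

# Illustrative only: a naive pattern for quoted OpenAI-style keys.
# Real rules cover more key shapes and reduce false positives.
OPENAI_KEY = re.compile(r"""["']sk-[A-Za-z0-9_-]{20,}["']""")

def has_hardcoded_openai_key(source: str) -> bool:
    return bool(OPENAI_KEY.search(source))
```

Note that reading the key from the environment, as in the Smart Suppression section below, would not match this pattern.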

Endpoints (EP) — LLM06

| Rule | Severity | Description |
|---|---|---|
| EP001 | HIGH | Unauthenticated LLM endpoint (FastAPI/Flask) |
| EP002 | MEDIUM | LLM endpoint without rate limiting |
| EP003 | MEDIUM | Unauthenticated LLM endpoint (Express/Node.js) |

Tool RBAC (TR) — LLM06

| Rule | Severity | Description |
|---|---|---|
| TR001 | HIGH | LangChain tool without permission checks |
| TR002 | CRITICAL | Destructive LangChain tool without safeguards |
| TR003 | HIGH | LlamaIndex FunctionTool without permission checks |

RAG Access Control (RAG) — LLM06

| Rule | Severity | Description |
|---|---|---|
| RAG001 | HIGH | Vector store retrieval without metadata filtering |
| RAG002 | HIGH | LlamaIndex query engine without access controls |

MCP Permissions (MCP) — LLM06

| Rule | Severity | Description |
|---|---|---|
| MCP001 | CRITICAL | MCP server with root filesystem access |
| MCP002 | HIGH | MCP server without authentication |
| MCP003 | HIGH | MCP wildcard tool grants |

Session Isolation (SI) — LLM06

| Rule | Severity | Description |
|---|---|---|
| SI001 | HIGH | Shared conversation memory without user scoping |
| SI002 | HIGH | LlamaIndex chat memory without user scoping |

Other Rules

| Rule | Severity | OWASP | Description |
|---|---|---|---|
| CF001 | CRITICAL | LLM06 | Credential in prompt template |
| AL001 | MEDIUM | LLM09 | LLM API call without logging |
| IV001 | MEDIUM | LLM01 | User input passed directly to LLM |
| OF001 | MEDIUM | LLM02 | LLM output without filtering |
| RL001 | MEDIUM | LLM04 | Missing rate limiting on LLM endpoint |

Suppression

Inline Suppression

Suppress individual findings with # nosec:

api_key = "sk-proj-abc123..."  # nosec — used for testing only

YAML Suppression File

Create a suppression file for bulk suppressions:

# .llm-audit-suppress.yaml
suppressions:
  - rule_id: SEC001
    file_pattern: "tests/*"
    reason: "Test fixtures with fake keys"

  - rule_id: EP001
    reason: "Public API — auth handled by API gateway"

  - file_pattern: "scripts/*"
    reason: "Internal tooling, not deployed"

llm-authz-audit scan . --suppress .llm-audit-suppress.yaml
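Matching works roughly like this sketch: an entry suppresses a finding when every field it specifies matches. This is illustrative and assumes fnmatch-style globs, which may differ from the tool's exact semantics:

```python
from fnmatch import fnmatch

def is_suppressed(finding, suppressions):
    # Hypothetical matching logic: each suppression entry may specify
    # rule_id, file_pattern, or both; unspecified fields match anything.
    for s in suppressions:
        if "rule_id" in s and s["rule_id"] != finding["rule"]:
            continue
        if "file_pattern" in s and not fnmatch(finding["file"], s["file_pattern"]):
            continue
        return True
    return False
```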

Smart Suppression

The following patterns are automatically recognized as safe and not flagged:

# Environment variable — not flagged
api_key = os.environ["OPENAI_API_KEY"]

# Auth decorator — EP001 suppressed
@app.post("/chat")
@login_required
def chat_endpoint(request): ...

# Rate limiter present — RL001 suppressed
@limiter.limit("10/minute")
@app.post("/chat")
def chat_endpoint(request): ...

Cross-file Auth Context

When your project uses authentication middleware (FastAPI Depends(), Flask login_required, Express Passport/JWT), endpoint findings (EP001/EP003) are automatically downgraded to LOW confidence. Use --min-confidence medium to filter them out.
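A simplified sketch of that cross-file check, using a regex heuristic with invented names; the tool's real detection logic (it parses ASTs during scan-context construction) may differ:

```python
import re

# Illustrative patterns for the frameworks named above -- not the
# analyzer's actual rules.
AUTH_PATTERNS = [
    r"Depends\(\s*get_current_user",   # FastAPI dependency injection
    r"@login_required",                # Flask decorator
    r"passport\.authenticate",         # Express middleware
]

def project_has_auth(file_contents):
    return any(re.search(p, src) for src in file_contents for p in AUTH_PATTERNS)

def downgrade_endpoint_findings(findings, file_contents):
    # EP* findings drop to LOW confidence when project-wide auth is found.
    if not project_has_auth(file_contents):
        return findings
    return [
        {**f, "confidence": "low"} if f["rule"].startswith("EP") else f
        for f in findings
    ]
```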

Configuration

Generate a config file with llm-authz-audit init, or create .llm-audit.yaml manually:

# Output format: console, json, or sarif
format: console

# Minimum severity to cause non-zero exit
fail_on: high

# Analyzers to enable (omit to enable all)
# analyzers:
#   - SecretsAnalyzer
#   - EndpointAnalyzer
#   - PromptInjectionAnalyzer

# Glob patterns to exclude
exclude:
  - "tests/*"
  - "*.test.py"

# AI-powered deep analysis
ai:
  enabled: false
  provider: anthropic
  model: claude-sonnet-4-5-20250929

AI Mode

Enable LLM-powered analysis to reduce false positives:

# Using Anthropic (default)
export ANTHROPIC_API_KEY=your-key
llm-authz-audit scan . --ai

# Using OpenAI
export OPENAI_API_KEY=your-key
llm-authz-audit scan . --ai --ai-provider openai

# Limit AI cost (default: 20 findings max)
llm-authz-audit scan . --ai --ai-max-findings 10

AI mode sends each finding's surrounding code context to the LLM for review. Findings classified as false positives are automatically dropped. The --ai-max-findings flag caps the number of findings sent to the LLM (sorted by severity, highest first) to control costs.
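The cap behaves roughly like this (illustrative sketch, not the tool's actual code):

```python
# Rank findings most-severe first, then cap at --ai-max-findings.
SEVERITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def select_for_ai_review(findings, max_findings=20):
    ranked = sorted(findings, key=lambda f: SEVERITY[f["severity"]])
    return ranked[:max_findings]
```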

Requires the ai extra:

pip install "llm-authz-audit[ai]"

CI/CD Integration

GitHub Actions — Basic

name: LLM Security Audit
on: [push, pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install llm-authz-audit
      - run: llm-authz-audit scan . --format json --fail-on high

GitHub Actions — SARIF (Code Scanning)

Upload SARIF results to get inline annotations on pull requests:

name: LLM Security Audit
on: [push, pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install llm-authz-audit
      - run: llm-authz-audit scan . --format sarif > results.sarif
        continue-on-error: true
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif

GitHub Actions — Diff Mode (PR only)

Scan only files changed in the PR:

- run: llm-authz-audit scan . --diff origin/main --format sarif > results.sarif
  continue-on-error: true

Exit Codes

| Code | Meaning |
|---|---|
| 0 | No findings at or above the --fail-on threshold |
| 1 | Findings at or above the --fail-on severity detected |
| 2 | Invalid arguments or runtime error |
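In a shell wrapper, the codes can be mapped to actions like this (sketch; the hardcoded status stands in for a real run):

```shell
#!/bin/sh
# Map llm-authz-audit exit codes to messages (codes from the table above).
# In CI you would set `status` from a real run, e.g.:
#   llm-authz-audit scan . --fail-on high; status=$?
status=1   # example value for illustration

case "$status" in
  0) echo "no blocking findings" ;;
  1) echo "blocking findings detected -- failing the build" ;;
  *) echo "scanner error (bad arguments or crash)" ;;
esac
```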

Custom Rules

Add your own rules as YAML files:

# my-rules/custom.yaml
rules:
  - id: CUSTOM001
    title: "Internal API called without auth header"
    severity: high
    owasp_llm: LLM06
    file_types: ["*.py"]
    pattern: "requests\\.(?:get|post)\\(.+internal-api"
    negative_pattern: "headers.*[Aa]uth"
    remediation: "Add Authorization header to internal API calls."

llm-authz-audit scan . --extra-rules ./my-rules
llm-authz-audit list-rules --extra-rules ./my-rules
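As a sanity check, the example rule's patterns can be exercised with plain Python regexes. The semantics sketched here (flag when pattern hits and negative_pattern does not, per line) are an assumption about how the tool applies them:

```python
import re

# The example rule's patterns, lifted from the YAML above.
pattern = re.compile(r"requests\.(?:get|post)\(.+internal-api")
negative_pattern = re.compile(r"headers.*[Aa]uth")

def rule_matches(line: str) -> bool:
    # Hypothetical semantics: flagged when the pattern matches and the
    # negative pattern does not.
    return bool(pattern.search(line)) and not negative_pattern.search(line)
```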

Contributing

See CONTRIBUTING.md for development setup, how to add analyzers and rules, and PR guidelines.

License

MIT