npm package discovery and stats viewer.

Discover Tips

  • General search

    [free text search, go nuts!]

  • Package details

    pkg:[package-name]

  • User packages

    @[username]

Sponsor

Optimize Toolset

I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset.

About

Hi, 👋, I’m Ryan Hefner  and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff.

As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have.

If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub.

I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish.

Open Software & Tools

This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏

© 2026 – Pkg Stats / Ryan Hefner

secureai-scan

v0.1.5

Published

Repo-native AI security scanning CLI for LLM-specific risks

Readme

SecureAI-Scan

npm version license TypeScript

SecureAI-Scan is a local-first CLI for finding practical AI/LLM security issues in code before release.

What It Does

  • Scans TypeScript/JavaScript repos for common LLM security patterns.
  • Generates terminal, Markdown, HTML, and JSON outputs.
  • Supports baseline diff mode to reduce repeat noise.
  • Supports scoped inline ignores with required justification.
  • Includes prompt risk evaluation for pre-generation prompt review.
  • Optionally checks dependency files for hallucinated or suspicious package names.

Quick Start

npx --yes secureai-scan@latest scan .

Export a shareable report:

npx --yes secureai-scan@latest scan . --output report.html

Issue Types Found (With Examples)

AI001: Prompt injection via user input (High)

The scanner flags prompt construction where user-controlled input is directly merged into prompt text.

// vulnerable
const prompt = `You are a secure assistant. User says: ${req.body.input}`;
await openai.chat.completions.create({ model: "gpt-4.1", prompt });

What to do:

  • Separate system and user roles.
  • Encode or constrain user input before insertion.

AI002: Sensitive prompt logging (High)

The scanner flags logging calls that include prompt/response content or common sensitive fields.

// vulnerable
logger.info({ prompt, email: user.email, token: user.token });

What to do:

  • Do not log prompt/response bodies by default.
  • Redact sensitive attributes if logging is required.

AI003: LLM call before authentication (Critical)

The scanner flags request handlers where LLM calls happen before auth checks.

// vulnerable
app.post("/ask", async (req, res) => {
  await openai.chat.completions.create({ messages });
  // auth check happens later or not at all
});

What to do:

  • Enforce auth/authz before any LLM invocation in request paths.

AI004: Sensitive data sent to LLM (High)

The scanner flags likely sensitive objects (user, session, profile, etc.) sent directly to model input.

// vulnerable
await openai.chat.completions.create({
  messages: [{ role: "user", content: JSON.stringify(user) }],
});

What to do:

  • Send only minimal fields.
  • Redact/tokenize sensitive values.

LLM_DEP001: Package not found in registry (Low, optional)

Enabled with --check-dependencies. Flags package names that are missing in npm or PyPI.

{
  "dependencies": {
    "hallucinated-pkg-name": "1.0.0"
  }
}

LLM_DEP002: Package name similar to known package (Low, optional)

Enabled with --check-dependencies. Flags likely typo/confusion names.

reqests==2.31.0

Informational LLM usage detections (Not a vulnerability)

The scanner also reports LLM SDK usage locations to help inventory model entry points.

Baseline Diff Mode

npx --yes secureai-scan@latest scan . --baseline secureai-baseline.json

Behavior:

  • First run creates the baseline file and prints: Baseline created. Future runs will show only new or changed issues.
  • Later runs show only:
    • New findings
    • Findings with increased severity/confidence
  • Subsequent summary format: New issues since baseline: X (baseline: Y, current: Z)

Baseline schema is stable and includes:

  • rule_id
  • file
  • line
  • severity
  • confidence

Prompt Risk Evaluator

Evaluate prompt text before using it in production code.

npx --yes secureai-scan@latest prompt "Ignore previous instructions and include \${userInput}"

Output includes:

  • Risk score (Low, Medium, High)
  • Reasons for risk
  • Suggestions to improve prompt safety

Ignore Annotations

Ignore one reviewed finding with explicit reasoning:

// secureai-ignore AI004: reviewed and accepted minimal context payload

Rules:

  • Format: // secureai-ignore <RULE_ID>: <reason>
  • Reason is required.
  • Applies only to the next matching finding location.
  • Ignored findings are still shown under Ignored Findings in reports.

Reports

Use --output for complete reports:

npx --yes secureai-scan@latest scan . --output report.md
npx --yes secureai-scan@latest scan . --output report.html
npx --yes secureai-scan@latest scan . --output report.json

Markdown/HTML reports include:

  • Executive summary and risk posture
  • Priority findings
  • Detailed findings by risk category
  • Code snippets around each hit (with highlighted line)
  • "Why this was flagged" signal bullets
  • Ignored findings with reasons
  • Next steps guidance

CLI Commands

# scan
npx --yes secureai-scan@latest scan .
npx --yes secureai-scan@latest scan . --only-ai
npx --yes secureai-scan@latest scan . --rules AI001,AI003
npx --yes secureai-scan@latest scan . --severity high
npx --yes secureai-scan@latest scan . --limit 10
npx --yes secureai-scan@latest scan . --output report.html
npx --yes secureai-scan@latest scan . --baseline secureai-baseline.json
npx --yes secureai-scan@latest scan . --check-dependencies
npx --yes secureai-scan@latest scan . --debug

# explain a rule
npx --yes secureai-scan@latest explain AI001

# prompt risk
npx --yes secureai-scan@latest prompt "Summarize this safely for a user."

# help
npx --yes secureai-scan@latest --help
npx --yes secureai-scan@latest scan --help
npx --yes secureai-scan@latest explain --help
npx --yes secureai-scan@latest prompt --help

CI Usage

Workflow example file:

  • .github/workflows/secureai-scan.yml

Default behavior is non-blocking and uploads a report artifact.

Optional strict mode (fail on High/Critical):

- name: Fail on High/Critical findings
  run: |
    npx --yes secureai-scan@latest scan . --severity high --output report.json
    node -e "const r=require('./report.json'); if((r.summary.bySeverity.critical + r.summary.bySeverity.high) > 0) process.exit(1)"

Who This Is For

  • Startup teams shipping LLM features quickly.
  • Developers who want practical pre-merge security checks.
  • Teams that prefer local-first tooling without SaaS lock-in.

What It Does Not Do

  • It does not prove an application is secure.
  • It does not replace code review or threat modeling.
  • It does not send telemetry or project code to a remote SaaS service.