
@secure-ai-app/cli · v0.1.0 · 54 downloads

Security guardrails for AI-built applications

secure-ai-app

Security guardrails for AI-generated code.



The first time I ran this on my own codebase, it found a live Google Play RSA private key hardcoded in a file. My own codebase. First run. That's not a demo — that's the point.


Built for vibe coders

If you built your app with Cursor, Lovable, Replit, or Claude — run this before you ship.

AI writes code fast. It also confidently writes code with hardcoded secrets, exposed credentials, missing auth guards, and unscoped API calls. Not because it's broken — because it was never asked to check.

secure-ai-app is the check.

npx @secure-ai-app/cli scan

No account. No platform. No GitHub OAuth. No signup. Just results.

Runs in milliseconds. Works offline. Gates your CI. Understands your stack.

No LLM in the loop

This is deterministic — regex, AST analysis, file system checks. Same input, same results, every time. No model, no API key, no inference cost.

Why? Because LLMs optimize for completion, not safety. They'll confidently generate code that runs, passes type checks, and leaks your API keys to the client bundle. They're solving for "does it work" — not "should it ship." The same fluency that makes them fast makes them dangerous when nobody's checking.

secure-ai-app doesn't reason about your code. It inspects it. That's the difference between a tool that might catch something and one that always will.
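
The deterministic model described above can be sketched in a few lines. This is an illustrative reimplementation, not the shipped scanner: the two patterns are well-known public key formats (OpenAI-style `sk-` keys and AWS access key IDs), and the rule id is borrowed from the rules table in this README.

```typescript
// Sketch of a deterministic, regex-based secret check. Same input,
// same findings, every time -- no model, no inference.
const SECRET_PATTERNS: { id: string; re: RegExp }[] = [
  { id: "secrets/hardcoded-api-key", re: /sk-[A-Za-z0-9]{20,}/g }, // OpenAI-style key
  { id: "secrets/hardcoded-api-key", re: /AKIA[0-9A-Z]{16}/g },    // AWS access key ID
];

interface Finding { ruleId: string; line: number; match: string }

function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((lineText, i) => {
    for (const { id, re } of SECRET_PATTERNS) {
      for (const m of lineText.matchAll(re)) {
        findings.push({ ruleId: id, line: i + 1, match: m[0] });
      }
    }
  });
  return findings;
}
```

The real tool layers AST analysis and file-system checks on top of this kind of pattern matching, but the property is the same: pure functions of the source text.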


See it in action

[demo: terminal recording of secure-ai-app scan]


Install

npm install -g @secure-ai-app/cli

Or run without installing:

npx @secure-ai-app/cli scan

Quick start

# Initialize config + install pre-commit hook
secure-ai-app init

# Scan your project (82ms for 78 files)
secure-ai-app scan

# Auto-fix what's fixable
secure-ai-app fix

# Check your score
secure-ai-app status

What it catches

🔑 Secrets & Keys

  • Hardcoded API keys (OpenAI, AWS, generic)
  • NEXT_PUBLIC_ prefix on sensitive env vars
  • .env files not in .gitignore
  • Credentials logged to console

🔐 Auth & Access

  • Route components missing auth guards
  • API calls missing user/tenant scoping
  • AI agents with unrestricted tool access
  • Credentials stored in localStorage or sessionStorage
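
As a concrete illustration of one item above, the `NEXT_PUBLIC_` check, a minimal heuristic might look like this. The sensitive-name list is an assumption for the sketch, not the shipped rule:

```typescript
// Flag NEXT_PUBLIC_-prefixed env vars whose names look sensitive:
// in Next.js, NEXT_PUBLIC_ values are inlined into the client bundle.
const SENSITIVE = /(SECRET|TOKEN|PASSWORD|PRIVATE|API_?KEY)/i; // illustrative list

function exposedPublicVars(envNames: string[]): string[] {
  return envNames.filter(
    (name) => name.startsWith("NEXT_PUBLIC_") && SENSITIVE.test(name)
  );
}
```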

Rules

10 rules across 4 categories. 7 are universal — 3 provide deep SDK-aware analysis for Flowstack projects and are automatically skipped in projects that don't use the SDK.

| Rule | Severity | Engine |
|------|----------|--------|
| secrets/hardcoded-api-key | 🔴 Critical | Regex |
| secrets/env-exposure | 🟠 High | Regex |
| secrets/dotenv-security | 🟠 High | File check |
| general/unsafe-eval | 🟠 High | Regex |
| general/console-credentials | 🟡 Medium | Regex |
| auth/missing-auth-guard | 🟠 High | AST |
| tenant/missing-tenant-scope | 🟡 Medium | AST |
| tenant/missing-user-scope | 🟠 High | AST |
| ai/secret-in-prompt | 🔴 Critical | AST |
| ai/unrestricted-tools | 🟡 Medium | AST |


Scoring

Starts at 100. Deducts per finding.

| Severity | Deduction |
|----------|-----------|
| Critical | −20 |
| High | −10 |
| Medium | −5 |
| Low | −2 |

Grades: A ≥ 90 · B ≥ 75 · C ≥ 60 · D ≥ 40 · F < 40
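
The scoring rule is simple enough to state as code. One assumption in this sketch: the score is clamped at 0, which the README does not specify.

```typescript
// Score a scan: start at 100, deduct per finding by severity,
// then map the result to a letter grade.
type Severity = "critical" | "high" | "medium" | "low";

const DEDUCTION: Record<Severity, number> = {
  critical: 20, high: 10, medium: 5, low: 2,
};

function score(findings: Severity[]): { value: number; grade: string } {
  // Clamping at 0 is an assumption of this sketch.
  const value = Math.max(0, findings.reduce((s, f) => s - DEDUCTION[f], 100));
  const grade =
    value >= 90 ? "A" : value >= 75 ? "B" : value >= 60 ? "C" : value >= 40 ? "D" : "F";
  return { value, grade };
}
```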


Commands

scan

secure-ai-app scan [options]

| Flag | Description |
|------|-------------|
| -p, --path <path> | Project root (default: .) |
| -f, --format <format> | Output: table, json, sarif |
| -s, --severity <level> | Minimum: critical, high, medium, low |
| --changed-only | Only scan files changed since last commit |
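
For context on the `sarif` format option: SARIF 2.1.0 is a standard JSON log format that code-scanning tools (including GitHub code scanning) can ingest. The sketch below shows only the minimal structural skeleton; the CLI's actual output will carry more detail.

```typescript
// Minimal structurally valid SARIF 2.1.0 skeleton for a list of findings.
// Field names follow the SARIF spec; the Finding shape is illustrative.
interface Finding { ruleId: string; file: string; line: number; message: string }

function toSarif(findings: Finding[]) {
  return {
    version: "2.1.0",
    $schema: "https://json.schemastore.org/sarif-2.1.0.json",
    runs: [{
      tool: { driver: { name: "secure-ai-app" } },
      results: findings.map((f) => ({
        ruleId: f.ruleId,
        message: { text: f.message },
        locations: [{
          physicalLocation: {
            artifactLocation: { uri: f.file },
            region: { startLine: f.line },
          },
        }],
      })),
    }],
  };
}
```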

fix

secure-ai-app fix [options]

Auto-applies fixes for rules that have safe, deterministic resolutions. Unsafe fixes require manual review and are flagged, never auto-applied.
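
One example of a fix that is "safe and deterministic" in this sense is appending `.env` to `.gitignore` (the `secrets/dotenv-security` rule): it is mechanical and cannot change program behavior. This sketch is illustrative, not the shipped FixEngine:

```typescript
// Idempotent, behavior-preserving fix: ensure .env is listed in .gitignore.
function ensureEnvIgnored(gitignore: string): string {
  const lines = gitignore.split("\n");
  if (lines.some((l) => l.trim() === ".env")) return gitignore; // already ignored
  return gitignore.trimEnd() + "\n.env\n";
}
```

Contrast this with, say, rewriting a hardcoded key into an env-var lookup: that changes runtime behavior, so it belongs in the "flagged for manual review" bucket.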

init

secure-ai-app init

Creates .secure-ai-app.json config and installs a pre-commit hook that blocks commits with critical findings.

hooks

secure-ai-app hooks install   # Add pre-commit hook
secure-ai-app hooks remove    # Remove pre-commit hook

CI/CD integration

GitHub Actions

- name: Security scan
  run: npx @secure-ai-app/cli scan --format sarif --severity high

Block on critical findings

secure-ai-app scan --severity critical && echo "✓ Clean" || exit 1

Configuration

{
  "exclude": ["**/node_modules/**", "**/dist/**", "**/Pods/**"],
  "rules": {
    "general/console-credentials": "warn",
    "ai/unrestricted-tools": "off"
  },
  "severity": "medium"
}
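
A sketch of how a config like this could be applied to scan results. Field names mirror the sample above, but the real loader's semantics may differ (for instance, `"warn"` handling is omitted here):

```typescript
// Apply two config knobs to a finding list: rules set to "off" are
// dropped, and findings below the severity floor are filtered out.
interface Config { rules?: Record<string, "warn" | "off">; severity?: string }
interface Finding { ruleId: string; severity: string }

const ORDER = ["low", "medium", "high", "critical"]; // ascending severity

function applyConfig(findings: Finding[], cfg: Config): Finding[] {
  const min = ORDER.indexOf(cfg.severity ?? "low");
  return findings.filter(
    (f) => cfg.rules?.[f.ruleId] !== "off" && ORDER.indexOf(f.severity) >= min
  );
}
```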

Programmatic API

import { ScanEngine, FixEngine } from '@secure-ai-app/cli';

const engine = new ScanEngine();
const report = await engine.scan({ path: '.', severity: 'high' });

console.log(report.score);     // { value: 93, grade: 'A', ... }
console.log(report.findings);  // Finding[]

const fixer = new FixEngine();
const results = fixer.applyAll(fixer.getFixableFindings(report.findings), '.');

Part of Flowstack

secure-ai-app is the security layer of the Flowstack SDK.

Flowstack provides production-grade primitives for AI-native apps — auth, multi-tenancy, agent connectivity — all as React hooks. secure-ai-app knows every hook, every API pattern, every failure mode, because we built both sides.

That's what makes the auth guard detection, tenant scoping rules, and agent tool analysis work at the semantic level — not just pattern matching.

Flowstack users get deeper analysis automatically. Non-Flowstack projects get everything else.


Why not Semgrep?

Semgrep is a rule engine. A powerful one. But its default quickstart is:

  1. Go to the AppSec Platform
  2. Sign up with GitHub
  3. Create an organization
  4. Then scan

secure-ai-app is:

npx @secure-ai-app/cli scan

No account. No org. No OAuth. No platform dependency. Works before you've committed anything. Works in CI without a token. Works on a weekend side project that doesn't have a GitHub org yet.


Contributing

Found a pattern secure-ai-app misses? Open an issue with the code sample. That's how the rule library grows — real codebases, real findings.

Known false positive? Flag it. We'd rather tune the rules than generate noise.


License

MIT — use it, fork it, build on it.


Built by Keon Cummings · Not In The SOW newsletter

The work nobody scoped.