
@avinashchby/aireview

v0.1.1

Catch what your AI missed -- static analysis for LLM-generated code

Downloads

180

Readme

aireview

Catch what your AI missed. Static analysis for LLM-generated code -- no API keys, no cloud, no cost.


npx aireview .
src/api.ts (confidence: 42%)
  ✗ 3:1   'openai-functions' is a commonly hallucinated package  [phantom-imports]
    | import { createFunction } from 'openai-functions';

  ✗ 14:5  'fs.readFileAsync' does not exist  [hallucinated-apis]
    | const data = await fs.readFileAsync('config.json');
    Fix: Use 'fs.promises.readFile'

  ✗ 22:0  eval() is a code injection vector  [security-antipatterns]
    | const result = eval(userInput);

  ⚠ 28:1  Hedging comment: "adjust as needed"  [confidence-patterns]
    | // adjust as needed for your use case

---
Scanned 12 files in 38ms
Found 3 errors, 1 warning

Why this exists

LLMs write confident-looking code that compiles, passes a glance review, and breaks in production. The failure modes are predictable:

  • Phantom imports -- packages that sound right but don't exist on npm/PyPI
  • Hallucinated APIs -- fs.readFileAsync(), array.flatten(), dict.has_key() -- methods that were never real or were removed years ago
  • Placeholder stubs -- empty function bodies, // TODO comments, throw new Error("Not implemented") hidden behind working-looking signatures
  • Security holes -- eval(), SQL string concatenation, hardcoded API keys -- patterns LLMs reproduce from training data without understanding the risk
  • Swallowed errors -- catch(e) {}, bare except:, .catch(() => {}) -- silent failure modes that make debugging impossible

These patterns are hard to catch in code review because they look intentional. aireview knows the specific shapes of LLM mistakes and flags them before they ship.

Zero API calls. Pure AST parsing and pattern matching. Runs in milliseconds, works offline, costs nothing.
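As a rough illustration of that approach (a minimal sketch, not aireview's actual implementation), a hallucinated-API check can boil down to a lookup table of known-invented methods plus a scan over the source:

```typescript
// Illustrative sketch only -- not aireview's real rule engine.
// A tiny lookup of APIs that LLMs commonly invent, with suggested fixes.
const HALLUCINATED_APIS: Record<string, string> = {
  "fs.readFileAsync": "fs.promises.readFile",
  "array.flatten": "Array.prototype.flat",
};

interface Finding {
  line: number;
  api: string;
  fix: string;
}

function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const [api, fix] of Object.entries(HALLUCINATED_APIS)) {
      // Crude substring match on the method name; a real tool would
      // walk the AST to avoid false positives in strings and comments.
      const method = api.split(".")[1] + "(";
      if (text.includes(method)) {
        findings.push({ line: i + 1, api, fix });
      }
    }
  });
  return findings;
}
```

A real AST-based check additionally resolves what object the method is called on, which is why the CLI output above can say with confidence that `fs.readFileAsync` does not exist.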

Install

npx aireview .              # run without installing
npm install -g aireview     # or install globally

Requires Node.js >= 18.

Usage

aireview .                              # scan current directory
aireview src/api.ts                     # scan a specific file
aireview src/ lib/                      # scan multiple paths
aireview --diff                         # scan staged git changes
aireview --diff HEAD~1                  # scan last commit
aireview --ci                           # CI mode: exit 1 if errors found
aireview --format json                  # JSON output for tooling
aireview --fix                          # auto-fix safe issues
aireview --severity error               # only errors, skip warnings/info
aireview --ignore phantom-imports       # skip specific rules

Detection rules

| Rule | Catches | Severity |
|------|---------|----------|
| phantom-imports | Imports of packages not in your dependency file, known hallucinated package names | warning/error |
| hallucinated-apis | fs.readFileAsync(), array.flatten(), new Buffer(), dict.has_key(), 30+ more | error |
| placeholder-code | TODO/FIXME, empty functions, NotImplementedError, // ... elisions | warning/error |
| confidence-patterns | Generic variable names (data, result, temp), hedging comments, inconsistent style | info/warning |
| security-antipatterns | eval(), new Function(), SQL injection, hardcoded secrets, disabled TLS | error |
| type-safety | any overuse, as any casts, @ts-ignore, @ts-nocheck, loose equality | warning/error |
| error-handling | Empty catch blocks, catch-only-logs, bare except:, missing .catch() | error/warning |

Configuration

Create .aireviewrc.json in your project root:

{
  "rules": {
    "phantom-imports": "error",
    "hallucinated-apis": "error",
    "placeholder-code": "warn",
    "confidence-patterns": "off",
    "security-antipatterns": "error",
    "type-safety": "warn",
    "error-handling": "error"
  },
  "ignore": ["*.test.ts", "*.spec.js"]
}

Or add an "aireview" key to your package.json.
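For example (hypothetical project file; only the "aireview" key and its nested rules are what the tool reads, the rest is ordinary package.json):

```json
{
  "name": "my-app",
  "devDependencies": {
    "aireview": "^0.1.1"
  },
  "aireview": {
    "rules": {
      "confidence-patterns": "off",
      "type-safety": "warn"
    },
    "ignore": ["*.test.ts", "*.spec.js"]
  }
}
```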

GitHub Actions

name: AI Code Review
on: [pull_request]

jobs:
  aireview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Run aireview
        run: npx aireview --diff origin/main --ci

Findings appear as inline annotations on the PR diff. Use --format json to pipe into other tools.

Programmatic API

import { scan } from 'aireview';

const result = await scan({
  paths: ['src/'],
  severity: 'warning',
  ignore: ['confidence-patterns'],
});

for (const file of result.files) {
  console.log(`${file.filePath}: ${file.confidenceScore}% confidence`);
  for (const finding of file.findings) {
    console.log(`  ${finding.line}:${finding.column} ${finding.message}`);
  }
}
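Building on the shapes used above, a script can gate a pipeline on the scan result. The types below mirror only the fields shown in the example (filePath, confidenceScore, findings); they are assumptions for illustration, not the package's published typings, and the 60% threshold is arbitrary:

```typescript
// Sketch: fail a build on low-confidence files. Field names mirror the
// example above; these interfaces are NOT aireview's published typings.
interface Finding {
  line: number;
  column: number;
  message: string;
}

interface FileResult {
  filePath: string;
  confidenceScore: number;
  findings: Finding[];
}

// Files whose confidence score falls below the (arbitrary) threshold.
function lowConfidenceFiles(files: FileResult[], threshold = 60): FileResult[] {
  return files.filter((f) => f.confidenceScore < threshold);
}

// Total findings across all scanned files.
function totalFindings(files: FileResult[]): number {
  return files.reduce((n, f) => n + f.findings.length, 0);
}
```

In CI you might `process.exit(1)` when `lowConfidenceFiles(result.files).length > 0`, mirroring what `--ci` does for the CLI.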

Supported languages

  • JavaScript (.js, .jsx) -- TypeScript compiler API
  • TypeScript (.ts, .tsx) -- TypeScript compiler API
  • Python (.py) -- regex-based parser (no native dependencies)

Designed for extensibility. Add a language by implementing a parser that returns ParserResult and registering rules.
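As a rough sketch of that pattern (the real ParserResult interface is not documented in this README, so every field below is an assumption), a new language parser might look like:

```typescript
// Hypothetical shapes -- the actual ParserResult interface in aireview
// is not shown in this README; this only illustrates the general idea
// of a parser that rules can pattern-match over.
interface ParsedNode {
  kind: string;
  line: number;
  text: string;
}

interface ParserResult {
  language: string;
  nodes: ParsedNode[];
}

// A naive line-based "parser": treats each non-empty line as one node,
// much like the regex-based Python parser avoids native dependencies.
function parseLines(language: string, source: string): ParserResult {
  const nodes = source
    .split("\n")
    .map((text, i) => ({ kind: "line", line: i + 1, text }))
    .filter((n) => n.text.trim().length > 0);
  return { language, nodes };
}
```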

License

MIT