
ai-authorship · v0.2.0 · 125 downloads

See what your AI wrote — authorship, blind spots, risk.

ai-authorship

Scan your git history to see how much of your code your AI tools actually wrote, which models did the work, and where those models have known weaknesses.

npx ai-authorship scan

Previously known as @mattersec/vibecheck. That package is deprecated — point your scripts at ai-authorship instead.

Why this exists

Almost every developer uses AI to write code. Almost nobody knows where their AI gets things wrong.

ai-authorship answers four questions about a repository:

  1. How much of the code is AI-written? Authorship percentage across commits and contributors.
  2. Which tools and models? Claude Code, Cursor, Copilot, Codex, and others, with confidence levels.
  3. Where are the blind spots? Categories and languages where the detected models score poorly on the SecLens benchmark.
  4. What is the highest-risk area? Directories with heavy AI authorship in languages where the model has weak coverage.

It is not a vulnerability scanner. It is a mirror.

Quick start

# Scan the current directory
npx ai-authorship scan

# Scan a specific repo
npx ai-authorship scan --path ~/code/my-project

# Limit how many commits to analyze (default 5000)
npx ai-authorship scan --max-commits 1000

# Restrict to one branch
npx ai-authorship scan --branch main

# Machine-readable output
npx ai-authorship scan --json > ai-authorship.json

Share your scan

Generate a 1200×630 PNG card you can post to Twitter, Slack, or drop into a README:

# Default path: <repo>/ai-authorship.png
npx ai-authorship scan --png

# Custom path (file or existing directory)
npx ai-authorship scan --png ./docs/card.png

# Copy the image straight to your clipboard
npx ai-authorship scan --copy

# Hide the repo name (useful for private repos)
npx ai-authorship scan --png --anonymous

The card shows AI authorship percentage, risk grade, top models, and top blind spots. The repo name and contributor data never leave your machine.

ai-authorship share card example

Requirements: Node 18+ and a git repository.

What you get

The scan produces a single-screen terminal report:

  • AI authorship: overall percentage, with a breakdown of confirmed (commit trailers, tags, co-authors) versus heuristic detections.
  • Model distribution: which AI tools and underlying models contributed.
  • Repo insights: AI hotspot directories, AI commit size compared to human, contributor mix.
  • Blind spots: categories where your detected models underperform on SecLens benchmarks (model × language × OWASP category).
  • Code quality: per-language exposure (% AI-authored) and the percentage of vulnerabilities your AI is likely to miss.
  • Risk score: an A-through-F grade based on AI coverage and blind-spot severity.
  • Tips: concrete prompt and review suggestions tailored to the models in use.
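The real grade computation lives in src/scoring/, which is not shown here; as a rough illustration of how an A-through-F grade could combine AI coverage with blind-spot severity (the function name and thresholds below are invented for the sketch, not the actual scoring):

```typescript
// Illustrative sketch only — not the real src/scoring/ implementation.
// Both inputs are fractions in [0, 1]: aiShare is how much of the code
// is AI-authored, blindSpotSeverity is how weak the detected models are
// in the repo's languages and categories.
function riskGrade(aiShare: number, blindSpotSeverity: number): string {
  // Exposure grows when a lot of AI-written code sits in areas
  // the model handles poorly.
  const exposure = aiShare * blindSpotSeverity;
  if (exposure < 0.05) return "A";
  if (exposure < 0.15) return "B";
  if (exposure < 0.3) return "C";
  if (exposure < 0.5) return "D";
  return "F";
}
```

With these made-up thresholds, a repo that is 80% AI-written with severity 0.5 would land at D, while 20% AI-written with severity 0.1 stays at A.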

ai-authorship terminal output

How it works

git log + diffs
   ↓
detectors (trailers, tags, co-authors, message patterns, file/diff heuristics)
   ↓
per-author baselines (calibrate heuristics)
   ↓
SecLens intelligence join (model × category × language)
   ↓
scoring + report

Detection strategies

Six detectors live in src/scanner/detectors/:

| Detector | Signal | Confidence |
|---|---|---|
| trailer | Generated-By: / Assistant: git trailers | confirmed |
| co-author | Co-Authored-By: lines naming Claude, Copilot, Cursor, etc. | confirmed |
| tag | [claude], [cursor], [ai] tags in subjects | confirmed |
| message-patterns | Conversational or model-style commit prose | heuristic |
| conventional-rich | Suspiciously polished conventional commits | heuristic |
| files-multiplier | Commits much larger than the author's baseline | heuristic |

A commit is attributed to AI if a confirmed detector fires, or if multiple heuristics agree. Per-author baselines in src/scanner/baselines.ts keep heuristics from over-firing on contributors who normally write large or polished commits.
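The attribution rule above can be sketched as a small predicate. This is a simplified illustration with invented type names; the real detectors in src/scanner/detectors/ carry more state, including the per-author baselines:

```typescript
// Simplified sketch of the attribution rule — not the actual detector code.
type Confidence = "confirmed" | "heuristic";

interface DetectorHit {
  detector: string; // e.g. "trailer", "message-patterns"
  confidence: Confidence;
}

// A commit counts as AI-authored if any confirmed detector fired,
// or if at least two distinct heuristics agree.
function isAiAuthored(hits: DetectorHit[]): boolean {
  const confirmed = hits.some((h) => h.confidence === "confirmed");
  const heuristics = new Set(
    hits.filter((h) => h.confidence === "heuristic").map((h) => h.detector)
  );
  return confirmed || heuristics.size >= 2;
}
```

A single heuristic hit is never enough on its own, which is what keeps one unusually large or polished commit from flipping a human contributor's history to AI.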

Intelligence data

Blind-spot data ships in data/seclens-intelligence.json, a snapshot of SecLens benchmark runs (12 models × 8 OWASP categories × 10 languages). Refresh it with:

python3 scripts/extract-seclens.py /path/to/seclens/reports/
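The intelligence join can be pictured as filtering benchmark cells down to the models detected in the repo and keeping the weak spots. The field names below are illustrative, not the actual seclens-intelligence.json schema:

```typescript
// Illustrative shape of the model × category × language join —
// not the real data/seclens-intelligence.json schema.
interface BenchmarkCell {
  model: string;    // e.g. "claude-sonnet"
  category: string; // OWASP category, e.g. "A03-injection"
  language: string; // e.g. "typescript"
  missRate: number; // fraction of benchmark vulnerabilities missed
}

// Blind spots: cells where a model actually detected in the repo
// misses more than a threshold share of known vulnerabilities.
function blindSpots(
  cells: BenchmarkCell[],
  detectedModels: Set<string>,
  threshold = 0.4
): BenchmarkCell[] {
  return cells
    .filter((c) => detectedModels.has(c.model) && c.missRate > threshold)
    .sort((a, b) => b.missRate - a.missRate);
}
```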

Privacy

Everything runs locally. The scan reads git history with git log and never uploads source code, diffs, commit messages, author names, or paths. There is no telemetry. The whole report is generated on your machine from data already in your .git/ directory.

If we ever add a pattern-sharing feature for the intelligence flywheel, it will be opt-in, anonymized, and called out explicitly in the CLI before the first send.

Development

# Install
npm install

# Run from source
npx tsx src/index.ts scan --path /path/to/repo

# Build single-file bundle
npm run build

# Type check
npm run lint

# Tests
npm test

# Link the local build globally
npm link && ai-authorship scan

Stack

TypeScript (strict, ESM), Node 18+. tsup for bundling, vitest for tests, commander for the CLI, chalk / boxen / cli-table3 for the terminal UI.

Project layout

src/
  cli.ts              # commander setup
  scanner/            # git log parsing, detectors, baselines, insights
  intelligence/       # SecLens data loader + model registry
  scoring/            # risk score + grade
  report/             # terminal renderer
  png/                # share card: sanitize, satori, resvg, PNG
  output/             # flag dispatcher (terminal / json / png / clipboard)
data/
  seclens-intelligence.json
  fonts/              # bundled JetBrains Mono used by the share card
scripts/
  extract-seclens.py        # rebuild intelligence data from SecLens runs
  png-preview.ts            # render every fixture to /tmp for design eyeballing
  png-update-snapshots.ts   # refresh visual-regression baselines

Migrating from @mattersec/vibecheck

Anything you used to run as npx @mattersec/vibecheck ... now runs as npx ai-authorship .... Flags, JSON output, and PNG behavior are unchanged. The only cosmetic difference is the default PNG filename: ai-authorship.png instead of vibe-check.png.

If you pinned the old package in CI, swap the name; otherwise there is nothing to do.

Contributing

Contributions welcome, especially new detectors and intelligence improvements.

  • Commits: Conventional commits (feat:, fix:, docs:, etc.).
  • Style: TypeScript strict, ESM only, no default exports.
  • Tests: Add *.test.ts next to the file you change.

License

MIT, see LICENSE.

Built by MatterSec. Questions: [email protected].