
@jamierumbelow/lgtm

v0.1.7

Published

Structured PR review companion - because 'lgtm' should mean something

Downloads

563


lgtm

Structured PR review companion — because "lgtm" should mean something

lgtm takes a GitHub PR (or local branch diff) and generates a structured review that breaks down changes into semantic groups, asks the questions that matter, and even tries to find the AI coding sessions that generated the code.

Quickstart

npx @jamierumbelow/lgtm https://github.com/org/repo/pull/123

Or with Bun:

bunx @jamierumbelow/lgtm https://github.com/org/repo/pull/123

Requirements

  • GitHub CLI (gh) installed and authenticated
  • Node.js 20+ (or Bun)

Installation

You can also install globally:

npm install -g @jamierumbelow/lgtm

Canary Builds

To run the latest build from master (may be unstable):

npx @jamierumbelow/lgtm@canary https://github.com/org/repo/pull/123

Or install the canary globally:

npm install -g @jamierumbelow/lgtm@canary

Shell Alias

For convenience, add an alias to your shell config (~/.zshrc, ~/.bashrc, etc.):

alias lgtm="npx @jamierumbelow/lgtm"

Then just run:

lgtm https://github.com/org/repo/pull/123

API Key Setup

lgtm uses LLMs to generate detailed analysis. Configure your API key(s) using the interactive config:

lgtm --config

This stores keys securely in your system keychain. Alternatively, set environment variables:

  • ANTHROPIC_API_KEY for Claude models
  • OPENAI_API_KEY for GPT models
  • GOOGLE_GENERATIVE_AI_API_KEY for Gemini models
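
For example, to use Claude models via an environment variable rather than the keychain (the key value below is a placeholder, not a real key):

```shell
# Placeholder key: substitute your real Anthropic API key
export ANTHROPIC_API_KEY="sk-ant-placeholder"
lgtm https://github.com/org/repo/pull/123
```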

Usage

# Review a GitHub PR
lgtm https://github.com/org/repo/pull/123

# Review a local branch diff (auto-detects base from current branch)
lgtm

# Review specific branches
lgtm --base main --head feature/my-branch
lgtm main...feature/my-branch

# Output formats
lgtm <target> --format html         # default, opens in browser
lgtm <target> --format md           # markdown to stdout
lgtm <target> --format json | jq '.changeGroups[]'

# Save to file instead of stdout/browser
lgtm <target> --format html -o review.html
lgtm <target> --format md -o review.md

# Hunt for LLM traces
lgtm <target> --find-traces --claude-dir ~/.claude

# Choose LLM model
lgtm <target> -m claude-opus-4.5
lgtm <target> -m gpt-5.2
lgtm <target> -m gemini-3-flash

# Skip LLM analysis (faster, less detailed)
lgtm <target> --no-llm

# Bypass cache
lgtm <target> --fresh

Management Flags

lgtm --config               # Configure API keys (Anthropic, OpenAI, Gemini)
lgtm --upgrade              # Upgrade to latest version
lgtm --upgrade --canary     # Switch to canary (bleeding edge) builds
lgtm --upgrade --stable     # Switch to stable builds
lgtm --version              # Show detailed version information

Options

| Option | Description |
| ----------------------- | ------------------------------------------------------------------------------ |
| -b, --base <branch> | Base branch for local comparison (auto-detected by default) |
| -h, --head <branch> | Head branch for local comparison |
| -f, --format <format> | Output format: html (default), md, json |
| -o, --output <file> | Output file (defaults to browser for html, stdout for md/json) |
| -p, --port <port> | Port for HTML server (default: 24601) |
| -m, --model <model> | LLM model: claude-sonnet-4.5, claude-opus-4.5, gpt-5.2, gemini-3-flash |
| --no-llm | Skip LLM-powered analysis |
| --find-traces | Find LLM session traces that generated the changes |
| --claude-dir <path> | Path to Claude Code history (default: ~/.claude) |
| --cursor-dir <path> | Path to Cursor history directory |
| --fresh | Bypass cache and fetch fresh data |
| --verbose | Enable verbose logging |

What it does

1. Semantic Change Grouping

Instead of just showing file-by-file diffs, lgtm groups changes by logical unit:

  • Changes to the same function across multiple files
  • Related test and implementation changes
  • Configuration updates

2. Standard Review Questions

Every review answers a consistent set of questions:

  • Failure modes: What can go wrong? What's already handled?
  • Input domain: What inputs does this code accept?
  • Output range: What outputs can it produce?
  • External dependencies: What systems outside this codebase does it rely on?
  • Decomposition: Can this PR be split into smaller ones?
  • New symbols: What functions/classes/types does it introduce?
  • Duplication: Does it duplicate existing code?
  • Abstractions: Do the design choices make sense?
  • Context owners: Who has worked on these files before?

3. LLM Trace Finder

When you use --find-traces, lgtm searches for AI coding sessions that might have generated the changes:

  • Claude Code history (default: ~/.claude/projects/, customize with --claude-dir)
  • Cursor history (specify with --cursor-dir)

This helps answer "what was the AI told to do?" — useful for understanding intent behind generated code.

4. Suggested Reviewers

Based on git blame and PR review history, lgtm suggests who might have context to review the changes.
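
You can approximate this signal by hand with plain git. This sketch (the file path is illustrative) lists the most frequent authors of a file, which is roughly the "who has context here" idea:

```shell
# Count commits per author for a given file; the most frequent
# authors are likely to have useful review context.
git log --format='%an' -- src/cli.ts | sort | uniq -c | sort -rn | head -3
```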

Output Formats

HTML (default)

A static, self-contained webpage that opens in your browser with:

  • Sidebar navigation
  • Collapsible sections
  • Keyboard navigation (← → or j k)
  • Dark mode (GitHub-style)

Markdown

Clean, readable markdown suitable for pasting into a PR comment or wiki.

lgtm <target> --format md

JSON

Machine-readable output for integrating with other tools:

lgtm <target> --format json | jq '.questions[] | select(.id == "failure-modes")'
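
Since the JSON output includes a changeGroups array (per the earlier example), a saved review can also be queried offline; the filename here is arbitrary:

```shell
# Save the review once, then query it without re-running the analysis
lgtm <target> --format json -o review.json
jq '.changeGroups | length' review.json    # how many semantic groups
```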

Development

git clone https://github.com/jamierumbelow/lgtm
cd lgtm
bun install
bun run dev https://github.com/org/repo/pull/123

Global Dev Command

To install a global lgtm command that runs the local source code:

./install-dev.sh

This creates a wrapper at /usr/local/bin/lgtm that runs bun src/cli.ts. Any changes you make to the source are immediately reflected.

Roadmap

  • [ ] Tree-sitter integration for better semantic grouping
  • [x] LLM-powered descriptions and question answers
  • [x] Cursor history support (partial - use --cursor-dir)
  • [ ] GitHub Action for automated PR reviews
  • [ ] VSCode extension

License

MIT