
diff-hound v1.2.0

AI-powered code review bot for GitHub, GitLab, and Bitbucket

Downloads: 138

Diff Hound

Diff Hound is an automated AI-powered code review tool that posts intelligent, contextual comments directly on pull requests across supported platforms.

Supports GitHub today. GitLab and Bitbucket support are planned.


✨ Features

  • 🧠 Automated code review using OpenAI or Ollama (Upcoming: Claude, DeepSeek, Gemini)
  • 💬 Posts inline or summary comments on pull requests
  • 🔌 Plug-and-play architecture for models and platforms
  • ⚙️ Configurable with JSON/YAML config files and CLI overrides
  • 🛠️ Designed for CI/CD pipelines and local runs
  • 🧐 Tracks last reviewed commit to avoid duplicate reviews
  • 🖥️ Local diff mode — review local changes without a remote PR

🛠️ Installation

Option 1: Install via npm

npm install -g diff-hound

Option 2: Install from source

git clone https://github.com/runtimebug/diff-hound.git
cd diff-hound
npm install
npm run build
npm link

🚀 How to Use

Step 1: Setup Environment Variables

Copy the provided .env.example to .env and fill in your credentials:

cp .env.example .env

Then modify with your keys / tokens:

# Platform tokens
GITHUB_TOKEN=your_github_token # Requires 'repo' scope

# AI Model API keys (set one depending on your provider)
OPENAI_API_KEY=your_openai_key

🔐 GITHUB_TOKEN is used to fetch PRs and post comments – get it here
🔐 OPENAI_API_KEY is used to generate code reviews via GPT – get it here
💡 Using Ollama? No API key needed; just have Ollama running locally. See Ollama (Local Models) below.


Step 2: Create a Config File

You can define your config in .aicodeconfig.json or .aicode.yml. The model field accepts any model your provider supports, and endpoint optionally points at a custom API base URL:

JSON Example (.aicodeconfig.json)

{
  "provider": "openai",
  "model": "gpt-4o",
  "endpoint": "",
  "gitProvider": "github",
  "repo": "your-username/your-repo",
  "dryRun": false,
  "verbose": false,
  "rules": [
    "Prefer const over let when variables are not reassigned",
    "Avoid reassigning const variables",
    "Add descriptive comments for complex logic",
    "Remove unnecessary comments",
    "Follow the DRY (Don't Repeat Yourself) principle",
    "Use descriptive variable and function names",
    "Handle errors appropriately",
    "Add type annotations where necessary"
  ],
  "ignoreFiles": ["*.md", "package-lock.json", "yarn.lock", "LICENSE", "*.log"],
  "commentStyle": "inline",
  "severity": "suggestion"
}

YAML Example (.aicode.yml)

provider: openai
model: gpt-4o # Or any other openai model
endpoint: "" # Optional: custom endpoint
gitProvider: github
repo: your-username/your-repo
dryRun: false
verbose: false
commentStyle: inline
severity: suggestion
ignoreFiles:
  - "*.md"
  - package-lock.json
  - yarn.lock
  - LICENSE
  - "*.log"
rules:
  - Prefer const over let when variables are not reassigned
  - Avoid reassigning const variables
  - Add descriptive comments for complex logic
  - Remove unnecessary comments
  - Follow the DRY (Don't Repeat Yourself) principle
  - Use descriptive variable and function names
  - Handle errors appropriately
  - Add type annotations where necessary
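Taken together, the documented options map onto a simple shape. The following TypeScript type is only an illustration built from the examples above; diff-hound does not export a type with this name, and the exact field types are assumptions:

```typescript
// Hypothetical type mirroring the documented config keys.
// Not part of the diff-hound package; for illustration only.
type ReviewConfig = {
  provider: "openai" | "ollama";
  model: string;
  endpoint?: string;           // optional custom API endpoint
  gitProvider: string;         // currently "github"
  repo: string;                // "owner/repo"
  dryRun: boolean;
  verbose: boolean;
  rules: string[];
  ignoreFiles: string[];
  commentStyle: "inline" | "summary";
  severity: string;
};

// A trimmed-down version of the JSON example, parsed into the type.
const raw = `{
  "provider": "openai",
  "model": "gpt-4o",
  "gitProvider": "github",
  "repo": "your-username/your-repo",
  "dryRun": false,
  "verbose": false,
  "rules": ["Handle errors appropriately"],
  "ignoreFiles": ["*.md"],
  "commentStyle": "inline",
  "severity": "suggestion"
}`;

const config: ReviewConfig = JSON.parse(raw);
console.log(config.provider, config.commentStyle); // openai inline
```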

Step 3: Run It

diff-hound

Or override config values via CLI:

diff-hound --repo=owner/repo --provider=openai --model=gpt-4o --dry-run

Add --dry-run to print comments to console instead of posting them.
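Conceptually, the override behavior is a merge in which CLI flags win over config-file values. A minimal sketch (not the package's actual implementation):

```typescript
// Illustrative only: CLI flags take precedence over config-file values.
type PartialConfig = Record<string, unknown>;

function mergeConfig(fileConfig: PartialConfig, cliFlags: PartialConfig): PartialConfig {
  // Later spreads win, so CLI values override file values.
  return { ...fileConfig, ...cliFlags };
}

const merged = mergeConfig(
  { provider: "openai", model: "gpt-4o", dryRun: false }, // from the config file
  { dryRun: true },                                       // from --dry-run
);
console.log(merged.dryRun); // true
```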


Local Diff Mode

Review local git changes without a remote PR or GitHub token. Only an LLM API key is needed.

# Review changes between current branch and main
diff-hound --local --base main

# Review last commit
diff-hound --local --base HEAD~1

# Review changes between two specific refs
diff-hound --local --base main --head feature-branch

# Review a patch file directly
diff-hound --patch changes.patch

Local mode always runs in dry-run — output goes to your terminal. If --base is omitted, it defaults to the upstream tracking branch or HEAD~1.
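The fallback described above could be sketched as follows; the function and parameter names are hypothetical, but the order (explicit --base, then upstream tracking branch, then HEAD~1) matches the documented behavior:

```typescript
// Hypothetical sketch of the documented --base fallback.
function resolveBaseRef(
  cliBase: string | undefined,        // value of --base, if given
  upstreamBranch: string | undefined, // e.g. "origin/main", if tracking is set up
): string {
  if (cliBase) return cliBase;
  if (upstreamBranch) return upstreamBranch;
  return "HEAD~1"; // last-resort default
}

console.log(resolveBaseRef("main", "origin/main"));      // "main"
console.log(resolveBaseRef(undefined, "origin/main"));   // "origin/main"
console.log(resolveBaseRef(undefined, undefined));       // "HEAD~1"
```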


Ollama (Local Models)

Run fully offline code reviews using Ollama — no API key, no cloud, zero cost.

Prerequisites: Install and start Ollama, then pull a model:

# Install Ollama (see https://ollama.com/download)
ollama serve          # Start the Ollama server
ollama pull llama3    # Pull a model (one-time)

Run a review with Ollama:

# Review local changes using Ollama
diff-hound --provider ollama --model llama3 --local --base main

# Use a code-specialized model
diff-hound --provider ollama --model codellama --local --base main

# Point to a remote Ollama instance
diff-hound --provider ollama --model llama3 --model-endpoint http://my-server:11434 --local --base main

# Increase timeout for large diffs on slower models (default: 120000ms)
diff-hound --provider ollama --model llama3 --request-timeout 300000 --local --base main

Or set it in your config file (.aicodeconfig.json):

{
  "provider": "ollama",
  "model": "llama3",
  "endpoint": "http://localhost:11434"
}

💡 Ollama's default endpoint is http://localhost:11434. You only need --model-endpoint / endpoint if running Ollama on a different host or port.


Output Example (Dry Run)

== Comments for PR #42: Fix input validation ==

src/index.ts:17 —
Prefer `const` over `let` since `userId` is not reassigned.

src/utils/parse.ts:45 —
Consider refactoring to reduce nesting.

Optional CLI Flags

| Flag              | Short | Description                              |
| ----------------- | ----- | ---------------------------------------- |
| --provider        | -p    | AI model provider (openai, ollama)       |
| --model           | -m    | AI model (e.g. gpt-4o, llama3)           |
| --model-endpoint  | -e    | Custom API endpoint for the model        |
| --git-provider    | -g    | Repo platform (default: github)          |
| --repo            | -r    | GitHub repo in format owner/repo         |
| --comment-style   | -s    | inline or summary                        |
| --dry-run         | -d    | Don't post comments, only print          |
| --verbose         | -v    | Enable debug logs                        |
| --config-path     | -c    | Custom config file path                  |
| --local           | -l    | Review local git diff (always dry-run)   |
| --base            |       | Base ref for local diff (branch/commit)  |
| --head            |       | Head ref for local diff (default: HEAD)  |
| --patch           |       | Path to a patch file (implies --local)   |
| --request-timeout |       | Request timeout in ms (default: 120000)  |


🛠️ Development

Project Structure

diff-hound/
├── bin/                  # CLI entrypoint
├── src/
│   ├── cli/              # CLI argument parsing
│   ├── config/           # JSON/YAML config handling
│   ├── core/             # Diff parsing, formatting
│   ├── models/           # AI model adapters (OpenAI, Ollama)
│   ├── platforms/        # GitHub, local git, etc.
│   ├── schemas/          # Structured output types and validation
│   └── types/            # TypeScript types
├── .env
├── README.md

Add Support for New AI Models

Create a new class in src/models/ that implements the CodeReviewModel interface.
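A self-contained sketch of what such a class might look like. The real CodeReviewModel interface lives in src/models/; its exact method signatures are assumptions here, and the EchoModel adapter is purely hypothetical:

```typescript
// Assumed shape of the interface; check src/models/ for the real one.
interface CodeReviewModel {
  name: string;
  review(diff: string, rules: string[]): Promise<string[]>;
}

// Hypothetical adapter. A real one would call a provider's API with
// the diff and rules; this stub returns a single canned comment.
class EchoModel implements CodeReviewModel {
  name = "echo";

  async review(diff: string, rules: string[]): Promise<string[]> {
    const lines = diff.split("\n").length;
    return [`Reviewed ${lines} line(s) against ${rules.length} rule(s).`];
  }
}

const model = new EchoModel();
model.review("+ const x = 1;", ["Prefer const"]).then((c) => console.log(c[0]));
```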


Add Support for New Platforms

Create a new class in src/platforms/ that implements the CodeReviewPlatform interface.
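As with models, here is a self-contained sketch. The real CodeReviewPlatform interface lives in src/platforms/; its method signatures are assumed, and the in-memory platform below is a hypothetical test double rather than a usable adapter:

```typescript
// Assumed shape of the interface; check src/platforms/ for the real one.
interface CodeReviewPlatform {
  name: string;
  fetchDiff(repo: string, prNumber: number): Promise<string>;
  postComment(repo: string, prNumber: number, body: string): Promise<void>;
}

// Hypothetical in-memory platform: records comments instead of posting.
class MemoryPlatform implements CodeReviewPlatform {
  name = "memory";
  posted: string[] = [];

  async fetchDiff(_repo: string, _prNumber: number): Promise<string> {
    return "+ const x = 1;"; // canned diff instead of a real API call
  }

  async postComment(_repo: string, _prNumber: number, body: string): Promise<void> {
    this.posted.push(body);
  }
}

const platform = new MemoryPlatform();
platform.postComment("owner/repo", 42, "Looks good").then(() => console.log(platform.posted.length)); // 1
```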


✅ Next Steps

  • 🔧 Structured logging (pino)
  • 🌐 GitLab and Bitbucket platform adapters
  • 🌍 Anthropic and Gemini model adapters
  • 📤 Webhook server mode and GitHub Action
  • 📦 Docker image for self-hosting
  • 🧩 Plugin system with pipeline hooks
  • 🧠 Repo indexing and context-aware reviews


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for:

  • Branching and commit conventions (Angular style)
  • PR workflow (squash-merge)
  • How to add new platform and model adapters

📜 License

MIT – Use freely, contribute openly.