mr-pilot

v1.1.2

AI code reviewer for GitLab MRs and GitHub PRs - Get instant feedback on bugs, quality, and requirements. Works with OpenAI, Ollama, Claude & more.

Readme

AI Code Review Bot

Automated code review tool that analyzes GitLab Merge Requests and GitHub Pull Requests using AI.

Setup

  1. Install dependencies:

npm install

For development (includes testing tools):

npm install --include=dev

  2. Create a .env file (copy from .env.example):

cp .env.example .env

  3. Fill in your credentials in .env:
    • For GitLab:
      • GITLAB_TOKEN: Your GitLab personal access token (with api scope)
      • GITLAB_API: Your GitLab API URL (e.g., https://gitlab.com/api/v4)
      • GITLAB_DEFAULT_PROJECT: (Optional) Default project path for using MR ID only
    • For GitHub:
      • GITHUB_TOKEN: Your GitHub personal access token (with repo scope)
      • GITHUB_DEFAULT_REPO: (Optional) Default repository (e.g., owner/repo) for using PR number only
    • General:
      • MAX_DIFF_CHARS: (Optional) Maximum characters for diffs (default: 50000)
    • LLM Configuration:
      • LLM_PROVIDER: LLM provider to use (openrouter, openai, ollama, azure)
      • LLM_API_KEY: Your LLM API key (not needed for Ollama)
      • LLM_MODEL: Model to use (e.g., openai/gpt-oss-120b:exacto, gpt-4o, llama3.1:8b)
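Putting the variables above together, a minimal .env for GitLab with OpenRouter might look like this (all values are placeholders):

```shell
# Example .env — every value below is a placeholder
GITLAB_TOKEN=glpat-xxxxxxxxxxxx
GITLAB_API=https://gitlab.com/api/v4
# GITLAB_DEFAULT_PROJECT=group/subgroup/project   # optional

LLM_PROVIDER=openrouter
LLM_API_KEY=sk-or-v1-xxxxxxxx
LLM_MODEL=openai/gpt-oss-120b:exacto

# MAX_DIFF_CHARS=50000   # optional, this is the default
```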

Usage

GitLab

Using full MR URL:

node src/index.js https://gitlab.com/MyOrg/MyGroup/MyProject/-/merge_requests/1763

Using MR ID with default project (set in .env):

# Set GITLAB_DEFAULT_PROJECT=RD_soft/simpliciti-frontend/geored-v3 in .env
node src/index.js 1763

Using MR ID with project argument:

node src/index.js 1763 --project RD_soft/simpliciti-frontend/geored-v3

Or using short flag:

node src/index.js 1763 -p RD_soft/simpliciti-frontend/geored-v3

GitHub

Using full PR URL:

node src/index.js https://github.com/owner/repo/pull/123

Using PR number with default repository (set in .env):

# Set GITHUB_DEFAULT_REPO=owner/repo in .env
node src/index.js 123

Auto-selection rules for numeric IDs:

  • 2-segment paths (e.g., owner/repo) → Auto-selects GitHub
  • 3+ segment paths (e.g., group/subgroup/project) → Auto-selects GitLab
  • Priority: --project argument > GITHUB_DEFAULT_REPO > GITLAB_DEFAULT_PROJECT
  • Override anytime with --platform gitlab or --platform github
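The segment-counting rule above can be sketched roughly as follows (a simplified illustration, not the tool's actual code; the function name is hypothetical):

```javascript
// Hypothetical sketch of the platform auto-selection described above.
// GitHub paths have two segments (owner/repo); GitLab paths have three
// or more (group/subgroup/project). Anything else is ambiguous.
function detectPlatform(projectPath) {
  const segments = projectPath.split("/").filter(Boolean);
  if (segments.length === 2) return "github";
  if (segments.length >= 3) return "gitlab";
  return null; // ambiguous: requires an explicit --platform flag
}
```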

Using PR number with repository argument:

node src/index.js 123 --project owner/repo

Or using short flag:

node src/index.js 123 -p owner/repo

With ticket specification file:

node src/index.js 1763 --input-file input.txt
# or
node src/index.js 1763 -i input.txt

With project guidelines (reduces false positives):

node src/index.js 1763 -i input.txt --guidelines-file guidelines.txt
# or
node src/index.js 1763 -i input.txt -g guidelines.txt

Post review as comment on MR:

node src/index.js 1763 --comment
# or
node src/index.js 1763 -c

Debug mode (see what's sent to LLM):

node src/index.js 1763 -i input.txt --debug
# or
node src/index.js 1763 -i input.txt -d

Increase diff size limit for large MRs:

# First run shows: "For complete review, use: --max-diff-chars 75000"
node src/index.js 1763

# Then run with recommended size
node src/index.js 1763 --max-diff-chars 75000
# or
node src/index.js 1763 -m 75000

Combine all options:

node src/index.js 1763 -p RD_soft/simpliciti-frontend/geored-v3 -i input.txt -m 100000 --comment --debug

Options

  • --comment, -c: Post the review as a comment on the MR/PR
  • --input-file <path>, -i <path>: Path to a file containing ticket/requirement specification
  • --guidelines-file <path>, -g <path>: Path to project guidelines file (helps reduce false positives)
  • --project <path>, -p <path>: GitLab project path (e.g., group/subgroup/project) or GitHub repository (e.g., owner/repo)
  • --max-diff-chars <number>, -m <number>: Maximum characters for diffs (overrides MAX_DIFF_CHARS in .env)
  • --fail-on-truncate: Exit with error if diff is truncated (useful for CI/CD to enforce complete reviews)
  • --platform <gitlab|github>: Explicitly specify the platform when using a numeric ID with an ambiguous project path
  • --debug, -d: Show detailed debug information (prompt sent to LLM, raw response, etc.)

Diff Size Management

Large MRs may have their diffs truncated to fit within token limits. The tool helps you handle this:

  1. First run: Shows if truncation occurred and recommends the exact size needed

    ⚠️  DIFF TRUNCATED
       Original size: 72,580 chars
       Showing: 50,000 chars
       Files hidden: 16
       💡 For complete review, use: --max-diff-chars 73580
  2. Re-run with recommended size: Get a complete review

    node src/index.js 1763 --max-diff-chars 73580
  3. Set default in .env (optional): Avoid specifying each time

    MAX_DIFF_CHARS=100000
  4. Fail on truncation (CI/CD): Exit with error code 1 if diff is incomplete

    # Useful in CI/CD pipelines to ensure reviews are complete
    node src/index.js 1763 --fail-on-truncate
    
    # Output when truncated:
    # ❌ Exiting: diff is truncated (--fail-on-truncate enabled)
    #    Run with the recommended --max-diff-chars to review all changes.
    # Exit code: 1

Debug Mode

When using --debug, the tool will display:

  • 📋 Full ticket specification content
  • 📊 MR metadata (title, description, branches, file count)
  • 🤖 Complete prompt sent to the LLM
  • 📏 Character counts for each section
  • 💬 Raw LLM response before parsing
  • 💬 Comment body that will be posted (if using --comment)

This helps you understand what context the AI is working with and verify the quality of the analysis.

Input File Format

The input file should contain the ticket scope, requirements, or specification that the MR is supposed to implement. This helps the AI evaluate whether the code changes meet the stated goals.

Important: When an input file is provided, the AI will ONLY review changes related to those requirements. This helps ignore unrelated commits that may be present due to branch merges.

Example input.txt:

Feature: Add user authentication
- Implement login form with email/password
- Add JWT token generation
- Include logout functionality
- Add session management

Guidelines File Format

The guidelines file helps reduce false positives by informing the AI about project-specific conventions and configurations.

Example guidelines.txt:

1. console logs (any type) are automatically disabled in production.
2. VITE_ envs are normalized so they can be specified in camelCase (works out of the box)
3. Unit tests are handled by a separate CI pipeline, not required in every MR
4. TypeScript strict mode is not enabled project-wide

When provided, the AI will NOT flag these as issues, reducing noise in the review.

LLM Provider Configuration

The tool supports multiple LLM providers. Configure via environment variables:

OpenRouter (default)

LLM_PROVIDER=openrouter
LLM_API_KEY=sk-or-v1-...
LLM_MODEL=openai/gpt-oss-120b:exacto

OpenAI

LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o

Ollama (local)

LLM_PROVIDER=ollama
LLM_MODEL=llama3.1:8b
# LLM_API_URL=http://localhost:11434/v1/chat/completions  # default

Azure OpenAI

LLM_PROVIDER=azure
LLM_API_KEY=your_azure_key
LLM_MODEL=gpt-4
LLM_API_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-15-preview

Custom OpenAI-compatible API

LLM_PROVIDER=openai
LLM_API_KEY=your_key
LLM_MODEL=your-model
LLM_API_URL=https://your-custom-endpoint.com/v1/chat/completions
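All of the providers above speak the OpenAI-style chat-completions protocol, which is why a single LLM_API_URL override covers them. A minimal sketch of the request shape such an endpoint expects (illustrative only; the tool's internal request construction may differ):

```javascript
// Builds an OpenAI-compatible chat-completions request.
// Illustrative sketch only — not the tool's actual internals.
function buildChatRequest({ apiUrl, apiKey, model, prompt }) {
  const headers = { "Content-Type": "application/json" };
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`; // Ollama needs no key
  return {
    url: apiUrl,
    headers,
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}
```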

Note: Legacy OPENROUTER_API_KEY and OPENROUTER_MODEL variables are still supported for backward compatibility.

Testing

Run the test suite:

npm test

Run tests with coverage:

npm test -- --coverage

Output

The tool provides:

  • Goal status (met/partially_met/unmet)
  • List of potential issues
  • Overall remarks
  • Quality score (0-100)

With --comment flag, this same output is posted as a formatted comment on the MR.

Error Handling & Reliability

The tool includes robust error handling:

Automatic Retries

  • LLM requests: Up to 3 attempts with 2-minute timeout per attempt
  • GitLab/GitHub API: Up to 3 attempts with 30-second timeout per attempt
  • Rate limits (429): Automatic wait until rate limit resets, then retry
  • Server errors (5xx): Automatic retry with 2-second delay

Timeout Protection

If requests hang, the tool will:

  1. Wait for the configured timeout (30s for GitLab, 2min for LLM)
  2. Retry automatically (up to 3 times)
  3. Show clear error message if all attempts fail

What gets retried:

  • ✅ Timeouts
  • ✅ Network errors
  • ✅ Server errors (500, 502, 503, etc.)
  • ✅ Rate limits (429)
  • ❌ Authentication errors (401, 403)
  • ❌ Bad requests (400)
  • ❌ Not found (404)
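The retry policy above can be sketched as follows (an illustration of the described behavior, not the tool's actual implementation; names are hypothetical):

```javascript
// Sketch of the retry policy described above: retry timeouts, network
// errors, 5xx, and 429; give up immediately on 400/401/403/404.
// Illustrative only — not the tool's actual code.
function isRetryable(status) {
  if (status === 429) return true; // rate limited: wait and retry
  if (status >= 500) return true;  // server error: retry after a delay
  return false;                    // other 4xx client errors are final
}

async function withRetries(fn, { attempts = 3, delayMs = 2000 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // No status means a timeout or network error — retryable.
      const retryable = err.status === undefined || isRetryable(err.status);
      if (!retryable || i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```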