# Ruck

> **Note:** This code was 100% generated with AI (mainly Claude Code) as an experiment.
AI-powered code reviews for GitLab Merge Requests and local git branches using various LLM providers.
## Features

- 🤖 **Multi-LLM Support**: Works with Claude, Gemini, OpenAI, ChatGPT, and Ollama (enhanced with text-to-JSON conversion)
- 🌍 **Dual Mode Support**: Review GitLab Merge Requests OR local git branch changes
- 🔍 **Smart Detection**: Automatically detects available LLM binaries and git branches
- 📝 **Comprehensive Reviews**: Provides detailed code analysis with line-specific comments
- 🛡️ **Conflict Prevention**: Checks for unresolved discussions before proceeding (GitLab mode)
- 🎯 **Interactive CLI**: Professional command-line interface with helpful prompts
- 📊 **Multiple Output Formats**: HTML reports, CLI output, or direct GitLab posting
- ⚡ **Fast Setup**: Simple installation and configuration
## Installation

### Global Installation (Recommended)

```bash
# Install globally via npm
npm install -g @mikestreety/ruck

# Now use 'ruck' command anywhere
ruck --help
```

### NPX Usage (No Installation)

```bash
# Run directly from GitHub (works immediately)
npx github:mikestreety/ruck

# Or run from npm registry (after package is published)
npx @mikestreety/ruck
```

### Development Setup

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd ruck
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Run locally:

   ```bash
   node ./bin/run.js
   ```
## Prerequisites

- **Node.js**: Version 16 or higher
- **Git Repository**: For local mode, run from within a git repository
- **GitLab Access Token**: Personal access token with appropriate permissions (GitLab mode only)
- **LLM CLI Tool**: At least one of the following:
  - Claude CLI
  - Gemini CLI
  - OpenAI CLI
  - Ollama (for local models)
  - ChatGPT CLI
## Configuration

Create a `.ruckconfig` file in your home directory for centralized configuration:

```bash
# ~/.ruckconfig
GITLAB_PRIVATE_TOKEN=your_gitlab_token_here
DEFAULT_LLM=claude
DEFAULT_OUTPUT=html
```

Note: Local mode doesn't require GitLab token configuration.

Verify LLM availability:

```bash
ruck list-llms
```
## Usage

This tool is built on oclif, a CLI framework that provides built-in help, argument validation, and consistent command behavior. Interactive prompts are handled by Inquirer.js, and Ora provides loading animations.
### Command Structure

```bash
# Fully interactive mode (prompts for review mode, then appropriate options)
ruck

# Interactive mode with command
ruck review

# Local branch review (compare current branch with base)
ruck review --mode local

# GitLab MR review
ruck review <merge-request-url> --mode gitlab

# Specify all options for local review
ruck review my-feature-branch --mode local --base main --llm claude --output html

# Specify all options for GitLab review
ruck review <merge-request-url> --mode gitlab --llm claude --output gitlab

# List available LLMs
ruck list-llms

# Configure setup
ruck setup
```

### All Available Options
```text
ruck review [url_or_branch] [options]

Arguments:
  url_or_branch          GitLab MR URL or local branch name (optional, will prompt if missing)

Options:
  -m, --mode <mode>      Review mode: local (compare branches) or gitlab (MR review)
  -b, --base <branch>    Base branch for local comparison (default: auto-detect)
  -l, --llm <provider>   LLM provider: claude, gemini, openai, ollama, chatgpt
  -o, --output <format>  Output format: gitlab (GitLab mode), html, cli
  --list-llms            List available LLM providers and exit
  -h, --help             Display help information
```

### Usage Examples
#### 1. Fully Interactive Mode

```bash
ruck
# Prompts for:
# - Review mode (local/gitlab)
# - Branch information OR GitLab MR URL
# - LLM provider (from available options)
# - Output format (html/cli for local, gitlab/html/cli for GitLab)
```

#### 2. Local Branch Review Examples
Compare current branch with auto-detected base:

```bash
ruck review --mode local
# Automatically detects current branch and suggests base branch (main/master/develop)
```

Compare specific branch with base:

```bash
ruck review my-feature-branch --mode local --base main
# Compares my-feature-branch with main branch
```

Complete local review with all options:

```bash
ruck review --mode local --base main --llm claude --output html
# Generates HTML report comparing current branch with main using Claude
```

Local review with CLI output:

```bash
ruck review my-feature --mode local --llm gemini --output cli
# Shows linter-style output in console
```

#### 3. GitLab MR Review Examples
Interactive GitLab review:

```bash
ruck review --mode gitlab
# Prompts for GitLab MR URL, LLM, and output format
```

Direct GitLab MR URL:

```bash
ruck review https://gitlab.example.com/project/repo/-/merge_requests/123 --mode gitlab
# Prompts for LLM and output format
```

Complete GitLab review with all options:

```bash
ruck review https://gitlab.example.com/project/repo/-/merge_requests/123 --mode gitlab --llm claude --output gitlab
# Posts comments directly to GitLab MR
```

Generate HTML report from GitLab MR:

```bash
ruck review <gitlab-url> --mode gitlab --llm openai --output html
# Downloads MR data and generates offline HTML report
```

#### 4. Output Format Examples
HTML Report (recommended for local reviews):

```bash
ruck review --mode local --output html
# Generates beautiful standalone HTML file: code-review-[timestamp].html
```

CLI Output (great for CI/CD):

```bash
ruck review --mode local --output cli
# Shows ESLint-style output: file:line label: message
```

GitLab Posting (GitLab mode only):

```bash
ruck review <gitlab-url> --mode gitlab --output gitlab
# Posts line-specific comments and summary to the MR
```

#### 5. LLM Provider Examples
```bash
# Use Claude (recommended for cloud)
ruck review --mode local --llm claude

# Use Gemini
ruck review --mode local --llm gemini

# Use OpenAI (requires API key)
ruck review --mode local --llm openai

# Use local Ollama (recommended for local models)
ruck review --mode local --llm ollama

# Use ChatGPT (requires TOKEN env var)
ruck review --mode local --llm chatgpt
```

#### 6. Check Available Providers
```bash
ruck list-llms
# Output:
# Available LLM providers:
# - claude
# - gemini
# - openai (if installed)
# - ollama (if installed)
```

### Command Options Reference
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `[url_or_branch]` | Argument | prompted | GitLab MR URL or local branch name |
| `-m, --mode <mode>` | Option | prompted | Review mode: `local` or `gitlab` |
| `-b, --base <branch>` | Option | auto-detect | Base branch for local comparison |
| `-l, --llm <provider>` | Option | prompted | LLM provider: `claude`, `gemini`, `openai`, `ollama`, `chatgpt` |
| `-o, --output <format>` | Option | prompted | Output format: `gitlab` (GitLab mode only), `html`, `cli` |
| `--list-llms` | Flag | - | List available LLM providers and exit |
| `-h, --help` | Flag | - | Display help information |
| `-V, --version` | Flag | - | Display version number |
### Interactive Prompts

The tool interactively prompts for any missing required information:

1. **Review Mode Prompt**: If no `--mode` is specified, shows options:
   - `local` - Compare local git branches (default)
   - `gitlab` - Review GitLab Merge Request
2. **Branch Selection** (Local mode):
   - **Current Branch**: Automatically detected
   - **Base Branch**: Auto-suggests main/master/develop, allows custom input
3. **URL Prompt** (GitLab mode): If no URL is provided as an argument
4. **LLM Provider Prompt**: If no `--llm` is specified, shows a numbered list of available providers:
   - Automatically detects which LLM CLI tools are installed
   - Shows the default option (first available)
   - Allows selection by number
5. **Output Format Prompt**: If no `--output` is specified, shows the appropriate options:
   - Local mode: `html` (default), `cli`
   - GitLab mode: `gitlab` (default), `html`, `cli`
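For orientation, an interactive session looks roughly like this (prompt wording and ordering are illustrative and may differ between versions):

```text
$ ruck
? Select review mode: local
? Base branch (detected: main): main
? Select LLM provider: claude
? Select output format: html
```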
## Configuration

### Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `GITLAB_PRIVATE_TOKEN` | GitLab personal access token | GitLab mode only |
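If you prefer not to keep a `~/.ruckconfig`, the token can be exported as a plain environment variable for the current shell session (the token value below is a placeholder):

```bash
# Placeholder value; use your own GitLab personal access token
export GITLAB_PRIVATE_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
ruck review https://gitlab.example.com/group/project/-/merge_requests/123 --mode gitlab
```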
### Supported LLM Providers

The tool automatically detects which LLM CLI tools are installed:

| Provider | CLI Command | Status | Installation |
|----------|-------------|--------|--------------|
| Claude | `claude` | ✅ Fully supported | Claude CLI Setup |
| Gemini | `gemini` | ✅ Fully supported | Gemini CLI Setup |
| OpenAI | `openai` | ✅ Requires API key | OpenAI CLI Setup |
| ChatGPT | `chatgpt` | ✅ Requires TOKEN env | `npm install -g chatgpt-cli` |
| Ollama | `ollama` | ✅ Enhanced support | Ollama Setup - Recommended for local LLMs |
### Compatibility Notes

- **Ollama**: Enhanced with automatic text-to-JSON conversion for seamless integration
- **OpenAI**: Requires the `OPENAI_API_KEY` environment variable
- **ChatGPT**: Requires the `TOKEN` environment variable with an OpenAI API key
- **Local LLMs**: Use Ollama for best compatibility with various models (Llama, Phi, Gemma, etc.)
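Similarly, the cloud providers expect their keys in the environment before Ruck is invoked (key values are placeholders):

```bash
# OpenAI CLI provider
export OPENAI_API_KEY=sk-xxxxxxxx
ruck review --mode local --llm openai

# ChatGPT CLI provider
export TOKEN=sk-xxxxxxxx
ruck review --mode local --llm chatgpt
```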
### Unsupported LLMs

- **Llama CLI** (`llama-cli`): Incompatible with the stdin-based architecture; designed for interactive use
- **GitHub Copilot**: Not designed for code review, only for command suggestions
## Complete Workflow

### Local Mode Workflow

1. **Input Collection**
   - **Review Mode**: Local branch comparison
   - **Branch Info**: Current branch (auto-detected) and base branch (auto-suggested)
   - **LLM Provider**: Prompted from available CLI tools, or specified with `--llm`
   - **Output Format**: Prompted for HTML or CLI (GitLab posting not available)
2. **Validation & Setup**
   - Validates the current directory is a git repository
   - Detects the current branch and suggests an appropriate base branch
   - Detects available LLM CLI tools on the system
3. **Local Repository Analysis**
   - Generates a diff between the current branch and the base branch (conceptually as sketched after this list)
   - Identifies changed files between branches
   - Reads full file contents for better context
4. **AI-Powered Review**
   - Sends the code diff and context to the selected LLM
   - Uses the Conventional Comments format for structured feedback
   - Focuses on critical issues: bugs, security, performance
5. **Output Generation**
   - **HTML Output**: Generates a beautiful standalone report
   - **CLI Output**: Shows linter-style console output
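For intuition, the analysis step behaves much like the following plain git commands (illustrative only; Ruck runs the equivalent internally, and the branch names are examples):

```bash
# Three-dot diff: only the feature branch's own changes since the merge base
git diff main...my-feature-branch

# Changed-file list, used to read full file contents for extra context
git diff --name-only main...my-feature-branch
```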
### GitLab Mode Workflow

1. **Input Collection**
   - **Review Mode**: GitLab Merge Request review
   - **MR URL**: Provided as an argument or prompted interactively
   - **LLM Provider**: Prompted from available CLI tools, or specified with `--llm`
   - **Output Format**: Prompted for GitLab, HTML, or CLI
2. **Validation & Setup**
   - Validates the GitLab private token is set
   - Checks for unresolved discussions on the MR
   - Detects available LLM CLI tools on the system
3. **Repository Analysis**
   - Clones the source branch to a temporary directory
   - Retrieves the MR diff and changed files list
   - Reads full file contents for better context
4. **AI-Powered Review**
   - Sends the code diff and context to the selected LLM
   - Uses the Conventional Comments format for structured feedback
   - Focuses on critical issues: bugs, security, performance
5. **Output Generation**, based on the selected format:
   - **GitLab Output:**
     - Posts line-specific comments to the MR
     - Adds a summary comment with an overall assessment
     - Shows real-time progress in the console
   - **HTML Output:**
     - Generates a standalone HTML report file
     - Professional Playwright-inspired styling
     - Color-coded labels and summary statistics
     - Self-contained file for sharing/archiving
   - **CLI Output:**
     - Shows linter-style console output
     - Format: `file:line label: message`
     - Summary statistics at the end
     - Perfect for CI/CD integration (see the pipeline sketch below)
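As a sketch of the CI/CD use case, a merge request pipeline step could review the branch against the MR's target (a hypothetical job step; `CI_MERGE_REQUEST_TARGET_BRANCH_NAME` is one of GitLab CI's predefined variables for merge request pipelines):

```bash
# Hypothetical CI step: linter-style review against the MR target branch
ruck review --mode local \
  --base "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME" \
  --llm claude \
  --output cli
```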
## Output Formats

The tool supports three output formats:

### GitLab (Default)

- **Line-specific comments**: Posted directly on the relevant lines
- **Summary comment**: Overall review summary posted to the MR
- **Console output**: Real-time progress and results

### HTML Report

- **Professional styling**: Playwright-inspired design with modern UI
- **Color-coded labels**: Visual distinction for different comment types
- **Summary statistics**: Overview of comments, blocking issues, and praise
- **Standalone file**: Self-contained report that can be shared or archived

### CLI Output

- **Linter-style format**: Similar to ESLint output, with `file:line label: message` entries (see the sample below)
- **Summary statistics**: Quick overview of issues found
- **Console-friendly**: Perfect for CI/CD pipelines and terminal workflows
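Illustratively, a run might print something like this (file names and messages are invented; the labels follow the Conventional Comments vocabulary):

```text
src/api/client.js:42 issue(blocking): API token is written to the log in plain text
src/utils/date.js:17 suggestion: consider extracting this parsing logic into a helper
src/index.js:3 praise: clear, well-documented entry point

3 comments: 1 blocking, 1 suggestion, 1 praise
```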
## LLM Compatibility & Architecture

### How LLM Integration Works

Ruck integrates with LLM providers through their CLI tools using a standard architecture:

1. **Process Spawning**: Spawns the LLM CLI process (e.g., `claude`, `gemini`, `ollama`)
2. **Stdin Communication**: Sends the code review prompt via stdin
3. **Output Parsing**: Expects structured JSON output for processing
4. **Timeout Handling**: Manages process timeouts and error handling
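Conceptually, this is the same pattern you can reproduce by hand; for example, with the `claude` CLI's non-interactive print mode (a rough sketch of the idea, not Ruck's actual prompt or parsing):

```bash
# Pipe the branch diff into the claude CLI in print mode (-p);
# the prompt text here is illustrative, not Ruck's real prompt
git diff main...HEAD | claude -p "Review this diff and respond with JSON comments"
```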
### Compatible LLM Requirements

For an LLM to work with Ruck, it must:

- ✅ Accept input via stdin (pipe support)
- ✅ Process prompts in non-interactive mode
- ✅ Output structured data (JSON preferred)
- ✅ Handle timeouts gracefully
- ✅ Return a consistent response format
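A quick manual check of the first two requirements is to pipe a prompt into the CLI and confirm it answers once and exits rather than opening a chat session (the model name below is just an example):

```bash
# A pipe-friendly CLI should print a single reply and exit
echo 'Reply with the JSON object {"ok": true} and nothing else' | ollama run llama3
```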
### Why Some LLMs Don't Work

#### Llama CLI (`llama-cli`)

- ❌ **Interactive Design**: Built for chat-style interactions
- ❌ **Stdin Issues**: EPIPE errors when reading from stdin
- ❌ **Complex Arguments**: Requires model files and complex setup
- ❌ **Output Format**: Doesn't provide structured JSON output

#### GitHub Copilot (`gh copilot`)

- ❌ **Wrong Purpose**: Designed for shell command suggestions
- ❌ **Limited Scope**: Not built for code review analysis
- ❌ **API Mismatch**: Different use case than code review

### Ollama Enhancement

Ollama receives special treatment with enhanced text-to-JSON conversion:

- 📝 **Text Processing**: Converts natural language responses to JSON
- 🔍 **Content Extraction**: Identifies file references, line numbers, and suggestions
- 📊 **Structure Creation**: Builds proper review objects with comments and ratings
- 🎯 **Fallback Handling**: Graceful handling of various text formats
## Troubleshooting

### Common Issues

#### Local Mode Issues

**"Current directory is not a git repository"**

- Run the command from within a git repository
- Ensure your project contains a `.git` directory

**"No differences found between branches"**

- Check that your current branch has commits different from the base branch
- Use `git log base-branch..current-branch` to verify differences

**"Failed to get current branch"**

- Ensure you're not in a detached HEAD state
- Switch to a proper branch with `git checkout branch-name`
#### GitLab Mode Issues

**"No LLM binaries found"**

- Install at least one LLM CLI tool
- Ensure the CLI is in your system PATH
- Run `ruck list-llms` to verify detection

**"GITLAB_PRIVATE_TOKEN not set"**

- Add your GitLab token to `~/.ruckconfig` (or a local `.env` file)
- Ensure the token has appropriate API permissions

**"Failed to parse GitLab URL"**

- Verify the MR URL format: `https://gitlab.example.com/group/project/-/merge_requests/123`
- Ensure the MR exists and is accessible

**"Unresolved discussions found"**

- Resolve all discussion threads in the MR before running the review
- This prevents duplicate or conflicting feedback
#### General Issues

**"Invalid output format for local review"**

- Local reviews only support the `html` and `cli` output formats
- GitLab posting is not available for local reviews
### Debug Mode

For troubleshooting, check the console output, which includes:

- LLM detection results
- Git operations status
- API call responses
- File processing progress
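To capture this output for a bug report, redirect both streams to a file while still watching them live:

```bash
# Mirror stdout and stderr to the terminal and save them to a log file
ruck review --mode local --output cli 2>&1 | tee ruck-debug.log
```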
## Publishing

This package is automatically published to npm when a new release is created on GitHub.

### For Maintainers

1. **Set up the npm token**: Add your npm token to the GitHub repository secrets as `NPM_TOKEN`
2. **Create a release**: Create a new release on GitHub with a version tag (e.g., `v1.0.1`; see the CLI sketch below)
3. **Automatic publishing**: The GitHub Action will automatically publish to npm
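A release can also be cut from the command line; one way, assuming the GitHub CLI (`gh`) is installed (the tag value is an example):

```bash
# Tag the commit, push the tag, and create the GitHub release that
# triggers the publish workflow
git tag v1.0.1
git push origin v1.0.1
gh release create v1.0.1 --generate-notes
```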
### Manual Publishing (Alternative)

```bash
# Login to npm (if not already)
npm login

# Publish with 2FA
npm publish --access public --otp=123456
```

## Contributing
This project follows a strict development workflow documented in CLAUDE.md. All feature development must:
- Create a dedicated feature branch
- Implement the requested functionality
- Test using the tool on its own codebase (dogfooding)
- Implement ALL feedback from self-review
- Update documentation as needed
- Create comprehensive commit with co-authorship
- Push and prepare for merge to main
### For Human Contributors
- Fork the repository
- Create a feature branch
- Make your changes
- Run the linter: `npm run lint`
- Test your changes thoroughly
- Commit your changes
- Push to your fork
- Create a Pull Request
### Development Workflow
See CLAUDE.md for detailed development standards and mandatory testing procedures.
## License
## Security
- The tool only reads repository data and posts comments
- No sensitive data is stored or transmitted beyond GitLab APIs
- LLM providers may have their own data handling policies
- Review your organization's policies before use
> **Note:** This tool is designed for code review assistance. Always review AI-generated feedback before taking action, as AI suggestions may not always be appropriate for your specific context or requirements.
