@nexaleaf/pr-reviewer
A generic PR Review Bot for GitHub Actions, designed to work across your entire organization. Built on top of @nexaleaf/react-ai-hooks for powerful AI integration.
🚀 Features
- 🤖 AI-powered reviews using OpenAI, Anthropic, Google AI, or Ollama
- 📋 Fallback to rule-based reviews when AI is unavailable
- 🔄 Multi-provider support through react-ai-hooks architecture
- 🎯 Smart file filtering - only reviews relevant code files
- 🚫 Duplicate prevention - won't review the same PR twice
- ⚡ GitHub Actions ready - drop-in workflow for any repository
- 🛠️ Highly configurable - customize models, file limits, and behavior
- 📊 Detailed logging - comprehensive feedback and debugging
📦 Installation
Global Installation
```bash
npm install -g @nexaleaf/pr-reviewer
```

In GitHub Actions (Recommended)

```yaml
- name: Install PR Reviewer
  run: npm install -g @nexaleaf/pr-reviewer
```

🎯 Quick Start
1. Basic GitHub Action Workflow
Create `.github/workflows/pr-review.yml`:

```yaml
name: PR Review Bot

on:
  pull_request:
    types: [opened, synchronize, ready_for_review]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install PR Reviewer
        run: npm install -g @nexaleaf/pr-reviewer
      - name: Run PR Review
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          GITHUB_REPOSITORY: ${{ github.repository }}
        run: pr-reviewer
```

2. Add API Key to Repository Secrets
- Go to your repository → Settings → Secrets and variables → Actions
- Add `OPENAI_API_KEY` with your OpenAI API key
- (Optional) Add other provider keys: `ANTHROPIC_API_KEY`, `GOOGLE_AI_API_KEY`
3. That's it! 🎉
The bot will now automatically review all new PRs in your repository.
⚙️ Configuration
Environment Variables
| Variable | Required | Description | Default |
|----------|----------|-------------|---------|
| GITHUB_TOKEN | ✅ | GitHub token (auto-provided in Actions) | - |
| GITHUB_REPOSITORY | ✅ | Repository in format owner/repo | - |
| PR_NUMBER | ✅ | PR number to review | - |
| OPENAI_API_KEY | ❌ | OpenAI API key | - |
| ANTHROPIC_API_KEY | ❌ | Anthropic API key | - |
| GOOGLE_AI_API_KEY | ❌ | Google AI API key | - |
| OLLAMA_BASE_URL | ❌ | Ollama server URL | http://localhost:11434 |
| AI_PROVIDER | ❌ | AI provider to use | openai |
Command Line Options
```text
pr-reviewer [options]

OPTIONS:
  --pr <number>           PR number to review
  --provider <provider>   AI provider: openai, anthropic, google, ollama
  --model <model>         Specific model to use
  --dry-run               Preview review without posting comments
  --review-drafts         Include draft PRs in review
  --no-skip-reviewed      Review PRs even if already reviewed
  --max-files <number>    Maximum files to review per PR (default: 10)
  --help, -h              Show help message
  --version, -v           Show version
```

🤖 AI Providers & Models
OpenAI (Default)
```yaml
env:
  AI_PROVIDER: openai
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Default Model: `gpt-4o-mini`
Supported Models: `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`
Anthropic
```yaml
env:
  AI_PROVIDER: anthropic
  ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

Default Model: `claude-3-haiku-20240307`
Supported Models: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`
Google AI
```yaml
env:
  AI_PROVIDER: google
  GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
```

Default Model: `gemini-1.5-flash`
Supported Models: `gemini-1.5-pro`, `gemini-pro`, `gemini-pro-vision`
Ollama (Local/Self-hosted)
```yaml
env:
  AI_PROVIDER: ollama
  OLLAMA_BASE_URL: http://your-ollama-server:11434
```

Default Model: `llama3.2`
Supported Models: `llama3.1`, `llama3`, `codellama`, `mistral`, `mixtral`, `phi3`
📋 Advanced Usage
Multi-Provider Organization Setup
For organizations using different AI providers across repositories:
```yaml
# .github/workflows/pr-review.yml
- name: Run PR Review
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    AI_PROVIDER: ${{ vars.AI_PROVIDER || 'openai' }}
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
    PR_NUMBER: ${{ github.event.pull_request.number }}
    GITHUB_REPOSITORY: ${{ github.repository }}
  run: |
    pr-reviewer --provider $AI_PROVIDER --model ${{ vars.AI_MODEL }}
```

Custom Model Configuration
```bash
# Use GPT-4 for critical repositories
pr-reviewer --provider openai --model gpt-4

# Use Claude Opus for detailed reviews
pr-reviewer --provider anthropic --model claude-3-opus-20240229

# Use local Llama for privacy-sensitive repos
pr-reviewer --provider ollama --model llama3.1
```

Dry Run Testing
Test the bot without posting actual comments:
```bash
pr-reviewer --dry-run --pr 123
```

Conditional Reviews
Only review specific file types or sizes:
```yaml
- name: Review Large PRs
  if: github.event.pull_request.changed_files > 5
  run: pr-reviewer --max-files 20

- name: Review Non-Draft PRs
  if: github.event.pull_request.draft == false
  run: pr-reviewer
```

🔧 Supported File Types
The bot automatically reviews these file types:
- JavaScript/TypeScript: `.js`, `.jsx`, `.ts`, `.tsx`
- Python: `.py`
- Java: `.java`
- Go: `.go`
- Ruby: `.rb`
- PHP: `.php`
- C/C++: `.c`, `.cpp`
- C#: `.cs`
- Swift: `.swift`
- Kotlin: `.kt`
- Rust: `.rs`
- Vue: `.vue`
- Svelte: `.svelte`
- And more...
Excluded patterns:
- `node_modules/`, `dist/`, `build/`, `coverage/`
- `.min.js`, `.bundle.js`, `.chunk.js`
- `package-lock.json`, `yarn.lock`
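To make the filtering behavior concrete, here is a minimal sketch of how include/exclude rules like these could be expressed. It is illustrative only: the lists are taken from above, but `shouldReviewFile` is a hypothetical helper, not the published `FileFilter.js` API.

```js
import path from 'node:path';

// Illustrative filter built from the lists above; the shipped FileFilter.js may differ.
const REVIEWED_EXTENSIONS = new Set([
  '.js', '.jsx', '.ts', '.tsx', '.py', '.java', '.go', '.rb', '.php',
  '.c', '.cpp', '.cs', '.swift', '.kt', '.rs', '.vue', '.svelte',
]);

const EXCLUDED_PATTERNS = [
  /(^|\/)(node_modules|dist|build|coverage)\//, // vendored and generated directories
  /\.(min|bundle|chunk)\.js$/,                  // bundled or minified output
  /(^|\/)(package-lock\.json|yarn\.lock)$/,     // lockfiles
];

export function shouldReviewFile(filename) {
  // Skip anything matching an excluded path or generated-file pattern.
  if (EXCLUDED_PATTERNS.some((pattern) => pattern.test(filename))) return false;
  // Only review files whose extension is on the supported list.
  return REVIEWED_EXTENSIONS.has(path.extname(filename).toLowerCase());
}
```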
💡 Examples
Example Review Output
```markdown
## 🔎 PR Review Bot

🤖 AI-powered using **openai**

**Review Summary:**

- 📁 Files reviewed: 3
- ➕ Lines added: 145
- ➖ Lines removed: 23
- 💬 Comments generated: 3

**Files reviewed:**

- `src/components/UserProfile.tsx` (modified)
- `src/hooks/useAuth.ts` (modified)
- `tests/auth.test.ts` (added)

---

*This review was generated automatically. Please address the feedback and feel free to ask questions!*
```

Per-File Comments
Each file gets specific AI-generated feedback:
```markdown
🔎 **AI Review for UserProfile.tsx:**

1. **Authentication Check**: The component directly accesses `user.id` without checking if `user` exists. Consider adding a null check or using optional chaining.
2. **Performance**: The `updateProfile` function recreates the entire user object on every render. Consider using `useCallback` to optimize this.
3. **Error Handling**: What happens if the profile update fails? Consider adding error states and user feedback.
4. **Testing**: Are there unit tests covering the edge case where `user` is null?
```

🏢 Organization-wide Deployment
1. Organization Secrets
Set up organization-level secrets for all repositories:
- Go to Organization → Settings → Secrets and variables → Actions
- Add organization secrets:
  - `OPENAI_API_KEY`
  - `ANTHROPIC_API_KEY` (optional)
  - `GOOGLE_AI_API_KEY` (optional)
2. Workflow Templates
Create `.github/workflow-templates/pr-review.yml` in your `.github` repository:

```yaml
name: PR Review Bot
description: Automated PR reviews using AI

on:
  pull_request:
    types: [opened, synchronize, ready_for_review]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g @nexaleaf/pr-reviewer
      - env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          GITHUB_REPOSITORY: ${{ github.repository }}
        run: pr-reviewer
```

3. Repository Variables
Use repository variables for per-repo customization:
- `AI_PROVIDER` → `openai`, `anthropic`, `google`, `ollama`
- `AI_MODEL` → Specific model name
- `MAX_FILES` → Maximum files to review
🏗️ Architecture & Performance
Refactored Modular Architecture (v1.0.0+)
The PR reviewer has been completely refactored into a clean, modular architecture for better maintainability and performance:
📁 Services Layer
- `GitHubService.js` - GitHub API operations (PR details, file changes, reviews)
- `ReviewGenerator.js` - AI review generation with provider integration
- `FileFilter.js` - Smart file filtering and extension validation
- `Logger.js` - Centralized logging with multiple levels
🖥️ CLI Layer
- `ConfigParser.js` - Command-line argument and environment parsing
- `HelpDisplay.js` - Help text and version information display
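As a rough illustration of how these modules fit together, the sketch below wires them into a single review pass. The module names come from the lists above, but every constructor and method signature shown here is an assumption made for the example, not the published API.

```js
// Illustrative composition only; all constructor and method signatures are assumptions.
import { GitHubService } from './services/GitHubService.js';
import { ReviewGenerator } from './services/ReviewGenerator.js';
import { FileFilter } from './services/FileFilter.js';
import { Logger } from './services/Logger.js';

async function reviewPullRequest(prNumber) {
  const logger = new Logger({ level: 'info' });
  const github = new GitHubService({
    token: process.env.GITHUB_TOKEN,
    repository: process.env.GITHUB_REPOSITORY,
    logger,
  });
  const filter = new FileFilter({ maxFiles: 10 });
  const generator = new ReviewGenerator({
    provider: process.env.AI_PROVIDER || 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    logger,
  });

  // Each module owns a single concern; the orchestrator only wires them together.
  const changedFiles = await github.getChangedFiles(prNumber);
  const reviewable = filter.select(changedFiles);
  const comments = await generator.reviewFiles(reviewable);
  await github.postReview(prNumber, comments);
}
```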
💡 Key Improvements
- 60-70% file size reduction - Main files reduced from 350+ lines to ~100 lines
- Single responsibility principle - Each module handles one specific concern
- Enhanced testability - Individual modules can be unit tested in isolation
- Better error handling - Specialized error classes for different failure modes
- Improved reusability - Modules can be used across different parts of the system
🔄 Before vs After
| Component | Before | After | Improvement |
|-----------|--------|-------|-------------|
| src/index.js | 350+ lines | 3 lines | 99% reduction |
| bin/cli.js | 200+ lines | ~90 lines | 55% reduction |
| PRReviewer.js | Monolithic | ~100 lines | Modular design |
| Total complexity | High coupling | Low coupling | Maintainable |
🔧 Technical Integration
How it uses @nexaleaf/react-ai-hooks
Here's the core integration showing how @nexaleaf/pr-reviewer leverages the react-ai-hooks library:
```js
import { createProvider } from "@nexaleaf/react-ai-hooks";
import { Octokit } from "@octokit/rest";

export class PRReviewer {
  constructor({
    token,
    owner,
    repo,
    aiProvider = 'openai',
    aiKey,
    aiModel
  }) {
    this.octokit = new Octokit({ auth: token });
    this.owner = owner;
    this.repo = repo;
    this.aiProvider = aiProvider;

    // Initialize AI provider using react-ai-hooks
    this.aiClient = null;
    this.initializeAI(aiProvider, aiKey, aiModel);
  }

  initializeAI(provider, apiKey, model) {
    if (!apiKey) {
      console.log('No AI API key provided. Falling back to static reviews.');
      return;
    }

    try {
      const config = {
        provider,
        apiKey,
        model,
        temperature: 0.3, // Lower temperature for consistent reviews
        maxTokens: 1000,
        enableLogging: true,
      };

      // Use react-ai-hooks provider factory
      this.aiClient = createProvider(config);
      console.log(`✅ Initialized ${provider} provider with model: ${model || 'default'}`);
    } catch (error) {
      console.error('Failed to initialize AI provider:', error.message);
      this.aiClient = null;
    }
  }

  async generateAIReview(patch, filename) {
    if (!this.aiClient) {
      return this.generateFallbackReview(filename);
    }

    try {
      const prompt = `You are an expert code reviewer. Please review the following code changes and provide constructive feedback.
File: ${filename}
Code changes:
\`\`\`diff
${patch}
\`\`\`
Please provide:
1. 2-3 specific questions about the implementation
2. Any potential issues or improvements
3. Suggestions for better practices if applicable
Focus on code quality, performance, security, and testing.`;

      // Use react-ai-hooks generateText method
      const response = await this.aiClient.generateText(prompt);
      return response.text.trim();
    } catch (error) {
      console.error(`AI review failed for ${filename}:`, error.message);
      return this.generateFallbackReview(filename);
    }
  }

  async reviewPR(prNumber, options = {}) {
    // Get changed files from GitHub API
    const files = await this.getChangedFiles(prNumber);

    // Generate AI reviews for each file
    const reviews = await Promise.all(
      files.map(async (file) => {
        const review = await this.generateAIReview(file.patch, file.filename);
        return {
          path: file.filename,
          body: review,
          line: 1,
          side: "RIGHT",
        };
      })
    );

    // Post review to GitHub
    await this.octokit.pulls.createReview({
      owner: this.owner,
      repo: this.repo,
      pull_number: prNumber,
      event: "COMMENT",
      body: this.buildReviewSummary(files, reviews),
      comments: reviews,
    });
  }
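
  // NOTE: getChangedFiles and generateFallbackReview are called above but not shown
  // in this excerpt. The two sketches below are hypothetical, not the published source.
  async getChangedFiles(prNumber) {
    // List the files changed in the PR via the GitHub REST API.
    const { data } = await this.octokit.pulls.listFiles({
      owner: this.owner,
      repo: this.repo,
      pull_number: prNumber,
    });
    return data;
  }

  generateFallbackReview(filename) {
    // Rule-based fallback used when no AI provider is configured or the AI call fails.
    return [
      `🔎 **Review checklist for ${filename}:**`,
      '1. Are the changes covered by tests?',
      '2. Is error handling in place for failure paths?',
      '3. Could the changes affect performance or security?',
    ].join('\n');
  }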
}
```

Key Benefits of Using react-ai-hooks
- 🔄 Multi-Provider Support: Switch between OpenAI, Anthropic, Google, Ollama seamlessly
- 🛡️ Enterprise Features: Built-in retry logic, circuit breakers, rate limiting
- ⚡ Optimized Performance: Connection pooling, caching, load balancing
- 🔍 Comprehensive Logging: Detailed metrics and error tracking
- 🎯 Type Safety: Full TypeScript support with intelligent autocompletion
Provider Configuration Examples
```js
// OpenAI Configuration
const openaiConfig = {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o-mini',
  temperature: 0.3,
  maxTokens: 1000,
  organization: 'org-123', // Optional
};

// Anthropic Configuration
const anthropicConfig = {
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-3-haiku-20240307',
  temperature: 0.3,
  maxTokens: 1000,
};

// Load Balancing with Fallbacks
import { createLoadBalancedProvider } from '@nexaleaf/react-ai-hooks';

const provider = createLoadBalancedProvider(
  openaiConfig, // Primary
  [anthropicConfig] // Fallbacks
);
```

🛠️ Development
Local Development
```bash
# Clone the monorepo
git clone https://github.com/NexaLeaf/react-ai-hooks.git
cd react-ai-hooks

# Install dependencies
npm install

# Build the core library
npm run build:core

# Test the PR reviewer locally
cd packages/pr-reviewer
export GITHUB_TOKEN=your_token
export OPENAI_API_KEY=your_key
export GITHUB_REPOSITORY=owner/repo
export PR_NUMBER=123
node bin/cli.js --dry-run
```

Testing
```bash
# Dry run test
pr-reviewer --dry-run --pr 123

# Test with different providers
pr-reviewer --dry-run --provider anthropic --model claude-3-haiku-20240307

# Test file filtering
pr-reviewer --dry-run --max-files 5
```

🤝 Contributing
We welcome contributions! Please see our Contributing Guide.
Development Setup
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
📄 License
MIT © NexaLeaf
🔗 Related
- `@nexaleaf/react-ai-hooks` - The core AI hooks library
- GitHub Actions Documentation
- Octokit REST API
