# @self-serve/code-review-tool

Universal code review and quality analysis tool for microservices, with weighted test scoring (80/20 split between core logic and infrastructure).
## 🚀 Quick Start

```bash
# Install globally
npm install -g @self-serve/code-review-tool

# Initialize in your project
cd your-microservice
self-serve-review init --template=api-gateway

# Run analysis
self-serve-review analyze
```
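If you prefer not to install globally, the same commands should work from a project-local install; this is a minimal sketch assuming the package exposes the `self-serve-review` binary in the usual npm way, so `npx` can find it:

```bash
# Install as a dev dependency instead of globally
npm install --save-dev @self-serve/code-review-tool

# Run the locally installed CLI via npx (assumes the standard npm bin setup)
npx self-serve-review analyze
```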
## 📋 Available Templates

- `api-gateway` - For API Gateway services with proxy and middleware rules
- `backend-service` - For backend microservices with database and API rules
- `frontend` - For frontend applications with React/Vue specific rules
- `microservice-base` - Base rules for any microservice
## 🎯 Features
- ✅ Rule-based analysis - Focus on specific issues that matter
- ✅ AI integration - Generate prompts for Cursor AI analysis
- ✅ Multiple templates - Different rules for different service types
- ✅ Extensible - Add custom rules and analyzers
- ✅ CI/CD ready - Easy integration with GitHub Actions (see the CI sketch after this list)
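For the CI/CD point above, the commands below sketch what a GitHub Actions (or any other CI) step might run. The workflow wiring and the assumption that the CLI exits non-zero when errors are found are not documented here, so treat this as illustrative only:

```bash
# Hypothetical CI step: install the tool and gate the build on rule errors
npm install -g @self-serve/code-review-tool
self-serve-review analyze --severity=error  # assumed to exit non-zero when errors are found
```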
## 📖 Commands

### Initialize Project

```bash
# Interactive setup
self-serve-review init --interactive

# Use specific template
self-serve-review init --template=backend-service
```

### Run Analysis
```bash
# Basic analysis
self-serve-review analyze

# Generate AI prompts
self-serve-review analyze --ai-prompts

# Specific severity level
self-serve-review analyze --severity=error
```
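If you want the analysis to run before every commit, a simple git hook can call the same command. A minimal sketch, assuming you copy it to `.git/hooks/pre-commit` yourself and that the CLI exits non-zero on rule errors:

```bash
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: block the commit when rule errors are found
if ! self-serve-review analyze --severity=error; then
  echo "self-serve-review found errors; commit aborted"
  exit 1
fi
```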
### Manage Templates

```bash
# List available templates
self-serve-review templates

# Show template details
self-serve-review templates show api-gateway
```
## 🔧 Configuration

Create `.self-serve-review.json` in your project root:
```json
{
  "extends": "api-gateway",
  "rules": {
    "no-console-log": {
      "severity": "error",
      "environments": ["production"]
    }
  },
  "ignore": ["dist/", "build/"],
  "reporters": ["html", "ai-prompts"]
}
```
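To bootstrap the file from a shell, you can write a minimal config that only uses keys shown in the example above (the template name and ignore paths are illustrative; adjust them to your service):

```bash
# Create a minimal config that extends the backend-service template
cat > .self-serve-review.json <<'EOF'
{
  "extends": "backend-service",
  "ignore": ["dist/", "build/"]
}
EOF
```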
## 🤖 AI Integration

The tool generates focused prompts for AI analysis:
```
# Generate AI prompts
self-serve-review analyze --ai-prompts

# Copy generated prompts into Cursor AI
@codebase Custom Rule Analysis
[Generated prompt with specific rules to check]
```
## 📊 Example Output

```
🔍 Self-Serve Code Review Analysis
================================================================================
✅ TypeScript: No type errors found!
❌ ESLint: 5 errors, 2 warnings
⚠️ Security: 1 vulnerability found
✅ Tests: Coverage 85%

Overall Score: 78/100 (B-)

📋 Reports generated:
  - HTML Report: ./reports/quality-report.html
  - AI Prompts: ./reports/ai-prompts.md
```

## 📚 Documentation
For comprehensive documentation, implementation guides, and detailed usage instructions, see the docs/ folder:
- Quick Reference - Quick start guide
- Implementation Guide - Complete setup walkthrough
- Code Quality System - Quality standards and rules
- Publishing Guide - Publishing and distribution
- Developer Workflow - Complete developer flow with local tools
- Implementation Files - Files to add/update in microservices
- Flow Diagram - Visual workflow representation
## 🏗️ Development

```bash
# Clone repository
git clone https://github.com/amitselectcarleasing/self-serve-code-review-tool.git

# Install dependencies
npm install

# Run tests
npm test

# Test CLI locally
npm link
self-serve-review --help
```

## 📄 License
MIT License - see LICENSE file for details.
