critic-cli
An AI-powered code review tool for your terminal.
A lightweight CLI tool that provides intelligent code review feedback using your preferred AI model. Get near-instant, contextual suggestions on code quality, security, performance, and best practices, all while maintaining full privacy and control.
Why critic?
Faster feedback loops - Get quick review feedback instead of waiting for human reviewers
Privacy-first - Runs with your own AI models, your code never leaves your control
Lightweight - Simple CLI tool that integrates seamlessly into existing workflows
Context-aware - Understands your specific language/framework patterns and idioms
Bring Your Own Model - Use any OpenAI-compatible API (OpenAI, Anthropic, local models, etc.)
Quick Start
# Install
npm install -g critic-cli
# First-time setup
cd your-project
critic init
# Review your changes
critic review
Features
Smart Analysis
- Code Quality - Clean code principles and maintainability
- Language-Specific Idioms - Best practices for your programming language
- Security Vulnerabilities - Detect potential security issues
- Performance Optimization - Identify performance bottlenecks
- Memory Management - Memory leaks and inefficient patterns
- Architecture & Design - Design patterns and architectural concerns
Flexible Configuration
- Language and Framework Profiles - Pre-configured review guidelines for popular technologies
- Custom Instructions - Add your team's specific coding standards
- Configurable Model Settings - Choose your AI provider, model, and reasoning effort
- Multiple Auth Methods - Direct API key, environment variable, or no auth for local models
Git Integration
- Analyzes Git Diffs - Reviews only your actual changes
- Multiple Modes - Review staged changes, commits, or branches
Supported Technologies
Languages
Frameworks
Installation
Global Installation (Recommended)
npm install -g critic-cli
Project-Specific Installation
npm install --save-dev critic-cli
# Add to package.json scripts
{
"scripts": {
"review": "critic review"
}
}
Configuration
On first run, use critic init to create a .critic.json configuration file in your project:
critic init
This interactive setup will guide you through configuring:
- Your AI model provider and API endpoint
- API type (Chat Completions or Responses)
- Authentication method
- Programming language and framework
- Custom review instructions
Configuration Schema
{
"modelConfig": {
"baseUrl": "https://api.openai.com",
"inferenceApiType": "responses",
"reasoningEffort": "medium",
"authMethod": "env_var",
"apiKey": null,
"apiKeyEnvVar": "OPENAI_API_KEY",
"modelName": "gpt-5-mini"
},
"activeProfile": {
"defaultBranchName": "main",
"language": "TypeScript",
"framework": "Angular",
"customInstructions": "Focus on Angular best practices and RxJS patterns."
}
}
Configuration Options
Model Configuration
- baseUrl: Your AI provider's API endpoint (e.g., https://api.openai.com, http://localhost:11434)
- inferenceApiType: Choose between "chat_completions" or "responses" (see API Types section)
- reasoningEffort: For reasoning-capable models: "minimal", "low", "medium", "high", or "disabled"
- authMethod: How to authenticate: "env_var" (recommended), "direct_key", or "none" (see Authentication)
- apiKey: Direct API key (only if authMethod is "direct_key")
- apiKeyEnvVar: Environment variable name containing your API key (only if authMethod is "env_var")
- modelName: The model identifier (e.g., "gpt-5-mini", "claude-sonnet-4-5")
Active Profile
- defaultBranchName: Your repository's default branch name (e.g., "main", "master")
- language: Programming language used in your project
- framework: Framework used, or "None" if not applicable
- customInstructions: Additional context or guidelines for the AI reviewer
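As a second example beyond the TypeScript/Angular schema above, a Python project using Django might use an activeProfile like this (all values are illustrative, not defaults):

```json
{
  "activeProfile": {
    "defaultBranchName": "main",
    "language": "Python",
    "framework": "Django",
    "customInstructions": "Prefer Django ORM querysets over raw SQL; flag missing migrations."
  }
}
```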
API Types
critic-cli supports two OpenAI-compatible API types. Your choice affects how the tool communicates with your AI provider.
Chat Completions API (chat_completions)
The traditional OpenAI API format. Compatible with most AI providers including:
- OpenAI GPT models
- Anthropic Claude models (via compatibility layer)
- Azure OpenAI
- OpenRouter
- Most local model servers (Ollama, LM Studio, etc.)
Use Chat Completions if:
- You need maximum compatibility with various providers
- Your model doesn't support the newer Responses API
- You're using older models or local setups
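As a sketch, a minimal .critic.json for a local Ollama server over the Chat Completions API could look like this (the model name "llama3" is a placeholder; substitute whatever model you have pulled):

```json
{
  "modelConfig": {
    "baseUrl": "http://localhost:11434",
    "inferenceApiType": "chat_completions",
    "authMethod": "none",
    "modelName": "llama3"
  }
}
```

Local servers typically need no API key, so authMethod is "none" here (see Authentication below).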
Responses API (responses)
The newer OpenAI API format with enhanced capabilities. Provides:
- Better Performance: 3% improvement on benchmarks with the same model
- Lower Costs: 40-80% better cache utilization
- Advanced Reasoning: Better integration with reasoning-capable models
- Future-Proof: Designed for next-generation models
Use Responses if:
- You're using OpenAI's latest models
- You want the best possible performance and cost efficiency
- Your provider supports the Responses API format
Note: the Responses API is recommended when your provider supports it.
Authentication
critic-cli supports three authentication methods to fit your security requirements and use case.
Environment Variable (Recommended)
Store your API key in an environment variable and reference it in your config:
{
"modelConfig": {
"authMethod": "env_var",
"apiKeyEnvVar": "OPENAI_API_KEY"
}
}
# In your shell profile
export OPENAI_API_KEY="sk-..."
Why this is recommended:
- API keys are never stored in files that might be committed to version control
- Each developer can use their own API key
- Keys can be rotated without changing configuration files
- Follows security best practices
Direct API Key
Store the API key directly in your .critic.json file:
{
"modelConfig": {
"authMethod": "direct_key",
"apiKey": "sk-..."
}
}
⚠️ Security Warning:
- DO NOT commit files containing API keys to version control
- Add .critic.json to your .gitignore if using this method
- Only use this for local development or testing
- Consider using environment variables instead
No Authentication
For local models that don't require authentication:
{
"modelConfig": {
"authMethod": "none",
"baseUrl": "http://localhost:11434"
}
}
Use this for:
- Local LLM servers (llama.cpp, Ollama, LM Studio, etc.)
- Development environments without external API calls
- Air-gapped or offline setups
Usage
Review Staged Changes (Default)
critic review
Reviews all staged changes (what would be included in your next commit).
Review Other Changes
# Review staged changes (explicit form of the default)
critic review --mode staged
# Review changes in the current branch compared to main
critic review --mode branch
# Review commits from the given hash to HEAD
critic review --mode commit --commit abc123
Options
- -m, --mode <mode>: Specify what to review
  - branch: Review current branch against the default branch
  - staged (default): Review all staged, uncommitted changes
  - commit: Review commits from the specified hash to HEAD
- -c, --commit <hash>: Commit hash (required when mode is commit)
- -q, --quiet: Suppress hints during waiting periods
- --verbose: Show detailed debug information
- --json: Output logs in JSON format
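If you've installed critic-cli as a dev dependency, these flags combine naturally in package.json scripts. A sketch (the script names and flag combinations are illustrative):

```json
{
  "scripts": {
    "review": "critic review",
    "review:branch": "critic review --mode branch --quiet",
    "review:ci": "critic review --mode branch --json"
  }
}
```

The --json variant suits CI pipelines, where structured logs are easier to capture and parse than interactive output.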
Privacy & Security
Your Code, Your Control
- Your code is only sent to the AI model you configure
- Supports local models for complete privacy (llama.cpp, Ollama, LM Studio, etc.)
- API keys can be stored in environment variables for security
Recommended Security Practices:
- Use environment variables for API keys (authMethod: "env_var")
- Add .critic.json to .gitignore if storing sensitive configuration
- Use local models for sensitive codebases
- Review your AI provider's data retention policies
Telemetry
critic-cli collects anonymous usage data to help improve the tool. We take your privacy seriously and follow industry-standard opt-out practices.
What We Collect
- Command usage (which commands you run)
- Inference metrics (token counts, success/failure)
- Error types and codes (to help us fix bugs)
- Performance metrics (duration)
What We DON'T Collect
- Your code or file contents
- IP addresses or geolocation
- Personal information
- User profiles
All data is anonymous and session-based only.
Note: a session is a single command execution, from start to finish.
How to Opt Out
Set the environment variable in your shell:
export CRITIC_TELEMETRY_DISABLED=1
Or add it to your shell profile (~/.bashrc, ~/.zshrc, etc.):
echo 'export CRITIC_TELEMETRY_DISABLED=1' >> ~/.zshrc
Verification: You can verify telemetry is disabled by running any critic command with --verbose; you'll see a log message confirming telemetry is opted out.
Contributing
This project is currently in active development. Feedback and suggestions are welcome through GitHub issues.
License
MIT - see LICENSE.MD for details
Built with care for developers who value code quality.
