critic-cli


An AI-powered code review tool for your terminal.

A lightweight CLI tool that provides intelligent code review feedback using your preferred AI model. Get almost instant, contextual suggestions on code quality, security, performance, and best practices - all while maintaining full privacy and control.

Why critic?

  • Faster feedback loops - Get quick review feedback instead of waiting for human reviewers
  • Privacy-first - Runs with your own AI models, your code never leaves your control
  • Lightweight - Simple CLI tool that integrates seamlessly into existing workflows
  • Context-aware - Understands your specific language/framework patterns and idioms
  • Bring Your Own Model - Use any OpenAI-compatible API (OpenAI, Anthropic, local models, etc.)

Quick Start

# Install
npm install -g critic-cli

# First-time setup
cd your-project
critic init

# Review your changes
critic review

Features

Smart Analysis

  • Code Quality - Clean code principles and maintainability
  • Language-Specific Idioms - Best practices for your programming language
  • Security Vulnerabilities - Detect potential security issues
  • Performance Optimization - Identify performance bottlenecks
  • Memory Management - Memory leaks and inefficient patterns
  • Architecture & Design - Design patterns and architectural concerns

Flexible Configuration

  • Language and Framework Profiles - Pre-configured review guidelines for popular technologies
  • Custom Instructions - Add your team's specific coding standards
  • Configurable Model Settings - Choose your AI provider, model, and reasoning effort
  • Multiple Auth Methods - Direct API key, environment variable, or no auth for local models

Git Integration

  • Analyzes Git Diffs - Reviews only your actual changes
  • Multiple Modes - Review staged changes, commits, or branches

Supported Technologies

Languages

TypeScript, Java, Kotlin, Dart, Swift

Frameworks

Angular, Spring Boot, Flutter, Android, SwiftUI

Installation

Global Installation (Recommended)

npm install -g critic-cli

Project-Specific Installation

npm install --save-dev critic-cli

Then add a script to your package.json:

{
  "scripts": {
    "review": "critic review"
  }
}

Configuration

Run critic init to create a .critic.json configuration file in your project:

critic init

This interactive setup will guide you through configuring:

  • Your AI model provider and API endpoint
  • API type (Chat Completions or Responses)
  • Authentication method
  • Programming language and framework
  • Custom review instructions

Configuration Schema

{
  "modelConfig": {
    "baseUrl": "https://api.openai.com",
    "inferenceApiType": "responses",
    "reasoningEffort": "medium",
    "authMethod": "env_var",
    "apiKey": null,
    "apiKeyEnvVar": "OPENAI_API_KEY",
    "modelName": "gpt-5-mini"
  },
  "activeProfile": {
    "defaultBranchName": "main",
    "language": "TypeScript",
    "framework": "Angular",
    "customInstructions": "Focus on Angular best practices and RxJS patterns."
  }
}

Configuration Options

Model Configuration

  • baseUrl: Your AI provider's API endpoint (e.g., https://api.openai.com, http://localhost:11434)
  • inferenceApiType: Choose between "chat_completions" or "responses" (see API Types section)
  • reasoningEffort: For reasoning-capable models - "minimal", "low", "medium", "high", or "disabled"
  • authMethod: How to authenticate - "env_var" (recommended), "direct_key", or "none" (see Authentication)
  • apiKey: Direct API key (only if authMethod is "direct_key")
  • apiKeyEnvVar: Environment variable name containing your API key (only if authMethod is "env_var")
  • modelName: The model identifier (e.g., "gpt-5-mini", "claude-sonnet-4-5")

Active Profile

  • defaultBranchName: Your repository's default branch name (e.g., "main", "master")
  • language: Programming language used in your project
  • framework: Framework used, or "None" if not applicable
  • customInstructions: Additional context or guidelines for the AI reviewer
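Putting the profile options together, a sketch of a profile block for a plain TypeScript project with no framework (the custom instruction text is purely illustrative):

```json
{
  "activeProfile": {
    "defaultBranchName": "main",
    "language": "TypeScript",
    "framework": "None",
    "customInstructions": "Prefer strict typing and small, pure functions."
  }
}
```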

API Types

critic-cli supports two OpenAI-compatible API types. Your choice affects how the tool communicates with your AI provider.

Chat Completions API (chat_completions)

The traditional OpenAI API format. Compatible with most AI providers including:

  • OpenAI GPT models
  • Anthropic Claude models (via compatibility layer)
  • Azure OpenAI
  • OpenRouter
  • Most local model servers (Ollama, LM Studio, etc.)

Use Chat Completions if:

  • You need maximum compatibility with various providers
  • Your model doesn't support the newer Responses API
  • You're using older models or local setups

Responses API (responses)

The newer OpenAI API format with enhanced capabilities. Provides:

  • Better Performance: 3% improvement on benchmarks with the same model
  • Lower Costs: 40-80% better cache utilization
  • Advanced Reasoning: Better integration with reasoning-capable models
  • Future-Proof: Designed for next-generation models

Use Responses if:

  • You're using OpenAI's latest models
  • You want the best possible performance and cost efficiency
  • Your provider supports the Responses API format

Note: The Responses API is recommended when your provider supports it.
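As a sketch, pointing critic-cli at a local Chat Completions server might look like this (the model name is an assumption; the localhost endpoint matches the local-model example under Authentication):

```json
{
  "modelConfig": {
    "baseUrl": "http://localhost:11434",
    "inferenceApiType": "chat_completions",
    "authMethod": "none",
    "modelName": "llama3"
  }
}
```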

Authentication

critic-cli supports three authentication methods to fit your security requirements and use case.

Environment Variable (Recommended)

Store your API key in an environment variable and reference it in your config:

{
  "modelConfig": {
    "authMethod": "env_var",
    "apiKeyEnvVar": "OPENAI_API_KEY"
  }
}

Then export the key in your shell profile:

export OPENAI_API_KEY="sk-..."

Why this is recommended:

  • API keys are never stored in files that might be committed to version control
  • Each developer can use their own API key
  • Keys can be rotated without changing configuration files
  • Follows security best practices
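Before running a review, it can help to confirm that the variable named by apiKeyEnvVar is actually exported. A quick shell check, using the OPENAI_API_KEY name from the example above (substitute whatever name your config uses):

```shell
# Sanity-check that the key referenced by apiKeyEnvVar is exported in this shell.
if [ -n "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing - export it in your shell profile"
fi
```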

Direct API Key

Store the API key directly in your .critic.json file:

{
  "modelConfig": {
    "authMethod": "direct_key",
    "apiKey": "sk-..."
  }
}

⚠️ Security Warning:

  • DO NOT commit files containing API keys to version control
  • Add .critic.json to your .gitignore if using this method
  • Only use this for local development or testing
  • Consider using environment variables instead

No Authentication

For local models that don't require authentication:

{
  "modelConfig": {
    "authMethod": "none",
    "baseUrl": "http://localhost:11434"
  }
}

Use this for:

  • Local LLM servers (llama.cpp, Ollama, LM Studio, etc.)
  • Development environments without external API calls
  • Air-gapped or offline setups

Usage

Review Staged Changes (Default)

critic review

Reviews all staged changes (what would be included in your next commit).

Review Other Changes

# Explicitly review staged changes (same as the default)
critic review --mode staged

# Review changes in the current branch compared to main
critic review --mode branch

# Review commits from abc123 to HEAD
critic review --mode commit --commit abc123

Options

  • -m, --mode <mode>: Specify what to review
    • branch: Review current branch against default branch
    • staged (default): Review all staged, uncommitted changes
    • commit: Review commits from the specified commit to HEAD
  • -c, --commit <hash>: Commit hash (required when mode is commit)
  • -q, --quiet: Suppress hints during waiting periods
  • --verbose: Show detailed debug information
  • --json: Output logs in JSON format
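The modes above correspond to ordinary git diff ranges. As a rough sketch (the exact ranges critic-cli computes internally are an assumption), the staged-mode equivalent looks like:

```shell
# Throwaway repo illustrating the range (file name and branch are examples).
cd "$(mktemp -d)" && git init -q -b main .
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "init"
echo "console.log('hi')" > app.ts
git add app.ts

git diff --staged        # the change set `critic review` (staged mode) sees
git diff main...HEAD     # roughly what `--mode branch` compares
```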

Privacy & Security

Your Code, Your Control

  • Your code is only sent to the AI model you configure
  • Supports local models for complete privacy (llama.cpp, Ollama, LM Studio, etc.)
  • API keys can be stored in environment variables for security

Recommended Security Practices:

  1. Use environment variables for API keys (authMethod: "env_var")
  2. Add .critic.json to .gitignore if storing sensitive configuration
  3. Use local models for sensitive codebases
  4. Review your AI provider's data retention policies

Telemetry

critic-cli collects anonymous usage data to help improve the tool. We take your privacy seriously and follow industry-standard opt-out practices.

What We Collect

  • Command usage (which commands you run)
  • Inference metrics (token counts, success/failure)
  • Error types and codes (to help us fix bugs)
  • Performance metrics (duration)

What We DON'T Collect

  • Your code or file contents
  • IP addresses or geolocation
  • Personal information
  • User profiles - All data is anonymous and session-based only

Note: A session is a single command execution, from start to finish.

How to Opt Out

Set the environment variable in your shell:

export CRITIC_TELEMETRY_DISABLED=1

Or add it to your shell profile (~/.bashrc, ~/.zshrc, etc.):

echo 'export CRITIC_TELEMETRY_DISABLED=1' >> ~/.zshrc

Verification: You can verify telemetry is disabled by running any critic command with --verbose - you'll see a log message confirming telemetry is opted out.

Contributing

This project is currently in active development. Feedback and suggestions are welcome through GitHub issues.

License

MIT - see LICENSE.MD for details


Built with care for developers who value code quality.