<h1 align="center">firekeeper</h1>
<p align="center">
Agentic AI code reviewer CLI<br/>
Parallel review, custom rules, agent skills, run anywhere
</p>
> ⚠️ **Early Development**: This project is in an early development phase. APIs may change frequently.
## Features
- Privacy-first: Bring your own LLM API key and model; works with any OpenAI-compatible endpoint
- Agentic review: Uses an agentic loop with tools to intelligently investigate code changes, not just one-shot LLM calls
- Custom rules: Define project-specific review rules in `firekeeper.toml` with detailed instructions for the AI agent
- Flexible scope: Review uncommitted changes, specific commits, date ranges, or entire repositories
- Parallel execution: Splits review tasks across multiple workers for speed and focus, with configurable file batching
- Structured output: JSON output and markdown trace files for integration with CI/CD and debugging
- Context engineering: Include files, shell command outputs, and Agent Skills as context for reviews
## Installation
macOS/Linux:

```shell
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/firekeeper-ai/firekeeper/releases/latest/download/firekeeper-installer.sh | sh
```

Windows:

```shell
powershell -ExecutionPolicy Bypass -c "irm https://github.com/firekeeper-ai/firekeeper/releases/latest/download/firekeeper-installer.ps1 | iex"
```

npm:

```shell
npm install -g @firekeeper.ai/firekeeper
```

## Getting Started
Initialize a config file, `firekeeper.toml`:
```shell
firekeeper init
```

Set the LLM API key (OpenRouter by default):
```shell
export FIREKEEPER_LLM_API_KEY=sk-xxxxxxxxxxxxxx
```

Review uncommitted changes or the last commit:
```shell
firekeeper review
```

To add or update rules, just ask your coding agent to modify the config:

```
add a rule to firekeeper.toml: update CHANGELOG.md when Rust files change
```

Review uncommitted changes only, suitable for git hooks or coding agent hooks:
```shell
firekeeper review --base HEAD
```

Review changes from 1 day ago with structured output, suitable for CI/CD pipelines:
```shell
firekeeper review --base "@{1.day.ago}" --output /tmp/report.json --trace /tmp/trace.md
```

Review all files (ensure you have sufficient LLM token budget):
```shell
firekeeper review --base ROOT
```
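The JSON report from the `--output` example above can gate a CI job. The report shape used below (a top-level `findings` array) is a hypothetical illustration, not firekeeper's documented schema; inspect a real report from your version before relying on it:

```shell
# Hypothetical report shape for illustration only; firekeeper's actual
# JSON schema may differ, so check a real report from your version first.
cat > /tmp/report.json <<'EOF'
{"findings": [{"rule": "update-changelog", "file": "src/lib.rs"}]}
EOF

# Fail the CI step when the review produced any findings.
count=$(jq '.findings | length' /tmp/report.json)
echo "findings: $count"
if [ "$count" -gt 0 ]; then
  echo "review found issues; failing the job"
  # exit 1  # uncomment to actually fail the pipeline
fi
```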
## Prek Hook
```toml
[[repos]]
repo = "https://github.com/firekeeper-ai/firekeeper"
rev = "v0.5.0"
hooks = [
  { id = "pre-commit" },
]
```

## Pre-commit Hook
```yaml
repos:
  - repo: https://github.com/firekeeper-ai/firekeeper
    rev: v0.5.0
    hooks:
      - id: pre-commit
```

## FAQ
### Why use a dedicated AI code reviewer instead of coding agents with MCP/Skills?
- Cost efficiency: Reviewers need less coding capability than code generators, so you can use cheaper models (Gemini Flash vs Pro, Claude Haiku vs Opus)
- Integration: CLI design fits naturally into git hooks and CI/CD pipelines
- Specialized tooling: Reviewer agents can have a different, optimized tool set
- Performance at scale: Parallel execution with filtered scopes keeps reviews fast and focused, preventing quality degradation on large codebases
### Why doesn't this tool fix bugs after review?
Fixing bugs requires high-quality output (code that compiles and passes tests), which coding agents already handle well. To avoid duplicating that responsibility, firekeeper focuses solely on code review.

Recommended workflow: integrate firekeeper into a pre-commit git hook → the coding agent triggers the hook on commit → reads the review results → fixes the code automatically.
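The hook in that workflow can be a plain `.git/hooks/pre-commit` script. A minimal sketch, assuming a non-zero exit code from `firekeeper review` should block the commit (verify that behavior for your version, or use the prek/pre-commit integrations above):

```shell
# Install a minimal pre-commit hook that reviews staged changes.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
# Assumption: a non-zero exit from firekeeper blocks the commit.
exec firekeeper review --base HEAD
HOOK
chmod +x .git/hooks/pre-commit
```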
### What should I review with this tool?
**Don't use it for**: Issues caught by static analysis tools (formatters, linters, compilers, static analyzers). They're faster, more accurate, and cheaper.

**Do use it for**: Semantic rules and conventions that traditional tools can't detect:
- Documentation updates after code changes
- Error logging after exception handling
- Code duplication that should be extracted into modules
- Project-specific conventions and patterns
This tool is designed for user-defined rules, not built-in nitpicking.
