churn-cli
Local-first context intelligence layer for AI agents and code workflows

Your agent's favorite context layer.
Open-source • Local-first • Use as-is
Table of Contents
- Why Churn?
- What Churn Finds
- What's New in v2.2
- Quick Start
- Features
- Installation
- Running Locally with Ollama
- Commands
- Keyboard Shortcuts
- Configuration
- Documentation
- Contributing
- License
Why Churn?
Built for developers, by developers. Churn is a context intelligence layer that structures and analyzes code for AI agents and workflows—all from your terminal.
What makes Churn different:
- Truly Open Source - MIT licensed, no hidden fees, no telemetry, no account required
- Local-First - Your code never leaves your machine unless you choose a cloud model
- Zero Setup - Run from any Git repository, no configuration needed to start
- Model Freedom - Use Claude, GPT, Gemini, or run completely offline with Ollama
- Production Ready - v2.1+ is stable and actively maintained
- Cost Conscious - Smart caching, adaptive prompts, and staged mode reduce API costs by 70-90%
- Beautiful Terminal UI - Clean, focused interface that doesn't get in your way
Churn respects your workflow, your privacy, and your budget.
What Churn Finds
Real analysis outcomes from actual projects:
✓ Removed 47 unused imports across 23 TypeScript files
✓ Detected 12 orphaned utility functions with no references
✓ Flagged 8 dependencies not imported anywhere in src/
✓ Identified 5 performance bottlenecks in rendering loops
✓ Caught 3 security anti-patterns (exposed API keys, eval usage)
✓ Suggested 15 complex functions that would benefit from being broken down
✓ Found 9 TODO comments older than 6 months
✓ Detected duplicate logic across 4 similar components

Common use cases:
- Pre-commit checks - Use `--staged` to analyze only your changes before committing (90% faster); see the hook sketch at the end of this section
- Cleanup before release - Find dead code, unused dependencies, and orphaned files
- Onboarding new developers - Generate a "health report" of technical debt
- Pre-PR review - Catch issues before they hit CI/CD with language-specific analysis
- Refactoring planning - Identify high-impact areas to improve
- Dependency audits - Flag outdated or unused packages
- Security scanning - Detect common anti-patterns and vulnerabilities
- Framework migration - Get context-aware findings for your specific stack
Churn gives you actionable insights, not just observations. Every finding includes context, severity, and suggested fixes tailored to your language and framework.
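A pre-commit hook makes the first use case automatic. This is a minimal sketch, not an official integration: it assumes `churn run` works non-interactively inside a hook (it is documented as having no interactive menu), and it deliberately does not block the commit, since Churn's exit-code behavior on findings is not documented here.

```bash
#!/bin/sh
# .git/hooks/pre-commit - analyze staged changes with Churn before each commit.
# --staged runs differential analysis on changed lines only, so this stays fast.
churn run --staged

# Pre-PR variant (run manually): analyze every file that differs from main.
# churn run --files $(git diff --name-only origin/main...HEAD)
```

Make the hook executable with `chmod +x .git/hooks/pre-commit`.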
What's New in v2.2
Streamlined Review Flow - Version 2.2 replaces the tedious checkbox review with a scannable summary and one-decision handoff.
New in v2.2.5
- Security Fixes - Fixed command injection vulnerabilities in differential analysis and agent handoff
- Race Condition Fix - Fixed cache corruption during parallel file analysis
- Memory Leak Fixes - Fixed setTimeout cleanup in ModelSelect and AskConsole components
- Performance - Pre-computed gradient logo, parallel file operations, metadata caching
- Error Handling - Comprehensive error handling across all components with user-friendly messages
New in v2.2.4
- Dynamic Version Reading - Version now read dynamically from package.json
- Bug Fixes - Various stability improvements
New in v2.2.0-2.2.1
- AnalysisSummary Screen - Post-analysis view with severity breakdown (HIGH/MEDIUM/LOW), progress bars, and category stats
- ReviewBrowser - Tree-based finding browser with exclude mode (everything included by default)
- Quick Handoff Actions - [H] Pass HIGH only, [M] Pass HIGH+MED, [A] Pass ALL, [R] Review first, [E] Export only
- AgentOnboarding - First-run agent configuration as part of init sequence
- Dynamic Model Updates - Models can be updated without releasing new CLI versions via remote manifest
Agent Handoff System (v2.1.6+)
- Interactive Handoff - After export, prompt to launch your configured agent immediately
- Agent Support - Claude Code, Cursor, Gemini CLI, Droid, and Codex integration
- Context Formats - Minimal (MD+JSON) or Comprehensive (MD+JSON+patch+metadata)
- Enhanced `churn pass` - `--launch` flag to start agents directly
- Workflow Integration - Analyze with Churn → Hand off to agent → Implement changes (sketched below)
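Chained together, the whole loop fits on one line. A minimal sketch, assuming your agent command is already configured:

```bash
# Analyze staged changes, then hand the results to Claude Code and launch it
churn run --staged && churn pass --to claude --launch
```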
Performance & Stability (v2.1.5+)
- 60-70% faster analysis - 117 files: 8 minutes → 2-3 minutes
- Fixed retry mechanism for proper API error recovery
- Adaptive viewport sizing based on terminal height
- Fullscreen TUI mode with responsive layouts (v2.1.8)
See CHANGELOG.md for complete version history.
Quick Start
Get your first analysis in 30 seconds:
```bash
# Install globally
npm install -g churn-cli

# Navigate to your project
cd your-project

# Run analysis
churn start
```

That's it. Churn will:
- Detect your repository and project type
- Prompt you to select an AI model (first run only)
- Show you what it's about to analyze
- Stream results in real-time
- Let you review and accept/reject findings
- Export patches and reports
First-time users: Start with `churn start` for an interactive experience. The tool will guide you through model selection and configuration.
Want to run locally with zero API costs? See Running Locally with Ollama for setup instructions with free, offline models.
Cost-conscious? Churn shows real-time token usage and cost estimation during analysis. Combined with smart caching and staged mode, you can reduce API costs by 70-90% while maintaining high-quality insights.
Features
- Zero-Setup - Run from any Git repository, no cloning or configuration required
- Local-First - All data stored on your machine under `~/.churn/` and `.churn/`
- Multi-Model Support - Claude, GPT, Gemini, or local Ollama models
- Agent Handoff - Pass analysis results directly to Claude Code, Cursor, Gemini CLI, or Codex for implementation
- Adaptive Analysis - Language and framework-aware prompts for context-specific findings
- Differential Mode - Analyze only changed lines in staged files (70-90% cost savings)
- Live Analysis - Real-time streaming output with progress tracking
- Interactive Review - Navigate and selectively accept findings
- Export Workflow - Generate patches, reports, and JSON for downstream tools
- Smart Caching - Content-based caching reduces repeat analysis costs by 70%
- Token Tracking - Real-time cost estimation and transparency
- Parallel Processing - Concurrent file analysis for 10x speed improvements
- Beautiful UI - Clean, vibrant red-themed terminal interface powered by Ink
Installation
Quick Install (Recommended)
```bash
# Using npm
npm install -g churn-cli

# Using Bun (faster, recommended)
bun install -g churn-cli

# Using pnpm
pnpm install -g churn-cli

# Using yarn
yarn global add churn-cli
```

Note: Churn is built with Bun. While it works with npm/yarn/pnpm, using Bun ensures the best experience.
For Development
Requirements: Bun 1.x (NOT Node.js)
```bash
# Clone the repository
git clone https://github.com/cloudboyjh1/churn2.0.git
cd churn2.0

# Install dependencies with Bun
bun install

# Run in development mode
bun run dev

# Build
bun run build

# Type check
bun run type-check
```

Important: This project requires Bun. Do not use `npm install` for development as it may cause dependency issues.
Running Locally with Ollama
Zero API costs. Run Churn completely offline with local models.
Quick Setup
```bash
# 1. Install Ollama (macOS/Linux)
curl -fsSL https://ollama.com/install.sh | sh
# Windows: download from https://ollama.com/download

# 2. Pull a recommended model
ollama pull deepseek-r1:latest

# 3. Run Churn
cd your-project
churn start
# Select "Ollama" when prompted for a provider
```

Recommended models: deepseek-r1:latest, qwen2.5-coder:14b, phi-3-mini, mistral:7b
Commands
churn start
Launch interactive menu with options to run analysis, choose model, configure settings, or exit.
```bash
churn start
```

Menu Options:
- Run scan - Start code analysis with current model
- Choose model - Select or switch AI provider/model
- Settings - Configure handoff preferences (agent, context format, auto-launch)
- Exit - Quit the application
churn run
Direct code analysis for power users (no interactive menu).
Options:
- `-s, --staged` - Analyze only staged files using differential analysis (analyzes only changed lines)
- `-f, --files <files...>` - Analyze specific files
- `-c, --concurrency <number>` - Number of files to analyze in parallel (1-50)
Examples:
```bash
# Pre-commit check - analyze only your changes
git add .
churn run --staged

# Analyze specific files
churn run --files src/components/Button.tsx src/utils/helpers.ts

# Full repo scan with higher concurrency
churn run --concurrency 20
```

churn model
Select or change your AI model provider. Your selection is remembered per provider.
```bash
churn model
```

Features (v2.1.2+):
- Persistent model selection - remembers last used model per provider
- API key verification - shows last 4 characters of saved key
- Press 'r' to replace/update API key
- Pre-selects last used model when switching providers
Supported Providers:
- Anthropic (Claude Sonnet 4.5, Opus 4.1, Haiku 4.5, Sonnet 4)
- OpenAI (GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-5 Pro)
- Google (Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash-Lite, Gemini 2.0 Flash)
- Ollama (Local models like Llama 3.3, DeepSeek-R1, Qwen 2.5, Mistral, CodeLlama)
churn review
Review results from the last analysis.
```bash
churn review
```

Navigation:
- Arrow keys - Navigate between findings
- Enter - View details
- Space - Toggle acceptance
- Q - Quit review
churn export
Export the last analysis to files.
```bash
churn export
```

Generated files (saved to `.churn/patches/`):
- `findings-<timestamp>.json` - Full analysis results in JSON format
- `report-<timestamp>.md` - Human-readable markdown report with summary
- `changes-<timestamp>.patch` - Git patch file (if applicable)
Analysis summary includes:
- Files analyzed and cache hits
- Tokens used and saved
- Estimated cost and savings
- Project type and framework detected
- All findings with severity levels
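The exported JSON is convenient for scripting. The exact schema isn't documented here, so the filter below is a sketch that assumes a top-level findings array whose entries carry the severity levels shown in the summary (HIGH/MEDIUM/LOW):

```bash
# Hypothetical: pull only HIGH-severity findings out of the latest export
# (assumes a "findings" array with a "severity" field per finding)
jq '.findings[] | select(.severity == "HIGH")' .churn/patches/findings-<timestamp>.json
```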
churn pass
Hand off analysis results to AI coding agents (Claude Code, Cursor, Gemini CLI, Codex).
Options:
- `--to <agent>` - Target agent: `claude`, `cursor`, `gemini`, or `codex` (required)
- `--format <format>` - Context format: `minimal` (MD+JSON) or `comprehensive` (MD+JSON+patch+metadata); default: `minimal`
- `--launch` - Launch the agent immediately with the handoff package
Examples:
```bash
# Create handoff package and display info
churn pass --to claude

# Create comprehensive package with all context
churn pass --to cursor --format comprehensive

# Launch agent immediately with handoff package
churn pass --to claude --launch

# View handoff package as JSON
churn pass --to gemini | jq '.files'
```

The pass command generates a handoff package containing the analysis results and optionally launches the target agent with the files. This enables seamless workflows where Churn analyzes your code and then hands control to your preferred AI coding assistant for implementation.
Keyboard Shortcuts
Churn includes global keyboard shortcuts that work on every screen (added in v2.1.2):
| Key | Action | Description |
|-----|--------|-------------|
| z | Exit | Exit the application from anywhere |
| o | Start Over | Return to the start menu |
| esc | Go Back | Go back one step in the workflow |
| r | Replace Key | Replace API key (when viewing model selection) |
Review Panel Shortcuts:
- `↑/↓` - Navigate between findings
- `Enter` - View finding details
- `Space` - Toggle finding acceptance
- `a` - Accept all findings
- `n` - Accept none (clear all)
- `d` - Done (proceed to export)
- `q` - Quit review mode
Note: Shortcuts are displayed in the footer on each screen for convenience. Global shortcuts work consistently across all interactive screens including ModelSelect, StartMenu, ConfirmRun, and ReviewPanel.
Configuration
Global Config (`~/.churn/config.json`)

```json
{
  "version": "2.2.5",
  "apiKeys": {
    "anthropic": "sk-ant-...",
    "openai": "sk-...",
    "google": "AI..."
  },
  "defaultModel": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-5"
  },
  "handoff": {
    "enabled": true,
    "targetAgent": "claude",
    "contextFormat": "minimal",
    "autoLaunch": true,
    "agentCommands": {
      "claude": "claude",
      "cursor": "cursor",
      "gemini": "gemini",
      "codex": "codex"
    }
  }
}
```

Handoff Configuration:
- `enabled` - Enable/disable the handoff feature
- `targetAgent` - Default agent for handoff (`claude`, `cursor`, `gemini`, `codex`, or `none`)
- `contextFormat` - Context to pass: `minimal` (MD+JSON) or `comprehensive` (MD+JSON+patch+metadata)
- `autoLaunch` - Show interactive prompt after export to launch the agent
- `agentCommands` - Custom CLI commands for each agent
Configure handoff settings interactively by pressing 'C' when prompted after export, or edit the config file directly.
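To change a setting non-interactively, a jq one-liner over the global config works; this sketch relies only on the file layout shown above:

```bash
# Disable the auto-launch prompt by rewriting the documented config key
tmp=$(mktemp)
jq '.handoff.autoLaunch = false' ~/.churn/config.json > "$tmp" && mv "$tmp" ~/.churn/config.json
```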
Note: Ollama models don't require API keys and run completely locally.
API Keys
Churn requires API keys for cloud models:
- Anthropic: console.anthropic.com
- OpenAI: platform.openai.com
- Google: makersuite.google.com
- Ollama: Run locally, no key needed
Keys are stored in `~/.churn/config.json` and never sent anywhere except the respective API providers.
Documentation
Full Documentation: churn-docs.pages.dev
Guides
- Agent Handoff Guide - Complete guide to integrating with AI coding agents
- Quickstart Guide - Get started in 5 minutes
- Examples - Real-world usage examples
- Install Bun - Bun installation guide
Development
- Development Guide - Comprehensive developer documentation
- Implementation Summary - Architecture overview
- File Manifest - Complete file listing
Contributing
Churn is open-source and welcomes contributions.
How to Contribute:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes and test thoroughly
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to your branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Report bugs or request features
License
MIT License - see LICENSE for details.
Copyright (c) 2025 Churn Contributors
Acknowledgments
Built with Ink, powered by Bun, themed in vibrant red.
Special thanks to the open-source community and all contributors who make Churn better.
Star the repo if you find Churn useful!
Report Bug • Request Feature • Contribute
Made with care for developers everywhere.
