# zsh-cli-ai

v0.2.1
AI-powered shell assistance for zsh. Convert comments to commands, explain commands, get intelligent autocomplete suggestions, and fix failed commands - all with simple keybindings.
No API keys required. Unlike similar tools (fish-ai, zsh-ai), this leverages existing CLI tools you already have authenticated - Claude Code, Codex CLI, or Ollama for fully local inference. If you're already using these tools, zsh-cli-ai works out of the box.
Inspired by fish-ai, built for zsh/oh-my-zsh users.
## Features
| Feature | Keybinding | Description |
|---------|------------|-------------|
| Codify | `Ctrl+E` | Convert a `#` comment to a shell command |
| Explain | `Ctrl+E` | Explain what a command does |
| Fix | `Ctrl+E` | Fix the last failed command |
| Autocomplete | `Alt+A` | Get AI-powered completions via fzf |
### Smart Keybinding

`Ctrl+E` is context-aware and automatically picks the right action:
| Buffer State | Action |
|--------------|--------|
| Empty + failed command exists | Fix the failed command |
| `# find large files` | Codify → convert to `find . -size +100M` |
| `find . -size +100M` | Explain → show what it does |
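The dispatch rule above can be sketched as a tiny function (illustrative only, not the plugin's actual code): given the buffer text and the last exit status, pick an action.

```sh
# Hypothetical sketch of the Ctrl+E dispatch rule - not the plugin's real code.
pick_action() {
  buffer=$1; last_status=$2
  if [ -z "$buffer" ] && [ "$last_status" -ne 0 ]; then
    echo fix        # empty buffer + failed command
  elif [ "${buffer#\#}" != "$buffer" ]; then
    echo codify     # buffer starts with '#'
  else
    echo explain    # anything else is treated as a command
  fi
}

pick_action '' 1                     # → fix
pick_action '# find large files' 0   # → codify
pick_action 'find . -size +100M' 0   # → explain
```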
## Installation

### Requirements

- Node.js 18+
- fzf (for autocomplete)
- One of:
  - Ollama (local LLM - private, no API keys needed)
  - Claude Code (`npm install -g @anthropic-ai/claude-code`)
  - Codex CLI (`npm install -g @openai/codex`)
### Install

```sh
npm install -g zsh-cli-ai
zsh-cli-ai init   # Interactive: choose backend (codex/claude/ollama)
exec zsh
```

Or specify a backend directly:

```sh
zsh-cli-ai init ollama   # Use Ollama (local, private)
zsh-cli-ai init claude   # Use Claude Code
zsh-cli-ai init codex    # Use Codex
```

### Verify Installation

```sh
zsh-cli-ai doctor
```

## Backends
zsh-cli-ai supports three AI backends:
| Backend | Models | Characteristics |
|---------|--------|-----------------|
| Ollama | gemma3, llama3, qwen3, phi3, etc. | Local inference, 100% private, no API keys |
| Claude Code | opus, sonnet, haiku | Subprocess per request, ~1-2s latency |
| Codex | OpenAI models | Persistent MCP server, fast responses |
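A backend only works if its CLI is installed. A rough approximation of part of the availability check that `zsh-cli-ai doctor` performs (illustrative; the real command checks more than PATH lookup):

```sh
# Check which backend CLIs are on PATH (a rough sketch of part of what
# `zsh-cli-ai doctor` verifies; the real command does more).
check_backends() {
  for cmd in ollama claude codex; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: found"
    else
      echo "$cmd: missing"
    fi
  done
}

check_backends
```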
### Ollama Setup

```sh
# Install Ollama (https://ollama.ai)
brew install ollama   # macOS
# or download from ollama.ai

# Start the Ollama server
ollama serve

# Pull a model (gemma3:4b is the default)
ollama pull gemma3:4b

# Initialize zsh-cli-ai with Ollama
zsh-cli-ai init ollama
```

### Switching Backends
```sh
# Show current backend
zsh-cli-ai backend

# Switch backends
zsh-cli-ai backend ollama   # Local LLM
zsh-cli-ai backend claude   # Claude Code
zsh-cli-ai backend codex    # Codex
```

The daemon automatically restarts when you switch backends.
### Switching Models

```sh
# Show current model and available Ollama models
zsh-cli-ai model

# Switch to a different model
zsh-cli-ai model llama3.2:3b
zsh-cli-ai model gemma3:4b

# Reset to the backend default (gemma3:4b for Ollama)
zsh-cli-ai model clear
```

The daemon automatically restarts when you switch models.
Tip: Choose non-reasoning models for best results. Reasoning models (like QwQ or DeepSeek-R1) emit thinking tokens that pollute the response. Fast, instruction-following models like `gemma3:4b`, `llama3.2:3b`, or `phi3:3.8b` work best for shell tasks.
### Backend Configuration

Configuration is stored in `~/.config/zsh-cli-ai/config.json` (XDG compliant).
You can also override the backend via an environment variable (read at daemon startup):

```sh
ZSH_AI_BACKEND=claude zsh-cli-ai start
```

## Usage
### Keybindings

#### Ctrl+E - Smart AI (context-aware)

```sh
# Type a comment, press Ctrl+E to convert it to a command:
# list all files larger than 100mb
→ find . -type f -size +100M

# Type a command, press Ctrl+E to explain it:
find . -type f -size +100M
→ "Find all regular files larger than 100MB in the current directory"

# After a command fails, press Ctrl+E on an empty line to fix it:
$ git pish   # typo, exits with error
$ <Ctrl+E>
→ git push
```

#### Alt+A - AI Autocomplete
```sh
# Type a partial command, press Alt+A for suggestions:
git sta<Alt+A>
→ fzf menu with: git status, git stash, git stash list, etc.
```

### CLI Commands
```sh
# AI commands
zsh-cli-ai codify "# find python files modified today"
zsh-cli-ai explain "tar -xzvf archive.tar.gz"
zsh-cli-ai complete "docker "
zsh-cli-ai fix "git pish" 1

# Backend and model management
zsh-cli-ai backend          # Show current backend
zsh-cli-ai backend claude   # Switch to Claude (auto-restarts daemon)
zsh-cli-ai model            # Show current model and available models
zsh-cli-ai model qwen3:4b   # Switch model (auto-restarts daemon)
zsh-cli-ai model clear      # Reset to default model

# History context (opt-in for fix command)
zsh-cli-ai history          # Show if history context is enabled
zsh-cli-ai history on       # Enable sending recent commands for fix context
zsh-cli-ai history off      # Disable (default)

# Daemon management
zsh-cli-ai start            # Start the background daemon
zsh-cli-ai stop             # Stop the daemon
zsh-cli-ai status           # Check daemon status and backend
zsh-cli-ai doctor           # Verify dependencies and backends
```

## Configuration
Configure via `zstyle` in your `.zshrc`:

```sh
# Disable the plugin
zstyle ':zsh-cli-ai:*' enabled 'no'

# Custom keybindings
zstyle ':zsh-cli-ai:keybind' smart '^E'       # Default: Ctrl+E
zstyle ':zsh-cli-ai:keybind' complete '^[a'   # Default: Alt+A (ESC-a)

# Add custom redaction patterns (comma-separated regexes)
zstyle ':zsh-cli-ai:redact' extra-patterns 'my-secret-pattern,company-internal-.*'
```

For history context, use the CLI command instead: `zsh-cli-ai history on`
### Environment Variables

```sh
# Override the AI backend (read at daemon startup)
export ZSH_AI_BACKEND="ollama"    # or "claude" or "codex"

# Override the AI model (or use: zsh-cli-ai model <model>)
export ZSH_AI_MODEL="gemma3:4b"   # ollama: any installed model
                                  # claude: opus/sonnet/haiku
                                  # codex: gpt-4o etc.

# Custom timeout in milliseconds (default: 30000)
export ZSH_AI_TIMEOUT="60000"

# Ollama server address (default: localhost:11434)
export OLLAMA_HOST="localhost:11434"
```

## Privacy & Security
### Automatic Redaction

Before sending any input to the AI (commands, comments, and history if enabled), sensitive data is automatically redacted:

- API keys and tokens (`api_key=...`, `OPENAI_API_KEY=...`)
- AWS credentials (`AKIA...`, `aws_secret_access_key`)
- Private keys (`-----BEGIN PRIVATE KEY-----`)
- Bearer tokens
- Database URLs with credentials
- JWT tokens
- GitHub tokens (`ghp_...`, `ghs_...`)
Note: These patterns are not foolproof. Be mindful of what you type if not running locally - unusual secret formats or company-specific patterns may not be caught. Add custom patterns for your environment (see below).
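To illustrate what redaction looks like in practice, here is a toy filter with a couple of patterns in the same spirit (these regexes are illustrative, not the plugin's actual patterns):

```sh
# Toy redaction filter - the patterns shown are illustrative, not the
# plugin's own. Secrets are masked before the text leaves the machine.
redact() {
  sed -E \
    -e 's/ghp_[A-Za-z0-9]{20,}/[REDACTED]/g' \
    -e 's/AKIA[A-Z0-9]{16}/[REDACTED]/g' \
    -e 's/(api_key=)[^ ]+/\1[REDACTED]/g'
}

echo 'git push https://x:ghp_abcdefghij0123456789abcd@github.com/me/repo' | redact
# → git push https://x:[REDACTED]@github.com/me/repo
```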
### Custom Redaction Patterns

Add your own patterns via `zstyle`:

```sh
# Single pattern
zstyle ':zsh-cli-ai:redact' extra-patterns 'my-secret-.*'

# Multiple patterns (comma-separated)
zstyle ':zsh-cli-ai:redact' extra-patterns 'pattern1,pattern2,pattern3'
```

### Opt-in History
By default, command history is never sent to the AI. You can enable it for better fix suggestions:

```sh
zsh-cli-ai history on    # Enable
zsh-cli-ai history off   # Disable (default)
zsh-cli-ai history       # Show current status
```

When enabled, the last 3 commands are sent with fix requests to provide context. These commands are redacted using the same patterns as all other input (API keys, tokens, etc. are stripped before sending).
### Timeouts

All requests have a configurable timeout (default 30s). The shell is never blocked: if the AI doesn't respond in time, you get an error message and can continue working.
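The effect is similar to capping a call with coreutils `timeout` (illustrative; the daemon enforces its own timeout internally):

```sh
# Illustrative: a capped call fails with an error instead of hanging the
# shell. (The daemon enforces ZSH_AI_TIMEOUT internally; this only shows
# the non-blocking effect, using a 1-second cap on a slow command.)
if ! timeout 1 sleep 5; then
  echo "request timed out; shell stays usable"
fi
```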
```sh
export ZSH_AI_TIMEOUT="60000"   # 60 seconds (useful for slower local models)
```

## Disable

```sh
zstyle ':zsh-cli-ai:*' enabled 'no'
```

## Development
```sh
# Clone and install
git clone https://github.com/Bigsy/zsh-cli-ai
cd zsh-cli-ai
npm install

# Build
npm run build

# Watch mode
npm run dev

# Link for local testing
npm link
```

## Troubleshooting
### Daemon not starting

```sh
zsh-cli-ai doctor   # Check all dependencies
zsh-cli-ai stop     # Stop any stuck daemon
zsh-cli-ai start    # Start fresh
```

### Wrong backend running
```sh
zsh-cli-ai status           # Check which backend is running
zsh-cli-ai backend          # Check configured vs running backend
zsh-cli-ai backend claude   # Switch and auto-restart
```

### Alt+A not working
Some terminals intercept Alt keys. Try:

- iTerm2: Preferences → Profiles → Keys → Left Option Key → Esc+
- Terminal.app: you may need to press `Esc` then `a` instead of `Alt+A`

Or remap to a different key:

```sh
zstyle ':zsh-cli-ai:keybind' complete '^X^A'   # Ctrl+X Ctrl+A instead
```

### Ollama not working
```sh
# Check if Ollama is running
curl http://localhost:11434/api/version

# Start Ollama if not running
ollama serve

# Check installed models
ollama list

# Pull a model if none is installed
ollama pull gemma3:4b

# Run doctor to verify
zsh-cli-ai doctor
```

If using a custom Ollama host:

```sh
export OLLAMA_HOST="192.168.1.100:11434"
zsh-cli-ai stop && zsh-cli-ai start
```

## License
MIT
