
zsh-cli-ai

v0.2.1


zsh-cli-ai

AI-powered shell assistance for zsh. Convert comments to commands, explain commands, get intelligent autocomplete suggestions, and fix failed commands - all with simple keybindings.

No API keys required. Unlike similar tools (fish-ai, zsh-ai), zsh-cli-ai leverages CLI tools you've already authenticated: Claude Code, Codex CLI, or Ollama for fully local inference. If you already use one of these, zsh-cli-ai works out of the box.

Inspired by fish-ai, built for zsh/oh-my-zsh users.

Features

| Feature | Keybinding | Description |
|---------|------------|-------------|
| Codify | Ctrl+E | Convert a `#` comment to a shell command |
| Explain | Ctrl+E | Explain what a command does |
| Fix | Ctrl+E | Fix the last failed command |
| Autocomplete | Alt+A | Get AI-powered completions via fzf |

Smart Keybinding

Ctrl+E is context-aware and automatically picks the right action:

| Buffer State | Action |
|--------------|--------|
| Empty + failed command exists | Fix the failed command |
| `# find large files` | Codify → convert to `find . -size +100M` |
| `find . -size +100M` | Explain → show what it does |
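The dispatch logic above can be sketched roughly like this (a simplified illustration, not the plugin's actual source; `classify_buffer` is a hypothetical helper):

```shell
# Sketch of the context-aware Ctrl+E dispatch.
# Takes the current buffer contents and the last exit status,
# and prints which action the smart keybinding would pick.
classify_buffer() {
  buffer="$1"
  last_status="$2"
  if [ -z "$buffer" ] && [ "$last_status" -ne 0 ]; then
    echo "fix"        # empty buffer + previous command failed
  elif [ "${buffer#\#}" != "$buffer" ]; then
    echo "codify"     # buffer starts with a # comment
  else
    echo "explain"    # plain command in the buffer
  fi
}
```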

Installation

Requirements

  • Node.js 18+
  • fzf (for autocomplete)
  • One of:
    • Ollama (local LLM - private, no API keys needed)
    • Claude Code (npm install -g @anthropic-ai/claude-code)
    • Codex CLI (npm install -g @openai/codex)

Install

npm install -g zsh-cli-ai
zsh-cli-ai init        # Interactive: choose backend (codex/claude/ollama)
exec zsh

Or specify backend directly:

zsh-cli-ai init ollama  # Use Ollama (local, private)
zsh-cli-ai init claude  # Use Claude Code
zsh-cli-ai init codex   # Use Codex

Verify Installation

zsh-cli-ai doctor

Backends

zsh-cli-ai supports three AI backends:

| Backend | Models | Characteristics |
|---------|--------|-----------------|
| Ollama | gemma3, llama3, qwen3, phi3, etc. | Local inference, 100% private, no API keys |
| Claude Code | opus, sonnet, haiku | Subprocess per request, ~1-2s latency |
| Codex | OpenAI models | Persistent MCP server, fast responses |

Ollama Setup

# Install Ollama (https://ollama.ai)
brew install ollama   # macOS
# or download from ollama.ai

# Start Ollama server
ollama serve

# Pull a model (gemma3:4b is the default)
ollama pull gemma3:4b

# Initialize zsh-cli-ai with Ollama
zsh-cli-ai init ollama

Switching Backends

# Show current backend
zsh-cli-ai backend

# Switch backends
zsh-cli-ai backend ollama   # Local LLM
zsh-cli-ai backend claude   # Claude Code
zsh-cli-ai backend codex    # Codex

The daemon automatically restarts when you switch backends.

Switching Models

# Show current model and available Ollama models
zsh-cli-ai model

# Switch to a different model
zsh-cli-ai model llama3.2:3b
zsh-cli-ai model gemma3:4b

# Reset to backend default (gemma3:4b for Ollama)
zsh-cli-ai model clear

The daemon automatically restarts when you switch models.

Tip: Choose non-reasoning models for best results. Reasoning models (like QwQ, DeepSeek-R1) output thinking tokens that pollute the response. Fast, instruction-following models like gemma3:4b, llama3.2:3b, or phi3:3.8b work best for shell tasks.

Backend Configuration

Configuration is stored in ~/.config/zsh-cli-ai/config.json (XDG compliant).
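For example, a config selecting the Ollama backend might look something like this (the field names here are illustrative; inspect your own `config.json` for the exact schema):

```json
{
  "backend": "ollama",
  "model": "gemma3:4b"
}
```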

You can also override via environment variable (read at daemon startup):

ZSH_AI_BACKEND=claude zsh-cli-ai start

Usage

Keybindings

Ctrl+E - Smart AI (context-aware)

# Type a comment, press Ctrl+E to convert to command:
# list all files larger than 100mb
→ find . -type f -size +100M

# Type a command, press Ctrl+E to explain it:
find . -type f -size +100M
→ "Find all regular files larger than 100MB in the current directory"

# After a command fails, press Ctrl+E on empty line to fix:
$ git pish  # typo, exits with error
$ <Ctrl+E>
→ git push

Alt+A - AI Autocomplete

# Type partial command, press Alt+A for suggestions:
git sta<Alt+A>
→ fzf menu with: git status, git stash, git stash list, etc.

CLI Commands

# AI commands
zsh-cli-ai codify "# find python files modified today"
zsh-cli-ai explain "tar -xzvf archive.tar.gz"
zsh-cli-ai complete "docker "
zsh-cli-ai fix "git pish" 1

# Backend and model management
zsh-cli-ai backend           # Show current backend
zsh-cli-ai backend claude    # Switch to Claude (auto-restarts daemon)
zsh-cli-ai model             # Show current model and available models
zsh-cli-ai model qwen3:4b    # Switch model (auto-restarts daemon)
zsh-cli-ai model clear       # Reset to default model

# History context (opt-in for fix command)
zsh-cli-ai history           # Show if history context is enabled
zsh-cli-ai history on        # Enable sending recent commands for fix context
zsh-cli-ai history off       # Disable (default)

# Daemon management
zsh-cli-ai start    # Start the background daemon
zsh-cli-ai stop     # Stop the daemon
zsh-cli-ai status   # Check daemon status and backend
zsh-cli-ai doctor   # Verify dependencies and backends

Configuration

Configure via zstyle in your .zshrc:

# Disable the plugin
zstyle ':zsh-cli-ai:*' enabled 'no'

# Custom keybindings
zstyle ':zsh-cli-ai:keybind' smart '^E'      # Default: Ctrl+E
zstyle ':zsh-cli-ai:keybind' complete '^[a'  # Default: Alt+A (ESC-a)

# Add custom redaction patterns (comma-separated regexes)
zstyle ':zsh-cli-ai:redact' extra-patterns 'my-secret-pattern,company-internal-.*'

For history context, use the CLI command instead: zsh-cli-ai history on

Environment Variables

# Override the AI backend (read at daemon startup)
export ZSH_AI_BACKEND="ollama"  # or "claude" or "codex"

# Override the AI model (or use: zsh-cli-ai model <model>)
export ZSH_AI_MODEL="gemma3:4b"  # ollama: any installed model
                                 # claude: opus/sonnet/haiku
                                 # codex: gpt-4o etc.

# Custom timeout in milliseconds (default: 30000)
export ZSH_AI_TIMEOUT="60000"

# Ollama server address (default: localhost:11434)
export OLLAMA_HOST="localhost:11434"

Privacy & Security

Automatic Redaction

Before sending any input to the AI (commands, comments, and history if enabled), sensitive data is automatically redacted:

  • API keys and tokens (api_key=..., OPENAI_API_KEY=...)
  • AWS credentials (AKIA..., aws_secret_access_key)
  • Private keys (-----BEGIN PRIVATE KEY-----)
  • Bearer tokens
  • Database URLs with credentials
  • JWT tokens
  • GitHub tokens (ghp_..., ghs_...)

Note: These patterns are not foolproof. Be mindful of what you type if not running locally - unusual secret formats or company-specific patterns may not be caught. Add custom patterns for your environment (see below).
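As an illustration of the kind of pattern-based redaction described above (this is a sketch of the approach, not the plugin's actual implementation or pattern list):

```shell
# Sketch of regex-based redaction: replace anything that looks like an
# api_key assignment or a GitHub token with a placeholder before sending.
redact() {
  printf '%s\n' "$1" \
    | sed -E 's/(api_key=)[^ ]+/\1[REDACTED]/g' \
    | sed -E 's/ghp_[A-Za-z0-9]+/[REDACTED]/g'
}
```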

Custom Redaction Patterns

Add your own patterns via zstyle:

# Single pattern
zstyle ':zsh-cli-ai:redact' extra-patterns 'my-secret-.*'

# Multiple patterns (comma-separated)
zstyle ':zsh-cli-ai:redact' extra-patterns 'pattern1,pattern2,pattern3'

Opt-in History

By default, command history is never sent to the AI. You can enable it for better fix suggestions:

zsh-cli-ai history on   # Enable
zsh-cli-ai history off  # Disable (default)
zsh-cli-ai history      # Show current status

When enabled, the last 3 commands are sent with fix requests to provide context. These commands are redacted using the same patterns as all other input (API keys, tokens, etc. are stripped before sending).

Timeouts

All requests have a configurable timeout (default 30s). The shell is never blocked - if the AI doesn't respond, you get an error message and can continue working.

export ZSH_AI_TIMEOUT="60000"  # 60 seconds (useful for slower local models)

Disable

zstyle ':zsh-cli-ai:*' enabled 'no'

Development

# Clone and install
git clone https://github.com/Bigsy/zsh-cli-ai
cd zsh-cli-ai
npm install

# Build
npm run build

# Watch mode
npm run dev

# Link for local testing
npm link

Troubleshooting

Daemon not starting

zsh-cli-ai doctor  # Check all dependencies
zsh-cli-ai stop    # Stop any stuck daemon
zsh-cli-ai start   # Start fresh

Wrong backend running

zsh-cli-ai status   # Check which backend is running
zsh-cli-ai backend  # Check configured vs running backend
zsh-cli-ai backend claude  # Switch and auto-restart

Alt+A not working

Some terminals intercept Alt keys. Try:

  • iTerm2: Preferences → Profiles → Keys → Left Option Key → Esc+
  • Terminal.app: May need to use Esc then a instead of Alt+A

Or remap to a different key:

zstyle ':zsh-cli-ai:keybind' complete '^X^A'  # Ctrl+X Ctrl+A instead

Ollama not working

# Check if Ollama is running
curl http://localhost:11434/api/version

# Start Ollama if not running
ollama serve

# Check installed models
ollama list

# Pull a model if none installed
ollama pull gemma3:4b

# Run doctor to verify
zsh-cli-ai doctor

If using a custom Ollama host:

export OLLAMA_HOST="192.168.1.100:11434"
zsh-cli-ai stop && zsh-cli-ai start

License

MIT