copilot-plus v1.0.27

Voice + screenshots + model hotkeys + live agent monitor — drop-in wrapper for GitHub Copilot CLI
# copilot+
copilot+ is a drop-in replacement for the copilot command. It wraps Copilot CLI transparently and adds powerful input enhancements:
| Hotkey / Command | What it does |
|--------|-------------|
| Ctrl+R | Start / stop voice recording → transcription is typed into your prompt |
| Ctrl+P | Screenshot picker → file path is injected as @/path/screenshot.png |
| Ctrl+K | Open command palette — access all features from a searchable menu |
| Option+Shift+1–4 (macOS Terminal.app) | Switch to workhorse model slot 1–4 — requires "Use Option as Meta Key" |
| Option+Shift+5 (macOS Terminal.app) | Toggle ⚡ Auto Mode — model selected per prompt complexity |
| Ctrl+Shift+1–4 (kitty/WezTerm/Windows Terminal) | Switch to workhorse model slot 1–4 on CSI u–capable terminals |
| Ctrl+Shift+5 (kitty/WezTerm/Windows Terminal) | Toggle ⚡ Auto Mode on CSI u–capable terminals |
| Option+1–9 (macOS Terminal.app) | Execute a prompt macro — requires "Use Option as Meta Key" |
| Ctrl+1–9 (kitty/WezTerm/Windows Terminal) | Execute a prompt macro on CSI u–capable terminals |
| copilot+ --monitor | Open the real-time agent dashboard |
Everything else — all Copilot features, slash commands, modes — works exactly as normal.
## Requirements
| | macOS | Windows |
|---|---|---|
| OS | macOS 12+ | Windows 10/11 |
| GitHub Copilot CLI | required | required |
| Node.js ≥ 18 | brew install node | nodejs.org or winget install OpenJS.NodeJS |
| ffmpeg | brew install ffmpeg | winget install Gyan.FFmpeg |
| whisper.cpp | brew install whisper-cpp | Manual install |
Apple Silicon: The base.en model transcribes in ~1–2 s on M1/M2/M3.
## Installation
### Option A — npm (macOS + Windows)

```shell
npm install -g copilot-plus
```

### Option B — Homebrew (macOS only)

```shell
brew tap Errr0rr404/copilot-plus
brew install copilot-plus
```

### macOS — install speech dependencies

```shell
brew install ffmpeg whisper-cpp

# Download speech model (Option A — helper script)
whisper-cpp-download-ggml-model base.en

# Download speech model (Option B — direct curl, always works)
mkdir -p ~/.copilot/models
curl -L "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin" \
  -o ~/.copilot/models/ggml-base.en.bin
```

### Windows — install speech dependencies
1. Install ffmpeg:

   ```powershell
   winget install Gyan.FFmpeg
   ```

2. Install whisper-cli:
   - Download the latest whisper-cli.exe from github.com/ggerganov/whisper.cpp/releases
   - Place it somewhere on your PATH (e.g. C:\Windows\System32\, or add its folder to PATH)

3. Download the speech model:

   ```powershell
   mkdir "$env:USERPROFILE\.copilot\models" -Force
   curl -L "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin" `
     -o "$env:USERPROFILE\.copilot\models\ggml-base.en.bin"
   ```

### Verify setup
```shell
copilot+ --setup
```

You should see all green checkmarks. If anything is missing, the setup output tells you exactly what to fix.
The setup wizard also lists all detected audio input devices and lets you pick the right microphone interactively — the choice is saved to ~/.copilot/copilot-plus.json so you never need to edit the file manually.
## Quick Start
```shell
copilot+
```

That's it. You're now inside Copilot CLI with voice and screenshot support active.
## Using Voice Input
1. Press Ctrl+R to start recording. A system notification appears and your terminal title changes to 🎙 Recording…
2. Speak your prompt naturally — e.g. "refactor this function to use async await".
3. Press Ctrl+R again to stop. Transcription runs locally (⏳ Transcribing…) — no audio ever leaves your machine.
4. Your words appear as text in the Copilot prompt. Review and edit if needed, then press Enter to send.

Press Ctrl+C while recording to cancel without transcribing.
## Using Screenshots
macOS: Press Ctrl+P — the interactive screenshot overlay opens (same UI as ⌘⇧4). Click and drag to select any area. The file path is injected into your prompt as @/tmp/copilot-screenshots/screenshot-<timestamp>.png.
Windows: Press Ctrl+P — the Snip & Sketch overlay opens (same as Win+Shift+S). Draw a selection; the file path is injected automatically when you complete the snip.
Add context if you want (e.g. "what's wrong with this?"), then press Enter.
## First-Run Setup
On your first launch of copilot+, an interactive onboarding wizard will ask about:
- Voice Activation — hands-free "hey copilot" keyword detection
- Prompt macros — assign saved prompts to macro slots
Your choices are saved to ~/.copilot/copilot-plus.json. Re-run the wizard anytime:
```shell
copilot+ --preferences
```

## Command Palette
Press Ctrl+K to open the command palette — a searchable overlay listing every copilot-plus action:
- 🎙 Voice Recording
- 📸 Screenshot
- 🗣️ Voice Activation (toggle on/off)
- ⚡ Auto Mode (toggle on/off) + configure Fast / Medium / Powerful model tiers
- 🤖 Workhorse Models 1–4 (switch or configure model slots)
- ⌨️ Macros 1–9 (execute or edit inline)
- ⚙️ Open Preferences
Navigation: ↑↓ to move, type to filter, Enter to select, Esc to close.
Editing items from the palette: Navigate to any workhorse model, auto model tier, or macro entry and press Enter to open an inline editor. Then:
- Enter — save and immediately activate (switch model / run macro)
- Tab — save without activating
- Esc — go back without saving
## Agent Monitor
Run copilot+ --monitor in any terminal to open a live dashboard showing every running copilot session on your machine:
```shell
copilot+ --monitor
```

```
╭──────────────────────────── copilot+ monitor ─────────────────────────────╮
│ 3 active · 1 need attention updates every 1.5s · 4:36 PM · q │
│ individual pro · 587/1500 premium req █████░░░░░░░ resets 2026-04-01 │
├────────────────────────────────────────────────────────────────────────────┤
│ ⚠ ATTENTION pid 46206 claude-sonnet-4.6 ~/projects/api │
│ 8 premium req started 14m ago · 8 msgs · active 1m │
├────────────────────────────────────────────────────────────────────────────┤
│ ● IDLE pid 51111 gpt-4.1 ~/projects/frontend │
│ 3 premium req started 8m ago · 3 msgs · active now │
├ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┤
│ ● IDLE pid 28741 [copilot CLI] ~/projects/docs │
│ [unmanaged – no stats] started 1h ago │
╰────────────────────────────────────────────────────────────────────────────╯
```

What you see:
| Item | Description |
|------|-------------|
| Header quota bar | Your plan, premium requests used/remaining this month, and reset date — pulled live from the GitHub Copilot API |
| Status badge | ATTENTION (response waiting >30 s), THINKING (waiting for response), IDLE, RECORDING, TRANSCRIBING, DONE |
| Premium req count | Number of AI exchanges in this session (copilot+ managed sessions only) |
| [copilot CLI] | A bare copilot session not started through copilot+ — no per-session stats available |
Controls: q / Q / Ctrl+C / Esc to exit.
Sessions disappear automatically when the copilot process exits. Stale entries older than 5 minutes are pruned.
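The header's quota bar (e.g. 587/1500 premium requests rendered as a 12-cell bar) can be reproduced with a one-line renderer. This is an illustrative sketch, not the monitor's actual source; the function name `quotaBar` and the fixed 12-cell width (taken from the screenshot above) are our assumptions.

```typescript
// Illustrative renderer for the monitor's quota bar. Width 12 matches the
// example dashboard above; the real monitor's implementation may differ.
function quotaBar(used: number, total: number, width = 12): string {
  const filled = Math.min(width, Math.round((used / total) * width));
  return "█".repeat(filled) + "░".repeat(width - filled);
}
```

For 587 of 1500 requests this yields five filled cells of twelve, matching the dashboard example.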
## Workhorse Models
Assign up to 4 AI models to slots so you can switch between them instantly with a single hotkey — no more typing /model each time.
### Setup
The easiest way: open the command palette (Ctrl+K), navigate to a Workhorse entry, press Enter, type the model name (e.g. claude-sonnet-4.6), and press Enter to save and switch immediately.
You can also edit ~/.copilot/copilot-plus.json directly:
```json
{
  "workhorseModels": {
    "1": "claude-sonnet-4.6",
    "2": "claude-opus-4.5",
    "3": "gpt-4.1",
    "4": "o3"
  }
}
```

### Switching models
| Terminal | Hotkey |
|----------|--------|
| macOS Terminal.app | Option+Shift+1–4 — requires "Use Option as Meta Key" (same as macros) |
| kitty / WezTerm | Ctrl+Shift+1–4 — works natively |
| Windows Terminal | Ctrl+Shift+1–4 — works natively |
| Any terminal | Ctrl+K → navigate to a Workhorse entry → Enter |
Switching clears the current input line and sends /model <name> to Copilot CLI, then shows a macOS/Windows notification confirming the switch.
Note: Activating a workhorse slot (1–4) automatically turns Auto Mode off.
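The switch described above (clear the input line, then send `/model <name>`) boils down to a short byte sequence written to the wrapped Copilot process. The sketch below is a hypothetical illustration: the assumption that Ctrl+U (0x15) is used to clear readline-style input is ours, and the real wrapper may inject the command differently.

```typescript
// Hypothetical byte sequence for a model switch: Ctrl+U (0x15) clears the
// current input line, then "/model <name>" plus Enter is written to the
// wrapped copilot process. Illustrative only — not the actual source.
function modelSwitchSequence(model: string): string {
  return "\x15" + `/model ${model}` + "\r";
}
```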
## ⚡ Auto Mode
Auto Mode routes each prompt to the right model automatically — no manual switching required.
### How it works
When Auto Mode is on, copilot+ intercepts every Enter keypress, analyses the prompt you typed, picks a model tier, and switches to it (if needed) before submitting:
| Tier | When selected | Example prompts |
|------|--------------|-----------------|
| Fast | Short prompts (<80 chars) with question/explanation keywords | "explain this function", "what is a closure?" |
| Powerful | Long prompts (>200 chars) or implementation/task keywords | "implement", "refactor", "debug", "build", "create" |
| Medium | Everything else | general conversation, moderate-length requests |
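The tier heuristic described above can be sketched as a small classifier. The length thresholds (80/200 characters) and example keywords come from the documented behavior; the function name and the exact keyword lists are illustrative assumptions, not the real copilot-plus source.

```typescript
// Sketch of Auto Mode's tier heuristic, assuming the thresholds and
// keywords documented above. The real implementation may differ.
type Tier = "fast" | "medium" | "powerful";

const QUESTION_WORDS = ["explain", "what", "why", "how", "describe"];
const TASK_WORDS = ["implement", "refactor", "debug", "build", "create"];

function classifyTier(prompt: string): Tier {
  const p = prompt.trim().toLowerCase();
  // Long prompts or implementation/task keywords → Powerful
  if (p.length > 200 || TASK_WORDS.some((w) => p.includes(w))) return "powerful";
  // Short Q&A-style prompts → Fast
  if (p.length < 80 && QUESTION_WORDS.some((w) => p.includes(w))) return "fast";
  // Everything else → Medium
  return "medium";
}
```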
### Setup
Configure the three tiers via Ctrl+K → navigate to an Auto entry → Enter → type a model name → Enter.
Or edit ~/.copilot/copilot-plus.json directly:
```json
{
  "autoModels": {
    "fast": "claude-haiku-4.5",
    "medium": "claude-sonnet-4.6",
    "powerful": "claude-opus-4.6"
  }
}
```

If a tier is left empty it falls back to the corresponding workhorse slot (fast/medium → slot 1, powerful → slot 2).
### Toggling Auto Mode
| Terminal | Hotkey |
|----------|--------|
| macOS Terminal.app | Option+Shift+5 — requires "Use Option as Meta Key" |
| kitty / WezTerm | Ctrl+Shift+5 — works natively |
| Windows Terminal | Ctrl+Shift+5 — works natively |
| Any terminal | Ctrl+K → ⚡ Auto Mode → Enter |
When active, the terminal title shows copilot [⚡ auto] and a notification fires on each prompt showing which tier was selected. Switching to a workhorse slot (1–4) automatically turns Auto Mode off.
## Prompt Macros
Assign frequently used prompts to macro slots. When triggered, the saved text is instantly injected into your Copilot prompt.
### macOS (Apple Terminal)
Macros are triggered with Option+1 through Option+9.
One-time setup: Open Terminal → Settings → Profiles → Keyboard → check "Use Option as Meta Key".
### macOS (kitty / WezTerm / iTerm2) and Windows Terminal
Macros are triggered with Ctrl+1 through Ctrl+9 (these terminals support CSI u key encoding natively — no extra setup needed).
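For the curious, the CSI u encoding these terminals use sends each keypress as ESC [ codepoint ; modifiers u, where the modifier field is 1 plus a bitmask (shift = 1, alt = 2, ctrl = 4). So Ctrl+1 arrives as "\x1b[49;5u" (49 is the codepoint of "1") and Ctrl+Shift+1 as "\x1b[49;6u". A minimal decoder, with names of our choosing, might look like:

```typescript
// Minimal CSI u decoder sketch. Returns null for anything that is not a
// complete CSI u sequence (those bytes would be passed through unchanged).
const CSI_U = /^\x1b\[(\d+);(\d+)u$/;

function decodeCsiU(seq: string): { key: string; ctrl: boolean; shift: boolean } | null {
  const m = CSI_U.exec(seq);
  if (!m) return null;
  const mods = parseInt(m[2], 10) - 1; // bitfield: shift=1, alt=2, ctrl=4
  return {
    key: String.fromCodePoint(parseInt(m[1], 10)),
    ctrl: (mods & 4) !== 0,
    shift: (mods & 1) !== 0,
  };
}
```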
### Setting macros
The easiest way is via the command palette (Ctrl+K → navigate to a macro → Enter to edit).
You can also set them during onboarding, via copilot+ --preferences, or by editing ~/.copilot/copilot-plus.json directly:
```json
{
  "macros": {
    "1": "Write unit tests for this code",
    "2": "Explain this code step by step",
    "3": "Refactor this to use async/await"
  }
}
```

## Voice Activation
Say "hey copilot" or just "copilot" to start recording hands-free — no accounts, no API keys, no extra installs.
How it works:
- Always listens for your wake phrase using whisper.cpp (near-zero CPU when silent)
- Phrase detected → recording starts automatically
- You speak your prompt
- You pause → transcription runs locally → text is injected into copilot
- Returns to listening — ready for the next trigger
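The wake-phrase check applied to each transcribed chunk has to be tolerant, because whisper output varies in case and punctuation ("Hey, Copilot!"). A sketch of the matching step, with illustrative function names, might normalize both sides before comparing:

```typescript
// Hypothetical wake-phrase matcher for transcribed audio chunks.
// Lowercases and strips punctuation so "Hey, Copilot!" matches "hey copilot".
function normalize(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9 ]+/g, "").replace(/\s+/g, " ").trim();
}

function containsWakePhrase(transcript: string, phrase: string): boolean {
  return normalize(transcript).includes(normalize(phrase));
}
```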
### Setup
Enable during first run or via copilot+ --preferences. Choose any wake phrase:
- "hey copilot" (default) — or just say "copilot" without the "hey"; both work
- "ok computer", "yo copilot", or any short distinctive phrase
## Passing Flags to Copilot
Any arguments after copilot+ are forwarded directly to copilot:
```shell
copilot+ --experimental
copilot+ --banner
copilot+ --help
```

## Configuration
Settings are stored at ~/.copilot/copilot-plus.json (created automatically on first run).
```json
{
  "modelPath": "/opt/homebrew/share/whisper.cpp/models/ggml-base.en.bin",
  "audioDevice": ":2",
  "autoSubmit": false,
  "firstRunComplete": true,
  "workhorseModels": {
    "1": "claude-sonnet-4.6",
    "2": "claude-opus-4.5",
    "3": "gpt-4.1",
    "4": "o3"
  },
  "autoModels": {
    "fast": "claude-haiku-4.5",
    "medium": "claude-sonnet-4.6",
    "powerful": "claude-opus-4.6"
  },
  "macros": {
    "1": "Write unit tests for this code",
    "2": "Explain this code step by step"
  },
  "wakeWord": {
    "enabled": false,
    "phrase": "hey copilot",
    "chunkSeconds": 2
  }
}
```

| Key | Default | Description |
|-----|---------|-------------|
| modelPath | auto-detected | Path to your whisper .bin model file. Auto-heals if the file moves. |
| audioDevice | auto-detected | ffmpeg audio input device. Set interactively via copilot+ --setup. macOS: ":2" index format. Windows: "Microphone (Realtek Audio)" name format. |
| autoSubmit | false | true = automatically press Enter after voice transcription |
| workhorseModels | all empty | AI model slots 1–4. Edit via Ctrl+K command palette or directly here. |
| autoModels.fast | empty | Model for short Q&A prompts. Falls back to workhorse slot 1. |
| autoModels.medium | empty | Model for general prompts. Falls back to workhorse slot 1. |
| autoModels.powerful | empty | Model for long/complex/task prompts. Falls back to workhorse slot 2. |
| macros | all empty | Prompt macros, slots 1–9. Edit via Ctrl+K or --preferences. |
| wakeWord.enabled | false | Enable voice activation (wake phrase detection) |
| wakeWord.phrase | "hey copilot" | The phrase to listen for |
| wakeWord.chunkSeconds | 2 | Audio chunk length for wake phrase scanning |
### Available whisper models
| Model | Size | Speed (M2) | Accuracy |
|-------|------|------------|----------|
| tiny.en | 75 MB | ~0.5 s | Good |
| base.en | 142 MB | ~1 s | Better |
| small.en | 466 MB | ~3 s | Best for most |
```shell
# macOS
whisper-cpp-download-ggml-model small.en
```

```powershell
# Windows — download directly and update modelPath in config
curl -L "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.en.bin" `
  -o "$env:USERPROFILE\.copilot\models\ggml-small.en.bin"
```

Then update modelPath in ~/.copilot/copilot-plus.json.
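The config reference above notes that modelPath "auto-heals if the file moves". One plausible way to implement that is to probe the install locations this README uses (the manual-download directory and the Homebrew model directories from the example config). The helper below is an illustrative sketch, not the actual copilot-plus source; the `extraDirs` parameter is our addition for testability.

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical model auto-detection: return the first candidate path that
// exists on disk, or null if no model file is found anywhere.
function findModel(name = "ggml-base.en.bin", extraDirs: string[] = []): string | null {
  const candidates = [
    ...extraDirs.map((d) => path.join(d, name)),
    path.join(os.homedir(), ".copilot", "models", name),       // manual curl install
    path.join("/opt/homebrew/share/whisper.cpp/models", name), // Homebrew (Apple Silicon)
    path.join("/usr/local/share/whisper.cpp/models", name),    // Homebrew (Intel)
  ];
  return candidates.find((p) => fs.existsSync(p)) ?? null;
}
```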
## How It Works
```
┌────────────────────────────────────────────────────────────────────┐
│ copilot+ (PTY wrapper) │
│ │
│ Your keystrokes ──► intercept hotkeys │
│ ├── Ctrl+R → push-to-talk recording │
│ ├── Ctrl+P → screenshot picker │
│ ├── Ctrl+K → command palette overlay │
│ ├── Opt+⇧1–4 → switch workhorse model │
│ ├── Opt+⇧5 → toggle ⚡ auto mode │
│ ├── Ctrl+⇧1–4 → switch workhorse model │
│ ├── Ctrl+⇧5 → toggle ⚡ auto mode │
│ ├── Option+1–9 → inject macro (macOS) │
│ ├── Ctrl+1–9 → inject macro (CSI u) │
│ ▼ │
│ ┌─────────────┬──────────────┬───────────────┐ │
│ │ ffmpeg mic │ screencapture │ whisper+VAD │ │
│ │ + whisper-cli│ / Snip&Sketch │ (voice activ) │ │
│ └──────┬───────┴──────┬───────┴───────┬───────┘ │
│ └──────────────┴───────────────┘ │
│ ▼ │
│ inject text / @filepath / /model cmd │
│ │ │
│ copilot ◄───────────────────┘ │
│ (all other keystrokes pass through unchanged) │
└────────────────────────────────────────────────────────────────────┘
```

Transcription is 100% local — whisper.cpp runs on your machine, nothing is sent to any server.
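The interception step in the diagram is simple at the byte level: in a raw-mode terminal, Ctrl+R, Ctrl+P and Ctrl+K arrive as the single control bytes 0x12, 0x10 and 0x0b. The dispatch table below is an illustrative sketch (the action names are ours); anything unrecognized falls through to the wrapped copilot process untouched.

```typescript
// Illustrative hotkey dispatch for the PTY wrapper's keystroke filter.
// Unmatched bytes are forwarded to copilot unchanged ("passthrough").
type Action = "record" | "screenshot" | "palette" | "passthrough";

const HOTKEYS: Record<number, Action> = {
  0x12: "record",     // Ctrl+R
  0x10: "screenshot", // Ctrl+P
  0x0b: "palette",    // Ctrl+K
};

function dispatch(byte: number): Action {
  return HOTKEYS[byte] ?? "passthrough";
}
```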
## Troubleshooting
### posix_spawnp failed on first run
Run npm install -g copilot-plus again — the postinstall script will fix the permissions automatically.
### Microphone not being captured / transcription is always the same word
Your audioDevice is pointing to the wrong input (e.g. a virtual audio device).
macOS — list devices:

```shell
ffmpeg -f avfoundation -list_devices true -i "" 2>&1 | grep AVFoundation
```

Windows — list devices:
```powershell
ffmpeg -f dshow -list_devices true -i dummy 2>&1 | findstr audio
```

Set audioDevice in ~/.copilot/copilot-plus.json to the correct device
(macOS: ":2" index format · Windows: "Microphone (Realtek Audio)" name format).
### Error: could not open input device (macOS)
Grant microphone access to your terminal:
System Settings → Privacy & Security → Microphone → enable your terminal app
### Error: could not open input device (Windows)
Go to Settings → Privacy & Security → Microphone and enable microphone access for your terminal / Node.js.
### No whisper model found
```shell
# macOS
mkdir -p ~/.copilot/models
curl -L "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin" \
  -o ~/.copilot/models/ggml-base.en.bin
```

```powershell
# Windows (PowerShell)
mkdir "$env:USERPROFILE\.copilot\models" -Force
curl -L "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin" `
  -o "$env:USERPROFILE\.copilot\models\ggml-base.en.bin"
```

Then run copilot+ --setup to confirm it's detected.
### Transcription is inaccurate
Switch to a larger model (small.en instead of base.en) and update modelPath in ~/.copilot/copilot-plus.json.
### Wake word not triggering
Try a shorter, more distinctive phrase (e.g. "hey copilot" works better than a single common word). You can increase wakeWord.chunkSeconds to 3 or 4 if the phrase gets cut off mid-recording, or download the tiny.en model for faster scanning:
```shell
# macOS
mkdir -p ~/.copilot/models
curl -L "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en.bin" \
  -o ~/.copilot/models/ggml-tiny.en.bin
```

### Option+Shift+1–5 model slots / Option+1–9 macros don't work (macOS Apple Terminal)
Open Terminal → Settings → Profiles → Keyboard → check "Use Option as Meta Key".
### Model slot hotkey does nothing (kitty/WezTerm/Windows Terminal)
Ensure your terminal is configured to send CSI u key sequences. In kitty this is on by default. In WezTerm, enable_kitty_keyboard = true must be set. In Windows Terminal, enable "Input: Terminal Input Encoding" → application/vnd.ms-terminal.keyboard.v2 in settings.
### Screenshot doesn't attach (macOS)
System Settings → Privacy & Security → Screen Recording → enable your terminal app
### Screenshot doesn't attach (Windows)
Make sure you drew a selection in the Snip & Sketch overlay — pressing Escape cancels without saving.
## License
MIT © Errr0rr404
