@andypai/orb
v0.1.1
Voice-driven code explorer for your terminal
Why Orb
Orb is a Bun-native terminal app for exploring real codebases with Anthropic or OpenAI models. It keeps the interface focused, shows tool activity as it happens, remembers project conversations, and can read answers aloud through tts-gateway or macOS say.
Features
- Natural language queries - Ask questions about your code in plain English
- Live tool activity - See file reads, shell commands, and exploration steps as they happen
- Voice input friendly - Paste transcriptions from MacWhisper for hands-free interaction
- Streaming TTS (serve mode) - Hear answers while they are still being generated
- Provider selection - Choose Anthropic (Claude) or OpenAI via CLI flags
- Model switching (Claude) - Cycle Anthropic models during a conversation with Shift+Tab
- Session persistence - Automatically resume the last session per project
- Focused terminal UI - Ink-based interface with conversation history, tool activity, and the Orb intro
Installation
Global install
# With Bun
bun install -g @andypai/orb
# With npm (Bun is still required at runtime)
npm install -g @andypai/orb
Local / one-off use
# Run without installing globally
bunx @andypai/orb
# Add to a Bun project
bun add @andypai/orb
# npm also works, but Bun is still required at runtime
npm install @andypai/orb
60-Second Quick Start
1. Set up an LLM provider
- Anthropic: sign in with Claude Code / Max, or set ANTHROPIC_API_KEY
- OpenAI: set OPENAI_API_KEY
If you do not pass --provider or --model, Orb auto-selects a provider in this order:
- Claude Agent SDK (Claude Code / Max or ANTHROPIC_API_KEY)
- OpenAI (OPENAI_API_KEY)
2. Pick your speech path
Fastest path with no speech
orb --no-tts
Fastest path on macOS (batch speech)
orb --tts-mode=generate
Generate mode uses macOS built-ins (say and afplay) and does not require tts-gateway.
Recommended path for streaming speech
uv tool install tts-gateway[kokoro]
~/.local/share/uv/tools/tts-gateway/bin/python -m spacy download en_core_web_sm
tts serve --provider kokoro --port 8000
That spaCy install is important: Kokoro’s first request will crash in a plain uv tool install unless en_core_web_sm is installed into the tts-gateway tool environment.
Orb expects tts-gateway at http://localhost:8000 by default and automatically targets POST /tts.
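As a rough sketch (not Orb's actual internals), the serve-mode request can be pictured like this; the multipart `text` field mirrors the curl check shown in the TTS Setup section, while the helper name and the optional `voice` field are assumptions:

```typescript
// Illustrative helper: build a serve-mode TTS request against the gateway.
// The `text` form field matches the curl example in this README; the `voice`
// field is an assumption about how a preset would be passed along.
function buildTtsRequest(serverUrl: string, text: string, voice?: string) {
  const form = new FormData();
  form.set("text", text);
  if (voice !== undefined) form.set("voice", voice);
  // URL resolution keeps the default target POST /tts on the given server.
  return { url: new URL("/tts", serverUrl).toString(), form };
}

const req = buildTtsRequest("http://localhost:8000", "hello from orb", "alba");
console.log(req.url); // http://localhost:8000/tts
```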
3. Run Orb
# Explore the current directory
orb
# Guided setup for persistent defaults
orb setup
# Explore a specific project
orb /path/to/project
Usage
# Anthropic with options
orb --model=sonnet --voice=marius
orb --provider=anthropic --model=opus
# OpenAI provider
orb --provider=openai --model=gpt-5.4
orb --model=openai:gpt-5.4
# Fresh conversation
orb --new
# Skip the intro animation
orb --skip-intro
Options
| Option | Description | Default |
| ------------------------ | ------------------------------------------------------------------------------ | --------------------------------------- |
| --provider=<provider> | LLM provider: anthropic|claude, openai|gpt (alias: --llm-provider) | auto |
| --model=<model> | Model ID or alias (haiku, sonnet, opus) or provider:model | haiku (anthropic), gpt-5.4 (openai) |
| --voice=<voice> | TTS voice: alba, marius, jean | alba |
| --tts-mode=<mode> | serve for tts-gateway, generate for local macOS say | serve |
| --tts-server-url=<url> | Serve-mode gateway URL | http://localhost:8000 |
| --tts-speed=<rate> | TTS speed multiplier | 1.5 |
| --new | Start fresh (ignore saved session) | - |
| --skip-intro | Skip the welcome animation | - |
| --no-tts | Disable text-to-speech | - |
| --no-streaming-tts | Disable streaming (batch mode) | - |
| --help | Show help message | - |
Controls
- Type your question and press Enter to submit
- Paste MacWhisper transcription with Cmd+V
- Press Esc or Ctrl+S to stop speech
- Press Shift+Tab to cycle Claude models (Anthropic only)
- Press Ctrl+O to toggle live tool-call details
- Press Ctrl+C to exit
TTS Setup
Orb supports two TTS paths:
- Serve mode (default): send speech requests to a local tts-gateway server
- Generate mode: use macOS built-in say for local fallback speech
Serve mode
Serve mode gives Orb the best experience for low-latency streaming speech.
Install and start tts-gateway
uv tool install tts-gateway[kokoro]
~/.local/share/uv/tools/tts-gateway/bin/python -m spacy download en_core_web_sm
tts serve --provider kokoro --port 8000
Verify the server
curl http://localhost:8000/health
curl -X POST http://localhost:8000/tts -F 'text=hello from orb' -o /tmp/orb-check.wav
Then run Orb with defaults:
orb
If you use a different host or port:
orb --tts-server-url=http://localhost:9000
You can also save that value permanently with orb setup or tts.server_url in ~/.orb/config.toml.
Voice notes
Orb exposes three portable voice presets: alba, marius, and jean.
Some tts-gateway providers use different internal voice names. Orb already retries once without an explicit voice if the gateway rejects a preset, so a working server default will still speak.
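The retry behavior described above can be sketched as follows; `speak` stands in for the gateway call and is an illustrative assumption, not Orb's actual code:

```typescript
// Minimal sketch of "retry once without an explicit voice": if the gateway
// rejects the requested preset, fall back to the server's default voice.
type SpeakResult = { ok: boolean };
type Speak = (text: string, voice?: string) => Promise<SpeakResult>;

async function speakWithFallback(
  speak: Speak,
  text: string,
  voice?: string,
): Promise<SpeakResult> {
  const first = await speak(text, voice);
  // Success, or no voice was requested in the first place: nothing to retry.
  if (first.ok || voice === undefined) return first;
  // One retry with no explicit voice, so the server default still speaks.
  return speak(text, undefined);
}
```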
Generate mode
On macOS, generate mode works out of the box with the built-in say command:
orb --tts-mode=generate
If you want advanced voices, non-macOS support, or streaming playback while the model is still responding, use serve mode with tts-gateway instead.
Disable TTS
orb --no-tts
Provider Setup
Orb supports two LLM providers: Anthropic (Claude) and OpenAI.
Anthropic (default)
Anthropic uses the Claude Agent SDK. Orb can reuse a local Claude Code / Max-authenticated session when available, or fall back to ANTHROPIC_API_KEY / CLAUDE_API_KEY.
Quick start
# Uses Anthropic by default when available
orb
# Explicitly specify Anthropic
orb --provider=anthropic
# Use model aliases
orb --model=haiku
orb --model=sonnet
orb --model=opus
# Or use a full model ID
orb --model=claude-haiku-4-5-20251001
Available models
- claude-haiku-4-5-20251001 (default, alias: haiku)
- claude-sonnet-4-6 (alias: sonnet)
- claude-opus-4-6 (alias: opus)
If you are not already signed in through Claude Code / Max, set ANTHROPIC_API_KEY or CLAUDE_API_KEY before starting Orb.
For setup details, see the Claude Agent SDK quickstart and the Claude models overview.
OpenAI
OpenAI support uses the official OpenAI Responses API and requires OPENAI_API_KEY.
Quick start
export OPENAI_API_KEY=sk-...
orb --provider=openai
orb --provider=openai --model=gpt-5.4
orb --model=openai:gpt-5.4
Common models
- gpt-5.4 (default for OpenAI)
- gpt-5
- gpt-4o
- gpt-5.4-mini
Note: OpenAI runs in a sandboxed environment via bash-tool. File edits happen in a sandbox overlay and are not applied directly to your working tree. Orb will describe the changes it made so you can apply them yourself.
Global Config
Persistent defaults live in ~/.orb/config.toml. CLI flags override config values for one-off runs.
The easiest way to create the file is:
orb setup
A typical config looks like:
provider = "anthropic"
model = "claude-haiku-4-5-20251001"
skip_intro = false
[tts]
enabled = true
streaming = true
mode = "serve"
server_url = "http://localhost:8000"
voice = "alba"
speed = 1.5
buffer_sentences = 1
clause_boundaries = false
min_chunk_length = 15
max_wait_ms = 150
grace_window_ms = 50
Config-only advanced tuning keys live under [tts]:
- buffer_sentences
- clause_boundaries
- min_chunk_length
- max_wait_ms
- grace_window_ms
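To give a feel for what two of these keys tune, here is a rough sketch of sentence buffering; this is an assumption about the behavior, not Orb's implementation, and the sentence-splitting regex is simplified:

```typescript
// Illustrative: accumulate streamed text and emit a chunk for TTS once it
// holds at least `bufferSentences` sentences and `minChunkLength` characters,
// so very short sentences ride along with the next one instead of speaking
// as tiny fragments.
function chunkForTts(text: string, bufferSentences = 1, minChunkLength = 15): string[] {
  const sentences = text.match(/[^.!?]+[.!?]+\s*|[^.!?]+$/g) ?? [];
  const chunks: string[] = [];
  let buf = "";
  let count = 0;
  for (const s of sentences) {
    buf += s;
    count += 1;
    if (count >= bufferSentences && buf.trim().length >= minChunkLength) {
      chunks.push(buf.trim());
      buf = "";
      count = 0;
    }
  }
  if (buf.trim()) chunks.push(buf.trim()); // flush any trailing partial chunk
  return chunks;
}

console.log(chunkForTts("Hi. This is a longer sentence. Ok."));
// → ["Hi. This is a longer sentence.", "Ok."]
```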
Sessions are stored under ~/.orb/sessions/ (one per project).
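The per-project file naming is not documented here; a purely hypothetical sketch of how a project could map to one stable session file (Orb's real scheme may differ) might look like:

```typescript
import { createHash } from "node:crypto";
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical: hash the project's absolute path so each project resolves to
// the same session file under ~/.orb/sessions/ on every run.
function sessionPath(projectDir: string): string {
  const id = createHash("sha256").update(projectDir).digest("hex").slice(0, 12);
  return join(homedir(), ".orb", "sessions", `${id}.json`);
}
```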
Customizing Prompts
Orb’s built-in instructions live in the root-level prompts/ directory:
- prompts/base.md for shared behavior
- prompts/anthropic.md for Anthropic-specific system instructions
- prompts/openai.md for OpenAI-specific tool and sandbox instructions
- prompts/voice.md for voice-mode guidance added when TTS is enabled
Prompt files are read fresh for each run, so edits apply to the next question without rebuilding the app.
Requirements
- Runtime: Bun >= 1.1
- LLM provider: Anthropic or OpenAI authentication
- TTS (optional): tts-gateway for serve mode, or macOS say and afplay for generate mode
Development
git clone https://github.com/andypai/orb.git
cd orb
bun install
# Run in development
bun run dev
# Run with OpenAI
bun run dev --provider=openai --model=gpt-5.4
# Checks
bun run check
bun run test
License
MIT
