jbai-cli
v2.1.1
CLI wrappers to use AI coding tools (Claude Code, Codex, Gemini CLI, OpenCode, Goose, Continue) with JetBrains AI Platform
Use AI coding tools with your JetBrains AI subscription — no separate API keys needed.
One token, all tools: Claude Code, Codex, Gemini CLI, OpenCode, Goose, Continue, Codex Desktop, Cursor.
Install
```
npm install -g jbai-cli
```

Setup (2 minutes)
jbai now opens an interactive control panel by default, so you can manage everything from one place:
```
jbai
```

Direct commands are still supported for scripts and automation (for example jbai token set, jbai test, jbai proxy setup, jbai install).
Step 1: Get your token
- Go to platform.jetbrains.ai (or staging)
- Click your Profile icon (top right)
- Click "Copy Developer Token"
Step 2: Save your token
```
jbai token set
# Paste your token when prompted
```

Step 3: Verify it works

```
jbai test
```

Expected output:

```
Testing JetBrains AI Platform (staging)
1. OpenAI Proxy (Chat): ✅ Working
2. OpenAI Proxy (Codex /responses): ✅ Working
3. Anthropic Proxy (Claude): ✅ Working
4. Google Proxy (Gemini): ✅ Working
```

Local Proxy (for Codex Desktop, Cursor, and other GUI tools)
jbai-cli includes a local reverse proxy that lets any tool with custom base URL support work through JetBrains AI Platform — no per-tool wrappers needed.
One-liner setup
```
jbai proxy setup
```

This single command:

- Starts the proxy on localhost:18080 (auto-starts on login via launchd)
- Configures Codex Desktop (~/.codex/config.toml)
- Adds the JBAI_PROXY_KEY env var to your shell
How it works
```
Codex Desktop / Cursor / any tool
  │  standard OpenAI / Anthropic API calls
  ▼
http://localhost:18080
  │  injects Grazie-Authenticate-JWT header
  │  routes to correct provider endpoint
  ▼
https://api.jetbrains.ai/user/v5/llm/{provider}/v1
  │
  ▼
Actual LLM (GPT, Claude, Gemini)
```

Codex Desktop
After jbai proxy setup, Codex Desktop works automatically. The setup configures ~/.codex/config.toml with:
```toml
model_provider = "jbai-proxy"

[model_providers.jbai-proxy]
name = "JetBrains AI (Proxy)"
base_url = "http://localhost:18080/openai/v1"
env_key = "JBAI_PROXY_KEY"
wire_api = "responses"
```

Cursor
Cursor requires manual configuration via its UI:
- Open Cursor → Settings (gear icon) → Models
- Enable "Override OpenAI Base URL"
- Set:
  - Base URL: http://localhost:18080/openai/v1
  - API Key: placeholder
- Click Verify
Any OpenAI-compatible tool
Point it to the proxy:
```
export OPENAI_BASE_URL=http://localhost:18080/openai/v1
export OPENAI_API_KEY=placeholder
```

For Anthropic-compatible tools:

```
export ANTHROPIC_BASE_URL=http://localhost:18080/anthropic
export ANTHROPIC_API_KEY=placeholder
```

Proxy commands
| Command | Description |
|---------|-------------|
| jbai proxy setup | One-liner: configure everything + start |
| jbai proxy status | Check if proxy is running |
| jbai proxy stop | Stop the proxy |
| jbai proxy --daemon | Start proxy in background |
| jbai proxy install-service | Auto-start on login (macOS launchd) |
| jbai proxy uninstall-service | Remove auto-start |
Proxy routes
| Route | Target |
|-------|--------|
| /openai/v1/* | Grazie OpenAI endpoint |
| /anthropic/v1/* | Grazie Anthropic endpoint |
| /google/v1/* | Grazie Google endpoint |
| /v1/chat/completions | OpenAI (auto-detect) |
| /v1/responses | OpenAI (auto-detect) |
| /v1/messages | Anthropic (auto-detect) |
| /v1/models | Synthetic model list |
| /health | Proxy status |
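The route table above boils down to a prefix match. Here is a minimal shell sketch of that mapping — illustrative only, not the proxy's actual implementation; the upstream URLs are assumed from the Grazie endpoints documented later in this README:

```shell
# Illustrative sketch of the proxy's route table (NOT the real implementation).
route() {
  case "$1" in
    /openai/v1/*)    echo "https://api.jetbrains.ai/user/v5/llm/openai${1#/openai}" ;;
    /anthropic/v1/*) echo "https://api.jetbrains.ai/user/v5/llm/anthropic${1#/anthropic}" ;;
    /google/v1/*)    echo "https://api.jetbrains.ai/user/v5/llm/google${1#/google}" ;;
    /v1/chat/completions|/v1/responses) echo "https://api.jetbrains.ai/user/v5/llm/openai$1" ;;   # auto-detect: OpenAI
    /v1/messages)    echo "https://api.jetbrains.ai/user/v5/llm/anthropic$1" ;;                   # auto-detect: Anthropic
    *)               echo "unrouted" ;;
  esac
}

# Example:
# route /v1/messages   → https://api.jetbrains.ai/user/v5/llm/anthropic/v1/messages
```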
CLI Tool Wrappers
Claude Code
```
jbai-claude
```

Codex CLI

```
# Interactive mode
jbai-codex

# One-shot task
jbai-codex exec "explain this codebase"
```

OpenCode

```
jbai-opencode
```

Goose (Block)

```
# Interactive session
jbai-goose

# One-shot task
jbai-goose run -t "explain this codebase"
```

Continue CLI

```
# Interactive TUI
jbai-continue

# One-shot (print and exit)
jbai-continue -p "explain this function"
```

Super Mode (Skip Confirmations)
Add --super (or --yolo or -s) to any command to enable maximum permissions:
```
# Claude Code - skips all permission prompts
jbai-claude --super

# Codex - full auto mode
jbai-codex --super exec "refactor this code"
```
| Tool | Super Mode Flag |
|------|-----------------|
| Claude Code | --dangerously-skip-permissions |
| Codex | --full-auto |
| Gemini CLI | --yolo |
| OpenCode | N/A (run mode is already non-interactive) |
| Goose | GOOSE_MODE=auto |
| Continue CLI | --auto |
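For scripting, the table above amounts to a small lookup. A hedged shell sketch (illustrative only — the jbai wrappers handle this internally when you pass --super):

```shell
# Map a tool name to its super-mode flag, per the table above (sketch only).
super_flag() {
  case "$1" in
    claude)   echo "--dangerously-skip-permissions" ;;
    codex)    echo "--full-auto" ;;
    gemini)   echo "--yolo" ;;
    goose)    echo "GOOSE_MODE=auto" ;;  # env var, not a CLI flag
    continue) echo "--auto" ;;
    *)        echo "" ;;                 # OpenCode: run mode is already non-interactive
  esac
}
```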
Using Different Models
Each tool has a sensible default, but you can specify any available model:
- jbai-opencode default: gpt-5.4 with xhigh reasoning (--variant xhigh)
- jbai-codex default: gpt-5.4 with xhigh reasoning effort
```
# Claude with Opus 4.6
jbai-claude --model claude-opus-4-6

# Codex with GPT-5.4
jbai-codex --model gpt-5.4

# Goose with GPT-5.2
jbai-goose run -t "your task" --provider openai --model gpt-5.2-2025-12-11

# Continue with Claude Opus 4.6
jbai-continue # select model in TUI
```

Available Models
Claude (Anthropic) - Default for Goose, Continue
| Model | Notes |
|-------|-------|
| claude-sonnet-4-5-20250929 | Default |
| claude-opus-4-6 | Most capable (latest) |
| claude-opus-4-5-20251101 | |
| claude-opus-4-1-20250805 | |
| claude-sonnet-4-20250514 | |
| claude-haiku-4-5-20251001 | Fast |
| claude-3-7-sonnet-20250219 | |
| claude-3-5-haiku-20241022 | Fastest |
GPT (OpenAI Chat) - Default for OpenCode
| Model | Notes |
|-------|-------|
| gpt-5.4 | Default, latest |
| gpt-5.2-2025-12-11 | |
| gpt-5.2 | Alias |
| gpt-5.1-2025-11-13 | |
| gpt-5-2025-08-07 | |
| gpt-5-mini-2025-08-07 | Fast |
| gpt-5-nano-2025-08-07 | Fastest |
| gpt-4.1-2025-04-14 | |
| o4-mini-2025-04-16 | Reasoning |
| o3-2025-04-16 | Reasoning |
Codex (OpenAI Responses) - Use with Codex CLI: jbai-codex --model <model>
| Model | Notes |
|-------|-------|
| gpt-5.4 | Default, latest |
| gpt-5.3-codex-api-preview | |
| gpt-5.2-codex | Coding-optimized |
| gpt-5.2-pro-2025-12-11 | |
| gpt-5.1-codex | |
| gpt-5.1-codex-max | Most capable |
| gpt-5.1-codex-mini | Fast |
| gpt-5-codex | |
Gemini (Google) - Use with Gemini CLI: jbai-gemini
| Model | Notes |
|-------|-------|
| gemini-2.5-flash | Default, fast |
| gemini-2.5-pro | More capable |
| gemini-3-pro-preview | Preview |
| gemini-3-flash-preview | Preview |
Commands Reference
| Command | Description |
|---------|-------------|
| jbai | Open interactive control panel |
| jbai menu | Open interactive control panel |
| jbai help | Show help |
| jbai token | Show token status |
| jbai token set | Set/update token |
| jbai test | Test API connections |
| jbai models [tool] | List Grazie models |
| jbai proxy setup | Setup proxy + configure Codex Desktop |
| jbai proxy status | Check proxy status |
| jbai proxy stop | Stop proxy |
| jbai install | Install all AI tools |
| jbai install claude | Install specific tool |
| jbai doctor | Check tool installation status |
| jbai env staging | Use staging environment |
| jbai env production | Use production environment |
Interactive Control Panel
Running jbai with no arguments opens a terminal menu with fast access to:
- Token management (show, set, refresh)
- Environment switching (staging / production)
- Agent installation
- Client wiring (jbai proxy setup + Codex Desktop env)
- Health check (doctor)
- Agent launch (Claude / Codex / OpenCode / Gemini / Goose / Continue)
- Update / uninstall commands
- Version info
Use 0 to exit the menu.
Installing AI Tools
jbai-cli can install the underlying tools for you:
```
# Install all tools at once
jbai install

# Install specific tool
jbai install claude
jbai install codex

# Check what's installed
jbai doctor
```

Manual Installation
| Tool | Install Command |
|------|-----------------|
| Claude Code | npm i -g @anthropic-ai/claude-code |
| Codex | npm i -g @openai/codex |
| OpenCode | npm i -g opencode-ai |
| Goose | brew install block-goose-cli |
| Continue CLI | npm i -g @continuedev/cli |
Token Management
```
# Check token status (shows expiry date)
jbai token

# Update expired token
jbai token set
```

Tokens are stored securely at ~/.jbai/token.
Switching Environments
```
# Staging (default) - for testing
jbai env staging

# Production - for real work
jbai env production
```

Note: Staging and production use different tokens. Get the right one from the corresponding platform URL.
How It Works
jbai-cli uses JetBrains AI Platform's Guarded Proxy, which provides API-compatible endpoints:
- OpenAI API → api.jetbrains.ai/user/v5/llm/openai/v1
- Anthropic API → api.jetbrains.ai/user/v5/llm/anthropic/v1
- Google Vertex → api.jetbrains.ai/user/v5/llm/google/v1/vertex
Your JetBrains AI token authenticates all requests via the Grazie-Authenticate-JWT header.
CLI wrappers (jbai-claude, jbai-codex, etc.) set environment variables and launch the underlying tool directly.
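That wrapper behavior can be sketched as: take the token, set the provider's base URL and key in the child environment, and launch the tool. A hypothetical sketch — not jbai-cli's actual code, and the real wrappers may set additional variables:

```shell
# Hypothetical wrapper sketch: run a tool with Grazie env vars set.
run_with_grazie() {
  token="$1"; shift
  ANTHROPIC_BASE_URL="https://api.jetbrains.ai/user/v5/llm/anthropic" \
  ANTHROPIC_API_KEY="$token" \
  "$@"                                   # launch the tool with the env set
}

# e.g.: run_with_grazie "$(cat ~/.jbai/token)" claude --model claude-opus-4-6
```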
Local proxy (jbai proxy) runs an HTTP server on localhost that forwards requests to Grazie, injecting the JWT header automatically. This enables GUI tools like Codex Desktop and Cursor that don't support custom headers but do support custom base URLs.
Troubleshooting
"Token expired"
```
jbai token set
# Get fresh token from platform.jetbrains.ai
```

"Claude Code not found"

```
npm install -g @anthropic-ai/claude-code
```

"Connection failed"

```
# Test which endpoints work
jbai test

# Check your environment
jbai token
```

Proxy not working

```
# Check proxy status
jbai proxy status

# Check proxy health
curl http://localhost:18080/health

# Check logs
cat ~/.jbai/proxy.log

# Restart proxy
jbai proxy stop && jbai proxy --daemon
```

Wrong environment

```
# Staging token won't work with production
jbai env staging     # if using staging token
jbai env production  # if using production token
```

License
MIT
