
jbai-cli

v2.1.1

CLI wrappers to use AI coding tools (Claude Code, Codex, Gemini CLI, OpenCode, Goose, Continue) with JetBrains AI Platform

jbai-cli

Use AI coding tools with your JetBrains AI subscription — no separate API keys needed.

One token, all tools: Claude Code, Codex, Gemini CLI, OpenCode, Goose, Continue, Codex Desktop, Cursor.

Install

npm install -g jbai-cli

Setup (2 minutes)

jbai now opens an interactive control panel by default, so you can manage everything from one place:

jbai

It still supports direct commands for scripts and automation (for example jbai token set, jbai test, jbai proxy setup, jbai install).

Step 1: Get your token

  1. Go to platform.jetbrains.ai (or staging)
  2. Click your Profile icon (top right)
  3. Click "Copy Developer Token"

Step 2: Save your token

jbai token set
# Paste your token when prompted

Step 3: Verify it works

jbai test

Expected output:

Testing JetBrains AI Platform (staging)

1. OpenAI Proxy (Chat): ✅ Working
2. OpenAI Proxy (Codex /responses): ✅ Working
3. Anthropic Proxy (Claude): ✅ Working
4. Google Proxy (Gemini): ✅ Working

Local Proxy (for Codex Desktop, Cursor, and other GUI tools)

jbai-cli includes a local reverse proxy that lets any tool with custom base URL support work through JetBrains AI Platform — no per-tool wrappers needed.

One-liner setup

jbai proxy setup

This single command:

  • Starts the proxy on localhost:18080 (auto-starts on login via launchd)
  • Configures Codex Desktop (~/.codex/config.toml)
  • Adds the JBAI_PROXY_KEY env var to your shell
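
On macOS, the login auto-start is a launchd LaunchAgent. A rough sketch of what such a plist looks like (the label, program path, and keys here are illustrative assumptions; jbai proxy install-service writes the actual file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label; the real agent may use a different one -->
  <key>Label</key>
  <string>com.jbai.proxy</string>
  <!-- Run the proxy in the foreground; launchd supervises it -->
  <key>ProgramArguments</key>
  <array>
    <string>jbai</string>
    <string>proxy</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```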

How it works

Codex Desktop / Cursor / any tool
    │  standard OpenAI / Anthropic API calls
    ▼
http://localhost:18080
    │  injects Grazie-Authenticate-JWT header
    │  routes to correct provider endpoint
    ▼
https://api.jetbrains.ai/user/v5/llm/{provider}/v1
    │
    ▼
Actual LLM (GPT, Claude, Gemini)

Codex Desktop

After jbai proxy setup, Codex Desktop works automatically. The setup configures ~/.codex/config.toml with:

model_provider = "jbai-proxy"

[model_providers.jbai-proxy]
name = "JetBrains AI (Proxy)"
base_url = "http://localhost:18080/openai/v1"
env_key = "JBAI_PROXY_KEY"
wire_api = "responses"

Cursor

Cursor requires manual configuration via its UI:

  1. Open Cursor Settings (gear icon) → Models
  2. Enable "Override OpenAI Base URL"
  3. Set:
    • Base URL: http://localhost:18080/openai/v1
    • API Key: placeholder
  4. Click Verify

Any OpenAI-compatible tool

Point it to the proxy:

export OPENAI_BASE_URL=http://localhost:18080/openai/v1
export OPENAI_API_KEY=placeholder

For Anthropic-compatible tools:

export ANTHROPIC_BASE_URL=http://localhost:18080/anthropic
export ANTHROPIC_API_KEY=placeholder

Proxy commands

| Command | Description |
|---------|-------------|
| jbai proxy setup | One-liner: configure everything + start |
| jbai proxy status | Check if proxy is running |
| jbai proxy stop | Stop the proxy |
| jbai proxy --daemon | Start proxy in background |
| jbai proxy install-service | Auto-start on login (macOS launchd) |
| jbai proxy uninstall-service | Remove auto-start |

Proxy routes

| Route | Target |
|-------|--------|
| /openai/v1/* | Grazie OpenAI endpoint |
| /anthropic/v1/* | Grazie Anthropic endpoint |
| /google/v1/* | Grazie Google endpoint |
| /v1/chat/completions | OpenAI (auto-detect) |
| /v1/responses | OpenAI (auto-detect) |
| /v1/messages | Anthropic (auto-detect) |
| /v1/models | Synthetic model list |
| /health | Proxy status |
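
The dispatch above amounts to a prefix match on the request path. A hypothetical sketch of that mapping (the real logic lives inside jbai-cli's proxy; this only makes the table explicit):

```shell
# Illustrative route dispatch mirroring the routes table; not the proxy's
# actual code, just the path-to-provider mapping made explicit.
route_provider() {
  case "$1" in
    /openai/v1/*)         echo openai ;;
    /anthropic/v1/*)      echo anthropic ;;
    /google/v1/*)         echo google ;;
    /v1/chat/completions) echo openai ;;      # auto-detect
    /v1/responses)        echo openai ;;      # auto-detect
    /v1/messages)         echo anthropic ;;   # auto-detect
    /v1/models)           echo models ;;      # synthetic model list
    /health)              echo health ;;
    *)                    echo unknown ;;
  esac
}

route_provider /v1/messages            # → anthropic
route_provider /openai/v1/responses    # → openai
```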

CLI Tool Wrappers

Claude Code

jbai-claude

Codex CLI

# Interactive mode
jbai-codex

# One-shot task
jbai-codex exec "explain this codebase"

OpenCode

jbai-opencode

Goose (Block)

# Interactive session
jbai-goose

# One-shot task
jbai-goose run -t "explain this codebase"

Continue CLI

# Interactive TUI
jbai-continue

# One-shot (print and exit)
jbai-continue -p "explain this function"

Super Mode (Skip Confirmations)

Add --super (or --yolo or -s) to any command to enable maximum permissions:

# Claude Code - skips all permission prompts
jbai-claude --super

# Codex - full auto mode
jbai-codex --super exec "refactor this code"

| Tool | Super Mode Flag |
|------|-----------------|
| Claude Code | --dangerously-skip-permissions |
| Codex | --full-auto |
| Gemini CLI | --yolo |
| OpenCode | N/A (run mode is already non-interactive) |
| Goose | GOOSE_MODE=auto |
| Continue CLI | --auto |

Using Different Models

Each tool has a sensible default, but you can specify any available model:

  • jbai-opencode default: gpt-5.4 with xhigh reasoning (--variant xhigh)
  • jbai-codex default: gpt-5.4 with xhigh reasoning effort

# Claude with Opus 4.6
jbai-claude --model claude-opus-4-6

# Codex with GPT-5.4
jbai-codex --model gpt-5.4

# Goose with GPT-5.2
jbai-goose run -t "your task" --provider openai --model gpt-5.2-2025-12-11

# Continue with Claude Opus 4.6
jbai-continue  # select model in TUI

Available Models

Claude (Anthropic) - Default for Goose, Continue

| Model | Notes |
|-------|-------|
| claude-sonnet-4-5-20250929 | Default |
| claude-opus-4-6 | Most capable (latest) |
| claude-opus-4-5-20251101 | |
| claude-opus-4-1-20250805 | |
| claude-sonnet-4-20250514 | |
| claude-haiku-4-5-20251001 | Fast |
| claude-3-7-sonnet-20250219 | |
| claude-3-5-haiku-20241022 | Fastest |

GPT (OpenAI Chat) - Default for OpenCode

| Model | Notes |
|-------|-------|
| gpt-5.4 | Default, latest |
| gpt-5.2-2025-12-11 | |
| gpt-5.2 | Alias |
| gpt-5.1-2025-11-13 | |
| gpt-5-2025-08-07 | |
| gpt-5-mini-2025-08-07 | Fast |
| gpt-5-nano-2025-08-07 | Fastest |
| gpt-4.1-2025-04-14 | |
| o4-mini-2025-04-16 | Reasoning |
| o3-2025-04-16 | Reasoning |

Codex (OpenAI Responses) - Use with Codex CLI: jbai-codex --model <model>

| Model | Notes |
|-------|-------|
| gpt-5.4 | Default, latest |
| gpt-5.3-codex-api-preview | |
| gpt-5.2-codex | Coding-optimized |
| gpt-5.2-pro-2025-12-11 | |
| gpt-5.1-codex | |
| gpt-5.1-codex-max | Most capable |
| gpt-5.1-codex-mini | Fast |
| gpt-5-codex | |

Gemini (Google) - Use with Gemini CLI: jbai-gemini

| Model | Notes |
|-------|-------|
| gemini-2.5-flash | Default, fast |
| gemini-2.5-pro | More capable |
| gemini-3-pro-preview | Preview |
| gemini-3-flash-preview | Preview |

Commands Reference

| Command | Description |
|---------|-------------|
| jbai | Open interactive control panel |
| jbai menu | Open interactive control panel |
| jbai help | Show help |
| jbai token | Show token status |
| jbai token set | Set/update token |
| jbai test | Test API connections |
| jbai models [tool] | List Grazie models |
| jbai proxy setup | Setup proxy + configure Codex Desktop |
| jbai proxy status | Check proxy status |
| jbai proxy stop | Stop proxy |
| jbai install | Install all AI tools |
| jbai install claude | Install specific tool |
| jbai doctor | Check tool installation status |
| jbai env staging | Use staging environment |
| jbai env production | Use production environment |

Interactive Control Panel

Running jbai with no arguments opens a terminal menu with fast access to:

  • Token management (show, set, refresh)
  • Environment switching (staging / production)
  • Agent installation
  • Client wiring (jbai proxy setup + Codex Desktop env)
  • Health check (doctor)
  • Agent launch (Claude / Codex / OpenCode / Gemini / Goose / Continue)
  • Update / uninstall commands
  • Version info

Use 0 to exit the menu.

Installing AI Tools

jbai-cli can install the underlying tools for you:

# Install all tools at once
jbai install

# Install specific tool
jbai install claude
jbai install codex

# Check what's installed
jbai doctor

Manual Installation

| Tool | Install Command |
|------|-----------------|
| Claude Code | npm i -g @anthropic-ai/claude-code |
| Codex | npm i -g @openai/codex |
| OpenCode | npm i -g opencode-ai |
| Goose | brew install block-goose-cli |
| Continue CLI | npm i -g @continuedev/cli |

Token Management

# Check token status (shows expiry date)
jbai token

# Update expired token
jbai token set

Tokens are stored securely at ~/.jbai/token.
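
The token travels as the Grazie-Authenticate-JWT header, so if jbai token ever reports something odd you can inspect the claims yourself. A small sketch, assuming the standard three-segment base64url JWT format:

```shell
# Decode the payload (second segment) of a JWT to see its claims, e.g. exp.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url drops padding; restore it before decoding
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Usage: jwt_payload "$(cat ~/.jbai/token)"
jwt_payload "aGVhZGVy.eyJleHAiOjEyM30.c2ln"   # sample token → {"exp":123}
```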

Switching Environments

# Staging (default) - for testing
jbai env staging

# Production - for real work
jbai env production

Note: Staging and production use different tokens. Get the right one from the corresponding platform URL.

How It Works

jbai-cli uses JetBrains AI Platform's Guarded Proxy, which provides API-compatible endpoints:

  • OpenAI API → api.jetbrains.ai/user/v5/llm/openai/v1
  • Anthropic API → api.jetbrains.ai/user/v5/llm/anthropic/v1
  • Google Vertex → api.jetbrains.ai/user/v5/llm/google/v1/vertex
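
The endpoint pattern above can be written down as a tiny helper (illustrative only; the wrappers and proxy compute this internally):

```shell
# Build the guarded-proxy base URL for a provider. Google's Vertex route
# carries an extra /vertex suffix, per the list above.
grazie_base_url() {
  case "$1" in
    google) echo "https://api.jetbrains.ai/user/v5/llm/google/v1/vertex" ;;
    *)      echo "https://api.jetbrains.ai/user/v5/llm/$1/v1" ;;
  esac
}

grazie_base_url anthropic   # → https://api.jetbrains.ai/user/v5/llm/anthropic/v1
```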

Your JetBrains AI token authenticates all requests via the Grazie-Authenticate-JWT header.

CLI wrappers (jbai-claude, jbai-codex, etc.) set environment variables and launch the underlying tool directly.

Local proxy (jbai proxy) runs an HTTP server on localhost that forwards requests to Grazie, injecting the JWT header automatically. This enables GUI tools like Codex Desktop and Cursor that don't support custom headers but do support custom base URLs.

Troubleshooting

"Token expired"

jbai token set
# Get fresh token from platform.jetbrains.ai

"Claude Code not found"

npm install -g @anthropic-ai/claude-code

"Connection failed"

# Test which endpoints work
jbai test

# Check your environment
jbai token

Proxy not working

# Check proxy status
jbai proxy status

# Check proxy health
curl http://localhost:18080/health

# Check logs
cat ~/.jbai/proxy.log

# Restart proxy
jbai proxy stop && jbai proxy --daemon

Wrong environment

# Staging token won't work with production
jbai env staging   # if using staging token
jbai env production  # if using production token

License

MIT