
claude-responses-bridge v0.3.5

Local Claude-style Messages to OpenAI Responses bridge and wrapper CLI

Claude Responses Bridge

Claude Responses Bridge is a local CLI that lets Claude-style /v1/messages clients talk to an upstream OpenAI-compatible /v1/responses server.

This project now ships with:

  • an interactive startup console
  • direct provider and token configuration inside the CLI
  • multi-provider management with create, switch, update, and delete flows
  • a local OpenAI-compatible /v1/chat/completions bridge for Cursor and Cline
  • a local OpenAI-compatible /v1/responses passthrough for tools that already use Responses API
  • a Cursor integration command that can detect Cursor, install Continue, and write a bridge-backed Continue config
  • backward compatibility for older single-provider config.local.json files
  • a three-section terminal layout with Header, Status Table, and Interactive Menu
  • Chinese UI copy, ANSI colors, box borders, and keyboard navigation
  • local-first smart routing with single, failover, and round-robin
  • live provider telemetry from /bridge/status

Install

npm install -g claude-responses-bridge

Quick Start

Open the interactive console:

crb

Or run the local file:

node .\cli.js

The console shows the current bridge state and gives you guided actions for:

  • starting the bridge
  • launching Claude through the bridge
  • editing bridge settings
  • managing providers
  • running diagnostics

The interactive console now uses:

  • a branded ASCII art header
  • a status dashboard with provider and bridge state
  • an arrow-key menu instead of typing 1, 2, 3
  • a short loader before high-impact actions such as starting the bridge

Direct CLI Configuration

You can also configure everything without opening the console:

crb configure --name Main --base-url https://your-upstream.example.com --api-key sk-xxxx

That command creates or updates the active provider and writes the config file.

You can override bridge settings at the same time:

crb configure `
  --name Main `
  --base-url https://your-upstream.example.com `
  --api-key sk-xxxx `
  --port 3456 `
  --host 127.0.0.1 `
  --timeout 600000 `
  --map-default gpt-5.1-codex

Cursor Plugin Setup

If Cursor Free blocks the native BYOK model picker, use the bridge through an official third-party extension instead of trying to unlock Cursor's built-in named models.

Guided Cursor integration:

crb cursor

Non-interactive install + config write:

crb cursor --install --write-config

Choose a specific model from the current provider when writing config:

crb cursor --write-config --model gpt-5.2

This flow:

  • detects the local Cursor command path
  • checks whether the official Continue extension is installed
  • optionally installs Continue into Cursor
  • optionally writes ~/.continue/config.yaml so Continue uses the local bridge
  • can select a model from the current provider's live /v1/models list and persist it for future runs

This flow does not:

  • unlock Cursor's built-in named model picker
  • remove Cursor's native free-plan restriction
  • modify your upstream provider key inside Cursor's native account system

Provider Management

List providers:

crb provider list

Add a provider:

crb provider add --name Backup --base-url https://backup.example.com --api-key sk-backup

Add and activate it immediately:

crb provider add --name Backup --base-url https://backup.example.com --api-key sk-backup --activate

Switch the active provider:

crb provider use backup

Update or replace a provider:

crb provider update backup --base-url https://new-upstream.example.com --api-key sk-new

Delete a provider:

crb provider remove backup

The CLI prevents deleting the last remaining provider so the bridge does not fall into an unusable state.

Start the Bridge

Start the local bridge with the active provider:

crb serve

Start it with a specific provider:

crb serve --provider backup

On startup the CLI now prints a short session overview that shows the selected provider, upstream, token preview, and local listen address.
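Once the bridge is listening, you can sanity-check it from another terminal. A minimal sketch, assuming the default listen address from the config example; GET /health is listed under Endpoints below, though the exact response body is not documented here:

```shell
# Confirm the local bridge is up and responding.
curl -s http://127.0.0.1:3456/health
```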

Launch Claude Through the Bridge

crb claude

Choose a provider for one run:

crb claude --provider backup

Pass Claude CLI arguments after --:

crb claude --provider backup -- -p "Reply with just OK."

Diagnostics

Human-readable doctor output:

crb doctor

JSON doctor output:

crb doctor --json

The doctor report includes:

  • config path
  • active provider
  • upstream base URL
  • masked token state
  • Claude CLI detection
  • local listen URL
  • warnings

Cursor and VSCode / Cline

Start the bridge:

node .\cli.js serve

Print IDE-ready local settings:

node .\cli.js ide
node .\cli.js ide --json

Or use the guided Cursor integration flow:

node .\cli.js cursor

The bridge now exposes an OpenAI-compatible local endpoint:

http://127.0.0.1:3456/v1

Use this local endpoint inside Cursor or Cline instead of your upstream proxy:

  • Base URL: http://127.0.0.1:3456/v1
  • API Key: bridge-local
  • Recommended model: gpt-5.2-codex
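As a quick check that the local endpoint behaves like an OpenAI-compatible server, you can send a standard chat completion request. A sketch, assuming the bridge is already running; the model name follows the recommendation above, and `bridge-local` is the placeholder key from the list:

```shell
# Standard OpenAI-style chat completion request against the local bridge.
curl -s http://127.0.0.1:3456/v1/chat/completions \
  -H "Authorization: Bearer bridge-local" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "messages": [{"role": "user", "content": "Reply with just OK."}]
  }'
```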

Cursor

In Settings -> Models -> API Keys:

  • enable OpenAI API Key
  • set the key to bridge-local
  • enable Override OpenAI Base URL
  • set the base URL to http://127.0.0.1:3456/v1
  • choose gpt-5.2-codex

If your Cursor plan blocks native BYOK flows, install Cline inside Cursor and use the Cline setup below, or run crb cursor to set up Continue automatically.

VSCode / Cursor + Cline

In Cline settings:

  • API Provider: OpenAI Compatible
  • Base URL: http://127.0.0.1:3456/v1
  • API Key: bridge-local
  • Model ID: gpt-5.2-codex
  • Native Tool Call: optional, but supported by the bridge

Smart Routing

This bridge is no longer just a static relay. It now supports:

  • single: always use the selected provider
  • failover: use the selected provider first, then automatically retry other enabled providers
  • round-robin: distribute requests across enabled providers
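Round-robin can be pictured as a cursor advancing over the enabled providers. A hypothetical sketch, not the bridge's actual implementation; `pickProvider` and the `enabled` flag are illustrative names:

```javascript
// Hypothetical round-robin provider selection, for illustration only.
let cursor = 0;

function pickProvider(providers) {
  // Providers default to enabled unless explicitly disabled.
  const enabled = providers.filter((p) => p.enabled !== false);
  if (enabled.length === 0) throw new Error("no enabled providers");
  const provider = enabled[cursor % enabled.length];
  cursor += 1; // advance so the next request hits the next provider
  return provider;
}
```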

Show the current route mode:

crb route show

Switch to automatic failover:

crb route set failover

Inspect live provider health:

crb status

Local status endpoint:

GET /bridge/status
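For scripting, the same telemetry that crb status prints is available over HTTP. A sketch, assuming the default listen address; the response shape is not documented here:

```shell
# Fetch live provider health as JSON from the running bridge.
curl -s http://127.0.0.1:3456/bridge/status
```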

Config File

The config format now supports multiple providers. A simplified example:

{
  "schemaVersion": 2,
  "port": 3456,
  "listenHost": "127.0.0.1",
  "upstreamBaseUrl": "https://your-upstream.example.com",
  "apiKey": "<YOUR_ACTIVE_PROVIDER_TOKEN>",
  "requestTimeoutMs": 600000,
  "selectedProviderId": "main",
  "providers": [
    {
      "id": "main",
      "name": "Main Provider",
      "baseUrl": "https://your-upstream.example.com",
      "apiKey": "<YOUR_ACTIVE_PROVIDER_TOKEN>"
    }
  ],
  "modelMap": {
    "default": "gpt-5.1-codex",
    "opus": "gpt-5.1-codex-max",
    "sonnet": "gpt-5.1-codex",
    "haiku": "gpt-5.1-codex-mini"
  }
}

Older configs with only upstreamBaseUrl and apiKey still work. They are normalized into the new provider model when the CLI loads them.
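That normalization can be pictured roughly like this. A hypothetical sketch of how a legacy single-provider config could map into the schemaVersion 2 shape, not the CLI's actual code; `normalizeLegacyConfig` is an illustrative name:

```javascript
// Illustrative sketch: lift a legacy { upstreamBaseUrl, apiKey } config
// into the multi-provider schemaVersion 2 shape shown above.
function normalizeLegacyConfig(cfg) {
  if (cfg.providers && cfg.providers.length > 0) return cfg; // already multi-provider
  return {
    ...cfg,
    schemaVersion: 2,
    selectedProviderId: "main",
    providers: [
      {
        id: "main",
        name: "Main Provider",
        baseUrl: cfg.upstreamBaseUrl,
        apiKey: cfg.apiKey,
      },
    ],
  };
}
```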

Endpoints

  • GET /health
  • GET /models
  • GET /v1/models
  • GET /v1/models/:id
  • POST /chat/completions
  • POST /v1/chat/completions
  • POST /responses
  • POST /v1/responses
  • POST /v1/messages
  • POST /v1/messages/count_tokens
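The /v1/messages route accepts Claude-style Messages requests, which the bridge translates to the upstream Responses API. A minimal request sketch: the alias "sonnet" maps through modelMap in the config example above, max_tokens is required by the Messages format, and the headers follow Anthropic's Messages API conventions (the bridge may not require all of them):

```shell
# Claude Messages-format request against the local bridge.
curl -s http://127.0.0.1:3456/v1/messages \
  -H "x-api-key: bridge-local" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sonnet",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Reply with just OK."}]
  }'
```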

Notes

  • config.local.json remains ignored by git.
  • Keep real domains and tokens out of screenshots and logs.
  • Use config.example.json as a safe public example.