
aliyun-codex-bridge · v0.1.2 · Downloads: 80

Local proxy that translates OpenAI Responses API format to Coding Plan Dashscope Chat Completions format for Codex

Aliyun Codex Bridge

Local proxy that translates the OpenAI Responses API to Coding Plan Dashscope Chat Completions for the Codex CLI



What It Solves

Newer Codex CLI versions speak the OpenAI Responses API (e.g. /v1/responses, with instructions + input + event-stream semantics). Some gateways/providers (including Coding Plan Dashscope endpoints) only expose legacy Chat Completions (messages[]).

This proxy:

  1. Accepts Codex requests in Responses format
  2. Translates them to Chat Completions
  3. Forwards to Coding Plan Dashscope
  4. Translates back to Responses format (stream + non-stream)
  5. Returns to Codex
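
As a rough illustration of steps 1–2, the request-direction translation can be sketched like this (a minimal sketch with an assumed helper name, `responsesToChat`; the real proxy handles many more fields):

```javascript
// Sketch: accept a Responses-format request and produce the equivalent
// Chat Completions payload. Illustrative only, not the package's internals.
function responsesToChat(req) {
  const messages = [];
  if (req.instructions) {
    // Responses "instructions" becomes a leading system message
    messages.push({ role: "system", content: req.instructions });
  }
  for (const item of req.input || []) {
    messages.push({ role: item.role, content: item.content });
  }
  return {
    model: req.model,
    messages,
    max_tokens: req.max_output_tokens, // Responses name -> Chat name
  };
}

const chat = responsesToChat({
  model: "GLM-4.7",
  instructions: "Be helpful",
  input: [{ role: "user", content: "Hello" }],
  max_output_tokens: 1000,
});
console.log(chat.messages.length); // 2
```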

Without this proxy, Codex may fail (example from upstream error payloads):

{"error":{"code":"1214","message":"Incorrect role information"}}

If you’re using codex-termux and a gateway that doesn’t fully match the Responses API, this proxy is the recommended compatibility layer.


Features

  • Responses API ↔ Chat Completions translation (request + response)
  • Streaming support with SSE (Server-Sent Events)
  • Health check endpoint (/health)
  • Works on Linux/macOS/Windows (WSL) + Termux (ARM64)
  • Reasoning passthrough (request reasoning + upstream reasoning text)
  • Optional tool/MCP bridging (see “Tools / MCP” below)
  • Extended optional field mapping (stop, n, penalties, logprobs, response_format, user, modalities/audio)
  • Non-stream multi-choice compatibility path (n>1) with provider-safe fallback (n=1 retry when upstream thinking mode forbids multi-choice)
  • Zero/low dependencies (Node built-ins only, unless noted by package.json)

Requirements

  • Node.js: 18+ (native fetch)
  • Port: 31415 (default, configurable)

Installation

npm install -g aliyun-codex-bridge

Quick Start

1) Start the Proxy

aliyun-codex-bridge

Default listen address:

  • http://127.0.0.1:31415

2) Configure Codex

Add this provider to ~/.codex/config.toml:

[model_providers.ai_proxy]
name = "Coding Plan Dashscope via local proxy"
base_url = "http://127.0.0.1:31415"
env_key = "AI_API_KEY"
wire_api = "responses"
stream_idle_timeout_ms = 3000000

Notes:

  • base_url is the server root. Codex will call /v1/responses; this proxy supports that path.
  • Set env_key = "AI_API_KEY" and export your Coding Plan Dashscope key with the same name.

3) Run Codex via the Proxy

export AI_API_KEY="your-coding-plan-key"
codex -m "GLM-4.7" -c model_provider="ai_proxy"

Tools / MCP (optional)

Codex tool-calling / MCP memory requires an additional compatibility layer:

  • Codex uses Responses API tool events (function call items + arguments delta/done, plus function_call_output inputs)
  • Some upstream models/providers may not emit tool calls (or may emit them in a different shape)

This proxy can attempt to bridge tools automatically when the request carries tool definitions (tools, tool_choice, or tool outputs). You can also force it on:

export ALLOW_TOOLS=1

Important:

  • Tool support is provider/model dependent. If upstream never emits tool calls, the proxy can’t invent them.
  • If tools are enabled, the proxy must translate:
    • Responses tools + tool_choice → Chat tools + tool_choice
    • Chat tool_calls (stream/non-stream) → Responses function-call events
    • Responses function_call_output → Chat role=tool messages
  • Non-function tool types are normalized for upstream compatibility.
  • Function calls are emitted as stream events; final response.completed output includes message + function_call items in creation order for parity with streaming.
  • Model-family strategy for tool_choice:
    • qwen* / minimax* / glm*: forced function-object tool_choice is downgraded to auto
    • kimi*: forced function-object tool_choice is kept
    • If upstream still returns tool_choice ... object in thinking mode (HTTP 400), the proxy retries once with tool_choice=auto
  • For n>1, the proxy uses an upstream non-stream path and re-emits Responses lifecycle events; if provider rejects multi-choice in thinking mode, it retries with n=1.

(See repo changelog and docs for the exact implemented behavior.)
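
As a rough illustration of the third mapping above (Responses `function_call_output` → Chat `role=tool` messages), under assumed field handling:

```javascript
// Sketch: turn a Responses-style function_call_output input item into the
// Chat Completions "tool" message shape. Field names follow the public API
// shapes of the two protocols; this is not the package's actual code.
function toolOutputToChatMessage(item) {
  if (item.type !== "function_call_output") return null;
  return {
    role: "tool",
    tool_call_id: item.call_id,
    content: typeof item.output === "string"
      ? item.output
      : JSON.stringify(item.output),
  };
}

const msg = toolOutputToChatMessage({
  type: "function_call_output",
  call_id: "call_123",
  output: "42",
});
console.log(msg.role); // "tool"
```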


CLI Usage

# Start with defaults
aliyun-codex-bridge

# Custom port
aliyun-codex-bridge --port 8080

# Enable debug logging
aliyun-codex-bridge --log-level debug

# Custom Coding Plan Dashscope endpoint
aliyun-codex-bridge --ai-base-url https://coding.dashscope.aliyuncs.com/v1

# Show help
aliyun-codex-bridge --help

Environment Variables

export HOST=127.0.0.1
export PORT=31415
export AI_API_BASE=https://coding.dashscope.aliyuncs.com/v1
export LOG_LEVEL=info
export AI_API_KEY=your-coding-plan-key

# Optional
export ALLOW_TOOLS=1   # force tool bridging (otherwise auto-enabled when tools are present)
export ALLOW_SYSTEM=0  # optional: disable system-role passthrough
export SUPPRESS_REASONING_TEXT=1  # reduce latency by skipping reasoning stream
export ALLOW_MULTI_TOOL_CALLS=1   # process multiple tool_calls in one chunk (default: enabled, set 0 to disable)
export FORCE_ENV_AUTH=1  # default: require env token and ignore inbound Authorization
export LOG_STREAM_RAW=1  # debug upstream chunk summaries (redacted, requires LOG_LEVEL=debug)
export LOG_STREAM_MAX=1200  # max logged raw chunk length
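
The FORCE_ENV_AUTH behavior described above (default: use the environment token and ignore inbound Authorization) can be sketched like this; the logic is my reading of the flag description, not the package's source:

```javascript
// Sketch of the documented FORCE_ENV_AUTH default: authenticate upstream
// with AI_API_KEY and ignore any inbound Authorization header unless the
// flag is explicitly disabled. Assumed logic, for illustration only.
function upstreamAuthHeader(env, inboundAuthorization) {
  const forceEnv = env.FORCE_ENV_AUTH !== "0"; // documented default: enabled
  if (forceEnv || !inboundAuthorization) {
    return `Bearer ${env.AI_API_KEY}`;
  }
  return inboundAuthorization;
}

console.log(upstreamAuthHeader({ AI_API_KEY: "k" }, "Bearer other")); // "Bearer k"
```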

Auto-start the Proxy with Codex (recommended)

Use a shell function that starts the proxy only if needed:

codex-with-codingplan() {
  local HOST="127.0.0.1"
  local PORT="31415"
  local HEALTH="http://${HOST}:${PORT}/health"
  local PROXY_PID=""

  if ! curl -fsS "$HEALTH" >/dev/null 2>&1; then
    ALLOW_TOOLS=1 aliyun-codex-bridge --host "$HOST" --port "$PORT" >/dev/null 2>&1 &
    PROXY_PID=$!
    trap 'kill $PROXY_PID 2>/dev/null' EXIT INT TERM
    sleep 1
  fi

  codex -c model_provider="ai_proxy" "$@"
}

Usage:

export AI_API_KEY="your-coding-plan-key"
codex -m "GLM-4.7"

Use model_provider="ai_proxy" in all new configs.


API Endpoints

  • POST /responses — accepts Responses API requests
  • POST /v1/responses — same as above (Codex default path)
  • POST /chat/completions / POST /v1/chat/completions — accepted for compatibility, still normalized through the bridge pipeline
  • GET /health — health check
  • GET /models / GET /v1/models — static model list

Translation Overview

Request: Responses → Chat

// Input (Responses)
{
  "model": "GLM-4.7",
  "instructions": "Be helpful",
  "input": [{ "role": "user", "content": "Hello" }],
  "max_output_tokens": 1000
}

// Output (Chat)
{
  "model": "GLM-4.7",
  "messages": [
    { "role": "system", "content": "Be helpful" },
    { "role": "user", "content": "Hello" }
  ],
  "max_tokens": 1000
}

Response: Chat → Responses (simplified)

// Input (Chat)
{
  "choices": [{ "message": { "content": "Hi there!" } }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 5 }
}

// Output (Responses - simplified)
{
  "status": "completed",
  "output": [{ "type": "message", "content": [{ "type": "output_text", "text": "Hi there!" }] }],
  "usage": { "input_tokens": 10, "output_tokens": 5 }
}
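
The simplified response-direction mapping above can be exercised with a small sketch (simplified like the example itself; not the proxy's real code):

```javascript
// Sketch of the simplified Chat -> Responses mapping shown above:
// first choice's message content becomes an output_text item, and
// usage token counters are renamed.
function chatToResponses(chatRes) {
  const text = chatRes.choices?.[0]?.message?.content ?? "";
  return {
    status: "completed",
    output: [
      { type: "message", content: [{ type: "output_text", text }] },
    ],
    usage: {
      input_tokens: chatRes.usage?.prompt_tokens ?? 0,
      output_tokens: chatRes.usage?.completion_tokens ?? 0,
    },
  };
}

const res = chatToResponses({
  choices: [{ message: { content: "Hi there!" } }],
  usage: { prompt_tokens: 10, completion_tokens: 5 },
});
console.log(res.output[0].content[0].text); // "Hi there!"
```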

Reasoning Support

  • If the Responses request includes reasoning, the proxy forwards it to upstream as reasoning (and reasoning_effort when reasoning.effort is set).
  • Upstream reasoning text is accepted from any of: reasoning_content, reasoning, thinking, thought.
  • The proxy emits response.reasoning_text.delta / response.reasoning_text.done events and includes reasoning_text content as a dedicated reasoning output item in response.completed.
  • Upstream stream chunks carrying error are mapped to response.failed.
  • Tool-output rounds preserve/restore preceding assistant.tool_calls before role=tool messages for stricter upstream validators.
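
The multi-field reasoning extraction can be sketched as follows (field list taken from the bullet above; the helper name is mine):

```javascript
// Sketch: pick upstream reasoning text from any of the accepted fields,
// in the order listed above. Illustrative helper, not the package's code.
const REASONING_FIELDS = ["reasoning_content", "reasoning", "thinking", "thought"];

function extractReasoningText(delta) {
  for (const field of REASONING_FIELDS) {
    const value = delta?.[field];
    if (typeof value === "string" && value.length > 0) return value;
  }
  return null;
}

console.log(extractReasoningText({ reasoning_content: "step 1..." })); // "step 1..."
console.log(extractReasoningText({ content: "final answer" })); // null
```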

Troubleshooting

401 / “token expired or incorrect”

  • Verify the key is exported as AI_API_KEY (and matches env_key in config.toml).
  • Make sure the proxy is not overwriting Authorization headers.

404 on /v1/responses

  • Ensure base_url points to the proxy root (example: http://127.0.0.1:31415).
  • Confirm the proxy is running and /health returns ok.

MCP/tools not being called

  • Check proxy logs for allowTools: true and toolsPresent: true.
  • If toolsPresent: false, Codex did not send tool definitions (verify your provider config).
  • If tools are present but the model prints literal <function=...> markup or never emits tool calls, your upstream model likely doesn’t support tool calling.
  • If your provider rejects system role, set ALLOW_SYSTEM=0.

502 Bad Gateway

  • Proxy reached upstream but upstream failed. Enable debug:
    LOG_LEVEL=debug aliyun-codex-bridge

Log Levels

  • Supported values: debug, info, warn, error.

🧪 Tests

This repo includes end-to-end validation assets for running Codex through the proxy.

Notes:

  • Interactive runs require a real TTY (codex).
  • For automation/non-TTY environments, prefer codex exec.

Versioning Policy

This repo follows small, safe patch increments while stabilizing provider compatibility:

  • Releases stay on the 0.1.x line, with patch bumps only.
  • No larger version jumps unless strictly necessary.

(See CHANGELOG.md for details once present.)


License

Copyright (c) 2026 WellaNet.Dev. See the LICENSE file (MIT) for details. Made in Italy 🇮🇹