aliyun-codex-bridge v0.1.2
# Aliyun Codex Bridge

Local proxy that translates the OpenAI Responses API ↔ Coding Plan Dashscope Chat Completions for the Codex CLI.
## What It Solves

Newer Codex CLI versions speak the OpenAI Responses API (e.g. `/v1/responses`, with `instructions` + `input` + event-stream semantics). Some gateways/providers (including Coding Plan Dashscope endpoints) only expose legacy Chat Completions (`messages[]`).

This proxy:

- Accepts Codex requests in Responses format
- Translates them to Chat Completions
- Forwards them to Coding Plan Dashscope
- Translates the responses back to Responses format (stream + non-stream)
- Returns them to Codex
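As a rough illustration, the request-side half of this pipeline can be sketched as below. This is a simplified sketch, not the package's actual code: the real proxy handles many more fields and content shapes.

```javascript
// Illustrative sketch of the Responses -> Chat Completions request mapping.
// The real proxy handles far more fields (tools, reasoning, penalties, ...).
function responsesToChat(req) {
  const messages = [];
  if (req.instructions) {
    // Responses "instructions" become the Chat system message.
    messages.push({ role: "system", content: req.instructions });
  }
  for (const item of req.input ?? []) {
    // Input items may carry structured content arrays; flatten plain text parts.
    const content = Array.isArray(item.content)
      ? item.content.map((p) => p.text ?? "").join("")
      : item.content;
    messages.push({ role: item.role, content });
  }
  const out = { model: req.model, messages };
  if (req.max_output_tokens != null) out.max_tokens = req.max_output_tokens;
  return out;
}

const chat = responsesToChat({
  model: "GLM-4.7",
  instructions: "Be helpful",
  input: [{ role: "user", content: "Hello" }],
  max_output_tokens: 1000,
});
console.log(JSON.stringify(chat));
```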
Without this proxy, Codex may fail with errors like this upstream payload:

```json
{"error":{"code":"1214","message":"Incorrect role information"}}
```

If you're using codex-termux with a gateway that doesn't fully match the Responses API, this proxy is the recommended compatibility layer.
## Features

- Responses API ↔ Chat Completions translation (request + response)
- Streaming support via SSE (Server-Sent Events)
- Health check endpoint (`/health`)
- Works on Linux/macOS/Windows (WSL) + Termux (ARM64)
- Reasoning passthrough (request `reasoning` + upstream reasoning text)
- Optional tool/MCP bridging (see "Tools / MCP" below)
- Extended optional field mapping (`stop`, `n`, penalties, logprobs, `response_format`, `user`, modalities/audio)
- Non-stream multi-choice compatibility path (`n>1`) with a provider-safe fallback (`n=1` retry when upstream thinking mode forbids multi-choice)
- Zero/low dependencies (Node built-ins only, unless noted in package.json)
## Requirements

- Node.js 18+ (for native `fetch`)
- Port 31415 (default, configurable)
## Installation

```bash
npm install -g aliyun-codex-bridge
```

## Quick Start

### 1) Start the Proxy

```bash
aliyun-codex-bridge
```

Default listen address: `http://127.0.0.1:31415`
### 2) Configure Codex

Add this provider to `~/.codex/config.toml`:

```toml
[model_providers.ai_proxy]
name = "Coding Plan Dashscope via local proxy"
base_url = "http://127.0.0.1:31415"
env_key = "AI_API_KEY"
wire_api = "responses"
stream_idle_timeout_ms = 3000000
```

Notes:

- `base_url` is the server root. Codex will call `/v1/responses`; this proxy supports that path.
- Set `env_key = "AI_API_KEY"` and export your Coding Plan Dashscope key under the same name.
### 3) Run Codex via the Proxy

```bash
export AI_API_KEY="your-coding-plan-key"
codex -m "GLM-4.7" -c model_provider="ai_proxy"
```

## Tools / MCP (optional)
Codex tool-calling / MCP memory requires an additional compatibility layer:

- Codex uses Responses API tool events (function-call items + arguments delta/done, plus `function_call_output` inputs)
- Some upstream models/providers may not emit tool calls (or may emit them in a different shape)

This proxy attempts to bridge tools automatically when the request carries tool definitions (`tools`, `tool_choice`, or tool outputs). You can also force it on:

```bash
export ALLOW_TOOLS=1
```

Important:
- Tool support is provider/model dependent. If upstream never emits tool calls, the proxy can't invent them.
- If tools are enabled, the proxy must translate:
  - Responses `tools` + `tool_choice` → Chat `tools` + `tool_choice`
  - Chat `tool_calls` (stream/non-stream) → Responses function-call events
  - Responses `function_call_output` → Chat `role=tool` messages
- Non-function tool types are normalized for upstream compatibility.
- Function calls are emitted as stream events; the final `response.completed` output includes message + function_call items in creation order, for parity with streaming.
- Model-family strategy for `tool_choice`:
  - `qwen*` / `minimax*` / `glm*`: a forced function-object `tool_choice` is downgraded to `auto`
  - `kimi*`: a forced function-object `tool_choice` is kept
  - If upstream still returns `tool_choice ... object in thinking mode` (HTTP 400), the proxy retries once with `tool_choice=auto`
- For `n>1`, the proxy uses an upstream non-stream path and re-emits Responses lifecycle events; if the provider rejects multi-choice in thinking mode, it retries with `n=1`.
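The Chat → Responses leg of the tool bridging can be pictured with a small sketch. This is illustrative only: the field names follow the two wire formats, but the real proxy also emits the corresponding stream events (arguments delta/done) rather than just the final items.

```javascript
// Illustrative sketch: turning a Chat Completions assistant message's
// tool_calls into Responses-style function_call output items.
function toolCallsToFunctionCallItems(message) {
  return (message.tool_calls ?? []).map((tc) => ({
    type: "function_call",
    call_id: tc.id,
    name: tc.function.name,
    arguments: tc.function.arguments, // JSON-encoded string, passed through as-is
  }));
}

const items = toolCallsToFunctionCallItems({
  role: "assistant",
  tool_calls: [
    {
      id: "call_1",
      type: "function",
      function: { name: "get_weather", arguments: '{"city":"Rome"}' },
    },
  ],
});
console.log(JSON.stringify(items));
```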
(See repo changelog and docs for the exact implemented behavior.)
## CLI Usage

```bash
# Start with defaults
aliyun-codex-bridge

# Custom port
aliyun-codex-bridge --port 8080

# Enable debug logging
aliyun-codex-bridge --log-level debug

# Custom Coding Plan Dashscope endpoint
aliyun-codex-bridge --ai-base-url https://coding.dashscope.aliyuncs.com/v1

# Show help
aliyun-codex-bridge --help
```

## Environment Variables
```bash
export HOST=127.0.0.1
export PORT=31415
export AI_API_BASE=https://coding.dashscope.aliyuncs.com/v1
export LOG_LEVEL=info
export AI_API_KEY=your-coding-plan-key

# Optional
export ALLOW_TOOLS=1              # force tool bridging (otherwise auto-enabled when tools are present)
export ALLOW_SYSTEM=0             # disable system-role passthrough
export SUPPRESS_REASONING_TEXT=1  # reduce latency by skipping the reasoning stream
export ALLOW_MULTI_TOOL_CALLS=1   # process multiple tool_calls in one chunk (default: enabled; set 0 to disable)
export FORCE_ENV_AUTH=1           # default: require the env token and ignore inbound Authorization
export LOG_STREAM_RAW=1           # debug upstream chunk summaries (redacted; requires LOG_LEVEL=debug)
export LOG_STREAM_MAX=1200        # max logged raw chunk length
```

## Auto-start the Proxy with Codex (recommended)
Use a shell function that starts the proxy only if needed:

```bash
codex-with-codingplan() {
  local HOST="127.0.0.1"
  local PORT="31415"
  local HEALTH="http://${HOST}:${PORT}/health"
  local PROXY_PID=""
  if ! curl -fsS "$HEALTH" >/dev/null 2>&1; then
    ALLOW_TOOLS=1 aliyun-codex-bridge --host "$HOST" --port "$PORT" >/dev/null 2>&1 &
    PROXY_PID=$!
    trap 'kill $PROXY_PID 2>/dev/null' EXIT INT TERM
    sleep 1
  fi
  codex -c model_provider="ai_proxy" "$@"
}
```

Usage:

```bash
export AI_API_KEY="your-coding-plan-key"
codex-with-codingplan -m "GLM-4.7"
```

Use `model_provider="ai_proxy"` in all new configs.
## API Endpoints

- `POST /responses` — accepts Responses API requests
- `POST /v1/responses` — same as above (the Codex default path)
- `POST /chat/completions` / `POST /v1/chat/completions` — accepted for compatibility; still normalized through the bridge pipeline
- `GET /health` — health check
- `GET /models` / `GET /v1/models` — static model list
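One way to picture the `/v1`-prefixed and unprefixed path aliasing is the sketch below. This is a hypothetical helper for illustration; the package's actual routing code may differ.

```javascript
// Illustrative sketch of the /v1 path aliasing: strip a leading /v1 prefix so
// /v1/responses and /responses resolve to the same handler. Not the real code.
const KNOWN_ROUTES = new Set(["/responses", "/chat/completions", "/health", "/models"]);

function normalizeRoute(pathname) {
  const stripped = pathname.replace(/^\/v1(?=\/)/, "");
  return KNOWN_ROUTES.has(stripped) ? stripped : null; // null -> 404
}

console.log(normalizeRoute("/v1/responses")); // "/responses"
console.log(normalizeRoute("/health")); // "/health"
```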
## Translation Overview

### Request: Responses → Chat

Input (Responses):

```json
{
  "model": "GLM-4.7",
  "instructions": "Be helpful",
  "input": [{ "role": "user", "content": "Hello" }],
  "max_output_tokens": 1000
}
```

Output (Chat):

```json
{
  "model": "GLM-4.7",
  "messages": [
    { "role": "system", "content": "Be helpful" },
    { "role": "user", "content": "Hello" }
  ],
  "max_tokens": 1000
}
```

### Response: Chat → Responses (simplified)

Input (Chat):

```json
{
  "choices": [{ "message": { "content": "Hi there!" } }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 5 }
}
```

Output (Responses, simplified):

```json
{
  "status": "completed",
  "output": [{ "type": "message", "content": [{ "type": "output_text", "text": "Hi there!" }] }],
  "usage": { "input_tokens": 10, "output_tokens": 5 }
}
```

## Reasoning Support
- If the Responses request includes `reasoning`, the proxy forwards it upstream as `reasoning` (and `reasoning_effort` when `reasoning.effort` is set).
- Upstream reasoning text is accepted from any of: `reasoning_content`, `reasoning`, `thinking`, `thought`.
- The proxy emits `response.reasoning_text.delta` / `response.reasoning_text.done` events and includes `reasoning_text` content as a dedicated `reasoning` output item in `response.completed`.
- Upstream stream chunks carrying `error` are mapped to `response.failed`.
- Tool-output rounds preserve/restore the preceding `assistant.tool_calls` before `role=tool` messages, for stricter upstream validators.
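The multi-field reasoning acceptance described above amounts to a first-match lookup, which can be sketched like this (illustrative only; the real proxy's extraction logic may differ in detail):

```javascript
// Illustrative sketch: accept upstream reasoning text from any of the
// field names listed above, in priority order.
const REASONING_FIELDS = ["reasoning_content", "reasoning", "thinking", "thought"];

function extractReasoningText(delta) {
  for (const field of REASONING_FIELDS) {
    const value = delta?.[field];
    if (typeof value === "string" && value.length > 0) return value;
  }
  return null; // no reasoning text in this chunk
}

console.log(extractReasoningText({ reasoning_content: "step 1..." })); // "step 1..."
console.log(extractReasoningText({ thinking: "hmm" })); // "hmm"
console.log(extractReasoningText({ content: "Hi" })); // null
```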
## Troubleshooting

### 401 / "token expired or incorrect"

- Verify the key is exported as `AI_API_KEY` (and matches `env_key` in config.toml).
- Make sure the proxy is not overwriting Authorization headers.

### 404 on /v1/responses

- Ensure `base_url` points to the proxy root (example: `http://127.0.0.1:31415`).
- Confirm the proxy is running and `/health` returns ok.

### MCP/tools not being called

- Check the proxy logs for `allowTools: true` and `toolsPresent: true`.
- If `toolsPresent: false`, Codex did not send tool definitions (verify your provider config).
- If tools are present but the model prints literal `<function=...>` markup or never emits tool calls, your upstream model likely doesn't support tool calling.
- If your provider rejects the `system` role, set `ALLOW_SYSTEM=0`.

### 502 Bad Gateway

The proxy reached upstream, but upstream failed. Enable debug logging:

```bash
LOG_LEVEL=debug aliyun-codex-bridge
```
## Log Levels

Supported values: `debug`, `info`, `warn`, `error`.
## 🧪 Tests

This repo includes end-to-end validation assets for running Codex through the proxy:

- Test suite: `CODEX_TEST_SUITE.md`
- Public sanitized report (latest committed snapshot): `CODEX_REPORT_v0.1.1.md` (sanitized; excludes local machine identifiers)
- Unit tests: `npm run test:unit`

Notes:

- Interactive runs require a real TTY (`codex`).
- For automation/non-TTY environments, prefer `codex exec`.
## Versioning Policy

This repo follows small, safe patch increments while stabilizing provider compatibility:

- Keep patch bumps only within the `0.1.x` line.
- No big jumps unless strictly necessary.

(See CHANGELOG.md for details once present.)
## License

Copyright (c) 2026 WellaNet.Dev. See the MIT LICENSE for details.

Made in Italy 🇮🇹
