# codexapiuse

v0.0.3
Terminal-first manager for multiple ChatGPT Codex OAuth accounts, exposed through a simple local OpenAI-compatible API.
This is intentionally simple: no auto-rotation, no quota-aware routing, no clever switching. You choose the account, model, and reasoning level by choosing the model ID.
## Install

```
npm install -g codexapiuse
```

Both commands are available:

```
codexapiuse help
cau help
```

## Quick start
Guided setup:

```
cau quickstart
```

Manual setup:
```
cau add work
cau add personal
cau login work
cau login personal
cau serve bg
cau status
cau models
```

Then configure any OpenAI-compatible client:

- Base URL: `http://127.0.0.1:3145/v1`
- API key: anything, unless `CODEXAPIUSE_API_KEY` is set
- Model: `work-gpt-5.5-medium`

To use another account, choose its model alias explicitly:

```
personal-gpt-5.5-medium
```

There is no automatic rotation. If `work` is low on quota, choose a `personal-*` model yourself.
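With those settings, any HTTP client works. A minimal standard-library sketch (the helper name is illustrative, not part of codexapiuse; the actual send is left commented out so the snippet does not require a running server):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:3145/v1"  # default codexapiuse address

def build_chat_request(model: str, prompt: str, api_key: str = "anything"):
    """Build a Chat Completions request for the local gateway.

    The model ID selects the account: e.g. "work-gpt-5.5-medium" vs
    "personal-gpt-5.5-medium". There is no rotation, so switching
    accounts means changing this string yourself.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # Any key works unless CODEXAPIUSE_API_KEY is set on the server.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# With the server running (`cau serve bg`):
# req = build_chat_request("work-gpt-5.5-medium", "Hello")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```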
## Guided quickstart behavior

`cau quickstart`:

- Creates or loads `~/.codexapiuse/accounts.json`.
- Shows existing accounts, if any.
- Asks whether to add accounts.
- Asks whether to log in to each account, one by one.
- Prints skipped login commands, like `cau login work`.
- Asks whether to start the local API server in the background.
- Prints client settings and useful commands.

Accepted logins run sequentially, so finish one browser OAuth flow before starting the next.
## Model IDs

For every logged-in account, codexapiuse exposes account-name aliases:

```
<account-name>-<codex-model>-<reasoning>
```

Example for an account named `work`:

```
work-gpt-5.5-low
work-gpt-5.5-medium
work-gpt-5.5-high
```

Reasoning aliases follow pi's Codex provider: `minimal`, `low`, `medium`, `high`, and `xhigh` where the upstream model supports it.
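The alias pattern is purely mechanical, which a small illustration makes concrete (the helper is mine, not part of codexapiuse):

```python
# Illustrative helper (not a codexapiuse API): enumerate the model
# aliases the gateway would expose for one logged-in account.
REASONING_LEVELS = ["minimal", "low", "medium", "high", "xhigh"]

def aliases(account: str, codex_model: str, levels=REASONING_LEVELS):
    """Build <account>-<codex-model>-<reasoning> IDs per the pattern above."""
    return [f"{account}-{codex_model}-{level}" for level in levels]

print(aliases("work", "gpt-5.5", ["low", "medium", "high"]))
# ['work-gpt-5.5-low', 'work-gpt-5.5-medium', 'work-gpt-5.5-high']
```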
## API endpoints

Base URL:

```
http://127.0.0.1:3145/v1
```

Endpoints:

```
GET  /v1/models
POST /v1/chat/completions
POST /v1/responses
```

Both streaming and non-streaming Chat Completions are supported. `/v1/responses` is also supported for Factory/Droid OpenAI-compatible custom models.
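Streamed Chat Completions arrive as OpenAI-style server-sent events: each line is `data: {json chunk}` and the stream ends with `data: [DONE]`. A rough sketch of pulling text deltas out of one line (the chunk shape follows the standard OpenAI streaming format; the helper is my own):

```python
import json

def delta_text(sse_line: str):
    """Extract the content delta from one streamed Chat Completions line.

    Returns None for non-data lines, the [DONE] terminator, or chunks
    without a text delta (e.g. role-only or finish chunks).
    """
    if not sse_line.startswith("data: "):
        return None
    body = sse_line[len("data: "):]
    if body.strip() == "[DONE]":
        return None
    chunk = json.loads(body)
    return chunk["choices"][0]["delta"].get("content")

line = 'data: {"choices": [{"delta": {"content": "Hel"}}]}'
print(delta_text(line))  # Hel
```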
## Factory AI Droid

Short version:

- Provider: OpenAI / OpenAI-compatible
- Base URL: `http://127.0.0.1:3145/v1`
- API key: anything
- Model: `work-gpt-5.5-medium`
- Image support: enabled / `noImageSupport=false`

## Commands
```
cau help                      Show help
cau quickstart                Guided setup
cau add <name>                Add a named Codex account slot
cau login [id|name]           Log in the selected account through ChatGPT OAuth
cau list                      List accounts and current Codex usage
cau models                    Print logged-in model IDs only
cau serve [--host --port]     Start the local API server in the foreground
cau serve bg [--host --port]  Start the API server in the background
cau status                    Show background server status
cau stop                      Stop the background server
cau doctor                    Check config, aliases, and server health
cau config                    Create/migrate config and print the accounts.json path
cau remove <id|name>          Remove an account from local config
cau limits                    Alias for list
```

## More docs
### Token refresh and storage
codexapiuse stores OAuth refresh tokens and automatically refreshes an account's token when it has expired. It also performs one refresh-and-retry when Codex returns a 401/403. It does not switch accounts automatically.
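The refresh-and-retry policy can be sketched as follows; `send` and `refresh_token` are hypothetical stand-ins, not codexapiuse internals:

```python
def call_with_refresh(send, refresh_token):
    """Send a request once; on 401/403, refresh the token and retry once.

    This only illustrates the policy: exactly one refresh and one retry,
    never a switch to a different account.
    """
    status, body = send()
    if status in (401, 403):
        refresh_token()        # one refresh...
        status, body = send()  # ...and exactly one retry
    return status, body

# Simulate an expired token: first call gets 401, then succeeds after refresh.
state = {"refreshed": False}

def fake_send():
    return (200, "ok") if state["refreshed"] else (401, "expired")

def fake_refresh():
    state["refreshed"] = True

print(call_with_refresh(fake_send, fake_refresh))  # (200, 'ok')
```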
By default, tokens and routing config are stored in:

```
~/.codexapiuse/accounts.json
```

The config file is written with `0600` permissions, but it contains OAuth refresh tokens. Treat it like a password file.
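Because the file holds refresh tokens, a quick permissions check can be reassuring. A standard-library sketch (the helper is illustrative, not a codexapiuse command):

```python
import os
import stat

def is_private(path: str) -> bool:
    """True if the file grants no group/other permissions (e.g. 0600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# Example check against the default config location:
# is_private(os.path.expanduser("~/.codexapiuse/accounts.json"))
```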
### Notes

- OAuth uses ChatGPT/Codex OAuth with a localhost callback on port `1455`.
- For `/v1/responses`, `prompt_cache_key` is forwarded both in the body and as Codex session headers, so upstream prompt caching can work.
- Reasoning usage and reasoning summary events are forwarded when Codex emits them.
