OpenCode CLI Proxy

Turn your local opencode CLI into an OpenAI-compatible API with a one-command npm launcher.
Overview
OpenCode CLI Proxy exposes your local opencode runtime through OpenAI-compatible endpoints, so tools like Cherry Studio, Cursor, NextChat, and OpenAI-compatible SDKs can use it without custom integrations.
Instead of assuming a remote OpenCode HTTP service, this project talks to the local opencode CLI directly:
- opencode models for model discovery
- opencode run --format json --model ... for completions
- optional --attach support for an existing opencode serve instance
Features
- OpenAI-compatible endpoints
  - GET /v1/models
  - POST /v1/chat/completions
  - POST /v1/completions
- Streaming response support via SSE
- Gateway API key authentication
- Native desktop GUI built with Fyne
- Works with local opencode models
- Model alias mapping via config
Architecture
  OpenAI-compatible client
              |
              v
+---------------------------+
|    OpenCode CLI Proxy     |
|  - auth                   |
|  - request mapping        |
|  - response mapping       |
|  - SSE conversion         |
+---------------------------+
              |
              v
+---------------------------+
|    local opencode CLI     |
|  - opencode models        |
|  - opencode run           |
|  - optional attach server |
+---------------------------+

Requirements
Before using this project, make sure:
- opencode is installed
- opencode models works
- opencode run --model <model> "hello" works
- Go is installed
Example:
opencode models
opencode run --model opencode-go/glm-5.1 "reply with exactly: ok"

If opencode is not in your PATH, use an absolute path in config or in the desktop app.
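To check from Go that the binary resolves the same way a proxy process would see it, exec.LookPath from the standard library performs the PATH lookup. This is an illustrative snippet, not part of the project:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// LookPath resolves "opencode" against PATH, mirroring what a
	// Go process sees when it shells out to the CLI.
	path, err := exec.LookPath("opencode")
	if err != nil {
		fmt.Println("opencode not found in PATH; set an absolute path in the config")
		return
	}
	fmt.Println("opencode resolved to:", path)
}
```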
Installation
Clone the repository
git clone <your-repo-url>
cd opencode-cli-proxy

Install from npm
npm install -g opencode-cli-proxy
opencode-proxy setup
opencode-proxy start

Start from source
make run

Build binaries
make build
make build-desktop

Background service
opencode-proxy install-service
opencode-proxy status

Current support:
- macOS LaunchAgent
- Linux / Windows service helpers: planned
Quick Start
Run the server
make run

Default address:
http://127.0.0.1:18080

Check service status
curl http://127.0.0.1:18080/
curl http://127.0.0.1:18080/health
curl http://127.0.0.1:18080/v1

List models
curl http://127.0.0.1:18080/v1/models \
-H "Authorization: Bearer sk-gw-demo"Chat completion
curl http://127.0.0.1:18080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-gw-demo" \
-d '{
"model": "opencode-go/glm-5.1",
"messages": [
{ "role": "user", "content": "Introduce yourself briefly." }
]
}'

Streaming chat completion
curl -N http://127.0.0.1:18080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-gw-demo" \
-d '{
"model": "opencode-go/glm-5.1",
"stream": true,
"messages": [
{ "role": "user", "content": "Explain Go in three sentences." }
]
}'
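For clients that speak raw HTTP, here is a minimal Go sketch that consumes the SSE stream, assuming the default address and the demo gateway key from the example config. It only shows the OpenAI-style wire format; any OpenAI-compatible SDK handles this for you:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader(`{
		"model": "opencode-go/glm-5.1",
		"stream": true,
		"messages": [{"role": "user", "content": "Explain Go in three sentences."}]
	}`)
	req, _ := http.NewRequest("POST", "http://127.0.0.1:18080/v1/chat/completions", body)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer sk-gw-demo")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// OpenAI-style SSE: each event is a "data: {...}" line,
	// terminated by a final "data: [DONE]".
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue
		}
		payload := strings.TrimPrefix(line, "data: ")
		if payload == "[DONE]" {
			break
		}
		fmt.Println(payload) // raw chunk JSON; parse choices[0].delta.content as needed
	}
}
```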
Use in Other Apps

Use the proxy as an OpenAI-compatible provider:

- Base URL: http://127.0.0.1:18080/v1
- API Key: sk-gw-demo
- Model: opencode-go/glm-5.1
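For a quick programmatic sanity check against these settings, here is a minimal Go sketch of a non-streaming call. The structs model only the standard OpenAI response fields read below:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatResponse covers just the slice of the OpenAI-style response we read.
type chatResponse struct {
	Choices []struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
}

func main() {
	payload := []byte(`{
		"model": "opencode-go/glm-5.1",
		"messages": [{"role": "user", "content": "Introduce yourself briefly."}]
	}`)
	req, _ := http.NewRequest("POST", "http://127.0.0.1:18080/v1/chat/completions", bytes.NewReader(payload))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer sk-gw-demo")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}
```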
Cherry Studio
- Provider Type: OpenAI Compatible
- Base URL: http://127.0.0.1:18080/v1
- API Key: sk-gw-demo
- Model: opencode-go/glm-5.1
NextChat
- Custom Endpoint: http://127.0.0.1:18080/v1
- API Key: sk-gw-demo
- Model: opencode-go/glm-5.1
Cursor / OpenAI-compatible clients
- OpenAI Base URL: http://127.0.0.1:18080/v1
- OpenAI API Key: sk-gw-demo
- Model Name: opencode-go/glm-5.1
Desktop App
This project includes a native desktop app for macOS and Windows.
Run it with:
make desktop

Or:

go run -buildvcs=false ./cmd/desktop

Build it with:

make build-desktop

Desktop fields:
- Config file
- Listen host
- Listen port
- Opencode binary
- Attach server
- Gateway API key
- Default model
- Allowed models
- Client Base URL
Configuration
Example config: configs/config.example.yaml
server:
  host: 0.0.0.0
  port: 18080
  read_timeout: 15s
  write_timeout: 0s

upstream:
  binary: opencode
  attach: ""
  timeout: 120s

models:
  opencode-go/glm-5.1: opencode-go/glm-5.1
  opencode-go/glm-5: opencode-go/glm-5

accounts:
  default:
    auth_mode: local
    token: ""

keys:
  sk-gw-demo:
    account: default
    allowed_models:
      - opencode-go/glm-5.1
      - opencode-go/glm-5

mapping:
  temperature:
    target_min: 0
    target_max: 1

rate_limit:
  enabled: false
  rpm: 60
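The mapping.temperature block suggests client temperatures are rescaled into [target_min, target_max]. How the proxy actually maps them is not documented here, but a plausible linear rescale, assuming the OpenAI-style 0-2 input range, looks like this sketch:

```go
package main

import "fmt"

// rescaleTemperature linearly maps a client temperature from its source
// range into the configured target range. The 0-2 source range is an
// assumption based on OpenAI's API, not documented by this project.
func rescaleTemperature(t, srcMin, srcMax, dstMin, dstMax float64) float64 {
	if t < srcMin {
		t = srcMin
	}
	if t > srcMax {
		t = srcMax
	}
	return dstMin + (t-srcMin)/(srcMax-srcMin)*(dstMax-dstMin)
}

func main() {
	// target_min: 0, target_max: 1 from the example config.
	fmt.Println(rescaleTemperature(1.4, 0, 2, 0, 1)) // 0.7
}
```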
Config notes

- server: local bind address and timeouts
- upstream.binary: executable name or absolute path to opencode
- upstream.attach: optional opencode serve endpoint
- models: alias-to-real-model mapping
- keys: gateway API key to allowed-models mapping
Example alias mapping:
models:
  glm-latest: opencode-go/glm-5.1

Then clients can use glm-latest while the proxy runs opencode-go/glm-5.1.
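Internally this amounts to a map lookup before the CLI is invoked. A minimal sketch of the idea; the pass-through behavior for unmapped names is an assumption, not a documented guarantee:

```go
package main

import "fmt"

// resolveModel maps a client-facing alias to the real opencode model.
// Unmapped names are passed through unchanged (assumed behavior).
func resolveModel(aliases map[string]string, requested string) string {
	if real, ok := aliases[requested]; ok {
		return real
	}
	return requested
}

func main() {
	aliases := map[string]string{"glm-latest": "opencode-go/glm-5.1"}
	fmt.Println(resolveModel(aliases, "glm-latest"))        // opencode-go/glm-5.1
	fmt.Println(resolveModel(aliases, "opencode-go/glm-5")) // opencode-go/glm-5
}
```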
Supported Routes
Public routes:
- GET /
- GET /health
- GET /v1
Authenticated routes:
- GET /v1/models
- POST /v1/chat/completions
- POST /v1/completions
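A minimal sketch of this public/authenticated split using only net/http. The names here are illustrative, not the project's actual identifiers (see internal/server/router.go for the real routing):

```go
package main

import (
	"net/http"
	"strings"
)

// requireKey wraps authenticated routes with a gateway API key check.
func requireKey(validKeys map[string]bool, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		if !validKeys[token] {
			http.Error(w, `{"error": "invalid api key"}`, http.StatusUnauthorized)
			return
		}
		next(w, r)
	}
}

func main() {
	keys := map[string]bool{"sk-gw-demo": true}
	mux := http.NewServeMux()

	// Public route: no authentication required.
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Authenticated route: gateway key required.
	mux.HandleFunc("/v1/models", requireKey(keys, func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"object": "list", "data": []}`))
	}))

	http.ListenAndServe("127.0.0.1:18080", mux)
}
```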
How It Works
For POST /v1/chat/completions:
- The client sends an OpenAI-style request
- The proxy validates the gateway API key
- The request is mapped into an internal chat request
- The proxy calls the local opencode CLI
- Output is parsed and converted back into OpenAI-style JSON or SSE
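In sketch form, the core of step 4 is shelling out to the CLI and collecting its JSON output. This is an illustration of the flow, not the project's actual code; the flags mirror the opencode run invocation described in the overview:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runUpstream invokes the local opencode CLI for a single prompt and
// returns its raw JSON output, which the proxy would then map into an
// OpenAI-style response (or SSE chunks when streaming).
func runUpstream(binary, model, prompt string) (string, error) {
	cmd := exec.Command(binary, "run", "--format", "json", "--model", model, prompt)
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("opencode run failed: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := runUpstream("opencode", "opencode-go/glm-5.1", "reply with exactly: ok")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(out)
}
```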
Project Structure
cmd/
  server/     HTTP server entry
  desktop/    Fyne desktop app
configs/
  config.example.yaml
internal/
  adapter/    OpenAI request/response mapping
  app/        Gateway lifecycle
  config/     Config loading and validation
  domain/     Shared protocol types
  openai/     HTTP handlers
  server/     Router and middleware
  upstream/   Local opencode CLI integration

FAQ
Does this proxy call a remote OpenCode HTTP API?
No. The current implementation primarily uses the local opencode CLI.
Can I use it with Cursor, Cherry Studio, or NextChat?
Yes. Point them to the proxy's OpenAI-compatible /v1 endpoint.
Does it support streaming?
Yes. Streaming responses are exposed as SSE in an OpenAI-compatible format.
Do I need to expose my real upstream credentials to clients?
No. Clients only use the gateway API key configured in this proxy.
Limitations
Current MVP limitations:
- Request mapping is still text-oriented for opencode run
- Message/role handling is intentionally simple for now
- /v1/models directly reflects local opencode models
- accounts is currently kept mainly for config structure compatibility
- No full audit, metrics, or production rate limiting yet
- No Docker release flow yet
Development
make run
make desktop
make build
make build-desktop
make test

Core files:

- cmd/server/main.go
- cmd/desktop/main.go
- internal/app/gateway.go
- internal/openai/handlers.go
- internal/upstream/client.go
- internal/adapter/chat_mapper.go
- internal/server/router.go
Roadmap
- [ ] Better message and role mapping
- [ ] Richer streaming event conversion
- [ ] Better error mapping
- [ ] Better model alias and default-model management
- [ ] Request logging and audit support
- [ ] Token usage reporting
- [ ] Docker packaging
- [ ] More OpenAI-compatible endpoints
Release
Build release binaries for npm distribution:
make release-dist

Create the npm package tarball locally:

make npm-pack

Release artifacts are written to dist/ with these filenames:

- opencode-cli-proxy-darwin-arm64
- opencode-cli-proxy-darwin-amd64
- opencode-cli-proxy-linux-amd64
- opencode-cli-proxy-windows-amd64.exe
- checksums-v<version>.txt
Upload those binaries to the GitHub release whose tag matches the version in package.json, for example v0.1.0.
License
No LICENSE file is included yet. Add one before public open-source release.
