ghc-tunnel
v1.0.6
GitHub Copilot API Proxy — exposes standard OpenAI and Anthropic compatible endpoints so any tool (including Claude Code) can use GitHub Copilot models.
Quick Start
# Run directly (Node.js 18+ required)
npx ghc-tunnel
# Or install globally
npm install -g ghc-tunnel
ghc-tunnel
# Interactive setup (configures models + Claude Code settings)
ghc-tunnel --setup
# Update Claude Code settings only
ghc-tunnel --setup --claudecode

On first run, the proxy initiates GitHub Device Flow authentication if no GITHUB_TOKEN is set.
Features
- OpenAI-compatible `/v1/chat/completions` and `/v1/responses` endpoints (with Codex adapters: `apply_patch` tool, `X-Initiator`, context compaction)
- Anthropic-compatible `/v1/messages` endpoint (direct or translated)
- Codex config auto-repair — fills missing keys in `~/.codex/config.toml` on startup
- Automatic model name translation via configurable mappings
- Streaming support (SSE) for all endpoints
- Request cache with analytics dashboard
- Retry with backoff for upstream connection errors
- Content filtering (system prompt manipulation, tool result cleaning)
- Token management with automatic refresh
CLI Options
ghc-tunnel [options]
-s, --setup Interactive setup wizard (configure models + Claude Code)
--claudecode Update Claude Code settings only (use with --setup)
-d, --default Use defaults for setup and Codex config prompts
-p, --port <port> Port to listen on (default: 8314)
-a, --address <addr> Address to listen on (default: 127.0.0.1)
-c, --config Generate default config file
-v, --version Show version
-h, --help Show help

Claude Code Integration
Run ghc-tunnel --setup --claudecode or manually configure ~/.claude/settings.json:
{
"env": {
"ANTHROPIC_BASE_URL": "http://127.0.0.1:8314/",
"ANTHROPIC_AUTH_TOKEN": "dummy",
"ANTHROPIC_MODEL": "claude-opus-4-7[1m]",
"ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-opus-4-7[1m]"
}
}

Codex Integration
ghc-tunnel exposes the OpenAI Responses API (/v1/responses) with Codex-specific adapters so the Codex CLI can drive Copilot models directly.
On startup ghc-tunnel automatically repairs ~/.codex/config.toml — if any of model, model_reasoning_effort, personality, model_provider, or the [model_providers.ghc-tunnel] block is missing (or the block is partial), the missing pieces are filled in. When model or model_reasoning_effort is absent it prompts interactively (defaults: gpt-5.5 / high, xhigh also valid). Use -d / --default to skip prompts and accept defaults.
ghc-tunnel --setup forces a full rewrite of those keys (still prompting for model + reasoning unless -d / --default is passed).
Resulting config:
model = "gpt-5.5"
model_reasoning_effort = "high"
personality = "pragmatic"
model_provider = "ghc-tunnel"
[model_providers.ghc-tunnel]
name = "GHC TUNNEL"
base_url = "http://127.0.0.1:8314/v1"
wire_api = "responses"

Codex-specific behavior on /v1/responses:
- The custom `apply_patch` tool is rewritten as an OpenAI function tool with a JSON schema (Copilot rejects `type: "custom"`).
- The `X-Initiator` header is derived from the last input item's role so resumed tool turns aren't billed as fresh user interactions.
- `type: "compaction"` markers in the input array cause everything before the latest one to be trimmed.
- `service_tier` is nulled (Copilot rejects it).
- Unsupported tool types (`web_search`, `image_generation`) are stripped to avoid 400 errors.
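The compaction rule above can be sketched as follows (the function name and exact trimming semantics are illustrative, not the proxy's actual code):

```python
def trim_at_compaction(items):
    """Drop every input item before the latest {"type": "compaction"} marker."""
    last = -1
    for i, item in enumerate(items):
        if item.get("type") == "compaction":
            last = i
    # No marker present: pass the input through unchanged.
    return items if last == -1 else items[last:]

history = [
    {"type": "message", "role": "user", "content": "old turn"},
    {"type": "compaction"},
    {"type": "message", "role": "user", "content": "current turn"},
]
print(trim_at_compaction(history))  # everything before the marker is gone
```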
Configuration
Config file: ~/.ghc-tunnel/config.yaml (generated on first run or with --config).
See docs/configuration.md for full reference.
Config Sync and OneDrive
ghc-api can manage and sync these files:
- Claude Code: `~/.claude/settings.json`
- Codex: `~/.codex/config.toml`
- ghc-api: `~/.ghc-api/config.yaml` (or `%APPDATA%/ghc-api/config.yaml` on Windows)
OneDrive detection priority:
- `~/OneDrive`, then `~/OneDrive - *`
- In WSL: `/mnt/c/Users/<username>/OneDrive`, then `/mnt/c/Users/<username>/OneDrive - *`
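That priority order can be sketched as follows (the function name is illustrative and the real detection may differ; in WSL you would pass the Windows home directory, e.g. under /mnt/c/Users, as `home`):

```python
from pathlib import Path

def find_onedrive_root(home):
    """Return the first matching OneDrive folder per the priority above, or None."""
    # Personal OneDrive first
    plain = home / "OneDrive"
    if plain.is_dir():
        return plain
    # Then business-style "OneDrive - <Org>" folders (sorted for determinism)
    candidates = sorted(p for p in home.glob("OneDrive - *") if p.is_dir())
    return candidates[0] if candidates else None
```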
To disable all OneDrive-dependent operations, set disable_onedrive_access: true in config.yaml.
When this flag is set, ghc-api skips OneDrive detection, config sync actions, and shared OneDrive hash reads.
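For example (only the `disable_onedrive_access` key is documented here; the surrounding file may contain other settings):

```yaml
# config.yaml — turn off all OneDrive-dependent behavior
disable_onedrive_access: true
```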
Sync target folder: `.ghc-api/configSync` under the detected OneDrive root
Machine folder: `.ghc-api/agents/{hostname}_{os}` where `os` is `Win`, `Linux`, or `WSL`
Hash files: `.ghc-api/configSync/config.sha1` and `.ghc-api/agents/{hostname}_{os}/ghc-api/config.sha1`
Hashes are recalculated when the local config file's timestamp is newer than the hash file's. On startup, ghc-api checks the synced files and prints config differences to stdout (with a UI indicator when they differ).
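The timestamp-gated recalculation can be sketched like this (function name and file handling are illustrative assumptions, not ghc-api's actual code):

```python
import hashlib
from pathlib import Path

def refresh_hash(config_file, hash_file):
    """Recompute the config's sha1 only when the config is newer than the hash file."""
    if hash_file.exists() and hash_file.stat().st_mtime >= config_file.stat().st_mtime:
        # Stored hash is up to date; reuse it.
        return hash_file.read_text().strip()
    digest = hashlib.sha1(config_file.read_bytes()).hexdigest()
    hash_file.write_text(digest + "\n")
    return digest
```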
Token Usage Logging
Every 5 minutes, ghc-api writes the token usage delta (if non-zero) to:
- OneDrive mode: `.ghc-api/agents/{hostname}_{os}/token_usage.jl`
- Fallback when OneDrive is unavailable: `~/.ghc-api/token_usage.jl`
Also flushes pending usage on shutdown (Ctrl+C/termination/normal exit).
Each JSONL line includes:
- `timestamp` (unix seconds)
- a `models` list with: `model`, `request_count`, `input_tokens`, `output_tokens`, `total_tokens`
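Using the fields above, a small aggregation over the log is straightforward (the helper and sample values below are illustrative; only the field names come from the format documented here):

```python
import json
from collections import Counter

def total_tokens_by_model(jsonl_lines):
    """Sum total_tokens per model across token_usage.jl lines."""
    totals = Counter()
    for line in jsonl_lines:
        record = json.loads(line)
        for entry in record["models"]:
            totals[entry["model"]] += entry["total_tokens"]
    return dict(totals)

sample = [
    '{"timestamp": 1700000000, "models": [{"model": "gpt-4o", "request_count": 2, '
    '"input_tokens": 120, "output_tokens": 80, "total_tokens": 200}]}',
    '{"timestamp": 1700000300, "models": [{"model": "gpt-4o", "request_count": 1, '
    '"input_tokens": 50, "output_tokens": 50, "total_tokens": 100}]}',
]
print(total_tokens_by_model(sample))  # {'gpt-4o': 300}
```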
Request File Logging
When save_request_to_file: true, ghc-api appends each completed request to:
<ghc-api config dir>/requests/YYYY-MM-DD.jl
The saved .jl line format is the same as dashboard export (/api/requests/export) and can be imported by dashboard import (/api/requests/import).
Code Agent Interaction
The Code Agent page (/agent) provides a web interface for interacting with AI coding agents via the Agent Client Protocol (ACP). Supported agents:
| Agent | Package | Install |
|-------|---------|---------|
| Claude Code | @agentclientprotocol/claude-agent-acp | npm install -g @agentclientprotocol/claude-agent-acp |
| Codex | codex-acp | Download from GitHub releases |
| Copilot CLI | @github/copilot | npm install -g @github/copilot |
Agent binaries are resolved in order: environment variable override (CLAUDE_ACP_BINARY, CODEX_ACP_BINARY, COPILOT_CLI_BINARY), then PATH lookup, then npm global packages.
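The first two resolution steps can be sketched as follows (the function name is illustrative; the final npm-global-package step is omitted because the lookup mechanism isn't specified here):

```python
import os
import shutil

def resolve_agent_binary(env_var, binary_name):
    """Resolve an agent binary: env-var override first, then PATH lookup."""
    override = os.environ.get(env_var)
    if override:
        return override
    # Falls back to PATH; the npm global package lookup would come last.
    return shutil.which(binary_name)
```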
Session data is stored in:
- OneDrive mode: `.ghc-api/agents/{hostname}_{os}/sessions/`
- Fallback: `~/.ghc-api/sessions/` (or `%APPDATA%/ghc-api/sessions/` on Windows)
Recent working directories are persisted to workdirs.json in the same location. Sessions from other machines are browsable via the machine selector dropdown when OneDrive is enabled.
API Endpoints
| Endpoint | Description |
|----------|-------------|
| POST /v1/chat/completions | OpenAI chat completions |
| POST /v1/responses | OpenAI responses API |
| GET /v1/models | List available models |
| POST /v1/messages | Anthropic messages API |
| GET / | Web dashboard |
| GET /requests | Request browser |
Example Usage
OpenAI SDK
from openai import OpenAI
client = OpenAI(
base_url="http://127.0.0.1:8314/v1",
api_key="not-needed"
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Hello!"}]
)

Anthropic SDK
import anthropic
client = anthropic.Anthropic(
base_url="http://127.0.0.1:8314",
api_key="not-needed"
)
message = client.messages.create(
model="claude-sonnet-4",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello!"}]
)

cURL
curl http://127.0.0.1:8314/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'

Documentation
- Architecture — system design and data flow
- API Reference — all HTTP endpoints
- Configuration — config file, env vars, CLI options
License
MIT
