# ghc-tunnel
v1.0.2
GitHub Copilot API Proxy — exposes standard OpenAI- and Anthropic-compatible endpoints so any tool (including Claude Code) can use GitHub Copilot models.
## Quick Start

```bash
# Run directly (Node.js 18+ required)
npx ghc-tunnel

# Or install globally
npm install -g ghc-tunnel
ghc-tunnel

# Interactive setup (configures models + Claude Code settings)
ghc-tunnel --setup

# Update Claude Code settings only
ghc-tunnel --setup --claudecode
```

On first run, the proxy initiates GitHub Device Flow authentication if no `GITHUB_TOKEN` is set.
## Features

- OpenAI-compatible `/v1/chat/completions` and `/v1/responses` endpoints
- Anthropic-compatible `/v1/messages` endpoint (direct or translated)
- Automatic model name translation via configurable mappings
- Streaming support (SSE) for all endpoints
- Request cache with analytics dashboard
- Retry with backoff for upstream connection errors
- Content filtering (system prompt manipulation, tool result cleaning)
- Token management with automatic refresh
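Model name translation, conceptually, is a lookup against the configured mappings with a pass-through fallback. The sketch below is our own illustration of that idea — the mapping entries and function name are hypothetical, not ghc-tunnel's actual code or default configuration:

```python
# Illustrative sketch of configurable model-name translation.
# The entries below are HYPOTHETICAL examples, not ghc-tunnel's real
# defaults — the actual mappings live in ~/.ghc-tunnel/config.yaml.
MODEL_MAPPINGS = {
    "claude-3-5-haiku": "gpt-4o-mini",
    "claude-opus-4-6": "claude-sonnet-4",
}

def translate_model(requested: str) -> str:
    """Return the upstream model name for a requested model name."""
    # Unknown names pass through unchanged rather than erroring out.
    return MODEL_MAPPINGS.get(requested, requested)

print(translate_model("claude-3-5-haiku"))  # -> gpt-4o-mini
print(translate_model("gpt-4o"))            # -> gpt-4o (no mapping; passed through)
```

Falling back to the requested name keeps the proxy usable with models that have no explicit mapping.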
## CLI Options

```
ghc-tunnel [options]

  -s, --setup            Interactive setup wizard (configure models + Claude Code)
      --claudecode       Update Claude Code settings only (use with --setup)
  -p, --port <port>      Port to listen on (default: 8314)
  -a, --address <addr>   Address to listen on (default: localhost)
  -c, --config           Generate default config file
  -v, --version          Show version
  -h, --help             Show help
```

## Claude Code Integration

Run `ghc-tunnel --setup --claudecode` or manually configure `~/.claude/settings.json`:
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:8314/",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "claude-opus-4-6[1m]",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-sonnet-4-6"
  }
}
```

## Configuration
Config file: `~/.ghc-tunnel/config.yaml` (generated on first run or with `--config`).
See `docs/configuration.md` for the full reference.
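To give a feel for the shape of the file, here is a hypothetical fragment — the key names are illustrative assumptions only; the real schema is defined in `docs/configuration.md`:

```yaml
# HYPOTHETICAL fragment — key names are illustrative, not ghc-tunnel's
# actual schema. Consult docs/configuration.md for the real keys.
port: 8314
address: localhost
model_mappings:
  claude-3-5-haiku: gpt-4o-mini
```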
## API Endpoints

| Endpoint | Description |
|----------|-------------|
| `POST /v1/chat/completions` | OpenAI chat completions |
| `POST /v1/responses` | OpenAI responses API |
| `GET /v1/models` | List available models |
| `POST /v1/messages` | Anthropic messages API |
| `GET /` | Web dashboard |
| `GET /requests` | Request browser |
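All three POST endpoints support SSE streaming through the proxy, so `stream=True` works exactly as it does against the upstream APIs. A minimal sketch with the OpenAI Python SDK, assuming the proxy is running on the default port; the `extract_delta` helper is our own illustration, not part of ghc-tunnel:

```python
def extract_delta(chunk) -> str:
    """Pull the incremental text out of one streamed chat-completion chunk."""
    if chunk.choices and chunk.choices[0].delta.content:
        return chunk.choices[0].delta.content
    return ""

if __name__ == "__main__":
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="http://localhost:8314/v1", api_key="not-needed")
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,  # the proxy relays upstream SSE chunks as they arrive
    )
    for chunk in stream:
        print(extract_delta(chunk), end="", flush=True)
    print()
```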
## Example Usage

### OpenAI SDK

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8314/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

### Anthropic SDK
```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:8314",
    api_key="not-needed"
)

message = client.messages.create(
    model="claude-sonnet-4",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
```

### cURL
```bash
curl http://localhost:8314/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```

## Documentation
- Architecture — system design and data flow
- API Reference — all HTTP endpoints
- Configuration — config file, env vars, CLI options
## License
MIT
