# @borgius/copilot-proxy
A CLI tool that authenticates with GitHub Copilot and exposes OpenAI-compatible and Anthropic-compatible REST APIs locally. Use any AI SDK or tool that supports OpenAI/Claude APIs and route requests through your existing GitHub Copilot subscription — no additional API keys needed.
## Features

- OpenAI Chat Completions API (`/v1/chat/completions`) — drop-in replacement for OpenAI's chat API
- OpenAI Responses API (`/v1/responses`) — supports the newer Responses API format
- Anthropic Messages API (`/v1/messages`) — drop-in replacement for Claude's messages API
- Model listing (`/v1/models`) — dynamically fetched from GitHub Copilot
- Streaming support — server-sent events (SSE) for all endpoints
- Tool/function calling — pass tools/functions transparently
- Smart model routing — automatically routes models to the correct backend endpoint
- GitHub Enterprise support — works with GHE Server and Data Residency instances
- Custom config file — override the default config path with `--config`
- Cloudflare Worker deployment — can also run as a serverless worker
## Quick Start

```bash
# One command — authenticates if needed, then starts the server
npx @borgius/copilot-proxy

# Or step by step
npx @borgius/copilot-proxy auth
npx @borgius/copilot-proxy serve
```

## Installation

Global install (recommended for regular use):

```bash
npm install -g @borgius/copilot-proxy
# or with bun
bun add -g @borgius/copilot-proxy
```

Use without installing (npx):

```bash
npx @borgius/copilot-proxy auth
npx @borgius/copilot-proxy serve
```

## Commands
### Default (no command) — Auto-auth + serve

```bash
copilot-proxy [options]
```

When run without a subcommand, `copilot-proxy` will:

1. Check if credentials exist in the config file
2. If not authenticated, automatically start the `auth` device flow
3. After authentication (or if already authenticated), start the proxy server

This is the recommended way to use the tool — a single command handles everything.

```bash
# Start on default port 11433
copilot-proxy

# Start on a custom port
copilot-proxy --port 8080

# Custom host + port
copilot-proxy --port 3000 --host 0.0.0.0

# Use a specific config file
copilot-proxy --config ~/work-copilot.json
```

### auth — Authenticate with GitHub Copilot
```bash
copilot-proxy auth
```

Starts an OAuth device flow to authenticate with your GitHub account. Supports:

- GitHub.com (public cloud)
- GitHub Enterprise Server (self-hosted)
- GitHub Enterprise Cloud with Data Residency

The command will:

1. Prompt you to choose GitHub.com or Enterprise
2. For Enterprise, ask for your instance URL (e.g. `company.ghe.com`)
3. Display a verification URL and user code
4. Wait for you to approve the device in your browser
5. Save credentials to `~/.config/copilot-proxy/config.json`
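The wait in step 4 works by repeatedly polling GitHub's token endpoint until the device is approved. As a rough illustration (not copilot-proxy's actual code), the poll responses of GitHub's documented device flow can be classified like this:

```python
def interpret_poll(response: dict) -> str:
    """Classify one token-poll response from GitHub's OAuth device flow.

    GitHub returns either an "access_token", or an "error" such as
    "authorization_pending" (keep polling) or "slow_down" (increase
    the polling interval).
    """
    if "access_token" in response:
        return "done"
    if response.get("error") == "authorization_pending":
        return "retry"
    if response.get("error") == "slow_down":
        return "retry-slower"
    return "fail"  # e.g. expired_token, access_denied
```

The caller loops on `"retry"`, backs off on `"retry-slower"`, and saves the token to the config file on `"done"`.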
### serve — Start the proxy server

```bash
copilot-proxy serve [options]
```

Options:

| Flag | Default | Description |
|---|---|---|
| `-p, --port <port>` | 11433 | Port to listen on |
| `-h, --host <host>` | localhost | Host to bind to |

Examples:

```bash
# Default: http://localhost:11433
copilot-proxy serve

# Custom port
copilot-proxy serve --port 8080

# Listen on all interfaces (Docker/remote)
copilot-proxy serve --port 3000 --host 0.0.0.0
```

## Global Options
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to listen on (default: 11433) |
| `-H, --host <host>` | Host to bind to (default: localhost) |
| `-c, --config <file>` | Path to config file (overrides default) |
| `-V, --version` | Print version |
| `--help` | Show help |

Using `--config` to manage multiple accounts or profiles:

```bash
# Authenticate to a work profile
copilot-proxy --config ~/.config/copilot-proxy/work.json auth

# Serve using that profile
copilot-proxy --config ~/.config/copilot-proxy/work.json serve
```

## API Reference
All endpoints are available at `http://localhost:11433` by default. No API key is required — set any non-empty string if the client requires one.
### OpenAI Chat Completions

`POST /v1/chat/completions`

Drop-in replacement for `https://api.openai.com/v1/chat/completions`.

```bash
curl http://localhost:11433/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```

Streaming:

```bash
curl http://localhost:11433/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}], "stream": true}'
```

With tool/function calling:

```bash
curl http://localhost:11433/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {"type": "object", "properties": {"location": {"type": "string"}}}
      }
    }]
  }'
```

### OpenAI Responses API
`POST /v1/responses`

Required for Codex and O-series models. Also works with standard GPT models.

```bash
curl http://localhost:11433/v1/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "input": "Explain quantum computing"}'
```

### Anthropic Messages API
`POST /v1/messages`

Drop-in replacement for `https://api.anthropic.com/v1/messages`.

```bash
curl http://localhost:11433/v1/messages \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

### List Models
`GET /v1/models`

Returns available models from GitHub Copilot in OpenAI format.

```bash
curl http://localhost:11433/v1/models
```

## Supported Models

Models are fetched dynamically from GitHub Copilot. Availability depends on your plan.
| Model | Provider | Endpoint |
|---|---|---|
| `gpt-4o`, `gpt-4o-mini` | OpenAI | Chat |
| `gpt-4.1`, `gpt-4.5` | OpenAI | Chat |
| `gpt-5`, `gpt-5-mini` | OpenAI | Both |
| `o1`, `o3`, `o4-mini` | OpenAI | Responses only |
| `gpt-5.1-codex`, `gpt-5.2-codex` | OpenAI | Responses only |
| `claude-sonnet-4.5` | Anthropic | Chat |
| `claude-sonnet-4` | Anthropic | Chat |
| `claude-opus-4.5` | Anthropic | Chat |
| `claude-haiku-4.5` | Anthropic | Chat |
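The "Endpoint" column is what smart model routing keys off. The real proxy derives this from capability metadata returned by Copilot, but conceptually the decision looks like the following sketch (the hardcoded sets mirror the table above and are purely illustrative):

```python
# Hypothetical routing table mirroring the model table above.
RESPONSES_ONLY = {"o1", "o3", "o4-mini", "gpt-5.1-codex", "gpt-5.2-codex"}
BOTH = {"gpt-5", "gpt-5-mini"}

def route(model: str, requested: str = "chat") -> str:
    """Pick the backend endpoint ("chat" or "responses") for a model."""
    if model in RESPONSES_ONLY:
        return "responses"
    if model in BOTH:
        return requested  # honor whichever API the client called
    return "chat"         # everything else is chat-only
```

This is why, for example, `o3` works even when a client calls the Chat Completions endpoint: the proxy sends it to the Responses backend regardless.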
## Claude Model Aliases
When using the Anthropic Messages API, legacy model names are automatically remapped:
| Input name | Routed to |
|---|---|
| `claude-3-5-sonnet-20241022` | `claude-sonnet-4.5` |
| `claude-sonnet-4-5-20250929` | `claude-sonnet-4.5` |
| `claude-opus-4-0-20250514` | `claude-opus-4.5` |
| `claude-3-opus-20240229` | `claude-opus-4.5` |
| `claude-3-haiku-20240307` | `gpt-4o-mini` |
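Conceptually the remap is a simple lookup applied before routing; names not in the table pass through unchanged. A sketch of that behavior (the table above expressed as code — illustrative, not the proxy's actual implementation):

```python
# Legacy Claude model names and the models they are served as.
CLAUDE_ALIASES = {
    "claude-3-5-sonnet-20241022": "claude-sonnet-4.5",
    "claude-sonnet-4-5-20250929": "claude-sonnet-4.5",
    "claude-opus-4-0-20250514": "claude-opus-4.5",
    "claude-3-opus-20240229": "claude-opus-4.5",
    "claude-3-haiku-20240307": "gpt-4o-mini",
}

def resolve_model(name: str) -> str:
    """Map a legacy Claude model name to the model actually served."""
    return CLAUDE_ALIASES.get(name, name)
```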
## Using with AI SDKs

### OpenAI SDK (Node.js)
```js
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:11433/v1',
  apiKey: 'not-needed',
});

// Non-streaming
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);

// Streaming
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```

### OpenAI SDK (Python)
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11433/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

### Anthropic SDK (Node.js)
```js
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  baseURL: 'http://localhost:11433',
  apiKey: 'not-needed',
});

const message = await client.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(message.content[0].text);
```

### Anthropic SDK (Python)
```python
import anthropic

client = anthropic.Anthropic(base_url="http://localhost:11433", api_key="not-needed")

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
```

## Other Compatible Tools
| Tool | Configuration |
|---|---|
| Continue (VS Code) | Set `apiBase` to `http://localhost:11433/v1` |
| aider | `aider --openai-api-base http://localhost:11433/v1 --openai-api-key any` |
| LiteLLM | Use the `openai/` prefix with `api_base=http://localhost:11433/v1` |
| Open WebUI | Add a custom OpenAI endpoint: `http://localhost:11433/v1` |
| Cursor | Configure an OpenAI-compatible provider with the local URL |
## Configuration

Default config: `~/.config/copilot-proxy/config.json`

GitHub.com:

```json
{
  "auth": {
    "type": "oauth",
    "provider": "github-copilot",
    "accessToken": "ghu_...",
    "refreshToken": "ghr_...",
    "expiresAt": 1777777777
  }
}
```

GitHub Enterprise:
```json
{
  "auth": {
    "type": "oauth",
    "provider": "github-copilot-enterprise",
    "accessToken": "ghu_...",
    "refreshToken": "ghr_...",
    "expiresAt": 1777777777,
    "enterpriseUrl": "company.ghe.com"
  }
}
```

## Development
### Requirements

- Bun >= 1.0.0
- Node.js >= 18.0.0

### Setup

```bash
git clone https://github.com/borgius/copilot-proxy
cd copilot-proxy
bun install
```

### Scripts
| Command | Description |
|---|---|
| `bun run build` | Build for production |
| `bun run dev` | Dev mode (watch + auto-restart) |
| `bun run typecheck` | TypeScript type checking |
| `bun run test` | Unit tests |
| `bun run test:integration` | Integration tests |
| `bun run release` | Build and publish to npm |
| `bun run release:dry` | Dry-run publish (no upload) |
### Publishing to npm

```bash
# Log in to npm (once)
npm login

# Build and publish
bun run release

# Or test first with a dry run
bun run release:dry
```

## Cloudflare Worker Deployment

The proxy can run as a Cloudflare Worker for serverless operation.

```bash
bun run deploy            # Deploy to production
bun run deploy:dev        # Deploy to development
bun run test:cloudflare   # Test the deployed worker
```

Configure `wrangler.toml` with your Cloudflare account details before deploying.
## How It Works

- **Authentication** — uses GitHub's OAuth device flow to obtain a Copilot-scoped token
- **Token refresh** — access tokens are automatically refreshed before expiry
- **API translation** — incoming OpenAI/Anthropic requests are translated to GitHub Copilot's internal format
- **Model routing** — models are routed to the correct Copilot endpoint (chat vs. responses) based on their capabilities
- **Streaming** — SSE streams are proxied transparently, preserving chunk boundaries
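"Preserving chunk boundaries" matters because the upstream network may split an SSE event across TCP reads. A proxy must buffer partial events and only emit complete ones. A minimal sketch of that reassembly (illustrative only, not copilot-proxy's actual code):

```python
def iter_sse_events(chunks):
    """Reassemble raw byte chunks into whole SSE events.

    SSE events are delimited by a blank line; partial events are
    buffered until complete, so a downstream client never sees a
    torn event even if the upstream split one across reads.
    """
    buf = b""
    for chunk in chunks:
        buf += chunk
        while b"\n\n" in buf:
            event, buf = buf.split(b"\n\n", 1)
            yield event + b"\n\n"

# A chunk split mid-event is held until the rest arrives:
events = list(iter_sse_events([b"data: one\n\nda", b"ta: two\n\n"]))
```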
## License
MIT
