@pointgroup/copilot-api v1.1.0
# Copilot API Proxy
Turn GitHub Copilot into an OpenAI/Anthropic/Gemini API compatible server. Usable with Claude Code, Codex, OpenCode, and more.
## Installation

### Run directly (no install)

```shell
bunx @pointgroup/copilot-api start -M
npx @pointgroup/copilot-api start -M
```

### Global install

```shell
npm install -g @pointgroup/copilot-api
```

### From source

```shell
git clone https://github.com/pointgroup-labs/copilot-api.git
cd copilot-api
bun install
bun run build
bun link
```

## Authentication
### First-time setup (interactive)

```shell
copilot-api auth
```

This opens the GitHub device flow; enter the code at github.com to authorize. Use `--show-token` to print the token for reuse:

```shell
copilot-api auth --show-token
```

### Using a saved token

```shell
copilot-api start -g YOUR_GITHUB_TOKEN
```

## Quick Start
```shell
# Start with defaults (port 4141)
copilot-api start

# Native Claude Messages API (recommended for Claude models)
copilot-api start -M

# With a saved GitHub token + native messages
copilot-api start -g YOUR_GITHUB_TOKEN -M

# Custom port
copilot-api start -p 8080

# Business account
copilot-api start -a business

# Enterprise account
copilot-api start -a enterprise
```

## CLI Reference
### Commands
| Command | Description |
|:--------|:------------|
| `copilot-api auth` | Run the GitHub auth flow |
| `copilot-api start` | Start the API server |
| `copilot-api check-usage` | Show current Copilot quota |
| `copilot-api debug` | Print debug information |
### Start Options
| Flag | Short | Default | Description |
|:-----|:------|:--------|:------------|
| `--port` | `-p` | 4141 | Port to listen on |
| `--verbose` | `-v` | | Enable verbose logging |
| `--account-type` | `-a` | `individual` | Account type (`individual`, `business`, `enterprise`) |
| `--github-token` | `-g` | | Provide a GitHub token directly |
| `--native-messages` | `-M` | | Use Copilot's native `/v1/messages` for Claude models |
| `--force-agent` | `-F` | | Smart agent: auto-switch to agent mode when over quota |
| `--claude-code` | `-c` | | Generate a Claude Code launch command |
| `--rate-limit` | `-r` | | Rate limit in seconds between requests |
| `--wait` | `-w` | | Wait instead of erroring when rate limited |
| `--manual` | | | Require manual approval of each request |
| `--show-token` | | | Show tokens on fetch/refresh |
| `--proxy-env` | | | Initialize proxy settings from environment variables |
## API Endpoints
| Endpoint | Format | Description |
|:---------|:-------|:------------|
| `POST /v1/messages` | Anthropic | Claude Messages API |
| `POST /v1/messages/count_tokens` | Anthropic | Token counting |
| `POST /v1/chat/completions` | OpenAI | Chat Completions API |
| `POST /v1beta/models/{model}:generateContent` | Gemini | Gemini API |
| `POST /v1/responses` | OpenAI | Responses API |
| `GET /v1/models` | OpenAI | List models with capabilities |
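Because these endpoints follow the standard wire formats, any stock client can talk to the proxy. As a minimal sketch using only the Python standard library, here is how a Chat Completions request to the proxy would be assembled (the URL assumes the default port 4141, and the model name is illustrative; use `GET /v1/models` to see what is actually available):

```python
import json
import urllib.request

# Standard OpenAI-style Chat Completions payload; "gpt-5-mini" is an
# illustrative model name, not a guaranteed one.
payload = {
    "model": "gpt-5-mini",
    "messages": [{"role": "user", "content": "Say hello"}],
}

req = urllib.request.Request(
    "http://localhost:4141/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # A placeholder key, mirroring the ANTHROPIC_API_KEY=anything
        # example in the Claude Code section.
        "Authorization": "Bearer anything",
    },
    method="POST",
)

print(req.get_method(), req.full_url)
# prints: POST http://localhost:4141/v1/chat/completions

# With the server running, urllib.request.urlopen(req) would return an
# OpenAI-format chat completion response.
```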
## Usage with Claude Code
```shell
# Generate the launch command automatically
copilot-api start -M -c

# Or configure manually
ANTHROPIC_BASE_URL=http://localhost:4141 \
ANTHROPIC_API_KEY=anything \
claude
```

## Configuration
Config file location: `~/.local/share/copilot-api/config.json`
```json
{
  "extraPrompts": {},
  "smallModel": "gpt-5-mini",
  "compactUseSmallModel": true,
  "useFunctionApplyPatch": true,
  "modelReasoningEfforts": {
    "gpt-5-mini": "low",
    "claude-opus-4.6": "xhigh"
  },
  "webSearchModel": "claude-sonnet-4"
}
```

| Option | Type | Default | Description |
|:-------|:-----|:--------|:------------|
| `extraPrompts` | `Record<string, string>` | | Model-specific additions to the system prompt |
| `smallModel` | `string` | `gpt-5-mini` | Model for warmup/compact requests |
| `compactUseSmallModel` | `boolean` | `true` | Use the small model for compact/summarization requests |
| `useFunctionApplyPatch` | `boolean` | `true` | Convert the custom `apply_patch` tool to a function-type tool |
| `modelReasoningEfforts` | `Record<string, string>` | | Per-model reasoning effort levels |
| `webSearchModel` | `string` | (disabled) | Model to use when web search tools are detected (e.g. `claude-sonnet-4`); saves premium quota on search-heavy requests |
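For instance, `extraPrompts` maps model IDs to text added to that model's system prompt. A hypothetical fragment (the model ID and prompt text here are made up for illustration):

```json
{
  "extraPrompts": {
    "gpt-5-mini": "Prefer concise answers."
  }
}
```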
## Development
```shell
bun run dev        # Dev server with watch
bun run dev -- -M  # Dev with native messages
bun test           # Run all tests
bun run typecheck  # Type check
bun run lint:all   # Lint
```

## License
MIT
