
@borgius/copilot-proxy

v0.1.1

Published

CLI proxy that authenticates with GitHub Copilot and serves OpenAI/Claude compatible REST API

Downloads

202

Readme

@borgius/copilot-proxy

A CLI tool that authenticates with GitHub Copilot and exposes OpenAI-compatible and Anthropic-compatible REST APIs locally. Use any AI SDK or tool that supports OpenAI/Claude APIs and route requests through your existing GitHub Copilot subscription — no additional API keys needed.

Features

  • OpenAI Chat Completions API (/v1/chat/completions) — drop-in replacement for OpenAI's chat API
  • OpenAI Responses API (/v1/responses) — supports the newer Responses API format
  • Anthropic Messages API (/v1/messages) — drop-in replacement for Claude's messages API
  • Model listing (/v1/models) — dynamically fetched from GitHub Copilot
  • Streaming support — server-sent events (SSE) for all endpoints
  • Tool/function calling — pass tools/functions transparently
  • Smart model routing — automatically routes models to the correct backend endpoint
  • GitHub Enterprise support — works with GHE Server and Data Residency instances
  • Custom config file — override the default config path with --config
  • Cloudflare Worker deployment — can also run as a serverless worker

Quick Start

# One command — authenticates if needed, then starts the server
npx @borgius/copilot-proxy

# Or step by step
npx @borgius/copilot-proxy auth
npx @borgius/copilot-proxy serve

Installation

Global install (recommended for regular use)

npm install -g @borgius/copilot-proxy
# or with bun
bun add -g @borgius/copilot-proxy

Use without installing (npx)

npx @borgius/copilot-proxy auth
npx @borgius/copilot-proxy serve

Commands

Default (no command) — Auto-auth + serve

copilot-proxy [options]

When run without a subcommand, copilot-proxy will:

  1. Check if credentials exist in the config file
  2. If not authenticated, automatically start the auth device-flow
  3. After authentication (or if already authenticated), start the proxy server

This is the recommended way to use the tool — a single command handles everything.

# Start on default port 11433
copilot-proxy

# Start on a custom port
copilot-proxy --port 8080

# Custom host + port
copilot-proxy --port 3000 --host 0.0.0.0

# Use a specific config file
copilot-proxy --config ~/work-copilot.json
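The auto-auth decision described above can be sketched in Python. The helper names (`run_device_flow`, `start_server`) are hypothetical stand-ins for the tool's internals, not its actual API; only the config path and the `auth.accessToken` field come from this README:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "copilot-proxy" / "config.json"

def has_credentials(path: Path = CONFIG_PATH) -> bool:
    """True if a config file exists and contains an auth section with a token."""
    if not path.exists():
        return False
    config = json.loads(path.read_text())
    return bool(config.get("auth", {}).get("accessToken"))

def run_device_flow() -> None:
    """Hypothetical stub: performs the OAuth device flow and saves credentials."""

def start_server() -> None:
    """Hypothetical stub: starts the proxy on the configured host/port."""

def main(path: Path = CONFIG_PATH) -> None:
    # 1. Check credentials; 2. auth if missing; 3. serve either way.
    if not has_credentials(path):
        run_device_flow()
    start_server()
```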

auth — Authenticate with GitHub Copilot

copilot-proxy auth

Starts an OAuth device flow to authenticate with your GitHub account. Supports:

  • GitHub.com (public cloud)
  • GitHub Enterprise Server (self-hosted)
  • GitHub Enterprise Cloud with Data Residency

The command will:

  1. Prompt you to choose GitHub.com or Enterprise
  2. For Enterprise, ask for your instance URL (e.g. company.ghe.com)
  3. Display a verification URL and user code
  4. Wait for you to approve the device in your browser
  5. Save credentials to ~/.config/copilot-proxy/config.json

serve — Start the proxy server

copilot-proxy serve [options]

Options:

| Flag | Default | Description |
|---|---|---|
| -p, --port <port> | 11433 | Port to listen on |
| -h, --host <host> | localhost | Host to bind to |

Examples:

# Default: http://localhost:11433
copilot-proxy serve

# Custom port
copilot-proxy serve --port 8080

# Listen on all interfaces (Docker/remote)
copilot-proxy serve --port 3000 --host 0.0.0.0

Global Options

| Flag | Description |
|---|---|
| -p, --port <port> | Port to listen on (default: 11433) |
| -H, --host <host> | Host to bind to (default: localhost) |
| -c, --config <file> | Path to config file (overrides default) |
| -V, --version | Print version |
| --help | Show help |

Using --config to manage multiple accounts or profiles:

# Authenticate to a work profile
copilot-proxy --config ~/.config/copilot-proxy/work.json auth

# Serve using that profile
copilot-proxy --config ~/.config/copilot-proxy/work.json serve

API Reference

All endpoints are available at http://localhost:11433 by default. No API key is required — set any non-empty string if the client requires one.

OpenAI Chat Completions

POST /v1/chat/completions

Drop-in replacement for https://api.openai.com/v1/chat/completions.

curl http://localhost:11433/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'

Streaming:

curl http://localhost:11433/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}], "stream": true}'
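With `"stream": true` the reply arrives as server-sent events: one JSON chunk per `data:` line, ending with `data: [DONE]` (the standard OpenAI streaming framing). A minimal Python sketch of extracting the text deltas from such a stream; the sample chunks are illustrative, not captured proxy output:

```python
import json
from typing import Iterable, Iterator

def iter_deltas(sse_lines: Iterable[str]) -> Iterator[str]:
    """Yield content deltas from OpenAI-style SSE lines."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content") is not None:
            yield delta["content"]

# Illustrative chunks in the shape the Chat Completions stream uses:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
print("".join(iter_deltas(sample)))  # → Hello!
```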

With tool/function calling:

curl http://localhost:11433/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {"type": "object", "properties": {"location": {"type": "string"}}}
      }
    }]
  }'

OpenAI Responses API

POST /v1/responses

Required for Codex and O-series models. Also works with standard GPT models.

curl http://localhost:11433/v1/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "input": "Explain quantum computing"}'

Anthropic Messages API

POST /v1/messages

Drop-in replacement for https://api.anthropic.com/v1/messages.

curl http://localhost:11433/v1/messages \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

List Models

GET /v1/models

Returns available models from GitHub Copilot in OpenAI format.

curl http://localhost:11433/v1/models

Supported Models

Models are fetched dynamically from GitHub Copilot. Availability depends on your plan.

| Model | Provider | Endpoint |
|---|---|---|
| gpt-4o, gpt-4o-mini | OpenAI | Chat |
| gpt-4.1, gpt-4.5 | OpenAI | Chat |
| gpt-5, gpt-5-mini | OpenAI | Both |
| o1, o3, o4-mini | OpenAI | Responses only |
| gpt-5.1-codex, gpt-5.2-codex | OpenAI | Responses only |
| claude-sonnet-4.5 | Anthropic | Chat |
| claude-sonnet-4 | Anthropic | Chat |
| claude-opus-4.5 | Anthropic | Chat |
| claude-haiku-4.5 | Anthropic | Chat |
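The smart routing behind this table can be sketched as a simple lookup. The model sets below are copied from the table above; the real proxy fetches capabilities dynamically, so treat this as an illustration, not the actual routing code:

```python
# Model sets taken from the table above (illustrative snapshot, not exhaustive).
RESPONSES_ONLY = {"o1", "o3", "o4-mini", "gpt-5.1-codex", "gpt-5.2-codex"}
EITHER = {"gpt-5", "gpt-5-mini"}

def route(model: str, prefer: str = "chat") -> str:
    """Pick the backend endpoint ('chat' or 'responses') for a model name."""
    if model in RESPONSES_ONLY:
        return "responses"
    if model in EITHER:
        return prefer  # these models work on either endpoint
    return "chat"      # everything else (GPT-4 family, Claude) uses chat

print(route("o3"))      # → responses
print(route("gpt-4o"))  # → chat
```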

Claude Model Aliases

When using the Anthropic Messages API, legacy model names are automatically remapped:

| Input name | Routed to |
|---|---|
| claude-3-5-sonnet-20241022 | claude-sonnet-4.5 |
| claude-sonnet-4-5-20250929 | claude-sonnet-4.5 |
| claude-opus-4-0-20250514 | claude-opus-4.5 |
| claude-3-opus-20240229 | claude-opus-4.5 |
| claude-3-haiku-20240307 | gpt-4o-mini |
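The remapping amounts to a lookup table. A sketch in Python using the pairs from the table above; the pass-through behavior for unknown names is an assumption:

```python
# Alias pairs copied from the table above.
CLAUDE_ALIASES = {
    "claude-3-5-sonnet-20241022": "claude-sonnet-4.5",
    "claude-sonnet-4-5-20250929": "claude-sonnet-4.5",
    "claude-opus-4-0-20250514": "claude-opus-4.5",
    "claude-3-opus-20240229": "claude-opus-4.5",
    "claude-3-haiku-20240307": "gpt-4o-mini",
}

def resolve_model(name: str) -> str:
    """Map a legacy Anthropic model name to the model actually requested."""
    return CLAUDE_ALIASES.get(name, name)  # assumed: unknown names pass through

print(resolve_model("claude-3-opus-20240229"))  # → claude-opus-4.5
```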


Using with AI SDKs

OpenAI SDK (Node.js)

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:11433/v1',
  apiKey: 'not-needed',
});

// Non-streaming
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);

// Streaming
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}

OpenAI SDK (Python)

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11433/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

Anthropic SDK (Node.js)

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  baseURL: 'http://localhost:11433',
  apiKey: 'not-needed',
});

const message = await client.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(message.content[0].text);

Anthropic SDK (Python)

import anthropic

client = anthropic.Anthropic(base_url="http://localhost:11433", api_key="not-needed")
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)

Other Compatible Tools

| Tool | Configuration |
|---|---|
| Continue (VS Code) | Set apiBase to http://localhost:11433/v1 |
| aider | aider --openai-api-base http://localhost:11433/v1 --openai-api-key any |
| LiteLLM | Use openai/ prefix with api_base=http://localhost:11433/v1 |
| Open WebUI | Add custom OpenAI endpoint: http://localhost:11433/v1 |
| Cursor | Configure OpenAI-compatible provider with local URL |


Configuration

Default config: ~/.config/copilot-proxy/config.json

GitHub.com:

{
  "auth": {
    "type": "oauth",
    "provider": "github-copilot",
    "accessToken": "ghu_...",
    "refreshToken": "ghr_...",
    "expiresAt": 1777777777
  }
}

GitHub Enterprise:

{
  "auth": {
    "type": "oauth",
    "provider": "github-copilot-enterprise",
    "accessToken": "ghu_...",
    "refreshToken": "ghr_...",
    "expiresAt": 1777777777,
    "enterpriseUrl": "company.ghe.com"
  }
}

Development

Requirements

  • Bun >= 1.0.0
  • Node.js >= 18.0.0

Setup

git clone https://github.com/borgius/copilot-proxy
cd copilot-proxy
bun install

Scripts

| Command | Description |
|---|---|
| bun run build | Build for production |
| bun run dev | Dev mode (watch + auto-restart) |
| bun run typecheck | TypeScript type checking |
| bun run test | Unit tests |
| bun run test:integration | Integration tests |
| bun run release | Build and publish to npm |
| bun run release:dry | Dry-run publish (no upload) |


Publishing to npm

# Login to npm (once)
npm login

# Build and publish
bun run release

# Or test first with a dry run
bun run release:dry

Cloudflare Worker Deployment

The proxy can run as a Cloudflare Worker for serverless operation.

bun run deploy          # Deploy to production
bun run deploy:dev      # Deploy to development
bun run test:cloudflare # Test the deployed worker

Configure wrangler.toml with your Cloudflare account details before deploying.


How It Works

  1. Authentication — Uses GitHub's OAuth device flow to obtain a Copilot-scoped token
  2. Token refresh — Access tokens are automatically refreshed before expiry
  3. API translation — Incoming OpenAI/Anthropic requests are translated to GitHub Copilot's internal format
  4. Model routing — Models are routed to the correct Copilot endpoint (chat vs responses) based on their capabilities
  5. Streaming — SSE streams are proxied transparently, preserving chunk boundaries
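Step 2 above (refresh before expiry) typically boils down to a margin check. A hedged sketch, where the 5-minute margin is an assumption for illustration, not the proxy's documented behavior:

```python
REFRESH_MARGIN_SECONDS = 5 * 60  # assumed safety margin, not a documented value

def needs_refresh(expires_at: float, now: float) -> bool:
    """True when the access token should be refreshed before making a request."""
    return now >= expires_at - REFRESH_MARGIN_SECONDS

print(needs_refresh(expires_at=1_000_000, now=999_800))  # → True (inside margin)
print(needs_refresh(expires_at=1_000_000, now=900_000))  # → False
```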

License

MIT