@seagrass/1proxy · v0.0.4
1Proxy - Shared AI Router
Never stop coding. Auto-route to FREE & cheap AI models with smart fallback.
Connect All AI Code Tools (Claude Code, Cursor, Antigravity, Copilot, Codex, Gemini, OpenCode, Cline, OpenClaw...) to 40+ AI Providers & 100+ Models.
🚀 Quick Start • 💡 Features • 📖 Setup • 🌐 Website
🇻🇳 Tiếng Việt • 🇨🇳 中文 • 🇯🇵 日本語
🤔 Why 1Proxy?
Stop wasting money and hitting limits:
- ❌ Subscription quota expires unused every month
- ❌ Rate limits stop you mid-coding
- ❌ Expensive APIs ($20-50/month per provider)
- ❌ Manual switching between providers
1Proxy solves this:
- ✅ Maximize subscriptions - Track quota, use every bit before reset
- ✅ Auto fallback - Subscription → Cheap → Free, zero downtime
- ✅ Multi-account - Round-robin between accounts per provider
- ✅ Universal - Works with Claude Code, Codex, Gemini CLI, Cursor, Cline, any CLI tool
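The fallback idea above can be sketched in a few lines of JavaScript. This is an illustrative sketch only: the model names reuse this README's aliases, and the remainingTokens field is a hypothetical stand-in for 1Proxy's real quota tracking, not its actual schema.

```javascript
// Sketch of 3-tier fallback: take the first model whose provider still
// has quota. `remainingTokens` is a hypothetical field, not 1Proxy's schema.
const tiers = [
  { model: 'cc/claude-opus-4-6', tier: 'subscription', remainingTokens: 0 },
  { model: 'glm/glm-4.7', tier: 'cheap', remainingTokens: 500_000 },
  { model: 'if/kimi-k2-thinking', tier: 'free', remainingTokens: Infinity },
];

function route(candidates) {
  const hit = candidates.find((c) => c.remainingTokens > 0);
  if (!hit) throw new Error('all tiers exhausted');
  return hit.model;
}

console.log(route(tiers)); // → glm/glm-4.7 (subscription exhausted, falls to cheap tier)
```

Because the free tier sits last with effectively unlimited quota, the chain always resolves to some model, which is what "zero downtime" means in practice.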
🔄 How It Works
┌─────────────┐
│ Your CLI │ (Claude Code, Codex, Gemini CLI, OpenClaw, Cursor, Cline...)
│ Tool │
└──────┬──────┘
│ http://localhost:11111/v1
↓
┌─────────────────────────────────────────┐
│ 1Proxy (Smart Router) │
│ • Format translation (OpenAI ↔ Claude) │
│ • Quota tracking │
│ • Auto token refresh │
└──────┬──────────────────────────────────┘
│
├─→ [Tier 1: SUBSCRIPTION] Claude Code, Codex, Gemini CLI
│ ↓ quota exhausted
├─→ [Tier 2: CHEAP] GLM ($0.6/1M), MiniMax ($0.2/1M)
│ ↓ budget limit
└─→ [Tier 3: FREE] iFlow, Qwen, Kiro (unlimited)
Result: Never stop coding, minimal cost
⚡ Quick Start
1. Install globally:
npm install -g @seagrass/1proxy
1proxy start
🎉 Dashboard opens at http://localhost:11111
2. Connect a FREE provider (no signup needed):
Dashboard → Providers → Connect Claude Code or Antigravity → OAuth login → Done!
3. Use in your CLI tool:
Claude Code/Codex/Gemini CLI/OpenClaw/Cursor/Cline Settings:
Endpoint: http://localhost:11111/v1
API Key: [copy from dashboard]
Model: if/kimi-k2-thinking
That's it! Start coding with FREE AI models.
Alternative: run from source (this repository) for local development:
cp .env.example .env
npm install
PORT=11111 NEXT_PUBLIC_BASE_URL=http://localhost:11111 npm run dev
Production mode:
npm run build
PORT=11111 HOSTNAME=0.0.0.0 NEXT_PUBLIC_BASE_URL=http://localhost:11111 npm run start
Default URLs:
- Dashboard: http://localhost:11111/dashboard
- OpenAI-compatible API: http://localhost:11111/v1
🎥 Video Tutorial
📺 Complete Setup Guide - 1Proxy + Claude Code FREE
🎬 Watch the complete step-by-step tutorial:
- ✅ 1Proxy installation & setup
- ✅ FREE Claude Sonnet 4.5 configuration
- ✅ Claude Code integration
- ✅ Live coding demonstration
⏱️ Duration: 20 minutes | 👥 By: Developer Community
🛠️ Supported CLI Tools
1Proxy works seamlessly with all major AI coding tools:
🌐 Supported Providers
🔐 OAuth Providers
🆓 Free Providers
🔑 API Key Providers (40+)
💡 Key Features
| Feature | What It Does | Why It Matters |
|---------|--------------|----------------|
| 🎯 Smart 3-Tier Fallback | Auto-route: Subscription → Cheap → Free | Never stop coding, zero downtime |
| 📊 Real-Time Quota Tracking | Live token count + reset countdown | Maximize subscription value |
| 🔄 Format Translation | OpenAI ↔ Claude ↔ Gemini seamless | Works with any CLI tool |
| 👥 Multi-Account Support | Multiple accounts per provider | Load balancing + redundancy |
| 🔄 Auto Token Refresh | OAuth tokens refresh automatically | No manual re-login needed |
| 🎨 Custom Combos | Create unlimited model combinations | Tailor fallback to your needs |
| 📝 Request Logging | Debug mode with full request/response logs | Troubleshoot issues easily |
| 💾 Cloud Sync | Sync config across devices | Same setup everywhere |
| 📊 Usage Analytics | Track tokens, cost, trends over time | Optimize spending |
| 🌐 Deploy Anywhere | Localhost, VPS, Docker, Cloudflare Workers | Flexible deployment options |
🎯 Smart 3-Tier Fallback
Create combos with automatic fallback:
Combo: "my-coding-stack"
1. cc/claude-opus-4-6 (your subscription)
2. glm/glm-4.7 (cheap backup, $0.6/1M)
3. if/kimi-k2-thinking (free fallback)
→ Auto switches when quota runs out or errors occur
📊 Real-Time Quota Tracking
- Token consumption per provider
- Reset countdown (5-hour, daily, weekly)
- Cost estimation for paid tiers
- Monthly spending reports
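The reset countdown for a rolling window is simple arithmetic; a minimal sketch (illustrative only, not 1Proxy's actual tracker, which also handles daily and weekly windows):

```javascript
// Sketch of a rolling-window reset countdown, as used for 5-hour quotas.
const WINDOW_MS = 5 * 60 * 60 * 1000; // 5-hour rolling window

function msUntilReset(windowStartMs, nowMs) {
  return Math.max(0, WINDOW_MS - (nowMs - windowStartMs));
}

const HOUR = 60 * 60 * 1000;
console.log(msUntilReset(0, 3 * HOUR) / HOUR); // → 2 (hours left in the window)
```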
🔄 Format Translation
Seamless translation between formats:
- OpenAI ↔ Claude ↔ Gemini ↔ OpenAI Responses
- Your CLI tool sends OpenAI format → 1Proxy translates → Provider receives native format
- Works with any tool that supports custom OpenAI endpoints
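To illustrate one direction of that translation, here is a simplified sketch of mapping OpenAI-style chat messages to a Claude-style request, where the system prompt is a top-level field rather than a message. This is not 1Proxy's actual translator, which also has to map tool calls, images, and streaming chunks:

```javascript
// Simplified OpenAI → Claude request translation: lift system messages
// into the top-level `system` field, keep the rest as `messages`.
function openAiToClaude({ model, messages, max_tokens = 1024 }) {
  const system = messages
    .filter((m) => m.role === 'system')
    .map((m) => m.content)
    .join('\n');
  return {
    model,
    max_tokens,
    ...(system ? { system } : {}),
    messages: messages.filter((m) => m.role !== 'system'),
  };
}

const req = openAiToClaude({
  model: 'claude-sonnet-4-5',
  messages: [
    { role: 'system', content: 'Be concise.' },
    { role: 'user', content: 'Hello' },
  ],
});
console.log(req.system);          // → Be concise.
console.log(req.messages.length); // → 1
```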
👥 Multi-Account Support
- Add multiple accounts per provider
- Auto round-robin or priority-based routing
- Fallback to next account when one hits quota
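The round-robin-with-fallback behavior can be sketched as a quota-aware rotating picker. The account shape here is hypothetical, not 1Proxy's real data model:

```javascript
// Sketch of round-robin account selection that skips exhausted accounts.
function makeRoundRobin(accounts) {
  let i = 0;
  return function next() {
    for (let tried = 0; tried < accounts.length; tried++) {
      const account = accounts[i];
      i = (i + 1) % accounts.length;
      if (!account.quotaExhausted) return account.name;
    }
    return null; // every account is out of quota → fall to the next tier
  };
}

const next = makeRoundRobin([
  { name: 'acct-a', quotaExhausted: false },
  { name: 'acct-b', quotaExhausted: true },
  { name: 'acct-c', quotaExhausted: false },
]);
console.log(next(), next(), next()); // → acct-a acct-c acct-a
```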
🔄 Auto Token Refresh
- OAuth tokens automatically refresh before expiration
- No manual re-authentication needed
- Seamless experience across all providers
🎨 Custom Combos
- Create unlimited model combinations
- Mix subscription, cheap, and free tiers
- Name your combos for easy access
- Share combos across devices with Cloud Sync
📝 Request Logging
- Enable debug mode for full request/response logs
- Track API calls, headers, and payloads
- Troubleshoot integration issues
- Export logs for analysis
💾 Cloud Sync
- Sync providers, combos, and settings across devices
- Automatic background sync
- Secure encrypted storage
- Access your setup from anywhere
Cloud Runtime Notes
- Prefer server-side cloud variables in production:
  - BASE_URL (internal callback URL used by the sync scheduler)
  - CLOUD_URL (cloud sync endpoint base)
- NEXT_PUBLIC_BASE_URL and NEXT_PUBLIC_CLOUD_URL are still supported for compatibility/UI, but the server runtime now prioritizes BASE_URL/CLOUD_URL.
- Cloud sync requests now use timeout + fail-fast behavior to avoid the UI hanging when the cloud DNS/network is unavailable.
📊 Usage Analytics
- Track token usage per provider and model
- Cost estimation and spending trends
- Monthly reports and insights
- Optimize your AI spending
💡 IMPORTANT - Understanding Dashboard Costs:
The "cost" displayed in Usage Analytics is for tracking and comparison purposes only. 1Proxy itself never charges you anything. You only pay providers directly (if using paid services).
Example: If your dashboard shows "$290 total cost" while using iFlow models, this represents what you would have paid using paid APIs directly. Your actual cost = $0 (iFlow is free unlimited).
Think of it as a "savings tracker" showing how much you're saving by using free models or routing through 1Proxy!
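In other words, the display cost is just token usage multiplied by a reference price, even when the provider billed you nothing. A sketch of the arithmetic (the $6.17/1M reference price is invented so the README's 47M-token / ~$290 example lines up; it is not a real provider rate):

```javascript
// "Savings tracker" arithmetic: tokens × reference price per 1M tokens.
// The reference price is hypothetical, chosen to match the $290 example.
function displayCost(tokens, referencePricePerMillion) {
  return (tokens / 1_000_000) * referencePricePerMillion;
}

console.log(displayCost(47_000_000, 6.17).toFixed(2)); // ≈ 290, actual payment on a free provider: $0
```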
🌐 Deploy Anywhere
- 💻 Localhost - Default, works offline
- ☁️ VPS/Cloud - Share across devices
- 🐳 Docker - One-command deployment
- 🚀 Cloudflare Workers - Global edge network
💰 Pricing at a Glance
| Tier | Provider | Cost | Quota Reset | Best For |
|------|----------|------|-------------|----------|
| 💳 SUBSCRIPTION | Claude Code (Pro) | $20/mo | 5h + weekly | Already subscribed |
| | Codex (Plus/Pro) | $20-200/mo | 5h + weekly | OpenAI users |
| | Gemini CLI | FREE | 180K/mo + 1K/day | Everyone! |
| | GitHub Copilot | $10-19/mo | Monthly | GitHub users |
| 💰 CHEAP | GLM-4.7 | $0.6/1M | Daily 10AM | Budget backup |
| | MiniMax M2.1 | $0.2/1M | 5-hour rolling | Cheapest option |
| | Kimi K2 | $9/mo flat | 10M tokens/mo | Predictable cost |
| 🆓 FREE | iFlow | $0 | Unlimited | 8 models free |
| | Qwen | $0 | Unlimited | 3 models free |
| | Kiro | $0 | Unlimited | Claude free |
💡 Pro Tip: Start with Gemini CLI (180K free/month) + iFlow (unlimited free) combo = $0 cost!
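To estimate what a given usage mix costs, multiply each tier's tokens by its per-1M price from the table above. A sketch, using a hypothetical 100M-token month split across tiers:

```javascript
// Blended monthly cost estimate. Prices come from the pricing table;
// the 80/15/5 split is a hypothetical usage pattern, not a measurement.
const usage = [
  { tokensM: 80, pricePer1M: 0 },   // covered by subscription quota
  { tokensM: 15, pricePer1M: 0.6 }, // GLM-4.7 backup
  { tokensM: 5,  pricePer1M: 0.2 }, // MiniMax M2.1 fallback
];

const total = usage.reduce((sum, u) => sum + u.tokensM * u.pricePer1M, 0);
console.log(`$${total}`); // → $10 on top of any subscription fee
```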
📊 Understanding 1Proxy Costs & Billing
1Proxy Billing Reality:
- ✅ 1Proxy software = FREE forever (open source, never charges)
- ✅ Dashboard "costs" = display/tracking only (not actual bills)
- ✅ You pay providers directly (subscriptions or API fees)
- ✅ FREE providers stay FREE (iFlow, Kiro, Qwen = $0 unlimited)
- ❌ 1Proxy never sends invoices or charges your card
How Cost Display Works:
The dashboard shows estimated costs as if you were using paid APIs directly. This is not billing - it's a comparison tool to show your savings.
Example Scenario:
Dashboard Display:
• Total Requests: 1,662
• Total Tokens: 47M
• Display Cost: $290
Reality Check:
• Provider: iFlow (FREE unlimited)
• Actual Payment: $0.00
• What $290 Means: Amount you SAVED by using free models!
Payment Rules:
- Subscription providers (Claude Code, Codex): Pay them directly via their websites
- Cheap providers (GLM, MiniMax): Pay them directly, 1Proxy just routes
- FREE providers (iFlow, Kiro, Qwen): Genuinely free forever, no hidden charges
- 1Proxy: Never charges anything, ever
🎯 Use Cases
Case 1: "I have Claude Pro subscription"
Problem: Quota expires unused, rate limits during heavy coding
Solution:
Combo: "maximize-claude"
1. cc/claude-opus-4-6 (use subscription fully)
2. glm/glm-4.7 (cheap backup when quota out)
3. if/kimi-k2-thinking (free emergency fallback)
Monthly cost: $20 (subscription) + ~$5 (backup) = $25 total
vs. $20 + hitting limits = frustration
Case 2: "I want zero cost"
Problem: Can't afford subscriptions, need reliable AI coding
Solution:
Combo: "free-forever"
1. gc/gemini-3-flash (180K free/month)
2. if/kimi-k2-thinking (unlimited free)
3. qw/qwen3-coder-plus (unlimited free)
Monthly cost: $0
Quality: Production-ready models
Case 3: "I need 24/7 coding, no interruptions"
Problem: Deadlines, can't afford downtime
Solution:
Combo: "always-on"
1. cc/claude-opus-4-6 (best quality)
2. cx/gpt-5.2-codex (second subscription)
3. glm/glm-4.7 (cheap, resets daily)
4. minimax/MiniMax-M2.1 (cheapest, 5h reset)
5. if/kimi-k2-thinking (free unlimited)
Result: 5 layers of fallback = zero downtime
Monthly cost: $20-200 (subscriptions) + $10-20 (backup)
Case 4: "I want FREE AI in OpenClaw"
Problem: Need AI assistant in messaging apps (WhatsApp, Telegram, Slack...), completely free
Solution:
Combo: "openclaw-free"
1. if/glm-4.7 (unlimited free)
2. if/minimax-m2.1 (unlimited free)
3. if/kimi-k2-thinking (unlimited free)
Monthly cost: $0
Access via: WhatsApp, Telegram, Slack, Discord, iMessage, Signal...
❓ Frequently Asked Questions
Why does the dashboard show costs if 1Proxy is free?
The dashboard tracks your token usage and displays estimated costs as if you were using paid APIs directly. This is not actual billing - it's a reference showing how much you save by using free models or existing subscriptions through 1Proxy.
Example:
- Dashboard shows: "$290 total cost"
- Reality: You're using iFlow (FREE unlimited)
- Your actual cost: $0.00
- What $290 means: Amount you saved by using free models instead of paid APIs!
The cost display is a "savings tracker" to help you understand your usage patterns and optimization opportunities.
Does 1Proxy itself ever charge me?
No. 1Proxy is free, open-source software that runs on your own computer. It never charges you anything.
You only pay:
- ✅ Subscription providers (Claude Code $20/mo, Codex $20-200/mo) → Pay them directly on their websites
- ✅ Cheap providers (GLM, MiniMax) → Pay them directly, 1Proxy just routes your requests
- ❌ 1Proxy itself → Never charges anything, ever
1Proxy is a local proxy/router. It doesn't have your credit card, can't send invoices, and has no billing system. It's completely free software.
Are the FREE providers really unlimited?
Yes! Providers marked as FREE (iFlow, Kiro, Qwen) are genuinely unlimited with no hidden charges.
These are free services offered by those respective companies:
- iFlow: Free unlimited access to 8+ models via OAuth
- Kiro: Free unlimited Claude models via AWS Builder ID
- Qwen: Free unlimited access to Qwen models via device auth
1Proxy just routes your requests to them - there's no "catch" or future billing. They're truly free services, and 1Proxy makes them easy to use with fallback support.
Note: Some subscription providers (Antigravity, GitHub Copilot) may have free preview periods that could become paid later, but this would be clearly announced by those providers, not 1Proxy.
How do I keep costs as low as possible?
Free-First Strategy:
Start with 100% free combo:
1. gc/gemini-3-flash (180K/month free from Google)
2. if/kimi-k2-thinking (unlimited free from iFlow)
3. qw/qwen3-coder-plus (unlimited free from Qwen)
Cost: $0/month
Add cheap backup only if you need it:
4. glm/glm-4.7 ($0.6/1M tokens)
Additional cost: Only pay for what you actually use
Use subscription providers last:
- Only if you already have them
- 1Proxy helps maximize their value through quota tracking
Result: Most users can operate at $0/month using only free tiers!
What if I blow through my quotas mid-sprint?
1Proxy's smart fallback prevents surprise charges:
Scenario: You're on a coding sprint and blow through your quotas
Without 1Proxy:
- ❌ Hit rate limit → Work stops → Frustration
- ❌ Or: Accidentally rack up huge API bills
With 1Proxy:
- ✅ Subscription hits limit → Auto-fallback to cheap tier
- ✅ Cheap tier gets expensive → Auto-fallback to free tier
- ✅ Never stop coding → Predictable costs
You're in control: Set spending limits per provider in dashboard, and 1Proxy respects them.
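A per-provider spending limit reduces to one comparison before routing. A sketch of the idea (the field names are hypothetical, not 1Proxy's actual settings schema):

```javascript
// Sketch of a spending-limit check: a provider is eligible only while
// its tracked spend is below its configured cap (no cap = always eligible).
function withinBudget(provider) {
  if (provider.spendLimitUsd == null) return true; // no cap configured
  return provider.spentUsd < provider.spendLimitUsd;
}

console.log(withinBudget({ spentUsd: 4.8, spendLimitUsd: 5 })); // → true
console.log(withinBudget({ spentUsd: 5.0, spendLimitUsd: 5 })); // → false: fall back to the next tier
console.log(withinBudget({ spentUsd: 120 }));                   // → true: no limit set
```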
📖 Setup Guide
Claude Code (Pro/Max)
Dashboard → Providers → Connect Claude Code
→ OAuth login → Auto token refresh
→ 5-hour + weekly quota tracking
Models:
cc/claude-opus-4-6
cc/claude-sonnet-4-5-20250929
cc/claude-haiku-4-5-20251001
Pro Tip: Use Opus for complex tasks, Sonnet for speed. 1Proxy tracks quota per model!
OpenAI Codex (Plus/Pro)
Dashboard → Providers → Connect Codex
→ OAuth login (port 1455)
→ 5-hour + weekly reset
Models:
cx/gpt-5.2-codex
cx/gpt-5.1-codex-max
Gemini CLI (FREE 180K/month!)
Dashboard → Providers → Connect Gemini CLI
→ Google OAuth
→ 180K completions/month + 1K/day
Models:
gc/gemini-3-flash-preview
gc/gemini-2.5-pro
Best Value: Huge free tier! Use this before paid tiers.
GitHub Copilot
Dashboard → Providers → Connect GitHub
→ OAuth via GitHub
→ Monthly reset (1st of month)
Models:
gh/gpt-5
gh/claude-4.5-sonnet
gh/gemini-3-pro
GLM-4.7 (Daily reset, $0.6/1M)
- Sign up: Zhipu AI
- Get API key from Coding Plan
- Dashboard → Add API Key:
  - Provider: glm
  - API Key: your-key
Use: glm/glm-4.7
Pro Tip: Coding Plan offers 3× quota at 1/7 the cost! Resets daily at 10:00 AM.
MiniMax M2.1 (5h reset, $0.20/1M)
- Sign up: MiniMax
- Get API key
- Dashboard → Add API Key
Use: minimax/MiniMax-M2.1
Pro Tip: Cheapest option for long context (1M tokens)!
Kimi K2 ($9/month flat)
- Subscribe: Moonshot AI
- Get API key
- Dashboard → Add API Key
Use: kimi/kimi-latest
Pro Tip: Fixed $9/month for 10M tokens = $0.90/1M effective cost!
iFlow (8 FREE models)
Dashboard → Connect iFlow
→ iFlow OAuth login
→ Unlimited usage
Models:
if/kimi-k2-thinking
if/qwen3-coder-plus
if/glm-4.7
if/minimax-m2
if/deepseek-r1
Qwen (3 FREE models)
Dashboard → Connect Qwen
→ Device code authorization
→ Unlimited usage
Models:
qw/qwen3-coder-plus
qw/qwen3-coder-flash
Kiro (Claude FREE)
Dashboard → Connect Kiro
→ AWS Builder ID, AWS IAM Identity Center, Google, GitHub
→ Unlimited usage
Models:
kr/claude-sonnet-4.5
kr/claude-haiku-4.5
Example 1: Maximize Subscription → Cheap Backup
Dashboard → Combos → Create New
Name: premium-coding
Models:
1. cc/claude-opus-4-6 (Subscription primary)
2. glm/glm-4.7 (Cheap backup, $0.6/1M)
3. minimax/MiniMax-M2.1 (Cheapest fallback, $0.20/1M)
Use in CLI: premium-coding
Monthly cost example (100M tokens):
80M via Claude (subscription): $0 extra
15M via GLM: $9
5M via MiniMax: $1
Total: $10 + your subscription
Example 2: Free-Only (Zero Cost)
Name: free-combo
Models:
1. gc/gemini-3-flash-preview (180K free/month)
2. if/kimi-k2-thinking (unlimited)
3. qw/qwen3-coder-plus (unlimited)
Cost: $0 forever!
Cursor IDE
Settings → Models → Advanced:
OpenAI API Base URL: http://localhost:11111/v1
OpenAI API Key: [from 1proxy dashboard]
Model: cc/claude-opus-4-6
Or use combo: premium-coding
Claude Code
Edit ~/.claude/config.json:
{
"anthropic_api_base": "http://localhost:11111/v1",
"anthropic_api_key": "your-1proxy-api-key"
}
Codex CLI
export OPENAI_BASE_URL="http://localhost:11111"
export OPENAI_API_KEY="your-1proxy-api-key"
codex "your prompt"
OpenClaw
Option 1 — Dashboard (recommended):
Dashboard → CLI Tools → OpenClaw → Select Model → Apply
Option 2 — Manual: Edit ~/.openclaw/openclaw.json:
{
"agents": {
"defaults": {
"model": {
"primary": "1proxy/if/glm-4.7"
}
}
},
"models": {
"providers": {
"1proxy": {
"baseUrl": "http://127.0.0.1:11111/v1",
"apiKey": "sk_1proxy",
"api": "openai-completions",
"models": [
{
"id": "if/glm-4.7",
"name": "glm-4.7"
}
]
}
}
}
}
Note: OpenClaw only works with local 1Proxy. Use 127.0.0.1 instead of localhost to avoid IPv6 resolution issues.
Cline / Continue / RooCode
Provider: OpenAI Compatible
Base URL: http://localhost:11111/v1
API Key: [from dashboard]
Model: cc/claude-opus-4-6
VPS Deployment
# Clone and install
git clone https://github.com/decolua/1proxy.git
cd 1proxy
npm install
npm run build
# Configure
export SETUP_TOKEN="one-time-setup-token"
export DATA_DIR="/var/lib/1proxy"
export PORT="11111"
export HOSTNAME="0.0.0.0"
export NODE_ENV="production"
export NEXT_PUBLIC_BASE_URL="http://localhost:11111"
export NEXT_PUBLIC_CLOUD_URL=""
export API_KEY_SECRET="endpoint-proxy-api-key-secret"
export MACHINE_ID_SALT="1proxy-machine-id-salt"
# Start
npm run start
# Or use PM2
npm install -g pm2
pm2 start npm --name 1proxy -- start
pm2 save
pm2 startup
Docker
# Build image (from repository root)
docker build -t 1proxy .
# Run container (command used in current setup)
docker run -d \
--name 1proxy \
-p 11111:11111 \
--env-file /root/dev/1proxy/.env \
-v 1proxy-data:/app/data \
-v 1proxy-usage:/root/.1proxy \
1proxy
Portable command (if you are already at repository root):
docker run -d \
--name 1proxy \
-p 11111:11111 \
--env-file ./.env \
-v 1proxy-data:/app/data \
-v 1proxy-usage:/root/.1proxy \
1proxy
Container defaults: PORT=11111, HOSTNAME=0.0.0.0
Useful commands:
docker logs -f 1proxy
docker restart 1proxy
docker stop 1proxy && docker rm 1proxy
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| JWT_SECRET | generated per install | Optional override for dashboard JWT signing; if unset, 1Proxy creates a private secret in DATA_DIR |
| SETUP_TOKEN | unset | Optional one-time token for initial password setup when not using localhost |
| DATA_DIR | ~/.1proxy | Main app database location (db.json) |
| PORT | framework default | Service port (11111 in examples) |
| HOSTNAME | framework default | Bind host (Docker defaults to 0.0.0.0) |
| NODE_ENV | runtime default | Set production for deploy |
| BASE_URL | http://localhost:11111 | Server-side internal base URL used by cloud sync jobs |
| CLOUD_URL | | Server-side cloud sync endpoint base URL |
| NEXT_PUBLIC_BASE_URL | http://localhost:3000 | Backward-compatible/public base URL (prefer BASE_URL for server runtime) |
| NEXT_PUBLIC_CLOUD_URL | | Backward-compatible/public cloud URL (prefer CLOUD_URL for server runtime) |
| API_KEY_SECRET | endpoint-proxy-api-key-secret | HMAC secret for generated API keys |
| MACHINE_ID_SALT | 1proxy-machine-id-salt | Salt for stable machine ID hashing |
| ENABLE_REQUEST_LOGS | false | Enables request/response logs under logs/ |
| AUTH_COOKIE_SECURE | false | Force Secure auth cookie (set true behind HTTPS reverse proxy) |
| REQUIRE_API_KEY | false | Enforce Bearer API key on /v1/* routes (recommended for internet-exposed deploys) |
| HTTP_PROXY, HTTPS_PROXY, ALL_PROXY, NO_PROXY | empty | Optional outbound proxy for upstream provider calls |
Notes:
- Lowercase proxy variables are also supported: http_proxy, https_proxy, all_proxy, no_proxy.
- .env is not baked into the Docker image (.dockerignore); inject runtime config with --env-file or -e.
- On Windows, APPDATA can be used for local storage path resolution.
- INSTANCE_NAME appears in older docs/env templates, but is currently not used at runtime.
Runtime Files and Storage
- Main app state: ${DATA_DIR}/db.json (providers, combos, aliases, keys, settings), managed by src/lib/localDb.js.
- Usage history and logs: ~/.1proxy/usage.json and ~/.1proxy/log.txt, managed by src/lib/usageDb.js.
- Optional request/translator logs: <repo>/logs/... when ENABLE_REQUEST_LOGS=true.
- Usage storage currently follows the ~/.1proxy path logic and is independent from DATA_DIR.
📊 Available Models
Claude Code (cc/) - Pro/Max:
cc/claude-opus-4-6
cc/claude-sonnet-4-5-20250929
cc/claude-haiku-4-5-20251001
Codex (cx/) - Plus/Pro:
cx/gpt-5.2-codex
cx/gpt-5.1-codex-max
Gemini CLI (gc/) - FREE:
gc/gemini-3-flash-preview
gc/gemini-2.5-pro
GitHub Copilot (gh/):
gh/gpt-5
gh/claude-4.5-sonnet
GLM (glm/) - $0.6/1M:
glm/glm-4.7
MiniMax (minimax/) - $0.2/1M:
minimax/MiniMax-M2.1
iFlow (if/) - FREE:
if/kimi-k2-thinking
if/qwen3-coder-plus
if/deepseek-r1
Qwen (qw/) - FREE:
qw/qwen3-coder-plus
qw/qwen3-coder-flash
Kiro (kr/) - FREE:
kr/claude-sonnet-4.5
kr/claude-haiku-4.5
🐛 Troubleshooting
"Language model did not provide messages"
- Provider quota exhausted → Check dashboard quota tracker
- Solution: Use combo fallback or switch to cheaper tier
Rate limiting
- Subscription quota out → Fallback to GLM/MiniMax
- Add combo: cc/claude-opus-4-6 → glm/glm-4.7 → if/kimi-k2-thinking
OAuth token expired
- Auto-refreshed by 1Proxy
- If issues persist: Dashboard → Provider → Reconnect
High costs
- Check usage stats in Dashboard
- Switch primary model to GLM/MiniMax
- Use free tier (Gemini CLI, iFlow) for non-critical tasks
Dashboard opens on wrong port
- Set PORT=11111 and NEXT_PUBLIC_BASE_URL=http://localhost:11111
First login not working
- Open the dashboard from localhost and create the initial password
- For remote first setup, set a temporary SETUP_TOKEN and include it in the login request/environment flow
No request logs under logs/
- Set ENABLE_REQUEST_LOGS=true
🛠️ Tech Stack
- Runtime: Node.js 20+
- Framework: Next.js 16
- UI: React 19 + Tailwind CSS 4
- Database: LowDB (JSON file-based)
- Streaming: Server-Sent Events (SSE)
- Auth: OAuth 2.0 (PKCE) + JWT + API Keys
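Since streaming responses arrive over SSE, a consumer has to reassemble content deltas from data: lines. A minimal sketch of that parsing, following the OpenAI streaming chunk convention (the sample body is hand-written example data, not captured output):

```javascript
// Sketch: collect content deltas from an OpenAI-style SSE stream body.
function parseSse(body) {
  const deltas = [];
  for (const line of body.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) deltas.push(delta);
  }
  return deltas.join('');
}

const body = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
].join('\n');
console.log(parseSse(body)); // → Hello
```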
📝 API Reference
Chat Completions
POST http://localhost:11111/v1/chat/completions
Authorization: Bearer your-api-key
Content-Type: application/json
{
"model": "cc/claude-opus-4-6",
"messages": [
{"role": "user", "content": "Write a function to..."}
],
"stream": true
}
List Models
GET http://localhost:11111/v1/models
Authorization: Bearer your-api-key
→ Returns all models + combos in OpenAI format
📧 Support
- npm: npmjs.com/package/@seagrass/1proxy
- GitHub: github.com/decolua/1proxy
- Issues: github.com/decolua/1proxy/issues
👥 Contributors
Thanks to all contributors who helped make 1Proxy better!
📊 Star Chart
🔀 Forks
OmniRoute — A full-featured TypeScript fork of 1Proxy. Adds 36+ providers, 4-tier auto-fallback, multi-modal APIs (images, embeddings, audio, TTS), circuit breaker, semantic cache, LLM evaluations, and a polished dashboard. 368+ unit tests. Available via npm and Docker.
🙏 Acknowledgments
Special thanks to CLIProxyAPI - the original Go implementation that inspired this JavaScript port.
📄 License
MIT License - see LICENSE for details.

