@kdump/gemini-any-llm v0.0.26

Gemini Any LLM Gateway

Let the Gemini CLI access any large language model provider

For the Chinese version, see README_CN.md.

🎯 Project Overview

Gemini Any LLM Gateway is an API gateway service that lets you seamlessly access various large language model providers (OpenAI, ZhipuAI, Qwen, and others) through the Gemini CLI, with no modifications to the Gemini CLI itself.

Core Features:

  • 🔌 Plug-and-play - Fully compatible, no Gemini CLI changes required
  • 🌐 Multi-provider support - Supports Codex, Claude Code, OpenAI, ZhipuAI, Qwen, and more
  • ⚡ High-performance streaming responses - Real-time streaming output for a smooth experience
  • 🛠️ Intelligent tool calling - Complete Function Calling support
  • 📁 Flexible configuration management - Global plus project-level configuration for easy use

🚀 Quick Start

Installation

  1. Install Gemini CLI (if you haven't yet):
npm install -g @google/gemini-cli@latest --registry https://registry.npmmirror.com
npm install -g @google/gemini-cli-core@latest --registry https://registry.npmmirror.com
  2. Install this gateway:
npm install -g @kdump/gemini-any-llm@latest --registry https://registry.npmmirror.com
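
To confirm both installs before continuing, you can list the global packages and print the gateway version (gal version is documented in the command overview below):

# Verify the CLI and the gateway are installed globally
npm list -g @google/gemini-cli @kdump/gemini-any-llm

# Print the gateway version
gal version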

First Run

Run the following command to get started:

gal code

First-run flow:

  • The system automatically launches a setup wizard and asks you to choose an AI Provider (claudeCode, codex, or openai)
  • Then fill in the following based on your provider:
    • Base URL
      • OpenAI default: https://open.bigmodel.cn/api/paas/v4
      • Codex default: https://chatgpt.com/backend-api/codex
      • Claude Code default: https://open.bigmodel.cn/api/anthropic (replaceable with your own relay address, e.g. https://<host>/api)
    • Default model
      • OpenAI default: glm-4.5
      • Codex default: gpt-5-codex
      • Claude Code default: claude-sonnet-4-20250514
    • Auth mode (Codex only; supports ApiKey or ChatGPT)
    • API Key (required in OpenAI, Codex ApiKey, and Claude Code modes)
  • For Claude Code, the gateway automatically sends both x-api-key and Authorization: Bearer headers so it works with Anthropic relay services out of the box.
  • Configuration is saved to ~/.gemini-any-llm/config.yaml
  • Automatically generates or updates ~/.gemini/settings.json, setting the auth type to gemini-api-key
  • Automatically starts the background gateway service and waits for it to become ready
  • Launches the Gemini CLI for conversation

💡 Codex ChatGPT mode: If you choose Codex + ChatGPT in the wizard, the first request will prompt you to finish OAuth login in a browser. The login link appears in the terminal. After a successful login, the token is stored in ~/.gemini-any-llm/codex/auth.json. Tokens refresh automatically so you don’t need to log in again.
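
Once the wizard finishes, you can inspect the file it wrote. The commented output below is a sketch of a Codex + ChatGPT setup; the keys follow the examples in "Project-specific configuration" later in this README, and your file will reflect the provider and values you chose:

# Show the configuration the setup wizard saved
cat ~/.gemini-any-llm/config.yaml

# Illustrative contents for a Codex + ChatGPT setup:
# aiProvider: codex
# codex:
#   authMode: ChatGPT
#   baseURL: "https://chatgpt.com/backend-api/codex"
#   model: "gpt-5-codex"
#   timeout: 1800000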

Reconfigure

Run this when you need to reconfigure or switch providers:

gal auth

💡 Usage Examples

Basic conversation

# Start a conversation
gal code "Write an HTTP service in TypeScript"

# Explain code
gal code "Explain what this code does"

# Optimization tips
gal code "Help me optimize this algorithm"

Pass file content

# Analyze the code files in the current directory
gal code "Please analyze the architecture of this project"

# Request a code review
gal code "Please review my code and suggest improvements"

More options

# View all Gemini CLI options
gal code --help

# Use other Gemini CLI parameters
gal code --temperature 0.7 "Write a creative story"

📖 User Guide

Command overview

gal provides the following primary commands:

  • gal code [prompt] - Chat with the AI assistant (main feature)
  • gal auth - Configure AI service credentials
  • gal start - Manually start the background gateway service
  • gal stop - Stop the gateway service
  • gal restart - Restart the gateway service
  • gal status - Check the gateway status
  • gal kill - Force-kill stuck processes (for troubleshooting)
  • gal update - Manually check for a new release and install it
  • gal version - Display the current version
  • gal --help - Show help information
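
A typical manual session combines these commands (all taken from the list above; gal code starts the gateway on demand, so the explicit start/stop is only needed when you want to manage the service yourself):

# Start the gateway in the background and confirm it is healthy
gal start
gal status

# Chat, then shut the gateway down when finished
gal code "Summarize the TODOs in this repo"
gal stop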

Codex ChatGPT (OAuth) mode

  1. Run gal auth, choose Codex as the provider, and set the auth mode to ChatGPT in the wizard.
  2. The first time you run gal code or gal start, the terminal prints a https://auth.openai.com/oauth/authorize?... link. Copy it into a browser to complete the login.
  3. During login the CLI spins up a temporary callback service on 127.0.0.1:1455. If the port is taken, free it or try again (the CLI retries automatically and shows error reasons).
  4. After the authorization succeeds you’ll see “Login successful, you may return to the terminal.” Tokens are saved to ~/.gemini-any-llm/codex/auth.json, including access_token, refresh_token, id_token, and the refresh timestamp.
  5. The gateway refreshes tokens automatically afterwards, so you don’t need to log in again. If you delete or move auth.json, the browser login will be triggered the next time you send a request.

To customize the token directory, set the CODEX_HOME environment variable (defaults to ~/.gemini-any-llm/codex).
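
For example, to keep tokens in a project-local directory (a sketch; it assumes the gateway re-reads CODEX_HOME when restarted):

# Store Codex OAuth tokens next to the project instead of under $HOME
export CODEX_HOME="$PWD/.codex"
gal restart

# auth.json appears here after the next successful OAuth login
ls "$CODEX_HOME"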

Configuration management

The system supports a flexible configuration hierarchy. Higher priority values override lower ones:

  1. Project configuration (./config/config.yaml) - Highest priority, project-specific
  2. Global configuration (~/.gemini-any-llm/config.yaml) - Medium priority, user defaults
  3. Environment variables - Lowest priority, baseline settings
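
The sketch below sets the same key at all three levels; per the hierarchy above, the project file wins (the model names are illustrative):

# 3. Environment variable - baseline, lowest priority
export GAL_OPENAI_MODEL="glm-4.5"

# 2. Global configuration - overrides the environment variable
grep 'model:' ~/.gemini-any-llm/config.yaml    # e.g. model: "gpt-4o"

# 1. Project configuration - highest priority; this is the value used
grep 'model:' ./config/config.yaml             # e.g. model: "gpt-4"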

Supported providers

| Provider | Base URL | Recommended models |
| --- | --- | --- |
| Codex | https://chatgpt.com/backend-api/codex | gpt-5-codex |
| Claude Code | https://open.bigmodel.cn/api/anthropic (or a relay endpoint such as https://<host>/api) | claude-sonnet-4-20250514, claude-3.5-sonnet-20241022 |
| ZhipuAI (default) | https://open.bigmodel.cn/api/paas/v4 | glm-4.5 |
| OpenAI | https://api.openai.com/v1 | gpt-4, gpt-4o |
| Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen-plus, qwen-turbo |
| Other OpenAI-compatible services | Custom URL | Matching model name |
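
For instance, switching the gateway to Qwen only means pointing the OpenAI-compatible settings at that row of the table (the variable names are documented in the next section):

# Use Qwen through its OpenAI-compatible endpoint
export GAL_AI_PROVIDER="openai"
export GAL_OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export GAL_OPENAI_MODEL="qwen-plus"
export GAL_OPENAI_API_KEY="your-dashscope-key"
gal restart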

Environment variable configuration

You can also configure settings with environment variables (baseline settings with the lowest priority):

# Choose the primary provider (supports claudeCode / codex / openai)
export GAL_AI_PROVIDER="codex"

# Codex configuration
# Auth mode can be apikey / chatgpt (default apikey)
export GAL_CODEX_AUTH_MODE="chatgpt"
# Provide the API Key when using ApiKey mode; leave empty for ChatGPT mode
export GAL_CODEX_API_KEY="your-codex-api-key"
export GAL_CODEX_BASE_URL="https://chatgpt.com/backend-api/codex"
export GAL_CODEX_MODEL="gpt-5-codex"
export GAL_CODEX_TIMEOUT="1800000"
# Optional: reasoning parameters and output verbosity control
export GAL_CODEX_REASONING='{"effort":"medium"}'
export GAL_CODEX_TEXT_VERBOSITY="medium"
# Optional: custom OAuth token directory (defaults to ~/.gemini-any-llm/codex)
export CODEX_HOME="$HOME/.custom-codex"

# Claude Code configuration
export GAL_CLAUDE_CODE_API_KEY="your-claude-code-api-key"
export GAL_CLAUDE_CODE_BASE_URL="https://open.bigmodel.cn/api/anthropic"   # or the /api root of your own relay
export GAL_CLAUDE_CODE_MODEL="claude-sonnet-4-20250514"
export GAL_CLAUDE_CODE_TIMEOUT="1800000"
export GAL_CLAUDE_CODE_VERSION="2023-06-01"
export GAL_CLAUDE_CODE_BETA="claude-code-20250219,interleaved-thinking-2025-05-14"
export GAL_CLAUDE_CODE_USER_AGENT="claude-cli/1.0.119 (external, cli)"
export GAL_CLAUDE_CODE_X_APP="cli"
export GAL_CLAUDE_CODE_DANGEROUS_DIRECT="true"
export GAL_CLAUDE_CODE_MAX_OUTPUT="64000"

# OpenAI / compatible service configuration
export GAL_OPENAI_API_KEY="your-api-key"
export GAL_OPENAI_BASE_URL="https://api.openai.com/v1"
export GAL_OPENAI_MODEL="gpt-4"
export GAL_OPENAI_TIMEOUT="1800000"
# Optional: OpenAI organization ID
export GAL_OPENAI_ORGANIZATION="org-xxxxxx"

# Gateway configuration
export GAL_PORT="23062"
export GAL_HOST="0.0.0.0"
export GAL_LOG_LEVEL="info"
export GAL_GATEWAY_LOG_DIR="~/.gemini-any-llm/logs"
export GAL_DISABLE_UPDATE_CHECK="1"            # Disable automatic update prompts

# General advanced configuration
export GAL_RATE_LIMIT_MAX="100"                # API rate limit cap (per 15 minutes)
export GAL_REQUEST_TIMEOUT="3600000"           # Request timeout in milliseconds (default 1 hour)
export GAL_ALLOWED_ORIGINS="http://localhost:3000,http://localhost:8080"  # Allowed origins for CORS
export GAL_LOG_DIR="/custom/log/path"          # Custom log directory

Project-specific configuration

If you want different models or settings for a given project, create the following in the project directory:

mkdir config
cat > config/config.yaml << EOF
openai:
  apiKey: "project-specific-key"
  model: "gpt-4"
  baseURL: "https://api.openai.com/v1"
  timeout: 1800000
gateway:
  port: 23062
  host: "0.0.0.0"
  logLevel: "info"
  logDir: "./logs"
EOF

To make Codex the default provider for the project, add:

aiProvider: codex
codex:
  authMode: ApiKey
  apiKey: "project-codex-key"
  baseURL: "https://chatgpt.com/backend-api/codex"
  model: "gpt-5-codex"
  timeout: 1800000
  # Optional: customize reasoning effort and output verbosity
  reasoning:
    effort: medium
  textVerbosity: medium

For OAuth login, switch to:

aiProvider: codex
codex:
  authMode: ChatGPT
  baseURL: "https://chatgpt.com/backend-api/codex"
  model: "gpt-5-codex"
  timeout: 1800000
  reasoning:
    effort: medium
    summary: auto
  textVerbosity: medium

🔧 Detailed Configuration

API settings

  • aiProvider - Primary provider type; choose claudeCode, codex, or openai
  • codex.authMode - Codex auth mode, supports ApiKey (static key) or ChatGPT (OAuth login with automatic refresh)
  • openai.apiKey - API key for OpenAI or compatible services (required when using openai)
  • openai.baseURL - Endpoint URL for OpenAI-compatible APIs (default: ZhipuAI)
  • openai.model - Default model name (default: glm-4.5)
  • openai.timeout - Request timeout in milliseconds (default: 1800000 ≈ 30 minutes)
  • codex.apiKey - Codex API key (required only in ApiKey mode, optional in ChatGPT mode)
  • codex.baseURL - Codex API endpoint URL (default: https://chatgpt.com/backend-api/codex)
  • codex.model - Codex model name (default: gpt-5-codex)
  • codex.timeout - Codex request timeout in milliseconds (default: 1800000 ≈ 30 minutes)
  • codex.reasoning - Codex reasoning configuration, follows the Codex Responses API JSON schema
  • codex.textVerbosity - Codex text verbosity, supports low/medium/high

Gateway settings

  • gateway.port - Service port (default: 23062)
  • gateway.host - Bind address (default: 0.0.0.0)
  • gateway.logLevel - Log level: debug/info/warn/error (default: info)
  • gateway.logDir - Log directory (default: ~/.gemini-any-llm/logs)

🛠️ Troubleshooting

AI assistant not responding

Symptom: gal code hangs or shows no response

Solution:

# 1. Clean up stuck processes
gal kill

# 2. Try the conversation again
gal code "Hello"

Authentication failure

Symptom: API Key is rejected or authentication fails

Solution:

# Reconfigure credentials
gal auth

Checklist:

  • Make sure the API Key is correct and still valid
  • Verify that the baseURL matches the provider
  • Confirm that the account has sufficient quota

Service fails to start

Symptom: Gateway fails to boot or health check reports errors

Solution:

# 1. Check service status
gal status

# 2. Restart the service manually
gal restart

# 3. If issues persist, force clean up
gal kill
gal start

Checklist:

  • Check network connectivity to the AI provider
  • Ensure port 23062 is free
  • Verify the configuration file format is correct
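
To work through the port and configuration items on that checklist from a shell (lsof availability varies by platform):

# See whether anything is already listening on the default port
lsof -i :23062

# Eyeball the configuration file for obvious syntax problems
cat ~/.gemini-any-llm/config.yaml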

Port conflict

Symptom: Port 23062 is already in use

Solution:

  1. Change the port in the configuration file:
# ~/.gemini-any-llm/config.yaml
gateway:
  port: 23063  # Switch to another available port
  2. Or set it via an environment variable:
export GAL_PORT=23063

Configuration issues

Symptom: Configuration validation fails

Solution:

  1. Check the syntax in ~/.gemini-any-llm/config.yaml
  2. Make sure all required fields are filled in
  3. Validate file permissions (should be 600)

Permission issues

Symptom: Unable to read or write configuration files

Solution:

# Ensure the directory permissions are correct
chmod 700 ~/.gemini-any-llm
chmod 600 ~/.gemini-any-llm/config.yaml

Network connectivity issues

Symptom: Connection times out or reports network errors

Solution:

  1. Check your network connection
  2. Try another baseURL (for example, a local mirror)
  3. Increase the timeout:
openai:
  timeout: 1800000  # 30 minutes

View logs

To debug, inspect detailed logs:

# Tail gateway logs
tail -n 300 -f ~/.gemini-any-llm/logs/gateway-{date-time}.log

# Enable debug mode
export GAL_LOG_LEVEL=debug
gal restart

❓ FAQ

Q: What should I do when the input length exceeds the limit?

Symptom:

  • Gemini CLI shows: "Model stream ended with an invalid chunk or missing finish reason."
  • Gateway logs (~/.gemini-any-llm/logs/) contain errors such as:
InternalError.Algo.InvalidParameter: Range of input length should be [1, 98304]

Cause: The number of input tokens exceeds the default limit of the model

Solution:

  1. Increase the input limit via extraBody.max_input_tokens:
# ~/.gemini-any-llm/config.yaml or a project configuration file
openai:
  apiKey: "your-api-key"
  baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1"
  model: "qwen-plus-latest"
  extraBody:
    max_input_tokens: 200000  # Increase the input token limit
  2. Default limits for common models:
    • qwen-plus-latest: default 129,024, expandable to 1,000,000
    • qwen-plus-2025-07-28: default 1,000,000
    • Refer to vendor manuals for other models

Q: How can I switch to another AI provider?

Solution:

# Reconfigure credentials
gal auth

In the wizard, choose the provider you need. You can also preselect it with the environment variable GAL_AI_PROVIDER (claudeCode, codex, or openai).
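
For example:

# Preselect the provider before the wizard runs
GAL_AI_PROVIDER="codex" gal auth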

Common configuration examples:

  • OpenAI: https://api.openai.com/v1 + gpt-4 or gpt-4o
  • Qwen: https://dashscope.aliyuncs.com/compatible-mode/v1 + qwen-plus or qwen-turbo
  • ZhipuAI: https://open.bigmodel.cn/api/paas/v4 + glm-4.5
  • Codex: https://chatgpt.com/backend-api/codex + gpt-5-codex

Q: How do I use a different model for a specific project?

Solution: Create a config/config.yaml file in the project root:

openai:
  apiKey: "project-specific-key"
  model: "gpt-4"
  baseURL: "https://api.openai.com/v1"
  timeout: 1800000
gateway:
  logLevel: "debug"  # Use debug mode during project development

Project configuration has the highest priority and overrides global settings.

Q: What should I do if the service is unreachable or slow after it starts?

Solution:

  1. Check the service status:
gal status
  2. Verify the network connection to the AI provider
  3. Consider increasing the timeout:
openai:
  timeout: 1800000  # 30 minutes
  4. If the issue persists, restart the service:
gal restart

📚 More Resources

Automatic updates

  • Every interactive gal command checks ~/.gemini-any-llm/version.json and refreshes the cache in the background every 20 hours. Network errors during the check never block the gateway.
  • When you run gal code, the CLI pauses before launching the Gemini experience if a newer version exists and offers four options: y (update now), n (skip for this run), skip (ignore this release), or off (disable future checks and restart the gateway).
  • Run gal update at any time to synchronously refresh the cache and install the latest published package.
  • Set GAL_DISABLE_UPDATE_CHECK=1 if you need to permanently opt out of automatic checks (also available through the off option in the prompt).
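
To make the opt-out permanent, export the variable from your shell profile (a one-liner using the setting above; pick the profile file your shell actually reads):

# Disable automatic update checks in every future shell session
echo 'export GAL_DISABLE_UPDATE_CHECK=1' >> ~/.bashrc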

🙏 Acknowledgements

This project draws inspiration from claude-code-router, llxprt-code, and aio-cli. We sincerely thank these excellent open-source projects and their contributors.

🤝 Contributing

Issues and pull requests are welcome!

📄 License

Apache License 2.0