openclaw-config
v2026.2.11
Standalone CLI to add or update OpenClaw config (e.g. models.providers) without depending on the openclaw package. Only edits openclaw.json; no coupling to openclaw source.
Install
npm install -g openclaw-config
# or
npx openclaw-config add-provider ...

Usage
Add a model provider
openclaw-config add-provider <providerId> --base-url <url> --models <id1,id2,...> [options]

Options:
| Option | Description |
| ---------- | -------------------------------------------------------------------- |
| --base-url | Provider base URL (required) |
| --api-key | API key (optional) |
| --api | openai-completions or openai-responses (default: openai-completions) |
| --config | Path to openclaw.json (default: env or ~/.openclaw/openclaw.json) |
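Under the hood, add-provider merges an entry into models.providers in openclaw.json. As a rough sketch only (the field names below are an assumption for illustration; OpenClaw's own schema is authoritative), the NVIDIA example further down might produce something like:

```json
{
  "models": {
    "providers": {
      "nvidia": {
        "baseUrl": "https://integrate.api.nvidia.com/v1",
        "apiKey": "your-api-key",
        "api": "openai-completions",
        "models": ["minimaxai/minimax-m2.1", "z-ai/glm4.7"]
      }
    }
  }
}
```

Re-running the command for the same providerId updates the entry in place rather than duplicating it.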
Config path resolution (same as OpenClaw):
- OPENCLAW_CONFIG_PATH → use that file
- OPENCLAW_STATE_DIR → <dir>/openclaw.json
- Default: ~/.openclaw/openclaw.json
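The precedence above can be sketched as a small helper (a sketch only; openclaw-config's actual internals may differ):

```javascript
import os from "node:os";
import path from "node:path";

// Resolve the openclaw.json path from an environment map,
// following the precedence described above.
function resolveConfigPath(env) {
  if (env.OPENCLAW_CONFIG_PATH) return env.OPENCLAW_CONFIG_PATH;
  if (env.OPENCLAW_STATE_DIR) {
    return path.join(env.OPENCLAW_STATE_DIR, "openclaw.json");
  }
  return path.join(os.homedir(), ".openclaw", "openclaw.json");
}

console.log(resolveConfigPath(process.env));
```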
Examples
NVIDIA NIM:
openclaw-config add-provider nvidia \
--base-url "https://integrate.api.nvidia.com/v1" \
--api-key "your-api-key" \
--models "minimaxai/minimax-m2.1,z-ai/glm4.7"

LM Studio (local, default base URL):
openclaw-config add-provider lmstudio \
--base-url "http://localhost:1234/v1" \
--models "qwen/qwen3-coder-next"

Ollama:
openclaw-config add-provider ollama \
--base-url "http://127.0.0.1:11434/v1" \
--models "llama3.2"

xAI (Grok):
openclaw-config add-provider xai \
--base-url "https://api.x.ai/v1" \
--api-key "your-api-key" \
--models "grok-beta"

Minimax:
openclaw-config add-provider minimax \
--base-url "https://api.minimax.chat/v1" \
--api-key "your-api-key" \
--models "MiniMax-M2.1"

Moonshot (Kimi):
openclaw-config add-provider moonshot \
--base-url "https://api.moonshot.ai/v1" \
--api-key "your-api-key" \
--models "kimi-k2.5"

Qwen Portal:
openclaw-config add-provider qwen \
--base-url "https://portal.qwen.ai/v1" \
--api-key "your-api-key" \
--models "qwen-plus"

Qianfan (Baidu):
openclaw-config add-provider qianfan \
--base-url "https://qianfan.baidubce.com/v2" \
--api-key "your-api-key" \
--models "deepseek-v3.2"

Xiaomi:
openclaw-config add-provider xiaomi \
--base-url "https://api.xiaomimimo.com/anthropic" \
--api-key "your-api-key" \
--models "mimo-v2-flash"

Synthetic:
openclaw-config add-provider synthetic \
--base-url "https://api.synthetic.new/anthropic" \
--api-key "your-api-key" \
--models "synthetic-model"

OpenRouter:
openclaw-config add-provider openrouter \
--base-url "https://openrouter.ai/api/v1" \
--api-key "your-api-key" \
--models "openrouter/pony-alpha"

Provider that only supports /v1/responses:
openclaw-config add-provider codex \
--base-url "https://api.134300.xyz/v1" \
--api openai-responses \
--api-key "your-api-key" \
--models "gpt-5.3-codex,gpt-5.2"

Programmatic API
import { addProvider } from "openclaw-config";
const configPath = addProvider({
providerId: "nvidia",
baseUrl: "https://integrate.api.nvidia.com/v1",
apiKey: "your-api-key",
models: ["minimaxai/minimax-m2.1", "z-ai/glm4.7"],
});
console.log("Written to", configPath);

License
MIT
