@lucasonline0/leblonde v0.1.9

Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models.
# Leblonde
Leblonde is a terminal coding assistant for people who want direct control over model providers, local runtimes, and code workflows.
This repository keeps the entry docs focused on setup and development so you can get running fast and shape Leblonde in your own direction.
## Install

```shell
npm install -g leblonde
```

## Start

```shell
leblonde
```

Inside the CLI, the easiest path is:
- run `/provider` to save a provider profile
- start working in your project
If you prefer environment variables instead of the guided setup, use one of the minimal examples below.
### OpenAI-compatible example

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
leblonde
```

### Local Ollama example
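Before pointing Leblonde at a local runtime, it can help to confirm the endpoint is actually up. A minimal sketch, assuming `curl` is available and that Ollama serves its OpenAI-compatible API at the default address (`http://localhost:11434/v1`):

```shell
# Sketch: confirm the local Ollama endpoint answers before launching.
if curl -sf http://localhost:11434/v1/models >/dev/null 2>&1; then
  echo "ollama reachable"
else
  echo "ollama not reachable - start it with 'ollama serve' first"
fi
```

If the check fails, start Ollama and pull your model before running the example below.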
```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
leblonde
```

## What you get
- terminal-first coding workflow
- file read, edit, grep, glob, and bash tools
- agents, tasks, MCP, and slash commands
- provider profiles for repeatable launches
- support for cloud and local model backends
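The "provider profiles" item above is handled by the guided `/provider` flow; the same effect can be approximated by hand with a sourced env file. A sketch only — the path and file format here are illustrative, not Leblonde's actual profile storage:

```shell
# Hypothetical by-hand profile; /provider manages this for you in practice.
profile=/tmp/leblonde-openai.env
cat > "$profile" <<'EOF'
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=gpt-4o
EOF

# Source the profile, then launch as usual.
. "$profile"
echo "model: $OPENAI_MODEL"
```

Sourcing the file before each launch gives the same repeatable setup as exporting the variables one by one.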
## Development

Run from the repository root:

```shell
bun install
bun run build
bun run dev
```

Common checks:

```shell
bun test
bun run smoke
bun run doctor:runtime
```

## Documentation
### Provider Notes
The runtime already wires in OpenAI-compatible endpoints, Gemini, GitHub Models, Codex, Ollama, and other provider paths.
Use the quick starts for the shortest path. Use the advanced guide when you want source builds, saved profiles, diagnostics, or more control over provider launch behavior.
## Security
If you believe you have found a security issue, see SECURITY.md.
## License
See LICENSE.
