@chocks-dev/locode v0.1.13
# Locode
> **Alpha Software — Use at Your Own Risk.** Locode is under active development and has not been validated for production use. Interfaces, configuration formats, and behaviours may change without notice between releases. It is provided as-is, without warranty of any kind. Use in critical or production environments is not recommended at this stage.
Local-first AI coding CLI. Routes simple tasks to a local LLM (Ollama), complex tasks to Claude. Saves tokens.
## Demo
⭐ If you find the idea interesting, please consider starring the repo. It helps a lot!
## Quick Start

```sh
npm install -g @chocks-dev/locode
locode setup   # installs Ollama, picks a model, saves API key
locode         # start chatting
```

## Architecture
```
User CLI
   │
   ▼
Routing Logic
   │
   ├── Local LLM (fast tasks)
   │
   └── Claude (complex reasoning)
```

## Commands
| Command | Description |
|---------|-------------|
| `locode` | Interactive REPL (default) |
| `locode run "<prompt>"` | Single-shot task execution |
| `locode setup` | First-run wizard (Ollama + model + API key) |
| `locode install [model]` | Pull a specific Ollama model |
| `locode update` | Update locode to the latest version |
| `locode benchmark` | Compare token cost across routing modes |
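The regex-based routing shown in the architecture diagram can be sketched roughly as follows. This is an illustrative TypeScript sketch, not locode's actual implementation; the rule patterns and the names `Rule` and `routeTask` are invented for the example.

```typescript
// Illustrative routing sketch; names and patterns are invented,
// not locode's real code.
type Target = "local" | "claude";

interface Rule {
  pattern: RegExp;
  target: Target;
}

// Toy rules: quick lookups stay local, heavy edits escalate.
const rules: Rule[] = [
  { pattern: /\b(grep|ls|cat|rename)\b/i, target: "local" },
  { pattern: /\b(refactor|architect|redesign)\b/i, target: "claude" },
];

function routeTask(prompt: string): Target {
  for (const rule of rules) {
    if (rule.pattern.test(prompt)) return rule.target;
  }
  return "claude"; // unmatched prompts escalate to the stronger model
}

console.log(routeTask("grep for TODO comments"));   // local
console.log(routeTask("refactor the auth module")); // claude
```

In a real router, the rule list would come from configuration rather than being hard-coded.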
## Flags

```sh
locode chat --claude-only            # skip local, send everything to Claude
locode chat --local-only             # skip Claude, use Ollama only
locode chat --config ./custom.yaml   # use a custom config file
locode benchmark --prompt "build a REST API" --output report.html
```

If no `ANTHROPIC_API_KEY` is set, locode automatically runs in local-only mode.
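That fallback amounts to a presence check on the environment variable. A minimal sketch of the decision (the function name `effectiveMode` is made up for illustration):

```typescript
// Sketch of the local-only fallback decision; not the actual locode code.
function effectiveMode(env: Record<string, string | undefined>): "hybrid" | "local-only" {
  return env.ANTHROPIC_API_KEY ? "hybrid" : "local-only";
}

console.log(effectiveMode({ ANTHROPIC_API_KEY: "sk-test" })); // hybrid
console.log(effectiveMode({}));                               // local-only
```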
## Config

Edit `locode.yaml` for routing rules, models, and thresholds:

- `local_llm.model` — Ollama model (default: `qwen3:8b`)
- `routing.rules` — regex patterns that route tasks to local or Claude
- `routing.escalation_threshold` — confidence below this escalates to Claude
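Put together, a `locode.yaml` using those keys might look like the sketch below. The nesting and the sample values (threshold, patterns) are assumptions for illustration; check the generated default file for the real schema.

```yaml
local_llm:
  model: qwen3:8b              # Ollama model for local tasks
routing:
  escalation_threshold: 0.6    # assumed sample value
  rules:
    - pattern: "\\b(grep|ls|cat)\\b"   # route matches to the local model
      target: local
    - pattern: "\\brefactor\\b"        # route matches to Claude
      target: claude
```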
Type `stats` in the REPL to see token usage and estimated savings.
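The estimated-savings figure presumably values locally handled tokens at what they would have cost on Claude. A toy version of that arithmetic (the per-million-token price here is a placeholder, not Anthropic's actual rate):

```typescript
// Toy savings estimate; $3 per million tokens is a made-up placeholder price.
function estimatedSavingsUSD(localTokens: number, pricePerMillionUSD: number): number {
  return (localTokens / 1_000_000) * pricePerMillionUSD;
}

// 250k tokens served locally at the placeholder rate:
console.log(estimatedSavingsUSD(250_000, 3)); // 0.75
```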
## Telemetry (Opt-in)
Telemetry is off by default. To opt in, export in your shell profile:

```sh
export SENTRY_DSN="https://[email protected]/456"
```

When enabled, locode captures unhandled exceptions and samples 20% of performance traces.
Never sent: prompts, API keys, file contents. Unset `SENTRY_DSN` to disable.
## Development
```sh
git clone https://github.com/chocks/locode && cd locode
npm install
npm run dev     # run with ts-node
npm test        # vitest
npm run build   # tsc → dist/
```

## Project Structure
```
src/
  cli/           # REPL, setup, install, update, benchmark
  config/        # Zod schema + YAML loader
  agents/        # LocalAgent (Ollama) + ClaudeAgent (Anthropic SDK)
  orchestrator/  # Router + Orchestrator
  tools/         # readFile, shell (allow-list), git
  tracker/       # Token usage + cost estimation
```

## E2E Tests
End-to-end tests verify the full CLI pipeline by spawning locode against lightweight HTTP stub servers that mimic Ollama and Anthropic APIs. No external services required.
Prerequisites: Build the project first — E2E tests run the compiled CLI.
```sh
npm run build
npm run test:e2e
```

The tests verify:

- Simple prompts (e.g., `grep`) route to the local LLM
- Complex prompts (e.g., `refactor`) route to Claude
- A missing API key triggers the local-only fallback
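In the same spirit as those stub servers, a minimal HTTP stub can be built with Node's `http` module. The endpoint path and payload shape below are assumptions loosely modeled on Ollama's generate API, not the project's actual fixtures:

```typescript
import * as http from "node:http";

// Canned reply table; paths and payloads are illustrative assumptions.
function stubReply(url: string): { status: number; body: string } {
  if (url === "/api/generate") {
    return { status: 200, body: JSON.stringify({ response: "stubbed completion" }) };
  }
  return { status: 404, body: JSON.stringify({ error: "not stubbed" }) };
}

const server = http.createServer((req, res) => {
  const { status, body } = stubReply(req.url ?? "/");
  res.writeHead(status, { "Content-Type": "application/json" });
  res.end(body);
});

// Listen on an ephemeral port, then shut down; real tests would
// spawn the CLI against the chosen port before closing.
server.listen(0, () => {
  server.close();
});
```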
## Contributing

- Fork and branch from `main` — never commit directly
- TDD — write a failing test first, then implement
- Run `npm test && npm run build` before opening a PR
- One feature/fix per PR
## Releasing

Releases are tag-driven — CI publishes to npm on a `v*` tag push.
```sh
git checkout -b release/vX.Y.Z
npm run release:patch   # bump package.json
git add package.json package-lock.json
git commit -S -m "chore: release vX.Y.Z"
gh pr create --fill
# after merge:
git checkout main && git pull
git tag -s "vX.Y.Z" -m "Release vX.Y.Z"
git push origin "vX.Y.Z"
```