codemaxxing
v1.1.1
Open-source terminal coding agent. Connect any LLM. Max your code.
codemaxxing 💪
your code. your model. no excuses.
Open-source terminal coding agent. Connect any LLM — local or remote — and start building. Like Claude Code, but you bring your own model.
🆕 v1.1.0: Use GPT-5.4 with your ChatGPT Plus subscription — no API key needed. Just /login → OpenAI → OAuth. Same access as Codex CLI.
Why?
Every coding agent locks you into their API. Codemaxxing doesn't. Run it with LM Studio, Ollama, OpenRouter, OpenAI, Anthropic, or any OpenAI-compatible endpoint. Your machine, your model, your rules.
Install
If you have Node.js:
```shell
npm install -g codemaxxing
```
If you don't have Node.js:
The one-line installers below will install Node.js first, then codemaxxing.
Linux / macOS:
```shell
bash -c "$(curl -fsSL https://raw.githubusercontent.com/MarcosV6/codemaxxing/main/install.sh)"
```
Windows (CMD as Administrator):
```shell
curl -fsSL -o %TEMP%\install-codemaxxing.bat https://raw.githubusercontent.com/MarcosV6/codemaxxing/main/install.bat && %TEMP%\install-codemaxxing.bat
```
Windows (PowerShell as Administrator):
```shell
curl -fsSL -o $env:TEMP\install-codemaxxing.bat https://raw.githubusercontent.com/MarcosV6/codemaxxing/main/install.bat; & $env:TEMP\install-codemaxxing.bat
```
Windows note: If Node.js was just installed, you may need to close and reopen your terminal, then run `npm install -g codemaxxing` manually. This is a Windows PATH limitation.
Updating
```shell
npm update -g codemaxxing
```
If that doesn't get the latest version, reinstall explicitly:
```shell
npm install -g codemaxxing@latest
```
Then verify:
```shell
codemaxxing --version
```
Quick Start
Option A — easiest local setup
If you already have a local server running, Codemaxxing auto-detects common defaults:
- LM Studio on http://localhost:1234/v1
- Ollama on http://localhost:11434
- vLLM on http://localhost:8000
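To check manually which of those defaults is live, a quick probe (illustrative; the agent does this detection for you) might look like:

```shell
# Probe the default local endpoints that auto-detection checks.
# /v1/models is the OpenAI-compatible model list; /api/tags is Ollama's.
for url in http://localhost:1234/v1/models http://localhost:11434/api/tags http://localhost:8000/v1/models; do
  if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
    echo "server responding at $url"
  fi
done
echo "probe complete"
```

If nothing is running, the loop prints only the final line.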
For LM Studio:
- Download LM Studio
- Load a coding model (for example Qwen 2.5 Coder 7B for a lightweight test)
- Start the local server
- Run:
```shell
codemaxxing
```
Option B — no local model yet
Just run:
```shell
codemaxxing
```
If no LLM is available, Codemaxxing can guide you through:
- detecting your hardware
- recommending a model
- installing Ollama
- downloading the model
- connecting automatically
Option C — ChatGPT Plus (GPT-5.4, easiest cloud option)
If you have a ChatGPT Plus subscription, get instant access to GPT-5.4 with zero API costs:
```shell
codemaxxing login
# → Pick "OpenAI"
# → Pick "OpenAI (ChatGPT)"
# → Browser opens, log in with your ChatGPT account
# → Done — you now have GPT-5.4, GPT-5, o3, o4-mini
codemaxxing
# → /model → pick gpt-5.4
```
No API key required. Uses your ChatGPT subscription limits instead.
Option D — other cloud providers
Authenticate first:
```shell
codemaxxing login
```
Then run:
```shell
codemaxxing
```
Authentication
One command to connect any provider:
```shell
codemaxxing login
```
Interactive setup walks you through it. Or use /login inside the TUI.
Supported auth methods:
| Provider | Methods |
|----------|---------|
| OpenRouter | OAuth (browser login) or API key — one login, 200+ models |
| Anthropic | Link your Claude subscription (via Claude Code) or API key |
| OpenAI | Import from Codex CLI or API key |
| Qwen | Import from Qwen CLI or API key |
| GitHub Copilot | Device flow (browser) |
| Google Gemini | API key |
| Any provider | API key + custom base URL |
```shell
codemaxxing login              # Interactive provider picker
codemaxxing auth list          # See saved credentials
codemaxxing auth remove <name> # Delete a credential
codemaxxing auth openrouter    # Direct OpenRouter OAuth
```
Credentials stored securely in ~/.codemaxxing/auth.json (owner-only permissions).
Advanced Setup
With a remote provider (OpenAI, OpenRouter, etc.):
```shell
codemaxxing --base-url https://api.openai.com/v1 --api-key sk-... --model gpt-5
```
With a saved provider profile:
```shell
codemaxxing --provider openrouter
```
Auto-detected local servers: LM Studio (:1234), Ollama (:11434), vLLM (:8000)
Features
🔥 Streaming Tokens
Real-time token display. See the model think, not just the final answer.
⚠️ Tool Approval + Diff Preview
Dangerous operations require your approval. File writes show a unified diff of what will change before you say yes. Press y to allow, n to deny, a to always allow.
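A write approval shows a standard unified diff before anything touches disk; for example (hypothetical file and change):

```diff
--- api.ts
+++ api.ts
@@ -1,3 +1,4 @@
-export function fetchUser(id) {
+export function fetchUser(id: string) {
+  if (!id) throw new Error("missing id");
   return db.users.get(id);
 }
```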
🏗️ Architect Mode
Dual-model planning. A "planner" model reasons through the approach, then your editor model executes the changes.
- /architect — toggle on/off
- /architect claude-3-5-sonnet — set the planner model
- Great for pairing expensive reasoning models with fast editors
🧠 Skills System (21 Built-In)
Downloadable skill packs that teach the agent domain expertise. Ships with 21 built-in skills and a menu-first /skills flow so you can browse instead of memorizing names:
| Category | Skills |
|----------|--------|
| Frontend | react-expert, nextjs-app, tailwind-ui, svelte-kit |
| Mobile | react-native, swift-ios, flutter |
| Backend | python-pro, node-backend, go-backend, rust-systems |
| Data | sql-master, supabase |
| Practices | typescript-strict, api-designer, test-engineer, doc-writer, security-audit, devops-toolkit, git-workflow |
| Game Dev | unity-csharp |
```shell
/skills           # Browse & install from registry
/skills install X # Quick install
/skills on/off X  # Toggle per session
```
📋 CODEMAXXING.md — Project Rules
Drop a CODEMAXXING.md in your project root for project-specific instructions. It gets loaded automatically for that project.
🔧 Auto-Lint
Automatically runs your linter after every file edit and feeds errors back to the model for auto-fix. Detects eslint, biome, ruff, clippy, golangci-lint, and more.
- /lint on / /lint off — toggle (ON by default)
📂 Smart Context (Repo Map)
Scans your codebase and builds a map of functions, classes, and types. The model knows what exists where without reading every file.
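As a rough sketch of the idea (assuming a naive pattern scan; the real scanner presumably understands syntax rather than matching text):

```shell
# Crude approximation of a repo-map pass: list named symbols per file.
# Writes a throwaway TypeScript file, then greps for declarations.
tmp=$(mktemp -d)
cat > "$tmp/store.ts" <<'EOF'
export class SessionStore {}
export function loadSession() {}
EOF
grep -hoE '(class|function) +[A-Za-z_]+' "$tmp"/*.ts
rm -r "$tmp"
```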
📦 Context Compression
When conversation history gets too large, older messages are automatically summarized to free up context.
💰 Cost Tracking
Per-session token usage and estimated cost in the status bar. Pricing for 20+ common models. Saved to session history.
🖥️ Headless/CI Mode
Run codemaxxing in scripts and pipelines without the TUI:
```shell
codemaxxing exec "add error handling to api.ts"
codemaxxing exec --auto-approve "fix all lint errors"
echo "add tests" | codemaxxing exec
```
🔀 Git Integration
Opt-in git commands built in:
- /commit <message> — stage all + commit
- /push — push to remote
- /diff — show changes
- /undo — revert last codemaxxing commit
- /git on / /git off — toggle auto-commits
💾 Session Persistence
Conversations auto-save to SQLite. Pick up where you left off:
- /sessions — list past sessions
- /session delete — remove a session
- /resume — interactive session picker
🔌 MCP Support (Model Context Protocol)
Connect to external tools via the MCP standard: databases, GitHub, Slack, browsers, and more.
- /mcp — show connected servers
- /mcp add github npx -y @modelcontextprotocol/server-github — add a server
- /mcp tools — list available MCP tools
🖥️ Zero-Setup Local LLM
First time with no LLM? Codemaxxing walks you through it:
- Detects your hardware (CPU, RAM, GPU)
- Recommends coding models that fit your machine
- Installs Ollama automatically
- Downloads the model with a progress bar
- Connects and drops you into coding mode
No googling, no config files, no decisions. Just run codemaxxing.
🦙 Ollama Management
Full Ollama control from inside codemaxxing:
- /ollama — status, installed models, GPU usage
- /ollama pull — interactive model picker + download
- /ollama delete — pick and remove models
- /ollama start / /ollama stop — server management
- Exit warning when Ollama is using GPU memory
🔄 Multi-Provider
Switch models mid-session with an interactive picker:
- /model — browse and switch models
- /model gpt-5 — switch directly by name
- Native Anthropic API support (not just OpenAI-compatible)
🎨 14 Themes
/theme to browse: cyberpunk-neon, dracula, gruvbox, nord, catppuccin, tokyo-night, one-dark, rose-pine, synthwave, blood-moon, mono, solarized, hacker, acid
🔐 Authentication
One command to connect any LLM provider. OpenRouter OAuth, Anthropic subscription linking, Codex/Qwen CLI import, GitHub Copilot device flow, or manual API keys.
📋 Smart Paste
Multi-line pastes collapse into [Pasted text #1 +N lines] badges instead of dumping raw text into the input box. This was specifically hardened for bracketed-paste terminal weirdness.
⌨️ Slash Commands
Type / for autocomplete suggestions. Arrow keys to navigate, Tab or Enter to select.
Commands
| Command | Description |
|---------|-------------|
| /help | Show all commands |
| /connect | Retry LLM connection |
| /login | Interactive auth setup |
| /model | Browse & switch models (picker) |
| /architect | Toggle architect mode / set model |
| /skills | Browse, install, manage skills |
| /lint on/off | Toggle auto-linting |
| /mcp | MCP server status & tools |
| /ollama | Ollama status, models & GPU |
| /ollama pull | Download a model (picker) |
| /ollama delete | Remove a model (picker) |
| /ollama start/stop | Server management |
| /theme | Switch color theme |
| /map | Show repository map |
| /sessions | List past sessions |
| /session delete | Delete a session |
| /resume | Resume a past session |
| /reset | Clear conversation |
| /context | Show message count + tokens |
| /diff | Show git changes |
| /commit <msg> | Stage all + commit |
| /push | Push to remote |
| /undo | Revert last codemaxxing commit |
| /git on/off | Toggle auto-commits |
| /quit | Exit |
CLI
```shell
codemaxxing                          # Start TUI
codemaxxing login                    # Auth setup
codemaxxing auth list                # Show saved credentials
codemaxxing exec "prompt"            # Headless mode (no TUI)
codemaxxing exec --auto-approve "x"  # Skip approval prompts
codemaxxing exec --json "x"          # JSON output for scripts
echo "fix tests" | codemaxxing exec  # Pipe from stdin
```
Flags:
```shell
-m, --model <model>      Model name to use
-p, --provider <name>    Provider profile from config
-k, --api-key <key>      API key for the provider
-u, --base-url <url>     Base URL for the provider API
-h, --help               Show help
```
Config
Settings are stored in ~/.codemaxxing/settings.json:
```json
{
  "provider": {
    "baseUrl": "http://localhost:1234/v1",
    "apiKey": "not-needed",
    "model": "auto"
  },
  "providers": {
    "local": {
      "name": "Local (LM Studio/Ollama)",
      "baseUrl": "http://localhost:1234/v1",
      "apiKey": "not-needed",
      "model": "auto"
    },
    "openrouter": {
      "name": "OpenRouter",
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "sk-or-...",
      "model": "anthropic/claude-sonnet-4-6"
    },
    "openai": {
      "name": "OpenAI",
      "baseUrl": "https://api.openai.com/v1",
      "apiKey": "sk-...",
      "model": "gpt-5"
    }
  },
  "defaults": {
    "autoApprove": false,
    "maxTokens": 8192
  }
}
```
Tools
Built-in tools:
- read_file — Read file contents (safe)
- write_file — Write/create files (requires approval, shows diff)
- edit_file — Apply surgical patches to files (preferred for targeted changes)
- list_files — List directory contents (safe)
- search_files — Search for patterns across files (safe)
- run_command — Execute shell commands (requires approval)
Plus any tools from connected MCP servers (databases, APIs, GitHub, etc.)
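For context, built-in tools like these are typically advertised to the model as JSON schemas in the OpenAI function-calling style. A hypothetical shape for read_file (field values are assumptions, not codemaxxing's exact definition):

```json
{
  "type": "function",
  "function": {
    "name": "read_file",
    "description": "Read the contents of a file",
    "parameters": {
      "type": "object",
      "properties": {
        "path": { "type": "string", "description": "Path relative to the project root" }
      },
      "required": ["path"]
    }
  }
}
```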
Project Context
Drop a CODEMAXXING.md file in your project root to give the model extra context about your codebase, conventions, or instructions. It's automatically included in the system prompt.
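A minimal, hypothetical example of such a file:

```markdown
# Project rules

- TypeScript strict mode; avoid `any`.
- Run `npm test` before proposing a commit.
- API routes live in `src/api/`; keep business logic in `src/services/`.
```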
Stack
- Runtime: Node.js + TypeScript
- TUI: Ink (React for the terminal)
- LLM SDKs: OpenAI SDK + Anthropic SDK
- MCP: @modelcontextprotocol/sdk
- Sessions: better-sqlite3
- Local LLM: Ollama integration (auto-install, pull, manage)
- Zero cloud dependencies — everything runs locally unless you choose a remote provider
Inspired By
Built by studying the best:
- Aider — repo map concept, auto-commit
- Claude Code — permission system, paste handling
- OpenCode — multi-provider, SQLite sessions
License
MIT
