@gakr-gakr/gakrcli
v0.4.8
Gakr opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models
Gakr
Version 0.3.1 — Any model. Every tool. Zero limits.
Gakr is a terminal-first coding-agent CLI that brings the same powerful workflow to multiple LLM providers. Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex (API key or OAuth), Ollama, Atomic Chat, and other supported backends while keeping one workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
The packaged command is gakrcli.
What It Supports
Provider Ecosystem
| Provider | Authentication | Key Features |
|----------|---------------|--------------|
| Anthropic | ANTHROPIC_API_KEY or gakrcli auth login | Native Claude models with full tool support |
| OpenAI-compatible | OPENAI_API_KEY | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, Together AI, Azure OpenAI, LM Studio, and any OpenAI-compatible local server |
| Gemini | GEMINI_API_KEY or GOOGLE_API_KEY | Google Gemini 2.0+ models |
| GitHub Models | GITHUB_TOKEN / GH_TOKEN | Free tier via GitHub's model marketplace |
| NVIDIA NIMs | NVIDIA_API_KEY | Enterprise-grade models via NIM endpoint |
| Codex | CODEX_API_KEY or ~/.codex/auth.json | ChatGPT Codex backend with reasoning |
| Ollama | No API key | Local inference, full privacy |
| Atomic Chat | No API key | Apple Silicon local models |
| Bedrock | AWS credentials | Amazon Bedrock Claude models |
| Vertex AI | Google Cloud credentials | Claude on Google Cloud |
| Foundry | Anyscale credentials | Anthropic on Anyscale |
Core Features
- Tool Calling: Read, write, edit files; execute bash commands; search with grep/glob; web search and fetch
- Agent Workflows: Autonomous multi-step reasoning with tool execution loops
- MCP Integration: Connect to external tools, data sources, and services via Model Context Protocol
- Provider Profiles: Saved configurations with `.gakr-profile.json` for project-specific settings
- Streaming Output: Real-time token display for responsive interaction
- Cost Tracking: Token usage and cost monitoring per session
- Project Onboarding: Automatic context extraction and history persistence
- VS Code Extension: Integrated Control Center and workspace awareness
- Privacy-First: No telemetry, no phone-home, verified privacy build
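The agent-workflow loop named above (model call, tool execution, feed results back, repeat) can be sketched in a few lines. This is an illustrative mock: `fakeModel`, `runAgent`, and the tool registry are stand-ins, not Gakr's internal API.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelReply = { text: string; toolCalls: ToolCall[] };

// Stand-in model: requests one tool call, then finishes once a result is in history.
function fakeModel(history: string[]): ModelReply {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { text: "", toolCalls: [{ name: "read_file", args: { path: "README.md" } }] };
  }
  return { text: "done", toolCalls: [] };
}

// Stand-in tool registry keyed by tool name.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  read_file: (args) => `contents of ${String(args.path)}`,
};

// The loop: ask the model, run any requested tools, append results,
// and stop when the model returns no tool calls (or a step cap is hit).
function runAgent(prompt: string): string {
  const history: string[] = [`user:${prompt}`];
  for (let step = 0; step < 10; step++) {
    const reply = fakeModel(history);
    if (reply.toolCalls.length === 0) return reply.text;
    for (const call of reply.toolCalls) {
      history.push(`tool:${call.name}=${tools[call.name](call.args)}`);
    }
  }
  return "step limit reached";
}
```

The step cap is the important design detail: autonomous loops always need a bound so a confused model cannot spin forever.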
Install
Global Install (Recommended for Users)
```bash
npm install -g @gakr-gakr/gakrcli
```

Then run:

```bash
gakrcli
```

Requirements:
- Node.js 20 or newer
- ripgrep (`rg`) installed and in PATH
Source Build (For Development)
```bash
git clone https://github.com/gakr-gakr/gakr.git
cd gakr
bun install
bun run build
npm link   # Optional: makes gakrcli available globally
```

Requirements:
- Bun 1.3.11+ (for TypeScript build and development scripts)
- Node.js 20+ (for running the built CLI)
- TypeScript 6+ (dev dependency)

Helpful Commands:

```bash
bun run dev              # Build and run locally with hot reload
bun run smoke            # Quick build + version check
bun run doctor:runtime   # System diagnostics and provider validation
bun run verify:privacy   # Verify no telemetry/phone-home
bun run typecheck        # TypeScript type checking
bun test                 # Run test suite
```

Troubleshooting
If gakrcli reports that ripgrep / `rg` is missing:
- macOS: `brew install ripgrep`
- Ubuntu/Debian: `sudo apt-get install ripgrep`
- Windows: `winget install ripgrep` or download from ripgrep.org
- Verify with: `rg --version`

After installing, restart your terminal.
Quick Start
Set your provider configuration using environment variables, then run gakrcli. Choose your provider:
Option 1: OpenAI (Quickest Cloud Setup)
Get an API key from OpenAI Platform.
Then set these variables:
macOS / Linux:

```bash
export GAKR_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o   # or gpt-4.1, o3-mini, etc.
gakrcli
```

Windows PowerShell:

```powershell
$env:GAKR_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
gakrcli
```

Option 2: Ollama (Local, No API Key)
- Install Ollama and pull a model:

```bash
ollama pull llama3.2:3b   # or qwen2.5-coder:7b, codellama:7b, etc.
```

- Set environment variables:

macOS / Linux:

```bash
export GAKR_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.2:3b
gakrcli
```

Windows PowerShell:

```powershell
$env:GAKR_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="llama3.2:3b"
gakrcli
```

Option 3: Anthropic (Claude)
Get an API key from Anthropic Console.
```bash
export ANTHROPIC_API_KEY=sk-ant-your-key-here
export ANTHROPIC_MODEL=claude-sonnet-4-5-20251014   # or claude-3-7-sonnet
gakrcli
```

Or use the guided login:

```bash
gakrcli auth login
```

Option 4: Other Providers
For Gemini, GitHub Models, NVIDIA NIMs, Codex, DeepSeek, LM Studio, and more, see:
- Advanced Setup Guide — Full provider examples and configuration
- Provider Configuration Reference — All supported backends
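The way these options interact can be sketched as a small resolver. `pickProvider` is a hypothetical helper written for illustration, not Gakr's actual selection code, but it captures the pattern the env flags above imply.

```typescript
type Env = Record<string, string | undefined>;

// Hypothetical provider resolver (illustration only).
function pickProvider(env: Env): string {
  // GAKR_CODE_USE_OPENAI=1 routes through the OpenAI-compatible path,
  // whether the target is OpenAI itself or a local /v1 server like Ollama.
  if (env.GAKR_CODE_USE_OPENAI === "1") return "openai-compatible";
  // Otherwise an Anthropic key selects the native Claude path.
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  return "unconfigured";
}
```

Note that in the Ollama option above, `GAKR_CODE_USE_OPENAI=1` is set with no API key at all: the flag, not the key, is what selects the OpenAI-compatible path.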
Agent Routing
GakrCLI can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.
Add to `~/.gakrcli/settings.json`:

```json
{
  "agentModels": {
    "deepseek-v4-flash": {
      "base_url": "https://api.deepseek.com/v1",
      "api_key": "sk-your-key"
    },
    "gpt-4o": {
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key"
    }
  },
  "agentRouting": {
    "Explore": "deepseek-v4-flash",
    "Plan": "gpt-4o",
    "general-purpose": "gpt-4o",
    "frontend-dev": "deepseek-v4-flash",
    "default": "gpt-4o"
  }
}
```

When no routing match is found, the global provider remains the fallback.

Note: `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.
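The lookup order this implies (exact agent name, then the "default" entry, then the global provider's model) can be sketched as a one-line helper; `resolveAgentModel` is a hypothetical name, not Gakr's internal function.

```typescript
type Routing = Record<string, string>;

// Hypothetical routing lookup mirroring the agentRouting block above:
// exact agent name first, then "default", then the global provider's model.
function resolveAgentModel(agent: string, routing: Routing, globalModel: string): string {
  return routing[agent] ?? routing["default"] ?? globalModel;
}
```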
Inside Gakr
Once launched:
- Start coding: Just type your request naturally (e.g., "Refactor this function to be more readable")
- Slash commands: Type `/help` to see all commands
- Provider setup: `/provider` for guided saved-profile setup
- GitHub Models: `/onboard-github` for secure token onboarding
- Settings: `/settings` to view/modify configuration
- Clear history: `/clear` to start fresh
Gakr will automatically use tools (file operations, bash, grep, etc.) to accomplish your tasks. You'll see streaming output as it works.
Project-Level Configuration
For project-specific settings, create .gakr-profile.json in your project root. This is automatically loaded when you start Gakr in that directory.
Example profile:
```json
{
  "provider": "openai-compatible",
  "apiKey": "sk-...",
  "baseURL": "https://api.openai.com/v1",
  "model": "gpt-4o",
  "temperature": 0.7,
  "maxTokens": 8000
}
```

Initialize a profile interactively:

```bash
bun run profile:init
```

or

```bash
gakrcli /provider
```

Provider Setup Paths
| Provider | Main setup paths | Notes |
| --- | --- | --- |
| Anthropic | gakrcli auth login or ANTHROPIC_API_KEY | Default mode when no third-party provider flag is enabled |
| OpenAI-compatible | env vars, --provider openai, /provider, bun run profile:init -- --provider openai | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, Together AI, Azure OpenAI, LM Studio, and compatible local /v1 servers |
| Gemini | env vars, --provider gemini, /provider, bun run profile:init -- --provider gemini | Uses GEMINI_API_KEY or GOOGLE_API_KEY |
| GitHub Models | --provider github, /onboard-github, GITHUB_TOKEN / GH_TOKEN | Runtime default model falls back to github:copilot -> openai/gpt-4.1 when OPENAI_MODEL is unset |
| NVIDIA NIMs | env vars, /provider, bun run profile:init -- --provider nvidia | Uses dedicated NVIDIA_* env vars and defaults to https://integrate.api.nvidia.com/v1 |
| Codex | env vars, /provider, bun run profile:init -- --provider codex, bun run dev:codex | Reads ~/.codex/auth.json by default or uses CODEX_API_KEY plus CHATGPT_ACCOUNT_ID / CODEX_ACCOUNT_ID |
| Ollama | env vars, --provider ollama, /provider, bun run profile:init -- --provider ollama | Local provider, no API key required |
| Atomic Chat | env vars, bun run profile:init -- --provider atomic-chat, bun run dev:atomic-chat | Uses local http://127.0.0.1:1337/v1; Atomic Chat must be running with a model loaded |
| Bedrock / Vertex / Foundry | env vars | Supported through the runtime provider layer |
Notes:
- The direct CLI `--provider` flag currently supports `anthropic`, `openai`, `gemini`, `github`, `bedrock`, `vertex`, and `ollama`.
- Saved provider profiles are stored in `.gakr-profile.json` in the current working directory and are loaded automatically on startup unless explicit provider env flags override them.
- `profile:recommend` and `profile:auto` currently choose between Ollama and OpenAI-compatible profiles based on availability and goal.
Provider Profiles And Helpers
One-time profile bootstrap:
```bash
bun run profile:init
```

Examples:

```bash
bun run profile:init -- --provider openai --api-key sk-your-key --model gpt-4o
bun run profile:init -- --provider ollama --model qwen2.5-coder:7b
bun run profile:init -- --provider gemini --api-key your-key --model gemini-2.0-flash
bun run profile:init -- --provider nvidia --api-key nvapi-your-key
bun run profile:init -- --provider codex --model codexplan
bun run profile:init -- --provider atomic-chat
```

Recommendation helpers:

```bash
bun run profile:recommend -- --goal coding --benchmark
bun run profile:auto -- --goal latency
```

Launch through saved or explicit profiles:

```bash
bun run dev:profile
bun run dev:openai
bun run dev:gemini
bun run dev:ollama
bun run dev:codex
bun run dev:atomic-chat
```

`dev:profile`, `dev:openai`, `dev:gemini`, `dev:ollama`, `dev:codex`, and `dev:atomic-chat` all run the runtime doctor before launching.
Architecture Overview
Gakr is built with a modular, layered architecture emphasizing separation of concerns, testability, and extensibility.
High-Level Structure
```
gakr/
├── src/
│   ├── entrypoints/     # CLI entrypoint (main.tsx)
│   ├── cli/             # CLI transport, I/O, update handler
│   ├── commands/        # Slash commands (/help, /provider, etc.)
│   ├── tools/           # Tool implementations (read_file, bash, grep, etc.)
│   ├── services/        # Provider integrations (OpenAI, Anthropic, etc.)
│   ├── assistant/       # Agent session management
│   ├── bridge/          # Remote execution (codespaces, web)
│   ├── query/           # QueryEngine — orchestrates LLM calls
│   ├── context/         # Context management and compression
│   ├── state/           # Global state (app state, settings)
│   ├── hooks/           # React hooks (if using Ink/React)
│   ├── components/      # React UI components
│   ├── screens/         # Full-screen UI views
│   ├── skills/          # Skill definitions (auto-improvement, etc.)
│   ├── types/           # TypeScript type definitions
│   ├── utils/           # Utility functions
│   ├── constants/       # Constants and configuration
│   ├── migrations/      # Database migrations (if any)
│   ├── plugins/         # Plugin system infrastructure
│   ├── keybindings/     # Vim/Emacs keybinding support
│   ├── outputStyles/    # Terminal styling and markup
│   ├── proactive/       # Proactive hints and suggestions
│   ├── moreright/       # Right-side panel UI (buddy, etc.)
│   ├── inks/            # Ink-based UI components
│   ├── upstreamproxy/   # Proxy for API requests
│   ├── voice/           # Voice input (disabled in open build)
│   ├── native-ts/       # Native module bindings
│   └── memdir/          # Memory directory abstraction
├── scripts/             # Build, provider, and maintenance scripts
├── bin/                 # CLI wrapper (gakrcli)
├── dist/                # Built output (cli.mjs)
├── vscode-extension/    # VS Code extension
├── docs/                # Documentation (user guides)
├── assets/              # Bundled assets (skills, rules, agents)
└── graphify/            # Auto-generated codebase knowledge graph
```

The project uses React (Ink) for terminal UI, TypeScript for type safety, Bun for build scripts, and Commander for CLI argument parsing. The provider layer uses a unified runtime that selects among multiple backends (OpenAI, Anthropic, etc.) via environment flags.
Key Modules
- `src/entrypoints/main.tsx` — App startup, React renderer
- `src/commands.ts` — Slash command implementations
- `src/tools.ts` — Tool definitions and invocation
- `src/query.ts` — Query processing, tool loop
- `src/QueryEngine.ts` — LLM interaction, tool calling
- `src/AssistantSessionChooser.tsx` — Agent session management
- `src/context.ts` — Context window management
- `src/cost-tracker.ts` — Token tracking and cost estimation
- `src/history.ts` — Conversation persistence
- `src/services/api/` — Provider-specific API clients
- `src/bridge/` — Remote execution (VS Code, codespaces)
- `src/mcp/` — Model Context Protocol integration
The graphify/ folder (auto-generated) contains a knowledge graph of the codebase with 11,805 nodes and 21,694 edges across 634 functional communities. It's a tool for navigating and understanding the code structure.
Web Search And Fetch
WebSearch behavior depends on the active provider and model:
- Anthropic-native, Vertex, and Foundry backends keep native provider web search behavior.
- Codex responses mode uses the Codex `web_search` tool through the `/responses` API.
- On non-native providers with non-gakrcli models, Gakr falls back to DuckDuckGo scraping by default.

If `FIRECRAWL_API_KEY` is set, Gakr can use Firecrawl for non-native search/fetch flows:

```bash
export FIRECRAWL_API_KEY=your-key-here
```

WebFetch behavior:
- With Firecrawl enabled, it uses Firecrawl scrape-to-markdown.
- Without Firecrawl, it uses HTTP fetch plus HTML-to-Markdown conversion.
- Authenticated pages and JavaScript-heavy apps are still unreliable without a specialized MCP tool or Firecrawl.
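The non-Firecrawl path above amounts to a plain HTTP fetch followed by HTML-to-Markdown conversion. A toy version of that conversion step, for illustration only (real converters handle nesting, entities, and many more tags):

```typescript
// Naive HTML-to-Markdown pass: headings, links, then strip remaining tags.
function htmlToMarkdown(html: string): string {
  return html
    .replace(/<h1[^>]*>(.*?)<\/h1>/gi, "# $1\n")                      // <h1> -> "# ..."
    .replace(/<a [^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gi, "[$2]($1)")  // links -> [text](url)
    .replace(/<[^>]+>/g, "")                                          // drop other tags
    .trim();
}
```

This also makes the stated limitation concrete: a JavaScript-heavy page returns almost no meaningful HTML to begin with, so no amount of conversion recovers the content.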
Diagnostics And Validation
Useful commands:
```bash
bun run smoke
bun run doctor:runtime
bun run doctor:runtime:json
bun run doctor:report
bun run hardening:check
bun run hardening:strict
```

`doctor:runtime` validates:
- Node version and build artifacts
- Provider env configuration
- Remote provider reachability
- Codex auth requirements
- Local Ollama mode checks when applicable
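As a flavor of the first check in that list, a Node version gate might look like the sketch below; `checkNodeVersion` is a hypothetical helper, and the real doctor validates much more than a version number.

```typescript
// Hypothetical check: parse "vMAJOR.MINOR.PATCH" and compare the major version
// against the documented minimum (Node.js 20).
function checkNodeVersion(version: string, minMajor = 20): boolean {
  const major = Number(version.replace(/^v/, "").split(".")[0]);
  return Number.isFinite(major) && major >= minMajor;
}
```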
CLI Notes
Useful entry points exposed by the current CLI include:
- `gakrcli --help`
- `gakrcli auth`
- `gakrcli doctor`
- `gakrcli mcp`
- `gakrcli plugin`
- `gakrcli update`
Repo-local startup also loads a repo-root .env file before launching when one exists. That is convenient for source builds, but shell or system environment variables remain the portable setup path for global installs.
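The `.env` loading described above boils down to parsing `KEY=VALUE` lines. A minimal sketch, for illustration (real loaders also handle quoting, `export` prefixes, and multiline values):

```typescript
// Parse KEY=VALUE lines, skipping blanks and # comments.
function parseDotEnv(contents: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;                           // ignore malformed lines
    out[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return out;
}
```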
VS Code Extension
This repo also includes a VS Code extension in vscode-extension/gakr-vscode with:
- a Control Center activity view
- project-aware launch behavior
- workspace profile visibility for `.gakr-profile.json`
- a built-in `Gakr Terminal Black` theme
Documentation
Beginner-friendly guides:
Advanced and conceptual guides:
- Advanced Setup — Comprehensive provider configuration
- Self-Improvement Architecture — How Gakr learns and optimizes
- Android Install — Run on Android (Termux)
- Local Agent Playbook — Quick reference for daily use
Security And Contributing
Project Note
Gakr is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
License
See LICENSE.
