@hkjang/openpro
v0.1.16
Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models
# OpenPro
OpenPro is an open-source coding-agent CLI that works with more than one model provider.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
## Why OpenPro

- Use one CLI across cloud and local model providers
- Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
## Quick Start

### Install

```shell
npm install -g @hkjang/openpro
```

### Install from a GitHub release .tgz
Release assets are built with `bun run pack:release`, which bundles the runtime npm dependencies into the tarball so `npm install -g` can install from the local file without reaching the npm registry during install.
If you downloaded a release asset such as `hkjang-openpro-x.y.z.tgz`, install it globally with:
macOS / Linux:

```shell
npm install -g ./hkjang-openpro-x.y.z.tgz
```

Windows PowerShell:

```powershell
npm install -g .\hkjang-openpro-x.y.z.tgz
```

After installation, confirm the installed build:

```shell
openpro --version
```

If the npm install path later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenPro.
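The ripgrep check can be scripted; a minimal sketch (the package-manager commands in the comments are common defaults, not OpenPro requirements):

```shell
# Sketch: check for the ripgrep binary that OpenPro's search tools depend on.
if command -v rg >/dev/null 2>&1; then
  echo "ripgrep found: $(rg --version | head -n 1)"
  RG_STATUS=ok
else
  # Install it with your platform's package manager, for example:
  #   brew install ripgrep                     (macOS)
  #   sudo apt-get install ripgrep             (Debian/Ubuntu)
  #   winget install BurntSushi.ripgrep.MSVC   (Windows)
  RG_STATUS=missing
fi
```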
### Start

```shell
openpro
```

Inside OpenPro:

- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles
- run `/onboard-github` for GitHub Models setup
## Fastest OpenAI setup

macOS / Linux:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
openpro
```

Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openpro
```

## Fastest local Ollama setup
macOS / Linux:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
openpro
```

Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
openpro
```
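Before the first run against Ollama, it can help to confirm the model exists locally and the OpenAI-compatible endpoint answers. A sketch, assuming Ollama is installed and serving on its default port:

```shell
# Sketch: pull the model and probe Ollama's OpenAI-compatible endpoint.
MODEL="qwen2.5-coder:7b"
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"                                     # fetch the model if not cached
  curl -sf http://localhost:11434/v1/models | head -c 200  # endpoint should list local models
else
  echo "ollama not installed; skipping"
fi
```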
## Setup Guides
Beginner-friendly guides:
Korean documentation set:
- Korean Docs Index
- Korean Overview
- Korean Functional Spec
- Korean `src` Folder Code Reference
- Korean `src` Folder Docs Index
- Korean `commands/services/tools/utils` Subfolder Docs Index
- Korean API Guide
- Korean Provider Matrix
- Korean Request Lifecycle Guide
- Korean Session and Transcript Storage Guide
- Korean Change Impact Map
- Korean Server Mode Guide
- Korean Feature Flag and Build Guide
- Korean Release, Packaging, and Fork Maintenance Guide
- Korean Release TGZ Install Guide
- Korean Remote Control and Bridge Guide
- Korean Auth and Credential Guide
- Korean Environment and Settings Reference
- Korean Command Cookbook
- Korean Permission and Security Matrix
- Korean Error Catalog
- Korean Troubleshooting Guide
- Korean MCP Operations Guide
- Korean Plugin and Hook Guide
- Korean Memory and Context Compaction Spec
- Korean Coding Agent Architecture Spec
- Korean Coding Agent Flow and Security Spec
Advanced and source-build guides:
## Supported Providers
| Provider | Setup Path | Notes |
| --- | --- | --- |
| OpenAI-compatible | /provider or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local /v1 servers |
| Gemini | /provider or env vars | Google Gemini support through the runtime provider layer |
| GitHub Models | /onboard-github | Interactive onboarding with saved credentials |
| Codex | /provider | Uses existing Codex credentials when available |
| Ollama | /provider or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
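The OpenAI-compatible env vars from Quick Start extend to the other `/v1` backends in the table by swapping the base URL. A sketch for OpenRouter (the model id is illustrative; check your provider's documentation for valid ids and keys):

```shell
# Sketch: route the OpenAI-compatible path to OpenRouter instead of api.openai.com.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_API_KEY=sk-or-your-key-here   # placeholder key
export OPENAI_MODEL=deepseek/deepseek-chat  # any model id your provider serves
# then launch the CLI:
# openpro
```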
## What Works

- **Tool-driven coding workflows**: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- **Streaming responses**: real-time token output and tool progress
- **Tool calling**: multi-step tool loops with model calls, tool execution, and follow-up responses
- **Images**: URL and base64 image inputs for providers that support vision
- **Provider profiles**: guided setup plus saved `.openpro-profile.json` support
- **Local and remote model backends**: cloud APIs, local servers, and Apple Silicon local inference
## Provider Notes
OpenPro supports multiple providers, but behavior is not identical across all of them.
- Anthropic-specific features may not exist on other providers
- Tool quality depends heavily on the selected model
- Smaller local models can struggle with long multi-step tool flows
- Some providers impose lower output caps than the CLI defaults, and OpenPro adapts where possible
For best results, use models with strong tool/function calling support.
## Web Search and Fetch

`WebFetch` works out of the box.
`WebSearch` and richer JS-aware fetching work best with a Firecrawl API key:

```shell
export FIRECRAWL_API_KEY=your-key-here
```

With Firecrawl enabled:

- `WebSearch` is available across more provider setups
- `WebFetch` can handle JavaScript-rendered pages more reliably

Firecrawl is optional. Without it, OpenPro falls back to the built-in behavior.
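To avoid re-exporting the key in every session, it can be appended to a shell startup file. A sketch assuming bash and `~/.bashrc` (adjust the file for your shell):

```shell
# Sketch: persist the optional Firecrawl key across shells (idempotent append).
PROFILE="$HOME/.bashrc"
LINE='export FIRECRAWL_API_KEY=your-key-here'
touch "$PROFILE"
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
```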
## Source Build

```shell
bun install
bun run build
node dist/cli.mjs
```

Helpful commands:

```shell
bun run dev
bun run smoke
bun run doctor:runtime
```
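After a build, the bundled entry point can be invoked directly to confirm it runs; a sketch (assumes `bun run build` has already produced `dist/cli.mjs` in the current directory):

```shell
# Sketch: smoke-check a source build without installing it globally.
if [ -f dist/cli.mjs ]; then
  node dist/cli.mjs --version   # should print the same version as `openpro --version`
  BUILD_STATUS=present
else
  BUILD_STATUS=absent           # run `bun run build` first
fi
```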
## VS Code Extension

The repo includes a VS Code extension in `vscode-extension/openpro-vscode` for OpenPro launch integration and theme support.
## Security

If you believe you found a security issue, see `SECURITY.md`.
## Contributing

Contributions are welcome.
For larger changes, open an issue first so the scope is clear before implementation. Helpful validation commands include:

- `bun run build`
- `bun run smoke`
- focused `bun test ...` runs for touched areas
## Disclaimer
OpenPro is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
"Claude" and "Claude Code" are trademarks of Anthropic.
## License
MIT
