⚡ Infinicode
AI coding agent powered by your local Ollama. Fork of OpenCode optimized for self-hosted LLMs.
```bash
npm install -g infinicode

# Connect to your Ollama master
infinicode connect 192.168.1.100

# Start coding
infinicode
```

Why Infinicode?
- 100% Local — All inference on YOUR hardware
- Zero Cloud — No API keys, no subscriptions, no data leaving your network
- One-Command Setup — `infinicode connect <ip>` and you're done
- Multi-Node — Run on any machine, point to one Ollama master
Quick Start
1. Install
```bash
npm install -g infinicode
# or
pnpm add -g infinicode
```

2. Setup Ollama Master
On your GPU machine:
```bash
# Allow network connections
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# Pull a coding model
ollama pull qwen2.5-coder:14b
```

3. Connect & Code
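Setting `OLLAMA_HOST` on the command line only lasts for that shell session. If Ollama runs under systemd (the official Linux installer creates an `ollama.service` unit; the paths below assume that installer), a drop-in override makes the network binding persistent:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Drop-in override: bind Ollama to all interfaces instead of localhost only
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

Then reload and restart the service: `sudo systemctl daemon-reload && sudo systemctl restart ollama`.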
```bash
# From any machine on your network
infinicode connect 192.168.1.100   # your Ollama master's IP
infinicode                          # start coding!
```

Commands
| Command | Alias | Description |
|---------|-------|-------------|
| infinicode connect <ip> | ic c | Quick connect to Ollama master |
| infinicode setup | ic s | Interactive setup wizard |
| infinicode run | ic | Start the coding agent (default) |
| infinicode models | ic m | List available models |
| infinicode status | | Show config & connection status |
| infinicode config --list | | View all configuration |
| infinicode config --reset | | Reset to defaults |
Configuration
Config is stored automatically. Override with:
```bash
# Set the default model
infinicode config --set defaultModel=codestral:22b

# Set the master URL
infinicode config --set masterUrl=http://192.168.1.100:11434

# View all settings
infinicode config --list
```

Recommended Models
For coding tasks on Ollama:
| Model | Size | Best For |
|-------|------|----------|
| qwen2.5-coder:14b | 9GB | Great balance |
| qwen2.5-coder:32b | 20GB | Best quality |
| deepseek-coder-v2:16b | 10GB | Strong reasoning |
| codestral:22b | 13GB | Fast completion |
Architecture
```
┌─────────────────┐
│     Laptop      │
│  (infinicode)   │──┐
└─────────────────┘  │
                     │    ┌─────────────────┐
┌─────────────────┐  │    │   GPU Server    │
│     Desktop     │──┼───▶│    (Ollama)     │
│  (infinicode)   │  │    │                 │
└─────────────────┘  │    │  qwen2.5-coder  │
                     │    │    codestral    │
┌─────────────────┐  │    └─────────────────┘
│     Server      │──┘
│  (infinicode)   │
└─────────────────┘
```

Security
Exposing Ollama to your network means anyone on that network can use it.
Secure options:
- VPN/Tailscale — Only accessible on private network
- SSH Tunnel — `ssh -L 11434:localhost:11434 gpu-server`
- Firewall — Allow only specific IPs
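As a concrete sketch of the firewall option, `ufw` (Ubuntu's iptables frontend) can restrict the Ollama port to a single trusted client. ufw matches rules in the order they were added, so the specific allow must come before the broad deny; both IPs are examples to substitute with your own:

```
# Allow one trusted client first, then reject everyone else (requires root).
sudo ufw allow from 192.168.1.50 to any port 11434 proto tcp
sudo ufw deny 11434/tcp
```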
Requirements
- Node.js 20+
- OpenCode installed (`npm install -g opencode`)
- Ollama running somewhere on your network
License
MIT — Based on OpenCode
⚡ Built for sovereign computing. Your code, your models, your hardware.
