lo-cool v2.0.0
🚀 lo-cool
A local-first, zero-trust AI coding harness for constrained environments. Run Claude Code-style autonomous workflows entirely offline on Termux/Android using Ollama and Node.js.
✨ Features
- Deterministic Agentic Loop: A `while(tool_call)` architecture ensures predictable, stateful execution.
- Harness-First Security: The model never executes code directly. Node.js acts as a strict sandbox with path traversal guards and timeouts.
- Progressive Disclosure: Dynamic prompt assembly from `.openclaude/commands/`, `.skills/`, and `.plugins/`.
- Zero-Index Search: Uses native `grep` for instant file pattern matching. No vector DB bloat.
- 100% Offline: Communicates only with `localhost:11434`. Telemetry-free by design.
- Streaming Support: Token-by-token streaming for a responsive UI experience.
- Termux Optimizations: Wake locks, notifications, and API fallbacks for mobile usage.
- Full-Stack Architecture: Optional web UI with HTTP/SSE backend.
📱 Termux-Specific Installation
Prerequisites
- Termux v0.118+
- Storage permission granted to Termux
Installation Steps
Update Termux:
```bash
pkg update && pkg upgrade -y
```
Install Dependencies:
```bash
pkg install nodejs git jq curl -y
```
Install Ollama (ARM64 Binary):
```bash
# Try the official install script, then fall back to a direct ARM64 download
curl -fsSL https://ollama.com/install.sh | sh 2>/dev/null || {
  echo "⚠️ Install script failed. Downloading ARM64 binary manually..."
  OLLAMA_VER="v0.3.14"
  curl -L "https://github.com/ollama/ollama/releases/download/${OLLAMA_VER}/ollama-linux-arm64" \
    -o "$PREFIX/bin/ollama"
  chmod +x "$PREFIX/bin/ollama"
}
```
Pull the Model:
```bash
ollama serve &
sleep 4
ollama pull qwen2.5-coder:1.5b
kill %1 2>/dev/null
```
Install lo-cool:
```bash
npm install -g lo-cool
```
Set Up Alias (Optional):
```bash
[ -f ~/.bashrc ] && echo 'alias lc="lo-cool"' >> ~/.bashrc
[ -f ~/.zshrc ] && echo 'alias lc="lo-cool"' >> ~/.zshrc
```
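After the steps above, an optional sanity check (not part of the published instructions) confirms each tool is on your PATH:

```shell
# Report which tools are installed; prints "missing" instead of failing
for tool in node ollama lo-cool; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: missing"
  fi
done
```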
🚀 Quick Start
CLI Usage
```bash
# Start the CLI
lo-cool

# Example interaction:
# 🧑‍💻 You: Create a Node.js script that calculates fibonacci numbers and save it.
# 🤖 Agent is thinking...
# ⚡ Executing tool: write_file({"path": "fibonacci.js", "content": "..."})
# ✅ Result: Successfully wrote 142 bytes to fibonacci.js
# 📜 Answer: Done! `fibonacci.js` created with an optimized iterative approach.
```
Web UI Usage
```bash
# Start the server
lo-cool web

# Or directly
node src/server/launcher.js

# Then open in browser:
# http://localhost:3000
```
📐 Core Concepts
| Concept | Purpose | Trigger |
|---------|---------|---------|
| Harness | The orchestrator (Node.js/Bash). Manages I/O, security, loop state, and tool execution. | Automatic |
| Agentic Loop | The while(tool_call) pattern. Model decides -> Harness executes -> Result fed back -> Repeat until done. | Automatic |
| Slash Commands | User-invoked workflows. Pre-defined .md files injected into context when triggered. | /command_name |
| Skills | Model-invoked capabilities. SKILL.md files loaded dynamically to extend reasoning domains. | Implicit (Context Aware) |
| Plugins | Advanced manifests (.openclaude/plugins/*/plugin.json) for custom tool registries or API hooks. | Automatic |
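The Agentic Loop row above can be sketched in a few lines of JavaScript. This is illustrative only: `callModel`, `runAgent`, and the `tools` registry are hypothetical stand-ins, not lo-cool's actual API.

```javascript
// Illustrative sketch of the while(tool_call) pattern.
// `callModel` and `tools` are hypothetical stand-ins, not lo-cool's API.
const tools = {
  write_file: ({ path, content }) => `Wrote ${content.length} bytes to ${path}`,
};

// A fake model: requests one tool call, then answers.
function callModel(messages) {
  const usedTool = messages.some((m) => m.role === "tool");
  return usedTool
    ? { answer: "Done!" }
    : { tool_call: { name: "write_file", args: { path: "fib.js", content: "..." } } };
}

function runAgent(prompt, maxIterations = 15) {
  const messages = [{ role: "user", content: prompt }];
  for (let i = 0; i < maxIterations; i++) {
    const reply = callModel(messages);
    if (!reply.tool_call) return reply.answer;        // model decides it is done
    const { name, args } = reply.tool_call;
    const result = tools[name](args);                 // harness executes, never the model
    messages.push({ role: "tool", content: result }); // result fed back; repeat
  }
  return "Max iterations reached";
}

console.log(runAgent("Create fib.js")); // prints "Done!"
```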
⚙️ Configuration
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| OLLAMA_MODEL | qwen2.5-coder:1.5b | Overrides model for the session. |
| MAX_ITERATIONS | 15 | Maximum agentic loop iterations. |
| TIMEOUT_MS | 30000 | Tool execution timeout in milliseconds. |
| SAFETY_MODE | strict | Security level (strict, moderate, permissive). |
| PORT | 3000 | Port for web server. |
| HOST | localhost | Host for web server binding. |
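These variables can be exported for a whole shell session; the model tag and values below are illustrative examples, assuming lo-cool reads them at startup:

```shell
# Session-wide overrides (values are examples only)
export OLLAMA_MODEL="qwen2.5-coder:7b"
export MAX_ITERATIONS=30
export SAFETY_MODE=moderate

# ...or override for a single run:
#   OLLAMA_MODEL="qwen2.5-coder:7b" lo-cool
echo "Using $OLLAMA_MODEL with $MAX_ITERATIONS iterations ($SAFETY_MODE mode)"
```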
.openclaude/config.json
```json
{
  "model": "qwen2.5-coder:1.5b",
  "maxIterations": 15,
  "timeout_ms": 30000,
  "allow_shell_glob": true,
  "safety_mode": "strict"
}
```
Directory Structure
```
.openclaude/
├── commands/    # /help, /review, /test
├── skills/      # Domain-specific instructions (e.g., REACT_SKILL.md)
├── plugins/     # Extensibility manifests
├── sessions/    # Resumable JSON states
├── logs/        # Execution traces for debugging
└── config.json  # Overrides
```
🛡️ Security & Safety
Built-in Protections
- Path Jail: All file operations are restricted to the current working directory
- Command Whitelisting: Dangerous commands like `rm -rf /`, `sudo`, and `mkfs` are blocked
- Execution Timeouts: Shell commands have a default 15-second timeout
- Buffer Limits: Output is capped to prevent memory exhaustion
- Input Sanitization: All user inputs are validated and sanitized
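The Path Jail guard is conceptually simple; below is a sketch of the idea using Node's `path` module. It is not lo-cool's actual implementation, and the `jailPath` name is hypothetical.

```javascript
// Sketch of a path jail: resolve the requested path against the working
// directory and reject anything that escapes it. Illustrative only.
import path from "node:path";

function jailPath(requested, root = process.cwd()) {
  const resolved = path.resolve(root, requested);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`Path traversal blocked: ${requested}`);
  }
  return resolved;
}

console.log(jailPath("src/index.js", "/work")); // /work/src/index.js on POSIX
```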
Security Zones
- Strict Mode (Default): Maximum safety, blocks potentially risky operations
- Moderate Mode: Allows more operations with basic safeguards
- Permissive Mode: Fewer restrictions (not recommended for production)
🐛 Troubleshooting
| Issue | Cause | Solution |
|-------|-------|----------|
| ECONNREFUSED 11434 | Ollama isn't running. | Run ollama serve in a separate Termux window. |
| Path traversal blocked | Model produced absolute paths. | CLI auto-sanitizes. Ask the model to use relative paths. |
| Max iterations reached | Model stuck in loop. | Increase maxIterations in config or refine prompt. |
| Command not found | Missing Termux utilities. | Install missing packages with pkg install. |
| Slow responses | Large context window. | Use /new to clear history or switch to smaller model. |
| Web UI not loading | Server not running. | Start the server with npm run web or node src/server/launcher.js. |
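For the most common failure (`ECONNREFUSED 11434`), a quick probe of Ollama's version endpoint tells you whether the server is reachable before you start a session:

```shell
# Probe the local Ollama API (gives up after 2 seconds)
if curl -fsS --max-time 2 http://localhost:11434/api/version >/dev/null 2>&1; then
  status="up"
else
  status="down"
fi
echo "ollama: $status"
[ "$status" = "up" ] || echo "Start it with: ollama serve &"
```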
🔧 Development
Setting Up Development Environment
```bash
# Clone the repository
git clone https://github.com/your-org/lo-cool.git
cd lo-cool

# Install dependencies
npm install

# Run tests
npm test

# Start the development server
npm run web
```
Code Style
- Uses ES Modules (`"type": "module"` in package.json)
- Follows standard JavaScript conventions
- Configured with ESLint for code quality
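The ES Modules setting above corresponds to a single field in `package.json` (a minimal fragment for illustration):

```json
{
  "type": "module"
}
```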
📄 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
- Inspired by Claude Code and open-source AI agents
- Built for the Termux community
- Powered by Ollama for local LLM inference
- Created with ❤️ for private, local-first AI development
Built for hackers, by harnesses. 🛡️📱
