lo-cool v2.0.0

🚀 lo-cool

MIT License · Termux Ready · Ollama Compatible · Node.js

A local-first, zero-trust AI coding harness for constrained environments. Run Claude Code-style autonomous workflows entirely offline on Termux/Android using Ollama and Node.js.


✨ Features

  • Deterministic Agentic Loop: while(tool_call) architecture ensures predictable, stateful execution.
  • Harness-First Security: The model never executes code directly. Node.js acts as a strict sandbox with path traversal guards and timeouts.
  • Progressive Disclosure: Dynamic prompt assembly from .openclaude/commands/, .skills/, and .plugins/.
  • Zero-Index Search: Uses native grep for instant file pattern matching. No vector DB bloat.
  • 100% Offline: Communicates only with localhost:11434. Telemetry-free by design.
  • Streaming Support: Token-by-token streaming for responsive UI experience.
  • Termux Optimizations: Wake locks, notifications, and API fallbacks for mobile usage.
  • Full-Stack Architecture: Optional web UI with HTTP/SSE backend.
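
The deterministic loop the first bullet describes can be sketched as follows. This is a minimal, self-contained illustration with the Ollama HTTP call stubbed out so the control flow is visible; `callModel`, `TOOLS`, and `runAgent` are illustrative names, not lo-cool's actual internals:

```javascript
// A stand-in tool registry: the harness owns these functions; the model
// only ever names one and supplies arguments.
const TOOLS = {
  write_file: ({ path, content }) => `wrote ${content.length} bytes to ${path}`,
};

// Stand-in for a POST to localhost:11434/api/chat. Returns either a
// tool call or a final answer, depending on the conversation so far.
function callModel(messages) {
  const last = messages[messages.length - 1];
  if (last.role !== "tool") {
    return { tool_call: { name: "write_file", args: { path: "fib.js", content: "..." } } };
  }
  return { answer: "Done." };
}

// The while(tool_call) pattern: model decides, harness executes, the
// result is fed back, and the loop repeats until the model answers or
// the iteration cap is hit.
function runAgent(prompt, maxIterations = 15) {
  const messages = [{ role: "user", content: prompt }];
  for (let i = 0; i < maxIterations; i++) {
    const reply = callModel(messages);
    if (!reply.tool_call) return reply.answer;   // model is done
    const { name, args } = reply.tool_call;
    const result = TOOLS[name](args);            // the harness executes, never the model
    messages.push({ role: "tool", content: result });
  }
  return "Max iterations reached.";
}

console.log(runAgent("Create fib.js"));
```

Because tool execution lives entirely on the harness side of the loop, the security checks described below can be applied uniformly, regardless of what the model asks for.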

📱 Termux-Specific Installation

Prerequisites

  • Termux v0.118+
  • Storage permission granted to Termux

Installation Steps

  1. Update Termux:

    pkg update && pkg upgrade -y
  2. Install Dependencies:

    pkg install nodejs git jq curl -y
  3. Install Ollama (ARM64 Binary):

    # Install official binary directly to Termux bin prefix
    curl -fsSL https://ollama.com/install.sh | sh 2>/dev/null || {
      echo "⚠️  Install script failed. Downloading ARM64 binary manually..."
      OLLAMA_VER="v0.3.14"
      curl -L "https://github.com/ollama/ollama/releases/download/${OLLAMA_VER}/ollama-linux-arm64" \
        -o "$PREFIX/bin/ollama"
      chmod +x "$PREFIX/bin/ollama"
    }
  4. Pull the Model:

    ollama serve &
    sleep 4
    ollama pull qwen2.5-coder:1.5b
    kill %1 2>/dev/null
  5. Install lo-cool:

    npm install -g lo-cool
  6. Set Up Alias (Optional):

    # A shorter alias (the command itself is already `lo-cool`)
    [ -f ~/.bashrc ] && echo 'alias lc="lo-cool"' >> ~/.bashrc
    [ -f ~/.zshrc ] && echo 'alias lc="lo-cool"' >> ~/.zshrc

🚀 Quick Start

CLI Usage

# Start the CLI
lo-cool

# Example interaction:
# 🧑‍💻 You: Create a Node.js script that calculates fibonacci numbers and save it.
# 🤖 Agent is thinking...
# ⚡ Executing tool: write_file({"path": "fibonacci.js", "content": "..."})
# ✅ Result: Successfully wrote 142 bytes to fibonacci.js
# 📜 Answer: Done! `fibonacci.js` created with an optimized iterative approach.

Web UI Usage

# Start the server
lo-cool web

# Or directly
node src/server/launcher.js

# Then open in browser:
# http://localhost:3000
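
The web backend streams over HTTP/SSE, so each token the model produces travels to the browser as a Server-Sent Event. The helper below shows the SSE wire format; `sseFrame` is a hypothetical name for illustration, not part of lo-cool's public API:

```javascript
// Frame a payload as a Server-Sent Event: "event: <name>\n", one
// "data:" line per payload line, terminated by a blank line.
function sseFrame(event, data) {
  const dataLines = JSON.stringify(data)
    .split("\n")
    .map((line) => `data: ${line}`)
    .join("\n");
  return `event: ${event}\n${dataLines}\n\n`;
}

console.log(sseFrame("token", { text: "fib" }));
```

On the browser side, `new EventSource(url)` plus an `addEventListener("token", ...)` handler is enough to consume frames in this shape.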

📐 Core Concepts

| Concept | Purpose | Trigger |
|---------|---------|---------|
| Harness | The orchestrator (Node.js/Bash). Manages I/O, security, loop state, and tool execution. | Automatic |
| Agentic Loop | The `while(tool_call)` pattern: the model decides, the harness executes, the result is fed back, repeating until done. | Automatic |
| Slash Commands | User-invoked workflows. Pre-defined `.md` files injected into context when triggered. | `/command_name` |
| Skills | Model-invoked capabilities. `SKILL.md` files loaded dynamically to extend reasoning domains. | Implicit (context-aware) |
| Plugins | Advanced manifests (`.openclaude/plugins/*/plugin.json`) for custom tool registries or API hooks. | Automatic |
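
A slash command is just a markdown file whose contents get injected into the model's context when the user types its name. The file below is an illustrative example of what a `.openclaude/commands/review.md` might contain; the wording is an assumption, not a file shipped with lo-cool:

```markdown
<!-- Hypothetical contents of .openclaude/commands/review.md -->
Review the files changed in this session. For each file:

1. Flag bugs, unsafe patterns, and missing error handling.
2. Suggest idiomatic improvements.

Respond with a short summary per file, then an overall verdict.
```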


⚙️ Configuration

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `OLLAMA_MODEL` | `qwen2.5-coder:1.5b` | Overrides the model for the session. |
| `MAX_ITERATIONS` | `15` | Maximum agentic-loop iterations. |
| `TIMEOUT_MS` | `30000` | Tool-execution timeout in milliseconds. |
| `SAFETY_MODE` | `strict` | Security level (`strict`, `moderate`, `permissive`). |
| `PORT` | `3000` | Port for the web server. |
| `HOST` | `localhost` | Host for web-server binding. |

.openclaude/config.json

{
  "model": "qwen2.5-coder:1.5b",
  "maxIterations": 15,
  "timeout_ms": 30000,
  "allow_shell_glob": true,
  "safety_mode": "strict"
}
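
Given the table of environment variables above and this config file, a harness typically layers the three sources with environment variables winning over the file, and the file winning over built-in defaults. The sketch below shows that precedence; `loadConfig` is an illustrative helper, not lo-cool's actual loader:

```javascript
// Built-in defaults, mirroring the documented values.
const DEFAULTS = {
  model: "qwen2.5-coder:1.5b",
  maxIterations: 15,
  timeout_ms: 30000,
  safety_mode: "strict",
};

// Merge order: defaults < .openclaude/config.json < environment.
function loadConfig(fileConfig = {}, env = process.env) {
  const merged = { ...DEFAULTS, ...fileConfig };
  if (env.OLLAMA_MODEL) merged.model = env.OLLAMA_MODEL;
  if (env.MAX_ITERATIONS) merged.maxIterations = Number(env.MAX_ITERATIONS);
  if (env.TIMEOUT_MS) merged.timeout_ms = Number(env.TIMEOUT_MS);
  if (env.SAFETY_MODE) merged.safety_mode = env.SAFETY_MODE;
  return merged;
}

console.log(loadConfig({ maxIterations: 30 }, { OLLAMA_MODEL: "qwen2.5-coder:7b" }));
```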

Directory Structure

.openclaude/
├── commands/     # /help, /review, /test
├── skills/       # domain-specific instructions (e.g., REACT_SKILL.md)
├── plugins/      # Extensibility manifests
├── sessions/     # Resumable JSON states
├── logs/         # Execution traces for debugging
└── config.json   # Overrides

🛡️ Security & Safety

Built-in Protections

  • Path Jail: All file operations are restricted to the current working directory
  • Command Whitelisting: Dangerous commands like rm -rf /, sudo, mkfs are blocked
  • Execution Timeouts: Shell commands have default 15-second timeouts
  • Buffer Limits: Output is capped to prevent memory exhaustion
  • Input Sanitization: All user inputs are validated and sanitized
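
The path-jail check from the first bullet can be sketched as below. This assumes POSIX-style paths with the working directory as the jail root, and inlines the normalization to stay dependency-free (a real harness would likely lean on `path.resolve`); `isInsideJail` is an illustrative name, not lo-cool's API:

```javascript
// Resolve a requested path against the jail root, handling "." and "..".
function normalize(root, requested) {
  const parts = requested.startsWith("/")
    ? []                                 // absolute path: ignores the root entirely
    : root.split("/").filter(Boolean);   // relative path: starts at the root
  for (const part of requested.split("/")) {
    if (part === "" || part === ".") continue;
    if (part === "..") parts.pop();      // step up, possibly out of the jail
    else parts.push(part);
  }
  return "/" + parts.join("/");
}

function isInsideJail(requested, root) {
  const resolved = normalize(root, requested);
  // Must be the root itself or strictly beneath it; the trailing "/"
  // blocks sibling-prefix tricks like /work/project-evil.
  return resolved === root || resolved.startsWith(root + "/");
}

console.log(isInsideJail("src/index.js", "/work/project"));     // inside the jail
console.log(isInsideJail("../../etc/passwd", "/work/project")); // escape attempt
```

The key point is that the check runs on the *resolved* path, so `..` segments and absolute paths emitted by the model cannot slip past a naive string comparison.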

Security Zones

  1. Strict Mode (Default): Maximum safety, blocks potentially risky operations
  2. Moderate Mode: Allows more operations with basic safeguards
  3. Permissive Mode: Fewer restrictions (not recommended for production)
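
One way a mode-dependent command gate like this can work is a per-mode blocklist consulted before any shell execution. The patterns and mode semantics below are examples for illustration, not lo-cool's actual lists:

```javascript
// Hypothetical per-mode blocklists: stricter modes block more patterns.
const BLOCKLIST = {
  strict: [/\brm\s+-rf\s+\//, /\bsudo\b/, /\bmkfs\b/, /\bdd\b/],
  moderate: [/\brm\s+-rf\s+\//, /\bsudo\b/, /\bmkfs\b/],
  permissive: [],
};

// Unknown modes fall back to strict, failing closed.
function isBlocked(command, mode = "strict") {
  return (BLOCKLIST[mode] ?? BLOCKLIST.strict).some((re) => re.test(command));
}

console.log(isBlocked("sudo pkg install x"));   // blocked in strict mode
console.log(isBlocked("ls -la", "permissive")); // allowed
```

Falling back to the strict list for unrecognized modes is the safer default: a typo in `SAFETY_MODE` then tightens restrictions rather than loosening them.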

🐛 Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| `ECONNREFUSED 11434` | Ollama isn't running. | Run `ollama serve` in a separate Termux session. |
| Path traversal blocked | Model emitted absolute paths. | The CLI auto-sanitizes; ask the model to use relative paths. |
| Max iterations reached | Model stuck in a loop. | Increase `maxIterations` in the config or refine the prompt. |
| Command not found | Missing Termux utilities. | Install the missing packages with `pkg install`. |
| Slow responses | Large context window. | Use `/new` to clear history or switch to a smaller model. |
| Web UI not loading | Server not running. | Start it with `npm run web` or `node src/server/launcher.js`. |


🔧 Development

Setting Up Development Environment

# Clone repository
git clone https://github.com/your-org/lo-cool.git
cd lo-cool

# Install dependencies
npm install

# Run tests
npm test

# Start development server
npm run web

Code Style

  • Uses ES Modules (type: "module" in package.json)
  • Follows standard JavaScript conventions
  • Configured with ESLint for code quality

📄 License

MIT License - see LICENSE file for details.


🙏 Acknowledgments

  • Inspired by Claude Code and open-source AI agents
  • Built for the Termux community
  • Powered by Ollama for local LLM inference
  • Created with ❤️ for private, local-first AI development

Built for hackers, by harnesses. 🛡️📱