@clanklabs/clank
v1.12.2
Local-first AI agent harness. Open-source alternative to OpenClaw, optimized for local models.
What is Clank?
Clank is a personal AI harness that connects your preferred interfaces to AI agents running local or cloud models. One daemon runs in the background; every interface — CLI, TUI, browser, Telegram, Discord, Signal — shares sessions, memory, and agent state.
                 ┌──────────────────────────────┐
                 │        Clank Gateway         │
                 │       (single daemon)        │
                 │                              │
                 │     Agent Pool + Routing     │
                 │   Sessions, Memory, Tools    │
                 │   Pipelines, Cron, Plugins   │
                 └──────────────┬───────────────┘
                                │
                  WebSocket + HTTP (port 18790)
                                │
    ┌────────┬────────┬─────────┼─────────┬────────┬───────┐
    │        │        │         │         │        │       │
   CLI      TUI    Web UI   Telegram   Discord  Signal    API
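For example, you can start the daemon once and attach any number of interfaces to it; all of these commands are covered in full under Commands below.

# One daemon, many interfaces: these all talk to the same gateway on port 18790
clank gateway start     # start the harness in the background
clank tui               # attach a terminal UI
clank dashboard         # open the Web UI in a browser
clank gateway status    # list connected clients and shared sessions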
Quick Start
npm (all platforms)
npm install -g @clanklabs/clank
clank setup
clank
macOS / Linux (one-liner)
curl -fsSL https://raw.githubusercontent.com/ClankLabs/Clank/main/install.sh | bash
clank setup
clank
That's it. Setup auto-detects local models, configures the harness, and gets you chatting in under 2 minutes. See the Install Guide for platform-specific instructions — Windows | macOS | Linux.
Security Notice
Clank is a developer tool that gives AI agents full access to your file system, shell, and connected services. We strongly recommend running Clank on dedicated hardware (a dev machine, VM, or container) rather than on a system with sensitive personal files or credentials.
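If you want that isolation without separate hardware, one rough sketch is a throwaway container. There is no official image implied here; this just layers the Quick Start npm install on top of a stock Node 20 image, with an arbitrary named volume for state and the gateway port published on loopback only:

# Hypothetical container sandbox (illustrative, not an official image)
docker run -it --name clank-sandbox \
  -p 127.0.0.1:18790:18790 \
  -v clank-data:/root \
  node:20 bash -c "npm install -g @clanklabs/clank && clank setup && clank"

Note that a model server running on the host (e.g. Ollama) won't be reachable at localhost from inside the container without extra networking, such as --network host on Linux.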
Wrench — Purpose-Built Agentic Models
Wrench is our family of fine-tuned models built specifically for Clank's tool calling protocol. All training data is published and auditable.
| Model | Clank Benchmark | BFCL (non_live) | Base | VRAM | Download |
|-------|-----------------|-----------------|------|------|----------|
| Wrench 35B | 118/120 (98%) | 82.0% | Qwen3.5-35B-A3B (MoE) | 16GB | HuggingFace |
| Wrench 9B | 114/120 (95%) | — | Qwen3.5-9B (dense) | 8GB | HuggingFace |
# Ollama (9B or 35B GGUF)
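# A minimal Modelfile sketch (illustrative; point FROM at whichever Wrench GGUF you downloaded,
# and adjust sampling parameters to taste; these mirror the llama.cpp flags below)
cat > Modelfile <<'EOF'
FROM ./wrench-9B-Q4_K_M.gguf
PARAMETER temperature 0.4
PARAMETER top_k 20
PARAMETER top_p 0.95
PARAMETER min_p 0
PARAMETER num_ctx 8192
EOF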
ollama create wrench -f Modelfile
# Set as primary model: "primary": "ollama/wrench"
# llama.cpp — Wrench 35B
./llama-server -m wrench-35B-A3B-Q4_K_M.gguf --jinja -ngl 100 -fa on \
--cache-type-k q8_0 --cache-type-v q8_0 \
--temp 0.4 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 -c 32768
# llama.cpp — Wrench 9B
./llama-server -m wrench-9B-Q4_K_M.gguf --jinja -ngl 100 -fa on \
--cache-type-k q8_0 --cache-type-v q8_0 \
--temp 0.4 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 -c 8192
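Once Wrench is being served (via Ollama or llama.cpp), a quick sanity check from Clank's side, using commands from the reference further down:

clank models list       # the Wrench model should show up among detected local models
clank models test       # confirm Clank can reach the provider serving it

The "primary": "ollama/wrench" setting noted above can also be applied conversationally once the harness is running, since Clank is self-configuring (see the Features table below).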
Features
| Feature | Description |
|---------|-------------|
| Local-first | Auto-detects Ollama, LM Studio, llama.cpp, vLLM. Cloud providers optional. |
| 8 providers | Ollama, Anthropic, OpenAI, Google Gemini, OpenRouter, OpenCode, Codex (OAuth), and automatic prompt fallback for models without native tool calling. |
| 6 interfaces | CLI, TUI, Web UI, Telegram, Discord, Signal — all equal citizens, all share sessions and memory. |
| 25 tools | File ops, bash, git, web search, web fetch, doc search (RAG), plus 10 self-config tools (including health diagnostics), 3 voice tools, and file sharing. |
| Multi-agent | Named agents with separate models, workspaces, tools, and routing. Spawn background sub-agents with depth control. |
| Inline approvals | Telegram and Discord show Approve / Always / Deny buttons for tool confirmations. Signal and CLI auto-approve. |
| Web dashboard | 8-panel SPA: Chat, Agents, Sessions, Config, Pipelines, Cron, Logs, Channels. |
| Pipelines | Chain agents together for multi-step workflows. |
| Cron | Recurring and one-shot scheduled agent tasks. |
| Plugins | Extend with custom tools, channels, and providers. 25+ hook types. |
| Voice | ElevenLabs TTS, Groq/OpenAI/local Whisper STT. Telegram voice messages. |
| Memory | TF-IDF with decay scoring. The agent learns and remembers across sessions. |
| Self-configuring | After setup, configure everything through conversation — models, channels, agents, cron jobs. |
| Security | AES-256-GCM encryption, SSRF protection, bash blocklist, path containment, config redaction, rate limiting. |
Commands
# Daily use
clank                     # Start harness + TUI (recommended)
clank chat                # Direct CLI chat (no harness needed)
clank chat --web          # Start harness + open Web UI
clank tui                 # Rich TUI connected to harness
clank dashboard           # Open Web UI in browser
# Harness management
clank gateway start       # Start in background
clank gateway stop        # Stop
clank gateway status      # Show status, clients, sessions
clank gateway restart     # Restart
# Setup & diagnostics
clank setup               # Onboarding wizard
clank setup --advanced    # Full control over every setting
clank fix                 # Diagnostics & auto-repair
# Models & agents
clank models list         # Detect + list all available models
clank models add          # Add a provider (Anthropic, OpenAI, etc.)
clank models test         # Test provider connectivity
clank agents list         # List configured agents
clank agents add          # Create a new agent
# Scheduling
clank cron list           # List scheduled jobs
clank cron add            # Schedule a recurring task
# System
clank daemon install      # Auto-start at login (Windows/macOS/Linux)
clank daemon status       # Check daemon status
clank channels            # Channel adapter status
clank auth login          # OAuth login (Codex)
clank update              # Update to latest version
clank uninstall           # Remove everything
Providers
| Provider | Type | Detection |
|----------|------|-----------|
| Ollama | Local | Auto-detected at localhost:11434 |
| LM Studio | Local | Auto-detected at localhost:1234 |
| llama.cpp | Local | Auto-detected at localhost:8080 |
| vLLM | Local | Auto-detected at localhost:8000 |
| Anthropic | Cloud | API key via clank setup or config |
| OpenAI | Cloud | API key via clank setup or config |
| Google Gemini | Cloud | API key via clank setup or config |
| OpenRouter | Cloud | API key via clank setup or config |
Models without native tool calling automatically use prompt-based fallback — tools are injected into the system prompt and parsed from text output. Every local model gets tool support out of the box.
/model, /models, and clank models list also show the active provider, local/cloud status, tool-call mode, context window expectations, and provider configuration state. This makes it easier to see whether a model is using native tool calls or prompt fallback before you start a long session.
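As a rough illustration of how the prompt-based fallback works (schematic only; the tool names, prompt wording, and JSON shape here are invented for illustration, and Clank's actual format is not documented in this README):

System prompt (injected):  "You can use these tools: read_file(path), bash(command), web_fetch(url), ...
                            To call one, reply with a single JSON object: {"tool": ..., "arguments": {...}}"
Model output (parsed):     {"tool": "read_file", "arguments": {"path": "README.md"}}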
Security
| Layer | Protection |
|-------|------------|
| Workspace containment | File tools blocked outside workspace via guardPath() |
| Bash blocklist | 32 patterns covering destructive commands (rm -rf, mkfs, fork bombs, nested shells, interpreter escapes, etc.) |
| API key redaction | Keys never sent to LLM context or exposed via RPC |
| SSRF protection | web_fetch blocks localhost, private IPs, cloud metadata, internal hosts |
| Harness auth | Token-based, auto-generated, localhost-only by default |
| Encryption | AES-256-GCM for API keys at rest (PBKDF2, 100K iterations) |
| Rate limiting | 20 requests/min/session by default |
| Supply chain | All deps pinned to exact versions, lockfile committed, npm 2FA |
See SECURITY.md for the full security model and THREAT_MODEL.md for an honest assessment of limitations.
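A quick way to confirm the localhost-only default on Linux or macOS, using standard system tools rather than Clank commands, is to check what the gateway port is bound to:

# The harness should be listening on the loopback interface only (127.0.0.1:18790)
lsof -nP -iTCP:18790 -sTCP:LISTEN    # macOS / Linux
ss -ltnp | grep 18790                # Linux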
Documentation
| Document | Description |
|----------|-------------|
| Install Guide | Installation and setup — with per-OS guides for Windows, macOS, and Linux |
| User Guide | Day-to-day usage, commands, multi-agent, background tasks, memory |
| Architecture | Engine, providers, tools, channels, security internals |
| Changelog | Full version history |
| Security Policy | Security model and vulnerability reporting |
| Privacy Policy | Data handling (spoiler: we collect nothing) |
| Threat Model | What we defend against and what we don't |
| Training | Wrench model training methodology and data |
| Benchmark | 40-prompt agentic evaluation suite |
| Alignment | How we think about AI safety and transparency |
| Contributing | How to contribute code, training data, or bug reports |
Requirements
- Node.js 20+ — nodejs.org
- A local model server (Ollama recommended) or a cloud API key
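A quick preflight check before running setup (assuming Ollama as the local server; any of the other auto-detected servers works too):

node --version    # should print v20 or newer
ollama list       # should show at least one pulled model if you're going local-first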
Links
| | |
|--|--|
| Website | clanklabs.dev |
| npm | @clanklabs/clank |
| GitHub | ClankLabs/Clank |
| Twitter/X | @Clank_Labs |
| Reddit | u/ClankLabs |
License
Apache 2.0 — see LICENSE
