# 🦞 Clawdaddy

Your GPU. Your models. Accessible from anywhere.
Clawdaddy lets you run local LLMs on your own hardware and access them securely from anywhere.
No cloud, no API costs, and no data passing through anyone else's servers.
It exposes an OpenAI- and Claude-compatible API for tools like Claude Code, and goes beyond inference with a command layer for triggering workflows and agents on your host machine.
This package includes both pieces:
- `clawdaddy serve` — runs on the machine with your GPU, alongside Ollama. Registers with the switchboard and accepts incoming P2P connections.
- `clawdaddy cli` — runs wherever you are. Pairs with a serve node and streams inference over an encrypted tunnel.
## Requirements

- Node.js 18+
- Ollama running locally (for `serve`)
## Install

```sh
npm install -g clawdaddy
```

## Quick start
On your host machine:

```sh
ollama pull llama3.2   # or any model you like
clawdaddy serve llama3.2
# outputs your nodeId and pairing code
```

On your laptop, phone, wherever, pair to your server:

```sh
clawdaddy pair <nodeId> <pairingCode>
```

Then start a client in API mode:

```sh
clawdaddy api <nodeId>
```

Or run the interactive web client:

```sh
clawdaddy web
```

Or run an interactive console:

```sh
clawdaddy console <nodeId>
```

They find each other through the switchboard, complete the handshake, and then communicate directly.
You can also connect from a browser at https://clawdaddy.goodenoughcafe.com, or fork and host your own web client from the web-cli directory in github.com/Good-Enough-Cafe-LLC/clawdaddy-cli.
## How the tunnel works

```
[you] ---- WebSocket (signal only) ----> [switchboard]
  |                                           |
  |       <-- WebRTC offer/answer -->         |
  |                                           |
  +---------- P2P data channel ---------> [your host]
              (HMAC-signed, chunked)
```

**Privacy by design:** the switchboard only ever sees a one-way hash derived from your pairing code. It cannot authenticate as either side, cannot read your traffic, and has no record of what models you're running.
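The one-way hash idea is easy to picture. A sketch of the concept only; clawdaddy's actual derivation scheme is internal and may differ:

```sh
# The switchboard stores something like this digest, never the code itself;
# the digest cannot be reversed to recover the pairing code.
printf '%s' "mysecretcode" | sha256sum
```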
Once the tunnel is established, the switchboard is gone from the equation. Large messages (tool calls, file context, long completions) are automatically split into 12KB frames, reassembled on the other end, and verified with HMAC before processing. The serve node handles multiple simultaneous clients with per-connection generation tracking, so stale reconnects never clobber an active session.
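To make the framing concrete, here is a conceptual chunk-and-sign sketch, not clawdaddy's actual wire format; the key choice and frame layout are assumptions:

```sh
# Split a large payload into 12KB frames, then sign each frame with an
# HMAC keyed by the pairing code (illustrative key choice, not the real scheme).
split -b 12k payload.json frame_
for f in frame_*; do
  openssl dgst -sha256 -hmac "mysecretcode" "$f"
done
```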
## Features
- True P2P: inference traffic never touches a relay server
- Zero cost: no API fees, no subscriptions, your hardware does the work
- OpenAI-compatible API: drop in to Claude Code, Continue, or any OpenAI client (see the base-URL example after this list)
- Resilient: exponential backoff on both sides, survives flaky connections
- Multi-client support: concurrent connections with generation-aware session management
- Large payload support: handles large context windows cleanly, no WebRTC size limits in practice
- Secure: every message signed with a key derived from your pairing code
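Most OpenAI and Anthropic clients can be pointed at the local API node through their standard base-URL settings. A sketch, assuming the node listens on localhost; the port is illustrative, not a documented default:

```sh
# The openai SDKs and tools like Continue read OPENAI_BASE_URL
export OPENAI_BASE_URL="http://localhost:8080/v1"
# Claude Code reads ANTHROPIC_BASE_URL for its API endpoint
export ANTHROPIC_BASE_URL="http://localhost:8080"
```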
## Beyond chat — the command layer
Clawdaddy isn't just an inference tunnel. Prefix any message with `/cmd` to send a control command to your serve node instead of triggering inference:

- `/ping` — check the node is alive
- `/get_status` — model, memory, active connections
- `/clear_memory` — wipe conversation history
- `/set_system_prompt <text>` — swap personality mid-session
- `/echo <message>` — sanity check the tunnel
- `/log <message>` — write a message to the command log

**The Agent Hook:** Any log command is written to command_log.jsonl as newline-delimited JSON. You can build agents that watch this log and trigger real-world actions:
```sh
# Watch the command log; jq -c keeps one JSON object per line so `read` gets whole records
tail -f ~/.clawdaddy/command_log.jsonl | jq --unbuffered -c . | while read -r line; do
  # your agent logic here
done
```

The serve node also exposes POST `/v1/command` over HTTP locally if you want to drive commands from scripts on the same machine without going through the tunnel.
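If you'd rather push than watch, that local hook can be driven from the same machine. A hedged sketch: the port and request body shape are assumptions, not documented values, so check your serve node's output for the real ones:

```sh
# Hypothetical port and payload shape for the local /v1/command endpoint.
curl -X POST http://localhost:3000/v1/command \
  -H "Content-Type: application/json" \
  -d '{"command": "log", "args": "start_job"}'
```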
Send /cmd start_job from your phone. The host logs it. Your script picks it up and kicks off a workflow. No webhooks, no polling, no separate message broker — the log file is the bus.
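That pickup script can be as small as a grep over the tail. A sketch under the same assumptions as above (log location, one JSON object per line), with a hypothetical run_workflow.sh standing in for your workflow:

```sh
# Watch the command log and kick off a workflow whenever start_job appears.
tail -f ~/.clawdaddy/command_log.jsonl | while read -r entry; do
  if printf '%s' "$entry" | grep -q 'start_job'; then
    ./run_workflow.sh   # hypothetical script; replace with your own
  fi
done
```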
## Configuration

On first run, each tool writes a config file to `~/.clawdaddy/` with sensible defaults. Edit to customise:

`~/.clawdaddy/serve-config.json`

```json
{
  "nodeId": "my-node",
  "pairingCode": "mysecretcode",
  "model": "llama3.2",
  "maxConnections": 3,
  "allowMultiple": true,
  "signalServer": "https://clawdaddyswitch01.goodenoughcafe.com",
  "reconnectBaseMs": 2000,
  "reconnectMaxMs": 30000
}
```

`~/.clawdaddy/client-config.json`

```json
{
  "signalServer": "https://clawdaddyswitch01.goodenoughcafe.com",
  "reconnectBaseMs": 2000,
  "reconnectMaxMs": 30000,
  "defaultMaxTokens": 1024,
  "defaultTemperature": 0.7
}
```

To use your own switchboard, point `signalServer` at your own instance on both sides.
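To repoint both sides at a self-hosted switchboard without editing files by hand, something like this works (the URL is a placeholder):

```sh
# Rewrite signalServer in both configs; switchboard.example.com is illustrative.
for f in ~/.clawdaddy/serve-config.json ~/.clawdaddy/client-config.json; do
  jq '.signalServer = "https://switchboard.example.com"' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```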
