
clawdaddy

v1.0.8

Run and access local LLMs from anywhere over a secure P2P connection

🦞 Clawdaddy

Your GPU. Your models. Accessible from anywhere.

Clawdaddy lets you run local LLMs on your own hardware and access them securely from anywhere.

No cloud, no API costs, and no data passing through anyone else's servers.

It exposes an OpenAI- and Claude-compatible API for tools like Claude Code, and goes beyond inference with a command layer for triggering workflows and agents on your host machine.

This package includes both pieces:

  • clawdaddy serve — runs on the machine with your GPU, alongside Ollama. Registers with the switchboard and accepts incoming P2P connections.
  • clawdaddy cli — runs wherever you are. Pairs with a serve node and streams inference over an encrypted tunnel.

Requirements

  • Node.js 18+
  • Ollama running locally (for serve)

Install

npm install -g clawdaddy

Quick start

On your host machine:

ollama pull llama3.2 # or any model you like
clawdaddy serve llama3.2
# outputs your nodeId and pairing code

On your laptop, phone, wherever:

Pair with your server:

clawdaddy pair <nodeId> <pairingCode>

Then start a client in API mode:

clawdaddy api <nodeId>

Or run an interactive web client:

clawdaddy web

Or run an interactive console:

clawdaddy console <nodeId>

They find each other through the switchboard, complete the handshake, and then communicate directly.

You can also connect from a browser at https://clawdaddy.goodenoughcafe.com, or fork and host your own web client from the web-cli in github.com/Good-Enough-Cafe-LLC/clawdaddy-cli


How the tunnel works

[you]  ---- WebSocket (signal only) ---->  [switchboard]
  |                                              |
  |         <-- WebRTC offer/answer -->          |
  |                                              |
  +------------ P2P data channel ----------- [your host]
                 (HMAC-signed, chunked)

Privacy by design: The switchboard only ever sees a one-way hash derived from your pairing code. It cannot authenticate as either side, cannot read your traffic, and has no record of what models you're running.

Once the tunnel is established the switchboard is gone from the equation. Large messages (tool calls, file context, long completions) are automatically split into 12KB frames, reassembled on the other end, and verified with HMAC before processing. The serve node handles multiple simultaneous clients with per-connection generation tracking, so stale reconnects never clobber an active session.
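The framing described above can be sketched roughly as follows. The frame size matches the 12KB figure, but the key derivation, field names, and wire format here are illustrative assumptions, not Clawdaddy's actual protocol:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const FRAME_SIZE = 12 * 1024; // 12KB frames, as described above

// Hypothetical key derivation: the HMAC key comes from the pairing code.
function deriveKey(pairingCode: string): Buffer {
  return createHmac("sha256", "clawdaddy-demo").update(pairingCode).digest();
}

function sign(key: Buffer, chunk: Buffer): string {
  return createHmac("sha256", key).update(chunk).digest("hex");
}

// Split a large payload into sequence-numbered, HMAC-signed frames.
function toFrames(key: Buffer, payload: Buffer) {
  const frames: { seq: number; data: string; mac: string }[] = [];
  for (let i = 0; i < payload.length; i += FRAME_SIZE) {
    const chunk = payload.subarray(i, i + FRAME_SIZE);
    frames.push({
      seq: frames.length,
      data: chunk.toString("base64"),
      mac: sign(key, chunk),
    });
  }
  return frames;
}

// Reassemble in sequence order, verifying each frame's HMAC before accepting it.
function fromFrames(
  key: Buffer,
  frames: { seq: number; data: string; mac: string }[]
): Buffer {
  const chunks = [...frames]
    .sort((a, b) => a.seq - b.seq)
    .map((f) => {
      const chunk = Buffer.from(f.data, "base64");
      const expected = Buffer.from(sign(key, chunk), "hex");
      const actual = Buffer.from(f.mac, "hex");
      if (expected.length !== actual.length || !timingSafeEqual(expected, actual)) {
        throw new Error(`bad HMAC on frame ${f.seq}`);
      }
      return chunk;
    });
  return Buffer.concat(chunks);
}

const key = deriveKey("mysecretcode");
const message = Buffer.from("x".repeat(30_000)); // spans three 12KB frames
const frames = toFrames(key, message);
console.log(frames.length, fromFrames(key, frames).equals(message)); // 3 true
```

Verifying before processing (rather than after reassembly) means a tampered frame is rejected as soon as it arrives.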


Features

  • True P2P: inference traffic never touches a relay server
  • Zero cost: no API fees, no subscriptions, your hardware does the work
  • OpenAI-compatible API: drop in to Claude Code, Continue, or any OpenAI client
  • Resilient: exponential backoff on both sides, survives flaky connections
  • Multi-client support: concurrent connections with generation-aware session management
  • Large payload support: handles large context windows cleanly, no WebRTC size limits in practice
  • Secure: every message signed with a key derived from your pairing code
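OpenAI compatibility means any standard chat-completions request should work against the local endpoint that `clawdaddy api` opens. The base URL below is a placeholder (use the address printed when the client starts); the request shape is the standard OpenAI one:

```typescript
// Placeholder: substitute the address printed by `clawdaddy api <nodeId>`.
const BASE_URL = "http://localhost:8080";

// Build a standard OpenAI chat-completions request.
function chatRequest(model: string, prompt: string) {
  return {
    url: `${BASE_URL}/v1/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
      max_tokens: 1024,   // matches defaultMaxTokens in client-config.json
      temperature: 0.7,   // matches defaultTemperature in client-config.json
    },
  };
}

const req = chatRequest("llama3.2", "Say hello");
console.log(req.url);

// Sending it is an ordinary fetch; no SDK required:
// const res = await fetch(req.url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req.body),
// });
```

Tools like Continue or the OpenAI SDKs only need the base URL pointed at this endpoint.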

Beyond chat — the command layer

Clawdaddy isn't just an inference tunnel. Prefix any message with /cmd to send a control command to your serve node instead of triggering inference:

/ping                           check the node is alive
/get_status                     model, memory, active connections
/clear_memory                   wipe conversation history
/set_system_prompt <text>       swap personality mid-session
/echo <message>                 sanity check the tunnel
/log <message>

The Agent Hook: Every /log command is appended to command_log.jsonl as newline-delimited JSON. You can build agents that watch this log and trigger real-world actions:

tail -f ~/.clawdaddy/serve.log | jq -c . | while read -r line; do
  # your agent logic here
done

The serve node also exposes POST /v1/command over HTTP locally if you want to drive commands from scripts on the same machine without going through the tunnel.

Send /cmd start_job from your phone. The host logs it. Your script picks it up and kicks off a workflow. No webhooks, no polling, no separate message broker — the log file is the bus.
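A minimal Node version of such an agent might look like the sketch below. The log entry shape (a `message` field) and the `start_job` command name are assumptions for illustration; the demo reads a temp file where a real agent would tail command_log.jsonl:

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Dispatch table: map logged messages to actions.
// "start_job" is a hypothetical command name, as in the example above.
const handlers: Record<string, () => string> = {
  start_job: () => "job started",
};

// Process a newline-delimited JSON log, one entry per line,
// running the handler for any recognized message.
function processLog(file: string): string[] {
  return readFileSync(file, "utf8")
    .split("\n")
    .filter((line) => line.trim())
    .map((line) => JSON.parse(line) as { message: string }) // assumed entry shape
    .filter((entry) => entry.message in handlers)
    .map((entry) => handlers[entry.message]());
}

// Demo with a temp file standing in for ~/.clawdaddy/command_log.jsonl;
// a real agent would watch the file for new lines instead of reading it once.
const demo = path.join(os.tmpdir(), "clawdaddy-demo.jsonl");
writeFileSync(demo, '{"message":"start_job"}\n{"message":"other"}\n');
console.log(processLog(demo)); // [ 'job started' ]
```

Because each line is self-contained JSON, the agent needs no state beyond its file offset.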


Configuration

On first run each tool writes a config file to ~/.clawdaddy/ with sensible defaults. Edit to customise:

~/.clawdaddy/serve-config.json

{
  "nodeId": "my-node",
  "pairingCode": "mysecretcode",
  "model": "llama3.2",
  "maxConnections": 3,
  "allowMultiple": true,
  "signalServer": "https://clawdaddyswitch01.goodenoughcafe.com",
  "reconnectBaseMs": 2000,
  "reconnectMaxMs": 30000
}

~/.clawdaddy/client-config.json

{
  "signalServer": "https://clawdaddyswitch01.goodenoughcafe.com",
  "reconnectBaseMs": 2000,
  "reconnectMaxMs": 30000,
  "defaultMaxTokens": 1024,
  "defaultTemperature": 0.7
}
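The reconnect settings presumably drive an exponential backoff of the usual min(base · 2ⁿ, max) shape; the exact formula Clawdaddy uses isn't documented here, so this is an illustrative sketch of how those two fields interact:

```typescript
// Illustrative backoff using reconnectBaseMs / reconnectMaxMs from the
// config above; the exact formula is an assumption, not Clawdaddy's code.
function backoffMs(attempt: number, baseMs = 2000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// With the defaults: 2s, 4s, 8s, 16s, then capped at 30s.
console.log([0, 1, 2, 3, 4, 5].map((n) => backoffMs(n)));
// [ 2000, 4000, 8000, 16000, 30000, 30000 ]
```

Raising reconnectMaxMs trades faster recovery for more reconnect chatter on a flaky link.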

To use your own switchboard, point signalServer at your own instance on both sides.


License

MIT — github.com/Good-Enough-Cafe-LLC