
@c0mpute/worker v2.3.1 · 310 downloads

c0mpute-worker

Native CLI worker for the c0mpute.ai distributed inference network. Runs LLM inference via ollama and connects to the orchestrator via Socket.io.

Quick Start

  1. Install ollama and make sure it's running (ollama serve)
  2. Run the worker:
     npx @c0mpute/worker --token <your-token>

On first run, the worker will automatically:

  • Pull the base model (~17GB download)
  • Create a custom c0mpute-max model with optimized settings
  • Run a speed benchmark
  • Connect to the orchestrator and start serving jobs
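A rough sketch of what that first-run setup might look like internally (the base model tag, the Modelfile contents, and the function name are assumptions for illustration, not the worker's actual source):

import { execSync } from "node:child_process";

async function firstRunSetup(): Promise<void> {
  // 1. Verify ollama's local API is reachable (default port 11434);
  //    fetch throws if nothing is listening there.
  await fetch("http://localhost:11434/api/version").catch(() => {
    throw new Error("ollama is not running; start it with: ollama serve");
  });

  // 2. Pull the base model (~17GB); the exact tag is a placeholder here.
  execSync("ollama pull <base-model-tag>", { stdio: "inherit" });

  // 3. Build the custom c0mpute-max model from a Modelfile with tuned
  //    settings (e.g. a larger num_ctx for the 256K context window).
  execSync("ollama create c0mpute-max -f Modelfile", { stdio: "inherit" });
}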

How It Works

  1. Verifies ollama is running locally
  2. Pulls and configures the model (Qwen 3.5 27B Abliterated)
  3. Runs a speed benchmark to measure your hardware
  4. Connects to the c0mpute.ai orchestrator via WebSocket
  5. Accepts and processes inference jobs, streaming tokens back in real time
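In code, steps 4 and 5 might look roughly like the following. The Socket.io event names, payload shapes, and env var are assumptions for illustration; the ollama /api/chat endpoint and its newline-delimited JSON streaming are documented ollama behavior:

import { io } from "socket.io-client";

// Connect to the orchestrator; the auth payload shape is an assumption.
const socket = io("https://c0mpute.ai", {
  auth: { token: process.env.C0MPUTE_TOKEN }, // value from --token; env name is illustrative
});

// "job" is a placeholder event name for whatever the real protocol uses.
socket.on("job", async (job: { id: string; messages: object[] }) => {
  // Delegate inference to ollama's local streaming chat API.
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "c0mpute-max", messages: job.messages, stream: true }),
  });

  // ollama streams newline-delimited JSON chunks; relay each token upstream.
  // (A real implementation would buffer partial lines across chunk boundaries.)
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split("\n").filter(Boolean)) {
      const chunk = JSON.parse(line);
      socket.emit("token", { jobId: job.id, text: chunk.message?.content ?? "" });
    }
  }
});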

Capabilities

  • Thinking — model uses chain-of-thought reasoning with <think> tags
  • Vision — accepts images (base64) alongside text messages
  • Tool calling — model can invoke tools (web search, etc.) defined by the orchestrator
  • Uncensored — abliterated model with no content restrictions
  • Long context — 256K context window
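For a concrete picture, a single ollama /api/chat request can combine several of these capabilities at once. The web_search tool below is a made-up example of the kind of schema the orchestrator might send down; the message/images/tools fields follow ollama's documented chat API:

// Illustrative /api/chat payload combining vision and tool calling.
const payload = {
  model: "c0mpute-max",
  messages: [
    {
      role: "user",
      content: "What's in this picture, and what's the weather there right now?",
      images: ["<base64-encoded image>"], // vision input
    },
  ],
  // OpenAI-style tool schema; the real tool list is defined server-side.
  tools: [
    {
      type: "function",
      function: {
        name: "web_search",
        description: "Search the web",
        parameters: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    },
  ],
  stream: true,
};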

Options

--token <token>   Authentication token from c0mpute.ai (required)
--url <url>       Orchestrator URL (default: https://c0mpute.ai)
--benchmark       Run benchmark only, then exit
--version         Show version
--help            Show help

Requirements

  • Node.js 18+
  • ollama installed and running
  • GPU with 20GB+ VRAM recommended (NVIDIA RTX 3090/4090, Apple Silicon 32GB+)
  • ~17GB disk space for the model

Default Model

Qwen 3.5 27B Abliterated — an uncensored 27B parameter model with 256K context window, vision support, and thinking capabilities.

Architecture

The worker delegates all inference to ollama's local HTTP API. This means:

  • No CUDA/Metal build issues — ollama handles GPU acceleration
  • Easy model management — ollama pulls and caches models
  • Automatic GPU detection — ollama picks the best backend for your hardware

The worker is a dumb relay — it passes tool definitions to the model and relays tool calls back to the orchestrator for execution. Tools are defined and managed server-side.
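A sketch of that relay, with assumed event names (tool:call / tool:result) standing in for whatever the real protocol uses:

import type { Socket } from "socket.io-client";

declare const socket: Socket; // the orchestrator connection from the earlier sketch

// Shape of a tool call as ollama reports it in a chat response.
type ToolCall = { function: { name: string; arguments: Record<string, unknown> } };

// Forward a tool call to the orchestrator, wait for the result, and feed it
// back to the model as a role:"tool" message.
async function relayToolCall(jobId: string, call: ToolCall, messages: object[]) {
  const output: string = await new Promise((resolve) => {
    socket.emit("tool:call", { jobId, call });
    // A real implementation would correlate responses by jobId.
    socket.once("tool:result", (r: { output: string }) => resolve(r.output));
  });
  messages.push({ role: "tool", content: output });
  // The worker then re-posts to /api/chat so the model can continue with the result.
}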

Earnings

Workers earn credits for completing inference jobs. Earnings are based on tokens generated and your hardware tier. Check your earnings at c0mpute.ai.