
@thomasrumas/llm-client

v1.2.0

CLI client to list, launch, and stop LLM inference servers managed by @thomasrumas/llm-manager over your local network.

Requirements

  • Node.js 22+
  • A machine on your LAN running @thomasrumas/llm-manager with the API server available — either:
    • Daemon (recommended): llm-manager service install && llm-manager service start on the remote machine
    • Embedded: API server enabled in the TUI Settings (runs while the TUI is open)

Install

npm install -g @thomasrumas/llm-client

Quick start

# 1. Point the client at your manager machine
llm-client config set remote-url http://192.168.1.5:3333

# 2. See what's available
llm-client list

# 3. Launch a model
llm-client start Qwen3-8B

# 4. Check it's running
llm-client status

# 5. Stop it
llm-client stop

Features

  • Zero dependencies — uses Node's built-in fetch, no runtime packages
  • Model name without .gguf — type Qwen3-8B instead of Qwen3-8B-Q4_K_M.gguf; the extension is added automatically
  • Default model — configure a default so llm-client start always knows what to launch
  • Named configurations — target any saved config with --config <name>
  • 10 s request timeout with a clear "cannot reach server" message
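The timeout and error behavior described above can be sketched with Node's built-in fetch and AbortSignal.timeout. This is an illustrative approach under stated assumptions, not the package's actual source; the `request` helper name is hypothetical:

```javascript
// Hypothetical sketch of a zero-dependency request helper with a
// 10-second timeout, in the spirit of the features listed above.
async function request(url) {
  try {
    // AbortSignal.timeout (Node 17.3+) aborts the fetch after 10 s.
    const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
    return await res.json();
  } catch (err) {
    // A timeout aborts with a TimeoutError; connection failures surface
    // as TypeError -- both mean the manager is unreachable.
    if (err.name === 'TimeoutError' || err instanceof TypeError) {
      throw new Error(`cannot reach server at ${url}`);
    }
    throw err;
  }
}
```

Using `AbortSignal.timeout` keeps the helper dependency-free on Node 22, which the package already requires.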

Commands

config

llm-client config show
llm-client config set <key> <value>

| Key              | Description                                                          |
| ---------------- | -------------------------------------------------------------------- |
| `remote-url`     | Base URL of the manager API (e.g. `http://192.168.1.5:3333`)         |
| `default-model`  | Model name to use when none is specified on `start`                  |
| `default-config` | Config name to use when `--config` is omitted (default: `"default"`) |

Client config is stored at ~/.local-llm-client/config.json.
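After setting the keys above, the stored file might look like this (illustrative values; the exact on-disk layout is an assumption based on the config keys, not documented by the package):

```json
{
  "remote-url": "http://192.168.1.5:3333",
  "default-model": "Qwen3-8B",
  "default-config": "default"
}
```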

list

llm-client list

Lists every model that has at least one saved configuration on the remote manager, along with their config names. Models without a configuration cannot be launched remotely.

start

llm-client start [model-name] [--config <name>]

Launches a model on the remote manager. If model-name is omitted, uses default-model from config. The .gguf extension is optional — both Qwen3-8B and Qwen3-8B-Q4_K_M.gguf are accepted.

llm-client start                          # uses default-model + default config
llm-client start Qwen3-8B                 # specific model, default config
llm-client start Qwen3-8B --config fast   # specific model + named config

status

llm-client status

Shows the currently running model (if any): name, config, port, PID, and uptime. Also prints the OpenAI-compatible endpoint URL.
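Once `status` reports a running model, the printed endpoint can be used with any OpenAI-compatible client. A minimal sketch; the base URL, port, and the `/v1/chat/completions` route are assumptions about the endpoint that `status` prints, and `buildChatRequest` is a hypothetical helper:

```javascript
// Hypothetical sketch: build a request for the OpenAI-compatible
// endpoint reported by `llm-client status`. The route is the standard
// OpenAI-style path, assumed here rather than documented.
function buildChatRequest(baseUrl, model, prompt) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// Usage against a running model (URL and port are illustrative):
//   const { url, options } =
//     buildChatRequest('http://192.168.1.5:8080', 'Qwen3-8B', 'Hello!');
//   const res = await fetch(url, options);
```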

stop

llm-client stop

Stops the model currently running on the remote manager.

help

llm-client help

How the manager API must be enabled

Option A — Daemon (recommended for always-on access)

On the remote machine:

llm-manager service install   # registers with launchd / systemd, starts at login
llm-manager service start     # starts now

The daemon runs the API server headlessly with no terminal window required. The TUI can still be used alongside it.

Option B — Embedded API inside the TUI

On the machine running the manager TUI:

  1. Open Settings (4 from the dashboard)
  2. Navigate to API Server and toggle it enabled with ← →
  3. Set the desired API Port (default 3333)
  4. Press Ctrl+S to save — the API server starts immediately

The API listens on 0.0.0.0 so it is reachable from any device on the same network.

License

ISC