

pi-llama-cpp

A Pi Coding Agent extension that integrates with a running llama.cpp server to provide live model browsing, loading, and switching directly from Pi.

Features

  • Auto-detect models — discovers all models available on your running llama.cpp server
  • Live status indicators — see which models are loaded, loading, failed, sleeping, or unloaded with color-coded icons
  • Load / unload / switch — manage models directly from the Pi command palette
  • Multi-model router support — works with both single-model and multi-model llama.cpp server configurations
  • Image capabilities detection — detects multimodal models automatically
  • Flexible URL resolution — resolves the server URL from project config, an environment variable, or global settings

Status Indicators

| Icon | Status | Description |
|------|----------|-----------------------------------|
| 🟢 | Loaded | Model is active and ready to use |
| 🟡 | Loading | Model is currently being loaded |
| 🔴 | Failed | Model failed to load |
| 🔵 | Sleeping | Model is available, but inactive |
| ⚪ | Unloaded | Model is not loaded on the server |

Note: The Sleeping status appears only when you start your server with llama-server --sleep-idle-seconds <n>. This llama.cpp server flag tells the server to put idle models to sleep after n seconds; the model wakes automatically when you send a message.
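
As a quick illustration of the status set above, a picker might map statuses to icons like this (a minimal sketch; the type and constant names are hypothetical, not part of the extension's API):

```typescript
// Hypothetical status type mirroring the table above.
type ModelStatus = "loaded" | "loading" | "failed" | "sleeping" | "unloaded";

// Icon shown next to each model in the /models picker.
const statusIcon: Record<ModelStatus, string> = {
  loaded: "🟢",
  loading: "🟡",
  failed: "🔴",
  sleeping: "🔵",
  unloaded: "⚪",
};
```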

Installation

This package is a Pi extension. Install it with:

```
pi install npm:pi-llama-cpp
```

or

```
pi install https://github.com/gsanhueza/pi-llama-cpp
```

Configuration

The extension resolves the llama.cpp server URL using the following priority order:

  1. Per-project config — .pi/llama-server.json in your project root:

    ```json
    {
      "url": "http://127.0.0.1:8080"
    }
    ```
  2. Environment variable — LLAMA_SERVER_URL

  3. Global settings — ~/.pi/agent/settings.json:

    ```json
    {
      "llamaServerUrl": "http://127.0.0.1:8080"
    }
    ```
  4. Default — http://127.0.0.1:8080
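
For reference, the priority order above amounts to a cascade like the following sketch. This is not the extension's actual code; the function name and file handling are assumptions based on the paths and keys listed above.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Sketch of the URL resolution cascade described above.
function resolveServerUrl(projectRoot: string): string {
  // 1. Per-project config: .pi/llama-server.json
  const projectConfig = path.join(projectRoot, ".pi", "llama-server.json");
  if (fs.existsSync(projectConfig)) {
    const { url } = JSON.parse(fs.readFileSync(projectConfig, "utf8"));
    if (url) return url;
  }

  // 2. Environment variable
  const envUrl = process.env.LLAMA_SERVER_URL;
  if (envUrl) return envUrl;

  // 3. Global settings: ~/.pi/agent/settings.json
  const globalPath = path.join(os.homedir(), ".pi", "agent", "settings.json");
  if (fs.existsSync(globalPath)) {
    const { llamaServerUrl } = JSON.parse(fs.readFileSync(globalPath, "utf8"));
    if (llamaServerUrl) return llamaServerUrl;
  }

  // 4. Default
  return "http://127.0.0.1:8080";
}
```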

API Key

If your llama.cpp server requires authentication, use /login in Pi, select the "API key" option, and choose the Llama.cpp provider from the list.

Alternatively, configure the API key in ~/.pi/agent/auth.json using the provider ID llama-server:

```json
{
  "llama-server": {
    "type": "api_key",
    "key": "<your-api-key-here>"
  }
}
```

Note: The provider is displayed as Llama.cpp in the Pi UI, but its internal identifier is llama-server — use this ID when configuring auth.json or other programmatic access.

Usage

Prerequisites

Make sure your llama.cpp server is running with the appropriate flags.

  • For multi-model support (model router), start the server with:

    ```
    llama-server --models-preset path/to/presets.ini ...
    ```

  • For single-model mode, start the server with:

    ```
    llama-server --model path/to/model.gguf ...
    ```

The extension determines the context size as follows:

  • Router mode — when loaded, reads meta.n_ctx from the /models endpoint; when not loaded, reads --ctx-size and/or --fit-ctx from the model's status args array
  • Single mode — reads meta.n_ctx from the /models endpoint
  • Falls back to 128000 if neither source is available
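
A condensed sketch of that logic, assuming a model entry shape with a meta.n_ctx field and a status.args array as described above (the ModelEntry type is an assumption, not the server's documented schema):

```typescript
// Assumed shape of an entry from the server's /models endpoint.
interface ModelEntry {
  meta?: { n_ctx?: number };     // present when the model is loaded
  status?: { args?: string[] };  // router mode: the model's launch args
}

// Sketch of the context-size resolution described above.
function resolveContextSize(entry: ModelEntry): number {
  // Loaded model (router or single mode): read meta.n_ctx directly.
  if (entry.meta?.n_ctx) return entry.meta.n_ctx;

  // Router mode, model not loaded: look for --ctx-size / --fit-ctx in args.
  const args = entry.status?.args ?? [];
  for (const flag of ["--ctx-size", "--fit-ctx"]) {
    const i = args.indexOf(flag);
    if (i !== -1 && args[i + 1]) return Number(args[i + 1]);
  }

  // Fallback when neither source is available.
  return 128000;
}
```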

Commands

| Command | Description |
|---------|-------------------------------------------------------------------------------------|
| /models | Browse your models with live status. Select a model to load, switch, or unload it. |

Note: When the llama.cpp server is unreachable, /models is still available but shows the description Llama.cpp models (offline) and displays an error notification with the configured server URL.

Model Actions

When browsing models via the /models command, you can:

  • Load & switch — Load an unloaded model and switch to it
  • Switch model — Switch to a model that is already loaded
  • Unload — Unload a loaded model to free memory
  • Retry — Retry loading a failed model
  • Info — View model details (ID, capabilities, context size)
  • Cancel — Cancel the current operation

Note: In single-model mode, only Info and Cancel are available, since there is only one model loaded on the server.

Model Selection Event

When you switch models via Pi's model picker (instead of using the /models command), the extension listens for the model_select event and loads the requested model on the server before the conversation begins.

This keeps the server in sync with the active model in Pi, regardless of how the switch was initiated — you don't need to manually load models before using them.
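
In outline, the handler looks something like this. The event registration API and helper shown here are purely illustrative stand-ins; the real interface comes from the @earendil-works/pi-coding-agent SDK.

```typescript
// Illustrative stand-ins for the real Pi SDK surface.
declare const pi: { events: { on(name: string, fn: (e: any) => void): void } };
declare function loadModel(modelId: string): Promise<void>;

pi.events.on("model_select", async (event: { modelId: string }) => {
  // Make sure the chosen model is loaded on the llama.cpp server
  // before the first message of the conversation is sent.
  await loadModel(event.modelId);
});
```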

Loading Models

When you trigger a load, switch, or retry action, the extension polls the server to track progress. If a model takes longer than 60 seconds to load, the polling times out with an error.

Note: The timeout is only for the polling. The model might still be loading.
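
The polling loop amounts to roughly the following sketch (the interval, status values, and fetchModelStatus helper are assumptions; only the 60-second cap is documented above):

```typescript
// Hypothetical helper that queries the server's /models endpoint
// and returns the model's current status string.
declare function fetchModelStatus(modelId: string): Promise<string>;

// Sketch of polling for load progress with a 60-second cap.
async function waitForLoad(modelId: string): Promise<void> {
  const deadline = Date.now() + 60_000; // 60-second polling timeout
  while (Date.now() < deadline) {
    const status = await fetchModelStatus(modelId);
    if (status === "loaded") return;
    if (status === "failed") throw new Error(`Model ${modelId} failed to load`);
    await new Promise<void>((r) => setTimeout(r, 1_000)); // poll once per second
  }
  // Only the polling gives up here; the server may still be loading the model.
  throw new Error(`Timed out waiting for ${modelId} to load`);
}
```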

Model Configuration

Each model exposed to Pi includes the following defaults:

  • maxTokens — 32000 (maximum possible tokens per response according to Pi's source code)
  • reasoning — true (assumed, as llama.cpp's /models endpoint does not expose it)
  • cost — all zero (local model)
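
As a plain object, those defaults look roughly like this (the exact shape Pi expects is not documented in this README, so the field layout is an assumption):

```typescript
// Sketch of the per-model defaults listed above.
const modelDefaults = {
  maxTokens: 32000, // max tokens per response, per Pi's source code
  reasoning: true,  // assumed; llama.cpp's /models endpoint does not expose it
  cost: { input: 0, output: 0 }, // local model: no per-token cost
};
```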

Dependencies

| Dependency | Purpose |
|---------------------------------|---------------------------------------|
| @earendil-works/pi-coding-agent | Pi Coding Agent SDK (peer dependency) |