jasper-context-compactor

v0.4.1

Context compaction plugin for OpenClaw - works with local models (MLX, llama.cpp) that don't report token limits

Jasper Context Compactor

Token-based context compaction for OpenClaw with local models (MLX, llama.cpp, Ollama)

The Problem

Local LLM servers don't return context-overflow errors the way cloud APIs do. When the context grows too long, they typically do one of the following:

  • Silently truncate your conversation
  • Return garbage output
  • Crash without explanation

OpenClaw's built-in compaction relies on error signals that local models don't provide.

The Solution

Jasper Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting your model's limit. No more broken conversations.

Quick Start

```bash
npx jasper-context-compactor setup
```

The setup will:

  1. Back up your config — Saves openclaw.json to ~/.openclaw/backups/ with restore instructions
  2. Ask permission — Won't read your config without consent
  3. Detect local models — Automatically identifies Ollama, llama.cpp, MLX, LM Studio providers
  4. Suggest token limits — Based on your model's contextWindow from config
  5. Let you customize — Enter your own values if auto-detection doesn't match
  6. Update config safely — Adds the plugin with your chosen settings

Supported Local Providers

The setup automatically detects these providers (primary or fallback):

  • Ollama — Any provider with ollama in name or :11434 in baseUrl
  • llama.cpp — llamacpp provider
  • MLX — mlx provider
  • LM Studio — lmstudio provider
  • friend-gpu — Custom GPU servers
  • OpenRouter — When routing to local models
  • Local network — Any provider with localhost, 127.0.0.1, or Tailscale IP in baseUrl
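These rules boil down to string checks on each provider's name and `baseUrl`. A rough sketch of the heuristic (the `ProviderEntry` shape and `isLocalProvider` name are illustrative, not the plugin's actual API; OpenRouter routing is omitted since it isn't a simple string check):

```typescript
interface ProviderEntry {
  name: string;
  baseUrl?: string;
}

// Heuristic mirror of the detection rules listed above.
function isLocalProvider(p: ProviderEntry): boolean {
  const name = p.name.toLowerCase();
  const url = (p.baseUrl ?? "").toLowerCase();

  if (["llamacpp", "mlx", "lmstudio", "friend-gpu"].includes(name)) return true;
  if (name.includes("ollama") || url.includes(":11434")) return true;
  if (url.includes("localhost") || url.includes("127.0.0.1")) return true;
  // Tailscale addresses fall in the CGNAT range 100.64.0.0/10.
  if (/\/\/100\.(6[4-9]|[7-9]\d|1[01]\d|12[0-7])\./.test(url)) return true;
  return false;
}
```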

Then restart OpenClaw:

```bash
openclaw gateway restart
```

Privacy

🔒 Everything runs 100% locally. Nothing is sent to external servers.

The setup only reads your local openclaw.json file (with your permission) to detect your model and suggest appropriate limits.

How It Works

  1. Before each message, the plugin estimates the total context size in tokens (chars ÷ 4)
  2. If over maxTokens, splits messages into "old" and "recent"
  3. Summarizes old messages using your session model
  4. Injects summary as context — conversation continues seamlessly
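The four steps can be sketched as follows. This is a simplified, synchronous illustration (the message shape and helper names are assumptions; the real plugin calls the session model asynchronously and caches summaries):

```typescript
interface Message { role: string; content: string; }

// Step 1: estimate tokens from character count (chars ÷ 4 by default).
function estimateTokens(messages: Message[], charsPerToken = 4): number {
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  return Math.ceil(chars / charsPerToken);
}

// Steps 2-4: split off old messages, summarize them, keep recent ones.
function compact(
  messages: Message[],
  maxTokens: number,
  keepRecentTokens: number,
  summarize: (old: Message[]) => string, // would be backed by the session model
): Message[] {
  if (estimateTokens(messages) <= maxTokens) return messages; // under budget: no-op

  // Walk backwards, keeping the newest messages within keepRecentTokens.
  let budget = keepRecentTokens;
  let i = messages.length - 1;
  for (; i >= 0; i--) {
    const cost = estimateTokens([messages[i]]);
    if (cost > budget) break;
    budget -= cost;
  }
  const old = messages.slice(0, i + 1);
  const recent = messages.slice(i + 1);
  if (old.length === 0) return messages; // nothing left to summarize

  // Inject the summary in place of the old messages.
  return [
    { role: "system", content: `Summary of earlier conversation: ${summarize(old)}` },
    ...recent,
  ];
}
```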

Commands

After setup, use these in chat:

| Command | Description |
|---------|-------------|
| /context-stats | Show current token usage and limits |
| /compact-now | Clear cache and force fresh compaction |

Configuration

The setup configures these values in ~/.openclaw/openclaw.json:

```json
{
  "plugins": {
    "entries": {
      "context-compactor": {
        "enabled": true,
        "config": {
          "maxTokens": 8000,
          "keepRecentTokens": 2000,
          "summaryMaxTokens": 1000,
          "charsPerToken": 4,
          "modelFilter": ["ollama", "lmstudio"]
        }
      }
    }
  }
}
```

| Option | Description |
|--------|-------------|
| maxTokens | Trigger compaction above this (set to ~80% of your model's context window) |
| keepRecentTokens | Recent context to preserve (default: 25% of maxTokens) |
| summaryMaxTokens | Max tokens for the summary (default: 12.5% of maxTokens) |
| charsPerToken | Token estimation ratio (4 works well for English) |
| modelFilter | (Optional) Only compact for these providers; if unset, all sessions are compacted |
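Those default ratios mean the suggested limits can be computed directly from a model's `contextWindow`. A sketch of the arithmetic (the function name is illustrative, and the exact rounding used by setup is an assumption):

```typescript
// Suggested settings from a model's context window, using the ratios above.
function suggestLimits(contextWindow: number) {
  const maxTokens = Math.floor(contextWindow * 0.8); // ~80%, leaving headroom
  return {
    maxTokens,
    keepRecentTokens: Math.floor(maxTokens * 0.25),  // 25% of max
    summaryMaxTokens: Math.floor(maxTokens * 0.125), // 12.5% of max
  };
}
```

For a 10,000-token context window this gives 8000 / 2000 / 1000, matching the example config above.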

Restoring Your Config

Setup always backs up first. To restore:

```bash
# List backups
ls ~/.openclaw/backups/

# Restore (use the timestamp from your backup)
cp ~/.openclaw/backups/openclaw-2026-02-11T08-00-00-000Z.json ~/.openclaw/openclaw.json

# Restart
openclaw gateway restart
```

Uninstall

```bash
# Remove plugin files
rm -rf ~/.openclaw/extensions/context-compactor

# Remove from config (edit openclaw.json and delete the context-compactor entry)
# Or restore from backup
```

Links

  • npm: https://www.npmjs.com/package/jasper-context-compactor
  • GitHub: https://github.com/E-x-O-Entertainment-Studios-Inc/openclaw-context-compactor
  • ClawHub: https://clawhub.ai/skills/context-compactor

License

MIT