
@db0-ai/openclaw

v0.3.0


db0 memory plugin for OpenClaw. Persistent scoped memory, automatic fact extraction, sub-agent support, compaction safety.


@db0-ai/openclaw

Your OpenClaw agent forgets things it shouldn't. You told it your preferences three sessions ago — gone. A sub-agent spent 10 minutes researching — the parent never saw the results. You switched projects and yesterday's context bled into today's work.

db0 is a ContextEngine plugin that gives your agent memory that actually works — across sessions, between agents, and across projects.

Quick Start

openclaw plugins install @db0-ai/openclaw

Or use the interactive installer for more options:

npx @db0-ai/openclaw init

Both set up persistent SQLite storage, configure openclaw.json, and register db0 as the context engine. Restart OpenClaw for the changes to take effect.

Requires OpenClaw v2026.3.7 or later. Compatible with v2026.3.22 (latest).

Or tell your OpenClaw agent:

Read https://db0.ai/skills/openclaw/SKILL.md and install db0

What You Get

Out of the box, zero configuration:

  • Your agent remembers — preferences, decisions, and context persist across sessions
  • Facts stay current — when things change, old facts are superseded, not duplicated
  • Projects stay separate — scoped memory prevents cross-project contamination
  • Sub-agents collaborate — parent and child share memory automatically
  • Nothing is lost to compaction — facts are extracted before messages are discarded
  • You can see what it knows — inspector UI, CLI, and structured logs for full visibility
  • Memory consolidation — related facts are automatically clustered and merged over time (with consolidateFn)
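The "facts stay current" behavior can be pictured with a small sketch (illustrative only; db0's real schema and API differ): each fact carries a subject key, and storing a new fact about the same subject marks the older one as superseded instead of inserting a duplicate.

```typescript
// Illustrative sketch of supersede-on-update; NOT db0's actual implementation.
type Fact = { subject: string; content: string; status: "active" | "superseded" };

class MemoryStore {
  private facts: Fact[] = [];

  // Store a fact; any existing active fact about the same subject is
  // superseded rather than duplicated.
  remember(subject: string, content: string): void {
    for (const f of this.facts) {
      if (f.subject === subject && f.status === "active") f.status = "superseded";
    }
    this.facts.push({ subject, content, status: "active" });
  }

  active(): Fact[] {
    return this.facts.filter((f) => f.status === "active");
  }
}

const store = new MemoryStore();
store.remember("editor", "User prefers VS Code");
store.remember("editor", "User switched to Neovim");
// Only the newest "editor" fact remains active; the old one is kept but superseded.
```

Superseding (rather than deleting) keeps history inspectable, which is what the inspector's "status" filter surfaces.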

Memory Consolidation

When you provide a consolidateFn, db0 clusters semantically similar memories and merges them via your LLM. Three memories about TypeScript preferences become one concise fact. Runs as part of reconcile() — no extra calls needed.

db0({
  // consolidateFn receives a cluster of related memories and returns the merged fact.
  consolidateFn: async (memories) => {
    // callGemini stands in for whatever LLM client you already use.
    const response = await callGemini(
      `Merge these related facts into one concise statement:\n${memories.map(m => m.content).join("\n")}`
    );
    return { content: response.text };
  }
})

Without consolidateFn, reconcile() performs only exact-match deduplication (the previous default behavior) and makes zero LLM calls.
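Exact-match dedup can be sketched in a few lines (a hypothetical helper, not db0's internals): byte-identical contents collapse to one entry, while paraphrases survive untouched, which is precisely the gap consolidateFn closes.

```typescript
// Hypothetical illustration of exact-match deduplication: only identical
// content strings collapse; paraphrases are kept (that is what consolidateFn adds).
function dedupExact(memories: string[]): string[] {
  const seen = new Set<string>();
  const out: string[] = [];
  for (const m of memories) {
    if (!seen.has(m)) {
      seen.add(m);
      out.push(m);
    }
  }
  return out;
}

const deduped = dedupExact([
  "User prefers TypeScript",
  "User prefers TypeScript",   // exact duplicate: dropped
  "The user likes TypeScript", // paraphrase: kept without consolidateFn
]);
// deduped has 2 entries
```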

Upgrade Embeddings

Works out of the box with hash embeddings (exact/near-exact match). For semantic search, set one env var:

export GEMINI_API_KEY="your-key"    # free tier, auto-detected

Or use local embeddings:

ollama pull nomic-embed-text
npx @db0-ai/openclaw set embeddings ollama

| Provider | Setup | Quality | Cost |
|---|---|---|---|
| gemini | GEMINI_API_KEY env var | Good (768d) | Free tier |
| ollama | ollama pull nomic-embed-text | Good | Free (local) |
| openai | OPENAI_API_KEY env var | Best | ~$0.02/1M tokens |
| hash | Zero-config (default) | Exact match | Free |

When the provider changes, existing memories are re-embedded automatically.
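To see why hash embeddings only catch exact or near-exact matches while semantic providers generalize, here is a toy sketch (not db0's actual embedding code): tokens are feature-hashed into a fixed-size vector and compared by cosine similarity, so identical wording scores 1.0 but a paraphrase with disjoint tokens scores near zero.

```typescript
// Toy feature-hash embedding; illustrative only, not db0's implementation.
function hashEmbed(text: string, dims = 64): number[] {
  const v = new Array(dims).fill(0);
  for (const token of text.toLowerCase().split(/\s+/)) {
    let h = 0;
    for (let i = 0; i < token.length; i++) {
      h = (h * 31 + token.charCodeAt(i)) >>> 0; // simple rolling hash per token
    }
    v[h % dims] += 1; // bump the hashed bucket
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Identical wording matches; a paraphrase with different tokens barely does,
// which is why a semantic provider (gemini, ollama, openai) improves recall.
const same = cosine(hashEmbed("user prefers dark mode"), hashEmbed("user prefers dark mode"));
const para = cosine(hashEmbed("user prefers dark mode"), hashEmbed("likes night theme"));
```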

Inspector

npx @db0-ai/inspector

Opens a web UI at http://127.0.0.1:6460 with three views:

  • Memories — browse, filter, and search by scope, status, source, and extraction method
  • Dashboard — charts showing memory distribution and confidence levels
  • Health — integrity report surfacing contradictions, missing summaries, and orphaned edges

See @db0-ai/inspector for full options.

CLI

npx @db0-ai/openclaw init                   # install
npx @db0-ai/openclaw upgrade                # upgrade to latest
npx @db0-ai/openclaw uninstall              # remove (--keep-data to preserve DB)
npx @db0-ai/openclaw set embeddings ollama  # switch embedding provider
npx @db0-ai/openclaw get                    # view current settings
npx @db0-ai/openclaw status                 # check health
npx @db0-ai/openclaw restore                # restore from Postgres backup

How It Works

User message
  │
  ▼
┌─────────────┐
│  assemble() │ ← search memory, inject relevant context
└──────┬──────┘
       │
       ▼
   LLM call
       │
       ▼
┌─────────────┐
│   ingest()  │ ← extract facts, log turn, checkpoint state
└──────┬──────┘
       │
       ▼
┌─────────────┐
│ afterTurn() │ ← preserve compaction summaries
└─────────────┘

| Lifecycle | What db0 does |
|---|---|
| bootstrap | Open storage, restore checkpoint, sync memory index, run backup |
| assemble | Search structured facts + file chunks, inject context |
| ingest | Extract facts from messages, log turn, checkpoint state |
| compact | Extract facts from messages about to be discarded, snapshot memory files |
| afterTurn | Preserve compaction summaries as durable memory |
| prepareSubagentSpawn | Spawn child harness with shared backend, build briefing |
| onSubagentEnded | Store child's result, close child harness |
| dispose | Flush and close |
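The assemble → ingest → afterTurn loop can be sketched as a minimal context engine. The hook names mirror the table above, but OpenClaw's real plugin contract and db0's extraction logic are richer; this trivial heuristic just shows where each hook reads or writes memory.

```typescript
// Minimal sketch of the turn lifecycle; hook names follow the table above,
// but the actual OpenClaw plugin interface may differ.
type Turn = { user: string; assistant: string };

class SketchEngine {
  facts: string[] = [];
  summaries: string[] = [];

  // assemble: search memory and inject relevant context before the LLM call.
  assemble(userMessage: string): string[] {
    const words = userMessage.toLowerCase().split(/\s+/);
    return this.facts.filter((f) => words.some((w) => f.toLowerCase().includes(w)));
  }

  // ingest: extract facts from the finished turn (here, a trivial heuristic;
  // db0 does real fact extraction).
  ingest(turn: Turn): void {
    if (turn.user.startsWith("Remember:")) {
      this.facts.push(turn.user.slice("Remember:".length).trim());
    }
  }

  // afterTurn: preserve compaction summaries as durable memory.
  afterTurn(compactionSummary?: string): void {
    if (compactionSummary) this.summaries.push(compactionSummary);
  }
}

const engine = new SketchEngine();
engine.ingest({ user: "Remember: deploys happen on Fridays", assistant: "Noted." });
const injected = engine.assemble("When do deploys run?");
// injected contains the stored deploy fact
```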

Documentation

For detailed configuration, API reference, and advanced usage:

  • Configuration Reference — all options, storage backends, extraction strategies
  • Memory & Search — scopes, superseding, hybrid search, relationships, structured content
  • Sub-Agents — shared backend model, visibility rules, lifecycle hooks
  • Embeddings — provider setup (OpenAI, Ollama, transformers.js), custom functions
  • Migration — importing from legacy OpenClaw MEMORY.md
  • Manual Setup — step-by-step without the CLI installer

License

MIT