
pi-mem

v0.1.0

Persistent memory extension for pi — captures observations, compresses them into searchable memories, and injects context into future sessions

pi-mem

Persistent memory extension for pi. Automatically captures what pi does during sessions, compresses observations into searchable memories, and injects relevant context into future sessions.

Features

  • Automatic observation capture — hooks into tool_result events to record tool executions
  • LLM-powered observation extraction — extracts structured facts, narrative, concepts, and file references from tool output
  • Session summaries — compresses observations into searchable memories using checkpoint summarization
  • Vector + full-text search — LanceDB-backed semantic and keyword search across all memories
  • Context injection — automatically loads relevant past memories at session start
  • Memory tools — search, timeline, get_observations, and save_memory tools for the LLM
  • Privacy controls — <private> tags to exclude sensitive content
  • Project awareness — scopes memories per project (from git remote), supports cross-project search

Installation

pi install npm:pi-mem

Or to try without installing:

pi -e npm:pi-mem

Configuration

Create ~/.pi/agent/pi-mem.json or ~/.pi-mem/config.json (optional — all settings have sensible defaults):

{
  "enabled": true,
  "autoInject": true,
  "maxObservationLength": 4000,
  "summaryModel": "anthropic/claude-haiku-3",
  "indexSize": 10,
  "tokenBudget": 2000,
  "embeddingProvider": "openai",
  "embeddingModel": "text-embedding-3-small",
  "embeddingDims": 1536
}

| Setting | Default | Description |
|---------|---------|-------------|
| enabled | true | Enable/disable the extension |
| autoInject | true | Automatically inject past memories at session start |
| maxObservationLength | 4000 | Max characters per tool output observation |
| summaryModel | (current model) | Model to use for session summarization |
| observerModel | (falls back to summaryModel) | Model for per-tool observation extraction |
| thinkingLevel | (current level) | Thinking level for LLM calls |
| indexSize | 10 | Max entries in the project memory index |
| tokenBudget | 2000 | Max tokens for injected context |
| embeddingProvider | (none) | Pi provider name for embeddings; must support OpenAI-compatible /v1/embeddings |
| embeddingModel | text-embedding-3-small | Embedding model name |
| embeddingDims | 1536 | Embedding vector dimensions (must match the model) |
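
Since all settings have defaults, a config file only needs the keys you want to override. For example, to keep everything else default but disable automatic context injection:

```json
{
  "autoInject": false
}
```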

Embedding Setup

For vector/semantic search, configure an embedding provider that supports the OpenAI-compatible /v1/embeddings endpoint. Set embeddingProvider to a provider name from your ~/.pi/agent/models.json:

{
  "embeddingProvider": "openai",
  "embeddingModel": "text-embedding-3-small",
  "embeddingDims": 1536
}

Without an embedding provider, full-text search still works.
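
The endpoint contract is the standard OpenAI embeddings API. As a sketch (field names follow that API; this is illustrative, not pi-mem's internal request code):

```typescript
// Request body for POST {baseUrl}/v1/embeddings (OpenAI-compatible).
const request = {
  model: "text-embedding-3-small", // must match embeddingModel
  input: "authentication flow",    // text to embed
};

// The response carries one vector per input; its length must equal
// embeddingDims (1536 for text-embedding-3-small):
// { "data": [ { "embedding": [/* 1536 numbers */] } ] }
console.log(JSON.stringify(request));
```

If the vector length returned by your provider differs from embeddingDims, the stored vectors will not match the table schema, which is why the two settings must agree.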

Data Storage

All data is stored in ~/.pi-mem/:

~/.pi-mem/
├── lancedb/                      # Observation store (LanceDB)
└── config.json                   # User preferences (optional)

Commands

  • /mem — Show current memory status (project, observation count, vector DB status)

Tools (available to the LLM)

search

Search past observations and summaries with full-text search:

search({ query: "authentication flow" })
search({ query: "authentication", project: "my-app", limit: 5 })

timeline

Get chronological context around a specific observation:

timeline({ anchor: "abc12345" })
timeline({ query: "auth bug", depth_before: 5, depth_after: 5 })

get_observations

Fetch full details for specific observation IDs:

get_observations({ ids: ["abc12345", "def67890"] })

save_memory

Explicitly save important information:

save_memory({
  text: "Decided to use PostgreSQL for ACID transactions",
  title: "Database choice",
  concepts: ["decision", "architecture"]
})

Privacy

Wrap sensitive content in <private> tags in tool output — it is stripped before the observation is recorded:

API key is <private>sk-abc123</private>
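
The redaction behavior can be pictured as a simple tag-stripping pass. This is an illustrative sketch, not pi-mem's actual implementation, and the function name is hypothetical:

```typescript
// Hypothetical sketch of <private> redaction: remove each
// <private>...</private> span (including multi-line content)
// before the text is recorded as an observation.
function stripPrivate(text: string): string {
  return text.replace(/<private>[\s\S]*?<\/private>/g, "");
}

console.log(stripPrivate("API key is <private>sk-abc123</private>"));
// → "API key is "
```

Note that the tags and everything between them are removed; the surrounding text is kept as-is.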

License

MIT