
pi-memory-md

v0.1.1

Letta-like memory management for pi using structured markdown files in a GitHub repository

pi-memory-md

Letta-like memory management for pi using GitHub-backed markdown files.

Features

  • Persistent Memory: Store context, preferences, and knowledge across sessions
  • Git-backed: Version control with full history
  • Prompt append: Memory index automatically appended to conversation at session start
  • On-demand access: LLM reads full content via tools when needed
  • Multi-project: Separate memory spaces per project

Quick Start

# 1. Install
pi install npm:pi-memory-md
# Or for latest from GitHub:
pi install git:github.com/VandeeFeng/pi-memory-md

# 2. Create a GitHub repository (private recommended)

# 3. Configure pi
# Add to ~/.pi/agent/settings.json:
{
  "pi-memory-md": {
    "enabled": true,
    "repoUrl": "git@github.com:username/repo.git",
    "localPath": "~/.pi/memory-md"
  }
}

# 4. Start a new pi session
# The extension will auto-initialize and sync on first run

Commands available in pi:

  • :memory-init - Initialize repository structure
  • :memory-status - Show repository status

How It Works

Session Start
    ↓
1. Git pull (sync latest changes)
    ↓
2. Scan all .md files in memory directory
    ↓
3. Build index (descriptions + tags only - NOT full content)
    ↓
4. Append index to conversation via prompt append (not system prompt)
    ↓
5. LLM reads full file content via tools when needed

Why index-only via prompt append? Keeps token usage low while making full content accessible on-demand. The index is appended to the conversation, not injected into the system prompt.
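To make step 3 above concrete, here is a minimal sketch of index building, assuming each scanned file yields its frontmatter description and tags. The `MemoryEntry` shape and the index layout are illustrative assumptions, not the extension's exact format:

```typescript
// Illustrative sketch only: the extension's real index format may differ.
interface MemoryEntry {
  path: string;        // path relative to the memory root
  description: string; // from the file's frontmatter
  tags: string[];      // from the file's frontmatter
}

// Build a compact index: descriptions and tags only, never full content.
function buildIndex(entries: MemoryEntry[]): string {
  const lines = entries.map(
    (e) => `- ${e.path}: ${e.description} [${e.tags.join(", ")}]`
  );
  return ["## Memory Index", ...lines].join("\n");
}

const index = buildIndex([
  {
    path: "core/user/identity.md",
    description: "User identity and background",
    tags: ["user", "identity"],
  },
]);
```

Because only one short line per file reaches the conversation, the index stays cheap even with many memory files.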

Available Tools

The LLM can use these tools to interact with memory:

| Tool | Parameters | Description |
|------|------------|-------------|
| memory_init | {force?: boolean} | Initialize or reinitialize repository |
| memory_sync | {action: "pull" \| "push" \| "status"} | Git operations |
| memory_read | {path: string} | Read a memory file |
| memory_write | {path, content, description, tags?} | Create/update memory file |
| memory_list | {directory?: string} | List all memory files |
| memory_search | {query, searchIn} | Search by content/tags/description |
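As an illustration of the search tool's contract, the filtering behind a memory_search call might look like this sketch. The entry shape and function body are assumptions for illustration, not the extension's implementation:

```typescript
// Hypothetical sketch of memory_search filtering; not the extension's actual code.
interface MemoryEntry {
  path: string;
  description: string;
  tags: string[];
  content: string;
}

type SearchField = "content" | "tags" | "description";

function memorySearch(
  entries: MemoryEntry[],
  query: string,
  searchIn: SearchField
): MemoryEntry[] {
  const q = query.toLowerCase();
  return entries.filter((e) => {
    if (searchIn === "tags")
      return e.tags.some((t) => t.toLowerCase().includes(q));
    if (searchIn === "description")
      return e.description.toLowerCase().includes(q);
    return e.content.toLowerCase().includes(q);
  });
}

const sample: MemoryEntry[] = [
  {
    path: "core/user/prefer.md",
    description: "Coding preferences",
    tags: ["typescript", "style"],
    content: "2-space indentation",
  },
  {
    path: "core/project/tech-stack.md",
    description: "Project stack",
    tags: ["project"],
    content: "Node.js backend",
  },
];

const hits = memorySearch(sample, "typescript", "tags");
```

Searching tags or descriptions only touches the lightweight index fields, while a content search implies reading file bodies.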

Memory File Format

---
description: "User identity and background"
tags: ["user", "identity"]
created: "2026-02-14"
updated: "2026-02-14"
---

# Your Content Here

Markdown content...
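Splitting a file like the one above into frontmatter and body can be sketched as follows. A real implementation would parse the frontmatter as YAML; this regex-based helper (a hypothetical name, not the extension's API) only handles the simple delimiter layout shown:

```typescript
// Hypothetical helper: split "---"-delimited frontmatter from the markdown body.
function splitFrontmatter(text: string): { frontmatter: string; body: string } {
  const m = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  return m ? { frontmatter: m[1], body: m[2] } : { frontmatter: "", body: text };
}

const file = [
  "---",
  'description: "User identity and background"',
  'tags: ["user", "identity"]',
  "---",
  "",
  "# Your Content Here",
].join("\n");

const parts = splitFrontmatter(file);
```

The index builder would keep only `description` and `tags` from the frontmatter, leaving the body to be fetched on demand via memory_read.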

Directory Structure

~/.pi/memory-md/
└── project-name/
    ├── core/
    │   ├── user/           # Your preferences
    │   │   ├── identity.md
    │   │   └── prefer.md
    │   └── project/        # Project context
    │       └── tech-stack.md
    └── reference/          # On-demand docs

Configuration

{
  "pi-memory-md": {
    "enabled": true,
    "repoUrl": "git@github.com:username/repo.git",
    "localPath": "~/.pi/memory-md",
    "injection": "message-append",
    "autoSync": {
      "onSessionStart": true
    }
  }
}

| Setting | Default | Description |
|---------|---------|-------------|
| enabled | true | Enable extension |
| repoUrl | Required | GitHub repository URL |
| localPath | ~/.pi/memory-md | Local clone path |
| injection | "message-append" | Memory injection mode: "message-append" or "system-prompt" |
| autoSync.onSessionStart | true | Git pull on session start |
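The defaults in the table suggest a merge along these lines; the config type and `withDefaults` helper below are illustrative assumptions, not part of the extension's API:

```typescript
// Hypothetical sketch: fill in defaults for any settings the user omits.
interface MemoryConfig {
  enabled: boolean;
  repoUrl: string; // the only required setting
  localPath: string;
  injection: "message-append" | "system-prompt";
  autoSync: { onSessionStart: boolean };
}

function withDefaults(
  user: Partial<MemoryConfig> & { repoUrl: string }
): MemoryConfig {
  return {
    enabled: true,
    localPath: "~/.pi/memory-md",
    injection: "message-append",
    autoSync: { onSessionStart: true },
    ...user, // user-supplied settings override the defaults above
  };
}

const cfg = withDefaults({ repoUrl: "git@github.com:username/repo.git" });
```

With this shape, a minimal settings file only needs `repoUrl`; everything else falls back to the documented defaults.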

Memory Injection Modes

The extension supports two modes for injecting memory into the conversation:

1. Message Append (Default)

{
  "pi-memory-md": {
    "injection": "message-append"
  }
}
  • Memory is sent as a custom message before the user's first message
  • Not visible in the TUI (display: false)
  • Persists in the session history
  • Injected only once per session (on first agent turn)
  • Pros: Lower token usage, memory persists naturally in conversation
  • Cons: Injected only at the start of the conversation, so it sits far back in the context during long sessions

2. System Prompt

{
  "pi-memory-md": {
    "injection": "system-prompt"
  }
}
  • Memory is appended to the system prompt
  • Rebuilt and injected on every agent turn
  • Always visible to the model in the system context
  • Pros: Memory always present in system context, no need to scroll back
  • Cons: Higher token usage (repeated on every prompt)

Recommendation: Use message-append (default) for optimal token efficiency. Switch to system-prompt if you notice the model not accessing memory consistently.

Usage Examples

Simply talk to pi; the LLM will automatically use memory tools when appropriate:

You: Save my preference for 2-space indentation in TypeScript files to memory.

Pi: [Uses memory_write tool to save your preference]

You can also explicitly request operations:

You: List all memory files for this project.
You: Search memory for "typescript" preferences.
You: Read core/user/identity.md
You: Sync my changes to the repository.

The LLM automatically:

  • Reads memory index at session start (appended to conversation)
  • Writes new information when you ask to remember something
  • Syncs changes when needed

Commands

Use these directly in pi:

  • :memory-status - Show repository status
  • :memory-init - Initialize repository structure

Reference