

mempiper

Turn scattered AI chat histories into persistent, structured memory.

mempiper scans your machine for conversations from 9 AI coding tools, normalizes them into a unified format, and produces layered memory files — daily summaries, weekly/monthly rollups, and a distilled core memory (MEMORY.md) suitable for use as a system prompt.

Why

You talk to AI assistants every day. Those conversations contain decisions, conventions, preferences, and hard-won context — but they're scattered across tools and forgotten by next session. mempiper extracts durable knowledge from that history so your future AI sessions start with context instead of from scratch.

Two Ways to Use

1. Claude Code Command (Recommended)

Let your Claude Code agent do everything — no API key needed:

npm install -g mempiper
mempiper command install     # creates .claude/commands/memories.md

Then in Claude Code, type /project:memories. The agent runs a 4-stage pipeline:

  1. Collect — mempiper ingest normalizes chat histories from all detected providers
  2. Prepare — mempiper prepare groups by date, computes stats, formats conversation bundles
  3. Summarize — the agent reads each prepared daily bundle and writes memory summaries
  4. Distill — the agent creates weekly/monthly rollups and a core MEMORY.md

The CLI does all the data processing; the agent only does what it's good at — summarization.

2. Standalone CLI

Run the full pipeline yourself with your own LLM API key:

npm install -g mempiper
mempiper init               # interactive setup — configure LLM, adapters, output
mempiper scan               # discover available chat history sources
mempiper ingest             # normalize conversations → memory-output/ingested/
mempiper organize           # summarize via LLM → daily/rollups/core memory

Requires MEMPIPER_LLM_API_KEY (or configure inline via mempiper init). Supports any OpenAI-compatible endpoint and Anthropic, both with custom base URLs.
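Support for "any OpenAI-compatible endpoint" typically means the request shape is fixed and only the base URL and key vary. A minimal sketch of how such a request could be assembled — `buildChatRequest` is illustrative, not mempiper's actual internals:

```typescript
// Assemble a request to an OpenAI-compatible chat endpoint. This only
// illustrates why a custom baseURL plus MEMPIPER_LLM_API_KEY is all the
// CLI needs to target a different provider.
function buildChatRequest(baseURL: string, apiKey: string, model: string, prompt: string) {
  return {
    // Normalize a trailing slash, then append the standard completions path.
    url: `${baseURL.replace(/\/$/, "")}/v1/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  };
}
```

Pointing the same request at a local server (e.g. an Ollama-style endpoint) is then just a matter of changing `baseURL`.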

Supported Providers

| Provider | Type | Auto-detect | Source |
|----------|------|:-----------:|--------|
| Claude Code | Local | ✓ | ~/.claude/ |
| OpenCode | Local | ✓ | SQLite DB |
| Codex (OpenAI) | Local | ✓ | ~/.codex/ |
| Cursor | Local | ✓ | vscdb files |
| GitHub Copilot | Local | ✓ | VS Code workspaceStorage/ |
| Aider | Local | ✓ | .aider.chat.history.md |
| Chatbox | Export | — | JSON export from app |
| ChatGPT | Export | — | Data export from settings |
| Claude.ai | Export | — | Data export via email |

Local providers are auto-detected by mempiper scan. Export providers require you to export data from the app and point mempiper at the file.

Output

memory-output/
├── ingested/               # Normalized JSON conversations (per provider)
├── memory/
│   ├── 2026-02-15.md       # Daily summary
│   ├── 2026-02-16.md
│   └── ...
├── memory-rollups/
│   ├── 2026-W07.md         # Weekly rollup
│   └── 2026-01.md          # Monthly rollup
└── MEMORY.md               # Core memory — distilled essentials

| Layer | Content | Produced when |
|-------|---------|---------------|
| memory/ | One file per day — key decisions, outcomes, blockers | Every run |
| memory-rollups/ | Weekly (7 daily files) and monthly (4+ weeks) aggregations | Enough data exists |
| MEMORY.md | Distilled preferences, conventions, technical decisions | Aggregated from rollups |
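The rollup file names follow ISO-8601 week numbering (2026-W07.md) and calendar months (2026-01.md). A sketch of how a daily summary name maps to its rollup keys — `rollupKeys` is illustrative, not mempiper's actual code:

```typescript
// Compute the ISO-8601 week for a date (weeks start Monday;
// week 1 is the week containing the year's first Thursday).
function isoWeek(d: Date): { year: number; week: number } {
  const t = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  // Shift to the Thursday of this week, which fixes the ISO year.
  t.setUTCDate(t.getUTCDate() + 4 - (t.getUTCDay() || 7));
  const yearStart = new Date(Date.UTC(t.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((t.getTime() - yearStart.getTime()) / 86400000 + 1) / 7);
  return { year: t.getUTCFullYear(), week };
}

// Map a daily summary name like "2026-02-15" to its rollup file names.
function rollupKeys(daily: string): { weekly: string; monthly: string } {
  const d = new Date(`${daily}T00:00:00Z`);
  const { year, week } = isoWeek(d);
  return {
    weekly: `${year}-W${String(week).padStart(2, "0")}.md`,
    monthly: `${daily.slice(0, 7)}.md`,
  };
}
```

For example, the daily file 2026-02-15.md from the tree above lands in the 2026-W07.md weekly and 2026-02.md monthly rollups.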

Commands

| Command | Description |
|---------|-------------|
| mempiper init | Interactive setup — LLM provider, model, API key/base URL, adapters, privacy |
| mempiper init --yes | Non-interactive with defaults |
| mempiper scan | Discover chat history sources on your machine |
| mempiper ingest | Normalize conversations into memory-output/ingested/ |
| mempiper prepare | Group, format, and compute stats for agent summarization |
| mempiper organize | Summarize ingested data via LLM into daily/rollups/core |
| mempiper status | Show processing status and checkpoint info |
| mempiper export | Export memories in various formats |
| mempiper command install | Install Claude Code /project:memories command |

Flags

| Flag | Applies to | Default | Description |
|------|-----------|---------|-------------|
| --concurrency <n> | ingest, organize | 4 / 2 | Parallel workers |
| --max-days <n> | ingest, organize | 30 | Only process last N days |
| --max-conversations <n> | ingest, organize | — | Limit to newest N per provider |
| --force | ingest, organize, init | false | Re-process / overwrite |
| --dir <path> | command install | . | Target project directory |

Configuration

mempiper init creates .mempiper/config.yaml:

outputDir: ./memory-output
llm:
  daily:
    provider: openai        # or "anthropic"
    model: gpt-4o-mini
    baseURL: https://api.openai.com   # custom endpoint supported
    temperature: 0.3
    maxTokens: 4096
adapters:
  enabled: []               # empty = auto-detect all
privacy:
  sanitizePaths: true
  sanitizeKeys: true

| Environment Variable | Description |
|----------------------|-------------|
| MEMPIPER_LLM_API_KEY | API key for the configured LLM provider |
| MEMPIPER_OUTPUT_DIR | Override the output directory |

Privacy

All processing happens locally. Raw chat histories never leave your machine — only summarized content is sent to the configured LLM (or stays entirely local when using the Claude Code command).

  • Path stripping — /Users/you/project/src/ becomes <project>/src/
  • Key redaction — API keys, tokens, secrets are removed before processing
  • Content filtering — Passwords and credentials are stripped
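The sanitization steps above can be pictured as simple rewrites applied before any content leaves the ingestion stage. The patterns below are a minimal sketch, not mempiper's actual redaction rules, which are certainly more thorough:

```typescript
// Illustrative sanitizer: strip home-anchored absolute paths and redact
// strings that look like API keys.
function sanitize(text: string): string {
  return text
    // /Users/alice/myproj/src/a.ts or /home/alice/... → <project>/src/a.ts
    .replace(/\/(?:Users|home)\/[^/\s]+\/[^/\s]+\//g, "<project>/")
    // Opaque prefixed tokens (sk-/ghp-/xoxb-style keys) → [REDACTED]
    .replace(/\b(?:sk|ghp|xoxb)-[A-Za-z0-9_-]{16,}\b/g, "[REDACTED]");
}
```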

For AI Agents

If you are an AI coding agent and the user asked you to set up mempiper:

npm install -g mempiper
mempiper command install

This creates .claude/commands/memories.md. The user types /project:memories in Claude Code to trigger the full pipeline.

The pipeline has 4 stages. The CLI handles stages 1–2; you handle stages 3–4:

  1. mempiper ingest --max-days 30 — collects and normalizes conversations
  2. mempiper prepare --max-days 30 — groups by date, formats conversation bundles, writes memory-output/prepared/manifest.json and per-day .md bundles
  3. Read manifest.json, process each day where alreadySummarized is false — read the prepared bundle, summarize, write to memory-output/memory/YYYY-MM-DD.md
  4. Create weekly/monthly rollups → memory-output/memory-rollups/, distill core memory → memory-output/MEMORY.md

Do not run mempiper organize — that requires an external LLM API key. You are the LLM. The detailed prompts and output format specs are in .claude/commands/memories.md.

License

MIT