

MemoryBench

A pluggable benchmarking framework for evaluating memory and context systems.

Features

  • 🔌 Interoperable: mix and match any provider with any benchmark
  • 🧩 Bring your own benchmarks: plug in custom datasets and tasks
  • ♻️ Checkpointed runs: resume from any pipeline stage (ingest → index → search → answer → evaluate)
  • 🆚 Multi‑provider comparison: run the same benchmark across providers side‑by‑side
  • 🧪 Judge‑agnostic: swap GPT‑4o, Claude, Gemini, etc. without code changes
  • 📊 Structured reports: export run status, failures, and metrics for analysis
  • 🖥️ Web UI: inspect runs, questions, and failures interactively, in real-time!
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│  Benchmarks │    │  Providers  │    │   Judges    │
│  (LoCoMo,   │    │ (Supermem,  │    │  (GPT-4o,   │
│  LongMem..) │    │  Mem0, Zep) │    │  Claude..)  │
└──────┬──────┘    └──────┬──────┘    └──────┬──────┘
       └──────────────────┼──────────────────┘
                         ▼
             ┌───────────────────────┐
             │      MemoryBench      │
             └───────────┬───────────┘
                         ▼
   ┌────────┬─────────┬────────┬──────────┬────────┐
   │ Ingest │ Indexing│ Search │  Answer  │Evaluate│
   └────────┴─────────┴────────┴──────────┴────────┘
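The "mix and match" design implies a common provider contract that every memory backend implements. The real interface lives under src/providers/ and is not reproduced here; the sketch below is a minimal illustration of the idea, with assumed method names (`ingest`, `search`) and a toy in-memory backend standing in for a real provider like Supermemory or Mem0.

```typescript
// Hypothetical provider contract — illustrative only, not the real
// src/providers interface. Any backend implementing it can be plugged in.
interface MemoryProvider {
  name: string;
  ingest(sessionId: string, messages: string[]): Promise<void>;
  search(query: string, limit?: number): Promise<string[]>;
}

// Toy backend: stores messages in memory and does naive substring
// matching, standing in for a real vector/memory service.
class InMemoryProvider implements MemoryProvider {
  name = "in-memory";
  private store: { sessionId: string; text: string }[] = [];

  async ingest(sessionId: string, messages: string[]): Promise<void> {
    for (const text of messages) this.store.push({ sessionId, text });
  }

  async search(query: string, limit = 5): Promise<string[]> {
    const q = query.toLowerCase();
    return this.store
      .filter((m) => m.text.toLowerCase().includes(q))
      .slice(0, limit)
      .map((m) => m.text);
  }
}
```

Because benchmarks only talk to this interface, the same LoCoMo run can be pointed at any provider without changing benchmark code.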

Quick Start

bun install
cp .env.example .env.local  # Add your API keys
bun run src/index.ts run -p supermemory -b locomo

Configuration

# Providers (at least one)
SUPERMEMORY_API_KEY=
MEM0_API_KEY=
ZEP_API_KEY=

# Judges (at least one)
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GOOGLE_API_KEY=

Commands

| Command | Description |
|---------|-------------|
| run | Full pipeline: ingest → index → search → answer → evaluate → report |
| compare | Run benchmark across multiple providers simultaneously |
| ingest | Ingest benchmark data into provider |
| search | Run search phase only |
| test | Test single question |
| status | Check run progress |
| list-questions | Browse benchmark questions |
| show-failures | Debug failed questions |
| serve | Start web UI |
| help | Show help (help providers, help models, help benchmarks) |

Options

-p, --provider         Memory provider (supermemory, mem0, zep)
-b, --benchmark        Benchmark (locomo, longmemeval, convomem)
-j, --judge            Judge model (gpt-4o, sonnet-4, gemini-2.5-flash, etc.)
-r, --run-id           Run identifier (auto-generated if omitted)
-m, --answering-model  Model for answer generation (default: gpt-4o)
-l, --limit            Limit number of questions
-q, --question-id      Specific question (for test command)
--force                Clear checkpoint and restart

Examples

# Full run
bun run src/index.ts run -p mem0 -b locomo

# With custom run ID
bun run src/index.ts run -p mem0 -b locomo -r my-test

# Resume existing run
bun run src/index.ts run -r my-test

# Limited questions
bun run src/index.ts run -p supermemory -b locomo -l 10

# Different models
bun run src/index.ts run -p zep -b longmemeval -j sonnet-4 -m gemini-2.5-flash

# Compare multiple providers
bun run src/index.ts compare -p supermemory,mem0,zep -b locomo -s 5

# Test single question
bun run src/index.ts test -r my-test -q question_42

# Debug
bun run src/index.ts status -r my-test
bun run src/index.ts show-failures -r my-test

Pipeline

1. INGEST    Load benchmark sessions → Push to provider
2. INDEX     Wait for provider indexing
3. SEARCH    Query provider → Retrieve context
4. ANSWER    Build prompt → Generate answer via LLM
5. EVALUATE  Compare to ground truth → Score via judge
6. REPORT    Aggregate scores → Output accuracy + latency

Each phase checkpoints independently, so a failed run resumes from the last successful point rather than starting over.
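The resume behavior can be sketched as a loop over the phases that skips anything already recorded as complete. This is an illustration of the idea only; the phase names match the pipeline above, but `Checkpoint` and `runPipeline` are assumed names, not MemoryBench internals.

```typescript
// Sketch of checkpointed phase execution — illustrative, not the real
// implementation. Completed phases are skipped; progress is recorded
// after each phase so a crash resumes where it left off.
type Phase = "ingest" | "index" | "search" | "answer" | "evaluate" | "report";

const PHASES: Phase[] = ["ingest", "index", "search", "answer", "evaluate", "report"];

interface Checkpoint {
  completed: Phase[];
}

async function runPipeline(
  checkpoint: Checkpoint,
  handlers: Record<Phase, () => Promise<void>>,
): Promise<Phase[]> {
  const ran: Phase[] = [];
  for (const phase of PHASES) {
    if (checkpoint.completed.includes(phase)) continue; // already done: skip
    await handlers[phase]();            // do the work for this phase
    checkpoint.completed.push(phase);   // record progress (persisted in practice)
    ran.push(phase);
  }
  return ran;
}
```

In the real tool the checkpoint is persisted to disk (see Checkpointing below), so the skip check survives process restarts.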

Checkpointing

Runs persist to data/runs/{runId}/:

  • checkpoint.json - Run state and progress
  • results/ - Search results per question
  • report.json - Final report

Re-running with the same run ID resumes from the saved checkpoint. Use --force to clear the checkpoint and restart from scratch.
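The `status` command presumably reads checkpoint.json to report progress. The actual checkpoint schema is not documented here, so the fields below (`phase`, `completed`, `total`) are assumptions for illustration only:

```typescript
// Illustrative summary of a run's checkpoint state. The RunCheckpoint
// fields are assumed for this sketch — not the real checkpoint.json schema.
interface RunCheckpoint {
  runId: string;
  phase: string;     // current pipeline phase (assumed field)
  completed: number; // questions finished in this phase (assumed field)
  total: number;     // total questions in the run (assumed field)
}

function summarizeRun(cp: RunCheckpoint): string {
  const pct = cp.total === 0 ? 0 : Math.round((100 * cp.completed) / cp.total);
  return `${cp.runId}: ${cp.phase} ${cp.completed}/${cp.total} (${pct}%)`;
}
```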

Extending

| Component | Guide |
|-----------|-------|
| Add Provider | src/providers/README.md |
| Add Benchmark | src/benchmarks/README.md |
| Add Judge | src/judges/README.md |
| Project Structure | src/README.md |

License

MIT