
memory-search · v1.0.3 · Downloads: 76

memory-search

A hybrid search memory system for coding agents (70% vector similarity + 30% BM25 keyword matching).

Based on the memory system from OpenClaw (originally Clawdbot).

Features

  • Hybrid search combining vector similarity and keyword matching
  • Local embeddings included (no API key needed)
  • Optional OpenAI embeddings for faster queries
  • bun:sqlite storage (no native extensions needed)
  • Support for MEMORY.md + memory/*.md files
  • Embedding cache for efficiency
  • Works with Claude Code, Cursor, Codex, and 30+ other agents

Quick Start

1. Install the Skill

bunx skills add rjyo/memory-search

This installs the /memory skill to your coding agent (Claude Code, Cursor, Codex, etc.).

2. Use It

Save information:

/memory remember that I prefer TypeScript over JavaScript

Search memories:

/memory what did we decide about authentication?

Or just ask naturally; the skill triggers on phrases like "remember this" or "what did we decide about X".

Automatic Memory Injection (Optional)

Want Claude to search memory automatically? Add to your CLAUDE.md:

## Memory
When questions relate to past decisions or preferences, use /memory to search first.

Or use a hook in .claude/hooks.json:

{
  "hooks": {
    "SessionStart": [{
      "command": "bunx memory-search \"project context preferences decisions\"",
      "timeout": 30000
    }]
  }
}

Programmatic Usage

import { MemoryIndex } from "memory-search";

// Create index (uses local embeddings by default)
const memory = await MemoryIndex.create({
  workspaceDir: "./my-project",
});

// Index files
await memory.sync();

// Search
const results = await memory.search("authentication");
// Returns: [{ path, startLine, endLine, score, snippet }]

// Read a file
const file = await memory.readFile({ path: "MEMORY.md" });

// Get status
const status = memory.status();

// Clean up
await memory.close();
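As a sketch of consuming the documented result shape ({ path, startLine, endLine, score, snippet }), you might format hits for display like this. The formatResult helper is illustrative, not part of the package's API:

```typescript
// Shape of a single search hit, as documented above.
interface SearchResult {
  path: string;
  startLine: number;
  endLine: number;
  score: number;
  snippet: string;
}

// Hypothetical helper: render a hit as "path:start-end (score) snippet".
function formatResult(r: SearchResult): string {
  return `${r.path}:${r.startLine}-${r.endLine} (${r.score.toFixed(2)}) ${r.snippet}`;
}

const line = formatResult({
  path: "MEMORY.md",
  startLine: 3,
  endLine: 8,
  score: 0.912,
  snippet: "Prefers TypeScript",
});
// "MEMORY.md:3-8 (0.91) Prefers TypeScript"
```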

Memory File Structure

your-project/
├── MEMORY.md           # Long-term: preferences, patterns, decisions
└── memory/
    ├── 2024-01-15.md   # Daily notes
    ├── 2024-01-16.md
    └── architecture.md # Topic-specific memory

MEMORY.md (Permanent)

  • User preferences
  • Project decisions
  • Coding patterns
  • Architecture choices

memory/*.md (Contextual)

  • Daily session notes
  • Work in progress
  • Ideas to explore
  • Meeting notes

Configuration

Local embeddings work out of the box.

First Run: The first query downloads a ~300MB embedding model. Run bunx memory-search --warmup after installing to pre-download it.

For faster embeddings, you can optionally use OpenAI:

export OPENAI_API_KEY=sk-...

Full Options

interface MemoryConfig {
  workspaceDir: string;       // Required: directory with MEMORY.md

  // Database
  dbPath?: string;            // Default: {workspaceDir}/.memory.sqlite

  // Embeddings
  embeddingProvider?: 'local' | 'openai';  // Default: local
  openaiApiKey?: string;      // Required for 'openai' provider
  openaiModel?: string;       // Default: text-embedding-3-small
  localModelPath?: string;    // Default: hf:ggml-org/embeddinggemma-300M-GGUF/...
  modelCacheDir?: string;     // Default: ~/.cache/memory-search

  // Chunking
  chunkTokens?: number;       // Default: 400
  chunkOverlap?: number;      // Default: 80

  // Search
  maxResults?: number;        // Default: 6
  minScore?: number;          // Default: 0.35
  vectorWeight?: number;      // Default: 0.7
  textWeight?: number;        // Default: 0.3
}
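As an illustrative sketch (values chosen for the example, not recommendations), a config that overrides a few of the defaults above might look like this; only workspaceDir is required, and the object would be passed to MemoryIndex.create:

```typescript
// Hypothetical override of a few MemoryConfig defaults.
const config = {
  workspaceDir: "./my-project",          // required: directory with MEMORY.md
  embeddingProvider: "openai" as const,  // switch from the local default
  openaiApiKey: "sk-...",                // or read from the OPENAI_API_KEY env var
  maxResults: 10,                        // return more hits than the default 6
  minScore: 0.4,                         // filter out weaker matches
};
```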

Development

# Install dependencies
bun install

# Run tests
bun test

# Type check
bun run typecheck

# Build
bun run build

How It Works

  1. Indexing: Scans MEMORY.md and memory/*.md, chunks into ~400 token pieces
  2. Embedding: Converts chunks to vectors via local model (or OpenAI)
  3. Storage: SQLite database with FTS5 for keyword search
  4. Search: Hybrid scoring (70% vector similarity + 30% BM25 keyword matching)
  5. Caching: Embeddings cached to avoid re-computation
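The blend in step 4 can be sketched as a weighted sum. This is an illustration of the 70/30 combination, not the package's actual implementation; in particular, normalizing BM25 by the maximum score in the result set is an assumption:

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// BM25 scores are unbounded, so scale to [0, 1] before blending
// (normalization strategy is an assumption for this sketch).
function hybridScore(
  vectorSim: number,
  bm25: number,
  maxBm25: number,
  vectorWeight = 0.7,
  textWeight = 0.3,
): number {
  const textScore = maxBm25 > 0 ? bm25 / maxBm25 : 0;
  return vectorWeight * vectorSim + textWeight * textScore;
}

const score = hybridScore(cosineSimilarity([1, 0], [1, 0]), 5, 10);
// 0.7 * 1.0 + 0.3 * 0.5 = 0.85
```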

License

MIT