
@iflow-mcp/thewinci-local-rag-mcp

v1.0.0


Persistent project memory for AI coding agents — semantic search, AST-aware chunking, dependency graphs, and conversation history


mimirs

Persistent project memory for AI coding agents. One command to set up, nothing to maintain.


Your agent starts every session blind — guessing filenames, grepping for keywords, burning context on irrelevant files, and forgetting everything you discussed yesterday.

On a real project, that can mean 380K tokens per prompt and 12-second response times.

After indexing with mimirs: 91K tokens, 3 seconds. That's a 76% reduction; depending on your model and usage, it can translate to hundreds or thousands of dollars in monthly API savings.

No API keys. No cloud. No Docker. Just bun and SQLite.

Works with

Claude Code  ·  Cursor  ·  Windsurf  ·  JetBrains (Junie)  ·  GitHub Copilot  ·  any MCP client

Search quality

100% recall. Benchmarked on four real codebases — including Kubernetes at 8,691 files — with known expected results per query. Full methodology in BENCHMARKS.md.

| Codebase | Language | Files | Queries | Recall@10 | MRR | Zero-miss |
|---|---|---|---|---|---|---|
| mimirs | TypeScript | 97 | 20 | 100.0% | 0.651 | 0.0% |
| Express.js | JavaScript | 161 | 15 | 100.0% | 0.922 | 0.0% |
| Excalidraw | TypeScript | 676 | 20 | 100.0% | 0.366 | 0.0% |
| Kubernetes | Go | 8,691 | 20 | 100.0%* | 0.496 | 0.0%* |

*With config tuning. At default top-10, Recall is 80%. See BENCHMARKS.md for details.
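For reference, Recall@K and MRR are standard per-query metrics over ranked result lists; a minimal sketch of how they're computed (the helper names are illustrative, not mimirs APIs):

```typescript
// Recall@K: fraction of queries whose expected result appears in the top K.
function recallAtK(rankedLists: string[][], expected: string[], k = 10): number {
  const hits = rankedLists.filter((ranked, i) =>
    ranked.slice(0, k).includes(expected[i]),
  ).length;
  return hits / rankedLists.length;
}

// MRR: mean of 1/rank of the first relevant result (0 if absent).
function meanReciprocalRank(rankedLists: string[][], expected: string[]): number {
  const rr = rankedLists.map((ranked, i) => {
    const rank = ranked.indexOf(expected[i]);
    return rank === -1 ? 0 : 1 / (rank + 1);
  });
  return rr.reduce((a, b) => a + b, 0) / rr.length;
}

// Two toy queries: expected file found at rank 1 and rank 3.
const ranked = [["a.ts", "b.ts"], ["x.ts", "y.ts", "z.ts"]];
const expected = ["a.ts", "z.ts"];
console.log(recallAtK(ranked, expected));          // 1
console.log(meanReciprocalRank(ranked, expected)); // (1 + 1/3) / 2 ≈ 0.667
```

This is why the Kubernetes row can show perfect recall but a middling MRR: the right file is always retrieved, just not always near the top.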

How it compares

| | mimirs | No tool (grep + Read) | Context stuffing | Cloud RAG services |
|---|---|---|---|---|
| Setup | One command | Nothing | Nothing | API keys, accounts |
| Token cost | ~91K/prompt | ~380K/prompt | Entire codebase | Varies |
| Search quality | 100% Recall@10 | Depends on keywords | N/A (everything loaded) | Varies |
| Code understanding | AST-aware (24 langs) | Line-level | None | Usually line-level |
| Cross-session memory | Conversations + checkpoints | None | None | Some |
| Privacy | Fully local | Local | Local | Data leaves your machine |
| Price | Free | Free | High token bills | $10-50/mo + tokens |

What it gives your agent

Find code by meaning, not filename. "Where do we handle authentication errors?" → mimirs finds middleware/session-guard.ts. Hybrid vector + BM25 search, boosted by dependency graph centrality.

Remember past sessions. Conversation transcripts are indexed in real time. Three days later, your agent can search for "why did we switch to JWT?" and get the exact discussion.

Know what changed since last time. git_context shows uncommitted changes and recent commits in one call, so agents don't propose edits that conflict with in-progress work.

Leave notes for future sessions. annotate attaches persistent caveats to files or symbols — "known race condition", "blocked on auth rewrite" — that surface automatically in search results.

Mark decisions, not just code. Checkpoints capture milestones, direction changes, and blockers. Searchable across sessions so context doesn't evaporate.

Understand codebase structure. Dependency graphs, reverse-dependency lookups, and find_usages show the blast radius before any refactor.
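The blast-radius idea reduces to a breadth-first traversal over reverse dependency edges; a minimal sketch assuming a plain file-to-imports map (not mimirs' actual data model):

```typescript
// Given a map of file -> files it imports, find everything that
// transitively depends on a target file: the "blast radius" of a change.
function blastRadius(imports: Map<string, string[]>, target: string): Set<string> {
  // Invert the edges: file -> files that import it.
  const importedBy = new Map<string, string[]>();
  for (const [file, deps] of imports) {
    for (const dep of deps) {
      const list = importedBy.get(dep);
      if (list) list.push(file);
      else importedBy.set(dep, [file]);
    }
  }
  // BFS over the reverse edges.
  const seen = new Set<string>();
  const queue = [target];
  while (queue.length > 0) {
    const file = queue.shift()!;
    for (const dependent of importedBy.get(file) ?? []) {
      if (!seen.has(dependent)) {
        seen.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return seen;
}

const imports = new Map([
  ["app.ts", ["auth.ts", "db.ts"]],
  ["auth.ts", ["db.ts"]],
  ["db.ts", []],
]);
console.log([...blastRadius(imports, "db.ts")]); // ["app.ts", "auth.ts"]
```

Touching `db.ts` affects both files that reach it, directly or through `auth.ts`.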

Generate a project wiki. generate_wiki produces a structured, cross-linked markdown wiki — architecture docs, module pages, entity pages, guides, and Mermaid diagrams — all built from the semantic index.

Expose documentation gaps. Analytics log every query locally — nothing leaves your machine. Zero-result and low-relevance queries reveal what's missing from your docs.

Quick start

1. Install SQLite (macOS)

Apple's bundled SQLite doesn't support extensions:

```shell
brew install sqlite
```

2. Set up your editor

```shell
bunx mimirs init --ide claude   # or: cursor, windsurf, copilot, jetbrains, all
```

This creates the MCP server config, editor rules, .mimirs/config.json, and .gitignore entry. Run with --ide all to set up every supported editor at once.

3. Try the demo (optional)

```shell
bunx mimirs demo
```

Claude Code plugin

For deeper integration, mimirs is also available as a Claude Code plugin. In a Claude Code session:

```shell
/plugin marketplace add https://github.com/TheWinci/mimirs.git
/plugin install mimirs
```

The plugin adds SessionStart (context summary), PostToolUse (auto-reindex on edit), and SessionEnd (auto-checkpoint) hooks. No CLAUDE.md instructions needed — the plugin's built-in skill handles tool usage.

How it works

  1. Parse & chunk — Splits content using type-matched strategies: function/class boundaries for code (via tree-sitter across 24 languages), headings for markdown, top-level keys for YAML/JSON. Chunks that exceed the embedding model's token limit are windowed and merged.

  2. Embed — Each chunk becomes a 384-dimensional vector using all-MiniLM-L6-v2 (in-process via Transformers.js + ONNX, no API calls). Vectors are stored in sqlite-vec.

  3. Build dependency graph — Import specifiers and exported symbols are captured during AST chunking, then resolved to build a file-level dependency graph.

  4. Hybrid search — Queries run vector similarity and BM25 in parallel, blended by configurable weight. Results are boosted by dependency graph centrality and path heuristics. read_relevant returns individual chunks with entity names and exact line ranges (path:start-end).

  5. Watch & re-index — File changes are detected with a 2-second debounce. Changed files are re-indexed; deleted files are pruned.

  6. Conversation & checkpoints — Tails Claude Code's JSONL transcripts in real time. Agents can create checkpoints at important moments for future sessions to search.

  7. Annotations — Notes attached to files or symbols surface as [NOTE] blocks inline in read_relevant results.

  8. Analytics — Every query is logged. Analytics surface zero-result queries, low-relevance queries, and period-over-period trends.
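The hybrid-search step above (step 4) amounts to a weighted blend of normalized scores plus a centrality multiplier; a minimal sketch, where the default weight and the boost formula are illustrative assumptions, not mimirs' actual tuning:

```typescript
// Blend normalized vector-similarity and BM25 scores for one chunk,
// then boost by the containing file's dependency-graph centrality.
// `vectorWeight` plays the role of the configurable blend weight.
function hybridScore(
  vectorScore: number, // cosine similarity, normalized to [0, 1]
  bm25Score: number,   // BM25 score, normalized to [0, 1]
  centrality: number,  // file centrality in [0, 1]
  vectorWeight = 0.7,
): number {
  const blended = vectorWeight * vectorScore + (1 - vectorWeight) * bm25Score;
  return blended * (1 + 0.2 * centrality); // mild boost for central files
}

// A keyword-weak but semantically close chunk in a central file can
// outrank an exact keyword match in a leaf file.
console.log(hybridScore(0.9, 0.2, 1.0) > hybridScore(0.5, 1.0, 0.0)); // true
```

The multiplicative boost keeps ordering stable among files of equal centrality while nudging hub files upward, which is one reasonable way to use graph centrality in ranking.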

Supported languages

AST-aware chunking via bun-chunk with tree-sitter grammars:

TypeScript/JavaScript, Python, Go, Rust, Java, C, C++, C#, Ruby, PHP, Scala, Kotlin, Lua, Zig, Elixir, Haskell, OCaml, Dart, Bash/Zsh, TOML, YAML, HTML, CSS/SCSS/LESS

Also indexes: Markdown, JSON, XML, SQL, GraphQL, Protobuf, Terraform, Dockerfiles, Makefiles, and more. Files without a known extension fall back to paragraph splitting.
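The paragraph-splitting fallback for unknown file types amounts to breaking on blank lines; a minimal sketch of that strategy (not the actual bun-chunk implementation):

```typescript
// Fallback chunking for files with no known grammar: split on blank
// lines and keep non-empty paragraphs as individual chunks.
function splitParagraphs(text: string): string[] {
  return text
    .split(/\n\s*\n/)      // one or more blank lines ends a paragraph
    .map((p) => p.trim())
    .filter((p) => p.length > 0);
}

const chunks = splitParagraphs("First paragraph.\n\nSecond one.\n\n\nThird.");
console.log(chunks.length); // 3
```

Each resulting chunk is then embedded like any code chunk, so even opaque files stay searchable by meaning.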

Documentation

Stack

| Layer | Choice |
|---|---|
| Runtime | Bun (built-in SQLite, fast TS) |
| AST chunking | bun-chunk — tree-sitter grammars for 24 languages |
| Embeddings | Transformers.js + ONNX (in-process, no daemon) |
| Embedding model | all-MiniLM-L6-v2 (~23MB, 384 dimensions) — configurable |
| Vector store | sqlite-vec (single .db file) |
| MCP | @modelcontextprotocol/sdk (stdio transport) |
| Plugin | Claude Code plugin with skills + hooks |

All data lives in .mimirs/ inside your project — add it to .gitignore.