
@falsafa/mcp v0.1.2


Falsafa MCP

Stdio MCP server for the Falsafa corpus. Ten librarian-flavored tools so any LLM client (Claude Desktop, Claude Code, Cursor, Codex, or any MCP-aware host) can navigate 37 translated philosophical and classical works through paragraph-stable citations. No API key, no setup beyond npx.

npx -y @falsafa/mcp

Install from the npm registry, not from a git URL — git installs trigger the package's prepack hook, which depends on the source tree's corpus/ directory and bun. Registry installs ship the corpus pre-bundled in the tarball.

First run downloads ~48 MB (the corpus ships inside the tarball). If your MCP client's startup timeout is short — Claude Code in particular — run npx -y @falsafa/mcp once in a terminal first. npm caches the package, and your client's spawn resolves instantly thereafter.

Install in your daily LLM

Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "falsafa": { "command": "npx", "args": ["-y", "@falsafa/mcp"] }
  }
}

Restart Claude Desktop. The Falsafa tools show up in the tool palette. Ask "what works does Cynewulf have?" and the model calls list_works({ author: "cynewulf" }).

Claude Code

claude mcp add falsafa npx -y @falsafa/mcp

Cursor

Settings → MCP → Add new global MCP server, then paste:

{
  "mcpServers": {
    "falsafa": { "command": "npx", "args": ["-y", "@falsafa/mcp"] }
  }
}

Or edit ~/.cursor/mcp.json directly with the same shape.

Codex CLI

codex mcp add falsafa -- npx -y @falsafa/mcp

Persists in ~/.codex/config.toml. The -- separator is required.

Any other stdio MCP client

The universal config shape is:

{ "command": "npx", "args": ["-y", "@falsafa/mcp"] }

Drop that wherever your client expects an MCP server entry.
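For clients that read the mcpServers JSON shape (Claude Desktop, Cursor), a small helper can merge the entry into an existing config file without clobbering other servers. This is an illustrative sketch, not part of the package; the add_server name is an assumption, and TOML-based clients like Codex CLI need a different format.

```python
import json
import pathlib

# The universal stdio entry from above.
ENTRY = {"command": "npx", "args": ["-y", "@falsafa/mcp"]}

def add_server(config_path: str, name: str = "falsafa") -> dict:
    """Merge the Falsafa entry into an mcpServers-style JSON config.

    Creates the file if it does not exist and preserves any other
    servers already configured. (Hypothetical helper, not shipped
    with @falsafa/mcp.)
    """
    path = pathlib.Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})[name] = ENTRY
    path.write_text(json.dumps(config, indent=2) + "\n")
    return config
```

For example, `add_server(os.path.expanduser("~/.cursor/mcp.json"))`, then restart the client so it respawns its MCP servers.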

Tools

Ten tools. Eight for catalog navigation, two for the rule-based wiki layer.

  • list_works — list works in the corpus with optional author / era / genre / language filters
  • list_chapters — list chapters of a specific work
  • get_metadata — full metadata + variant counts for a work
  • read_chapter — full chapter text. Body is annotated with [p-XXXXXX] paragraph-id markers; use those for paragraph citations.
  • get_passage — read specific paragraphs by id list or 0-indexed range. Each result has a citation_url ready to drop into a markdown footnote.
  • search_corpus — search English bodies. Distinctive 2-3 word phrases work best.
  • find_related — TF-IDF-based related chapters, with a structural fallback.
  • compare_works — side-by-side pointer chapters for two works on a topic.
  • read_wiki — rule-based wiki card (~280 tokens) for a work or chapter. Use BEFORE read_chapter to scan what's worth a deep read. Cards are deterministic, generated from the corpus by classical statistical algorithms — zero LLM tokens in any output. Each card includes verbatim openings, closings, and key passages with [p-XXXXXX] cite handles.
  • read_wiki_full — heavier wiki sheet (~1,500 tokens) with the deeper statistical detail layered on top of the card. Opt-in for deep analysis; most queries should use read_wiki first and only escalate when needed.
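Put together, a typical lookup escalates from card to chapter to citation. Sketched as tool invocations (the work slug, chapter value, and argument names here are illustrative, not the package's actual schemas; check each tool's schema through your client):

```json
{ "tool": "read_wiki",    "arguments": { "work": "cynewulf-elene" } }
{ "tool": "read_chapter", "arguments": { "work": "cynewulf-elene", "chapter": 1 } }
{ "tool": "get_passage",  "arguments": { "paragraph_ids": ["p-0a1b2c"] } }
```

The card tells you whether the chapter is worth the full read, the chapter yields the [p-XXXXXX] markers, and get_passage turns a marker into a citation_url.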

What's in the corpus

37 works spanning Old English Christian poetry (Cynewulf), Urdu ghazal masters (Ghalib, Iqbal, Zauq), French Enlightenment political theory (Comte, Dunoyer), German philosophical writing (Fichte), Sanskrit smṛti traditions, and Old Javanese / Kawi tattva texts. Each work ships with the original-language source, a Latin-script transliteration where it makes sense, and an English translation. Every paragraph has a stable content-derived ID (p-xxxxxx) so citations survive reformatting.
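The README does not specify how the p-xxxxxx IDs are derived. One plausible scheme for a "stable content-derived ID" is a short hash of the normalized paragraph text, sketched below; the normalization rules and hash choice are assumptions, not the package's actual algorithm.

```python
import hashlib

def paragraph_id(text: str) -> str:
    """Sketch of a content-derived paragraph ID (hypothetical scheme).

    Normalizing whitespace and case first means the ID survives
    reflowing and reformatting, which is the stability property
    the corpus claims for its citations.
    """
    normalized = " ".join(text.split()).lower()
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return f"p-{digest[:6]}"
```

Because the ID depends only on the normalized text, re-rendering a page or re-wrapping lines would leave every citation_url intact under a scheme like this.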

Translations and transliterations are AI-assisted. AI can make mistakes — when accuracy matters, verify against the original-language source linked on each chapter page. Translations are produced by Thothica's pipeline across Claude / GPT / Gemini. Underlying source archives:

  • Old English (Cynewulf, OE Elegies) — sacred-texts.com
  • Sanskrit smṛti corpus — GRETIL, Göttingen Register of Electronic Texts in Indian Languages
  • Allama Iqbal (Bāng-i-Darā) — allamaiqbal.com, Iqbal Academy Pakistan
  • Mirza Ghalib + Sheikh Ibrahim Zauq — printed editions

Full source acknowledgments at falsafa.ai/about/#sources.

Links

  • falsafa.ai — reading site, eval explorer, thesis on why this design
  • falsafa.ai/thesis/#methodology — how eval scoring works (deterministic citation check; no LLM judge)
  • GitHub — adoistic/falsafa for source

License

MIT.