
brainmaxxing

v0.2.2


Obsidian-native LLM wiki with tiered search and visual dashboard, usable by any agent via MCP


brainmaxxing

a three-tier knowledge pipeline for AI agents

Raw sources go in, compiled wiki pages come out, generated artifacts build on top. The knowledge doesn't decay — it compounds.


Install

npm install -g brainmaxxing
brainmaxxing setup

The setup wizard walks you through 4 steps:

  1. Knowledge base mode — global wiki, project-specific (.wiki/ per repo), or both (default)
  2. Storage location — auto-detects Obsidian vaults, or pick a custom path. Defaults to ~/.brainmaxxing/wiki
  3. Search engine — fts (BM25, no download, default) or hybrid (BM25 + ~2GB vector model)
  4. Register MCP server with Claude Code — say yes. Writes brainmaxxing into ~/.claude.json

Install the Claude Code plugin (optional)

Adds /wiki:* slash commands on top of the MCP tools:

/plugin marketplace add ShahriarBijoy/brainmaxxing
/plugin install brainmaxxing@brainmaxxing
/reload-plugins

Prefer the UI? Run /plugin and use the Marketplaces and Discover tabs.

Verify

/mcp

You should see brainmaxxing listed as connected. If not, restart Claude Code.


Quick start

Just talk to Claude — it calls the MCP tools for you:

You: dump ~/papers/attention-is-all-you-need.pdf into my wiki
You: cook a page about the transformer architecture from that paper
You: lookup "attention mechanism"
You: what does my wiki currently contain?
You: check the health of my knowledge graph

Or use the plugin's workflow commands:

/wiki:setup           # interactive walkthrough
/wiki:research RLHF   # full pipeline: web search → raw → wiki → output
/wiki:ingest          # process anything sitting in raw/
/wiki:query "what do I know about reward hacking?"
/wiki:lint            # graph health check with fix suggestions
/wiki:compile         # synthesize wiki pages into an output document

Dashboard

Auto-starts with the MCP server at http://localhost:6969 (force graph, page browser, graph health). To launch standalone:

brainmaxxing web

MCP tools

14 tools across three tiers. All data-only — no internal LLM calls, no API key needed. The host agent does its own reasoning.

Wiki tier

| Tool | What it does |
|------|--------------|
| recall | Read a wiki page by slug |
| cook | Create/update a page with typed relationships |
| lookup | Search pages (BM25 via FTS5, or hybrid BM25 + qmd vector ranking) |
| shelf | List all pages |
| inventory | Full manifest with metadata and relationships |
| vibes | Graph health check: orphans, contradictions, broken links |
| yap | Append to the operation log |
| tidy | Regenerate the index page |
| status | System health across all three tiers |

Raw tier

| Tool | What it does |
|------|--------------|
| dump | Add a file to raw/ for later processing |
| stash | List raw files with processing status |

Output tier

| Tool | What it does |
|------|--------------|
| drop | Write a generated artifact to output/ |
| receipts | List outputs with citation and promotion status |
| glow_up | Promote an output to a full wiki page |

All tools accept scope: "auto" (default), "global", "project", or "all".
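As an illustration of what "auto" scoping could mean in practice, here is a minimal sketch. The function name `resolveScope` and its logic are assumptions for this example, not the package's actual implementation (which lives in src/wiki/ and may differ):

```typescript
// Hypothetical sketch of "auto" scope resolution: prefer the per-repo
// .wiki/ when one exists, otherwise fall back to the global wiki.
type Scope = "auto" | "global" | "project" | "all";

function resolveScope(
  requested: Scope,
  projectWikiExists: boolean
): "global" | "project" | "all" {
  if (requested !== "auto") return requested;
  return projectWikiExists ? "project" : "global";
}

console.log(resolveScope("auto", true));   // "project"
console.log(resolveScope("auto", false));  // "global"
```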


Plugin skills

| Skill | Workflow |
|-------|----------|
| /wiki:setup | Interactive setup wizard |
| /wiki:research | Full pipeline: web search → raw/ → wiki/ → output/ |
| /wiki:ingest | Process raw/ files into wiki/ pages |
| /wiki:query | Answer questions from wiki/, save to output/ |
| /wiki:lint | Graph health check with fix suggestions |
| /wiki:compile | Synthesize wiki/ pages into output/ documents |


How it works

Three folders:

your-wiki/
  raw/        ← Dump anything here. PDFs, articles, notes.
  wiki/       ← Compiled knowledge. Agent-organized, interconnected.
  output/     ← Generated artifacts. Reports, answers, analyses.

The agent reads from raw/, writes structured pages to wiki/ with typed semantic relationships, and generates research outputs that cite wiki pages. When an output is good enough, it gets promoted back into the wiki with glow_up.

Research orchestration

You: "research RLHF"
         │
         ▼
┌─────────────────────┐
│  Break into facets   │  "theory", "implementations", "limitations"
└────────┬────────────┘
         │
    ┌────┼────┐
    ▼    ▼    ▼
  Agent Agent Agent     ← each researches one facet independently
    │    │    │
    └────┼────┘
         ▼
┌─────────────────────┐
│  Merge findings      │  deduplicate, find cross-connections, flag contradictions
└────────┬────────────┘
         ▼
┌─────────────────────┐
│  Cook wiki pages     │  one page per concept, typed relationships
└────────┬────────────┘
         ▼
┌─────────────────────┐
│  Drop synthesis      │  output report citing all pages created
└─────────────────────┘

Subagents only gather information — they never write to the wiki. The orchestrator makes all editorial calls. For narrow topics, it skips the fan-out and researches linearly.
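The merge step above can be sketched in TypeScript. This is illustrative only: the `Finding` shape and `mergeFindings` helper are assumptions for the example, not the package's actual data model. It shows the key idea that findings are grouped by concept across facets, with duplicates dropped, so the orchestrator can spot cross-connections:

```typescript
// Illustrative sketch of the merge step: group subagent findings by
// concept and deduplicate, keeping cross-facet entries visible.
interface Finding {
  concept: string;
  note: string;
  facet: string;
}

function mergeFindings(perFacet: Finding[][]): Map<string, Finding[]> {
  const byConcept = new Map<string, Finding[]>();
  for (const findings of perFacet) {
    for (const f of findings) {
      const existing = byConcept.get(f.concept) ?? [];
      // Drop exact duplicates; entries from different facets survive,
      // which is what lets the orchestrator flag cross-connections.
      if (!existing.some((e) => e.facet === f.facet && e.note === f.note)) {
        existing.push(f);
      }
      byConcept.set(f.concept, existing);
    }
  }
  return byConcept;
}

const merged = mergeFindings([
  [{ concept: "rlhf", note: "trains a reward model", facet: "theory" }],
  [{ concept: "rlhf", note: "commonly uses PPO", facet: "implementations" }],
]);
console.log(merged.get("rlhf")!.length); // 2
```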

Promotion

raw/                    wiki/                   output/
messy sources    →    curated pages      →    generated artifacts
                                                    │
                                              worth keeping?
                                                    │
                                              ┌─────┴─────┐
                                              no          yes
                                                           │
                                                       glow_up
                                                           │
                                                           ▼
                                                   back into wiki/
                                                   as a synthesis page

glow_up promotes an output to a permanent wiki page, automatically creating "summarizes" relationships to every page the output cited.
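A minimal sketch of that promotion rule, assuming a helper `promotionRelationships` invented for this example (the real glow_up implementation may differ): one "summarizes" relationship is derived per cited page.

```typescript
// Illustrative: promoting an output yields one "summarizes" relationship
// per wiki page the output cited. Not the actual glow_up code.
interface Relationship {
  slug: string;
  type: "supports" | "contradicts" | "extends" | "supersedes"
      | "implements" | "cites" | "summarizes";
  reason: string;
}

function promotionRelationships(citedSlugs: string[]): Relationship[] {
  return citedSlugs.map((slug) => ({
    slug,
    type: "summarizes",
    reason: `Synthesis page promoted from an output that cited ${slug}`,
  }));
}

const rels = promotionRelationships(["rlhf", "reward-hacking"]);
console.log(rels.length); // 2
```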


Relationship types

| Type | Meaning |
|------|---------|
| supports | Evidence that strengthens the target |
| contradicts | Evidence that challenges the target |
| extends | Builds on the target concept |
| supersedes | Replaces the target |
| implements | Practical application of a concept |
| cites | References as a source |
| summarizes | Condensed version of the target |

Every relationship requires a reason string — not just "A contradicts B" but why. Those reasons are indexed and searchable.
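The kind of check this implies can be sketched as follows. This is an illustration of the rule, not the package's actual Zod schema in src/schemas.ts:

```typescript
// Illustrative validator mirroring the "every relationship requires a
// reason" rule: an empty or missing reason is rejected.
function validateRelationship(rel: {
  slug?: string;
  type?: string;
  reason?: string;
}): string[] {
  const errors: string[] = [];
  if (!rel.slug) errors.push("slug is required");
  if (!rel.type) errors.push("type is required");
  if (!rel.reason || rel.reason.trim() === "") {
    errors.push("reason is required: say why A relates to B, not just that it does");
  }
  return errors;
}

// A relationship without a reason fails validation.
console.log(validateRelationship({ slug: "reward-hacking", type: "contradicts" }));
```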


Configuration

Created by brainmaxxing setup at ~/.brainmaxxing/config.yaml:

mode: both                # global | project | both
globalWikiPath: "D:/obsidian/my-vault/brainmaxxing"
searchTier: fts           # fts (default) | hybrid (BM25 + vectors)
dashboardPort: 6969       # auto-starts with MCP server

Wiki page schema

---
title: "Reinforcement Learning from Human Feedback"
type: concept             # summary | entity | concept | comparison | synthesis
sources: ["https://arxiv.org/abs/2203.02155"]
relationships:
  - slug: reward-hacking
    type: contradicts
    reason: "Reward hacking demonstrates RLHF can be gamed"
  - slug: transformer-architecture
    type: extends
    reason: "RLHF builds on pretrained transformer models"
created: "2026-04-06T00:00:00.000Z"
updated: "2026-04-06T00:00:00.000Z"
confidence: high          # high | medium | low
status: active            # active | stale | superseded
---

Content with [[wikilinks]] to other pages.
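brainmaxxing parses this frontmatter with gray-matter; as a rough illustration of the split it performs, here is a minimal sketch (the helper `splitFrontmatter` is invented for this example and handles far less than gray-matter does):

```typescript
// Minimal frontmatter splitter, for illustration only. The real parser
// (gray-matter) handles full YAML, edge cases, and custom delimiters.
function splitFrontmatter(doc: string): { frontmatter: string; body: string } {
  const match = doc.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { frontmatter: "", body: doc };
  return { frontmatter: match[1], body: match[2] };
}

const page = "---\ntitle: RLHF\ntype: concept\n---\nContent with [[wikilinks]].";
const { frontmatter, body } = splitFrontmatter(page);
console.log(frontmatter); // "title: RLHF\ntype: concept"
console.log(body);        // "Content with [[wikilinks]]."
```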

Architecture

brainmaxxing/
├── src/
│   ├── cli.ts              # CLI: setup, web, mcp
│   ├── types.ts            # WikiPage, Relationship, RawFileStatus, OutputMeta
│   ├── schemas.ts          # Zod validation (relationships, frontmatter, config)
│   ├── mcp/server.ts       # MCP server (14 data-only tools)
│   ├── wiki/               # Reader, writer, graph analysis, path resolution
│   ├── raw/                # Manifest tracking, file ingestion
│   ├── output/             # Writer, promotions tracking
│   ├── search/             # FTS5 (default) + QMD hybrid (opt-in)
│   ├── config/             # Config loader + setup wizard with migration
│   └── web/                # Vite + Express dashboard (force graph, page browser)
├── skills/                 # Claude Code plugin skills (6 workflows)
├── AGENTS.md               # Agent protocol for any MCP client
└── package.json            # Claude Code plugin manifest

Tech: TypeScript, Zod 4, MCP SDK, SQLite FTS5, gray-matter, vitest. Optional: @tobilu/qmd for hybrid vector search.


Credits

Inspired by Karpathy's LLM Wiki idea.

License

MIT