brainmaxxing
v0.2.2
Obsidian-native LLM wiki with tiered search and visual dashboard, usable by any agent via MCP
brainmaxxing
a three-tier knowledge pipeline for AI agents
Raw sources go in, compiled wiki pages come out, generated artifacts build on top. The knowledge doesn't decay — it compounds.
Install
npm install -g brainmaxxing
brainmaxxing setup
The setup wizard walks you through 4 steps:
- Knowledge base mode — global wiki, project-specific (.wiki/ per repo), or both (default)
- Storage location — auto-detects Obsidian vaults, or pick a custom path. Defaults to ~/.brainmaxxing/wiki
- Search engine — fts (BM25, no download, default) or hybrid (BM25 + ~2GB vector model)
- Register MCP server with Claude Code — say yes. Writes brainmaxxing into ~/.claude.json
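Step 4 registers the server in Claude Code's config. A minimal sketch of what that entry might look like, assuming the standard stdio MCP server shape (the exact command and args here are assumptions; the setup wizard writes this for you):

```typescript
// Hypothetical shape of the brainmaxxing entry in ~/.claude.json.
// Field names follow Claude Code's usual stdio MCP server config;
// the args are an assumption, not copied from the project.
const claudeConfig = {
  mcpServers: {
    brainmaxxing: {
      command: "brainmaxxing",
      args: ["mcp"], // the CLI exposes setup, web, and mcp subcommands
    },
  },
};

console.log(JSON.stringify(claudeConfig.mcpServers.brainmaxxing));
```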
Install the Claude Code plugin (optional)
Adds /wiki:* slash commands on top of the MCP tools:
/plugin marketplace add ShahriarBijoy/brainmaxxing
/plugin install brainmaxxing@brainmaxxing
/reload-plugins
Prefer the UI? Run /plugin and use the Marketplaces and Discover tabs.
Verify
/mcp
You should see brainmaxxing listed as connected. If not, restart Claude Code.
Quick start
Just talk to Claude — it calls the MCP tools for you:
You: dump ~/papers/attention-is-all-you-need.pdf into my wiki
You: cook a page about the transformer architecture from that paper
You: lookup "attention mechanism"
You: what does my wiki currently contain?
You: check the health of my knowledge graph
Or use the plugin's workflow commands:
/wiki:setup # interactive walkthrough
/wiki:research RLHF # full pipeline: web search → raw → wiki → output
/wiki:ingest # process anything sitting in raw/
/wiki:query "what do I know about reward hacking?"
/wiki:lint # graph health check with fix suggestions
/wiki:compile # synthesize wiki pages into an output document
Dashboard
Auto-starts with the MCP server at http://localhost:6969 (force graph, page browser, graph health). To launch standalone:
brainmaxxing web
MCP tools
14 tools across three tiers. All data-only — no internal LLM calls, no API key needed. The host agent does its own reasoning.
Wiki tier
| Tool | What it does |
|------|-------------|
| recall | Read a wiki page by slug |
| cook | Create/update a page with typed relationships |
| lookup | Search pages (BM25 via FTS5, or hybrid BM25 + qmd vector ranking) |
| shelf | List all pages |
| inventory | Full manifest with metadata and relationships |
| vibes | Graph health check: orphans, contradictions, broken links |
| yap | Append to the operation log |
| tidy | Regenerate the index page |
| status | System health across all three tiers |
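From the host's side, these are ordinary MCP tool calls. A minimal sketch of the tools/call payload an agent might send for lookup ("tools/call" is the standard MCP method name; the query and scope argument names are assumptions inferred from this README):

```typescript
// Sketch of an MCP tools/call request for the lookup tool.
// The argument names (query, scope) are assumptions.
type ToolCall = {
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

function makeLookupCall(query: string, scope = "auto"): ToolCall {
  return {
    method: "tools/call",
    params: { name: "lookup", arguments: { query, scope } },
  };
}

const call = makeLookupCall("attention mechanism");
```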
Raw tier
| Tool | What it does |
|------|-------------|
| dump | Add a file to raw/ for later processing |
| stash | List raw files with processing status |
Output tier
| Tool | What it does |
|------|-------------|
| drop | Write a generated artifact to output/ |
| receipts | List outputs with citation and promotion status |
| glow_up | Promote an output to a full wiki page |
All tools accept scope: "auto" (default), "global", "project", or "all".
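One plausible reading of scope: "auto" (use the project wiki when one exists, otherwise the global one) can be sketched as follows; the real resolution logic lives in src/wiki/ and may differ:

```typescript
// Hedged sketch of scope resolution. Paths and behavior are assumptions:
// "auto" prefers a project .wiki/ if present, else the global wiki.
import { existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

type Scope = "auto" | "global" | "project" | "all";

function resolveWikiPaths(cwd: string, scope: Scope): string[] {
  const projectWiki = join(cwd, ".wiki");
  const globalWiki = join(homedir(), ".brainmaxxing", "wiki");
  switch (scope) {
    case "global": return [globalWiki];
    case "project": return [projectWiki];
    case "all": return [projectWiki, globalWiki];
    default: return existsSync(projectWiki) ? [projectWiki] : [globalWiki];
  }
}
```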
Plugin skills
| Skill | Workflow |
|-------|---------|
| /wiki:setup | Interactive setup wizard |
| /wiki:research | Full pipeline: web search → raw/ → wiki/ → output/ |
| /wiki:ingest | Process raw/ files into wiki/ pages |
| /wiki:query | Answer questions from wiki/, save to output/ |
| /wiki:lint | Graph health check with fix suggestions |
| /wiki:compile | Synthesize wiki/ pages into output/ documents |
How it works
Three folders:
your-wiki/
raw/ ← Dump anything here. PDFs, articles, notes.
wiki/ ← Compiled knowledge. Agent-organized, interconnected.
output/ ← Generated artifacts. Reports, answers, analyses.
The agent reads from raw/, writes structured pages to wiki/ with typed semantic relationships, and generates research outputs that cite wiki pages. When an output is good enough, it gets promoted back into the wiki with glow_up.
Research orchestration
You: "research RLHF"
│
▼
┌─────────────────────┐
│ Break into facets │ "theory", "implementations", "limitations"
└────────┬────────────┘
│
┌────┼────┐
▼ ▼ ▼
Agent Agent Agent ← each researches one facet independently
│ │ │
└────┼────┘
▼
┌─────────────────────┐
│ Merge findings │ deduplicate, find cross-connections, flag contradictions
└────────┬────────────┘
▼
┌─────────────────────┐
│ Cook wiki pages │ one page per concept, typed relationships
└────────┬────────────┘
▼
┌─────────────────────┐
│ Drop synthesis │ output report citing all pages created
└─────────────────────┘
Subagents only gather information — they never write to the wiki. The orchestrator makes all editorial calls. For narrow topics, it skips the fan-out and researches linearly.
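The fan-out and merge steps above can be sketched as below. The gather callback stands in for one subagent researching one facet; the Finding type and the dedupe-by-claim merge are hypothetical simplifications, not the project's actual code:

```typescript
// Sketch of the fan-out/merge shape: gather per facet, then deduplicate.
type Finding = { facet: string; claim: string };

function research(facets: string[], gather: (facet: string) => Finding[]): Finding[] {
  const perFacet = facets.map(gather);        // fan out: one gatherer per facet
  const merged = new Map<string, Finding>();  // merge: deduplicate by claim text
  for (const f of perFacet.flat()) merged.set(f.claim, f);
  return [...merged.values()];                // orchestrator edits from here
}

const findings = research(["theory", "limitations"], (facet) => [
  { facet, claim: "RLHF fine-tunes a policy against a learned reward model" },
]);
// identical claims surfaced by different facets collapse to one finding
```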
Promotion
raw/ wiki/ output/
messy sources → curated pages → generated artifacts
│
worth keeping?
│
┌─────┴─────┐
no yes
│
glow_up
│
▼
back into wiki/
as a synthesis page
glow_up promotes an output to a permanent wiki page, automatically creating summarizes relationships to every page the output cited.
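The relationship bookkeeping this implies can be sketched as below: each page the output cited becomes a summarizes edge on the promoted page. The helper and its field names are hypothetical, not the project's actual implementation:

```typescript
// Hypothetical sketch: derive summarizes relationships from cited slugs.
type Relationship = { slug: string; type: "summarizes"; reason: string };

function promotionRelationships(citedSlugs: string[], outputTitle: string): Relationship[] {
  return citedSlugs.map((slug) => ({
    slug,
    type: "summarizes" as const,
    // every relationship needs a reason string explaining why the edge exists
    reason: `Promoted output "${outputTitle}" synthesizes this page`,
  }));
}

const rels = promotionRelationships(["rlhf", "reward-hacking"], "RLHF survey");
```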
Relationship types
| Type | Meaning |
|------|---------|
| supports | Evidence that strengthens the target |
| contradicts | Evidence that challenges the target |
| extends | Builds on the target concept |
| supersedes | Replaces the target |
| implements | Practical application of a concept |
| cites | References as a source |
| summarizes | Condensed version of the target |
Every relationship requires a reason string — not just "A contradicts B" but why. Those reasons are indexed and searchable.
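A dependency-free sketch of the validation this implies (the project's actual schemas live in src/schemas.ts and use Zod; the exact shape here is an assumption):

```typescript
// Sketch of relationship validation: a known type plus a non-empty reason.
const RELATIONSHIP_TYPES = [
  "supports", "contradicts", "extends", "supersedes",
  "implements", "cites", "summarizes",
] as const;

function isValidRelationship(r: { slug: string; type: string; reason: string }): boolean {
  // not just "A contradicts B": the reason string must say why
  return (
    (RELATIONSHIP_TYPES as readonly string[]).includes(r.type) &&
    r.reason.trim().length > 0
  );
}
```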
Configuration
Created by brainmaxxing setup at ~/.brainmaxxing/config.yaml:
mode: both # global | project | both
globalWikiPath: "D:/obsidian/my-vault/brainmaxxing"
searchTier: fts # fts (default) | hybrid (BM25 + vectors)
dashboardPort: 6969 # auto-starts with MCP server
Wiki page schema
---
title: "Reinforcement Learning from Human Feedback"
type: concept # summary | entity | concept | comparison | synthesis
sources: ["https://arxiv.org/abs/2203.02155"]
relationships:
- slug: reward-hacking
type: contradicts
reason: "Reward hacking demonstrates RLHF can be gamed"
- slug: transformer-architecture
type: extends
reason: "RLHF builds on pretrained transformer models"
created: "2026-04-06T00:00:00.000Z"
updated: "2026-04-06T00:00:00.000Z"
confidence: high # high | medium | low
status: active # active | stale | superseded
---
Content with [[wikilinks]] to other pages.
Architecture
brainmaxxing/
├── src/
│ ├── cli.ts # CLI: setup, web, mcp
│ ├── types.ts # WikiPage, Relationship, RawFileStatus, OutputMeta
│ ├── schemas.ts # Zod validation (relationships, frontmatter, config)
│ ├── mcp/server.ts # MCP server (14 data-only tools)
│ ├── wiki/ # Reader, writer, graph analysis, path resolution
│ ├── raw/ # Manifest tracking, file ingestion
│ ├── output/ # Writer, promotions tracking
│ ├── search/ # FTS5 (default) + QMD hybrid (opt-in)
│ ├── config/ # Config loader + setup wizard with migration
│ └── web/ # Vite + Express dashboard (force graph, page browser)
├── skills/ # Claude Code plugin skills (6 workflows)
├── AGENTS.md # Agent protocol for any MCP client
└── package.json # Claude Code plugin manifest
Tech: TypeScript, Zod 4, MCP SDK, SQLite FTS5, gray-matter, vitest. Optional: @tobilu/qmd for hybrid vector search.
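Wiki pages are plain Markdown with YAML frontmatter, which the project parses with gray-matter. As a dependency-free illustration of the split gray-matter performs (real code should use gray-matter itself), a minimal sketch:

```typescript
// Dependency-free sketch of the frontmatter/content split gray-matter does.
function splitFrontmatter(page: string): { frontmatter: string; content: string } {
  const match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(page);
  if (!match) return { frontmatter: "", content: page };
  return { frontmatter: match[1], content: match[2] };
}

const page = "---\ntitle: RLHF\n---\nContent with [[wikilinks]].";
const { frontmatter, content } = splitFrontmatter(page);
```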
Credits
Inspired by Karpathy's LLM Wiki idea.
License
MIT
