@titan-design/brain

v0.3.0

Developer second brain with hybrid RAG search

Brain

Personal knowledge base and memory engine with hybrid RAG search (BM25 + vector embeddings), LLM-powered memory extraction, and temporal intelligence.

Install

npm install @titan-design/brain

Requires Node >= 22.

Quick Start

brain init                    # Initialize workspace
brain index                   # Index notes
brain search "query"          # Hybrid search
brain quick "thought"         # Capture to inbox
brain extract --all           # Extract memories (requires Ollama)

Commands

| Command | Description |
|---------|-------------|
| brain init | Initialize workspace and database |
| brain index | Index all markdown notes |
| brain search "query" | Hybrid BM25 + vector search |
| brain add <file> | Add a note from file or stdin |
| brain quick "text" | Zero-friction capture to inbox |
| brain inbox | View/manage inbox items |
| brain ingest | Bulk-import files to inbox |
| brain feed | Manage RSS feed subscriptions |
| brain extract | Extract memories from notes (Ollama) |
| brain memories | List, history, and stats for memories |
| brain context <id> | Show context for a note (relations + memories) |
| brain profile | Generate agent context profile |
| brain tidy | LLM-powered note cleanup suggestions |
| brain doctor | System health checks (--fix for auto-repair) |
| brain install-hooks | Set up launchd/systemd scheduled processing |
| brain status | Database stats |
| brain stale | Notes needing review |
| brain graph <id> | Show note relations |
| brain template <type> | Output frontmatter template |
| brain archive | Archive expired notes |
| brain config | View/set configuration |

How It Works

Brain indexes markdown files with YAML frontmatter into a SQLite database. It combines three layers:

Search — Hybrid BM25 full-text search (FTS5) + vector similarity (sqlite-vec) with reciprocal rank fusion. Optional cross-encoder reranking via --rerank.
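
Reciprocal rank fusion can be sketched in a few lines of TypeScript. This is an illustrative implementation of the standard RRF formula (each result contributes 1/(k + rank) per list it appears in), not the package's actual code; the name rrfFuse and the conventional constant k = 60 are assumptions:

```typescript
// Fuse ranked result lists (e.g. BM25 order and vector order) into
// one ranking. Items near the top of either list score highest.
type Ranked = { id: string; score: number };

function rrfFuse(lists: string[][], k = 60): Ranked[] {
  const scores = new Map<string, number>();
  for (const list of lists) {
    list.forEach((id, rank) => {
      // rank is 0-based, so the top item contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```

A document ranked first by both BM25 and the vector index beats one ranked first by only one of them, which is the intuition behind fusing the two layers.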

Memory extraction — Ollama LLM extracts discrete facts from notes, then reconciles them against existing memories (ADD/UPDATE/DELETE). Memories are versioned with parent chains, temporal validity (valid_at/invalid_at), and automatic forgetting (forget_after).

Capture pipeline — Zero-friction ingestion from CLI quick capture, file import, and RSS feed subscriptions. Items flow through an inbox queue before being indexed.

Embedding Backends

  • Local — @huggingface/transformers (default, no external dependencies)
  • Ollama — local Ollama server
  • Remote — configurable API endpoint
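
One way the three backends could sit behind a single interface is sketched below. Embedder and createEmbedder are hypothetical names (the real adapter API lives in src/adapters/), and the stub bodies stand in for an in-process model, an Ollama HTTP call, and a remote API call respectively:

```typescript
// A single interface lets the indexer and search code stay
// backend-agnostic: they only ever call embed().
interface Embedder {
  embed(texts: string[]): Promise<number[][]>;
}

type Backend = "local" | "ollama" | "remote";

function createEmbedder(backend: Backend): Embedder {
  switch (backend) {
    case "local":
      // would run a model in-process via @huggingface/transformers
      return { embed: async (texts) => texts.map(() => [0]) };
    case "ollama":
      // would call a local Ollama server's embeddings endpoint
      return { embed: async (texts) => texts.map(() => [0]) };
    case "remote":
      // would call a configurable HTTP API
      return { embed: async (texts) => texts.map(() => [0]) };
  }
}
```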

Note Tiers

  • slow — permanent knowledge (decisions, patterns, research) with review intervals
  • fast — ephemeral (meetings, session logs) with expiry dates
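
A note's tier would be declared in its YAML frontmatter. The fields below are purely illustrative; run brain template <type> for the authoritative template:

```yaml
# Illustrative frontmatter for a slow-tier (permanent) note.
title: Choosing SQLite over Postgres
tier: slow
tags: [decision, storage]
created: 2026-01-15
```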

Architecture

src/
  cli.ts                — Entry point, Commander program
  types.ts              — All TypeScript interfaces
  utils.ts              — Shared utilities
  commands/             — CLI commands (22 commands)
  services/
    brain-db.ts         — Database facade (delegates to repos)
    brain-service.ts    — Resource management (withBrain/withDb)
    repos/
      note-repo.ts      — Notes, files, chunks, relations, FTS, search queries
      memory-repo.ts    — Memory entries, history, vectors
      capture-repo.ts   — Inbox items, feed records
    config.ts           — Configuration loading
    file-scanner.ts     — File change detection
    markdown-parser.ts  — Frontmatter + heading-aware chunking
    search.ts           — Hybrid search orchestration
    graph.ts            — Note relation traversal
    indexing.ts         — Index pipeline
    memory-extractor.ts — LLM fact extraction and reconciliation
    ollama.ts           — Ollama client and health checks
    health.ts           — System health check service
    reranker.ts         — Cross-encoder reranking
  adapters/             — Embedder backends (local/ollama/remote)

Storage: SQLite via better-sqlite3 with FTS5 and sqlite-vec
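
The heading-aware chunking done by markdown-parser.ts can be sketched as a split on markdown headings. This minimal version ignores frontmatter and chunk-size limits, which the real parser also handles:

```typescript
// Split a markdown body at headings so no chunk spans two sections.
// Each chunk keeps its heading, which helps search results stay
// self-describing.
interface Chunk {
  heading: string;
  text: string;
}

function chunkByHeading(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { heading: "", text: "" };
  for (const line of markdown.split("\n")) {
    if (/^#{1,6}\s/.test(line)) {
      // flush the previous section (skip the empty initial chunk)
      if (current.text.trim() || current.heading) chunks.push(current);
      current = { heading: line.replace(/^#+\s*/, ""), text: "" };
    } else {
      current.text += line + "\n";
    }
  }
  if (current.text.trim() || current.heading) chunks.push(current);
  return chunks;
}
```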

Development

npm install
npm test              # Vitest (380 tests)
npm run build         # tsup → dist/cli.js
npm run typecheck     # tsc --noEmit
npm run lint          # ESLint
npx tsx src/cli.ts    # Run CLI in dev

License

MIT