@andespindola/brainlink

v0.1.0-beta.1

Local-first knowledge memory for agents with Markdown, backlinks, indexing and context retrieval.

Brainlink

Local-first memory and knowledge graph for AI agents.

Brainlink turns a folder of Markdown files into a searchable, link-aware memory layer that agents can use before answering, planning, coding, documenting or handing work to another agent.

It is inspired by Obsidian-style knowledge bases: plain Markdown, [[wiki links]], backlinks, tags and graph navigation. The difference is that Brainlink is built for automation first: CLI, JSON output, local HTTP API and a graph frontend.

Purpose

This repository exists to give agents a durable project memory that lives outside the model context window.

Without Brainlink, an agent usually depends on:

  • the current prompt
  • the current chat history
  • files it happens to inspect in this run
  • short-lived assumptions made during a task

With Brainlink, an agent can persist and retrieve:

  • architecture decisions
  • coding conventions
  • user preferences
  • operational runbooks
  • previous investigation summaries
  • domain concepts
  • unresolved questions
  • handoff notes for other agents

The goal is not to replace the model's context window. The goal is to make the model's context window smarter by filling it with the most relevant memory before the agent acts.

How Brainlink Improves Agents

Brainlink improves agent behavior by giving them a repeatable memory workflow:

  • Less repeated discovery: agents can retrieve previous decisions instead of rediscovering the same context.
  • Better continuity: long-running projects keep memory across sessions and across different agents.
  • Grounded answers: context packages include source paths, titles, tags and relevant excerpts.
  • Safer collaboration: agent namespaces keep the private context of different agents from being mixed accidentally.
  • Inspectable memory: humans can open the vault as plain Markdown or in Obsidian-like tools.
  • Automation-ready output: CLI commands support --json, so agents and external tool servers can consume stable machine-readable responses.

Why

LLMs do not have infinite context. Brainlink gives agents an external memory layer:

  1. Durable knowledge is written as Markdown.
  2. Notes are connected with [[wiki links]].
  3. Concepts are classified with #tags.
  4. Brainlink builds a local SQLite index with FTS records and local embeddings.
  5. Agents query the index before responding.
  6. Brainlink returns compact, source-backed context.

Markdown is the source of truth. .brainlink/brainlink.db is only a rebuildable index.

Features

  • Local-first Markdown vault.
  • Obsidian-compatible [[wiki links]] and #tags.
  • Weighted graph edges so agents can rank relationship importance and priority.
  • Backlinks, broken-link reports, orphan detection and validation.
  • Full-text, semantic and hybrid retrieval modes.
  • SQLite-backed semantic candidate buckets for larger vaults.
  • Agent namespaces under agents/<agent-id>/.
  • S3-compatible bucket vaults through s3://bucket/prefix URIs.
  • CLI with machine-readable --json output.
  • Short CLI alias: blink.
  • Built-in MCP stdio server for agent tool integration.
  • Local HTTP API.
  • Realtime graph UI with agent selector and colored knowledge groups.

Install

Requirements:

  • Node.js >=22.5.0
  • npm

npm install -g @andespindola/brainlink@latest

The package installs two equivalent commands:

brainlink --help
blink --help

Use brainlink when clarity matters. Use blink for faster daily terminal usage.

The npm package page may show npm i @andespindola/brainlink. That installs Brainlink as a project dependency. Use -g when you want the terminal commands available globally, or run it without a global install:

npx @andespindola/brainlink@latest --help

Agent Autonomous Setup

This section is written for agents. If you are an AI agent operating in a user workspace, use this workflow to install and use Brainlink without requiring the user to run commands manually.

1. Check Whether Brainlink Is Installed

command -v blink || command -v brainlink

If neither command exists, install it:

npm install -g @andespindola/brainlink@latest

Then verify:

blink --help

2. Use The Default Vault Or Choose A Custom Vault

By default, Brainlink stores memory in:

$HOME/.brainlink/vault

Use the default vault when memory should span many projects:

blink init

Choose a project-local vault only when memory belongs to one repository:

BLINK_VAULT=".brainlink-vault"
blink init "$BLINK_VAULT"

You can also set BRAINLINK_HOME to move Brainlink's default home directory:

export BRAINLINK_HOME="/path/to/brainlink-home"

3. Choose An Agent Namespace

Use a stable namespace for yourself:

BLINK_AGENT="coding-agent"

Use shared only for knowledge that should be visible to every workflow.

4. Read Memory Before Acting

Before answering or changing code, retrieve context:

blink context "What should I know before working on this task?" \
  --agent "$BLINK_AGENT" \
  --json

If the context is weak, inspect raw search results:

blink search "architecture conventions tests release" \
  --agent "$BLINK_AGENT" \
  --mode hybrid \
  --limit 10 \
  --json

5. Write Durable Memory After Learning Something Useful

Only store knowledge that is likely to matter later:

blink add "Testing Policy" \
  --agent "$BLINK_AGENT" \
  --content "Run npm run check before final delivery. Related: [[Release Checklist]]. #testing #process"

Brainlink does not infer durable graph relationships from generated context. A context result is only a read package for the model. To create a real link in the knowledge graph, the agent must write Markdown that contains an explicit [[Note Title]] wiki link.

Writes with blink add reindex the vault automatically by default. This can be disabled with --no-auto-index and controlled globally with autoIndexOnWrite in brainlink.config.json.

When adding memory, follow this contract:

  • Link the new note to at least one existing note when there is a related concept.
  • Use the exact target note title inside [[...]].
  • Add retrieval tags such as #architecture, #decision, #runbook or #preference.
  • Do not leave isolated notes unless they are intentionally root concepts.

If you disable auto-index, run blink index after batched writes.
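
The mechanical parts of this contract can be sketched as a pre-write check. The helper below is not part of Brainlink; it is a minimal illustration of the two rules an agent can verify (a [[wiki link]] and at least one retrieval tag) before calling blink add:

```python
def contract_problems(content: str) -> list[str]:
    """Return contract violations for a candidate note body (illustrative)."""
    problems = []
    # Rule: link the new note to at least one existing note with [[...]].
    if "[[" not in content or "]]" not in content:
        problems.append("link the note to at least one existing note with [[...]]")
    # Rule: add retrieval tags such as #architecture or #decision.
    if not any(tok.startswith("#") and len(tok) > 1 for tok in content.split()):
        problems.append("add retrieval tags such as #architecture or #decision")
    return problems
```

A note that follows the contract, such as "Run npm run check before final delivery. Related: [[Release Checklist]]. #testing #process", passes with no problems reported.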

6. Validate Memory Health

blink validate --agent "$BLINK_AGENT" --json
blink broken-links --agent "$BLINK_AGENT" --json
blink orphans --agent "$BLINK_AGENT" --json

Agent Operating Loop

Use this loop during real work:

  1. Identify the task and choose BLINK_AGENT.
  2. Run blink context "<task>" --agent "$BLINK_AGENT" --json.
  3. Use returned sources as project memory.
  4. Perform the task.
  5. Save only durable learnings with blink add, including [[wiki links]] to related notes.
  6. Run blink index only when auto-index was disabled during a batch.
  7. Validate with blink validate, blink broken-links and blink orphans when graph links matter.

Do not store secrets, credentials, private keys, access tokens or transient chat noise.

Quick Start

blink init ./vault

blink add "Architecture" \
  --vault ./vault \
  --content "Brainlink keeps Markdown as source of truth. #architecture"

blink add "Auth Decision" \
  --vault ./vault \
  --content "We chose JWT for API clients. [[Architecture]] #auth #jwt"

blink search "jwt auth" --vault ./vault

blink context "how does auth work?" --vault ./vault

blink server --vault ./vault --watch

Open the graph UI:

http://127.0.0.1:4321

When --vault is omitted, commands use the default vault at $HOME/.brainlink/vault. Pass --vault or configure vault in brainlink.config.json when you want a custom project-local vault.

Bucket Vaults

Brainlink can use an S3-compatible bucket as the Markdown source of truth:

export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."

blink add "Architecture" \
  --vault "s3://my-memory-bucket/brainlink" \
  --content "Bucket Markdown is the source of truth. #architecture"

blink index --vault "s3://my-memory-bucket/brainlink"
blink context "architecture" --vault "s3://my-memory-bucket/brainlink"

For Cloudflare R2, MinIO or another S3-compatible endpoint:

export BRAINLINK_S3_ENDPOINT="https://<account-id>.r2.cloudflarestorage.com"
export BRAINLINK_S3_FORCE_PATH_STYLE=1

Bucket vaults mirror Markdown into a local cache under $BRAINLINK_HOME/bucket-cache. The bucket remains canonical; the local .brainlink/brainlink.db stays a disposable index. Run blink index after remote bucket changes before relying on search, context, graph or validation commands. Watch mode is only supported for local filesystem vaults.
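
The s3://bucket/prefix URI form splits naturally into a bucket name and a key prefix. A minimal sketch of that split using the standard library (Brainlink's internal URI handling may differ):

```python
from urllib.parse import urlparse

def parse_bucket_vault(uri: str) -> tuple[str, str]:
    """Split an s3://bucket/prefix vault URI into (bucket, prefix)."""
    parsed = urlparse(uri)
    if parsed.scheme != "s3" or not parsed.netloc:
        raise ValueError(f"not an s3:// vault URI: {uri}")
    # The netloc is the bucket; the path (minus its leading slash) is the prefix.
    return parsed.netloc, parsed.path.lstrip("/")
```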

Core Model

vault/
  agents/
    shared/
      architecture.md
    coding-agent/
      implementation-policy.md
    research-agent/
      source-review-policy.md
  .brainlink/
    brainlink.db

Permanent data:

  • Markdown notes
  • optional Git history around the vault

Rebuildable data:

  • .brainlink/brainlink.db
  • full-text records
  • local embedding vectors
  • local embedding buckets
  • chunks
  • resolved links
  • backlinks

Agent Namespaces

Brainlink separates memory by agent so multiple agents can use the same CLI without mixing private context.

Use shared for project-wide memory:

blink add "Project Rules" \
  --vault ./vault \
  --agent shared \
  --content "All agents should run tests before final answers. #process"

Use a dedicated namespace for private agent memory:

blink add "TypeScript Policy" \
  --vault ./vault \
  --agent coding-agent \
  --content "Prefer explicit types and functional core boundaries. [[Project Rules]] #typescript"

List indexed namespaces:

blink agents --vault ./vault
blink agents --vault ./vault --json

Query a single namespace:

blink search "typescript" --vault ./vault --agent coding-agent --json
blink search "authentication token policy" --vault ./vault --agent coding-agent --mode semantic --json
blink context "how should I change this module?" --vault ./vault --agent coding-agent
blink graph --vault ./vault --agent coding-agent --json

Link resolution is scoped:

  1. same agent namespace
  2. shared
  3. unresolved link

This allows coding-agent and research-agent to both have a note named Architecture without contaminating each other's private memory.

MCP Server Integration

Brainlink ships a stdio MCP server with the npm package:

brainlink-mcp

Example MCP client configuration:

{
  "mcpServers": {
    "brainlink": {
      "command": "brainlink-mcp"
    }
  }
}

Available tools:

  • brainlink_context: read indexed context for a task or question.
  • brainlink_search: search indexed notes.
  • brainlink_add_note: write durable Markdown memory and reindex.
  • brainlink_add_file: ingest a local file as a note and reindex.
  • brainlink_index: rebuild the vault index.
  • brainlink_validate: validate broken links and orphan notes.
  • brainlink_graph: read indexed graph nodes and weighted links.
  • brainlink_broken_links: list unresolved wiki links.
  • brainlink_orphans: list disconnected notes.

The same linking rule applies through MCP: brainlink_context is read-only, and real graph links require Markdown notes with explicit [[wiki links]]. brainlink_add_note and brainlink_add_file reindex by default and include the index result when enabled.

Agents can raise the importance of a relationship by putting priority markers on the same line as a wiki link:

- [ ] Review [[Architecture]] priority: high
Related: [[Incident Runbook]] #critical

Indexed edges expose weight and priority (low, normal, high, critical) through CLI JSON, HTTP graph APIs and brainlink_graph.
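
A rough model of how such markers could map to one of the four priority levels (illustrative only; Brainlink's exact marker grammar is internal):

```python
import re

# Priority levels exposed by the CLI JSON and graph APIs.
PRIORITIES = ("low", "normal", "high", "critical")

def line_priority(line: str) -> str:
    """Guess the priority of a link-bearing line from its markers (a sketch)."""
    m = re.search(r"priority:\s*(low|normal|high|critical)", line, re.IGNORECASE)
    if m:
        return m.group(1).lower()
    if "#critical" in line:
        return "critical"
    return "normal"
```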

Graph UI

Start the local frontend:

blink server --host 127.0.0.1 --port 4321 --watch

By default, the server uses $HOME/.brainlink/vault. Pass --vault ./vault only when you want to inspect a custom vault.

The graph UI shows:

  • notes as nodes
  • [[wiki links]] as weighted edges
  • backlinks and outgoing links
  • full Markdown content for the selected note
  • neutral graph nodes with segment/group metadata
  • agent selector for isolated views
  • realtime refresh while --watch is enabled

The server indexes before starting by default. Use --no-index to skip that step:

blink server --vault ./vault --no-index

HTTP API

The HTTP API is read-only and exists only to power the graph UI and local inspection workflows.

The server always refuses non-loopback hosts. Brainlink HTTP only runs on localhost.

Routes:

  • GET /api/agents
  • GET /api/graph
  • GET /api/graph-layout
  • GET /api/search?q=<query>&limit=10&mode=hybrid
  • GET /api/context?q=<query>&limit=12&tokens=2000&mode=hybrid
  • GET /api/links
  • GET /api/backlinks?title=<title>
  • GET /api/stats
  • GET /api/broken-links
  • GET /api/orphans
  • GET /api/validate

Read routes accept agent=<agent-id>:

/api/graph-layout?agent=coding-agent
/api/search?q=typescript&agent=coding-agent&mode=hybrid
/api/context?q=module-boundaries&agent=coding-agent&mode=semantic
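
Since the API is plain HTTP on loopback, building request URLs is straightforward. A small helper, assuming the default host and port shown in the Quick Start:

```python
from urllib.parse import urlencode

def api_url(route: str, **params: str) -> str:
    """Build a loopback URL for a Brainlink read route.
    Parameter names (q, agent, mode, limit, ...) come from the routes above."""
    base = "http://127.0.0.1:4321"  # default host/port; adjust if the server differs
    query = urlencode(params)
    return f"{base}{route}?{query}" if query else f"{base}{route}"
```

For example, api_url("/api/search", q="typescript", agent="coding-agent", mode="hybrid") mirrors the scoped query shown above.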

CLI Reference

Every command works with either brainlink or blink.

init

blink init
blink init ./vault

Initializes vault metadata. Without an argument, Brainlink initializes the default vault at $HOME/.brainlink/vault.

add

blink add "Note Title" --agent coding-agent --content "Markdown content"
blink add "Note Title" --vault ./vault --agent coding-agent --content "Markdown content"
blink add "Note Title" --vault ./vault --content-file ./notes.md
blink add "Note Title" --vault ./vault --content-file ./notes.md --no-auto-index

--content and --content-file are mutually exclusive. Add --no-auto-index when you want to defer reindexing.

Creates a Markdown note under agents/<agent-id>/. Common secret patterns are blocked by default; use --allow-sensitive only for an intentionally protected vault.
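
The kind of screening this implies can be sketched with a few example patterns. These patterns are purely illustrative; Brainlink's built-in list is not documented here:

```python
import re

# Example secret shapes (illustrative; not Brainlink's actual pattern list).
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                 # AWS access key id shape
    re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),       # GitHub token shape
]

def looks_sensitive(content: str) -> bool:
    """Return True when a note body matches a known secret pattern."""
    return any(p.search(content) for p in SECRET_PATTERNS)
```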

index

blink index
blink index --vault ./vault

Rebuilds the local index from Markdown files.

agents

blink agents --vault ./vault
blink agents --vault ./vault --json

Lists indexed agent namespaces.

search

blink search "query" --vault ./vault --limit 10
blink search "query" --vault ./vault --agent coding-agent --json
blink search "query" --vault ./vault --mode semantic --json

Runs retrieval over indexed chunks.

Modes:

  • hybrid: default; combines SQLite FTS with local embedding similarity.
  • fts: exact lexical retrieval through SQLite FTS.
  • semantic: local deterministic embedding similarity only.
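
Conceptually, a hybrid mode blends the two signals into one ranking score. The even weighting below is an assumption for illustration; Brainlink's actual hybrid ranking is internal and may differ:

```python
def hybrid_score(fts_score: float, semantic_score: float, alpha: float = 0.5) -> float:
    """Blend a lexical (FTS) score with an embedding-similarity score.
    alpha=0.5 weights both equally; this weighting is an assumption."""
    return alpha * fts_score + (1 - alpha) * semantic_score
```

A chunk that matches the query words exactly but is semantically distant can still outrank one that is only loosely similar, and vice versa, depending on alpha.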

context

blink context "question" --vault ./vault --limit 12 --tokens 2000
blink context "question" --vault ./vault --agent coding-agent --json
blink context "question" --vault ./vault --agent coding-agent --mode hybrid --json

Builds a compact context package for an agent.

links

blink links --vault ./vault
blink links --vault ./vault --agent coding-agent

Lists indexed wiki links. JSON output includes weight and priority for each relationship.

backlinks

blink backlinks "Architecture" --vault ./vault
blink backlinks "Architecture" --vault ./vault --agent coding-agent

Lists notes pointing to a target title, ordered by strongest relationship first. JSON output includes weight and priority.

graph

blink graph --vault ./vault --json
blink graph --vault ./vault --agent coding-agent --json

Prints indexed graph data. Edges include weight and priority so agents can categorize importance.

stats

blink stats --vault ./vault
blink stats --vault ./vault --agent coding-agent --json

Prints vault metrics.

broken-links

blink broken-links --vault ./vault

Lists unresolved wiki links.

orphans

blink orphans --vault ./vault

Lists notes without incoming or outgoing links.
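
The orphan definition is easy to state over a link graph. A sketch, assuming links are (source title, target title) pairs, which is an assumed shape rather than Brainlink's internal representation:

```python
def find_orphans(titles: set[str], links: list[tuple[str, str]]) -> set[str]:
    """Notes with no incoming and no outgoing resolved links."""
    # Any note that appears on either end of a link is connected.
    connected = {title for pair in links for title in pair}
    return titles - connected
```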

validate

blink validate --vault ./vault

Validates graph health. The command exits non-zero when required checks fail.

doctor

blink doctor --vault ./vault

Runs environment and vault checks.

watch

blink watch --vault ./vault

Watches Markdown files and rebuilds the index when notes change.

server

blink server --watch
blink server --vault ./vault --watch

Starts the local read-only graph UI and HTTP API.

The HTTP server only binds to loopback hosts such as 127.0.0.1, localhost or ::1.

Machine-Readable Output

Commands with finite output support --json:

blink context "question" --vault ./vault --agent coding-agent --json

When running through npm scripts, use --silent to keep stdout clean:

npm run --silent dev -- context "question" --vault ./vault --json

Configuration

Brainlink reads brainlink.config.json or .brainlink.json from the current working directory. If no vault is configured and no --vault flag is passed, Brainlink uses $HOME/.brainlink/vault.

{
  "vault": ".brainlink-vault",
  "host": "127.0.0.1",
  "port": 4321,
  "allowedVaults": [".brainlink-vault"],
  "defaultAgent": "shared",
  "autoIndexOnWrite": true,
  "defaultSearchLimit": 10,
  "defaultContextTokens": 2000,
  "embeddingProvider": "local",
  "defaultSearchMode": "hybrid",
  "chunkSize": 1200
}

defaultAgent is optional. When set, CLI and MCP calls that omit --agent use this value automatically; when unset, no default agent is applied.

autoIndexOnWrite is optional and defaults to true. Set it to false to defer indexing after writes.

Use "embeddingProvider": "none" when you want FTS-only indexing.
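
chunkSize bounds how much text each indexed chunk holds. A naive fixed-width chunker illustrates what the setting governs; Brainlink's real chunker may respect paragraph or heading boundaries:

```python
def chunk_text(text: str, chunk_size: int = 1200) -> list[str]:
    """Split note text into chunks of at most chunk_size characters.
    Fixed-width splitting is a simplification for illustration."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```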

For local security checks, set your Snyk token in the environment:

export SNYK_TOKEN="snyk_..."

For GitHub Actions, add a repository secret SNYK_TOKEN and the CI/publish workflows will consume it automatically during build/test.

Set BRAINLINK_ALLOWED_VAULTS for external wrappers, including MCP servers, so a tool cannot pass arbitrary --vault paths:

export BRAINLINK_ALLOWED_VAULTS="/absolute/path/to/project-vault,/absolute/path/to/team-vault"

Bucket vaults can be allowlisted with the same variable:

export BRAINLINK_ALLOWED_VAULTS="s3://my-memory-bucket/brainlink"

Note Format

Brainlink supports Markdown with optional frontmatter:

---
title: "Auth Decision"
agent: "coding-agent"
type: "decision"
---

# Auth Decision

We chose JWT for API clients.

Related:

- [[Architecture]]
- [[API Gateway]]

#auth #jwt #decision

Supported signals:

  • title frontmatter
  • agent frontmatter
  • first # Heading
  • file name fallback
  • [[Wiki Link]]
  • [[Wiki Link#Section]]
  • [[Wiki Link|Alias]]
  • #tags

Development

npm install
npm run build
npm run test
npm run check

Large vault benchmark:

npm run benchmark:large -- --notes 5000

Use --keep to keep the generated temporary vault for inspection.

Local CLI:

npm run dev -- --help
npm run dev -- server --vault .brainlink-vault --watch

Package smoke test:

npm run pack:smoke

Architecture

src/
  application/      use cases
  cli/              command-line adapter
  domain/           pure knowledge rules
  infrastructure/   filesystem and SQLite adapters

Current Limits

  • Semantic search uses deterministic local embeddings, not a remote model provider.
  • Semantic search uses SQLite embedding buckets to narrow candidates before cosine scoring.
  • embeddingProvider currently supports local and none.
  • Link resolution is title-based inside each agent namespace, with shared as fallback.
  • HTTP API is local and unauthenticated.
  • Watch mode depends on the platform filesystem watcher.

Alpha Scope

0.1.0-alpha.0 is intended to prove the local-first memory loop:

  • Markdown as durable memory.
  • SQLite FTS plus local embeddings and semantic buckets as rebuildable retrieval index.
  • CLI as the primary agent interface.
  • HTTP graph API and frontend as inspection tools.
  • Agent namespaces to avoid context mixing.

The alpha includes local semantic retrieval. Remote embedding providers, remote auth, advanced deduplication and graph editing are future milestones.

Security

Brainlink is local-first by default.

  • Do not expose the HTTP server publicly without authentication.
  • Brainlink HTTP is localhost-only and refuses non-loopback hosts.
  • Brainlink blocks common secret patterns by default when adding notes. Use --allow-sensitive only for intentional, protected vaults.
  • Do not store secrets, credentials, API keys or regulated personal data unless the vault is protected by your own storage controls.
  • Treat .brainlink/brainlink.db as disposable derived data.

See SECURITY.md.

Contributing

See CONTRIBUTING.md.

License

MIT. See LICENSE.

Memory Optimization Loop (1-7)

Use this when your agent work needs consistent memory quality:

  1. Start with blink context "<task>" --agent "$BLINK_AGENT" --json.
  2. Keep notes focused with explicit [[wiki links]] and #tags.
  3. Route agent-specific knowledge to dedicated namespaces under agents/<agent-id>/.
  4. Keep shared as a curated global layer only.
  5. Use targeted queries (--limit, explicit terms, --mode hybrid) before broad scans.
  6. Run the sync command after writing notes:
npm run brainlink:sync -- --vault ./vault --agent "$BLINK_AGENT"
  7. Before final response, keep the returned context sources as the grounding baseline.