@andespindola/brainlink
v0.1.0-beta.1
Local-first knowledge memory for agents with Markdown, backlinks, indexing and context retrieval.
Brainlink
Local-first memory and knowledge graph for AI agents.
Brainlink turns a folder of Markdown files into a searchable, link-aware memory layer that agents can use before answering, planning, coding, documenting or handing work to another agent.
It is inspired by Obsidian-style knowledge bases: plain Markdown, [[wiki links]], backlinks, tags and graph navigation. The difference is that Brainlink is built for automation first: CLI, JSON output, local HTTP API and a graph frontend.
Purpose
This repository exists to give agents a durable project memory that lives outside the model context window.
Without Brainlink, an agent usually depends on:
- the current prompt
- the current chat history
- files it happens to inspect in this run
- short-lived assumptions made during a task
With Brainlink, an agent can persist and retrieve:
- architecture decisions
- coding conventions
- user preferences
- operational runbooks
- previous investigation summaries
- domain concepts
- unresolved questions
- handoff notes for other agents
The goal is not to replace the model's context window. The goal is to make the model's context window smarter by filling it with the most relevant memory before the agent acts.
How Brainlink Improves Agents
Brainlink improves agent behavior by giving them a repeatable memory workflow:
- Less repeated discovery: agents can retrieve previous decisions instead of rediscovering the same context.
- Better continuity: long-running projects keep memory across sessions and across different agents.
- Grounded answers: context packages include source paths, titles, tags and relevant excerpts.
- Safer collaboration: agent namespaces prevent private context from different agents being mixed accidentally.
- Inspectable memory: humans can open the vault as plain Markdown or in Obsidian-like tools.
- Automation-ready output: CLI commands support --json, so agents and external tool servers can consume stable machine-readable responses.
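As an illustration of that automation path, here is how a Python wrapper might turn a `--json` context response into a grounding block for a prompt. The payload shape below (`sources` array with `path`, `title`, `excerpt` fields) is a hypothetical example, not the documented schema; inspect real output before relying on field names.

```python
import json

# Hypothetical example of what `blink context --json` might return;
# real field names may differ -- inspect actual output first.
raw = """
{
  "query": "how does auth work?",
  "sources": [
    {"path": "agents/shared/auth-decision.md",
     "title": "Auth Decision",
     "tags": ["auth", "jwt"],
     "excerpt": "We chose JWT for API clients."}
  ]
}
"""

payload = json.loads(raw)

# Build a compact grounding block an agent can prepend to its prompt.
grounding = "\n".join(
    f"[{s['title']}] ({s['path']}): {s['excerpt']}"
    for s in payload["sources"]
)
print(grounding)
```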
Why
LLMs do not have infinite context. Brainlink gives agents an external memory layer:
- Durable knowledge is written as Markdown.
- Notes are connected with [[wiki links]].
- Concepts are classified with #tags.
- Brainlink builds a local SQLite index with FTS records and local embeddings.
- Agents query the index before responding.
- Brainlink returns compact, source-backed context.
Markdown is the source of truth. .brainlink/brainlink.db is only a rebuildable index.
Features
- Local-first Markdown vault.
- Obsidian-compatible [[wiki links]] and #tags.
- Weighted graph edges so agents can rank relationship importance and priority.
- Backlinks, broken-link reports, orphan detection and validation.
- Full-text, semantic and hybrid retrieval modes.
- SQLite-backed semantic candidate buckets for larger vaults.
- Agent namespaces under agents/<agent-id>/.
- S3-compatible bucket vaults through s3://bucket/prefix URIs.
- CLI with machine-readable --json output.
- Short CLI alias: blink.
- Built-in MCP stdio server for agent tool integration.
- Local HTTP API.
- Realtime graph UI with agent selector and colored knowledge groups.
Install
Requirements:
- Node.js >=22.5.0
- npm

npm install -g @andespindola/brainlink@latest

The package installs two equivalent commands:
brainlink --help
blink --help

Use brainlink when clarity matters. Use blink for faster daily terminal usage.
The npm package page may show npm i @andespindola/brainlink. That installs
Brainlink as a project dependency. Use -g when you want the terminal commands
available globally, or run it without a global install:
npx @andespindola/brainlink@latest --help

Agent Autonomous Setup
This section is written for agents. If you are an AI agent operating in a user workspace, use this workflow to install and use Brainlink without requiring the user to run commands manually.
1. Check Whether Brainlink Is Installed
command -v blink || command -v brainlink

If neither command exists, install it:
npm install -g @andespindola/brainlink@latest

Then verify:
blink --help

2. Use The Default Vault Or Choose A Custom Vault
By default, Brainlink stores memory in:
$HOME/.brainlink/vault

Use the default vault when memory should span many projects:

blink init

Choose a project-local vault only when memory belongs to one repository:
BLINK_VAULT=".brainlink-vault"
blink init "$BLINK_VAULT"

You can also set BRAINLINK_HOME to move Brainlink's default home directory:

export BRAINLINK_HOME="/path/to/brainlink-home"

3. Choose An Agent Namespace
Use a stable namespace for yourself:
BLINK_AGENT="coding-agent"

Use shared only for knowledge that should be visible to every workflow.
4. Read Memory Before Acting
Before answering or changing code, retrieve context:
blink context "What should I know before working on this task?" \
--agent "$BLINK_AGENT" \
--json

If the context is weak, inspect raw search results:
blink search "architecture conventions tests release" \
--agent "$BLINK_AGENT" \
--mode hybrid \
--limit 10 \
--json

5. Write Durable Memory After Learning Something Useful
Only store knowledge that is likely to matter later:
blink add "Testing Policy" \
--agent "$BLINK_AGENT" \
--content "Run npm run check before final delivery. Related: [[Release Checklist]]. #testing #process"

Brainlink does not infer durable graph relationships from generated context. A context result is only a read package for the model. To create a real link in the knowledge graph, the agent must write Markdown that contains an explicit [[Note Title]] wiki link.
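For example, a durable note that actually creates graph edges might look like this (the note titles are illustrative):

```markdown
# Testing Policy

Run npm run check before final delivery.

Related:
- [[Release Checklist]]

#testing #process
```

The explicit [[Release Checklist]] link is what becomes an edge after indexing; tags only classify the note.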
Writes with blink add reindex the vault automatically by default. This can be disabled with --no-auto-index and controlled globally with autoIndexOnWrite in brainlink.config.json.
When adding memory, follow this contract:
- Link the new note to at least one existing note when there is a related concept.
- Use the exact target note title inside [[...]].
- Add retrieval tags such as #architecture, #decision, #runbook or #preference.
- Do not leave isolated notes unless they are intentionally root concepts.
If you disable auto-index, run blink index after batched writes.
6. Validate Memory Health
blink validate --agent "$BLINK_AGENT" --json
blink broken-links --agent "$BLINK_AGENT" --json
blink orphans --agent "$BLINK_AGENT" --json

Agent Operating Loop
Use this loop during real work:
- Identify the task and choose BLINK_AGENT.
- Run blink context "<task>" --agent "$BLINK_AGENT" --json.
- Use returned sources as project memory.
- Perform the task.
- Save only durable learnings with blink add, including [[wiki links]] to related notes.
- Run blink index only when auto-index was disabled during a batch.
- Validate with blink validate, blink broken-links and blink orphans when graph links matter.
Do not store secrets, credentials, private keys, access tokens or transient chat noise.
Quick Start
blink init ./vault
blink add "Architecture" \
--vault ./vault \
--content "Brainlink keeps Markdown as source of truth. #architecture"
blink add "Auth Decision" \
--vault ./vault \
--content "We chose JWT for API clients. [[Architecture]] #auth #jwt"
blink search "jwt auth" --vault ./vault
blink context "how does auth work?" --vault ./vault
blink server --vault ./vault --watch

Open the graph UI:

http://127.0.0.1:4321

When --vault is omitted, commands use the default vault at $HOME/.brainlink/vault. Pass --vault or configure vault in brainlink.config.json when you want a custom project-local vault.
Bucket Vaults
Brainlink can use an S3-compatible bucket as the Markdown source of truth:
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
blink add "Architecture" \
--vault "s3://my-memory-bucket/brainlink" \
--content "Bucket Markdown is the source of truth. #architecture"
blink index --vault "s3://my-memory-bucket/brainlink"
blink context "architecture" --vault "s3://my-memory-bucket/brainlink"

For Cloudflare R2, MinIO or another S3-compatible endpoint:
export BRAINLINK_S3_ENDPOINT="https://<account-id>.r2.cloudflarestorage.com"
export BRAINLINK_S3_FORCE_PATH_STYLE=1

Bucket vaults mirror Markdown into a local cache under $BRAINLINK_HOME/bucket-cache. The bucket remains canonical; the local .brainlink/brainlink.db stays a disposable index. Run blink index after remote bucket changes before relying on search, context, graph or validation commands. Watch mode is only supported for local filesystem vaults.
Core Model
vault/
agents/
shared/
architecture.md
coding-agent/
implementation-policy.md
research-agent/
source-review-policy.md
.brainlink/
brainlink.db

Permanent data:
- Markdown notes
- optional Git history around the vault
Rebuildable data:
.brainlink/brainlink.db:
- full-text records
- local embedding vectors
- local embedding buckets
- chunks
- resolved links
- backlinks
Agent Namespaces
Brainlink separates memory by agent so multiple agents can use the same CLI without mixing private context.
Use shared for project-wide memory:
blink add "Project Rules" \
--vault ./vault \
--agent shared \
--content "All agents should run tests before final answers. #process"

Use a dedicated namespace for private agent memory:
blink add "TypeScript Policy" \
--vault ./vault \
--agent coding-agent \
--content "Prefer explicit types and functional core boundaries. [[Project Rules]] #typescript"

List indexed namespaces:
blink agents --vault ./vault
blink agents --vault ./vault --json

Query a single namespace:
blink search "typescript" --vault ./vault --agent coding-agent --json
blink search "authentication token policy" --vault ./vault --agent coding-agent --mode semantic --json
blink context "how should I change this module?" --vault ./vault --agent coding-agent
blink graph --vault ./vault --agent coding-agent --json

Link resolution is scoped, in this order:
- same agent namespace
- shared
- unresolved link
This allows coding-agent and research-agent to both have a note named Architecture without contaminating each other's private memory.
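As a sketch, a vault where both agents keep a private Architecture note could look like this (exact file naming may differ from how Brainlink slugs titles):

```
vault/
  agents/
    coding-agent/
      architecture.md      # resolves only inside coding-agent
    research-agent/
      architecture.md      # resolves only inside research-agent
    shared/
      project-rules.md     # fallback target visible to every agent
```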
MCP Server Integration
Brainlink ships a stdio MCP server with the npm package:
brainlink-mcp

Example MCP client configuration:
{
"mcpServers": {
"brainlink": {
"command": "brainlink-mcp"
}
}
}

Available tools:
- brainlink_context: read indexed context for a task or question.
- brainlink_search: search indexed notes.
- brainlink_add_note: write durable Markdown memory and reindex.
- brainlink_add_file: ingest a local file as a note and reindex.
- brainlink_index: rebuild the vault index.
- brainlink_validate: validate broken links and orphan notes.
- brainlink_graph: read indexed graph nodes and weighted links.
- brainlink_broken_links: list unresolved wiki links.
- brainlink_orphans: list disconnected notes.
The same linking rule applies through MCP: brainlink_context is read-only, and real graph links require Markdown notes with explicit [[wiki links]]. brainlink_add_note and brainlink_add_file reindex by default and include the index result when enabled.
Agents can raise the importance of a relationship by putting priority markers on the same line as a wiki link:
- [ ] Review [[Architecture]] priority: high
Related: [[Incident Runbook]] #critical

Indexed edges expose weight and priority (low, normal, high, critical) through CLI JSON, HTTP graph APIs and brainlink_graph.
Graph UI
Start the local frontend:
blink server --host 127.0.0.1 --port 4321 --watch

By default, the server uses $HOME/.brainlink/vault. Pass --vault ./vault only when you want to inspect a custom vault.
The graph UI shows:
- notes as nodes
- [[wiki links]] as weighted edges
- backlinks and outgoing links
- full Markdown content for the selected note
- neutral graph nodes with segment/group metadata
- agent selector for isolated views
- realtime refresh while --watch is enabled
The server indexes before starting by default. Use --no-index to skip that step:
blink server --vault ./vault --no-index

HTTP API
The HTTP API is read-only and exists only to power the graph UI and local inspection workflows.
The server always refuses non-loopback hosts. Brainlink HTTP only runs on localhost.
Routes:
- GET /api/agents
- GET /api/graph
- GET /api/graph-layout
- GET /api/search?q=<query>&limit=10&mode=hybrid
- GET /api/context?q=<query>&limit=12&tokens=2000&mode=hybrid
- GET /api/links
- GET /api/backlinks?title=<title>
- GET /api/stats
- GET /api/broken-links
- GET /api/orphans
- GET /api/validate
Read routes accept agent=<agent-id>:
/api/graph-layout?agent=coding-agent
/api/search?q=typescript&agent=coding-agent&mode=hybrid
/api/context?q=module-boundaries&agent=coding-agent&mode=semantic

CLI Reference
Every command works with either brainlink or blink.
init
blink init
blink init ./vault

Initializes vault metadata. Without an argument, Brainlink initializes the default vault at $HOME/.brainlink/vault.
add
blink add "Note Title" --agent coding-agent --content "Markdown content"
blink add "Note Title" --vault ./vault --agent coding-agent --content "Markdown content"
blink add "Note Title" --vault ./vault --content-file ./notes.md
blink add "Note Title" --vault ./vault --content-file ./notes.md --no-auto-index

--content and --content-file are mutually exclusive. Add --no-auto-index when you want to defer reindexing.
Creates a Markdown note under agents/<agent-id>/. Common secret patterns are blocked by default; use --allow-sensitive only for an intentionally protected vault.
index
blink index
blink index --vault ./vault

Rebuilds the local index from Markdown files.
agents
blink agents --vault ./vault
blink agents --vault ./vault --json

Lists indexed agent namespaces.
search
blink search "query" --vault ./vault --limit 10
blink search "query" --vault ./vault --agent coding-agent --json
blink search "query" --vault ./vault --mode semantic --json

Runs retrieval over indexed chunks.
Modes:
- hybrid: default; combines SQLite FTS with local embedding similarity.
- fts: exact lexical retrieval through SQLite FTS.
- semantic: local deterministic embedding similarity only.
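Conceptually, hybrid retrieval blends a lexical rank with an embedding similarity. The sketch below is not Brainlink's actual scoring code; it only illustrates the general idea of combining two normalized signals, with an assumed 50/50 weighting:

```python
def hybrid_score(fts_score: float, cosine_sim: float,
                 lexical_weight: float = 0.5) -> float:
    """Blend a normalized FTS score and a cosine similarity, both in [0, 1].

    Illustrative only: the weights and normalization Brainlink uses
    internally may differ.
    """
    return lexical_weight * fts_score + (1 - lexical_weight) * cosine_sim

# A chunk with a strong lexical match but weak semantic similarity...
a = hybrid_score(fts_score=0.9, cosine_sim=0.2)
# ...can still outrank a chunk that is mediocre on both signals.
b = hybrid_score(fts_score=0.4, cosine_sim=0.5)
print(a > b)  # True
```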
context
blink context "question" --vault ./vault --limit 12 --tokens 2000
blink context "question" --vault ./vault --agent coding-agent --json
blink context "question" --vault ./vault --agent coding-agent --mode hybrid --json

Builds a compact context package for an agent.
links
blink links --vault ./vault
blink links --vault ./vault --agent coding-agent

Lists indexed wiki links. JSON output includes weight and priority for each relationship.
backlinks
blink backlinks "Architecture" --vault ./vault
blink backlinks "Architecture" --vault ./vault --agent coding-agent

Lists notes pointing to a target title, ordered by strongest relationship first. JSON output includes weight and priority.
graph
blink graph --vault ./vault --json
blink graph --vault ./vault --agent coding-agent --json

Prints indexed graph data. Edges include weight and priority so agents can categorize importance.
stats
blink stats --vault ./vault
blink stats --vault ./vault --agent coding-agent --json

Prints vault metrics.
broken-links
blink broken-links --vault ./vault

Lists unresolved wiki links.
orphans
blink orphans --vault ./vault

Lists notes without incoming or outgoing links.
validate
blink validate --vault ./vault

Validates graph health. The command exits non-zero when required checks fail.
doctor
blink doctor --vault ./vault

Runs environment and vault checks.
watch
blink watch --vault ./vault

Watches Markdown files and rebuilds the index when notes change.
server
blink server --watch
blink server --vault ./vault --watch

Starts the local read-only graph UI and HTTP API.
The HTTP server only binds to loopback hosts such as 127.0.0.1, localhost or ::1.
Machine-Readable Output
Commands with finite output support --json:
blink context "question" --vault ./vault --agent coding-agent --json

When running through npm scripts, use --silent to keep stdout clean:

npm run --silent dev -- context "question" --vault ./vault --json

Configuration
Brainlink reads brainlink.config.json or .brainlink.json from the current working directory. If no vault is configured and no --vault flag is passed, Brainlink uses $HOME/.brainlink/vault.
{
"vault": ".brainlink-vault",
"host": "127.0.0.1",
"port": 4321,
"allowedVaults": [".brainlink-vault"],
"defaultAgent": "shared",
"autoIndexOnWrite": true,
"defaultSearchLimit": 10,
"defaultContextTokens": 2000,
"embeddingProvider": "local",
"defaultSearchMode": "hybrid",
"chunkSize": 1200
}
`defaultAgent` is optional. When set, CLI and MCP calls that omit `--agent`/`agent` use this value automatically. If not set, behavior remains as before.
`autoIndexOnWrite` is optional and defaults to `true`. Set it to `false` to defer indexing after writes.

Use "embeddingProvider": "none" when you want FTS-only indexing.
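For example, a minimal project-local configuration for FTS-only indexing might look like this, using the fields documented above (the field values are illustrative):

```json
{
  "vault": ".brainlink-vault",
  "embeddingProvider": "none",
  "defaultSearchMode": "fts"
}
```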
For local security checks, set your Snyk token in the environment:
export SNYK_TOKEN="snyk_..."

For GitHub Actions, add a repository secret SNYK_TOKEN and the CI/publish workflows will consume it automatically during build/test.
Set BRAINLINK_ALLOWED_VAULTS for external wrappers, including MCP servers, so a tool cannot pass arbitrary --vault paths:
export BRAINLINK_ALLOWED_VAULTS="/absolute/path/to/project-vault,/absolute/path/to/team-vault"

Bucket vaults can be allowlisted with the same variable:
export BRAINLINK_ALLOWED_VAULTS="s3://my-memory-bucket/brainlink"

Note Format
Brainlink supports Markdown with optional frontmatter:
---
title: "Auth Decision"
agent: "coding-agent"
type: "decision"
---
# Auth Decision
We chose JWT for API clients.
Related:
- [[Architecture]]
- [[API Gateway]]
#auth #jwt #decision

Supported signals:
- title frontmatter
- agent frontmatter
- first # Heading
- file name fallback
- [[Wiki Link]]
- [[Wiki Link#Section]]
- [[Wiki Link|Alias]]
- #tags
Development
npm install
npm run build
npm run test
npm run check

Large vault benchmark:

npm run benchmark:large -- --notes 5000

Use --keep to keep the generated temporary vault for inspection.
Local CLI:
npm run dev -- --help
npm run dev -- server --vault .brainlink-vault --watch

Package smoke test:

npm run pack:smoke

Architecture
src/
application/ use cases
cli/ command-line adapter
domain/ pure knowledge rules
infrastructure/ filesystem and SQLite adapters
Current Limits
- Semantic search uses deterministic local embeddings, not a remote model provider.
- Semantic search uses SQLite embedding buckets to narrow candidates before cosine scoring.
- embeddingProvider currently supports local and none.
- Link resolution is title-based inside each agent namespace, with shared as fallback.
- HTTP API is local and unauthenticated.
- Watch mode depends on the platform filesystem watcher.
Alpha Scope
0.1.0-alpha.0 is intended to prove the local-first memory loop:
- Markdown as durable memory.
- SQLite FTS plus local embeddings and semantic buckets as rebuildable retrieval index.
- CLI as the primary agent interface.
- HTTP graph API and frontend as inspection tools.
- Agent namespaces to avoid context mixing.
The alpha includes local semantic retrieval. Remote embedding providers, remote auth, advanced deduplication and graph editing are future milestones.
Security
Brainlink is local-first by default.
- Do not expose the HTTP server publicly without authentication.
- Brainlink HTTP is localhost-only and refuses non-loopback hosts.
- Brainlink blocks common secret patterns by default when adding notes. Use --allow-sensitive only for intentional, protected vaults.
- Do not store secrets, credentials, API keys or regulated personal data unless the vault is protected by your own storage controls.
- Treat .brainlink/brainlink.db as disposable derived data.
See SECURITY.md.
Contributing
See CONTRIBUTING.md.
License
MIT. See LICENSE.
Memory Optimization Loop (1-7)
Use this when your agent work needs consistent memory quality:
1. Start with blink context "<task>" --agent "$BLINK_AGENT" --json.
2. Keep notes focused with explicit [[wiki links]] and #tags.
3. Route agent-specific knowledge to dedicated namespaces under agents/<agent-id>/.
4. Keep shared as a curated global layer only.
5. Use targeted queries (--limit, explicit terms, --mode hybrid) before broad scans.
6. Run the sync command after writing notes: npm run brainlink:sync -- --vault ./vault --agent "$BLINK_AGENT"
7. Before final response, keep the returned context sources as the grounding baseline.
