@rixter145/open-brain

v1.0.0


MCP server for Open Brain: Postgres + pgvector shared memory for Cursor, Claude, and any MCP client


Open Brain – MCP server

One shared memory layer for Cursor, Claude, and any MCP client: thoughts stored in Postgres + pgvector, exposed via an MCP server with semantic search and capture.

  • Capture: Save thoughts from any client; each is embedded (OpenAI text-embedding-3-small) and stored.
  • Retrieve: Semantic search by meaning, list recent thoughts, or view stats.
  • Same brain everywhere: One Postgres DB and one MCP server; point Cursor, Claude Desktop, and other clients at it.
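"Semantic search by meaning" means stored thoughts are ranked by vector distance rather than keyword match. As a minimal illustration (not the server's actual code), cosine similarity, the metric behind pgvector's cosine-distance operator, can be sketched as:

```typescript
// Illustration only: how cosine similarity ranks embeddings by meaning.
// In Open Brain the real comparison happens inside Postgres via pgvector.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy 3-dim "embeddings"; real ones are 1536-dim (OpenAI) or 768-dim (Ollama/Google).
const query = [1, 0, 0];
const thoughtA = [0.9, 0.1, 0]; // close in meaning to the query
const thoughtB = [0, 1, 0];     // unrelated

console.log(cosineSimilarity(query, thoughtA) > cosineSimilarity(query, thoughtB)); // true
```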

Prerequisites

  • Node.js 18+
  • Postgres 15+ with the pgvector extension (e.g. Supabase free tier, or self‑hosted).
  • OpenAI API key (for embeddings), Google AI Studio API key (see Using Google AI Studio), or Ollama for free local embeddings (see Using Ollama below).

Finish setup on this machine

  1. Build (in a terminal where Node/npm are available):
    cd "c:\Users\rix\OneDrive\open-brain"
    npm install
    npm run build
  2. Env: Edit .env in the project root and set DATABASE_URL and your embedding provider key (OPENAI_API_KEY, GOOGLE_API_KEY, or use EMBEDDING_PROVIDER=ollama).
  3. Database: If you don’t have a DB yet, follow Database setup below (Supabase free tier is the easiest). Then run schema.sql once.
  4. Client: Add the Open Brain MCP server in your MCP client (see Connect clients below), then reload.
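For step 2, a complete `.env` might look like the sketch below. Values are placeholders; uncomment exactly one provider block (the variable names come from this README):

```shell
# .env (placeholder values -- fill in your own)
DATABASE_URL=postgresql://user:password@host:5432/database

# Default provider: OpenAI (text-embedding-3-small, 1536 dims)
OPENAI_API_KEY=sk-...

# Or Google AI Studio (768 dims):
# EMBEDDING_PROVIDER=google
# GOOGLE_API_KEY=your-key

# Or free local Ollama (768 dims):
# EMBEDDING_PROVIDER=ollama
# OLLAMA_HOST=http://localhost:11434
```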

Using Ollama (free embeddings)

To avoid paying for OpenAI, you can use Ollama with nomic-embed-text (runs locally, no API key).

  1. Install Ollama from ollama.com and start it.
  2. Pull the embedding model:
    ollama pull nomic-embed-text
  3. In .env: set EMBEDDING_PROVIDER=ollama. Optionally set OLLAMA_HOST=http://localhost:11434 if Ollama runs elsewhere. You do not need OPENAI_API_KEY.
  4. Database: The Ollama model uses 768 dimensions (not 1536). If you already have a thoughts table from the OpenAI schema, drop it and re-run the Ollama schema:
    In Supabase SQL Editor, run DROP TABLE IF EXISTS thoughts; then paste and run the contents of schema-ollama.sql.
  5. Cursor MCP: In the Open Brain server env, include EMBEDDING_PROVIDER=ollama (and optionally OLLAMA_HOST). No OPENAI_API_KEY needed.
  6. Restart Cursor (or reload the window).

After that, capture and search use your local Ollama embeddings. You cannot mix OpenAI (1536) and Ollama (768) in the same table.
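The 768-vs-1536 constraint exists because the `embedding` column is declared with a fixed vector size. A hypothetical guard (not present in the current server) that fails fast on a provider/schema mismatch could look like:

```typescript
// Hypothetical helper: reject embeddings whose length doesn't match the table's
// declared vector dimension (1536 for schema.sql, 768 for schema-ollama.sql).
function assertEmbeddingDims(embedding: number[], expected: number): void {
  if (embedding.length !== expected) {
    throw new Error(
      `Embedding has ${embedding.length} dims but the thoughts table expects ${expected}. ` +
      `Did you switch providers without re-running the matching schema?`
    );
  }
}

// Example: an Ollama/Google-sized vector against the OpenAI schema fails.
const vec = new Array(768).fill(0);
try {
  assertEmbeddingDims(vec, 1536);
} catch (e) {
  console.log((e as Error).message); // explains the 768-vs-1536 mismatch
}
```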


Using Google AI Studio

To use Google AI Studio (Gemini) for embeddings instead of OpenAI:

  1. Get an API key at Google AI Studio. Sign in, create or select a project, and create an API key.
  2. In .env: set EMBEDDING_PROVIDER=google and GOOGLE_API_KEY=your-key. You do not need OPENAI_API_KEY.
  3. Database: Google embeddings use 768 dimensions (same as Ollama). Use the same schema: if you don’t already have a 768-dim table, run schema-ollama.sql (or run DROP TABLE IF EXISTS thoughts; then the contents of schema-ollama.sql).
  4. Cursor MCP: In the Open Brain server env, include EMBEDDING_PROVIDER=google and GOOGLE_API_KEY. No OPENAI_API_KEY needed.
  5. Restart Cursor (or reload the window).

You cannot mix different embedding dimensions in the same table (OpenAI 1536 vs Ollama/Google 768).


1. Database setup

Open Brain needs Postgres 15+ with the pgvector extension. The easiest way is Supabase (free tier, pgvector included).

Option A: Supabase (recommended, free)

  1. Create an account
    Go to supabase.com and sign up (GitHub or email).

  2. Create a project

    • Click New project.
    • Choose your organization (or create one).
    • Set Name (e.g. open-brain), Database password (save it somewhere safe), and Region.
    • Click Create new project and wait until it’s ready.
  3. Get the connection string

    • In the left sidebar: Project Settings (gear) → Database.
    • Under Connection string choose URI.
    • Copy the URI. It looks like:
      postgresql://postgres.[ref]:[YOUR-PASSWORD]@aws-0-[region].pooler.supabase.com:6543/postgres
    • Replace [YOUR-PASSWORD] with the database password you set in step 2.
    • If your password has special characters (@, #, /, %, etc.), they must be URL-encoded in the URI. Run:
      node scripts/encode-password.mjs "YourPassword"
      and use the output in place of the password in the URL.
    • Put this full URI in your .env as DATABASE_URL.
  4. Run the schema

    • In the left sidebar: SQL Editor.
    • Click New query.
    • Open schema.sql in this repo, copy its entire contents, paste into the editor, and click Run (or Ctrl+Enter).
    • You should see “Success. No rows returned.” The thoughts table and vector index are now created.
  5. Use the same DATABASE_URL in your .env and in Cursor’s MCP config for the Open Brain server.
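The URL-encoding required in step 3 is standard percent-encoding; Node's built-in encodeURIComponent does it, and the repo's scripts/encode-password.mjs presumably wraps something equivalent. A quick sketch you can run yourself:

```typescript
// Percent-encode a DB password for use inside a connection URI.
// (Assumption: scripts/encode-password.mjs does the equivalent of this.)
const password = "p@ss#w/ord%1"; // example password with special characters
const encoded = encodeURIComponent(password);
console.log(encoded); // p%40ss%23w%2Ford%251
console.log(`postgresql://postgres:${encoded}@host:6543/postgres`);
```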

Option B: Other Postgres (with pgvector)

If you use another host (e.g. Neon, Railway, or your own server):

  • Ensure pgvector is enabled (run CREATE EXTENSION IF NOT EXISTS vector; if needed).
  • Connection string format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE.
  • Run the contents of schema.sql once (e.g. via psql $DATABASE_URL -f schema.sql or your host’s SQL runner).
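For orientation, a pgvector schema of the shape this README describes (a thoughts table plus a vector index) typically looks like the sketch below. This is an assumption, not a copy of the repo's file; always run the actual schema.sql:

```sql
-- Sketch only; the repo's schema.sql is authoritative.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE IF NOT EXISTS thoughts (
  id         bigserial PRIMARY KEY,
  content    text NOT NULL,
  source     text,
  embedding  vector(1536),  -- 768 in schema-ollama.sql
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Approximate nearest-neighbor index for semantic search.
CREATE INDEX IF NOT EXISTS thoughts_embedding_idx
  ON thoughts USING ivfflat (embedding vector_cosine_ops);
```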

2. Install and run the MCP server

cd open-brain
npm install
npm run build

Set these environment variables (or define them in a .env file in the project root, as in the setup steps above):

  • DATABASE_URL – Postgres connection string (e.g. postgresql://user:pass@host:5432/dbname).
  • OPENAI_API_KEY – OpenAI API key for embeddings (default provider).
  • GOOGLE_API_KEY – Google AI Studio API key when EMBEDDING_PROVIDER=google (get one at Google AI Studio).
  • EMBEDDING_PROVIDER – Optional. Set to ollama, google, or gemini to use that provider instead of OpenAI.

Run the server (stdio; clients will start it as a subprocess):

npm start
# or for development: npm run dev

3. Connect clients

Any MCP client (Cursor, Claude Desktop, etc.) can connect to Open Brain. Add a server entry that runs the built server and passes the required env.

Server entry shape: command: "node", args: path to dist/index.js (absolute path, or relative to the workspace if your client uses this repo as the working directory). Include env with DATABASE_URL and your embedding provider key (OPENAI_API_KEY, or GOOGLE_API_KEY + EMBEDDING_PROVIDER=google, or use Ollama and set EMBEDDING_PROVIDER=ollama).

Example (replace the path and env values with your own):

{
  "mcpServers": {
    "open-brain": {
      "command": "node",
      "args": ["/path/to/open_brain/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://user:password@host:5432/database",
        "GOOGLE_API_KEY": "your-key",
        "EMBEDDING_PROVIDER": "google"
      }
    }
  }
}
  • Cursor: Settings → MCP, or project-level .cursor/mcp.json (see your client’s docs for config location).
  • Claude Desktop: e.g. %APPDATA%\Claude\claude_desktop_config.json on Windows.
  • Abacus AI Deep Agent: MCP Servers How-to — use the npx config below in MCP JSON Config.

Using npx (Abacus AI or any stdio client): Paste this in your client's MCP config (Abacus: only the inner object; Cursor/Claude: nest under mcpServers). The package is published as @rixter145/open-brain; if you maintain the package, publish updates with npm publish --access=public after npm login.

{
  "open_brain": {
    "command": "npx",
    "args": ["-y", "@rixter145/open-brain"],
    "env": {
      "DATABASE_URL": "postgresql://user:password@host:5432/database",
      "OPENAI_API_KEY": "your-key"
    }
  }
}

For Google or Ollama embeddings, add EMBEDDING_PROVIDER and the matching key to env (see sections above).

Restart the client after changing the config.

MCP tools

| Tool | Purpose |
|------|---------|
| capture_thought | Save a thought (content + optional source). It is embedded and stored. |
| search_brain | Semantic search by meaning (e.g. “career change”, “meeting with Sarah”). |
| list_recent | List recent thoughts (optional: last N days). |
| brain_stats | Count of thoughts and activity in the last 7 and 30 days. |

All tools return plain text (and optional metadata) so any model can interpret the results.

Embedding model

By default the server uses OpenAI text-embedding-3-small (1536 dimensions). If EMBEDDING_PROVIDER=ollama (or OPENAI_API_KEY is unset and OLLAMA_HOST is set), it uses Ollama nomic-embed-text (768 dimensions). If EMBEDDING_PROVIDER=google or gemini, it uses Google Gemini text-embedding (768 dimensions via outputDimensionality). Use the matching schema: schema.sql for OpenAI (1536 dims), schema-ollama.sql for Ollama or Google (768 dims). Do not change the embedding model without re-embedding existing rows.
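The selection rules above can be pictured as a small pure function. The names and exact precedence here are assumptions about src/embeddings.ts, not a copy of it:

```typescript
// Hypothetical sketch of provider selection, mirroring the rules described above.
type Provider = { name: "openai" | "ollama" | "google"; model: string; dims: number };

function resolveProvider(env: Record<string, string | undefined>): Provider {
  const choice = (env.EMBEDDING_PROVIDER ?? "").toLowerCase();
  // Ollama: explicit, or implied when no OpenAI key is set but OLLAMA_HOST is.
  if (choice === "ollama" || (!env.OPENAI_API_KEY && env.OLLAMA_HOST)) {
    return { name: "ollama", model: "nomic-embed-text", dims: 768 };
  }
  if (choice === "google" || choice === "gemini") {
    return { name: "google", model: "text-embedding", dims: 768 };
  }
  // Default: OpenAI.
  return { name: "openai", model: "text-embedding-3-small", dims: 1536 };
}

console.log(resolveProvider({ OPENAI_API_KEY: "sk-..." }).dims);     // 1536
console.log(resolveProvider({ EMBEDDING_PROVIDER: "ollama" }).name); // ollama
console.log(resolveProvider({ EMBEDDING_PROVIDER: "gemini" }).dims); // 768
```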

Project layout

  • schema.sql – Postgres + pgvector schema for OpenAI (1536 dims).
  • schema-ollama.sql – schema for Ollama or Google (768 dims).
  • src/index.ts – MCP server and tool handlers.
  • src/db.ts – Postgres + pgvector access (insert, search, list, stats).
  • src/embeddings.ts – Embedding calls (OpenAI, Google Gemini, or Ollama, env-driven).

Publishing to npm

The package is scoped as @rixter145/open-brain (npm rejects unscoped open-brain as too similar to existing openbrain). To publish so clients can use npx @rixter145/open-brain (e.g. Abacus AI):

  1. Log in: npm login (username, password, email, OTP if 2FA enabled).
  2. From the repo root: npm publish --access=public.

The prepublishOnly script builds before publish; the package includes only dist/.

Optional: metadata extraction

The project plan includes optional metadata (people, topics, type, action items) extracted via an LLM call. The schema already has these columns, but the server currently stores only content, embedding, and source. To add extraction, extend capture_thought to call an LLM, parse the response, and fill people, topics, type, and action_items in insertThought.