

Premium High-Performance Distributed Memory SDK for AI Agents


🧠 superbrain-distributed-sdk v3.0.1-cognitive — TypeScript/Node.js


The Distributed RAM Fabric for AI Agents — Share terabytes of context across your LLM cluster at microsecond speeds using 36-byte UUID pointers.
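A note on the "36-byte" figure: a UUID rendered in its canonical 8-4-4-4-12 hex form is exactly 36 ASCII characters, so the entire address of a remote buffer fits in one short string. A quick check with Node's built-in crypto (illustrative only — the real SDK presumably issues pointer IDs via the coordinator):

```typescript
// Sketch: why a pointer is "36 bytes" — a UUID v4 as a string
// (8-4-4-4-12 hex groups plus hyphens) is 36 ASCII characters.
// randomUUID is Node's built-in; no SuperBrain server is needed here.
import { randomUUID } from "node:crypto";

const ptrId: string = randomUUID();            // e.g. "1b4e28ba-2fa1-41d2-883f-0016d3cca427"
console.log(Buffer.byteLength(ptrId, "utf8")); // 36 — the whole "address" of a remote buffer
```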


🔥 v3.0.0-cognitive: The Intelligence Update is now live!

📦 Installation

```bash
npm install superbrain-distributed-sdk
```

🚀 New in v3.0.0-cognitive — Active Memory & Coordinator Bypass

Version 3.0.0 introduces the ability to operate as a microsecond-latency Active Memory Tier for agent architectures.

  • Coordinator Bypass: Metadata is cached locally, eliminating the gRPC hop to the Coordinator for established pointers.
  • Zero-Copy SHM: When the SDK detects a co-located Memory Node (127.0.0.1), it seamlessly switches from gRPC streaming to direct /dev/shm memory-mapped file access.
  • 13.5µs Native Latency: The native Go core achieves microsecond-scale access by skipping the network entirely for co-located agents.
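The coordinator-bypass idea above can be sketched in a few lines. Everything here is illustrative — `PointerCache`, `resolveViaCoordinator`, and the node address are stand-ins, not the SDK's actual internals:

```typescript
// Hypothetical sketch of coordinator bypass: once a pointer's location
// is known, later reads/writes skip the control-plane round trip.

type NodeLocation = { host: string; port: number };

class PointerCache {
  private cache = new Map<string, NodeLocation>();
  public misses = 0;

  // resolveViaCoordinator stands in for the real one-time gRPC lookup
  constructor(private resolveViaCoordinator: (ptrId: string) => NodeLocation) {}

  locate(ptrId: string): NodeLocation {
    const hit = this.cache.get(ptrId);
    if (hit) return hit;                            // bypass: no coordinator hop
    this.misses++;
    const loc = this.resolveViaCoordinator(ptrId);  // first access only
    this.cache.set(ptrId, loc);
    return loc;
  }
}

// Only the first access pays the coordinator round trip.
const cache = new PointerCache(() => ({ host: "10.0.0.7", port: 50051 }));
cache.locate("ptr-a");
cache.locate("ptr-a");
console.log(cache.misses); // 1
```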

🔧 Usage

Basic — Shared Memory Between Agents

```typescript
import { SuperbrainClient } from 'superbrain-distributed-sdk';

const client = new SuperbrainClient('localhost:50050');
await client.register('my-agent-id');

// Allocate distributed RAM
const ptrId = await client.allocate(100 * 1024 * 1024); // 100 MB

// Write from Agent A on Machine A
await client.write(ptrId, 0, Buffer.from('Shared AI context'));

// Read from Agent B on Machine B (just needs the 36-byte pointer!)
const data = await client.read(ptrId, 0, 17);

await client.free(ptrId);
client.close();
```

Advanced — Secure Fabric (E2EE)

```typescript
import crypto from 'node:crypto';
import { SuperbrainClient } from 'superbrain-distributed-sdk';

// All data encrypted with AES-256-GCM at the client level —
// memory nodes NEVER see plaintext
const client = new SuperbrainClient('localhost:50050', {
  encryptionKey: crypto.randomBytes(32)
});
await client.register('secure-agent');

const ptr = await client.allocate(4 * 1024 * 1024);
await client.write(ptr, 0, Buffer.from(JSON.stringify(sensitiveData))); // sensitiveData: your payload
const response = await client.read(ptr, 0, 0);
```
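To make the E2EE claim concrete, here is a minimal sketch of client-side AES-256-GCM sealing using only Node's built-in crypto. The SDK's actual wire format is not documented here, so treat `seal`/`open` as an illustration of the concept, not the real implementation:

```typescript
// Sketch of "E2EE at the SDK level": data is sealed with AES-256-GCM
// before it leaves the client, so memory nodes only ever hold ciphertext.
import crypto from "node:crypto";

const key = crypto.randomBytes(32); // AES-256 key, held only by the client

function seal(plaintext: Buffer): Buffer {
  const iv = crypto.randomBytes(12);                        // GCM nonce
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);      // iv | tag | ciphertext
}

function open(sealed: Buffer): Buffer {
  const iv = sealed.subarray(0, 12);
  const tag = sealed.subarray(12, 28);                      // 16-byte auth tag
  const ct = sealed.subarray(28);
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);                                 // tamper detection
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}

const stored = seal(Buffer.from("sensitive context"));      // what a memory node would see
console.log(open(stored).toString());                       // "sensitive context"
```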

Multi-Agent Context Passing

```typescript
// Agent A writes — gets pointer
const ctxPtr = await client.allocate(1024 * 1024);
await client.write(ctxPtr, 0, Buffer.from(JSON.stringify({
  topic: "distributed AI inference",
  findings: researchResults,
  timestamp: Date.now()
})));

// Share the 36-byte pointer ID via any channel (HTTP, gRPC, etc.)
broadcast({ contextPtr: ctxPtr }); // other agents connect immediately

// Agent B reads — microseconds, no data copying
const received = JSON.parse((await clientB.read(ctxPtr, 0, 0)).toString());
```

📊 Architecture

```text
Your LLM App (SDK)                 SuperBrain Cluster
┌─────────────────────────┐
│  allocate(size) ────────┼──(1)──► Coordinator (Control Plane)
│  free(ptr_id)   ────────┼──(5)──► Maps pointers → node locations
│                         │                │
│                         │         (2) pointer map returned
│                         │                │
│  write(ptr_id, data) ───┼──(3)──►┌───────▼──────────────┐
│  read(ptr_id)   ────────┼──(4)──►│   Memory Nodes       │
└─────────────────────────┘        │   (Data Plane)       │
                                   │   1TB+ pooled RAM    │
                                   └──────────────────────┘
```

CRITICAL: write() and read() bypass the Coordinator entirely.
They stream directly to the Memory Nodes over gRPC for maximum throughput (~100 MB/s).
The Coordinator is ONLY in the control path (allocate + free).

Why this matters: The Coordinator never becomes a bottleneck for your data. 1000 agents can read/write simultaneously to different nodes without fighting for the same control plane.
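The control-plane/data-plane split described above can be modeled in toy form. The function names and call logs below are purely illustrative, not the SDK's internals: allocate/free touch the coordinator, read/write go straight to the owning memory node.

```typescript
// Toy model of the control-plane / data-plane split (hypothetical names)
const coordinatorCalls: string[] = [];           // control plane log
const nodeCalls: string[] = [];                  // data plane log

const ptrTable = new Map<string, string>();      // ptrId -> memory node address

function allocate(ptrId: string): void {
  coordinatorCalls.push(`allocate ${ptrId}`);    // (1) coordinator picks a node
  ptrTable.set(ptrId, "node-1:50051");           // (2) pointer map returned
}

function write(ptrId: string): void {
  nodeCalls.push(`write ${ptrId} -> ${ptrTable.get(ptrId)}`); // (3) direct to node
}

function read(ptrId: string): void {
  nodeCalls.push(`read ${ptrId} <- ${ptrTable.get(ptrId)}`);  // (4) direct to node
}

function free(ptrId: string): void {
  coordinatorCalls.push(`free ${ptrId}`);        // (5) coordinator reclaims
  ptrTable.delete(ptrId);
}

allocate("p1"); write("p1"); read("p1"); free("p1");
console.log(coordinatorCalls.length, nodeCalls.length); // 2 2
```

The data-path calls never appear in the coordinator log, which is the whole point: data volume scales with the number of memory nodes, not with one control process.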


🧹 Memory Management

The Node.js SDK exposes the raw client layer — free() is always required after allocate().

```typescript
// ✅ Always do this after you are done with a pointer
const ptr = await client.allocate(100 * 1024 * 1024);
await client.write(ptr, 0, data);
const result = await client.read(ptr, 0, 0);
await client.free(ptr);   // ← required — leaks memory if skipped
```
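Since free() is mandatory, a try/finally wrapper is a useful pattern so pointers are released even when your code throws midway. This helper is not part of the SDK; `FabricClient` is a minimal stand-in interface for the real client:

```typescript
// Safety pattern (not part of the SDK): guarantee free() via try/finally.
interface FabricClient {
  allocate(size: number): Promise<string>;
  free(ptrId: string): Promise<void>;
}

async function withPointer<T>(
  client: FabricClient,
  size: number,
  fn: (ptrId: string) => Promise<T>,
): Promise<T> {
  const ptrId = await client.allocate(size);
  try {
    return await fn(ptrId);
  } finally {
    await client.free(ptrId);   // runs on success AND on error — no leaks
  }
}

// Demo with an in-memory fake client (no server required)
const freed: string[] = [];
const fakeClient: FabricClient = {
  allocate: async () => "ptr-1",
  free: async (id) => { freed.push(id); },
};

(async () => {
  await withPointer(fakeClient, 1024, async (ptr) => `used ${ptr}`);
  console.log(freed); // [ 'ptr-1' ]
})();
```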

🐍 Want Managed Memory? Use the Python SDK

The Python SDK (pip install superbrain-sdk) provides higher-level APIs where free() is never needed:

| Python API | Free needed? | What it does |
|------------|:------------:|--------------|
| `SharedContext.write("key", data)` | ❌ No | Key-based shared state across agents |
| `fabric.store_kv_cache(prefix)` | ❌ No | Deduped prompt cache, auto-evicted |
| `SuperBrainMemory` (LangChain) | ❌ No | Chat history in distributed RAM |

```python
# Python — no free() ever needed with high-level APIs
from superbrain import DistributedContextFabric

fabric = DistributedContextFabric(coordinator="localhost:50050")
ctx = fabric.create_context("session-42")

ctx.write("state", {"step": 10})   # written to distributed RAM
ctx.read("state")                  # read from any machine
# No free() ✅
```

Full Memory Management Guide


🔐 Security Features

| Feature | Status |
|---------|--------|
| mTLS (mutual TLS between all nodes) | ✅ |
| E2EE (AES-256-GCM at SDK level) | ✅ |
| Pub/Sub (real-time memory notifications) | ✅ |
| Per-context key rotation | ✅ (v0.2.0) |
| Anomaly detection | ✅ (v0.2.0) |
| GDPR/SOC2 audit logging | ✅ (v0.2.0) |


🗺️ Roadmap

| Version | Milestone | Status |
|---------|-----------|--------|
| v0.1.0 | Core Distributed RAM | ✅ Shipped |
| v0.1.1 | Secure Fabric (mTLS + E2EE) | ✅ Shipped |
| v0.2.0 | Phase 3: Automated AI Memory Controller | ✅ Shipped |
| v0.3.1 | Semantic Memory (FAISS-Backed Distributed Vectors) | ✅ Shipped |
| v0.4.0 | Gossip & P2P Membership | ✅ Shipped |
| v0.5.0 | High Availability & Partition Tolerance | ✅ Shipped |
| v0.6.0 | Decentralized Observability & Metrics | ✅ Shipped |
| v0.7.1 | Tiered Architecture & Zero-Copy SHM Bypass | ✅ Current |
| v0.8.0 | Raft Consensus Replication | ✅ Shipped |
| v0.9.0 | NVMe Spilling | ✅ Shipped |


📚 Documentation


🖥️ Server Setup (Required)

This SDK connects to a SuperBrain coordinator. To run one locally in 30 seconds:

```bash
git clone https://github.com/anispy211/memorypool
cd memorypool
docker compose up -d
# Dashboard: http://localhost:8080
```

MIT License · Built by Anispy