
@nextain/alpha-memory v0.1.4

Alpha Memory

Cognitive memory architecture for AI agents — importance-gated encoding, vector retrieval, knowledge graph, Ebbinghaus decay, and a head-to-head benchmark suite against popular memory systems.

Part of the Naia OS project.


Overview

Alpha Memory implements a 4-store memory architecture inspired by cognitive science:

| Store      | Brain Analog      | What it holds                                         |
|------------|-------------------|-------------------------------------------------------|
| Episodic   | Hippocampus       | Timestamped events with full context                  |
| Semantic   | Neocortex         | Facts, entities, relationships                        |
| Procedural | Basal Ganglia     | Skills, strategies, learned patterns                  |
| Working    | Prefrontal Cortex | Active context (managed externally by ContextManager) |
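
As a rough TypeScript sketch of the three persistent stores (the type and field names below are illustrative assumptions, not the package's actual exports from src/memory/types.ts):

```typescript
// Illustrative only — real definitions live in src/memory/types.ts.
type EpisodicRecord = { ts: number; content: string; context?: string };
type SemanticFact = { subject: string; relation: string; object: string };
type ProceduralSkill = { name: string; pattern: string; successRate: number };

// Working memory is managed externally by ContextManager, so a persisted
// snapshot only needs the three durable stores:
interface MemorySnapshot {
  episodic: EpisodicRecord[];
  semantic: SemanticFact[];
  procedural: ProceduralSkill[];
}

const snapshot: MemorySnapshot = {
  episodic: [{ ts: Date.now(), content: "User opened settings" }],
  semantic: [{ subject: "user", relation: "prefers", object: "dark mode" }],
  procedural: [],
};
console.log(snapshot.semantic.length); // 1
```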

Key Features

  • Importance gating — 3-axis scoring (importance × surprise × emotion) filters what gets stored
  • Knowledge graph — entity/relation extraction for semantic memory
  • Ebbinghaus decay — memory strength fades over time, strengthened by recall
  • Reconsolidation — contradiction detection on retrieval
  • Pluggable adapters — swap the vector backend (local SQLite, mem0, etc.)
  • Benchmark suite — compare against mem0, SillyTavern, Letta, Zep, SAP, OpenClaw, and more
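
A minimal sketch of how multiplicative 3-axis gating could work. The threshold and the plain product are assumptions for illustration; the actual scorer in src/memory/importance.ts may weight the axes differently:

```typescript
// Each axis is assumed to be normalized to [0, 1].
interface GateScores { importance: number; surprise: number; emotion: number }

function gateScore(s: GateScores): number {
  // Multiplicative gating: a near-zero score on any axis suppresses storage.
  return s.importance * s.surprise * s.emotion;
}

function shouldEncode(s: GateScores, threshold = 0.1): boolean {
  return gateScore(s) >= threshold;
}

console.log(shouldEncode({ importance: 0.9, surprise: 0.8, emotion: 0.5 }));  // true  (0.36)
console.log(shouldEncode({ importance: 0.9, surprise: 0.05, emotion: 0.9 })); // false (0.0405)
```

The multiplicative form (rather than a weighted sum) is what makes this a gate: unsurprising routine events are filtered out even when they score high on the other axes.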

Architecture

src/
├── memory/
│   ├── index.ts            # MemorySystem — main orchestrator
│   ├── types.ts            # Type definitions
│   ├── importance.ts       # 3-axis importance scoring (Amygdala analog)
│   ├── decay.ts            # Ebbinghaus forgetting curve
│   ├── reconsolidation.ts  # Contradiction detection
│   ├── knowledge-graph.ts  # Entity/relation extraction
│   ├── embeddings.ts       # Embedding abstraction
│   └── adapters/
│       ├── local.ts        # SQLite + hnswlib (local, no API key)
│       └── mem0.ts         # mem0 OSS backend
└── benchmark/
    ├── fact-bank.json          # 1000 Korean facts
    ├── fact-bank.en.json       # 1000 English facts
    ├── query-templates.json    # Korean test queries
    ├── query-templates.en.json # English test queries
    ├── criteria.ts             # Scoring criteria
    └── comparison/
        ├── run-comparison.ts       # Main benchmark runner
        ├── types.ts                # BenchmarkAdapter interface
        ├── adapter-naia.ts         # Alpha Memory (this project)
        ├── adapter-mem0.ts         # mem0 OSS
        ├── adapter-sillytavern.ts  # SillyTavern (vectra + transformers.js)
        ├── adapter-letta.ts        # Letta (formerly MemGPT)
        ├── adapter-zep.ts          # Zep CE
        ├── adapter-openclaw.ts     # OpenClaw
        ├── adapter-sap.ts          # Super Agent Party (mem0 + FAISS)
        └── adapter-jikime-mem.ts   # Jikime Memory
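
Each comparison adapter implements the BenchmarkAdapter interface from src/benchmark/comparison/types.ts. The shape below is an assumption for illustration (the real interface may differ), along with a trivial in-memory implementation to show what writing a new adapter involves:

```typescript
// Assumed shape — see src/benchmark/comparison/types.ts for the real one.
interface BenchmarkAdapter {
  readonly id: string;                      // e.g. "naia", "mem0"
  init(): Promise<void>;
  encode(fact: string): Promise<void>;      // store one fact from the fact bank
  recall(query: string): Promise<string[]>; // retrieve candidate memories
  teardown(): Promise<void>;
}

// Naive keyword-matching adapter, useful only as a baseline sketch.
class NaiveAdapter implements BenchmarkAdapter {
  readonly id = "naive";
  private facts: string[] = [];
  async init() {}
  async encode(fact: string) { this.facts.push(fact); }
  async recall(query: string) {
    const terms = query.toLowerCase().split(/\s+/);
    return this.facts.filter(f => terms.some(t => f.toLowerCase().includes(t)));
  }
  async teardown() {}
}
```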

Installation

npm install @nextain/alpha-memory
# or
pnpm add @nextain/alpha-memory

Usage

import { MemorySystem } from "@nextain/alpha-memory";

// Initialize with local SQLite backend (no API key needed)
const memory = new MemorySystem({ adapter: "local" });
await memory.init();

// Encode a message into memory
await memory.encode("User prefers dark mode and uses Neovim as their editor");

// Recall relevant memories for a query
const results = await memory.recall("What editor does the user use?");
console.log(results); // ["User prefers dark mode and uses Neovim as their editor"]

// At session start — inject into system prompt
const context = await memory.sessionRecall("new conversation started");
// context: string to prepend to system prompt
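
The Ebbinghaus decay mentioned above can be sketched as exponential retention, R = exp(-t / S), where stability S grows with each recall. The constants here are assumptions; src/memory/decay.ts may parameterize the curve differently:

```typescript
// Retention after elapsed time, given the memory's current stability.
function retention(elapsedMs: number, stabilityMs: number): number {
  return Math.exp(-elapsedMs / stabilityMs);
}

// Each successful recall strengthens the memory, slowing future decay.
function onRecall(stabilityMs: number, boost = 1.5): number {
  return stabilityMs * boost;
}

const day = 24 * 60 * 60 * 1000;
let stability = 2 * day;
console.log(retention(1 * day, stability).toFixed(2)); // "0.61"
stability = onRecall(stability);                       // stability now 3 days
console.log(retention(1 * day, stability).toFixed(2)); // "0.72"
```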

Quick Start (Benchmark)

npm install

# Korean benchmark, keyword judge
GEMINI_API_KEY=your-key npx tsx src/benchmark/comparison/run-comparison.ts \
  --adapters=naia,mem0,sillytavern \
  --judge=keyword \
  --lang=ko

# With gateway (Vertex AI, no rate limits)
GATEWAY_URL=https://your-gateway GATEWAY_MASTER_KEY=your-key \
  npx tsx src/benchmark/comparison/run-comparison.ts \
  --adapters=naia \
  --judge=keyword \
  --lang=en

CLI Options

| Option | Default | Description |
|--------|---------|-------------|
| --adapters=a,b,c | naia,mem0 | Adapters to run |
| --judge=keyword\|claude-cli | claude-cli | Scoring method |
| --lang=ko\|en | ko | Fact bank language |
| --embedder=gemini\|solar\|qwen3\|bge-m3 | gemini | Embedding model |
| --llm=gemini\|qwen3 | qwen3 | LLM for Naia adapter |
| --skip-encode | off | Reuse cached DB |
| --runs=N | 1 | Runs per test |
| --categories=a,b | all | Filter categories |

Available Adapters

| ID | System | Backend |
|----|--------|---------|
| naia | Alpha Memory (this project) | SQLite + vector search + KG |
| mem0 | mem0 OSS | SQLite vector + LLM dedup |
| sillytavern | SillyTavern | vectra + transformers.js |
| letta | Letta | Requires Letta server |
| zep | Zep CE | Requires Zep server |
| openclaw | OpenClaw | Local (Naia Gateway) |
| sap | Super Agent Party | mem0 + FAISS/ChromaDB |
| open-llm-vtuber | Open-LLM-VTuber | Letta-based agent memory |
| jikime-mem | Jikime Memory | Local |
| no-memory | Baseline (no memory) | — |


Embedding Backends

| Backend | Model | Dims | Notes |
|---------|-------|------|-------|
| gemini | gemini-embedding-001 | 3072 | Requires GEMINI_API_KEY |
| solar | embedding-query/passage | 4096 | Requires UPSTAGE_API_KEY |
| qwen3 | qwen3-embedding (Ollama) | 2048 | Local |
| bge-m3 | bge-m3 (Ollama) | 1024 | Local, multilingual |
| Gateway | vertexai:text-embedding-004 | 768 | GATEWAY_URL + GATEWAY_MASTER_KEY |
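
Note that the backends produce vectors of different dimensionality, so a database encoded with one embedder cannot be queried with another. Retrieval typically ranks stored vectors by cosine similarity, which can be sketched as:

```typescript
// Cosine similarity between two equal-length vectors; returns a value in [-1, 1].
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosine([1, 0], [1, 0])); // 1
```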


Benchmark Results

See docs/reports/ for full reports.


Development

npm run typecheck   # TypeScript check
npm run check       # Biome lint + format

AI-Native Open Source

This project is built with an AI-native development philosophy:

  • AI context is a first-class artifact — .agents/ context files are versioned alongside code, not treated as throwaway prompts
  • Privacy by architecture — memory is stored locally and E2E encrypted; even the service provider cannot access it
  • AI sovereignty — the user owns their AI memories; Nextain provides infrastructure, not access
  • Transparent AI assistance — AI contributions are credited via Assisted-by: git trailers

AI context in .agents/ and .users/ is licensed under CC-BY-SA 4.0 — attribution and same license required.

In the vibe coding era, AI context is an asset as valuable as code.


License

Apache 2.0 — see LICENSE

Part of Naia OS by Nextain.