@yamo/memory-mesh

v2.3.2

Portable semantic memory system with Layer 0 Scrubber for YAMO agents

Readme

MemoryMesh

Portable, semantic memory system for AI agents with automatic Layer 0 sanitization.

Built on the YAMO Protocol for transparent agent collaboration with structured workflows and immutable provenance.

Features

  • Persistent Vector Storage: Powered by LanceDB for semantic search.
  • Layer 0 Scrubber: Automatically sanitizes, deduplicates, and cleans content before embedding.
  • Local Embeddings: Runs 100% locally using ONNX (no API keys required).
  • Portable CLI: Simple JSON-based interface for any agent or language.
  • YAMO Skills Integration: Includes yamo-super workflow system with automatic memory learning.
  • Pattern Recognition: Workflows automatically store and retrieve execution patterns for optimization.
  • LLM-Powered Reflections: Generate insights from memories using configurable LLM providers.
  • YAMO Audit Trail: Automatic emission of structured blocks for all memory operations.
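
A rough sketch of how these features come together in code is shown below. It uses only the constructor options that appear elsewhere in this README (enableLLM, llmProvider, llmModel, enableYamo), and the comments restate the feature descriptions above rather than a definitive API reference.

import { MemoryMesh } from '@yamo/memory-mesh';

// Local ONNX embeddings are the default, so storage and search need no API keys.
const mesh = new MemoryMesh({
  enableYamo: true,          // emit a YAMO audit block for every operation
  enableLLM: true,           // opt in to LLM-powered reflections
  llmProvider: 'ollama',     // a local provider; 'openai' and 'anthropic' are also supported
  llmModel: 'llama3.2'
});

// Content passes through the Layer 0 Scrubber before it is embedded and stored.
await mesh.add('Deploy finished without errors', { type: 'event' });

// Semantic search over everything stored so far.
const related = await mesh.search('deployment outcome');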

Installation

npm install @yamo/memory-mesh

Usage

CLI

# Store a memory (automatically scrubbed & embedded)
memory-mesh store "My important memory" '{"tag":"test"}'

# Search memories
memory-mesh search "query" 5

Node.js API

import { MemoryMesh } from '@yamo/memory-mesh';

const mesh = new MemoryMesh();

// Content is scrubbed by Layer 0 and embedded locally before it is stored
await mesh.add('Content', { meta: 'data' });

// Semantic search over stored memories
const results = await mesh.search('query');

Enhanced Reflections with LLM

MemoryMesh supports LLM-powered reflection generation that synthesizes insights from stored memories:

import { MemoryMesh } from '@yamo/memory-mesh';

// Enable LLM integration (requires API key or local model)
const mesh = new MemoryMesh({
  enableLLM: true,
  llmProvider: 'openai',  // or 'anthropic', 'ollama'
  llmApiKey: process.env.OPENAI_API_KEY,
  llmModel: 'gpt-4o-mini'
});

// Store some memories
await mesh.add('Bug: type mismatch in keyword search', { type: 'bug' });
await mesh.add('Bug: missing content field', { type: 'bug' });

// Generate reflection (automatically stores result to memory)
const reflection = await mesh.reflect({ topic: 'bugs', lookback: 10 });

console.log(reflection.reflection);
// Output: "Synthesized from 2 memories: Bug: type mismatch..., Bug: missing content..."

console.log(reflection.confidence);  // 0.85
console.log(reflection.yamoBlock);    // YAMO audit trail

CLI Usage:

# With LLM (default)
memory-mesh reflect '{"topic": "bugs", "limit": 10}'

# Without LLM (prompt-only for external LLM)
memory-mesh reflect '{"topic": "bugs", "llm": false}'

YAMO Audit Trail

MemoryMesh automatically emits YAMO blocks for all operations when enabled:

const mesh = new MemoryMesh({ enableYamo: true });

// All operations now emit YAMO blocks
await mesh.add('Memory content', { type: 'event' });      // emits 'retain' block
await mesh.search('query');                                 // emits 'recall' block
await mesh.reflect({ topic: 'test' });                      // emits 'reflect' block

// Query YAMO log
const yamoLog = await mesh.getYamoLog({ operationType: 'reflect', limit: 10 });
console.log(yamoLog);
// [{ id, agentId, operationType, yamoText, timestamp, ... }]

Using in a Project

To use MemoryMesh with your Claude Code skills (like yamo-super) in a new project:

1. Install the Package

npm install @yamo/memory-mesh

2. Run Setup

This installs YAMO skills to ~/.claude/skills/memory-mesh/ and tools to ./tools/:

npx memory-mesh-setup

The setup script will:

  • Copy YAMO skills (yamo-super, scrubber) to Claude Code
  • Copy CLI tools to your project's tools/ directory
  • Prompt before overwriting existing files

3. Use the Skills

Your skills are now available in Claude Code with automatic memory integration:

# Use yamo-super workflow system
# Automatically retrieves similar past workflows and stores execution patterns
claude /yamo-super

Memory Integration Features:

  • Workflow Orchestrator: Searches for similar past workflows before starting
  • Design Phase: Stores validated designs with metadata
  • Debug Phase: Retrieves similar bug patterns and stores resolutions
  • Review Phase: Stores code review outcomes and quality metrics
  • Complete Workflow: Stores full execution pattern for future optimization

YAMO agents will automatically find tools in tools/memory_mesh.js.
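
If you want to call that tool from your own scripts rather than from a skill, a minimal sketch follows. It assumes tools/memory_mesh.js accepts the same arguments as the memory-mesh CLI shown above (store/search) and prints a JSON result on stdout; both details are assumptions, so check the generated tool before relying on them.

import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

// Hypothetical wrapper: invoke the copied CLI tool and parse its JSON output.
async function searchMemories(query, limit = 5) {
  const { stdout } = await run('node', ['tools/memory_mesh.js', 'search', query, String(limit)]);
  return JSON.parse(stdout);
}

const results = await searchMemories('similar bug patterns');
console.log(results);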

Docker

docker run -v $(pwd)/data:/app/runtime/data \
  yamo/memory-mesh store "Content"

About YAMO Protocol

MemoryMesh is built on the YAMO (Yet Another Markup for Orchestration) Protocol, a structured language for transparent AI agent collaboration with immutable provenance tracking.

YAMO Protocol Features:

  • Structured Agent Workflows: Semicolon-terminated constraints, explicit handoff chains
  • Meta-Reasoning Traces: Hypothesis, rationale, confidence, and observation annotations
  • Blockchain Integration: Immutable audit trails via Model Context Protocol (MCP)
  • Multi-Agent Coordination: Designed for transparent collaboration across organizational boundaries

Learn More:

  • YAMO Protocol Organization: github.com/yamo-protocol
  • Protocol Specification: See the YAMO RFC documents for core syntax and semantics
  • Ecosystem: Explore other YAMO-compliant tools and skills

MemoryMesh is YAMO v2.1.0 compliant, providing:

  • MemorySystemInitializer agent for graceful degradation
  • Context passing between agents (from_AgentName.output)
  • Structured logging with meta-reasoning
  • Priority levels and constraint-based execution
  • Automatic workflow pattern storage for continuous learning

Related YAMO Projects:

  • yamo-chain - Blockchain integration for agent provenance

Documentation

Configuration

LLM Provider Configuration

# Required for LLM-powered reflections
LLM_PROVIDER=openai          # Provider: 'openai', 'anthropic', 'ollama'
LLM_API_KEY=sk-...            # API key for OpenAI/Anthropic
LLM_MODEL=gpt-4o-mini         # Model name
LLM_BASE_URL=https://...      # Optional: Custom API base URL

Supported Providers:

  • OpenAI: GPT-4, GPT-4o-mini, etc.
  • Anthropic: Claude 3.5 Haiku, Sonnet, Opus
  • Ollama: Local models (llama3.2, mistral, etc.)
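
These environment variables describe the same settings as the constructor options used earlier (enableLLM, llmProvider, llmApiKey, llmModel). One way to wire them together is sketched below; whether the library also reads these variables on its own is not stated in this README, so the explicit mapping is a safe default.

import { MemoryMesh } from '@yamo/memory-mesh';

// Map the documented env vars onto the documented constructor options.
const mesh = new MemoryMesh({
  enableLLM: true,
  llmProvider: process.env.LLM_PROVIDER || 'openai',
  llmApiKey: process.env.LLM_API_KEY,        // not needed for local Ollama models
  llmModel: process.env.LLM_MODEL || 'gpt-4o-mini'
});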

YAMO Configuration

# Optional YAMO settings
ENABLE_YAMO=true              # Enable YAMO block emission (default: true)
YAMO_DEBUG=true               # Enable verbose YAMO logging

LanceDB Configuration

# Vector database settings
LANCEDB_URI=./runtime/data/lancedb
LANCEDB_MEMORY_TABLE=memory_entries

Embedding Configuration

# Embedding model settings
EMBEDDING_MODEL_TYPE=local    # 'local', 'openai', 'cohere', 'ollama'
EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
EMBEDDING_DIMENSION=384

Example .env File

# LLM for reflections
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini

# YAMO audit
ENABLE_YAMO=true
YAMO_DEBUG=false

# Vector DB
LANCEDB_URI=./data/lancedb

# Embeddings (local default)
EMBEDDING_MODEL_TYPE=local
EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2