@yamo/memory-mesh
v2.3.2
Portable semantic memory system with Layer 0 Scrubber for YAMO agents
MemoryMesh
Portable, semantic memory system for AI agents with automatic Layer 0 sanitization.
Built on the YAMO Protocol for transparent agent collaboration with structured workflows and immutable provenance.
Features
- Persistent Vector Storage: Powered by LanceDB for semantic search.
- Layer 0 Scrubber: Automatically sanitizes, deduplicates, and cleans content before embedding.
- Local Embeddings: Runs 100% locally using ONNX (no API keys required).
- Portable CLI: Simple JSON-based interface for any agent or language.
- YAMO Skills Integration: Includes yamo-super workflow system with automatic memory learning.
- Pattern Recognition: Workflows automatically store and retrieve execution patterns for optimization.
- LLM-Powered Reflections: Generate insights from memories using configurable LLM providers.
- YAMO Audit Trail: Automatic emission of structured blocks for all memory operations.
Installation

```bash
npm install @yamo/memory-mesh
```

Usage
CLI

```bash
# Store a memory (automatically scrubbed & embedded)
memory-mesh store "My important memory" '{"tag":"test"}'

# Search memories
memory-mesh search "query" 5
```

Node.js API

```js
import { MemoryMesh } from '@yamo/memory-mesh';

const mesh = new MemoryMesh();
await mesh.add('Content', { meta: 'data' });
const results = await mesh.search('query');
```

Enhanced Reflections with LLM
MemoryMesh supports LLM-powered reflection generation that synthesizes insights from stored memories:
```js
import { MemoryMesh } from '@yamo/memory-mesh';

// Enable LLM integration (requires API key or local model)
const mesh = new MemoryMesh({
  enableLLM: true,
  llmProvider: 'openai', // or 'anthropic', 'ollama'
  llmApiKey: process.env.OPENAI_API_KEY,
  llmModel: 'gpt-4o-mini'
});

// Store some memories
await mesh.add('Bug: type mismatch in keyword search', { type: 'bug' });
await mesh.add('Bug: missing content field', { type: 'bug' });

// Generate reflection (automatically stores result to memory)
const reflection = await mesh.reflect({ topic: 'bugs', lookback: 10 });
console.log(reflection.reflection);
// Output: "Synthesized from 2 memories: Bug: type mismatch..., Bug: missing content..."
console.log(reflection.confidence); // 0.85
console.log(reflection.yamoBlock);  // YAMO audit trail
```

CLI Usage:

```bash
# With LLM (default)
memory-mesh reflect '{"topic": "bugs", "limit": 10}'

# Without LLM (prompt-only for external LLM)
memory-mesh reflect '{"topic": "bugs", "llm": false}'
```

YAMO Audit Trail
MemoryMesh automatically emits YAMO blocks for all operations when enabled:
```js
const mesh = new MemoryMesh({ enableYamo: true });

// All operations now emit YAMO blocks
await mesh.add('Memory content', { type: 'event' }); // emits 'retain' block
await mesh.search('query');                          // emits 'recall' block
await mesh.reflect({ topic: 'test' });               // emits 'reflect' block

// Query YAMO log
const yamoLog = await mesh.getYamoLog({ operationType: 'reflect', limit: 10 });
console.log(yamoLog);
// [{ id, agentId, operationType, yamoText, timestamp, ... }]
```

Using in a Project
To use MemoryMesh with your Claude Code skills (like yamo-super) in a new project:
1. Install the Package

```bash
npm install @yamo/memory-mesh
```

2. Run Setup

This installs YAMO skills to ~/.claude/skills/memory-mesh/ and tools to ./tools/:

```bash
npx memory-mesh-setup
```

The setup script will:
- Copy YAMO skills (yamo-super, scrubber) to Claude Code
- Copy CLI tools to your project's tools/ directory
- Prompt before overwriting existing files
3. Use the Skills
Your skills are now available in Claude Code with automatic memory integration:
```bash
# Use yamo-super workflow system
# Automatically retrieves similar past workflows and stores execution patterns
claude /yamo-super
```

Memory Integration Features:
- Workflow Orchestrator: Searches for similar past workflows before starting
- Design Phase: Stores validated designs with metadata
- Debug Phase: Retrieves similar bug patterns and stores resolutions
- Review Phase: Stores code review outcomes and quality metrics
- Complete Workflow: Stores full execution pattern for future optimization
YAMO agents will automatically find tools in tools/memory_mesh.js.
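Since the CLI takes a plain string plus JSON (see the CLI examples above), agents written in any language can shell out to it. A sketch of assembling the store/search invocations; the helper names are hypothetical, and actually spawning the tool (e.g. via child_process.execFile) is left to the caller:

```javascript
// Build argv arrays for the memory-mesh CLI, mirroring the
// documented usage: store <content> <json-metadata>, search <query> <limit>.
function storeArgs(content, metadata) {
  return ['store', content, JSON.stringify(metadata)];
}

function searchArgs(query, limit = 5) {
  return ['search', query, String(limit)];
}

console.log(storeArgs('My important memory', { tag: 'test' }));
// → [ 'store', 'My important memory', '{"tag":"test"}' ]
console.log(searchArgs('query'));
// → [ 'search', 'query', '5' ]
```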
Docker
```bash
docker run -v $(pwd)/data:/app/runtime/data \
  yamo/memory-mesh store "Content"
```

About YAMO Protocol
Memory Mesh is built on the YAMO (Yet Another Markup for Orchestration) Protocol - a structured language for transparent AI agent collaboration with immutable provenance tracking.
YAMO Protocol Features:
- Structured Agent Workflows: Semicolon-terminated constraints, explicit handoff chains
- Meta-Reasoning Traces: Hypothesis, rationale, confidence, and observation annotations
- Blockchain Integration: Immutable audit trails via Model Context Protocol (MCP)
- Multi-Agent Coordination: Designed for transparent collaboration across organizational boundaries
Learn More:
- YAMO Protocol Organization: github.com/yamo-protocol
- Protocol Specification: See the YAMO RFC documents for core syntax and semantics
- Ecosystem: Explore other YAMO-compliant tools and skills
Memory Mesh implements YAMO v2.1.0 compliance with:
- MemorySystemInitializer agent for graceful degradation
- Context passing between agents (from_AgentName.output)
- Structured logging with meta-reasoning
- Priority levels and constraint-based execution
- Automatic workflow pattern storage for continuous learning
Related YAMO Projects:
- yamo-chain - Blockchain integration for agent provenance
Documentation
- Architecture Guide: docs/ARCHITECTURE.md - Comprehensive system architecture (1,118 lines)
- Development Guide: CLAUDE.md - Guide for Claude Code development
- Marketplace: .claude-plugin/marketplace.json - Plugin metadata
Configuration
LLM Provider Configuration
```bash
# Required for LLM-powered reflections
LLM_PROVIDER=openai      # Provider: 'openai', 'anthropic', 'ollama'
LLM_API_KEY=sk-...       # API key for OpenAI/Anthropic
LLM_MODEL=gpt-4o-mini    # Model name
LLM_BASE_URL=https://... # Optional: custom API base URL
```

Supported Providers:
- OpenAI: GPT-4, GPT-4o-mini, etc.
- Anthropic: Claude 3.5 Haiku, Sonnet, Opus
- Ollama: Local models (llama3.2, mistral, etc.)
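Reading these variables in Node looks roughly like this. Only the variable names come from this README; the fallback defaults here are assumptions for illustration:

```javascript
// Assemble an LLM config from the environment variables documented above.
function llmConfigFromEnv(env = process.env) {
  return {
    provider: env.LLM_PROVIDER || 'openai',  // 'openai' | 'anthropic' | 'ollama' (assumed default)
    apiKey: env.LLM_API_KEY,                 // required for OpenAI/Anthropic
    model: env.LLM_MODEL || 'gpt-4o-mini',   // assumed default
    baseUrl: env.LLM_BASE_URL,               // optional custom endpoint
  };
}

console.log(llmConfigFromEnv({ LLM_PROVIDER: 'ollama', LLM_MODEL: 'llama3.2' }));
// → { provider: 'ollama', apiKey: undefined, model: 'llama3.2', baseUrl: undefined }
```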
YAMO Configuration
```bash
# Optional YAMO settings
ENABLE_YAMO=true # Enable YAMO block emission (default: true)
YAMO_DEBUG=true  # Enable verbose YAMO logging
```

LanceDB Configuration
```bash
# Vector database settings
LANCEDB_URI=./runtime/data/lancedb
LANCEDB_MEMORY_TABLE=memory_entries
```

Embedding Configuration
```bash
# Embedding model settings
EMBEDDING_MODEL_TYPE=local # 'local', 'openai', 'cohere', 'ollama'
EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
EMBEDDING_DIMENSION=384
```

Example .env File
```bash
# LLM for reflections
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini

# YAMO audit
ENABLE_YAMO=true
YAMO_DEBUG=false

# Vector DB
LANCEDB_URI=./data/lancedb

# Embeddings (local default)
EMBEDDING_MODEL_TYPE=local
EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
```
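To illustrate what EMBEDDING_DIMENSION controls: each stored memory becomes a 384-dimensional vector, and semantic search ranks memories by vector similarity. A minimal cosine-similarity sketch, purely illustrative and not MemoryMesh or LanceDB code:

```javascript
// Cosine similarity between two embedding vectors of equal dimension:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical direction).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0, 0], [1, 0, 0])); // → 1
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // → 0
```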