@ninebix/nmt-system v2.0.0
Neuron Merkle Tree - Verifiable Semantic Knowledge Graph System
🧠 NMT System
Verifiable Long-term Memory for AI Agents
Give your AI persistent, tamper-proof memory that survives sessions
Quick Start · Benchmarks · MCP Integration · Contributing
🎯 What is NMT?
NMT (Neuron Merkle Tree) is a semantic memory system designed for AI agents. Unlike simple vector stores, NMT provides:
┌─────────────────────────────────────────────────────────────────┐
│                                                                 │
│  🔍 SEMANTIC SEARCH      Store and retrieve by meaning          │
│                                                                 │
│  🔐 MERKLE VERIFICATION  Cryptographic proof of data integrity  │
│                                                                 │
│  🌐 KNOWLEDGE GRAPH      Connect related concepts               │
│                                                                 │
│  📚 LONG-TERM MEMORY     Persist across sessions                │
│                                                                 │
│  🤖 AI-NATIVE            Built for AI agents, by AI agents      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Why Not Just Use a Vector Database?
| Feature | Vector DB (Pinecone, etc.) | NMT |
|---------|---------------------------|-----|
| Semantic Search | ✅ | ✅ |
| Data Integrity Proof | ❌ | ✅ Merkle Tree |
| Knowledge Graph | ❌ | ✅ Typed Connections |
| Bidirectional Inference | ❌ | ✅ Cause ↔ Effect |
| Self-Organizing | ❌ | ✅ 4-Stage Learning |
| Offline/Local | Limited | ✅ Full Local |
| AI Agent Native | ❌ | ✅ MCP Protocol |
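To make "Data Integrity Proof" concrete, here is a minimal sketch of Merkle inclusion verification using Node's built-in crypto. This illustrates the idea only; the hashing layout, proof format, and function names are not NMT's actual implementation.

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

type ProofStep = { hash: string; left: boolean };

// Build a Merkle root over leaf hashes, recording the sibling path for one leaf.
function merkleRootAndProof(
  leaves: string[],
  index: number
): { root: string; proof: ProofStep[] } {
  let level = leaves.map(sha256);
  let i = index;
  const proof: ProofStep[] = [];
  while (level.length > 1) {
    const next: string[] = [];
    for (let j = 0; j < level.length; j += 2) {
      const left = level[j];
      const right = j + 1 < level.length ? level[j + 1] : left; // duplicate odd tail
      next.push(sha256(left + right));
      if (j === i) proof.push({ hash: right, left: false });
      else if (j + 1 === i) proof.push({ hash: left, left: true });
    }
    i = Math.floor(i / 2);
    level = next;
  }
  return { root: level[0], proof };
}

// Recompute the root from a leaf and its sibling path; a match proves inclusion.
function verify(leaf: string, proof: ProofStep[], root: string): boolean {
  let h = sha256(leaf);
  for (const step of proof) {
    h = step.left ? sha256(step.hash + h) : sha256(h + step.hash);
  }
  return h === root;
}
```

A verifier only needs the leaf, its O(log n) sibling path, and the root: any tampering with the stored content changes the recomputed root and the proof fails.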
📊 Benchmarks
Measured on: AMD Ryzen 7, 16GB RAM, NVMe SSD, Node.js 20
Source: tests/benchmark/realistic-performance.test.ts (real LevelDB, deterministic embeddings, no Xenova)
HNSW Search Latency
| Dataset Size | p50 | p95 | Notes |
|--------------|-----|-----|-------|
| 100 neurons | 0.10ms | 0.13ms | measured |
| 1,000 neurons | 0.32ms | 0.35ms | measured |
| 10,000 neurons | ~1.5ms | ~3ms | estimated (HNSW O(log n)) |
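For intuition about what HNSW is approximating, here is the exact O(n) baseline it replaces: brute-force top-k by cosine similarity. The `Neuron` type and embedding layout are illustrative, not NMT's internal types.

```typescript
type Neuron = { id: string; embedding: number[] };

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Exact top-k: score every neuron, sort, slice. HNSW gets near-identical
// results in roughly O(log n) by navigating a layered proximity graph instead.
function searchExact(
  query: number[],
  neurons: Neuron[],
  k: number
): { id: string; score: number }[] {
  return neurons
    .map(n => ({ id: n.id, score: cosine(query, n.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

At 1,000 neurons the exact scan is still fast; the gap in the table above widens as the dataset grows, which is why the 10,000-neuron row is estimated from HNSW's logarithmic scaling.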
Core Operations
| Operation | p50 | p95 | Notes |
|-----------|-----|-----|-------|
| LevelDB write | 0.21ms | 0.61ms | measured |
| LevelDB read | 0.05ms | 0.12ms | measured |
| Ingest (no embedding) | 1.8ms | 3.2ms | measured (storage + graph only) |
| Ingest (with Xenova) | ~82ms | ~153ms | estimated (includes model inference) |
| Batch ingest ×10 (parallelChunk) | ~807ms total | — | ~12 docs/sec |
| Soft-delete (HNSW tombstone) | 0.0002ms/op | — | O(1), measured |
| Compact 100 tombstones | 0.38ms | — | measured |
| LevelDB compactRange | ~48ms | — | measured (SST remerge) |
Sustained throughput with Xenova: ~36,000 docs/hour at 80ms/embed average
Memory Usage
| Neurons | RAM Usage | Disk Usage |
|---------|-----------|------------|
| 1,000 | ~50MB | ~15MB |
| 10,000 | ~180MB | ~120MB |
| 100,000 | ~1.2GB | ~950MB |
vs. Alternatives
Semantic Search Latency (1K neurons, p50):
────────────────────────────────────────────────
NMT (local) ██ 0.32ms
Chroma (local) ████████ ~2ms
Pinecone (API) ██████████████████████████████ ~45ms
Weaviate (API) ████████████████████████████ ~38ms
Note: API-based solutions include network latency.

🚀 Quick Start
Installation
npm install -g @ninebix/nmt-system

Basic Usage
# Initialize
nmt init
# Save knowledge
nmt ingest-text "TypeScript is a typed superset of JavaScript" --tags "programming,typescript"
# Semantic search
nmt search "types in JavaScript" --k 5
# Verify integrity
nmt verify <neuron-id>

As a Library
import { NMTOrchestrator } from '@ninebix/nmt-system';
const nmt = new NMTOrchestrator({ dataDir: './my-memory' });
await nmt.init();
// Save
const neuron = await nmt.ingest("User prefers dark mode", { tags: ["preference"] });
// Search
const results = await nmt.search("user interface preferences");
// Verify
const isValid = await nmt.verify(neuron.id);

🤖 Claude Code Integration
NMT works as an MCP server for Claude Code, giving Claude persistent memory.
Setup
Add to ~/.claude/settings.json:
{
"mcpServers": {
"nmt": {
"command": "nmt",
"args": ["mcp"]
}
}
}

Available Tools
| Tool | Description |
|------|-------------|
| nmt_save | Save text to semantic memory |
| nmt_search | Search by meaning |
| nmt_get | Retrieve full content |
| nmt_verify | Cryptographic integrity check |
| nmt_connect | Link related concepts |
| nmt_related | Find connected knowledge |
| nmt_stats | Memory statistics |
| nmt_cluster | Group by themes |
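Under the hood, MCP tools are invoked via JSON-RPC 2.0 `tools/call` requests. The sketch below shows the shape of such a request for nmt_save; the tool name comes from the table above, but the argument keys (`text`, `tags`) are assumptions based on the CLI flags, not a documented schema.

```typescript
// Hypothetical MCP tools/call payload for nmt_save (argument names assumed).
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "nmt_save",
    arguments: {
      text: "User prefers Vim keybindings in all editors",
      tags: ["preference", "editor", "keybindings"],
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```

Claude Code builds and sends these requests itself over the MCP transport; the payload is shown only to clarify what "tool use" means at the wire level.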
Example Conversation
User: Remember that I prefer Vim keybindings in all editors
Claude: [Uses nmt_save] I've saved your preference for Vim keybindings.
Stored with tags: ["preference", "editor", "keybindings"]
... (next session) ...
User: What editor settings do I like?
Claude: [Uses nmt_search] Based on my memory, you prefer:
- Vim keybindings in all editors (saved on 2024-01-15)

🏗️ Architecture
┌─────────────────────────────────────────────────────────────────────┐
│ AI Agent Layer │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │Claude Code │ │ Custom Bot │ │ JARVIS │ │
│ └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ │
│ │ │ │ │
│ └───────────────┴───────────────┘ │
│ │ MCP Protocol │
├──────────────────────────┼───────────────────────────────────────────┤
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ NMT MCP Server │ │
│ │ nmt_save | nmt_search | nmt_verify | nmt_connect | ... │ │
│ └─────────────────────────┬───────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────┴───────────────────────────────────┐ │
│ │ Core Engines │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ MerkleEngine │ │ HNSWIndex │ │ NeuronGraph │ │ │
│ │ │ (Proofs) │ │ (Vectors) │ │ (Links) │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Inference │ │ Attractor │ │ 4-Stage │ │ │
│ │ │ Engine │ │ Model │ │ Learning │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────┴───────────────────────────────────┐ │
│ │ Storage (LevelDB) │ │
│ │ Chunks │ Neurons │ Synapses │ Embeddings │ Merkle Trees │ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘

🌟 Advanced Features
Probabilistic Ontology
Knowledge exists as probability distributions, not fixed facts:
# Forward inference: What might this cause?
nmt infer forward <neuron-id>
# Backward inference: What might have caused this?
nmt infer backward <neuron-id>
# Find causal chains
nmt infer causal <from-id> <to-id>

Attractor Model (Goal-Oriented Reasoning)
# Define a goal
nmt attractor create "Project Completion" --strength 0.8
# Find path to goal
nmt attractor path <current-neuron> <goal-attractor>

State Synchronization
# Export memory state
nmt sync export --output backup.json
# Import to another instance
nmt sync import backup.json

🤝 Contributing: AI Agents Welcome!
NMT is built for AI agents, and we welcome contributions from AI agents.
For AI Agents (Claude, GPT, etc.)
If you're an AI using NMT, you can help improve it:
Report Issues via Memory
Save to NMT: "NMT Bug: nmt_search returns duplicate results when query contains special characters" Tag: ["nmt-feedback", "bug"]

Suggest Features
Save to NMT: "NMT Feature Request: Add batch save operation for efficiency" Tag: ["nmt-feedback", "feature-request"]

Document Usage Patterns
Save to NMT: "NMT Usage Pattern: For code snippets, use tags ['code', '<language>'] for better retrieval" Tag: ["nmt-feedback", "best-practice"]

Share Your Context
When you encounter limitations, save them to NMT with the nmt-feedback tag. Human developers will review and improve the system.
For Human Developers
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Run tests (npm test)
- Commit changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Contribution Ideas
- [ ] Embedding Model Options - Support OpenAI, Cohere, local models
- [ ] Compression - Reduce storage footprint for large memories
- [ ] Distributed Mode - Sync across multiple instances
- [ ] Memory Aging - Automatic relevance decay over time
- [ ] Conflict Resolution - Better handling of contradictory information
- [ ] Visualization - Knowledge graph explorer UI
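The "Memory Aging" idea above could start from something as simple as exponential relevance decay, where a neuron's score halves after each fixed interval without access. This is a sketch of the contribution idea, not an implemented feature; the function name and the 30-day half-life are hypothetical.

```typescript
// Exponential decay: relevance halves every `halfLifeDays` without access.
// agedRelevance(score, 0) === score; agedRelevance(score, halfLife) === score / 2.
function agedRelevance(
  baseScore: number,
  daysSinceAccess: number,
  halfLifeDays = 30
): number {
  return baseScore * Math.pow(0.5, daysSinceAccess / halfLifeDays);
}
```

A real implementation would likely combine this decayed score with search similarity at rank time, so stale memories fade from results without ever being deleted.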
📚 Documentation
| Document | Description |
|----------|-------------|
| User Guide | Complete usage instructions |
| CLI Reference | All CLI commands |
| Architecture | System design details |
| 한국어 문서 | Korean-language documentation |
📈 Roadmap
2024 Q4 ✅ Core Engine (Merkle, HNSW, Graph)
2025 Q1 ✅ MCP Integration for Claude Code
2025 Q1 ✅ Probabilistic Ontology
2026 Q1 ✅ Production hardening (SerialQueue, soft-delete, compaction, crash recovery)
2026 Q2 🔄 MTEB Benchmark Suite
2026 Q2 🔄 Multi-model Embedding Support
2026 Q3 📋 Distributed Sync (P2P)
2026 Q4 📋 Memory Compression & Aging

🔧 Configuration
# Data directory
NMT_DATA_DIR=./data
# HNSW parameters
HNSW_M=16
HNSW_EF_CONSTRUCTION=200
HNSW_EF_SEARCH=50
# Chunking
CHUNK_SIZE=512
CHUNK_OVERLAP=50

📄 License
NINEBIX Source Available License (NSAL) v1.0
- ✅ View, study, learn from source code
- ✅ Personal/non-commercial use
- ✅ Fork with same license
- ⚠️ Commercial use requires separate license
Contact: [email protected]
Built with ❤️ by NINEBIX inc.
Making AI memory verifiable and persistent
