memories-lite
v0.99.3
A lightweight memory system for LLM agents
🧠 Memories-lite
A lightweight memory layer for AI agents, leveraging LLMs for fact extraction and vector embeddings for retrieval.
📋 Table of Contents
- Quick Start
- Installation
- Basic Usage
- Key Features
- Memory Types
- Use Cases
- Advanced Configuration
- Documentation
- Acknowledgements
🚀 Quick Start
# Install the package
npm install memories-lite
# Basic usage
import { MemoriesLite } from 'memories-lite';
const memory = new MemoriesLite({
  llm: {
    provider: 'openai',
    config: { apiKey: 'YOUR_API_KEY' }
  },
  embedder: {
    provider: 'openai',
    config: { apiKey: 'YOUR_API_KEY', model: 'text-embedding-3-small' }
  }
});
// Add a memory for a user
await memory.capture("I prefer dark chocolate over milk chocolate", "user123");
// Retrieve relevant memories
const results = await memory.retrieve("What are my food preferences?", "user123");

🌟 Highlights
- Higher Performance: Memory operations optimized to run significantly faster than Mem0
- Business-Centric Design: Simplified API and workflows specifically tailored for business use cases
- Advanced Hybrid Scoring: Improved relevance through a custom scoring algorithm that balances vector similarity, recency, and importance
- Enhanced Security: One database per user architecture that provides stronger isolation and data protection
- Streamlined Implementation: Focused on essential features with minimal dependencies
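The "one database per user" isolation mentioned above can be illustrated with a minimal sketch. This is a hypothetical model of the idea, not the library's actual internals: each `userId` maps to its own independent store, so queries for one user can never touch another user's data (the `PerUserStores` class and `MemoryRecord` type below are invented for illustration).

```typescript
// Hypothetical sketch of per-user store isolation (not memories-lite's
// real implementation): each userId gets its own independent store.
type MemoryRecord = { id: string; memory: string; embedding: number[] };

class PerUserStores {
  private stores = new Map<string, MemoryRecord[]>();

  // Lazily create an isolated store for each user
  private storeFor(userId: string): MemoryRecord[] {
    let store = this.stores.get(userId);
    if (!store) {
      store = [];
      this.stores.set(userId, store);
    }
    return store;
  }

  add(userId: string, record: MemoryRecord): void {
    this.storeFor(userId).push(record);
  }

  // Queries only ever read the caller's own store
  all(userId: string): MemoryRecord[] {
    return [...this.storeFor(userId)];
  }
}

const stores = new PerUserStores();
stores.add('alice', { id: 'm1', memory: 'likes tea', embedding: [0.1] });
stores.add('bob', { id: 'm2', memory: 'likes coffee', embedding: [0.9] });
console.log(stores.all('alice').length); // 1 — bob's memory is invisible here
```

Because isolation is structural rather than filter-based, a missing `WHERE userId = …` clause can never leak data across users.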
📥 Installation
npm install memories-lite
# or
yarn add memories-lite

🔍 Basic Usage
import { MemoriesLite } from 'memories-lite';
// Basic configuration
const memory = new MemoriesLite({
  llm: {
    provider: 'openai',
    config: { apiKey: 'YOUR_OPENAI_API_KEY' }
  },
  embedder: {
    provider: 'openai',
    config: { apiKey: 'YOUR_OPENAI_API_KEY' }
  }
  // Vector store defaults to an in-memory store
});
// Unique ID for each user
const userId = 'user-123';
// Add memories
await memory.capture('I love Italian food', userId);
// Retrieve relevant memories
const results = await memory.retrieve('What foods do I like?', userId);
console.log('Relevant memories:', results.results.map(m => m.memory));
// Update a memory
if (results.results.length > 0) {
  await memory.update(results.results[0].id, 'I love Italian and French cuisine', userId);
}
// Delete a memory
if (results.results.length > 0) {
  await memory.delete(results.results[0].id, userId);
}
// Get all memories for a user
const allMemories = await memory.getAll(userId, {});

🔑 Key Features
- Memory Capture: Extract and store relevant information from conversations
- Contextual Retrieval: Find memories most relevant to the current query
- User Isolation: Each user's memories are stored separately for privacy and security
- Memory Types: Support for different types of memories (factual, episodic, etc.)
- Custom Scoring: Hybrid scoring system balancing similarity, recency, and importance
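As a rough sketch of how such a hybrid score could combine the three signals: the `alpha`/`beta`/`gamma` weights and `halfLifeDays` names mirror the scoring config shown under Advanced Configuration below, but the exact formula here is an assumption, not the library's documented algorithm.

```typescript
// Illustrative hybrid score (the exact formula is an assumption; the weight
// names mirror the scoring config in the Advanced Configuration section).
interface ScoringWeights {
  alpha: number;        // weight on vector similarity
  beta: number;         // weight on recency
  gamma: number;        // weight on importance
  halfLifeDays: number; // recency half-life; Infinity disables decay
}

function hybridScore(
  similarity: number, // cosine similarity in [0, 1]
  ageDays: number,    // days since the memory was captured
  importance: number, // importance in [0, 1]
  w: ScoringWeights
): number {
  // Exponential decay: a memory loses half its recency every halfLifeDays
  const recency = Number.isFinite(w.halfLifeDays)
    ? Math.pow(0.5, ageDays / w.halfLifeDays)
    : 1;
  return w.alpha * similarity + w.beta * recency + w.gamma * importance;
}

const factual: ScoringWeights = { alpha: 0.7, beta: 0.2, gamma: 0.1, halfLifeDays: 365 };
// The same memory scores lower a year after capture than when fresh
console.log(hybridScore(0.8, 0, 0.5, factual));
console.log(hybridScore(0.8, 365, 0.5, factual));
```

Setting `beta: 0.0` and `halfLifeDays: Infinity`, as the `assistant_preference` config below does, makes the score depend only on similarity and importance, so those memories never fade.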
🧩 Memory Types
Memories-lite supports four main types of memory:
Factual Memory ✓
- User preferences, traits, and personal information
- Example: "User likes Italian cuisine"
Episodic Memory ⏱️
- Time-based events and interactions
- Example: "User has a meeting tomorrow at 2pm"
Semantic Memory 🧠
- General knowledge and concepts
- Example: "Yoga is beneficial for mental health"
Procedural Memory 🔄
- Step-by-step processes and workflows
- Example: "Steps to configure the company VPN"
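The roadmap notes that memory-type detection is LLM-based in the library itself. Purely as an illustration of the four categories above, here is a toy keyword classifier; the `classifyMemoryType` helper and its heuristics are hypothetical stand-ins, not part of the memories-lite API.

```typescript
// Toy stand-in for LLM-based memory-type detection. The heuristics are
// illustrative only; the real classifier uses an LLM, not keywords.
type MemoryType = 'factual' | 'episodic' | 'semantic' | 'procedural';

function classifyMemoryType(text: string): MemoryType {
  const t = text.toLowerCase();
  // Time-anchored events -> episodic
  if (/\b(tomorrow|yesterday|at \d|meeting)\b/.test(t)) return 'episodic';
  // Step-by-step instructions -> procedural
  if (/\b(steps?|configure|how to)\b/.test(t)) return 'procedural';
  // Statements about the user -> factual
  if (/\b(i|my|user)\b/.test(t)) return 'factual';
  // General knowledge -> semantic
  return 'semantic';
}

console.log(classifyMemoryType('User likes Italian cuisine'));           // factual
console.log(classifyMemoryType('User has a meeting tomorrow at 2pm'));   // episodic
console.log(classifyMemoryType('Yoga is beneficial for mental health')); // semantic
console.log(classifyMemoryType('Steps to configure the company VPN'));   // procedural
```

An LLM classifier handles the many inputs that keyword rules misjudge, which is why the library delegates this decision to the model.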
💼 Use Cases
- Customer Support Bots: Remember customer preferences and past interactions
- Personal Assistants: Build context-aware AI assistants that learn about user preferences
- Business Applications: Integrate with enterprise systems to maintain contextual awareness
- Educational Tools: Create learning assistants that remember student progress
⚙️ Advanced Configuration
// Custom scoring for different memory types
const customMemory = new MemoriesLite({
  llm: {
    provider: 'openai',
    config: { apiKey: 'YOUR_API_KEY' }
  },
  embedder: {
    provider: 'openai',
    config: { apiKey: 'YOUR_API_KEY' }
  },
  vectorStore: {
    provider: 'lite',
    config: {
      dimension: 1536,
      scoring: {
        // Prioritize factual memories with long retention
        factual: { alpha: 0.7, beta: 0.2, gamma: 0.1, halfLifeDays: 365 },
        // Make preferences permanently available
        assistant_preference: { alpha: 0.6, beta: 0.0, gamma: 0.4, halfLifeDays: Infinity },
      }
    }
  }
});

📚 Documentation
For detailed technical information and implementation details, see:
- TECHNICAL.md - Technical implementation details
- MEMORIES.md - Detailed memory models and concepts
🙏 Acknowledgements
Forked from the Mem0 project ❤️.
Inspired by research concepts from:
- A-MEM: Agentic Memory for LLM Agents
- MemoryLLM: Self-Updatable Large Language Models
- Reflexion: Language Agents with Verbal Reinforcement Learning
📝 Development Roadmap
- [x] Semantic Memory Typing & Structuring: Explicitly tagging and utilizing memory types (factual, episodic, semantic, procedural)
- [x] Implicit Memory Updates: Auto-merging memories based on context without explicit ID references
- [x] Virtual Sessions/Context Grouping: Group memories related to specific conversation contexts
- [x] User Isolation: Separate storage per user for enhanced security and data privacy
- [x] Memory Type Detection: LLM-based automatic classification of memory types
- [x] Core Memory Operations: Basic CRUD operations with user-specific isolation
- [x] Memory Decay & Scoring: Hybrid scoring with recency decay and importance weights
- [ ] Reflexion Pattern Integration: Self-correction loops for memory refinement
- [x] Memory Recency: Prioritizing memories based on importance and time decay
- [x] Edge Case Tests: Complete unit tests for episodic and factual memory edge cases
- [ ] Middleware Support: Hooks and middleware for custom processing pipelines
📄 License
MIT
