# MemoryLayer Node.js SDK

Official Node.js/TypeScript SDK for MemoryLayer - the intelligent memory layer for AI applications.

## Features
- 🧠 Memory Management: Store, retrieve, and manage AI memories
- 🔍 Hybrid Search: Vector + keyword + graph-based retrieval
- 🕸️ Memory Graph: Visualize and traverse memory relationships
- 🎯 Smart Retrieval: LLM reranking and query rewriting
- 📊 Observability: Track performance and quality metrics
- 🔐 Type-Safe: Full TypeScript support with auto-completion
## Installation

```bash
npm install @memorylayerai/sdk
```

or with yarn:

```bash
yarn add @memorylayerai/sdk
```

## Quick Start
### Option 1: Transparent Router (Beta) - Drop-in OpenAI Proxy ⚡

Change your `baseURL` to add automatic memory injection:
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.memorylayer.ai/v1', // ← Point to MemoryLayer
  apiKey: 'ml_your_memorylayer_key'         // ← Use your MemoryLayer key
});

// Memory is automatically retrieved and injected
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What are my preferences?' }]
});
```

Current Status:
- ✅ Works with `/v1/chat/completions` (non-streaming)
- ✅ OpenAI-compatible responses
- ✅ Configurable via headers (use `fetch`/`axios` for guaranteed header support)
- ⏳ Streaming support coming soon
See the Transparent Router Guide for details.
### Option 2: Manual Integration (Full Control)

For more control over memory retrieval and injection:

#### 1. Get Your API Key

Sign up at memorylayer.com and create an API key from your project settings.

#### 2. Initialize the Client
```typescript
import { MemoryLayer } from '@memorylayerai/sdk';

const client = new MemoryLayer({
  apiKey: 'ml_key_...',
  // Optional: specify a custom base URL
  // baseUrl: 'https://api.memorylayer.com'
});
```

#### 3. Create Memories
```typescript
// Create a single memory
const memory = await client.memories.create({
  projectId: 'your-project-id',
  content: 'The user prefers dark mode in their applications',
  type: 'preference',
  tags: {
    category: 'ui',
    importance: 'high'
  }
});

console.log('Memory created:', memory.id);
```

#### 4. Search Memories
```typescript
// Hybrid search (vector + keyword + graph)
const results = await client.search.hybrid({
  projectId: 'your-project-id',
  query: 'What are the user UI preferences?',
  limit: 10,
  // Optional: enable advanced features
  useReranking: true,       // LLM-based reranking
  useQueryRewriting: true,  // Query expansion
  useGraphTraversal: true   // Follow memory relationships
});

results.forEach(result => {
  console.log(`Score: ${result.score}`);
  console.log(`Content: ${result.content}`);
  console.log(`Type: ${result.type}`);
});
```

## Transparent Router
The transparent router is an OpenAI-compatible proxy that automatically injects memory context into your requests.
Current Status:
- ✅ Works with `/v1/chat/completions` (non-streaming)
- ✅ OpenAI-compatible responses
- ⏳ Streaming support coming soon
### Basic Usage
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.memorylayer.ai/v1',
  apiKey: process.env.MEMORYLAYER_API_KEY
});

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What are my preferences?' }]
});
```

### Configuration Headers
Control memory injection with optional headers. Note: For guaranteed header support, use `fetch` or `axios` directly:

```typescript
// Using fetch for guaranteed header support
const response = await fetch('https://api.memorylayer.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.MEMORYLAYER_API_KEY}`,
    'Content-Type': 'application/json',
    'x-memory-user-id': 'user_123',     // User scope (required for multi-user apps)
    'x-memory-session-id': 'sess_abc',  // Session scope (persist from response)
    'x-memory-limit': '10',             // Max memories to inject
    'x-memory-injection-mode': 'safe',  // 'safe' | 'full' ('balanced' coming soon)
    'x-memory-disabled': 'false'        // Enable/disable memory
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});
```

### Injection Modes
- `safe` (default): Only fact + preference types (minimal risk, structured data)
- `full`: All memory types, including snippets (maximum context, higher token usage); see the sketch below
- `balanced`: Trusted summaries + facts + preferences (coming soon)
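For example, to opt a single request into `full` mode, set the `x-memory-injection-mode` header documented above. A minimal sketch using `fetch` (the model and message are placeholders):

```typescript
// Sketch: request maximum memory context for one call by overriding
// the injection-mode header documented above.
const response = await fetch('https://api.memorylayer.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.MEMORYLAYER_API_KEY}`,
    'Content-Type': 'application/json',
    'x-memory-injection-mode': 'full' // inject all memory types, including snippets
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Summarize what you know about me.' }]
  })
});
```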
### Diagnostic Headers
Every response includes diagnostic headers showing what happened:
```typescript
const response = await fetch('https://api.memorylayer.ai/v1/chat/completions', { ... });

console.log('Memories retrieved:', response.headers.get('x-memory-hit-count'));
console.log('Tokens injected:', response.headers.get('x-memory-injected-tokens'));
console.log('Max score:', response.headers.get('x-memory-max-score'));
console.log('Query rewriting:', response.headers.get('x-memory-rewrite'));
console.log('Memory status:', response.headers.get('x-memory-status'));
console.log('Session ID:', response.headers.get('x-memory-session-id')); // Persist this!
```

### Session Management
For chat applications, persist `x-memory-session-id` from the response headers and pass it on subsequent requests. Note that the OpenAI Node SDK does not expose headers on the parsed completion; use `.withResponse()` to read them, and pass per-request headers in the options argument:

```typescript
// Use .withResponse() to access the raw response headers
const { data: completion, response: raw } = await openai.chat.completions
  .create({ ... })
  .withResponse();

// Get the generated session ID
const sessionId = raw.headers.get('x-memory-session-id');

// Store it and send it on the next request. Per-request headers go in
// the second (options) argument of create(), not in the request body.
const nextResponse = await openai.chat.completions.create(
  {
    messages: [...]
  },
  {
    headers: { 'x-memory-session-id': sessionId ?? '' } // ← Persist this!
  }
);
```

### Error Handling
The router gracefully degrades on errors:
```typescript
// Read the diagnostic headers via .withResponse() (OpenAI Node SDK)
const { response: raw } = await openai.chat.completions
  .create({ ... })
  .withResponse();

// Check memory status
const memoryStatus = raw.headers.get('x-memory-status');
if (memoryStatus === 'error') {
  console.warn('Memory retrieval failed:', raw.headers.get('x-memory-error-code'));
  console.log('But the request still succeeded (graceful degradation)');
}
```

### Migration from Manual Integration
See `examples/MIGRATION_GUIDE.md` for a complete migration guide.
## Core Features

### Memory Management

#### Create Memory
```typescript
const memory = await client.memories.create({
  projectId: 'project-id',
  content: 'User completed onboarding on 2024-01-15',
  type: 'fact',
  tags: {
    event: 'onboarding',
    date: '2024-01-15'
  },
  metadata: {
    source: 'mobile-app',
    version: '2.1.0'
  }
});
```

#### List Memories
```typescript
const memories = await client.memories.list({
  projectId: 'project-id',
  types: ['fact', 'preference'],
  status: ['active'],
  page: 1,
  pageSize: 50
});

console.log(`Total: ${memories.total}`);
memories.items.forEach(memory => {
  console.log(memory.content);
});
```

#### Get Memory
```typescript
const memory = await client.memories.get('memory-id');
console.log(memory.content);
```

#### Update Memory
```typescript
const updated = await client.memories.update('memory-id', {
  content: 'Updated content',
  tags: {
    updated: 'true'
  }
});
```

#### Delete Memory
```typescript
await client.memories.delete('memory-id');
```

### Search & Retrieval
#### Hybrid Search
Combines vector search, keyword search, and graph traversal:
```typescript
const results = await client.search.hybrid({
  projectId: 'project-id',
  query: 'What does the user like?',
  limit: 10,
  // Scoring weights (optional)
  vectorWeight: 0.5,
  keywordWeight: 0.3,
  recencyWeight: 0.2,
  // Advanced features
  useReranking: true,       // Use LLM to rerank results
  useQueryRewriting: true,  // Expand and clarify the query
  useGraphTraversal: true,  // Follow memory relationships
  graphDepth: 2             // How many hops to traverse
});
```

#### Vector Search Only
```typescript
const results = await client.search.vector({
  projectId: 'project-id',
  query: 'user preferences',
  limit: 5,
  threshold: 0.7 // Minimum similarity score
});
```

#### Keyword Search Only
```typescript
const results = await client.search.keyword({
  projectId: 'project-id',
  query: 'dark mode',
  limit: 5
});
```

### Memory Graph
#### Get Graph Data
```typescript
const graph = await client.graph.get({
  projectId: 'project-id',
  // Optional filters
  memoryTypes: ['fact', 'preference'],
  searchQuery: 'user preferences',
  dateRange: {
    start: '2024-01-01',
    end: '2024-12-31'
  }
});

console.log(`Nodes: ${graph.nodes.length}`);
console.log(`Edges: ${graph.edges.length}`);

// Nodes
graph.nodes.forEach(node => {
  console.log(`${node.id}: ${node.content}`);
});

// Edges (relationships)
graph.edges.forEach(edge => {
  console.log(`${edge.source} -> ${edge.target} (${edge.type})`);
});
```

#### Create Edge
```typescript
const edge = await client.graph.createEdge({
  projectId: 'project-id',
  sourceMemoryId: 'memory-1',
  targetMemoryId: 'memory-2',
  relationshipType: 'derives', // or 'similarity', 'temporal', etc.
  metadata: {
    confidence: 0.95,
    reason: 'User explicitly linked these'
  }
});
```

#### Traverse Graph
```typescript
const related = await client.graph.traverse({
  projectId: 'project-id',
  startMemoryIds: ['memory-1'],
  depth: 2, // How many hops
  relationshipTypes: ['similarity', 'derives']
});

console.log(`Found ${related.length} related memories`);
```

### Ingestion
#### Ingest Document
```typescript
const job = await client.ingestion.ingest({
  projectId: 'project-id',
  content: 'Long document content...',
  metadata: {
    title: 'Product Documentation',
    source: 'docs.example.com'
  },
  // Chunking strategy
  chunkingStrategy: 'semantic', // or 'fixed-size', 'sentence', 'paragraph'
  chunkSize: 512,
  chunkOverlap: 50
});

console.log(`Job ID: ${job.id}`);
console.log(`Status: ${job.status}`);
```

#### Check Job Status
```typescript
const job = await client.ingestion.getJob('job-id');

console.log(`Status: ${job.status}`);
console.log(`Progress: ${job.progress}%`);
console.log(`Memories created: ${job.memoriesCreated}`);
```
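Ingestion jobs run asynchronously, so you will usually poll until the job finishes. A minimal polling sketch, assuming (as the Document Q&A example below does) that an in-flight job reports `status === 'processing'`:

```typescript
// Sketch: poll an ingestion job until it leaves the 'processing' state.
// The set of terminal status values is an assumption; check the API docs.
async function waitForJob(jobId: string, pollMs = 1000) {
  let job = await client.ingestion.getJob(jobId);
  while (job.status === 'processing') {
    await new Promise(resolve => setTimeout(resolve, pollMs));
    job = await client.ingestion.getJob(jobId);
  }
  return job; // finished job; memoriesCreated is now final
}
```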
## Advanced Features

### LLM Reranking
Improve search relevance using LLM-based reranking:
```typescript
const results = await client.search.hybrid({
  projectId: 'project-id',
  query: 'complex user question',
  limit: 20,
  useReranking: true,
  rerankingModel: 'gpt-4', // or 'claude-3'
  rerankingTopK: 10        // Return the top 10 after reranking
});
```

### Query Rewriting
Expand and clarify queries for better results:
```typescript
const results = await client.search.hybrid({
  projectId: 'project-id',
  query: 'ML preferences', // Will expand to "machine learning preferences"
  useQueryRewriting: true,
  queryRewritingStrategy: 'expansion' // or 'clarification', 'multi-query'
});
```

### Graph Traversal
Follow memory relationships for contextual retrieval:
```typescript
const results = await client.search.hybrid({
  projectId: 'project-id',
  query: 'user settings',
  useGraphTraversal: true,
  graphDepth: 2, // Follow relationships 2 hops deep
  graphRelationshipTypes: ['similarity', 'derives']
});
```

## TypeScript Support
The SDK is written in TypeScript and provides full type definitions:
```typescript
import {
  MemoryLayer,
  Memory,
  SearchResult,
  GraphData,
  IngestionJob
} from '@memorylayerai/sdk';

// All methods are fully typed
const client = new MemoryLayer({ apiKey: 'ml_key_...' });

// TypeScript will auto-complete and type-check
const memory: Memory = await client.memories.create({
  projectId: 'project-id',
  content: 'typed content',
  type: 'fact' // TypeScript knows the valid types
});
```

## Error Handling
```typescript
import { MemoryLayerError } from '@memorylayerai/sdk';

try {
  const memory = await client.memories.create({
    projectId: 'project-id',
    content: 'test'
  });
} catch (error) {
  if (error instanceof MemoryLayerError) {
    console.error('API Error:', error.message);
    console.error('Status:', error.statusCode);
    console.error('Request ID:', error.requestId);
  } else {
    console.error('Unexpected error:', error);
  }
}
```

## Configuration
### Custom Base URL
```typescript
const client = new MemoryLayer({
  apiKey: 'ml_key_...',
  baseUrl: 'https://your-custom-domain.com'
});
```

### Timeout
```typescript
const client = new MemoryLayer({
  apiKey: 'ml_key_...',
  timeout: 30000 // 30 seconds
});
```

### Retry Configuration
```typescript
const client = new MemoryLayer({
  apiKey: 'ml_key_...',
  maxRetries: 3,
  retryDelay: 1000 // 1 second
});
```

## Examples
### Chatbot with Memory
```typescript
import { MemoryLayer } from '@memorylayerai/sdk';

const client = new MemoryLayer({ apiKey: process.env.MEMORYLAYER_API_KEY });
const projectId = 'your-project-id';

async function chatWithMemory(userMessage: string, userId: string) {
  // 1. Search for relevant memories
  const memories = await client.search.hybrid({
    projectId,
    query: userMessage,
    limit: 5,
    useReranking: true,
    useGraphTraversal: true
  });

  // 2. Build context from memories
  const context = memories
    .map(m => m.content)
    .join('\n\n');

  // 3. Send to the LLM with context
  const response = await callYourLLM({
    system: `You are a helpful assistant. Use this context about the user:\n\n${context}`,
    user: userMessage
  });

  // 4. Store a new memory from the conversation
  await client.memories.create({
    projectId,
    content: `User said: "${userMessage}". Assistant responded: "${response}"`,
    type: 'fact',
    tags: { userId, timestamp: new Date().toISOString() }
  });

  return response;
}
```

### Document Q&A
```typescript
async function ingestAndQuery(documentContent: string, question: string) {
  // 1. Ingest the document
  const job = await client.ingestion.ingest({
    projectId: 'your-project-id',
    content: documentContent,
    chunkingStrategy: 'semantic',
    chunkSize: 512
  });

  // 2. Wait for ingestion to complete
  let status = await client.ingestion.getJob(job.id);
  while (status.status === 'processing') {
    await new Promise(resolve => setTimeout(resolve, 1000));
    status = await client.ingestion.getJob(job.id);
  }

  // 3. Query the document
  const results = await client.search.hybrid({
    projectId: 'your-project-id',
    query: question,
    limit: 3,
    useReranking: true
  });

  return results.map(r => r.content).join('\n\n');
}
```

## API Reference
Full API documentation is available at docs.memorylayer.com.

## Support
- 📧 Email: [email protected]
- 💬 Discord: discord.gg/memorylayer
- 📖 Docs: docs.memorylayer.com
- 🐛 Issues: github.com/memorylayer/sdk/issues
## License

MIT License - see LICENSE file for details.

## Changelog

### v0.2.0 (2024-01-20)
- ✨ Added Memory Graph API support
- ✨ Added Hybrid Search with LLM reranking
- ✨ Added Query Rewriting capabilities
- ✨ Added Graph Traversal for contextual retrieval
- 🐛 Fixed type definitions for better TypeScript support
- 📚 Comprehensive documentation and examples
### v0.1.1 (2024-01-10)
- 🐛 Bug fixes and stability improvements
### v0.1.0 (2024-01-01)
- 🎉 Initial release
- ✨ Basic memory CRUD operations
- ✨ Vector search
- ✨ Ingestion API
