@lanonasis/mem-intel-sdk
v2.0.0
AI-powered memory intelligence SDK with predictive recall for LanOnasis Memory-as-a-Service
Memory Intelligence SDK 🧠✨
The AI that anticipates what you need before you realize it.
An AI-powered memory intelligence SDK for the LanOnasis Memory-as-a-Service platform. Beyond storage and retrieval - this SDK predicts what memories you'll need based on your current context.
// The magic moment
const predictions = await client.predictiveRecall({
userId: "user-123",
context: {
currentProject: "Building dashboard components",
recentTopics: ["React", "performance"]
}
});
// "Sarah, here's what you might need"
// [92%] React Hooks Performance Optimization
// → Reason: Highly relevant to your current work
// → Action: Apply
Features
NEW in v2.0.0: Predictive Memory System
- Predictive Recall - AI anticipates what you'll need before you search
- Personalized Responses - "Sarah, here's what you might need"
- Explainable Predictions - Every suggestion includes confidence score and reasoning
- Learning Loop - Feedback improves future predictions
Core Intelligence
- Pattern Recognition - Understand usage trends and productivity patterns
- Smart Organization - AI-powered tag suggestions and duplicate detection
- Semantic Intelligence - Find related memories using vector similarity
- Actionable Insights - Extract key learnings and opportunities from your knowledge base
- Health Monitoring - Ensure your memory database stays organized and healthy
Platform Support
- ✅ Node.js - Full support with environment variable configuration
- ✅ Browser - Universal client for web applications
- ✅ React - Hooks with React Query integration
- ✅ Vue 3 - Composables for Vue applications
- ✅ MCP Server - Create Model Context Protocol servers
Installation
# npm
npm install @lanonasis/mem-intel-sdk
# yarn
yarn add @lanonasis/mem-intel-sdk
# pnpm
pnpm add @lanonasis/mem-intel-sdk
Framework-specific peer dependencies
For React applications:
npm install react @tanstack/react-query
For Vue applications:
npm install vue
For MCP Server:
npm install @modelcontextprotocol/sdk
Quick Start
Basic Usage
import { MemoryIntelligenceClient } from "@lanonasis/mem-intel-sdk";
const client = new MemoryIntelligenceClient({
apiKey: "lano_xxxxxxxxxx", // Your Lanonasis API key
});
// Analyze memory patterns
const analysis = await client.analyzePatterns({
userId: "user-123",
timeRangeDays: 30,
});
console.log(`Total memories: ${analysis.total_memories}`);
Node.js with Environment Variables
import { NodeMemoryIntelligenceClient } from "@lanonasis/mem-intel-sdk/node";
// Automatically reads LANONASIS_API_KEY from environment
const client = NodeMemoryIntelligenceClient.fromEnv();
React Integration
import {
MemoryIntelligenceProvider,
usePatternAnalysis,
} from "@lanonasis/mem-intel-sdk/react";
// Wrap your app
function App() {
return (
<MemoryIntelligenceProvider config={{ apiKey: "lano_xxx" }}>
<Dashboard />
</MemoryIntelligenceProvider>
);
}
// Use hooks in components
function Dashboard() {
const { data, isLoading } = usePatternAnalysis({
userId: "user-123",
timeRangeDays: 30,
});
if (isLoading) return <div>Loading...</div>;
return <div>Total: {data?.total_memories}</div>;
}
Vue Integration
<script setup>
import { usePatternAnalysis } from "@lanonasis/mem-intel-sdk/vue";
const { data, loading, execute } = usePatternAnalysis();
onMounted(() => {
execute({ userId: "user-123", timeRangeDays: 30 });
});
</script>
Predictive Memory System (v2.0.0)
The flagship feature that makes your memory system feel magical.
How It Works
Your Current Context
│
▼
┌──────────────────────────────────────────┐
│ PREDICTION ENGINE │
│ │
│ Semantic (40%) ─┐ │
│ Temporal (30%) ─┼──► Combined Score │
│ Frequency (20%) ─┤ │
│ Serendipity (10%)┘ │
└──────────────────────────────────────────┘
│
▼
"Here's what you'll need"Basic Usage
const predictions = await client.predictiveRecall({
userId: "user-123",
context: {
currentProject: "Building dashboard components",
recentTopics: ["React", "TypeScript", "performance"],
activeFiles: ["/src/components/Dashboard.tsx"],
contextText: "Optimizing render performance for data tables"
},
limit: 5,
minConfidence: 50
});
// Each prediction includes:
for (const pred of predictions.data.predictions) {
console.log(`[${pred.confidence}%] ${pred.title}`);
console.log(` Why: ${pred.reason}`);
console.log(` Action: ${pred.suggestedAction}`);
console.log(` Scores: semantic=${pred.scoreBreakdown.semanticScore}, temporal=${pred.scoreBreakdown.temporalScore}`);
}
With Personalization
// Response includes personalized greeting
const result = await client.predictiveRecall({
userId: "user-123",
context: { currentProject: "My project" }
});
console.log(result.personalization?.greeting);
// → "Sarah, here's what you might need"
console.log(result.personalization?.tier);
// → "pro"Recording Feedback (Improves Future Predictions)
// When user clicks on a prediction
await client.recordPredictionFeedback({
memoryId: prediction.id,
userId: "user-123",
useful: true,
action: "clicked"
});
// → { success: true, message: "Thanks Sarah! We'll use this to improve your predictions." }
React Hook
import { usePredictiveRecall, usePredictionFeedback } from "@lanonasis/mem-intel-sdk/react";
function PredictionsPanel({ userId }) {
const { data, isLoading } = usePredictiveRecall({
userId,
context: {
currentProject: "Dashboard",
recentTopics: ["React", "charts"]
}
});
const { mutate: recordFeedback } = usePredictionFeedback();
if (isLoading) return <div>Finding relevant memories...</div>;
return (
<div>
<h2>{data?.personalization?.greeting}</h2>
{data?.predictions.map(pred => (
<div key={pred.id} onClick={() => recordFeedback({
memoryId: pred.id,
userId,
useful: true,
action: "clicked"
})}>
<span className="confidence">{pred.confidence}%</span>
<h3>{pred.title}</h3>
<p>{pred.reason}</p>
</div>
))}
</div>
);
}
Scoring Algorithm
| Factor      | Weight | Description                                                       |
|-------------|--------|-------------------------------------------------------------------|
| Semantic    | 40%    | Cosine similarity between context embedding and memory embedding  |
| Temporal    | 30%    | Exponential decay (Ebbinghaus curve, 14-day half-life)             |
| Frequency   | 20%    | Logarithmic scaling of access counts                               |
| Serendipity | 10%    | Bonus for "adjacent possible" discoveries (0.3-0.6 similarity)     |
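As a rough illustration of how these weights combine (a sketch derived only from the table above; the SDK's actual normalization, and the frequency/serendipity field names, may differ), the confidence percentage could be computed like this:

// Sketch only: weights taken from the table above; frequencyScore and
// serendipityScore are assumed names mirroring the documented score breakdown.
interface ScoreBreakdown {
  semanticScore: number;    // cosine similarity, 0..1
  temporalScore: number;    // recency decay, 0..1
  frequencyScore: number;   // log-scaled access count, 0..1
  serendipityScore: number; // bonus when similarity sits in the 0.3-0.6 band
}
function combinedConfidence(s: ScoreBreakdown): number {
  const score =
    0.4 * s.semanticScore +
    0.3 * s.temporalScore +
    0.2 * s.frequencyScore +
    0.1 * s.serendipityScore;
  return Math.round(score * 100); // e.g. 92 (%)
}
// Temporal decay with a 14-day half-life (Ebbinghaus-style forgetting curve):
const temporalScore = (daysSinceLastAccess: number) =>
  Math.pow(0.5, daysSinceLastAccess / 14);
// Frequency as logarithmic scaling of access counts (cap chosen for illustration):
const frequencyScore = (accessCount: number) =>
  Math.min(1, Math.log10(1 + accessCount) / 2);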
Overview
This SDK is designed to complement your existing @lanonasis/mcp-core infrastructure by adding an intelligence layer on top of basic memory CRUD operations. While your core server handles memory creation, storage, and retrieval, this SDK focuses on:
- Pattern Recognition - Understand usage trends and productivity patterns
- Smart Organization - AI-powered tag suggestions and duplicate detection
- Semantic Intelligence - Find related memories using vector similarity
- Actionable Insights - Extract key learnings and opportunities from your knowledge base
- Health Monitoring - Ensure your memory database stays organized and healthy
Why This SDK?
Modern MCP Patterns
Uses the latest server.registerTool() API (sketched after this list) with:
- Zod schema validation
- Structured content output
- Proper tool annotations
- Both JSON and Markdown response formats
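As an illustration of that pattern (a minimal sketch, not this package's actual registration code; the demo tool name, schemas, and handler body are hypothetical), a tool registration with the MCP TypeScript SDK looks roughly like this:

// Minimal registerTool() sketch with Zod validation and structured content.
// Tool name, schemas, and the placeholder result are illustrative only.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "memory-intelligence-demo", version: "2.0.0" });

server.registerTool(
  "memory_health_check_demo", // hypothetical tool name
  {
    title: "Memory Health Check (demo)",
    description: "Scores how well a memory collection is organized",
    inputSchema: {
      user_id: z.string().uuid(),
      response_format: z.enum(["markdown", "json"]).default("markdown"),
    },
    outputSchema: {
      user_id: z.string(),
      health_score: z.number(),
    },
    annotations: { readOnlyHint: true },
  },
  async ({ user_id, response_format }) => {
    const result = { user_id, health_score: 87 }; // placeholder data
    const text =
      response_format === "json"
        ? JSON.stringify(result)
        : `Health score: ${result.health_score}/100`;
    return {
      content: [{ type: "text", text }],
      structuredContent: result,
    };
  }
);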
Single Responsibility
Unlike monolithic servers, this focuses solely on intelligence features, making it:
- Easier to maintain
- More composable
- Better suited for specific use cases
Production-Ready
- Streamable HTTP transport support (see the client sketch after this list)
- Proper error handling with actionable messages
- Character limit enforcement
- Comprehensive logging
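To exercise that HTTP transport, a client can connect and call a tool roughly as follows (a sketch using the MCP TypeScript SDK; the /mcp endpoint path is an assumption, and port 3010 matches the default configuration shown later):

// Sketch: calling the intelligence server over streamable HTTP.
// Adjust the URL to your deployment; /mcp is an assumed endpoint path.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "intelligence-demo-client", version: "1.0.0" });
const transport = new StreamableHTTPClientTransport(new URL("http://localhost:3010/mcp"));
await client.connect(transport);

// Call one of the tools documented under "Available Tools" below.
const result = await client.callTool({
  name: "memory_health_check",
  arguments: { user_id: "user-123", response_format: "markdown" },
});
console.log(result);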
MCP Server Setup
For standalone MCP server usage:
# Clone the repository
git clone https://github.com/lanonasis/memory-intelligence-engine.git
cd memory-intelligence-engine/mem-intelligence-sdk
# Install dependencies
npm install
# Build the project
npm run build
Configuration
Create a .env file with your existing LanOnasis credentials:
# Required - Same as your @lanonasis/mcp-core
ONASIS_SUPABASE_URL=your_supabase_url
ONASIS_SUPABASE_SERVICE_KEY=your_service_key
OPENAI_API_KEY=your_openai_key
# Optional
TRANSPORT=stdio # or 'http' for HTTP mode
PORT=3010 # HTTP port (default: 3010)
Usage
Stdio Mode (Default)
# Development
npm run dev
# Production
npm start
HTTP Mode
# Development
npm run dev:http
# Production
npm run start:http
Available Tools
1. memory_analyze_patterns
Analyze usage patterns and trends in your memory collection.
{
"user_id": "uuid",
"time_range_days": 30,
"response_format": "markdown"
}
Returns:
- Memory distribution by type and time
- Peak activity periods
- Tag frequency analysis
- AI-generated productivity insights
2. memory_suggest_tags
Get AI-powered tag suggestions for a memory.
{
"memory_id": "uuid",
"user_id": "uuid",
"max_suggestions": 5,
"include_existing_tags": true
}
Returns:
- Tag suggestions with confidence scores
- Reasoning for each suggestion
- Consistency with existing tag vocabulary
3. memory_find_related
Find semantically related memories using vector similarity.
{
"memory_id": "uuid",
"user_id": "uuid",
"limit": 10,
"similarity_threshold": 0.7
}
Returns:
- Related memories ranked by similarity
- Shared tags between memories
- Content previews
4. memory_detect_duplicates
Identify potential duplicate or near-duplicate memories.
{
"user_id": "uuid",
"similarity_threshold": 0.9,
"max_pairs": 20
}
Returns:
- Duplicate pairs with similarity scores
- Recommendations (keep_newer, merge, etc.)
- Estimated storage savings
5. memory_extract_insights
Extract key insights and patterns from your knowledge base.
{
"user_id": "uuid",
"topic": "optional focus area",
"memory_type": "project",
"max_memories": 20
}
Returns:
- Categorized insights (patterns, learnings, opportunities, risks, action items)
- Supporting evidence from memories
- Confidence scores
- Executive summary
6. memory_health_check
Analyze the organization quality of your memory collection.
{
"user_id": "uuid",
"response_format": "markdown"
}
Returns:
- Overall health score (0-100)
- Embedding coverage
- Tagging consistency
- Type balance analysis
- Actionable recommendations
7. memory_predictive_recall (NEW in v2.0.0)
AI-powered prediction of memories you'll need based on current context.
{
"user_id": "uuid",
"context": {
"current_project": "Building dashboard components",
"recent_topics": ["React", "performance"],
"context_text": "Optimizing render performance"
},
"limit": 5,
"min_confidence": 50,
"include_serendipity": true
}
Returns:
- Predicted memories with confidence scores
- Human-readable explanations
- Score breakdown (semantic, temporal, frequency, serendipity)
- Suggested actions (apply, review, explore, reference)
- Personalized greeting with user's name
8. memory_prediction_feedback (NEW in v2.0.0)
Record feedback on predictions to improve accuracy over time.
{
"memory_id": "uuid",
"user_id": "uuid",
"useful": true,
"action": "clicked"
}
Returns:
- Personalized thank you message
- Feedback confirmation
Integration with @lanonasis/mcp-core
This server is designed to work alongside your existing infrastructure:
┌─────────────────────────┐ ┌──────────────────────────┐
│ @lanonasis/mcp-core │ │ memory-intelligence-mcp │
│ │ │ │
│ ✅ create_memory │ ←── │ 🧠 memory_analyze_patterns │
│ ✅ search_memories │ │ 🏷️ memory_suggest_tags │
│ ✅ update_memory │ ←── │ 🔗 memory_find_related │
│ ✅ delete_memory │ │ 🔍 memory_detect_duplicates│
│ ✅ list_memories │ ←── │ 💡 memory_extract_insights │
│ ✅ API key management │ │ 🏥 memory_health_check │
└─────────────────────────┘ └──────────────────────────┘
│ │
└───────────┬───────────────────────┘
▼
┌─────────────────┐
│ Supabase │
│ (Shared DB) │
└─────────────────┘
Example Workflow
1. Create memory using @lanonasis/mcp-core
2. Get tag suggestions from memory_suggest_tags
3. Update memory with suggested tags using core server
4. Find related memories to build knowledge connections
5. Extract insights periodically to surface learnings
6. Run health checks to maintain organization quality
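Driven through MCP, that workflow maps onto the tools documented above. Here is a sketch, assuming an already-connected MCP client (for example the streamable HTTP client shown earlier) and a memory created beforehand with @lanonasis/mcp-core:

// Sketch of the workflow above using the documented tool names and arguments.
// `client` is an already-connected MCP Client; steps 1 and 3 go through
// @lanonasis/mcp-core (create_memory / update_memory) and are not shown.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client;

const userId = "user-123";
const memoryId = "memory-456"; // created beforehand via create_memory

// Step 2: get tag suggestions (apply them with the core server's update_memory).
const tags = await client.callTool({
  name: "memory_suggest_tags",
  arguments: { memory_id: memoryId, user_id: userId, max_suggestions: 5 },
});

// Step 4: find related memories to build knowledge connections.
const related = await client.callTool({
  name: "memory_find_related",
  arguments: { memory_id: memoryId, user_id: userId, limit: 10, similarity_threshold: 0.7 },
});

// Step 5: periodically extract insights to surface learnings.
const insights = await client.callTool({
  name: "memory_extract_insights",
  arguments: { user_id: userId, max_memories: 20 },
});

// Step 6: run a health check to keep the collection organized.
const health = await client.callTool({
  name: "memory_health_check",
  arguments: { user_id: userId, response_format: "markdown" },
});
console.log({ tags, related, insights, health });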
Claude Desktop Integration
Add to your claude_desktop_config.json:
{
"mcpServers": {
"lanonasis-core": {
"command": "node",
"args": ["/path/to/mcp-core/dist/index.js"],
"env": {
"ONASIS_SUPABASE_URL": "...",
"ONASIS_SUPABASE_SERVICE_KEY": "...",
"OPENAI_API_KEY": "..."
}
},
"memory-intelligence": {
"command": "node",
"args": ["/path/to/memory-intelligence-mcp-server/dist/index.js"],
"env": {
"ONASIS_SUPABASE_URL": "...",
"ONASIS_SUPABASE_SERVICE_KEY": "...",
"OPENAI_API_KEY": "..."
}
}
}
}
Testing with MCP Inspector
npx @modelcontextprotocol/inspector dist/index.js
Response Formats
All tools support both markdown (human-readable) and json (machine-readable) formats:
// Request with JSON format
{
"user_id": "...",
"response_format": "json"
}
// Returns structured data
{
"content": [{ "type": "text", "text": "{...}" }],
"structuredContent": { /* typed object */ }
}
Error Handling
Tools return actionable error messages:
{
"isError": true,
"content": [
{
"type": "text",
"text": "Error analyzing patterns: Database connection failed. Try checking your ONASIS_SUPABASE_URL environment variable."
}
]
}
Performance Considerations
- Duplicate detection: Limited to 500 memories for performance
- Insight extraction: Uses GPT-4o-mini for cost efficiency
- Vector search: Requires embeddings in your memory_entries table
- Response truncation: Automatic at 50,000 characters
Prerequisites
Your Supabase database must have:
- memory_entries table with embedding column (vector)
- match_memories RPC function for vector similarity search
- Standard LanOnasis schema (id, title, content, type, tags, etc.)
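For reference, the intelligence layer's vector lookups go through that RPC. A sketch of calling it with supabase-js (the parameter names here are illustrative assumptions; check your actual match_memories definition for the real signature):

// Sketch: invoking the vector-similarity RPC that this server expects.
// Parameter names are assumptions; verify them against your migration.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.ONASIS_SUPABASE_URL!,
  process.env.ONASIS_SUPABASE_SERVICE_KEY!
);

const queryEmbedding: number[] = []; // e.g. an OpenAI text-embedding vector

const { data, error } = await supabase.rpc("match_memories", {
  query_embedding: queryEmbedding,
  match_threshold: 0.7,
  match_count: 10,
});
if (error) throw error;
console.log(data); // memory_entries rows ranked by similarity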
Architecture Benefits
vs. Embedding in Core Server
| Aspect         | Monolithic                 | Intelligence Server        |
|----------------|----------------------------|----------------------------|
| Deployment     | Single point of failure    | Independent scaling        |
| Updates        | Risk to core functionality | Safe to iterate            |
| Resource Usage | Shared memory/CPU          | Dedicated resources        |
| Testing        | Complex integration tests  | Focused unit tests         |
| Reusability    | Tied to LanOnasis          | Portable to other projects |
What's Next
v2.1.0 - Knowledge Gap Detection (Q1 2026)
- [ ] "What should I learn next?" recommendations
- [ ] Personalized learning paths
- [ ] Integration with YouTube, Medium, Dev.to
v2.2.0 - Team Knowledge Graph (Q2 2026)
- [ ] "Who's the expert on X?" finder
- [ ] Privacy-first team aggregation
- [ ] Expertise scoring and visualization
v3.0.0 - Market Leadership (Q3 2026)
- [ ] Privacy-first local processing (ONNX Runtime)
- [ ] Autonomous organization agent
- [ ] API marketplace with 70/30 revenue share
Backlog
- [ ] Memory clustering with topic detection
- [ ] Automatic summarization of memory collections
- [ ] Anomaly detection in memory patterns
- [ ] Content quality scoring
- [ ] Multi-language support
Publishing
npm Publishing
# Build and verify the package
npm run publish:dry-run
# Publish to npm (requires npm login)
npm run publish:npm
GitHub Packages Publishing
To publish to GitHub Packages, update .npmrc:
@lanonasis:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
Then publish:
npm publish --access public
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Database Setup (v2.0.0)
For predictive recall features, run the migration:
# Option 1: Supabase CLI
supabase db push
# Option 2: Copy SQL to Supabase SQL Editor
# File: supabase/migrations/20260113_prediction_system.sql
This creates:
- prediction_feedback - Track prediction accuracy
- prediction_history - Audit log
- increment_access_count() - Frequency scoring
- get_user_profile_for_predictions() - Personalization
- has_premium_feature() - Premium tier gating
- get_prediction_accuracy() - Metrics
Premium Tier Setup (Optional)
-- Enable predictive recall for a user
UPDATE profiles
SET subscription_tier = 'pro',
feature_flags = '{"predictive_recall": true}'::jsonb
WHERE id = 'user-uuid';
License
MIT License - See LICENSE for details.
Built with care by the LanOnasis team.
We believe AI should anticipate your needs, not just respond to commands.
