memvault-sdk-jakops88
v1.1.4
Official SDK for the Long Term Memory API.
MemVault SDK
A lightweight Node.js client for the MemVault Memory API.
MemVault solves the "context window" problem for AI agents. Instead of sending the entire conversation history to the LLM (which is expensive and slow), this SDK allows you to store facts and retrieve only the most relevant context using Hybrid Search.
Why use this?
Building RAG (Retrieval-Augmented Generation) pipelines usually involves setting up a Vector Database (Pinecone/Weaviate), writing embedding logic, and managing chunking.
MemVault abstracts this into a single API call.
- Hybrid Search 2.0: Combines Vector Similarity (pgvector) with BM25 Keyword Search and Recency Decay. This ensures the agent finds exact matches (like user IDs) as well as semantic concepts.
- Infrastructure-less: No need to manage Docker containers or Postgres clusters if you use the managed endpoint.
- Universal: Works with LangChain, Vercel AI SDK, and standard OpenAI/Anthropic implementations.
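Because `retrieve()` hands back plain text chunks, wiring MemVault into any chat-completion stack is mostly prompt assembly. A minimal sketch, assuming a hypothetical `buildContextMessages` helper (it is not part of the SDK; the message shape matches the OpenAI/Anthropic chat format):

```typescript
// Illustrative helper (not part of the SDK): folds retrieved memory
// chunks into a system message for any chat-completion style API.
type ChatMessage = { role: "system" | "user"; content: string };

function buildContextMessages(chunks: string[], question: string): ChatMessage[] {
  const context = chunks.map((c, i) => `${i + 1}. ${c}`).join("\n");
  return [
    {
      role: "system",
      content: `You are a helpful assistant. Known facts about the user:\n${context}`,
    },
    { role: "user", content: question },
  ];
}

// Example: pass the array from `context.results` straight in.
const messages = buildContextMessages(
  ["The user is a TypeScript developer.", "The user prefers dark mode."],
  "What theme should the IDE use?"
);
console.log(messages[0].content);
```

The same messages array can then be passed to LangChain, the Vercel AI SDK, or a raw OpenAI/Anthropic client.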
Installation
npm install memvault-sdk-jakops88

Quick Start
You can use this SDK with either the Managed API (RapidAPI) or a Self-Hosted instance.
1. Initialize the Client
import { MemVault } from 'memvault-sdk-jakops88';
// Option A: Managed API (Production/Easiest)
// Get your key at: https://rapidapi.com/jakops88/api/long-term-memory-api
const memory = new MemVault({
  apiKey: "YOUR_RAPID_API_KEY",
  baseUrl: "https://long-term-memory-api.p.rapidapi.com"
});
// Option B: Self-Hosted (Local Development)
// If running the Docker container locally
const memory = new MemVault({
  baseUrl: "http://localhost:3000"
});

2. Store a Memory
The API automatically handles text normalization, tokenization, and embedding generation (via OpenAI or local Ollama depending on your server config).
await memory.store({
  sessionId: "user_123",
  text: "The user is a TypeScript developer and prefers dark mode.",
  importanceHint: "high" // Optional: 'low', 'medium', 'high'
});

3. Retrieve Context
This performs a Hybrid Search. It ranks memories based on:
- Vector Score: Semantic meaning.
- Keyword Score (BM25): Exact word matches.
- Recency: More recently stored memories score higher via time decay.
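The actual weighting runs server-side and is not exposed through the SDK, but the idea can be sketched as a weighted sum with exponential time decay. The weights and decay constant below are made-up for illustration:

```typescript
// Illustrative only: the real ranking runs server-side and its
// weights are not documented. This shows the shape of hybrid scoring.
function hybridScore(
  vectorScore: number, // cosine similarity in [0, 1]
  bm25Score: number,   // normalized keyword score in [0, 1]
  ageHours: number     // time since the memory was stored
): number {
  const recency = Math.exp(-ageHours / 72); // decays over a few days
  return 0.6 * vectorScore + 0.3 * bm25Score + 0.1 * recency;
}

// A fresh exact-keyword match outranks an old, vaguely related memory:
console.log(hybridScore(0.4, 0.9, 1) > hybridScore(0.5, 0.1, 500)); // true
```

Combining the three signals is what lets a query for an exact token like "user_123" win on the BM25 term even when its embedding similarity is mediocre.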
const context = await memory.retrieve({
  sessionId: "user_123",
  query: "What are the user's preferences?",
  limit: 3
});
console.log(context.results);
// Returns an array of text chunks to inject into your LLM prompt.

Hosting Options
Managed API (Recommended for Production)
For production apps where uptime and maintenance matter, use the managed service via RapidAPI. It includes scaling, security, and guaranteed availability.
Self-Hosting
MemVault is open source. You can run the backend on your own infrastructure using Docker and PostgreSQL.
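A self-hosted setup needs the MemVault backend plus a Postgres instance with the pgvector extension. A hypothetical docker-compose sketch; the `memvault-backend` image name, port, and environment variable names are assumptions, so check the backend repository for the real ones:

```yaml
# Hypothetical compose file: the api image name and env vars are assumptions.
services:
  db:
    image: pgvector/pgvector:pg16     # Postgres with the pgvector extension
    environment:
      POSTGRES_PASSWORD: memvault
  api:
    image: memvault-backend:latest    # placeholder: build from the MemVault repo
    environment:
      DATABASE_URL: postgres://postgres:memvault@db:5432/postgres
      OPENAI_API_KEY: ${OPENAI_API_KEY}  # or point at a local Ollama instance
    ports:
      - "3000:3000"
    depends_on:
      - db
```

With the container up, initialize the client with `baseUrl: "http://localhost:3000"` as shown in Option B above.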
License
MIT
