velixar
v1.0.0
Persistent memory infrastructure for AI applications
Velixar JavaScript SDK
Persistent memory for AI assistants and agents. Give any LLM-powered application long-term recall across sessions.
Velixar is an open memory layer — it works with any AI assistant, agent framework, or LLM pipeline. Store facts, preferences, and context that persist beyond a single conversation.
Installation
npm install velixar

Quick Start
import Velixar from 'velixar';
const v = new Velixar({ apiKey: 'vlx_your_key' });
// Store a memory
const { id } = await v.store('User prefers dark mode');
// Search memories semantically
const { memories } = await v.search('preferences');
// Retrieve by ID
const { memory } = await v.get(id);
// Delete
await v.delete(id);

Memory Tiers
| Tier | Name | Use Case |
|------|------|----------|
| 0 | Pinned | Critical facts that never expire |
| 1 | Session | Current conversation context |
| 2 | Semantic | Long-term memories (default) |
| 3 | Organization | Shared team/org knowledge (Hivemind+) |
// Pin a critical fact
await v.store('User is allergic to peanuts', { tier: 0 });
// Store session context
await v.store('Currently debugging auth flow', { tier: 1 });

Cognitive Features by Plan
| Feature | Free | Cortex ($29) | Synapse ($75) | Hivemind ($25/seat) |
|---------|------|--------------|---------------|---------------------|
| Store & search | ✓ | ✓ | ✓ | ✓ |
| Neural ensembles | — | ✓ | ✓ | ✓ |
| Temporal chains | — | ✓ | ✓ | ✓ |
| Consolidation | — | ✓ | ✓ | ✓ |
| Identity modeling | — | — | ✓ | ✓ |
| Org memory (tier 3) | — | — | — | ✓ |
The Free tier includes store and search; paid tiers activate the additional cognitive features automatically. Pricing →
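On the Hivemind plan, organization-wide memories presumably use the same `tier` option shown above for tiers 0 and 1 (a sketch; the exact tier-3 call shape is an assumption based on that option, and the memory content here is made up):

```typescript
import Velixar from 'velixar';

const v = new Velixar({ apiKey: 'vlx_your_key' });

// Hypothetical: share a fact org-wide (tier 3, Hivemind plan),
// assuming the same `tier` option used for tiers 0 and 1 applies.
await v.store('Release branches are cut on Mondays', { tier: 3 });
```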
Per-User Memories
await v.store('Loves TypeScript', { userId: 'user_123' });
const results = await v.search('programming', {
userId: 'user_123',
limit: 5,
});

Use With Any AI Assistant
Velixar is assistant-agnostic. Use it with OpenAI, Anthropic, LangChain, custom agents, IDE assistants, CLI tools, or any system that calls an LLM:
// Before calling your LLM, inject relevant memories as context
const { memories } = await v.search(userMessage, { limit: 5 });
const context = memories.map(m => m.content).join('\n');
const response = await llm.chat([
{ role: 'system', content: `Relevant memories:\n${context}` },
{ role: 'user', content: userMessage },
]);
// After the conversation, store important facts
await v.store('User prefers concise answers', { userId: 'user_123' });

Error Handling
import { Velixar, VelixarError } from 'velixar';
try {
await v.store('content');
} catch (e) {
if (e instanceof VelixarError) {
console.log(e.status, e.message);
}
}

Configuration
const v = new Velixar({
apiKey: 'vlx_...', // Required — get one at velixarai.com
baseUrl: 'https://...', // Optional — custom endpoint
});

Get an API Key
Sign up at velixarai.com and generate a key under Settings → API Keys.
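Rather than hardcoding the key, you can load it from the environment (a configuration sketch; `VELIXAR_API_KEY` is an assumed variable name of your choosing — the SDK is not documented to read the environment itself, so the key is passed explicitly):

```typescript
import Velixar from 'velixar';

// Assumption: the key was exported as VELIXAR_API_KEY before launch.
const apiKey = process.env.VELIXAR_API_KEY;
if (!apiKey) throw new Error('VELIXAR_API_KEY is not set');

const v = new Velixar({ apiKey });
```

This keeps keys out of source control; pair it with a local `.env` loader or your deployment platform's secrets manager.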
Related
- velixar (Python SDK) — Python client with LangChain/LlamaIndex integrations
- velixar-mcp-server — MCP server for any MCP-compatible AI assistant
License
MIT
