# Engram AI
Engram AI is a small, open-source memory layer for AI agents. It turns conversations into explicit, structured memories that persist across sessions — without becoming a framework.
## Why Engram
- Long-term memory, not RAG. Use Engram to store user preferences, goals, and ongoing projects.
- Explicit and controllable. Memories are data, not embeddings or hidden prompts.
- Small by design. Keep 50–200 high-signal memories per user.
## What It Is / Isn’t
Engram is: a memory abstraction, a long-term context layer, a lightweight library.
Engram is not: a vector database, a prompt template engine, or an agent framework.
## Install

```bash
# npm
npm i --save engram-ai

# bun
bun add engram-ai
```

## Quick Start
```ts
import { extractMemories } from "engram-ai";

const messages = [
  { role: "user", message: "Hi, my name is Steve" },
  { role: "assistant", message: "Nice to meet you" },
  { role: "user", message: "I prefer TypeScript" },
];

const memories = await extractMemories(messages, {
  apiKey: process.env.DEEPSEEK_API_KEY!,
});

console.log(memories);
```
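The result is an array of memory candidates (see Core Concepts below). Exact wording and scores depend on the model; an illustrative result for the conversation above:

```ts
// Illustrative output only; real content and confidence values vary by model.
[
  { action: "create", type: "profile", content: "Name is Steve", rationale: "Stated directly", confidence: 0.95 },
  { action: "create", type: "preference", content: "Prefers TypeScript", rationale: "Stated directly", confidence: 0.9 },
];
```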
## Store + Merge Flow

```ts
import { extractMemories, mergeMemories, InMemoryStore } from "engram-ai";

const store = new InMemoryStore();
const userId = "user-123";

const existing = await store.get(userId);
const candidates = await extractMemories(messages, {
  apiKey: process.env.DEEPSEEK_API_KEY!,
  existingMemories: existing,
});

const merged = mergeMemories(existing, candidates ?? [], { maxMemories: 200 });
await store.put(userId, merged);

const stored = await store.get(userId);
```
## Memory Summary Prompt

```ts
import { buildMemorySummaryPrompt } from "engram-ai";

const summary = await buildMemorySummaryPrompt(stored, {
  apiKey: process.env.DEEPSEEK_API_KEY!,
});

// Use it as part of your agent system prompt
console.log(summary);
```
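One way to use it, with the prompt wording here being illustrative rather than part of Engram:

```ts
// Sketch: fold the memory summary into your agent's system prompt.
const systemPrompt = [
  "You are a helpful assistant.",
  "What you remember about this user:",
  summary ?? "",
].join("\n");
```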
To persist a summary per user:

```ts
import { InMemorySummaryStore } from "engram-ai";

const summaryStore = new InMemorySummaryStore();
await summaryStore.upsert({
  userId,
  summary: summary ?? "",
  updatedAt: new Date().toISOString(),
});
```

You can tune capacity by type and weight:
```ts
const merged = mergeMemories(existing, candidates ?? [], {
  maxMemories: 200,
  maxPerType: { profile: 50, project: 50, goal: 40, preference: 40, temp: 20 },
  typeWeights: { profile: 0.9, project: 0.7, goal: 0.6, preference: 0.5, temp: 0.3 },
});
```

To switch providers/models:
```ts
await extractMemories(messages, {
  apiKey: process.env.OPENAI_API_KEY!,
  provider: "openai",
  model: "gpt-4o-mini",
});
```

Claude example (via an OpenAI-compatible gateway):
```ts
await extractMemories(messages, {
  apiKey: process.env.CLAUDE_API_KEY!,
  provider: "openai-compatible",
  model: "claude-3-5-sonnet-20241022",
  baseUrl: "https://your-claude-gateway/v1",
});
```

Qwen (OpenAI-compatible) example:
```ts
await extractMemories(messages, {
  apiKey: process.env.DASHSCOPE_API_KEY!,
  provider: "openai-compatible",
  model: "qwen-plus",
  baseUrl: "https://dashscope.aliyuncs.com/compatible-mode/v1",
});
```

You can also use environment defaults:
```bash
export ENGRAM_PROVIDER=openai
export ENGRAM_MODEL=gpt-4o-mini
export ENGRAM_BASE_URL=https://api.openai.com/v1
```

Supported providers:

- `deepseek` (default)
- `openai`
- `openai-compatible` (requires `baseUrl`)
Direct Anthropic API is not wired yet; use an OpenAI-compatible gateway if needed.
## Core Concepts
### Memory Candidates
Engram extracts memories with explicit actions (create/update/ignore):
```ts
type MemoryCandidateAction = {
  action: "create" | "update" | "ignore";
  targetId?: string; // required when action is "update"
  type: "profile" | "project" | "goal" | "preference" | "temp";
  content: string;
  rationale: string;
  confidence: number; // 0–1
};
```
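For instance, when new information refines an existing memory, the extractor emits an `update` candidate (the values below, including `targetId`, are illustrative):

```ts
const candidate: MemoryCandidateAction = {
  action: "update",
  targetId: "mem-42", // illustrative id of the existing memory being revised
  type: "preference",
  content: "Prefers TypeScript with strict mode enabled",
  rationale: "User refined an earlier preference statement",
  confidence: 0.8,
};
```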
### Deterministic Output

Memories are meant to be short, stable, and easy to audit. Update or merge memories instead of appending endlessly.
## Development
```bash
bun install
DEEPSEEK_API_KEY=xxx bun test
```

## Hosted API (Bun + Hono)
```bash
DEEPSEEK_API_KEY=xxx SUPABASE_URL=... SUPABASE_SERVICE_ROLE_KEY=... bun run api/server.ts
```

Endpoints:

- `POST /v1/memory/ingest` with body `{ userId, messages }`
- `GET /v1/memory/summary/:userId`
Client example:

```bash
API_BASE_URL=http://localhost:3000 bun examples/api-client.ts
```
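If you prefer to call the endpoints directly, a minimal `fetch` sketch (assumes the server above runs on localhost:3000 and reuses `messages` from Quick Start):

```ts
const base = "http://localhost:3000";

// Ingest a conversation for a user.
await fetch(`${base}/v1/memory/ingest`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ userId: "user-123", messages }),
});

// Read back the current summary.
const res = await fetch(`${base}/v1/memory/summary/user-123`);
console.log(await res.text());
```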
## Supabase Store

Create the tables (default memory table name: `memories`):
```sql
create table memories (
  user_id text not null,
  id text not null,
  type text not null,
  content text not null,
  rationale text,
  confidence numeric,
  weight numeric,
  created_at timestamptz not null,
  updated_at timestamptz not null,
  deleted_at timestamptz,
  primary key (user_id, id)
);

create index memories_user_id_idx on memories (user_id);
create index memories_active_idx on memories (user_id)
  where deleted_at is null;

create table memory_summaries (
  user_id text primary key,
  summary text not null,
  updated_at timestamptz not null
);
```

Usage (server-side with the Service Role Key; the anon key is not recommended here):
```ts
import { SupabaseStore } from "engram-ai";

const store = new SupabaseStore({
  url: process.env.SUPABASE_URL!,
  key: process.env.SUPABASE_SERVICE_ROLE_KEY!,
  table: "memories",
});
```

Note: the Service Role Key bypasses RLS. Do not expose it in client-side apps.
This store uses logical deletes via `deleted_at` and never hard-deletes rows.
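The store can then back the same merge flow shown earlier, assuming `SupabaseStore` exposes the same `get`/`put` interface as `InMemoryStore`:

```ts
// Sketch, assuming SupabaseStore shares InMemoryStore's get/put interface.
const existing = await store.get(userId);
const candidates = await extractMemories(messages, {
  apiKey: process.env.DEEPSEEK_API_KEY!,
  existingMemories: existing,
});
await store.put(userId, mergeMemories(existing, candidates ?? [], { maxMemories: 200 }));
```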
## Project Structure
- `core/extractor.ts`: LLM-based memory extraction.
- `extractor.test.ts`: Bun tests.
- `index.ts`: package entry.
## Roadmap
- Memory merging + deduplication.
- Storage adapters (file/db/hosted API).
- Retrieval and relevance scoring.
## Contributing
Open an issue before large changes. Keep PRs small, focused, and consistent with the “small by design” principle.
## License
MIT
