@anura-gate/anura-graph
v0.4.0
Typed client for the GraphMem knowledge graph API
Typed TypeScript/JavaScript client for the Anura Memory API. Zero dependencies.
Anura Memory provides two memory products for AI agents:
- GraphRag — Knowledge graph with automatic triple extraction, deduplication, and hybrid retrieval
- FilesRag — Markdown file storage with heading-based chunking and semantic search
Installation
npm install @anura-gate/anura-graph
Quick Start
import { GraphMem } from '@anura-gate/anura-graph';
const mem = new GraphMem({
apiKey: 'gm_your_key_here',
// baseUrl: 'https://anuramemory.com' (default)
});
// --- GraphRag ---
await mem.remember("Alice is VP of Engineering at Acme Corp");
const ctx = await mem.getContext("Tell me about Alice");
// --- FilesRag ---
await mem.writeFile({ path: "/notes/standup.md", content: "# Standup\n## 2026-02-21\n- Shipped auth module" });
const results = await mem.searchFiles("auth module");
Configuration
new GraphMem({
apiKey: 'gm_...', // required — from dashboard
baseUrl: 'https://anuramemory.com', // default
retry: {
maxRetries: 3, // default
baseDelayMs: 500, // exponential backoff base
maxDelayMs: 10_000, // max delay cap
retryOn: [429, 500, 502, 503, 504] // retryable status codes
}
});
API Reference
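A note on retries: the retry options in the configuration above suggest a capped exponential backoff applied only to the listed status codes. The sketch below shows the presumed delay policy; it is an illustration of the documented settings, not the library's actual implementation.

```typescript
interface RetryConfig {
  maxRetries: number;
  baseDelayMs: number;
  maxDelayMs: number;
  retryOn: number[];
}

// Delay before retry attempt `attempt` (0-based): baseDelayMs * 2^attempt,
// capped at maxDelayMs. This formula is an assumption based on the config names.
function backoffDelay(attempt: number, cfg: RetryConfig): number {
  return Math.min(cfg.baseDelayMs * 2 ** attempt, cfg.maxDelayMs);
}

// A response is retried only while attempts remain and its status is retryable.
function shouldRetry(status: number, attempt: number, cfg: RetryConfig): boolean {
  return attempt < cfg.maxRetries && cfg.retryOn.includes(status);
}
```

With the defaults shown, this would give delays of 500 ms, 1 s, and 2 s before giving up.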
GraphRag
remember(text): Promise<RememberResult>
Extract knowledge from text and store as triples in the graph.
const result = await mem.remember(
"Albert Einstein was born in Ulm, Germany. He developed the theory of relativity."
);
// { extractedCount: 2, mergedCount: 2, conflicts: [], message: "..." }
If a new fact contradicts an existing one on a single-valued predicate (e.g., LIVES_IN, WORKS_AT), the old value is replaced and the conflict is returned:
const r = await mem.remember("Alice now works at Microsoft");
// r.conflicts → [{ subjectName: "alice", predicate: "WORKS_AT", oldObject: "google", newObject: "microsoft", resolution: "auto_recency" }]
getContext(query, options?): Promise<ContextResult>
Retrieve context from the knowledge graph using hybrid search (graph traversal + vector similarity + community summaries).
// Markdown format (ideal for LLM system prompts)
const ctx = await mem.getContext("Einstein", { format: "markdown" });
// ctx.content → "- Albert Einstein → BORN_IN → Ulm..."
// JSON format
const json = await mem.getContext("Einstein", { format: "json" });
// json.entities, json.edges, json.vectorEntities, json.communities
// Hybrid mode (graph + vector + communities)
const hybrid = await mem.getContext("Einstein", { mode: "hybrid" });
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| format | "json" \| "markdown" \| "text" | "json" | Response format |
| mode | "graph" \| "hybrid" | auto | Search mode |
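Since the markdown format is designed for LLM system prompts, a common pattern is to splice `ctx.content` into the prompt before each model call. A minimal sketch of that pattern (the prompt wording and helper name are illustrative, not part of this library):

```typescript
// Ground a system prompt in retrieved memory.
// `ctxMarkdown` is the `content` field of a markdown-format ContextResult.
function buildSystemPrompt(instructions: string, ctxMarkdown: string): string {
  if (!ctxMarkdown.trim()) return instructions;
  return `${instructions}\n\nKnown facts about the user and their world:\n${ctxMarkdown}`;
}
```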
search(entity, options?): Promise<SearchResult>
Find an entity and its direct (1-hop) connections. In fuzzy mode, results can additionally be filtered by entity type.
const result = await mem.search("typescript");
console.log(result.entity); // { name: "Typescript", type: "technology", userId: "..." }
console.log(result.edges); // all incoming + outgoing relationships
// Search with type filter (fuzzy mode)
const filtered = await mem.search("alice", { type: "person" });
ingestTriples(triples): Promise<IngestResult>
Ingest pre-formatted triples directly (no LLM extraction, max 100 per request).
const result = await mem.ingestTriples([
{ subject: "TypeScript", predicate: "CREATED_BY", object: "Microsoft" },
{ subject: "TypeScript", predicate: "SUPERSET_OF", object: "JavaScript" },
]);
getGraph(options?): Promise<GraphData>
Get the full graph (all nodes, edges, communities). Optionally filter by entity type.
const graph = await mem.getGraph();
// graph.nodes: GraphNode[], graph.edges: GraphEdge[], graph.communities?: GraphCommunity[]
// Filter graph by entity type
const people = await mem.getGraph({ type: "person" });
const orgs = await mem.getGraph({ type: "person,organization" });
deleteEdge(id, options?): Promise<void>
Delete an edge. Optionally blacklist the triple to prevent re-creation.
await mem.deleteEdge("edge_id");
await mem.deleteEdge("edge_id", { blacklist: true });
updateEdgeWeight(id, options?): Promise<void>
Set or increment an edge's weight. Higher-weight edges appear first in context retrieval.
await mem.updateEdgeWeight("edge_id", { weight: 5.0 }); // absolute
await mem.updateEdgeWeight("edge_id", { increment: 1.0 }); // delta
deleteNode(id): Promise<void>
Delete a node and all its connected edges.
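The cascade semantics can be pictured as a filter over the graph: the node goes away, along with every edge that touches it. A sketch of that behavior (the `sourceId`/`targetId` field names are assumptions for illustration; the real GraphEdge shape is not documented here):

```typescript
interface Node { id: string; name: string }
interface Edge { id: string; sourceId: string; targetId: string }
interface Graph { nodes: Node[]; edges: Edge[] }

// Remove a node and every edge connected to it, returning a new graph.
function removeNode(graph: Graph, nodeId: string): Graph {
  return {
    nodes: graph.nodes.filter((n) => n.id !== nodeId),
    edges: graph.edges.filter((e) => e.sourceId !== nodeId && e.targetId !== nodeId),
  };
}
```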
exportGraph(): Promise<ExportData>
Export the entire graph as portable JSON.
importGraph(data): Promise<ImportResult>
Import a graph export into the current project (merges, does not delete existing data).
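The merge can be pictured as a keyed union: imported items are added unless an item with the same identity already exists, and nothing already in the project is removed. A sketch of that semantics (id-based deduplication is an assumption; the actual merge keys are not documented here):

```typescript
interface Item { id: string }

// Union two lists by id, keeping existing entries and never deleting anything.
function mergeById<T extends Item>(existing: T[], imported: T[]): T[] {
  const seen = new Set(existing.map((x) => x.id));
  return [...existing, ...imported.filter((x) => !seen.has(x.id))];
}
```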
listCommunities(): Promise<Community[]>
List all detected communities.
detectCommunities(): Promise<DetectCommunitiesResult>
Run Louvain community detection + LLM summarization.
const result = await mem.detectCommunities();
// { communityCount: 3, entityCount: 25, message: "..." }
FilesRag
writeFile(options): Promise<WriteFileResult>
Create or update a markdown memory file. Files are automatically chunked by ## headings and indexed for semantic search.
const result = await mem.writeFile({
path: "/docs/architecture.md",
content: "# Architecture\n\n## Backend\nNode.js with Prisma...\n\n## Frontend\nNext.js with React...",
name: "architecture.md", // optional, derived from path if omitted
});
// { file: { id, name, path, size, ... }, chunkCount: 3, created: true }
If a file already exists at the given path, its content is replaced and re-indexed.
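Heading-based chunking can be pictured as follows: everything before the first `##` heading forms one chunk, and each `##` section forms another. This sketch is illustrative only; the service's real chunker may differ in details such as heading-level handling:

```typescript
interface Chunk {
  headingTitle: string; // "" for content before the first "## " heading
  content: string;
}

// Split a markdown document into chunks at each "## " heading.
function chunkByHeadings(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { headingTitle: "", content: "" };
  for (const line of markdown.split("\n")) {
    if (line.startsWith("## ")) {
      if (current.content.trim() || current.headingTitle) chunks.push(current);
      current = { headingTitle: line.slice(3).trim(), content: "" };
    } else {
      current.content += line + "\n";
    }
  }
  if (current.content.trim() || current.headingTitle) chunks.push(current);
  return chunks;
}
```

Applied to the architecture example above, this yields three chunks (the preamble plus the Backend and Frontend sections), matching the reported `chunkCount: 3`.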
listFiles(): Promise<MemoryFile[]>
List all files in the current project.
const files = await mem.listFiles();
// [{ id, name, path, size, createdAt, updatedAt }, ...]
readFile(id): Promise<FileWithContent>
Get a file with its content.
const file = await mem.readFile("file_id");
console.log(file.content); // full markdown content
updateFile(id, content, name?): Promise<WriteFileResult>
Update a file's content (re-chunks and re-indexes).
const result = await mem.updateFile("file_id", "# Updated content\n...");
deleteMemoryFile(id): Promise<void>
Delete a file and all its indexed chunks.
searchFiles(query, options?): Promise<FileSearchResult[]>
Semantic search across file chunks. Pass fileId to scope the search to a single file.
const results = await mem.searchFiles("authentication flow", { limit: 5 });
// Search within a single file:
// const results = await mem.searchFiles("auth", { fileId: "file_abc123" });
for (const r of results) {
console.log(r.file.path);
for (const chunk of r.chunks) {
console.log(` ${chunk.headingTitle} (${chunk.score.toFixed(2)})`);
console.log(` ${chunk.excerpt}`);
}
}
Projects
listProjects(): Promise<Project[]>
createProject(name): Promise<Project>
deleteProject(id): Promise<void>
selectProject(id): Promise<void>
Traces
listTraces(options?): Promise<Trace[]>
getTrace(id): Promise<TraceDetail>
Blacklist
listBlacklist(options?): Promise<BlacklistedTriple[]>
addToBlacklist(subject, predicate, object): Promise<{ id: string }>
removeFromBlacklist(id): Promise<void>
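A blacklisted triple is skipped whenever extraction would re-create it. The guard can be pictured as a set lookup on the (subject, predicate, object) key; this sketch illustrates the semantics, not the server code, and the case-insensitive matching is an assumption:

```typescript
interface Triple { subject: string; predicate: string; object: string }

// Canonical key for a triple (case-insensitive match is assumed).
const tripleKey = (t: Triple) =>
  `${t.subject}|${t.predicate}|${t.object}`.toLowerCase();

// Drop any extracted triple that matches a blacklist entry.
function filterBlacklisted(extracted: Triple[], blacklist: Triple[]): Triple[] {
  const blocked = new Set(blacklist.map(tripleKey));
  return extracted.filter((t) => !blocked.has(tripleKey(t)));
}
```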
Conflict Log
listConflicts(options?): Promise<ConflictLogEntry[]>
List conflict resolution log entries (newest first, paginated).
const conflicts = await mem.listConflicts({ limit: 10 });
// [{ id, subjectName, predicate, oldObject, newObject, resolution, createdAt }, ...]
Pending Facts
listPending(options?): Promise<PendingFact[]>
approveFact(id): Promise<void>
rejectFact(id, options?): Promise<void>
approveAll(): Promise<void>
rejectAll(): Promise<void>
Usage
getUsage(): Promise<UsageInfo>
const usage = await mem.getUsage();
// { tier: "FREE", factLimit: 100, currentFacts: 42, nodesCount: 20, edgesCount: 22,
//   fileStorageLimit: 52428800, fileCountLimit: 20, currentFileStorage: 1024, currentFileCount: 3 }
Health
health(): Promise<HealthResult>
Error Handling
All methods throw GraphMemError on failure:
import { GraphMemError } from '@anura-gate/anura-graph';
try {
await mem.readFile("nonexistent");
} catch (err) {
if (err instanceof GraphMemError) {
console.log(err.status); // 404
console.log(err.message); // "File not found"
console.log(err.body); // full response body
}
}
Rate Limiting
After each request, rate limit info is available:
await mem.remember("some fact");
console.log(mem.rateLimit.remaining); // requests remaining
console.log(mem.rateLimit.limit); // total allowed per window
console.log(mem.rateLimit.reset); // unix timestamp when window resets
Types
All types are exported:
import type {
// GraphRag
RememberResult, ConflictResolution, ConflictLogEntry,
ContextResult, SearchResult, Triple,
GraphNode, GraphEdge, GraphData, Community,
// FilesRag
MemoryFile, FileWithContent, FileSearchResult,
WriteFileOptions, WriteFileResult,
// Config
GraphMemConfig, RetryConfig, RateLimitInfo, UsageInfo,
} from '@anura-gate/anura-graph';
License
MIT
