hrr-memory-obs
v0.1.2
Observation layer for hrr-memory: temporal awareness, conflict detection, and LLM-driven belief synthesis.
Your agent remembers facts (if you installed hrr-memory). Now let it notice when beliefs change.
hrr-memory-obs wraps hrr-memory with temporal awareness, algebraic conflict detection, and LLM-driven observation synthesis. When your agent stores (alice, likes, payments) after previously storing (alice, likes, rust), the library flags the conflict automatically. Consolidate the flags into natural-language observations whenever you want — with whatever LLM you want.
Install
npm install hrr-memory-obs hrr-memory
30-Second Demo
import { HRRMemory } from 'hrr-memory';
import { ObservationMemory } from 'hrr-memory-obs';
const hrr = new HRRMemory();
const mem = new ObservationMemory(hrr);
await mem.store('alice', 'interested_in', 'rust');
await mem.store('alice', 'interested_in', 'payments');
mem.flags();
// → [{ subject: 'alice', oldObject: 'rust', newObject: 'payments', similarity: 0.05 }]
mem.history('alice');
// → [{ ts: ..., op: 'store', object: 'rust' },
// { ts: ..., op: 'store', object: 'payments', conflict: { oldObject: 'rust' } }]
mem.at(Date.now()).facts('alice', 'interested_in');
// → ['rust', 'payments']
How Conflict Detection Works
On every store(), the library checks whether the (subject, relation) pair already holds a value in HRR. If it does, it compares the old and new object vectors by cosine similarity: low similarity means the belief changed, so the write gets flagged.
The first write to any (subject, relation) pair skips the check entirely — zero overhead. The query only runs on subsequent writes, exactly when conflicts are possible.
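The per-write check can be sketched like this. This is a simplified illustration, not the library's internals: `cosine` and `checkConflict` are hypothetical helpers, and in the real library the object vectors come from the HRR encoding rather than being passed in.

```typescript
// Sketch of the conflict check: compare the old and new object vectors
// with cosine similarity, and flag the pair when similarity falls below
// a threshold. All names here are illustrative.
type ConflictFlag = {
  subject: string;
  oldObject: string;
  newObject: string;
  similarity: number;
};

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function checkConflict(
  subject: string,
  oldObject: string, oldVec: number[],
  newObject: string, newVec: number[],
  threshold = 0.3,
): ConflictFlag | null {
  const similarity = cosine(oldVec, newVec);
  // Low similarity = the stored belief and the incoming one diverge.
  return similarity < threshold
    ? { subject, oldObject, newObject, similarity }
    : null;
}
```

With near-orthogonal vectors for unrelated concepts like 'rust' and 'payments', similarity sits near zero, which is why the demo above reports 0.05.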
Consolidation
Flags accumulate until you consolidate them. The library builds the prompt; you bring the LLM.
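The executor contract is just prompt text in, model completion out. A stand-in executor like the following is handy in tests; note the JSON shape it returns mirrors the observations shown in this README but is an assumption, not a documented wire format.

```typescript
// Hypothetical stand-in executor: returns a canned completion instead of
// calling a model. A real executor would forward `prompt` to your LLM of
// choice, and may be async -- consolidate() awaits the result.
function stubExecutor(prompt: string): string {
  return JSON.stringify([
    {
      subject: 'alice',
      observation: 'Interest shifted from Rust to payments',
      confidence: 'high',
    },
  ]);
}
```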
const mem = new ObservationMemory(hrr, {
executor: (prompt) => callYourLLM(prompt),
});
// ... after some conflicting stores ...
const observations = await mem.consolidate();
// → [{ subject: 'alice', observation: 'Interest shifted from Rust to payments',
// evidence: [...], confidence: 'high' }]Or skip the LLM entirely and write observations directly:
mem.addObservation({
subject: 'alice',
observation: 'Interest shifted from Rust to payments',
evidence: [{ ts: 1711234567890, triple: ['alice', 'interested_in', 'rust'] }],
confidence: 'high',
});
Point-in-Time Queries
mem.at(lastWeek).facts('alice', 'interested_in');
// → ['rust'] (what we knew then)
mem.at(Date.now()).facts('alice', 'interested_in');
// → ['rust', 'payments'] (what we know now)
This is a symbolic replay of the timeline, not an HRR rebuild. Fast and cheap.
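Conceptually, at(ts) folds the timeline up to ts into a fact set. A minimal sketch of that replay, assuming timeline entries shaped like the ones history() returns (`factsAt` is a hypothetical helper, not the library's API):

```typescript
// Illustrative replay: keep only entries at or before ts for the given
// (subject, relation), then apply store/forget ops in order. No HRR
// vectors are involved.
type Entry = {
  ts: number;
  subject: string;
  relation: string;
  object: string;
  op: 'store' | 'forget';
};

function factsAt(
  timeline: Entry[],
  ts: number,
  subject: string,
  relation: string,
): string[] {
  const facts = new Set<string>();
  for (const e of timeline) {
    if (e.ts > ts || e.subject !== subject || e.relation !== relation) continue;
    if (e.op === 'store') facts.add(e.object);
    else facts.delete(e.object); // a forget removes the fact from the view
  }
  return [...facts];
}
```

A forget entry drops the fact from the reconstructed view, so the replay tracks deletions as well as stores.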
Persistence
Two files: the HRR index (backward compatible) and the observations/timeline.
mem.save('hrr-index.json', 'observations.json');
const loaded = ObservationMemory.load('hrr-index.json', 'observations.json', {
executor: (prompt) => callYourLLM(prompt),
});
Custom Prompts
The built-in defaultPrompt instructs the LLM to produce structured JSON with evidence chains. Override it:
import { ObservationMemory, defaultPrompt } from 'hrr-memory-obs';
const mem = new ObservationMemory(hrr, {
executor,
promptFn: (input) => myCustomPrompt(input),
});
API
| Method | Returns | Description |
|--------|---------|-------------|
| await store(s, r, o) | Promise<boolean> | Store triple, record timeline, check conflicts |
| await forget(s, r, o) | Promise<boolean> | Forget triple, record timeline |
| query(s, r) | QueryResult | Delegated to HRRMemory |
| querySubject(s) | Fact[] | Delegated to HRRMemory |
| search(r?, o?) | Triple[] | Delegated to HRRMemory |
| ask(question) | AskResult | Delegated to HRRMemory |
| stats() | Stats | Delegated to HRRMemory |
| history(s, r?) | TimelineEntry[] | Temporal history, oldest first |
| at(ts).facts(s, r?) | string[] | Point-in-time symbolic query |
| flags() | ConflictFlag[] | Unflushed conflict flags |
| observations(s?) | Observation[] | Synthesized beliefs, newest first |
| await consolidate() | Observation[] | LLM-driven synthesis of flagged changes |
| addObservation(obs) | Observation | Store observation directly (no LLM) |
| clearFlags(subject) | void | Clear flags for a subject |
| save(hrrPath, obsPath) | void | Persist to two files |
| load(hrrPath, obsPath, opts) | ObservationMemory | Load from two files (static) |
Standalone Components
Each layer works independently:
import { Timeline, ConflictDetector, defaultPrompt } from 'hrr-memory-obs';
// Just the timeline
const tl = new Timeline();
tl.append({ ts: Date.now(), subject: 'x', relation: 'y', object: 'z', op: 'store' });
// Just the conflict detector
const cd = new ConflictDetector(hrr, 0.3);
cd.track('x', 'y');
cd.check('x', 'y', 'new_value');
// Just the prompt
const prompt = defaultPrompt({ entries, flags, existingObservations });
Performance
| Operation | 1K entries | 10K entries |
|-----------|-----------|-------------|
| store() (first write) | ~1.6ms | ~1.6ms |
| store() (conflict check) | ~2.4ms | ~2.4ms |
| history() | 0.1ms | 1.2ms |
| at().facts() | 0.25ms | 2.9ms |
First writes add no overhead over a plain hrr-memory store(). Conflict detection adds ~0.8ms per write (one extra HRR query).
License
MIT
