# @xmemory/records-mapper

Discovers and persists **mapping relationships** between MongoDB collections into [XMemory](https://github.com/nx-intelligence) at three levels:
- Collection level — collection ↔ collection
- Schema/field level — field ↔ field
- Record level — document ↔ document
It writes these mappings as Things and edges into XMemory:
- Things via `@xmemory/equal` (`xmemory-equal`)
- Edges via `@xmemory/relations` (`xmemory-relations`)
It can use:

- `nx-semantic-matcher` (deterministic semantic matching) — optional
- `nx-ai-api` (LLM-assisted collection/field mapping) — optional but recommended

This package does not scope, traverse, or answer questions — that is the responsibility of `@xmemory/scoper`. The mapper only produces mapping data; the scoper consumes it.
## Installation

```bash
npm install @xmemory/records-mapper nx-mongo xmemory-equal xmemory-relations
```

Optional: `nx-ai-api`, `nx-semantic-matcher` (if available).
## Usage

```ts
import {
  createRecordsMapper,
  createNxMongoAdapter,
  createRelationsBulkAdapter,
  wrapEqualForMapper,
} from "@xmemory/records-mapper";
import { SimpleMongoHelper } from "nx-mongo";
import { createEqualClient } from "xmemory-equal";
import { createRelationsClient } from "xmemory-relations";

const nxMongo = new SimpleMongoHelper(process.env.MONGO_URI);
await nxMongo.initialize({ databaseName: process.env.MONGO_DB });

const xEqual = wrapEqualForMapper(createEqualClient({ nxMongo, namespace: "myapp" }), "myapp");
const relationsClient = createRelationsClient({ nxMongo, nxEqual: xEqual, namespace: "myapp" });
const xRelations = createRelationsBulkAdapter(relationsClient, xEqual, "myapp");
const nxMongoAdapter = createNxMongoAdapter(nxMongo);

const mapper = createRecordsMapper({
  nxMongo: nxMongoAdapter,
  xEqual,
  xRelations,
  config: {
    namespace: "myapp",
    collections: [
      { server: "default", database: "db1", collection: "users" },
      { server: "default", database: "db2", collection: "accounts" },
    ],
    modes: { collections: true, schema: true, records: true },
    thresholds: {
      schemaConfidence: 0.75,
      recordConfidence: 0.75,
      equalityThreshold: 0.95,
      highTier: 0.9,
    },
  },
});

const result = await mapper.run();
console.log(result.stats);
```

## What it does
- **Profile** — Ensures collection and field Things exist in XMemory (with `metadata.db`, `metadata.collection`).
- **Collection mapping** (optional) — When LLM is enabled, proposes collection pairs and writes `collection-relation` edges.
- **Schema mapping** — Builds field ↔ field candidates (LLM-seeded or lexical/semantic), scores them, and writes `schema-relation` edges.
- **Record mapping** — Uses `schema-relation` edges as drivers, batch-reads documents, matches values (semantic or equality), ensures document Things, writes `record-relation` edges, and promotes high-confidence matches to equality via `xEqual.link`.
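The thresholds configured in the Usage section drive the record-mapping decisions above. Here is a minimal sketch of that decision logic, assuming `recordConfidence` gates edge creation, `highTier` sets the tier, and `equalityThreshold` gates promotion to equality; the function itself is illustrative and not part of the package API:

```typescript
// Illustrative only: mirrors the threshold semantics from the config above.
interface Thresholds {
  recordConfidence: number;  // minimum confidence to write a record-relation edge
  equalityThreshold: number; // minimum confidence to promote to equality (xEqual.link)
  highTier: number;          // boundary between "high" and "low" tier edges
}

type Decision =
  | { action: "skip" }
  | { action: "edge"; tier: "high" | "low"; promote: boolean };

function decideRecordMatch(confidence: number, t: Thresholds): Decision {
  if (confidence < t.recordConfidence) return { action: "skip" }; // no edge at all
  return {
    action: "edge",
    tier: confidence >= t.highTier ? "high" : "low",
    promote: confidence >= t.equalityThreshold, // high enough to link as equal
  };
}

const t = { recordConfidence: 0.75, equalityThreshold: 0.95, highTier: 0.9 };
console.log(decideRecordMatch(0.7, t));  // below recordConfidence: skipped
console.log(decideRecordMatch(0.92, t)); // high-tier edge, not promoted
console.log(decideRecordMatch(0.97, t)); // high-tier edge, promoted to equality
```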
## Outputs in XMemory

- Things: `schema:collection`, `schema:field`, `record:document` (each with `metadata.db`, `metadata.collection`).
- Edges: `collection-relation`, `schema-relation`, `record-relation` (with confidence, tier, source, sessionId; record edges include `drivingFieldRelationEdgeId`).
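For orientation, the object below sketches what a `record-relation` edge might carry. The field names follow the list above, but the id formats and the overall document layout are illustrative assumptions, not the package's actual stored schema:

```typescript
// Illustrative sketch of a record-relation edge; field names follow the list
// above, while the id formats here are hypothetical.
const recordRelationEdge = {
  type: "record-relation",
  from: "record:document:db1.users:u-101",     // source document Thing (hypothetical id)
  to: "record:document:db2.accounts:a-202",    // matched document Thing (hypothetical id)
  confidence: 0.93,
  tier: "high",                 // derived from the highTier threshold
  source: "semantic",           // how the match was produced
  sessionId: "run-001",         // groups edges written by one mapper run
  drivingFieldRelationEdgeId: "sr-42", // the schema-relation edge that drove this match
};
console.log(recordRelationEdge.type, recordRelationEdge.tier);
```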
## Known limitation

Single `(db, collection)` per Thing: scope metadata stores one `metadata.db` and one `metadata.collection` per node, so multi-database or cross-cluster scope is not represented by these keys.
## Safety

Record payload storage is capped and configurable:

- `records.storeRecordPayload`: `"none" | "keys-only" | "matched-fields"`
- `records.matchedFieldsMaxLen`: maximum length for stored matched values (default 256)
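As an illustration of how these two settings might interact, the sketch below assumes `matched-fields` mode keeps only the values of matched fields, each capped at `matchedFieldsMaxLen` characters; the package's actual behavior may differ in detail:

```typescript
type StoreMode = "none" | "keys-only" | "matched-fields";

// Illustrative: reduce a matched document to what the configured mode allows.
function storablePayload(
  doc: Record<string, unknown>,
  matchedFields: string[],
  mode: StoreMode,
  matchedFieldsMaxLen = 256,
): Record<string, unknown> | null {
  if (mode === "none") return null;                // store nothing at all
  if (mode === "keys-only") {
    return { keys: Object.keys(doc) };             // field names, no values
  }
  // "matched-fields": keep only the matched values, capped in length
  const out: Record<string, unknown> = {};
  for (const f of matchedFields) {
    const v = String(doc[f] ?? "");
    out[f] = v.length > matchedFieldsMaxLen ? v.slice(0, matchedFieldsMaxLen) : v;
  }
  return out;
}

const doc = { email: "a@example.com", bio: "x".repeat(1000), age: 30 };
console.log(storablePayload(doc, ["email", "bio"], "matched-fields", 16));
```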
## API

- `run()` — Full pipeline: profile → collections (if enabled) → schema → records.
- `runCollections()` — Profile + collection-level mapping only.
- `runSchema()` — Profile + schema (field) mapping only.
- `runRecords()` — Record mapping only (assumes schema edges exist).
- `runOnDemand(pair)` — Run the full pipeline for a single collection pair.
- `runIncremental({ changedDocs })` — Record mapping in incremental mode (v1: delegates to a full record run).
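Since `runRecords()` assumes schema edges already exist, callers that split the phases across schedules need to order them. Here is a sketch against a minimal interface derived from the list above; the interface and the helper are illustrative, not exports of the package:

```typescript
// Minimal slice of the mapper API sketched from the list above (illustrative types).
interface RecordsMapperPhases {
  runSchema(): Promise<{ stats: unknown }>;
  runRecords(): Promise<{ stats: unknown }>;
}

// runRecords relies on schema-relation edges, so refresh schema first.
async function refreshSchemaThenRecords(mapper: RecordsMapperPhases) {
  await mapper.runSchema();   // ensure field ↔ field edges are up to date
  return mapper.runRecords(); // then map documents using those edges
}

// Demo with a stub that records call order (no real mapper needed here).
const calls: string[] = [];
const stub: RecordsMapperPhases = {
  runSchema: async () => { calls.push("schema"); return { stats: {} }; },
  runRecords: async () => { calls.push("records"); return { stats: {} }; },
};
refreshSchemaThenRecords(stub).then(() => console.log(calls)); // logs [ 'schema', 'records' ]
```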
## License

ISC
