@x12i/xmemory-store (v2.9.8)

Memorix Mongo tier (per-entity content types, read/write, Catalox seeds); XMemory scoper data tier (xronox-store): maps, scoped views, things corpus.
Public package on the npm registry. Memorix is the core product here: a MongoDB layout for per-entity data where each logical entity gets fixed content-type slices ({entityName}-{suffix}, where the suffix must come from MEMORIX_ENTITY_CONTENT_TYPES), exposed through createMemorixDataTier, reads (countMemorixEntityDocuments, listMemorixEntityDocuments), and writes (insertMemorixEntityDocument, replaceMemorixEntityDocument, patchMemorixEntityDocument, updateMemorixEntityDocument, deleteMemorixEntityDocument). The repo ships a Catalox 4.x manifest so memorix, xmemory, and graphs-studio discover the same content-type catalog in Firestore without hardcoding enums.
Also in this package (same cluster, shared MONGO_URI, different concerns): the XMemory scoper data tier — XmemoryMongoClient over @x12i/xronox-store for scoping maps, mapped questions, things corpus, and scoped views — which @x12i/xmemory-scoper and mappers rely on. There is no peer dependency enforced the other way.
- Start here (Memorix + reads): docs/xmemory-entities-client.md — entities, content types, scoped data, pagination, counts, Catalox.
- Full contract: docs/spec.md — roles, DB/collection defaults, env precedence, acceptance, coverage vs runtime.
In typical deployments, Memorix and the XMemory tier often share one Mongo cluster; hosts choose memorix_db vs mapsDb / opDb per docs/spec.md. Mapper and scoper integration: docs/host-integration-mapper-store-scoper.md.
Memorix (core)
Memorix is a parallel operational database (default name memorix, overridable via memorix_db / MEMORIX_DB) of per-entity collections. You pass an entityName (prefix) and a MemorixEntityContentType; the tier resolves the physical collection (e.g. assets + scoped → assets-scoped). See src/memorix-tier.ts and docs/spec.md §2.3.
| Concern | Role | Default Mongo database | Collection pattern |
|--------|------|------------------------|----------------------|
| Memorix per-entity stores | Memorix — read/write via createMemorixDataTier + persistence helpers | memorix (memorix_db, default) | {entityName}-{suffix} — suffix is a fixed enum only (table below) |
Supported content types
The Memorix data tier resolves collection names as {entityName}-{suffix}, where the suffix comes from a fixed set. Callers pass contentType; only the values below are supported. The canonical array is MEMORIX_ENTITY_CONTENT_TYPES.
| contentType (pass to API) | Suffix appended to entityName | Example collection (entityName = assets) |
|-----------------------------|-----------------------------------|-----------------------------------------------|
| scoped | -scoped | assets-scoped |
| snapshots | -snapshots | assets-snapshots |
| narratives | -narratives | assets-narratives |
| web | -web | assets-web |
| web-scoped | -web-scoped | assets-web-scoped |
| raw | -raw | assets-raw |
| inference | -inference | assets-inference |
| analytics | -analytics | assets-analytics |
| foresights | -foresights | assets-foresights |
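A minimal sketch of this resolution rule follows. It is a local reimplementation for illustration only — the package's own memorixEntityCollectionName and assertMemorixEntityName are the authoritative helpers, and may validate more than this:

```typescript
// Mirrors the documented rule: collection = `${entityName}-${suffix}`, where the
// suffix must come from the fixed MEMORIX_ENTITY_CONTENT_TYPES enum.
// Local sketch, not the package's implementation.
const MEMORIX_ENTITY_CONTENT_TYPES = [
  "scoped", "snapshots", "narratives", "web", "web-scoped",
  "raw", "inference", "analytics", "foresights",
] as const;

type MemorixEntityContentType = (typeof MEMORIX_ENTITY_CONTENT_TYPES)[number];

function entityCollectionName(entityName: string, contentType: MemorixEntityContentType): string {
  // Runtime guard for callers arriving from untyped code paths.
  if (!MEMORIX_ENTITY_CONTENT_TYPES.includes(contentType)) {
    throw new Error(`Unsupported contentType: ${contentType}`);
  }
  return `${entityName}-${contentType}`;
}
```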
Use createMemorixDataTier(), countMemorixEntityDocuments(), and listMemorixEntityDocuments() (optional includeTotal: true) for reads; insertMemorixEntityDocument(), replaceMemorixEntityDocument(), patchMemorixEntityDocument(), updateMemorixEntityDocument(), and deleteMemorixEntityDocument() for persistence — all with memorix_db / MEMORIX_DB and the same MONGO_URI as the rest of your stack. Helpers: memorixEntityCollectionName(), assertMemorixEntityName(), resolveMemorixTierEnv().
Catalox catalog (ships with Memorix)
This repo publishes a Catalox 4.x seed manifest so Firestore-backed apps share one native catalog for Memorix content types:
- Catalog id:
memorix_entity_content_types(native). - Catalog type:
memorix(host-defined tag for presentation defaults). - Bindings:
memorix(read/write/admin),xmemory(read),graphs-studio(read), all tomemorix_entity_content_types. - Manifest:
catalox-seeds/memorix-entity-content-types.manifest.json— each item includessourcePackage:@x12i/xmemory-storeandmemorixDataTier: true next tocontentType/suffix/label.
Prerequisites: Firebase Admin env as for any catalox CLI (FIREBASE_PROJECT_ID, GOOGLE_SERVICE_ACCOUNT_BASE64 or ADC). See @x12i/catalox README.
```sh
# Validate JSON only (no Firestore writes)
npm run catalox:seed:memorix:validate

# Idempotent apply: create catalog + descriptor + bindings + items (requires --god for descriptors / cross-app bindings)
npm run catalox:seed:memorix:apply
```

With the same Firebase Admin env as above, npm test runs validate then apply in test/memorix-catalox-seed.test.ts so the memorix_entity_content_types items stay seeded in Catalox (skipped when env is unset).
The manifest’s item rows must stay aligned with MEMORIX_ENTITY_CONTENT_TYPES in src/memorix-tier.ts; update both when adding a type.
XMemory scoper tier (also in this package)
Maps, corpus, scoped views, and scoped operational payloads for @x12i/xmemory-scoper (and related tooling). Same MONGO_URI; different default databases and collections than Memorix.
| Concern | Role (see spec) | Default Mongo database | Default collection |
|--------|------------------|-------------------------|-------------------|
| Scoping maps | B — requirement map | xmemory-meta | scoping_maps |
| Mapped scoping questions (catalog rows) | B′ — meta catalog | xmemory-meta (same as B) | mapped_scoping_questions |
| Scoped views / about-cache | D — scoped projection | xmemory-op | scoped_views |
| Scoped entity payloads (optional; tier statistics) | D′ — op-tier documents | xmemory-op | x_scoped_data |
| Snapshots (optional; tier statistics) | D″ — op-tier documents | xmemory-op | x-snapshots |
| Things corpus (candidates by namespace + thingType) | C — evidence corpus | xmemory-meta | xmemory_things |
- createXmemoryDataTier() (alias: createScoperMongoClient()) builds one or two XronoxStore instances for the metadata/corpus side (scoping_maps + mapped_scoping_questions, with things when colocated) and one for the op side (scoped views), then returns an XmemoryMongoClient implementation that routes upsertOne, findOne, findMany (with skip/limit for maps, mapped questions, and things), updateOne / deleteOne on the two catalog collections, and createIndex to those stores.
- getXmemoryTierStatistics(tier, options) (src/tier-statistics.ts) requires a TierFetchScope (namespace plus optional entityTypeKey, extraMatch, datePresetIds) and returns paginated entity-type buckets per surface (matchedDocTotal, entityTypes with typeKey / docCount, entityTypesHasMore). It uses the same buildTierFetchMatch predicates as getTierFilterCatalog and host findMany (see below). Results are cached per full scope + pagination (entityTypesSkip / entityTypesLimit, capped by ENTITY_TYPES_LIMIT_MAX).
- getTierFilterCatalog(tier, options) (src/tier-filter-catalog.ts) returns generic date-preset metadata, a paginated entity-type page, and per-type observed fields (coverage + coarse BSON kinds) plus facet value lists (allowlisted dimensions, truncated with caps). TTL cache: DEFAULT_TIER_FILTER_CATALOG_TTL_MS.
- buildTierFetchMatch, buildDatePresetMatches, resolveSurfaceDbCollection (src/tier-fetch-scope.ts) — reuse these so list/detail queries match dashboard statistics/catalog semantics.
- listScopedDataDocuments, countScopedDataDocuments, getScopedDataDocumentById, getScopedDataDocumentsByIds (src/scoped-data-reads.ts) — paginated reads, counts, and id lookups on tier.env.scopedDataCollection using tier.runMongoAggregate, with the same TierFetchScope / buildTierFetchMatch semantics as surface: "scoped_data" statistics; allowlisted sort fields, optional escaped searchText, and optional includeTotal: true on list; caps: SCOPED_DATA_LIST_MAX_LIMIT, FIND_MANY_MAX_SKIP, SCOPED_DATA_MAX_BATCH_IDS (see docs/spec.md §7).
- listScopedDataEntityTypes — entity-type buckets for scoped_data only (a thin wrapper over getXmemoryTierStatistics); use getTierFilterCatalog when you need observed fields and facets.
- clearTierFetchCaches() — clears the statistics + catalog in-memory caches (tests / forced-refresh workflows).
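The escaped-searchText behavior matters if a host builds sibling $match stages that should treat user input literally. A sketch of literal-match escaping follows — the package's internal escaping for listScopedDataDocuments may differ, so treat this as an assumption-level illustration:

```typescript
// Escape user-supplied search text so a Mongo $regex matches it literally
// rather than interpreting it as a pattern. Sketch only.
function escapeForRegex(searchText: string): string {
  return searchText.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Example $match fragment a host might build for a case-insensitive search.
// The field name is hypothetical.
function searchMatch(field: string, searchText: string): Record<string, unknown> {
  return { [field]: { $regex: escapeForRegex(searchText), $options: "i" } };
}
```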
Breaking change (statistics): earlier releases allowed getXmemoryTierStatistics(tier) without a namespace (global counts). Callers must now pass { namespace } (the partition key per docs/spec.md).
- getScoperStoreBindings() / tier.storeBindings give the { database, collection } objects you pass into scoper APIs (metadataStore, metadataMappedQuestionsStore, thingsStore, persistCache). See docs/spec.md §2.2 for meta definitions vs op answers.
- Environment resolution via resolveXmemoryStoreEnv() matches the same env vars as scoper's coverage tooling (MAPS_COLLECTION, THINGS_DB, MONGO_XMEMORY_META_DB, etc.).
Coverage tooling vs runtime (dual write path)
Runtime (createScoper, scopeAbout, processThingTypeScopeViews, …) should use this package’s tier XmemoryMongoClient (tier.mongoClient) for the tier collections.
Coverage (npm run report:coverage in @x12i/xmemory-scoper, if you use it) uses the native mongodb driver (MongoClient, updateOne / updateMany) on the same URI. That is by design until the scoper port and report script are refactored to one abstraction. See docs/spec.md §7 and scoper docs/reports/ai-only-scoping-handoff.md.
What it does not do
- Canonical things and edges (xmemory-equal, xmemory-relations) are unchanged: scoper still needs XEqualClient and XRelationsClient wired to your existing Mongo stack.
- Full-stack E2E beyond this package's tests may live in @x12i/xmemory-scoper or your integration suite.
Architecture
The diagram below is the XMemory scoper tier (Xronox stores → Mongo). Memorix is separate: same MongoClient / URI, createMemorixDataTier targets env.memorixDb and per-entity collection names (see Memorix (core)).
```mermaid
flowchart TB
  subgraph scoperPkg [scoper package]
    CS[createScoper]
  end
  subgraph storePkg [xmemory store package]
    TIER[createXmemoryDataTier]
    ADAPTER[XmemoryMongoClient adapter]
    MS[mapsStore and corpusStore]
    OS[opStore]
  end
  subgraph xronoxStack [Xronox stack]
    XS[XronoxStore]
    XR[xronox driver]
  end
  subgraph mongo [MongoDB]
    DB1[(mapsDb thingsDb)]
    DB2[(opDb)]
  end
  CS -->|XmemoryMongoClient| ADAPTER
  TIER --> ADAPTER
  TIER --> MS
  TIER --> OS
  MS --> XS
  OS --> XS
  XS --> XR
  XR --> DB1
  XR --> DB2
```

Scoped view documents use a synthetic primary key scopedViewKey (a hash of scopingMapId + aboutThingId) inside xronox-store, while the adapter still accepts scoper's upsert filter { aboutThingId, scopingMapId }.
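The scopedViewKey derivation can be sketched as follows. The actual hash and field order inside makeScopedViewKey are internal to the package, so sha256 over a delimited concatenation is purely an assumption here — use the exported makeScopedViewKey / parseScopedViewUpsertFilter in real code:

```typescript
import { createHash } from "node:crypto";

// Sketch of a synthetic scoped-view primary key derived from the upsert
// filter { aboutThingId, scopingMapId }. Hash choice and delimiter are
// assumptions; only the shape (deterministic key from both ids) is documented.
function scopedViewKeySketch(scopingMapId: string, aboutThingId: string): string {
  // NUL delimiter avoids ambiguity between ("ab","c") and ("a","bc").
  return createHash("sha256").update(`${scopingMapId}\u0000${aboutThingId}`).digest("hex");
}
```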
Installation
Published to the public npm registry as @x12i/xmemory-store. No GitHub Packages auth is required to install.
```sh
npm install @x12i/xmemory-store
```

With scoper (example):

```sh
npm install @x12i/xmemory-store @x12i/xmemory-scoper
```

Compatibility: docs/spec.md §8.
Migrating from 1.x (breaking): NxMongoClient → XmemoryMongoClient; tier.nxMongo → tier.mongoClient; fallbackNxMongo → fallbackMongoClient; createScoperNxMongoClient → createScoperMongoClient; createXronoxNxMongoAdapter → createXronoxTierMongoAdapter. Module files: nx-mongo-client.ts → xmemory-mongo-client.ts, xronox-nx-mongo-adapter.ts → xronox-tier-mongo-adapter.ts. Downstream, createScoper({ nxMongo: … }) may keep the property name nxMongo until scoper renames it; pass tier.mongoClient as the value.
Publishing (npm publish)
- Ensure you are logged in to npm (npm login) and have publish rights to the @x12i scope. package.json uses "publishConfig": { "access": "public" } so scoped packages publish as public.
- Run npm publish from a clean build (prepublishOnly runs tsc).
- CI: set NPM_TOKEN and write //registry.npmjs.org/:_authToken=${NPM_TOKEN} into a temporary .npmrc before publish. See .npmrc.example.
Publish troubleshooting (403 / 402)
- Confirm npm whoami prints the npm user that owns @x12i.
- For automation tokens, enable the correct granular or classic token permissions for publish on npmjs.com.
Usage
Memorix (example)
```ts
import { createMemorixDataTier, listMemorixEntityDocuments } from "@x12i/xmemory-store";

const memorix = createMemorixDataTier({
  // mongoUri defaults from MONGO_URI / MONGO_CONNECTION_STRING; memorixDb from memorix_db / MEMORIX_DB
});
await memorix.init();
const page = await listMemorixEntityDocuments(memorix, {
  entityName: "assets",
  contentType: "scoped",
  filter: { status: "ready" },
  skip: 0,
  limit: 50,
  sort: { updatedAt: -1 },
  includeTotal: true,
});
// page.documents, page.totalCount — use countMemorixEntityDocuments for count-only
await memorix.close();
```

XMemory tier + scoper (example)
```ts
import { createScoper } from "@x12i/xmemory-scoper";
import { createXmemoryDataTier } from "@x12i/xmemory-store";

// Optional: use a scoper or host helper to build a driver-backed XmemoryMongoClient
// for enrichment reads (arbitrary db.collection findById).
const tier = createXmemoryDataTier({
  // Optional: fallbackMongoClient: yourDriverBackedClient,
});
await tier.init();
const scoper = createScoper({
  nxMongo: tier.mongoClient, // scoper option name; value is this package's XmemoryMongoClient
  xEqual,
  xRelations,
});
const { metadataStore, metadataMappedQuestionsStore, thingsStore, persistCache } = tier.storeBindings;
// Catalog CRUD: use the tier XmemoryMongoClient (tier.mongoClient) with metadataStore / metadataMappedQuestionsStore (see docs/spec.md §2.2).

// Example: scopeAbout — pass the store bindings + persistCache options scoper expects
await scoper.scopeAbout({
  namespace: "my-namespace",
  about: { thingId: "…" },
  metadataStore,
  scopingMapId: "…",
  persistCache: {
    enabled: true,
    database: persistCache.database,
    collection: persistCache.collection,
    ttlSeconds: 3600,
  },
});
```

Logging
By default, @x12i/xmemory-store only emits error logs from the shared log helper (library code should use log.debug / log.info / log.warn / log.error instead of raw console.*).
To enable richer local logs (debug / info / warn), set:
```sh
ENABLE_XMEMORY_STORE_LOGXER=true
XMEMORY_STORE_LOGS_LEVEL=debug
```

If ENABLE_XMEMORY_STORE_LOGXER is missing, empty, or not the literal true (case-insensitive), the effective level stays error regardless of XMEMORY_STORE_LOGS_LEVEL.
When rich logging is on, XMEMORY_STORE_LOGS_LEVEL may be debug, info, warn, error, or off (the legacy XMEMORY_STORE_LOG_LEVEL is read as a fallback if XMEMORY_STORE_LOGS_LEVEL is unset). If both are unset while enabled, the default is warn. The CLI continues to print usage and command output to stdout/stderr as usual.
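Putting the gating rules together, a host could model the effective level like this sketch. The handling of unknown level strings (falling back to warn) is an assumption; only the enable gate, the legacy-variable fallback, and the warn default are documented:

```typescript
type LogLevel = "debug" | "info" | "warn" | "error" | "off";

// Documented behavior: unless ENABLE_XMEMORY_STORE_LOGXER is literally "true"
// (case-insensitive), the effective level is "error". When enabled,
// XMEMORY_STORE_LOGS_LEVEL wins, legacy XMEMORY_STORE_LOG_LEVEL is a fallback,
// and the default is "warn". Sketch only, not the package's helper.
function effectiveLogLevel(env: Record<string, string | undefined>): LogLevel {
  const enabled = (env.ENABLE_XMEMORY_STORE_LOGXER ?? "").toLowerCase() === "true";
  if (!enabled) return "error";
  const raw = env.XMEMORY_STORE_LOGS_LEVEL ?? env.XMEMORY_STORE_LOG_LEVEL;
  const known = new Set<string>(["debug", "info", "warn", "error", "off"]);
  return raw !== undefined && known.has(raw) ? (raw as LogLevel) : "warn";
}
```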
Scoper-oriented helpers
- createScoperMongoClient(options) — same as createXmemoryDataTier(options); use whichever name fits your app. Pass fallbackMongoClient whenever scoper may call findById on collections outside the tier.
- withEnrichmentFallback(tier, fallbackMongoClient) — returns a new tier object with the same stores but the tier XmemoryMongoClient rebuilt to include a fallback (if you did not pass one at construction).
- Scoping catalog handoff (scoper + worox-graph): docs/scoping-catalog-downstream.md.
Programmatic overrides
You can pass partial ResolvedXmemoryStoreEnv fields into createXmemoryDataTier({ ... }) (mongoUri, mapsDb, thingsDb, opDb, mapsCollection, mappedQuestionsCollection, other collection names). metaDb is still accepted as an alias for mapsDb (deprecated).
Splitting corpus from maps database
If THINGS_DB (or env chain) resolves to a different database than METADATA_DB / MONGO_XMEMORY_META_DB, this package opens a second XronoxStore for things only. tier.mapsStore holds scoping_maps and mapped_scoping_questions; tier.corpusStore holds the things collection (same instance as mapsStore when both DBs match).
Enrichment and findById
Scoper’s enrichment path calls findById on arbitrary database / collection from thing metadata. The xronox adapter only handles the tier collections (scoping_maps, mapped_scoping_questions, things, scoped_views). Provide fallbackMongoClient (a driver-backed XmemoryMongoClient, e.g. from your host or scoper helper) so those reads still work.
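The routing split described above (tier collections handled by the adapter, everything else delegated to a driver-backed fallback) can be sketched with a reduced findById-only interface. The real XmemoryMongoClient surface and withEnrichmentFallback are larger than this; the collection names below are the documented defaults:

```typescript
// Reduced interface for illustration; the package's XmemoryMongoClient has more methods.
interface FindByIdClient {
  findById(database: string, collection: string, id: string): Promise<unknown>;
}

// Documented default tier collections the xronox adapter handles itself.
const TIER_COLLECTIONS = new Set(["scoping_maps", "mapped_scoping_questions", "xmemory_things", "scoped_views"]);

// Sketch of the delegation rule: tier collections → adapter; anything else →
// the driver-backed fallback client.
function withFallback(tierClient: FindByIdClient, fallback: FindByIdClient): FindByIdClient {
  return {
    async findById(database, collection, id) {
      if (TIER_COLLECTIONS.has(collection)) return tierClient.findById(database, collection, id);
      return fallback.findById(database, collection, id);
    },
  };
}
```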
findMany on catalog + things (skip, limit, caps)
XronoxStore’s readMany has no Mongo skip. For scoping_maps, mapped_scoping_questions, and the things collection, the adapter requests a bounded window (skip + limit, using FIND_MANY_DEFAULT_LIMIT_WITH_SKIP when limit is omitted and skip > 0) and slices in memory. Results are not cursor-stable if data changes between calls. Filters with Mongo operators (e.g. $or) still require fallbackMongoClient.
Hard limits (also enforced in @x12i/xmemory-scoper processThingTypeScopeViews for skip):
| Constant | Value | Role |
|----------|-------|------|
| FIND_MANY_MAX_SKIP | 50_000 | Larger skip throws. |
| FIND_MANY_MAX_WINDOW | 100_000 | skip + effectiveLimit for the internal readMany call cannot exceed this. |
| FIND_MANY_DEFAULT_LIMIT_WITH_SKIP | 10_000 | Default tail size when skip > 0 and limit is omitted. |
Import these from @x12i/xmemory-store if you need the same numbers in application code.
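Under those caps, the window resolution the text describes might look like the sketch below. The default when skip is 0 and limit is omitted is an assumption (the doc only defines the tail default for skip > 0), and in application code you should import the real constants from @x12i/xmemory-store rather than copying them:

```typescript
// Local copies of the documented caps so the sketch is self-contained;
// prefer importing FIND_MANY_* from @x12i/xmemory-store.
const FIND_MANY_MAX_SKIP = 50_000;
const FIND_MANY_MAX_WINDOW = 100_000;
const FIND_MANY_DEFAULT_LIMIT_WITH_SKIP = 10_000;

// Sketch of the documented rules: skip is capped, a default tail size applies
// when skip > 0 and limit is omitted, and skip + effectiveLimit must stay
// within the window the internal readMany call may load.
function resolveFindManyWindow(skip: number, limit?: number): { skip: number; effectiveLimit: number } {
  if (skip > FIND_MANY_MAX_SKIP) {
    throw new Error(`skip ${skip} exceeds FIND_MANY_MAX_SKIP (${FIND_MANY_MAX_SKIP})`);
  }
  // Assumption: with no skip and no limit, allow up to the full window.
  const effectiveLimit = limit ?? (skip > 0 ? FIND_MANY_DEFAULT_LIMIT_WITH_SKIP : FIND_MANY_MAX_WINDOW);
  if (skip + effectiveLimit > FIND_MANY_MAX_WINDOW) {
    throw new Error(`window ${skip + effectiveLimit} exceeds FIND_MANY_MAX_WINDOW (${FIND_MANY_MAX_WINDOW})`);
  }
  return { skip, effectiveLimit };
}
```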
projection
For tier collections, projection (Record<string, 0 | 1>) is applied after documents are loaded, to shallow top-level keys only (no dotted paths). Heavy workloads still pay the full-document read cost from the store.
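A sketch of these shallow-projection semantics follows. The exported applyShallowProjection is the authoritative helper; this local version also ignores Mongo's _id special-casing, which the real helper may handle differently:

```typescript
type Projection = Record<string, 0 | 1>;

// Sketch of the documented behavior: top-level keys only, applied in memory
// after the document is loaded. Inclusion mode keeps only keys marked 1;
// exclusion mode drops keys marked 0. No dotted-path support.
function shallowProject(doc: Record<string, unknown>, projection: Projection): Record<string, unknown> {
  const isInclude = Object.values(projection).some((v) => v === 1);
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(doc)) {
    const keep = isInclude ? projection[key] === 1 : projection[key] !== 0;
    if (keep) out[key] = value;
  }
  return out;
}
```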
Environment variables
| Variable | Purpose |
|----------|---------|
| MONGO_URI or MONGO_CONNECTION_STRING | Required for resolveXmemoryStoreEnv() and typical test runs |
| memorix_db, MEMORIX_DB | Memorix database name (first non-empty wins; default memorix); see resolveMemorixTierEnv() |
| FIREBASE_PROJECT_ID | With GOOGLE_SERVICE_ACCOUNT_BASE64 or GOOGLE_APPLICATION_CREDENTIALS, enables Catalox seed validate / seed apply and the matching Vitest live block |
| METADATA_DB, MONGO_XMEMORY_META_DB, MONGO_XMEMORY_METADATA_DB | Maps database (role B); first non-empty wins |
| THINGS_DB, MONGO_XMEMORY_META_DB | Corpus database (role C); corpus defaults to maps DB if unset |
| MONGO_XMEMORY_DB, VIEWS_DB, MONGO_XMEMORY_OPERATIONAL_DB | Scoped views database (role D); MONGO_XMEMORY_OPERATIONAL_DB is views only, not relations / record-stage operational DB |
| MAPS_COLLECTION, MAPPED_QUESTIONS_COLLECTION, VIEWS_COLLECTION, THINGS_COLLECTION, SCOPED_DATA_COLLECTION, X_SCOPED_DATA_COLLECTION, SNAPSHOTS_COLLECTION, … | Collection name overrides (see ENV_KEYS / docs/spec.md §2.1) |
See ENV_KEYS and resolveXmemoryStoreEnv in the published API for the exact key order.
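The "first non-empty wins" chains in the table can be modeled with a small resolver. Key order and the default below are illustrative; ENV_KEYS and resolveXmemoryStoreEnv() define the exact precedence:

```typescript
// Return the first non-empty value among the listed env keys, else the fallback.
// Sketch of the documented "first non-empty wins" rule, not the package's resolver.
function firstNonEmpty(env: Record<string, string | undefined>, keys: string[], fallback: string): string {
  for (const key of keys) {
    const value = env[key];
    if (value !== undefined && value.trim() !== "") return value;
  }
  return fallback;
}

// Example: maps database (role B), per the chain documented above.
const mapsDb = firstNonEmpty(
  { MONGO_XMEMORY_META_DB: "xmemory-meta" }, // stand-in for process.env
  ["METADATA_DB", "MONGO_XMEMORY_META_DB", "MONGO_XMEMORY_METADATA_DB"],
  "xmemory-meta",
);
```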
Equal and the things corpus: wire canonical things to thingsDb — THINGS_DB / MONGO_XMEMORY_META_DB, else the resolved maps database — plus THINGS_COLLECTION. MONGO_XMEMORY_DB is opDb (scoped views) only; it is not the equal/things database. Use tier.env.mongoUri, tier.env.thingsDb, and tier.env.thingsCollection after createXmemoryDataTier() so equal and any corpus XmemoryMongoClient match the tier. tier.mongoClient is the tier XmemoryMongoClient for scoper collections only, not equal’s persistence layer — see docs/spec.md §2.1, §7, and §9.
Operational notes / gotchas (real Mongo)
Host helpers that require a default DB in the URI path
This tier uses @x12i/xronox-store and does not require MONGO_URI to include a /dbname path; it resolves DB names (mapsDb, thingsDb, opDb) separately and opens the right DB explicitly.
However, some host-side Mongo helpers may require a default database either via:
- a /dbname in the Mongo URI path, or
- an explicit databaseName parameter (helper-specific).
MongoDB itself does not require a default database in the connection string; this is a helper expectation (outside this package).
If your MONGO_URI has no pathname, build a helper-specific URI by appending the desired role DB. Example (choose thingsDb for equal/corpus flows; opDb for scoped views tooling):
```ts
import { resolveXmemoryStoreEnv } from "@x12i/xmemory-store";

const env = resolveXmemoryStoreEnv();
const u = new URL(env.mongoUri);
if (!u.pathname || u.pathname === "/") u.pathname = `/${env.thingsDb}`;
// Optional if your deployment needs it:
// u.searchParams.set("authSource", u.searchParams.get("authSource") ?? "admin");
const helperMongoUri = u.toString();
```

E11000 during ensureIndexes() on the things corpus unique index
If you see an index build error like:
```
E11000 duplicate key … uniq_things_ns_thingType_kind_refNorm … { namespace: null, thingType: null, kind: null, refNorm: null }
```
that typically indicates pre-existing legacy/bad documents already in the corpus collection (nulls for fields included in the unique index), or a shared operational DB being reused across test runs. The remedy is operational (clean/quarantine offending rows) or test isolation (ephemeral DB/collection per run), not a change to this tier’s env resolution or collection defaults.
If you already suspect such rows exist, you can locate them with mongosh and either quarantine or delete them. Adjust DB/collection names to your deployment (often thingsDb / thingsCollection).
```js
// Find candidate "bad rows" (top-level fields shown here; adapt if your corpus uses `_header.*`)
db.xmemory_things.find({
  $or: [
    { namespace: null, thingType: null, kind: null, refNorm: null },
    { namespace: { $exists: false }, thingType: { $exists: false }, kind: { $exists: false }, refNorm: { $exists: false } },
  ],
}).limit(20);

// Option A: quarantine (copy) then delete (safer than blind delete)
db.xmemory_things.aggregate([
  { $match: { namespace: null, thingType: null, kind: null, refNorm: null } },
  { $merge: { into: "xmemory_things_quarantine", whenMatched: "keepExisting", whenNotMatched: "insert" } },
]);
db.xmemory_things.deleteMany({ namespace: null, thingType: null, kind: null, refNorm: null });
```

CLI: quarantine/delete “bad things” documents
This package also ships a small CLI you can run against a live DB to quarantine (default) or delete documents with all-null or all-missing unique-key fields.
```sh
# Dry-run (default): prints counts + sample ids
MONGO_URI="mongodb://127.0.0.1:27017" THINGS_DB="xmemory-meta" THINGS_COLLECTION="xmemory_things" \
  npx -y @x12i/xmemory-store xmemory-store fix-bad-things-docs --layout auto

# Apply quarantine (copies to `<thingsCollection>_quarantine`, then deletes from source)
MONGO_URI="mongodb://127.0.0.1:27017" THINGS_DB="xmemory-meta" THINGS_COLLECTION="xmemory_things" \
  npx -y @x12i/xmemory-store xmemory-store fix-bad-things-docs --layout auto --apply --mode quarantine
```

Scripts
| Command | Description |
|---------|-------------|
| npm run build | Compile TypeScript to dist/ |
| npm test | Vitest (see Testing below) |
| npm run catalox:seed:memorix:validate | Validate catalox-seeds/memorix-entity-content-types.manifest.json (Firebase Admin env) |
| npm run catalox:seed:memorix:apply | Idempotent Catalox seed: catalog memorix_entity_content_types, descriptor, bindings, native items (--god; Firebase Admin env) |
Testing
- Memorix + Catalox: with MONGO_URI, test/memorix.integration.test.ts exercises createMemorixDataTier, reads (countMemorixEntityDocuments, listMemorixEntityDocuments / includeTotal), and writes (insert / replace / patch / update / delete) on an ephemeral Memorix database. With FIREBASE_PROJECT_ID plus GOOGLE_SERVICE_ACCOUNT_BASE64 or GOOGLE_APPLICATION_CREDENTIALS, Vitest runs catalox:seed:memorix:validate then catalox:seed:memorix:apply in test/memorix-catalox-seed.test.ts (idempotent catalog seed). Skipped when unset. Warning: apply writes to the configured Firebase project; use a dev/staging project in CI unless you intend to mutate that catalog in production.
- Without MONGO_URI / MONGO_CONNECTION_STRING: unit tests only (defaults, scoped-view keys, applyShallowProjection, adapter routing with mocks). Fast and suitable for CI with no database.
- With MONGO_URI (or MONGO_CONNECTION_STRING) set: the suite also runs test/mongo.integration.test.ts, which creates ephemeral databases, calls tier.init(), and checks:
  - map upsertOne / findOne
  - scoped view composite upsertOne / findOne
  - things findMany skip / limit consistency (same filter, sliced window)
  - scoped_views TTL-style index presence after init()
  - getXmemoryTierStatistics + getTierFilterCatalog under a shared namespace (scoped counts, catalog facets, cache behavior)
  - withEnrichmentFallback delegating findById to a fallback client
  - mapper env aliases (FR-ST-001) resolving mapsDb / opDb from alternate vars
Vitest loads .env when present (see vitest.config.ts), so local live runs are typically:
```sh
export MONGO_URI="mongodb://127.0.0.1:27017"
npm test
```

API surface (summary)
| Export | Role |
|--------|------|
| createMemorixDataTier, countMemorixEntityDocuments, listMemorixEntityDocuments, insertMemorixEntityDocument, replaceMemorixEntityDocument, patchMemorixEntityDocument, updateMemorixEntityDocument, deleteMemorixEntityDocument, memorixEntityCollectionName, assertMemorixEntityName, resolveMemorixTierEnv, MEMORIX_ENTITY_CONTENT_TYPES, limits (MEMORIX_LIST_MAX_LIMIT, …) | Memorix per-entity Mongo tier (reads + persistence); see docs/spec.md §2.3 and docs/xmemory-entities-client.md |
| createXmemoryDataTier, createScoperMongoClient | Main entry: stores + XmemoryMongoClient (mongoClient) + runMongoAggregate + storeBindings + init / close |
| listScopedDataDocuments, countScopedDataDocuments, getScopedDataDocumentById, getScopedDataDocumentsByIds, listScopedDataEntityTypes | Reads on tier.env.scopedDataCollection (TierFetchScope); list supports includeTotal; see docs/xmemory-entities-client.md |
| getXmemoryTierStatistics, clearTierStatisticsCache, DEFAULT_TIER_STATISTICS_TTL_MS | Scoped statistics (TierFetchScope); entityTypesSkip / entityTypesLimit; surfaces subset optional |
| getTierFilterCatalog, clearTierFilterCatalogCache, DEFAULT_TIER_FILTER_CATALOG_TTL_MS | UX-oriented catalog (observed fields + facets + date-preset metadata) |
| clearTierFetchCaches | Clears both statistics and catalog caches |
| buildTierFetchMatch, TierFetchScope, TierFilterSurface, TIER_DATE_PRESET_IDS, facet/path constants | Same $match semantics for mongoClient.findMany as for tier analytics |
| TIER_STATS_MISSING_TYPE | Sentinel __missing__ entity-type bucket |
| withEnrichmentFallback | Attach a fallback XmemoryMongoClient (fallbackMongoClient) to an existing tier without reopening stores |
| getScoperStoreBindings | Build { metadataStore, metadataMappedQuestionsStore, thingsStore, persistCache } from a resolved env |
| resolveXmemoryStoreEnv | Read env → ResolvedXmemoryStoreEnv |
| FIND_MANY_MAX_SKIP, FIND_MANY_MAX_WINDOW, FIND_MANY_DEFAULT_LIMIT_WITH_SKIP | Things findMany caps (see above) |
| applyShallowProjection | Test / advanced use: same shallow projection helper the adapter uses |
| createXronoxTierMongoAdapter | Lower-level: custom XronoxStore wiring |
| makeScopedViewKey, parseScopedViewUpsertFilter | Scoped view PK helpers |
| mapsCollectionDef, scopedViewsCollectionDef, thingsCollectionDef | XronoxStore collection definitions |
| COLLECTION_DEFAULTS, ENV_KEYS, DEFAULT_* (incl. DEFAULT_SCOPED_DATA_COLLECTION, DEFAULT_SNAPSHOTS_COLLECTION) | Constants aligned with the spec |
Documentation
- docs/xmemory-entities-client.md — Memorix-first client guide: entities, content types, scoped data reads, pagination, counts, Catalox alignment.
- docs/spec.md — roles A–E, Mongo defaults, §2.1 env precedence, acceptance matrix, coverage vs runtime, XmemoryMongoClient semantics, §8 version compatibility, §9 host: mapper + scoper.
- docs/check-with-xmemory-store.md — cross-repo alignment checklist (equal, XmemoryMongoClient, scoper, xronox-store).
- docs/host-integration-mapper-store-scoper.md — mapper + tier + scoper composition, env aliases (FR-ST-001), FR-ST-002 narrative. @x12i/xmemory-records-mapper (or your mapper package) should link here (FR-MAP-001). The legacy name fr-xmemory-store-data-tier-integration.md redirects to the same content.
Published npm tarballs include dist/, README.md, and catalox-seeds/ (see package.json files). Other docs/ files are read from the GitHub repository unless you add them to files in a deliberate release.
License
See the repository’s license file if one is provided.
