
@mnexium/core

v0.1.1


🧠 CORE

CORE is Mnexium's memory engine: a Postgres-backed HTTP service for durable memory, claim extraction, truth-state resolution, and retrieval for LLM applications.

It is built to run standalone and integrate cleanly into existing platform stacks.

✨ Why LLM App Teams Use CORE

  • Grounded outputs: retrieve durable user memory instead of relying only on short chat context.
  • Persistent personalization: keep preferences, history, and decisions across sessions/channels.
  • Lower hallucination risk: combine memory retrieval with claim/slot truth state.
  • Context-window relief: recall important memory on demand without re-prompting everything.
  • Faster shipping: use a ready memory/truth backend instead of building custom memory infra.

🔩 What CORE Provides

  • Memory lifecycle APIs: create, list, search, update, soft-delete, restore.
  • Memory extraction from text (/api/v1/memories/extract) with optional learning writes.
  • Claim APIs with slot-based truth resolution and retraction workflows.
  • Retrieval engine with vector + lexical search and LLM-expanded modes.
  • SSE stream for memory events (memory.created, memory.superseded, memory.updated, memory.deleted).
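
As a concrete example, here is a minimal sketch of preparing a call to the extraction endpoint listed above. The path `/api/v1/memories/extract` comes from this README; the payload field names (`user_id`, `text`, `learn`) are assumptions, not the documented contract.

```python
import json

def build_extract_request(base_url: str, user_id: str, text: str,
                          learn: bool = False):
    """Build (url, body) for POST /api/v1/memories/extract.

    The `learn` flag sketches the README's "optional learning writes";
    the real request schema may differ.
    """
    url = f"{base_url}/api/v1/memories/extract"
    body = json.dumps({"user_id": user_id, "text": text, "learn": learn})
    return url, body

url, body = build_extract_request(
    "http://localhost:8080", "u-123",
    "I moved to Lisbon and prefer dark mode.")
```

Send the resulting `url`/`body` with any HTTP client; extracted memories come back (and are persisted only when learning writes are enabled).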

🧱 Core Concepts

  • Memory: user-scoped durable facts/context.
  • Claim: structured assertion (predicate, object_value, metadata).
  • Slot state (slot_state): active winner for a semantic slot.
  • Supersession: medium-similarity memories can be marked superseded by newer memories.
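
To make the claim/slot relationship concrete, here is a toy sketch of slot resolution: one "active winner" per semantic slot, with later claims superseding earlier ones. The field shapes and the recency-only policy are assumptions; CORE's actual resolution may weigh more than recency.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    predicate: str       # the semantic slot, e.g. "lives_in"
    object_value: str    # the asserted value
    created_at: int      # logical timestamp (assumed field)

def resolve_slot_state(claims):
    """Pick the active winner per predicate slot: latest claim wins."""
    winners = {}
    for c in sorted(claims, key=lambda c: c.created_at):
        winners[c.predicate] = c  # later claims overwrite earlier ones
    return winners

state = resolve_slot_state([
    Claim("lives_in", "Berlin", 1),
    Claim("lives_in", "Lisbon", 2),
])
```

Here `state["lives_in"]` holds the Lisbon claim; the Berlin claim is the superseded loser for that slot.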

🔎 Retrieval Intelligence

When LLM retrieval expansion is enabled, search classifies queries into:

  • broad: profile/summary recall (importance + recency weighted).
  • direct: specific fact lookup with truth/claim-aware boosts.
  • indirect: advice/planning prompts with expanded query set + rerank.

Fallback behavior is built in:

  • missing LLM provider keys -> simple retrieval/extraction mode
  • missing embedding key -> non-vector lexical path still works
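
The two fallbacks above can be sketched as a simple path selector. The mode names returned here are illustrative labels, not CORE's internal identifiers.

```python
from typing import Optional

def choose_retrieval_path(llm_key: Optional[str],
                          embedding_key: Optional[str]) -> str:
    """Mirror the documented fallbacks: no LLM provider key means the
    simple retrieval/extraction mode; no embedding key means the
    non-vector lexical path still works."""
    if not llm_key:
        return "simple"
    if not embedding_key:
        return "lexical"
    return "vector+lexical+expand"
```

With both keys present you get the full classify/expand/rerank pipeline; each missing key degrades gracefully rather than failing.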

⚙️ Runtime Modes

CORE_AI_MODE supports:

  • auto (default): cerebras -> openai -> simple
  • cerebras: requires CEREBRAS_API (else falls back to simple)
  • openai: requires OPENAI_API_KEY (else falls back to simple)
  • simple: no LLM client

USE_RETRIEVAL_EXPAND controls search-time classify/expand/rerank behavior.
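
The mode table above can be expressed as a small resolver. The variable names `CORE_AI_MODE`, `CEREBRAS_API`, and `OPENAI_API_KEY` are taken from this README; the resolution function itself is a sketch of the documented behavior, not CORE's source.

```python
def resolve_ai_mode(env: dict) -> str:
    """Resolve the effective provider from CORE_AI_MODE:
    auto tries cerebras, then openai, then simple; explicit modes
    fall back to simple when their key is absent."""
    mode = env.get("CORE_AI_MODE", "auto")
    if mode == "auto":
        if env.get("CEREBRAS_API"):
            return "cerebras"
        if env.get("OPENAI_API_KEY"):
            return "openai"
        return "simple"
    if mode == "cerebras":
        return "cerebras" if env.get("CEREBRAS_API") else "simple"
    if mode == "openai":
        return "openai" if env.get("OPENAI_API_KEY") else "simple"
    return "simple"
```

For example, an empty environment resolves to `simple`, and `CORE_AI_MODE=openai` without `OPENAI_API_KEY` also falls back to `simple`.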

🚀 Quick Start

Use the setup guide for the complete runbook, Docker path, and environment reference.

🧪 API Surface

Key route groups:

  • health: GET /health
  • memories: /api/v1/memories*
  • claims/truth: /api/v1/claims*
  • events: GET /api/v1/events/memories
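
The events route streams the memory events listed earlier (`memory.created`, `memory.superseded`, `memory.updated`, `memory.deleted`). Assuming the stream uses the standard `text/event-stream` wire format, a minimal consumer sketch looks like this; the exact event payload shapes are not documented here.

```python
def parse_sse_events(raw: str):
    """Minimal SSE parser for GET /api/v1/events/memories output."""
    events, event, data = [], None, []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[6:].strip()
        elif line.startswith("data:"):
            data.append(line[5:].strip())
        elif line == "" and (event or data):
            # blank line terminates one event
            events.append({"event": event, "data": "\n".join(data)})
            event, data = None, []
    if event or data:  # flush a trailing, unterminated event
        events.append({"event": event, "data": "\n".join(data)})
    return events

sample = 'event: memory.created\ndata: {"id": 1}\n\n'
parsed = parse_sse_events(sample)
```

Feeding the stream body through this parser yields one dict per event, keyed by the event name and its raw JSON data.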

Full endpoint contracts are documented in the API reference.

🛡️ Production Posture

CORE is integration-first. Auth, tenancy policy, idempotency strategy, and event bus scaling are intentionally externalized so you can fit CORE into your existing platform controls.

A production checklist is included in the documentation.

📚 Documentation Map