
@equationalapplications/react-llm-wiki

v4.1.0

React hooks and web utilities for LLM Wiki Memory.

React hooks and web utilities for @equationalapplications/core-llm-wiki, designed for web and Expo.

Inspired by Andrej Karpathy's LLM Wiki memory spec.

Features

  • Semantic search — Vector embeddings with optional embed function and MiniSearch fallback
  • Retrieval tuning — Per-call overrides for hybrid scoring, pre-filtering, result limits
  • Reactive reads — Auto-refetch on entityId, query, or options changes
  • Mutation hooks — useWikiWrite, useWikiIngest, useWikiForget, useWikiMaintenance, etc.
  • Shared context — Single WikiProvider per app, use anywhere
  • Full-featured memory — Facts, tasks, events, maintenance jobs (librarian, heal, reembed, prune)

Installation

npm install @equationalapplications/react-llm-wiki

Semantic Search Setup

Enable vector-based retrieval by providing an embed function in WikiOptions:

import { WikiProvider, createWiki } from '@equationalapplications/react-llm-wiki';

const wiki = createWiki(adapter, {
  config: {
    preFilterLimit: 50,    // Optimize for wikis with 500+ facts
    hybridWeight: 0.7,     // Blend semantic (70%) + keyword (30%)
  },
  llmProvider: {
    generateText: async ({ systemPrompt, userPrompt }) => {
      // Your LLM
      return 'Model output';
    },
    embed: async (text: string) => {
      // Your embedding service
      const res = await fetch('https://your-app.example.com/api/embed', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text }),
      });
      const { embedding } = await res.json();
      return embedding; // number[]
    },
  },
  onRetrievalFallback: (error) => {
    console.warn('Embeddings unavailable, using keyword search:', error);
  },
});

await wiki.setup();

<WikiProvider wiki={wiki}>
  <App />
</WikiProvider>

Setup

React (with any SQLiteAdapter):

import { WikiProvider, createWiki } from '@equationalapplications/react-llm-wiki';

// Create wiki instance and initialize tables
const wiki = createWiki(adapter, options);
await wiki.setup();

// Wrap app
<WikiProvider wiki={wiki}>
  <App />
</WikiProvider>

Expo / React Native (@equationalapplications/expo-llm-wiki re-exports both createWiki and WikiProvider):

import { createWiki, WikiProvider } from '@equationalapplications/expo-llm-wiki';
import { openDatabaseSync } from 'expo-sqlite';

const db = openDatabaseSync('wiki.db');
const wiki = createWiki(db, options);
await wiki.setup();

<WikiProvider wiki={wiki}>
  <App />
</WikiProvider>

Configuration

All WikiConfig fields are optional:

const wiki = createWiki(adapter, {
  llmProvider: { /* ... */ },
  config: {
    tablePrefix: 'llm_wiki_',          // default: 'llm_wiki_'
    maxResults: 10,                    // default: 10
    autoLibrarianThreshold: 20,        // default: 20 — events before librarian auto-runs
    autoHealThreshold: 100,            // default: 100 — events before heal auto-runs
    maxChunkLength: 12000,             // default: 12000 (char count per ingestDocument chunk)
    chunkOverlap: 400,                 // default: 400 (overlap between chunks in characters)
    chunkConcurrency: 1,               // default: 1 (parallel LLM calls per ingestDocument)
    pruneRetainSoftDeletedFor: 7,      // default: 7 (days before hard-deleting soft-deleted facts)
    pruneEventsAfter: 30,              // default: 30 (days before hard-deleting old events)
    orphanAfterDays: 30,               // default: 30 (days before runHeal flags sourceless facts; null to disable)
    staleInferredAfterDays: 60,        // default: 60 (days before runHeal downgrades inferred facts; null to disable)
    preFilterLimit: 50,                // default: undefined — MiniSearch pre-filter before cosine scan; recommended for >500 facts
    hybridWeight: 0.7,                 // default: undefined — blend semantic (1.0) ↔ keyword (0.0); pure semantic when unset
  },
});
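To make the chunking settings above concrete, here is a minimal sketch of how maxChunkLength/chunkOverlap-style splitting could work. chunkText is a hypothetical helper for illustration, not the package's actual splitter:

```typescript
// Hypothetical sketch of overlap-based chunking in the spirit of
// maxChunkLength / chunkOverlap. Each chunk starts (maxChunkLength - chunkOverlap)
// characters after the previous one, so consecutive chunks share context.
function chunkText(text: string, maxChunkLength = 12000, chunkOverlap = 400): string[] {
  if (text.length <= maxChunkLength) return [text];
  const chunks: string[] = [];
  const step = maxChunkLength - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + maxChunkLength));
    if (start + maxChunkLength >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

With the defaults, a 25,000-character document yields three chunks, each overlapping its neighbor by 400 characters.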

Retrieval Tuning

Optimize read() performance and blend retrieval strategies:

const config = {
  // Limit cosine similarity scoring to top-K MiniSearch keyword candidates
  preFilterLimit: 50,
  
  // Blend semantic and keyword scores (0.0 = pure keyword, 1.0 = pure semantic)
  hybridWeight: 0.7,
  
  // Max results returned per read
  maxResults: 10,
};

const wiki = createWiki(adapter, {
  config,
  llmProvider: { /* ... */ },
});

How hybridWeight blends the two scores:

  • hybridWeight: 1.0 → pure semantic scoring among the candidates being scored; if preFilterLimit is set, semantic scoring is still limited to the top-K MiniSearch matches
  • hybridWeight: 0.5 → balanced semantic + keyword (50/50 blend)
  • hybridWeight: 0.0 → pure keyword ranking, skips embed() entirely (no LLM API cost)
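The bullets above amount to a weighted average of the two per-fact scores. A schematic (hypothetical, not the library's actual scoring code):

```typescript
// Schematic of hybridWeight blending. `semantic` and `keyword` are
// per-fact scores normalized to [0, 1]; hybridWeight picks the mix.
function blendScore(semantic: number, keyword: number, hybridWeight: number): number {
  if (hybridWeight === 0) return keyword;   // pure keyword: embed() is never called
  if (hybridWeight === 1) return semantic;  // pure semantic ranking
  return hybridWeight * semantic + (1 - hybridWeight) * keyword;
}
```

For example, with hybridWeight: 0.7, a fact scoring 0.9 semantically and 0.3 by keyword ranks at roughly 0.72.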

Per-call overrides:

const { data } = useMemoryRead('user-123', 'preferences', {
  maxResults: 5,
  preFilterLimit: 20,      // Tighter pre-filter for speed
  hybridWeight: 0.5,       // More keyword weight
});

Hooks

useMemoryRead(entityId, query, options?)

Fetch memory reactively. Auto-refetches when entityId, query, wiki, or options change, including per-call overrides such as maxResults, preFilterLimit, and hybridWeight.

const { data, isPending, error, refetch } = useMemoryRead('user-123', 'preferences');

if (isPending) return <div>Loading...</div>;
if (error) return <div>Error: {error.message}</div>;

return (
  <div>
    {data?.facts.map(fact => (
      <div key={fact.id}>
        <strong>{fact.title}</strong>: {fact.body}
      </div>
    ))}
  </div>
);

With tuning overrides:

const { data } = useMemoryRead('user-123', 'preferences', {
  maxResults: 5,
  hybridWeight: 0.8,
});

useWikiWrite()

Record observations and events. The librarian job extracts facts from accumulated events. This does not currently invalidate useMemoryRead() results automatically, so existing readers keep their previous data until their inputs change or refetch() is called.

const { execute, isPending, error } = useWikiWrite();

const handleSave = async () => {
  try {
    await execute('user-123', { 
      event_type: 'observation', 
      summary: 'User prefers async/await' 
    });
  } catch (e) {
    console.error('Write failed:', e);
  }
};

return <button onClick={handleSave} disabled={isPending}>Save</button>;

useWikiIngest()

Ingest documents into memory. Parses facts and tasks from document chunks.

const { execute, isPending, error } = useWikiIngest();

const handleIngest = async (document: string) => {
  const sourceHash = await calculateHash(document);
  try {
    await execute('user-123', {
      sourceRef: 'doc-readme',
      sourceHash,
      documentChunk: document,
    });
  } catch (e) {
    console.error('Ingest failed:', e);
  }
};

useWikiForget()

Delete entries from memory by ID.

const { execute, isPending, error } = useWikiForget();

const handleDelete = async (factId: string) => {
  try {
    await execute('user-123', { entryId: factId });
  } catch (e) {
    console.error('Delete failed:', e);
  }
};

useWikiMaintenance()

Run background maintenance jobs:

  • librarian — deduplication and fact extraction from accumulated events
  • heal — LLM-driven fact review: removes orphaned facts, downgrades stale inferences, repairs incorrect facts
  • reembed — converts TEXT embeddings to BLOB, or updates embeddings after a model change
  • prune — hard-deletes soft-deleted entries/tasks after the retention window and prunes old events

const { runLibrarian, runHeal, runReembed, runPrune, isPending, error, lastResult } = useWikiMaintenance();

// Deduplicate and consolidate facts from events
await runLibrarian('user-123');

// LLM-driven fact review: remove orphaned/stale facts, repair incorrect inferences
await runHeal('user-123');

// Re-embed all facts with the current provider (e.g. after switching models or
// enabling an embed provider for the first time — default always re-embeds all):
const { embedded, skipped, failed } = await runReembed('user-123');
// After a round-trip export/import on the same model, skip facts that already have vectors:
const { embedded: embeddedRoundtrip, skipped: skippedRoundtrip, failed: failedRoundtrip } = await runReembed('user-123', { skipExisting: true });

// Hard-delete soft-deleted entries/tasks after retention and prune old events
await runPrune('user-123');

lastResult note: runLibrarian, runHeal, and runPrune each update lastResult on success. runReembed intentionally does not — it clears lastResult to null at start but leaves it null on completion. This avoids a source-breaking change to the MaintenanceResult type for consumers that exhaustively switch on lastResult.operation. Use the Promise return value to inspect reembed results.

useWikiHasChanged()

Check if a source document has changed since last ingest.

const { execute, lastResult, isPending, error } = useWikiHasChanged();

const handleCheckChanges = async (sourceRef: string, sourceHash: string) => {
  const changed = await execute('user-123', sourceRef, sourceHash);
  if (changed) {
    console.log('Document has been updated, re-ingest recommended');
  }
};

useWikiExport()

Export memory dump.

const { execute, lastResult, isPending, error } = useWikiExport();
await execute(['user-123']);
// lastResult: MemoryDump | null

Component Lifecycle

flowchart TD
    A["<WikiProvider wiki={wiki}>"] --> B["App Components"]
    B --> C{"Use Hook?"}
    C -->|"useMemoryRead(entityId, query, options?)"| D["[Read Memory]"]
    C -->|"useWikiWrite()"| E["[Write Memory]"]
    C -->|"useWikiIngest()"| F["[Ingest Document]"]
    C -->|"useWikiForget()"| G["[Delete Memory]"]
    C -->|"useWikiMaintenance()"| H["[Run Jobs]"]
    D --> I{"entityId, query,<br/>ReadOptions, or wiki changed?"}
    I -->|"Yes"| J["Auto-refetch"]
    I -->|"No"| K["Return cached data"]
    J --> L["Trigger read()"]
    L --> M["Embed query<br/>if embed available"]
    M --> N["Phase 1: Score facts<br/>Phase 2: Fetch winners"]
    N --> O["Update component state"]
    O --> P["Re-render with data"]
    E --> Q["Execute write()"]
    F --> Q
    G --> Q
    H --> Q
    Q --> R["Write completes"]

Data flow:

  1. Wrap app with <WikiProvider wiki={wiki}> — provides wiki context
  2. Use hooks in components — access memory reactively
  3. Read operations auto-refetch when entityId, query, wiki, or ReadOptions values change; call refetch() to refresh manually
  4. Write operations (write, ingest, forget, maintenance) do not automatically re-trigger useMemoryRead; call refetch() after a write to refresh read results
  5. Re-render with new data flowing back to UI

Retrieval Engine Internals

flowchart TD
    A["read(entityId, query)"] --> B{hybridWeight = 0?}
    B -->|Yes| C["MiniSearch only<br/>(skip embed)"]
    B -->|No| D{embed available?}
    D -->|No| C
    D -->|Yes| F["Embed query"]
    F -->|throws| E["onRetrievalFallback<br/>callback"]
    E --> C
    F -->|succeeds| G{preFilterLimit<br/>active?}
    G -->|Yes| H["MiniSearch pre-filter<br/>top K candidates"]
    H --> I["Phase 1: Cosine score<br/>top K candidates"]
    G -->|No| J["Phase 1: Cosine score<br/>all facts"]
    J --> K["Cache vectors<br/>in-memory<br/>(full scan only)"]
    K --> L{hybridWeight = 1?}
    I --> L
    L -->|Yes| M["Pure semantic<br/>ranking"]
    L -->|No| N["Hybrid blend:<br/>semantic + keyword<br/>via MiniSearch"]
    M --> O["Phase 2: Fetch full rows<br/>top maxResults"]
    N --> O
    C --> P["MiniSearch ranking"]
    P --> O
    O --> R["Track access"]
    R --> Q["Return MemoryBundle"]

The flowchart shows:

  1. Fast-path when hybridWeight = 0 (pure keyword, no embed cost)
  2. Fallback chain: when embed is unavailable, reads silently fall back to MiniSearch; when embed throws, the onRetrievalFallback callback fires and MiniSearch takes over
  3. Pre-filtering to limit cosine scoring to top-K keyword matches (O(N) → O(K))
  4. Two-phase SELECT: phase 1 scores all/filtered facts with minimal columns, phase 2 fetches full rows for winners
  5. Hybrid scoring to blend semantic and keyword rankings
  6. Vector caching on full scans only; reads with preFilterLimit active skip cache population
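The O(N) → O(K) pre-filter in step 3 can be sketched as follows. This is a hypothetical illustration (a simple keywordScore stands in for MiniSearch), not the engine's code:

```typescript
// Hypothetical sketch of the preFilterLimit optimization: cosine-score only
// the top-K keyword candidates instead of every fact in the wiki.
type CandidateFact = { id: string; keywordScore: number; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard against zero vectors
}

function preFilteredSemanticRank(
  facts: CandidateFact[],
  queryVector: number[],
  preFilterLimit: number,
): { id: string; score: number }[] {
  // Phase 1a: keep only the top-K keyword matches (MiniSearch stand-in)
  const candidates = [...facts]
    .sort((a, b) => b.keywordScore - a.keywordScore)
    .slice(0, preFilterLimit);
  // Phase 1b: cosine-score just those K candidates, then rank
  return candidates
    .map((f) => ({ id: f.id, score: cosine(f.vector, queryVector) }))
    .sort((a, b) => b.score - a.score);
}
```

With N facts and preFilterLimit K, only K vectors are ever compared against the query embedding, which is why the config comments above recommend it for wikis with 500+ facts.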

License

MIT


Made with ❤️ by Equational Applications LLC. https://equationalapplications.com/