
@tensakulabs/memory

v0.1.4

Three-tier memory management (hot/warm/cold) with REM Sleep batch consolidation for AI agents.

Designed for agents that need persistent, cost-efficient memory across sessions — without paying for every write.

How It Works

flowchart LR
    A([Agent Session]) -->|memory stage| B[(rem-staging.jsonl\n$0 cost)]
    B -->|REM Sleep| C{Classify}
    C -->|hot| D[MEMORY.md\nsuggested only]
    C -->|warm| E[warm.jsonl\n7-day TTL]
    C -->|cold| F[(mem0\ndeduplicated)]

  1. Stage — during sessions, memory stage "fact" appends to a local JSONL file ($0)
  2. Sleep — REM Sleep runs periodically (cron, launchd, or manual), classifies facts, deduplicates, and routes to the right tier
  3. Serve — memory serve starts an MCP stdio server so agents can stage facts as a tool call
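The staging step is just an append to a local JSONL file, which is why it costs nothing. A minimal sketch of that append, assuming a simple record shape (the field names here are illustrative, not the package's actual schema):

```typescript
// Illustrative staging record; the real on-disk schema may differ.
interface StagedFact {
  fact: string;
  context?: string;
  tierHint?: "hot" | "warm" | "cold";
  stagedAt: string; // ISO-8601 timestamp
}

// Serialize one fact as a single JSONL line, ready to append to rem-staging.jsonl.
function toStagingLine(fact: StagedFact): string {
  return JSON.stringify(fact) + "\n";
}

const line = toStagingLine({
  fact: "Bun is the preferred runtime for this project",
  stagedAt: new Date().toISOString(),
});
// In the CLI this line would be appended with fs.appendFileSync -- no API call, $0.
```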

Install

bun add -g @tensakulabs/memory

Quick Start

# Create config
memory init

# Stage a fact during a session (free)
memory stage "Bun is the preferred runtime for this project"
memory stage "API rate limit is 40 RPM" --tier cold
memory stage "Team uses PST timezone" --context "scheduling"

# Preview what REM Sleep would do
memory sleep --dry-run

# Run batch processor
memory sleep

# Start MCP server (for tool-based agents)
memory serve

Commands

memory stage

Append a fact to the staging queue. Zero cost — local file write only.

memory stage "fact"                              # auto-classify tier
memory stage "fact" --tier cold                  # force tier: hot, warm, or cold
memory stage "fact" --context "why it matters"   # add context
memory stage --list                              # show pending facts
memory stage --count                             # count pending

memory sleep

Run the REM Sleep batch processor. Classifies staged facts, deduplicates against mem0, and routes to storage.

memory sleep              # full run
memory sleep --dry-run    # preview decisions without writing
memory sleep --stats      # show staging statistics only

Behavior:

  • Hot candidates printed as suggestions — never auto-written (requires human review)
  • Cold candidates deduplicated: skipped if >85% match exists in mem0
  • Max 5 cold writes per run to cap cost (~$0.005/run)
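Those three rules can be sketched as a routing loop. This is an illustrative reconstruction, not the package's source: `similarity` and the record shape are stand-ins, but the invariants match the documented behavior (hot is suggest-only, cold is deduplicated above the 0.85 threshold, and cold writes are capped at 5 per run).

```typescript
type Tier = "hot" | "warm" | "cold";

interface Routed {
  hotSuggestions: string[];
  warmWrites: string[];
  coldWrites: string[];
  skippedDupes: string[];
}

// similarity() stands in for whatever the real implementation uses.
function routeFacts(
  facts: { fact: string; tier: Tier }[],
  existingCold: string[],
  similarity: (a: string, b: string) => number,
  maxColdWrites = 5,
  dedupThreshold = 0.85,
): Routed {
  const out: Routed = { hotSuggestions: [], warmWrites: [], coldWrites: [], skippedDupes: [] };
  for (const { fact, tier } of facts) {
    if (tier === "hot") {
      out.hotSuggestions.push(fact); // never auto-written; printed for human review
    } else if (tier === "warm") {
      out.warmWrites.push(fact); // goes to warm.jsonl, expires after ttlDays
    } else {
      const isDupe = existingCold.some((e) => similarity(fact, e) > dedupThreshold);
      if (isDupe) out.skippedDupes.push(fact);
      else if (out.coldWrites.length < maxColdWrites) out.coldWrites.push(fact); // cost cap
    }
  }
  return out;
}
```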

memory serve

Start an MCP stdio server exposing two tools:

| Tool | Description |
|------|-------------|
| memory_stage | Stage a fact (params: fact, context?, tier_hint?) |
| memory_status | Show count of staged facts pending REM Sleep |

Claude Code — add to ~/.claude.json:

{
  "mcpServers": {
    "memory": {
      "command": "memory",
      "args": ["serve"]
    }
  }
}

OpenClaw agents — use exec instead (OpenClaw doesn't support custom MCP servers):

memory stage "fact"

memory init

Create config at ~/.tl-memory/config.json.

memory init               # default location
memory init --path ./     # custom location

memory config

Show current configuration and resolved paths.

Configuration

Config is found in this order:

  1. --config flag
  2. MEMORY_CONFIG env var
  3. ~/.tl-memory/config.json
  4. ./config.json
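A sketch of that first-match lookup, assuming the flag and env var win unconditionally while the two default paths are checked for existence (the actual resolution logic may differ):

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Returns the config path to use, mirroring the documented lookup order.
function resolveConfigPath(
  cliFlag?: string,
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  if (cliFlag) return cliFlag;                      // 1. --config flag wins outright
  if (env.MEMORY_CONFIG) return env.MEMORY_CONFIG;  // 2. then the env var
  const defaults = [
    join(homedir(), ".tl-memory", "config.json"),   // 3. user-level default
    join(process.cwd(), "config.json"),             // 4. project-local fallback
  ];
  return defaults.find((p) => existsSync(p));       // first one that exists on disk
}
```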

MCP mode (Claude Code agents)

Use when your agent has MCP tool access (memory serve running). REM Sleep calls mem0 via add_memory / search_memories MCP tools.

{
  "agent": "my-agent",
  "hot": {
    "path": "~/.my-agent/MEMORY.md"
  },
  "warm": {
    "path": "~/.tl-memory/warm.jsonl",
    "ttlDays": 7
  },
  "cold": {
    "mode": "mcp",
    "userId": "your-user-id"
  },
  "remSleep": {
    "maxColdWrites": 5,
    "dedupThreshold": 0.85,
    "stagingPath": "~/.tl-memory/rem-staging.jsonl"
  }
}

HTTP mode (standalone agents, cron jobs)

Use when running REM Sleep outside an agent context (e.g., scheduled cron, OpenClaw exec). Calls mem0 REST API directly.

{
  "agent": "my-agent",
  "hot": {
    "path": "~/.my-agent/MEMORY.md"
  },
  "warm": {
    "path": "~/.tl-memory/warm.jsonl",
    "ttlDays": 7
  },
  "cold": {
    "mode": "http",
    "endpoint": "http://localhost:8765",
    "userId": "your-user-id"
  },
  "remSleep": {
    "maxColdWrites": 5,
    "dedupThreshold": 0.85,
    "stagingPath": "~/.tl-memory/rem-staging.jsonl"
  }
}

Memory Tiers

| Tier | Storage | TTL | Purpose |
|------|---------|-----|---------|
| Hot | MEMORY.md | Permanent | Agent-specific facts always in context |
| Warm | warm.jsonl | 7 days | Cross-agent working context |
| Cold | mem0 | Permanent | Long-tail semantic search |

When to use each tier:

  • hot — identity, persistent preferences, architectural decisions
  • warm — shared facts between agents, active project state
  • cold (default) — anything worth remembering but not queried every session
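The warm tier's 7-day TTL implies an expiry pass somewhere in REM Sleep. A hypothetical pruning step over warm.jsonl records (the record shape is assumed for illustration):

```typescript
// Assumed warm-tier record shape; the real fields are not documented here.
interface WarmRecord {
  fact: string;
  writtenAt: string; // ISO-8601 timestamp
}

// Keep only records younger than ttlDays relative to `now`.
function pruneWarm(records: WarmRecord[], ttlDays: number, now = Date.now()): WarmRecord[] {
  const ttlMs = ttlDays * 24 * 60 * 60 * 1000;
  return records.filter((r) => now - Date.parse(r.writtenAt) <= ttlMs);
}
```

Passing `now` explicitly keeps the prune deterministic and easy to test.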

Requirements

  • Bun >= 1.0.0
  • mem0 instance (for cold tier — self-hosted or cloud)

License

MIT