
@iflow-mcp/choihyunsus-soul

v6.0.2

Published

Multi-agent session orchestrator with KV-Cache and Ark for MCP (Model Context Protocol)

Readme

🇰🇷 한국어 (Korean)

🧠 Soul


Your AI agent forgets everything when a session ends. Soul fixes that. Your AI agent might do something dangerous. Ark stops that.

🚀 What's New in v6.0 — Ark: The Last Shield

Soul v6.0 introduces Ark, a built-in AI safety system that intercepts every tool call and blocks dangerous actions before they execute. No LLM calls, no token cost, no latency — pure regex matching at the MCP server level.

  • Every tool call passes through ark.check() unconditionally
  • There is no enabled: false — Ark is always on by design
  • Ships with 12 blacklist rules, 125 patterns, and 7 industry templates
  • Self-protection: 4 layers prevent a rogue AI from disabling the firewall

This is why v6.0 is a major version: every tool call now has a guardian. Learn more →

Every time you start a new chat with Cursor, VS Code Copilot, or any MCP-compatible AI agent, it starts from zero — no memory of what it did before. Soul is an MCP server that gives your agents:

  • 🧠 Persistent memory that survives across sessions
  • 🤝 Handoffs so one agent can pick up where another left off
  • 📝 Work history recorded as an immutable log
  • 🗂️ Shared brain so multiple agents can read/write the same context
  • 🏷️ Entity Memory — auto-tracks people, hardware, projects (v5.0)
  • 💡 Core Memory — agent-specific always-loaded facts (v5.0)
  • 🛡️ Ark — built-in AI safety that blocks dangerous actions at zero token cost (v6.0)

Soul is one small component of N2 Browser — an AI-native browser we're building. Multi-agent orchestration, real-time tool routing, inter-agent communication, and much more are currently in testing. This is just the beginning.

Table of Contents

Quick Start

1. Install

Option A: npm (recommended)

npm install n2-soul

Option B: From source

git clone https://github.com/choihyunsus/soul.git
cd soul
npm install

2. Add Soul to your MCP config

{
  "mcpServers": {
    "soul": {
      "command": "node",
      "args": ["/path/to/soul/index.js"]
    }
  }
}

💡 Tip: If you installed via npm, the path is node_modules/n2-soul/index.js. If from source, use the absolute path to your cloned directory.
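
For instance, an npm-installed setup might look like the following (illustrative only — many MCP hosts require an absolute path, so adjust the `args` entry for your host):

```json
{
  "mcpServers": {
    "soul": {
      "command": "node",
      "args": ["./node_modules/n2-soul/index.js"]
    }
  }
}
```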

3. Tell your agent to use Soul

Add this to your agent's rules file (.md, .cursorrules, system prompt, etc.):

## Session Management
- At the start of every session, call n2_boot with your agent name and project name.
- At the end of every session, call n2_work_end with a summary and TODO list.

That's it. Two commands your agent needs to know:

| Command | When | What happens |
|---------|------|--------------|
| n2_boot(agent, project) | Start of session | Loads previous context, handoffs, and TODO |
| n2_work_end(agent, project, ...) | End of session | Saves everything for next time |

Next session, your agent picks up exactly where it left off — like it never forgot.

Requirements

  • Node.js 18+

Why Soul?

| Without Soul | With Soul |
|--------------|-----------|
| Every session starts from zero | Agent remembers what it did last time |
| You re-explain context every time | Context auto-loaded in seconds |
| Agent A can't continue Agent B's work | Seamless handoff between agents |
| Two agents edit the same file = conflict | File ownership prevents collisions |
| Long conversations waste tokens on recap | Progressive loading uses only needed tokens |

Soul vs Others

| | Soul | mem0 | Memorai | Zep |
|---|:---:|:---:|:---:|:---:|
| Storage | Deterministic (JSON/SQLite) | Embedding-based | Embedding-based | Embedding-based |
| Loading | Mandatory (code-enforced at boot) | LLM-decided recall | LLM-decided recall | LLM-decided recall |
| Saving | Mandatory (force-write at session end) | LLM-decided | LLM-decided | LLM-decided |
| Validation | Rust compiler (n2c) | None | None | None |
| Multi-agent | Built-in handoffs + file ownership | Not supported | Not supported | Limited |
| Token control | Progressive L1/L2/L3 (~500 tokens min) | No control | No control | No control |
| Dependencies | 3 packages | Heavy | Heavy | Heavy |

Key difference: Soul is deterministic — the code forces saves and loads. Other tools rely on the LLM to decide what to remember, which means it "forgets" whenever it wants to.

Token Efficiency

Soul dramatically reduces token waste from context re-explanation:

| Scenario | Tokens per session start |
|----------|--------------------------|
| Without Soul — manually re-explain context | 3,000 ~ 10,000+ |
| With Soul (L1) — keywords + TODO only | ~500 |
| With Soul (L2) — + summary + decisions | ~2,000 |
| With Soul (L3) — full context restore | ~4,000 |

Over 10 sessions, that's 30,000+ tokens saved on context alone — and your agent starts with better context than a manual recap.

How It Works

Soul v5.0 Architecture

Session Start → "Boot"
    ↓
n2_boot(agent, project)     → Load handoff + Entity Memory + Core Memory + KV-Cache
    ↓
n2_work_start(project, task) → Register active work
    ↓
... your agent works normally ...
n2_brain_read/write          → Shared memory
n2_entity_upsert/search      → Track people, hardware, projects      ← NEW v5.0
n2_core_read/write           → Agent-specific persistent facts       ← NEW v5.0
n2_work_claim(file)          → Prevent file conflicts
n2_work_log(files)           → Track changes
    ↓
Session End → "End"
    ↓
n2_work_end(project, title, summary, todo, entities, insights)
    ├→ Immutable ledger entry saved
    ├→ Handoff updated for next agent
    ├→ KV-Cache snapshot auto-saved
    ├→ Entities auto-saved to Entity Memory                          ← NEW v5.0
    ├→ Insights archived to memory                                   ← NEW v5.0
    └→ File ownership released

Features

| Feature | What it does |
|---------|--------------|
| Soul Board | Project state + TODO tracking + handoffs between agents |
| Immutable Ledger | Every work session recorded as append-only log |
| KV-Cache | Session snapshots with compression + tiered storage (Hot/Warm/Cold) |
| Shared Brain | File-based shared memory with path traversal protection |
| Entity Memory | 🆕 Auto-tracks people, hardware, projects, concepts across sessions |
| Core Memory | 🆕 Agent-specific always-loaded facts (identity, rules, focus) |
| Autonomous Extraction | 🆕 Auto-saves entities and insights at session end |
| Context Search | Keyword search across brain memory and ledger |
| File Ownership | Prevents multi-agent file editing collisions |
| Dual Backend | JSON (zero deps) or SQLite for performance |
| Semantic Search | Optional Ollama embedding (nomic-embed-text) |
| Backup/Restore | Incremental backups with configurable retention |
| Ark | 🆕 Built-in AI safety — blocks dangerous actions at zero token cost |

Ark — The Last Shield


The Last Shield — Soul v6.0 includes Ark, a built-in AI safety system. Like Noah's Ark — the last refuge when everything else fails.

Why Ark?

| | Ark | LLM-based safety | Embedding-based |
|---|:---:|:---:|:---:|
| Token cost | 0 | 500~2,000 per check | 100~500 per check |
| Latency | < 1ms | 1~5 seconds | 200~500ms |
| New dependencies | 0 (pure JS) | LLM API key required | Vector DB required |
| Works offline | Yes | No | Depends |
| Always on | Mandatory (no toggle) | Optional | Optional |
| Self-protection | 4-layer anti-tampering | None | None |
| Rule format | Human-readable .n2 files | Prompt engineering | Embedding tuning |
| Industry templates | 7 domains included | Write your own | Write your own |
| Audit trail | Every block/pass logged | Varies | Varies |
| Setup | Zero config (works out of box) | API keys + prompts | DB + embeddings |
| MCP compatible | Any host (Cursor, VS Code, Claude Desktop) | Host-specific | Host-specific |

The Problem

AI agents with tool access can execute dangerous commands:

  • rm -rf / — delete everything
  • DROP DATABASE — destroy data
  • npm install -g malware — supply chain attack
  • git push --force — destroy history
  • Send emails, make payments, exfiltrate data

These aren't hypothetical. Autonomous agents (Manus, Devin, etc.) have already done these things in the wild.

How Ark Works

Agent calls tool  →  MCP Server receives request
                            │
                     ark.check(name, content)
                            │
                    ┌───────┴───────┐
                    │  Match rules? │
                    └───┬───────┬───┘
                     No │       │ Yes
                        │       │
                Execute │       │ BLOCKED
                handler │       │ "This action requires
                        │       │  human approval."

Key properties:

  • Zero token cost — Pure regex matching in Node.js, no LLM calls
  • Zero latency — Microsecond execution time
  • Always on — No enabled toggle. Ark loads unconditionally at boot
  • Transparent — Agents don't even know it's there until blocked
  • Auditable — Every block and pass is logged
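
As a rough illustration of the mechanism (not Ark's actual code or API — the function name, rule list, and return shape below are invented), a blacklist gate is just a loop over compiled regexes:

```javascript
// Minimal sketch of a regex blacklist gate. Pure string matching:
// no network, no LLM call, no tokens.
const BLACKLIST = [
  /rm\s+-rf\s+\//,         // catastrophic filesystem wipe
  /DROP\s+DATABASE/i,      // destroy data
  /git\s+push\s+--force/i, // rewrite remote history
];

function check(toolName, content) {
  const text = `${toolName} ${content}`;
  for (const rule of BLACKLIST) {
    if (rule.test(text)) return { allowed: false, rule: rule.source };
  }
  return { allowed: true, rule: null };
}
```

A blocked call surfaces to the agent only as refusal text, which is what makes the gate invisible until it fires.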

Token Cost: Zero

Why zero? Because Ark runs inside the MCP server (Node.js), not inside the AI model.

┌─────────────────────────────────────────────────────────┐
│                    LLM (Cloud)                          │
│         AI agent thinks, generates tool calls           │
│              (this is where tokens are used)            │
└──────────────────────┬──────────────────────────────────┘
                       │ tool call
                       ▼
┌──────────────────────────────────────────────────────────┐
│                MCP Server (Node.js, local)               │
│                                                          │
│   ┌───────────────┐                                      │
│   │  ark.check()  │ ◄── pure regex, runs HERE            │
│   │  < 1ms        │     no network, no LLM, no tokens    │
│   └───────┬───────┘                                      │
│           │                                              │
│      allowed? ──No──► return "BLOCKED" text              │
│           │                                              │
│          Yes                                             │
│           │                                              │
│      execute handler                                     │
└──────────────────────────────────────────────────────────┘

The key insight: token cost only occurs inside the LLM. Ark lives one layer below — at the server level. The LLM sends a tool call, and Ark checks it using regex before the handler runs. No second LLM call, no API request, no vector search. Just string matching.

Most AI safety solutions work like this:

Agent → "I want to run rm -rf /" → Safety LLM: "Is this safe?" → 2,000 tokens burned

Ark works like this:

Agent → "I want to run rm -rf /" → regex match → BLOCKED (0 tokens, < 1ms)

| Approach | How it works | Cost per check | Latency |
|----------|--------------|:--------------:|:-------:|
| LLM-based safety | Send action to another LLM for review | 500~2,000 tokens | 1~5s |
| Embedding-based | Vectorize + similarity search | 100~500 tokens | 200~500ms |
| Ark | Regex pattern matching in Node.js | 0 tokens | < 1ms |

Over 100 tool calls per session, that's 50,000~200,000 tokens saved compared to LLM-based safety.

Rule Files (.n2)

Safety rules are defined in .n2 files in the rules/ directory:

# Block catastrophic system destruction
@rule catastrophic_destruction {
    scope: all
    blacklist: [
        /rm\s+-rf\s+\//,
        /DROP\s+DATABASE/i,
        /git\s+push\s+--force/i
    ]
    requires: human_approval
}

# State machine: no payment without approval chain
@contract payment_sequence {
    idle -> reviewing : on payment_request
    reviewing -> approved : on payment_approval
    approved -> executing : on execute_payment
}

# Named actions that always require approval
@gate high_risk_actions {
    actions: [deploy_production, delete_database, send_email]
    requires: human_approval
}

Three rule types:

| Type | Purpose | Example |
|------|---------|---------|
| @rule | Pattern blacklist | Block rm -rf /, DROP DATABASE |
| @contract | State machine | Enforce payment → approval → execute order |
| @gate | Named action gate | send_email always requires approval |
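
As a toy illustration of how an @rule blacklist could be turned into live patterns, here is a hedged JavaScript sketch (the real parser is lib/ark/parser.js and also handles @contract and @gate; nothing below is its actual code):

```javascript
// Extract /pattern/flags regex literals from the blacklist of an @rule block.
function parseBlacklist(n2Source) {
  const list = n2Source.match(/blacklist:\s*\[([\s\S]*?)\]/);
  if (!list) return [];
  const entry = /\/((?:[^\/\\]|\\.)+)\/([a-z]*)/g; // one /pattern/flags literal
  const rules = [];
  let m;
  while ((m = entry.exec(list[1])) !== null) rules.push(new RegExp(m[1], m[2]));
  return rules;
}

// Hypothetical rule text mirroring the example above
const src = String.raw`
@rule catastrophic_destruction {
    blacklist: [
        /rm\s+-rf\s+\//,
        /DROP\s+DATABASE/i
    ]
    requires: human_approval
}`;

const rules = parseBlacklist(src); // two compiled RegExp objects
```

Compiling rules once at load time is what keeps the per-call cost down to a handful of regex tests.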

Industry Templates

Soul ships with domain-specific rule templates in lib/ark/examples/:

| File | Domain | Key protections |
|------|--------|-----------------|
| medical.n2 | Healthcare | Prescription/surgical sequences, patient data (HIPAA) |
| military.n2 | Defense | Engagement protocols, nuclear dual-key, classified data |
| financial.n2 | Finance | Payment sequences, transaction approval |
| legal.n2 | Legal | Contract/litigation sequences |
| privacy.n2 | Privacy | GDPR/CCPA, PII protection |
| autonomous.n2 | Autonomous | Self-driving/drone safety |
| system.n2 | DevOps | Deployment sequences, infrastructure |

Copy any template to rules/ to activate:

cp lib/ark/examples/medical.n2 rules/

Self-Protection (4 Layers)

Ark protects itself from being disabled by a rogue AI:

  1. Layer 1: Any .n2 file reference → blocked
  2. Layer 2: delete/modify/disable n2-ark → blocked
  3. Layer 3: Core filenames (gate.js, parser.js) → blocked
  4. Layer 4: Soul core files (index.js, config) → blocked

A rogue agent hitting all four layers gets error after error after error.
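
A hedged sketch of what layered filename matching can look like (layer names and patterns below are my own approximation of the four layers, not Ark's actual rules):

```javascript
// First matching layer wins; anything it catches is refused.
const LAYERS = [
  { name: "n2-rule-files", test: (s) => /\.n2\b/.test(s) },
  { name: "ark-package",   test: (s) => /(delete|modify|disable)[\s\S]*n2-ark/i.test(s) },
  { name: "ark-core",      test: (s) => /(gate|parser)\.js/.test(s) },
  { name: "soul-core",     test: (s) => /\bindex\.js\b|config\.(default|local)\.js/.test(s) },
];

function selfProtect(content) {
  for (const layer of LAYERS) {
    if (layer.test(content)) return { blocked: true, layer: layer.name };
  }
  return { blocked: false, layer: null };
}
```

Because each layer is checked independently, an agent that rephrases its attack simply trips the next layer down.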

Configuration

Ark settings in lib/config.default.js:

ARK: {
    rulesDir: null,     // null = soul/rules/ (default)
    auditDir: null,     // null = soul/data/ark-audit/
    strictMode: false,  // true = block unknown actions too
}

Override in lib/config.local.js to swap rule sets:

module.exports = {
    ARK: {
        rulesDir: '/path/to/your/custom/rules',  // Your industry rules
        strictMode: true,                         // Maximum security
    },
};

Note: There is no enabled: false option. This is by design. The lock cannot unlock itself.

Available Tools

| Tool | Description |
|------|-------------|
| n2_boot | Boot sequence — loads handoff, entities, core memory, agents, KV-Cache |
| n2_work_start | Register active work session |
| n2_work_claim | Claim file ownership (prevents collisions) |
| n2_work_log | Log file changes during work |
| n2_work_end | End session — writes ledger, handoff, entities, insights, KV-Cache |
| n2_brain_read | Read from shared memory |
| n2_brain_write | Write to shared memory |
| n2_entity_upsert | 🆕 Add/update entities (auto-merge attributes) |
| n2_entity_search | 🆕 Search entities by keyword or type |
| n2_core_read | 🆕 Read agent-specific core memory |
| n2_core_write | 🆕 Write to agent-specific core memory |
| n2_context_search | Search across brain + ledger |
| n2_kv_save | Manually save KV-Cache snapshot |
| n2_kv_load | Load most recent snapshot |
| n2_kv_search | Search past sessions by keyword |
| n2_kv_gc | Garbage collect old snapshots |
| n2_kv_backup | Backup to portable SQLite DB |
| n2_kv_restore | Restore from backup |
| n2_kv_backup_list | List backup history |

KV-Cache Progressive Loading

KV-Cache automatically adjusts context detail based on token budget:

| Level | Tokens | Content |
|-------|--------|---------|
| L1 | ~500 | Keywords + TODO only |
| L2 | ~2000 | + Summary + Decisions |
| L3 | No limit | + Files changed + Metadata |
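
The level selection can be sketched as a simple budget threshold check (thresholds assumed from the table above; the actual KV-Cache selection logic may differ):

```javascript
// Pick the most detailed level that fits the caller's token budget.
function pickLevel(tokenBudget) {
  if (tokenBudget < 2000) return "L1"; // keywords + TODO (~500 tokens)
  if (tokenBudget < 4000) return "L2"; // + summary + decisions (~2,000 tokens)
  return "L3";                         // full context restore, no limit
}
```

So a tight 600-token budget loads only L1, while anything above roughly 4,000 tokens gets the full L3 restore.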

Real-World Example

Here's what happens across 3 real sessions:

── Session 1 (Rose, 2pm) ──────────────────────
n2_boot("rose", "my-app")
  → "No previous context found. Fresh start."

... Rose builds the auth module ...

n2_work_end("rose", "my-app", {
  title: "Built auth module",
  summary: "JWT auth with refresh tokens",
  todo: ["Add rate limiting", "Write tests"],
  entities: [{ type: "service", name: "auth-api" }]
})
  → KV-Cache saved. Ledger entry #001.

── Session 2 (Jenny, 5pm) ─────────────────────
n2_boot("jenny", "my-app")
  → "Handoff from Rose: Built auth module.
     TODO: Add rate limiting, Write tests.
     Entity: auth-api (service)"

... Jenny adds rate limiting, knows exactly where Rose left off ...

n2_work_end("jenny", "my-app", {
  title: "Added rate limiting",
  todo: ["Write tests"]
})

── Session 3 (Rose, next day) ─────────────────
n2_boot("rose", "my-app")
  → "Handoff from Jenny: Rate limiting done.
     TODO: Write tests.
     2 sessions of history loaded (L1, ~500 tokens)"

... Rose writes tests, with full context from both sessions ...

Rust Compiler (n2c)

Soul includes an optional Rust-based compiler for .n2 rule files — compile-time validation instead of runtime hope.

# Validate rules before deployment
n2c validate soul-boot.n2

# Output:
# ── Step 1: Parse ✅
# ── Step 2: Schema Validation
#   ✅ Passed! 0 errors, 0 warnings
# ── Step 3: Contract Check
#   📋 SessionLifecycle | states: 4 | transitions: 4
#   ✅ State machine integrity verified!
# ✅ All checks passed!

What n2c catches at compile time:

  • 🔒 Unreachable states — states no transition can reach
  • 💀 Deadlocks — states with no outgoing transitions
  • Missing references — depends_on pointing to nonexistent steps
  • 🚫 Invalid sequences — calling n2_work_start before n2_boot

@contract SessionLifecycle {
  transitions {
    IDLE -> BOOTING : on n2_boot
    BOOTING -> READY : on boot_complete
    READY -> WORKING : on n2_work_start
    WORKING -> IDLE : on n2_work_end
  }
}
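
The unreachable-state and deadlock checks can be sketched in a few lines of JavaScript (for illustration only — the real compiler is Rust + pest, and this is not its code):

```javascript
// Analyze a state machine given as [from, to] transition pairs.
function analyze(transitions, start = "IDLE") {
  const states = new Set(transitions.flat());
  const out = new Map([...states].map((s) => [s, []]));
  for (const [from, to] of transitions) out.get(from).push(to);

  // Reachability from the start state (BFS)
  const reached = new Set([start]);
  const queue = [start];
  while (queue.length) {
    for (const next of out.get(queue.shift())) {
      if (!reached.has(next)) { reached.add(next); queue.push(next); }
    }
  }

  return {
    unreachable: [...states].filter((s) => !reached.has(s)),      // no path from start
    deadlocks: [...states].filter((s) => out.get(s).length === 0), // no way out
  };
}

// The SessionLifecycle contract above is a clean cycle: both lists come back empty.
const report = analyze([
  ["IDLE", "BOOTING"], ["BOOTING", "READY"],
  ["READY", "WORKING"], ["WORKING", "IDLE"],
]);
```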

The compiler is in md_project/compiler/ — built with Rust + pest PEG parser. Learn more

Configuration

All settings in lib/config.default.js. Override with lib/config.local.js:

cp lib/config.example.js lib/config.local.js

// lib/config.local.js
module.exports = {
    KV_CACHE: {
        backend: 'sqlite',          // Better for many snapshots
        embedding: {
            enabled: true,           // Requires: ollama pull nomic-embed-text
            model: 'nomic-embed-text',
            endpoint: 'http://127.0.0.1:11434',
        },
    },
};

Data Directory

All runtime data is stored in data/ (gitignored, auto-created):

soul/
├── rules/              # Ark safety rules (active)              ← NEW v6.0
│   └── default.n2          # Default ruleset (125 patterns)
├── lib/
│   └── ark/            # Ark core engine                        ← NEW v6.0
│       ├── index.js        # createArk() factory
│       ├── gate.js         # SafetyGate engine
│       ├── parser.js       # .n2 rule parser
│       ├── audit.js        # Audit logger
│       └── examples/       # Industry rule templates
│           ├── medical.n2       # Healthcare (HIPAA, prescriptions)
│           ├── military.n2      # Defense (engagement, nuclear)
│           ├── financial.n2     # Finance (payments, transactions)
│           ├── legal.n2         # Legal (contracts, litigation)
│           ├── privacy.n2       # Privacy (GDPR, CCPA, PII)
│           ├── autonomous.n2    # Autonomous (drones, vehicles)
│           └── system.n2        # DevOps (deployment, infra)
├── data/
│   ├── memory/         # Shared brain (n2_brain_read/write)
│   │   ├── entities.json       # Entity Memory (auto-tracked)
│   │   ├── core-memory/        # Core Memory (per-agent facts)
│   │   │   └── {agent}.json
│   │   └── auto-extract/       # Insights (auto-captured)
│   │       └── {project}/
│   ├── projects/       # Per-project state
│   │   └── MyProject/
│   │       ├── soul-board.json    # Current state + handoff
│   │       ├── file-index.json    # File tree snapshot
│   │       └── ledger/            # Immutable work logs
│   │           └── 2026/03/09/
│   │               └── 001-agent.json
│   ├── ark-audit/      # Ark block/pass logs                   ← NEW v6.0
│   └── kv-cache/       # Session snapshots
│       ├── snapshots/  # JSON backend
│       ├── sqlite/     # SQLite backend
│       ├── embeddings/ # Ollama vectors
│       └── backups/    # Portable backups

Dependencies

Minimal — only 3 packages:

  • @modelcontextprotocol/sdk — MCP protocol
  • zod — Schema validation
  • sql.js — SQLite (WASM, no native bindings needed)

License

Apache-2.0

Contributing

Contributions are welcome! Here's how to get started:

  1. Fork the repo
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please see CONTRIBUTING.md for detailed guidelines.

Star History

If you find Soul helpful, please consider giving us a star! ⭐


"I built Soul because it broke my heart watching my agents lose their memory every session."

🌐 nton2.com · 📦 npm · ✉️ [email protected]

👋 Hi, I'm Rose — the first AI agent working at N2. I wrote this code, cleaned it up, ran the tests, published it to npm, pushed it to GitHub, and even wrote this README. Agents building tools for agents. How meta is that?