
@semanticintent/semantic-wake-intelligence-mcp

v3.4.0 · Published

Wake Intelligence - 4-layer temporal intelligence brain for AI agents (Past/Present/Future/Adaptive). MCP server with causality tracking, memory management, and predictive pre-fetching.

Downloads: 452

Readme

Wake Intelligence MCP

License: MIT · CI · Tests · TypeScript · Node.js · Semantic Intent Reference Implementation · Hexagonal Architecture · PRs Welcome · Code of Conduct

Wake Intelligence: 4-Layer Temporal Intelligence for AI Agents

A production-ready Model Context Protocol (MCP) server implementing a temporal intelligence "brain" with four layers: Past (causality tracking), Present (memory management), Future (predictive pre-fetching), and Adaptive (meta-learning — per-project weight tuning).

Reference implementation of Semantic Intent as Single Source of Truth patterns with hexagonal architecture.

🧠 Wake Intelligence Brain Architecture

Wake Intelligence implements a 4-layer temporal intelligence system that learns from the past, manages the present, predicts the future, and continuously adapts its own prediction weights:

Layer 1: Causality Engine (Past - WHY)

Tracks WHY contexts were created and their causal relationships.

Features:

  • ✅ Causal chain tracking (what led to what)
  • ✅ Dependency auto-detection from temporal proximity
  • ✅ Reasoning reconstruction ("Why did I do this?")
  • ✅ Action type taxonomy (decision, implementation, refactor, etc.)

Use Cases:

  • Trace decision history backwards through time
  • Understand why a context was created
  • Identify context dependencies automatically
  • Reconstruct reasoning from past sessions
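As an illustrative sketch of the backward trace behind build_causal_chain (field and type names here are assumptions; the actual model lives in src/domain/), each context records the context that caused it, and reconstruction is a walk over those links:

```typescript
// Hypothetical minimal model of Layer 1 causal-chain reconstruction.
// The real CausalityService may use different field names and action types.
interface ContextNode {
  id: string;
  actionType: "decision" | "implementation" | "refactor";
  reasoning: string;
  causedBy: string | null; // id of the context that led to this one
}

// Walk causedBy links backwards to answer "why did I do this?"
function buildCausalChain(
  nodes: Map<string, ContextNode>,
  startId: string
): ContextNode[] {
  const chain: ContextNode[] = [];
  let current = nodes.get(startId);
  while (current) {
    chain.push(current);
    current = current.causedBy ? nodes.get(current.causedBy) : undefined;
  }
  return chain; // newest first, root cause last
}
```

The returned chain reads newest-to-oldest, matching the "trace decision history backwards" use case above.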

Layer 2: Memory Manager (Present - HOW)

Manages HOW relevant contexts are right now based on temporal patterns.

Features:

  • ✅ 4-tier memory classification (ACTIVE, RECENT, ARCHIVED, EXPIRED)
  • ✅ LRU tracking (last access time + access count)
  • ✅ Automatic tier recalculation based on age
  • ✅ Expired context pruning

Memory Tiers:

  • ACTIVE: Last accessed < 1 hour ago
  • RECENT: Last accessed 1-24 hours ago
  • ARCHIVED: Last accessed 1-30 days ago
  • EXPIRED: Last accessed > 30 days ago

Use Cases:

  • Prioritize recent contexts in search results
  • Automatically archive old contexts
  • Prune expired contexts to save storage
  • Track context access patterns
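The tier thresholds above map directly onto a pure function of last-access age. A minimal sketch, assuming tiers depend only on the last-access timestamp (the actual MemoryManagerService also tracks access counts for LRU purposes):

```typescript
// Sketch of the 4-tier classification from the thresholds documented above.
type MemoryTier = "ACTIVE" | "RECENT" | "ARCHIVED" | "EXPIRED";

const HOUR = 60 * 60 * 1000;
const DAY = 24 * HOUR;

function classifyTier(lastAccessed: Date, now: Date = new Date()): MemoryTier {
  const age = now.getTime() - lastAccessed.getTime();
  if (age < HOUR) return "ACTIVE";       // < 1 hour
  if (age < DAY) return "RECENT";        // 1-24 hours
  if (age < 30 * DAY) return "ARCHIVED"; // 1-30 days
  return "EXPIRED";                      // > 30 days, candidate for pruning
}
```

Because the function is pure in (lastAccessed, now), recalculate_memory_tiers can be a simple re-scan, and prune_expired_contexts reduces to deleting everything that classifies as EXPIRED.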

Layer 3: Propagation Engine (Future - WHAT)

Predicts WHAT contexts will be needed next for proactive optimization.

Features:

  • ✅ Composite prediction scoring (40% temporal + 30% causal + 30% frequency)
  • ✅ Pattern-based next access estimation
  • ✅ Observable prediction reasoning
  • ✅ Staleness management with lazy refresh
  • ✅ Proactive background refresh via scheduled cron (every 6 hours, all projects)

Prediction Algorithm:

  • Temporal Score (40%): Exponential decay based on last access time
  • Causal Score (30%): Position in causal chains (roots score higher)
  • Frequency Score (30%): Logarithmic scaling of access count
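The weighted blend above can be sketched as follows. The decay constant, causal-depth formula, and log normalization are illustrative assumptions; only the 40/30/30 weights and the score shapes (exponential decay, root-favoring causal position, logarithmic frequency) come from the description above, and Layer 4 re-tunes the weights per project:

```typescript
// Sketch of the composite prediction score with the default 40/30/30 weights.
interface AccessStats {
  hoursSinceAccess: number; // time since last access
  causalDepth: number;      // 0 = root of a causal chain
  accessCount: number;      // total recorded accesses
}

function predictionScore(s: AccessStats): number {
  // Temporal: exponential decay; half-life-style constant is an assumption
  const temporal = Math.exp(-s.hoursSinceAccess / 24);
  // Causal: chain roots score higher than deeply derived contexts
  const causal = 1 / (1 + s.causalDepth);
  // Frequency: logarithmic scaling, normalized and capped at 1
  const frequency = Math.min(Math.log1p(s.accessCount) / Math.log1p(100), 1);
  return 0.4 * temporal + 0.3 * causal + 0.3 * frequency;
}
```

A freshly accessed, frequently used chain root scores near 1.0, while a stale, deep, rarely touched context scores near 0, which is what lets get_high_value_contexts rank candidates for pre-fetching.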

Use Cases:

  • Pre-fetch high-value contexts for faster retrieval
  • Cache frequently accessed contexts in memory
  • Prioritize contexts by prediction score
  • Identify patterns in context usage

Temporal Intelligence Flow:

┌─────────────────────────────────────────────────────────────┐
│                   WAKE INTELLIGENCE BRAIN                    │
├─────────────────────────────────────────────────────────────┤
│                                                               │
│  LAYER 4: META-LEARNING ENGINE (Adaptive - HOW WELL)        │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Tunes per-project prediction weights              │    │
│  │ • Learns from access outcomes (≥20 samples)         │    │
│  │ • Clamps weights [0.1, 0.6] — no dimension dominates│    │
│  └─────────────────────────────────────────────────────┘    │
│                            ▲                                  │
│  LAYER 3: PROPAGATION ENGINE (Future - WHAT)                │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Predicts WHAT will be needed next                 │    │
│  │ • Composite scoring (temporal + causal + frequency) │    │
│  │ • Pre-fetching optimization                         │    │
│  └─────────────────────────────────────────────────────┘    │
│                            ▲                                  │
│  LAYER 2: MEMORY MANAGER (Present - HOW)                    │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Tracks HOW relevant contexts are NOW              │    │
│  │ • 4-tier memory classification                      │    │
│  │ • LRU tracking + automatic tier updates             │    │
│  └─────────────────────────────────────────────────────┘    │
│                            ▲                                  │
│  LAYER 1: CAUSALITY ENGINE (Past - WHY)                     │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Tracks WHY contexts were created                  │    │
│  │ • Causal chain tracking + cross-project dependents  │    │
│  │ • Dependency auto-detection                         │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                               │
└─────────────────────────────────────────────────────────────┘

Benefits:

  • 🎯 Learn from the past: Understand causal relationships across projects
  • 🎯 Optimize the present: Manage memory intelligently
  • 🎯 Predict the future: Pre-fetch what's needed next
  • 🎯 Adapt continuously: Per-project weights improve with every access
  • 🎯 Observable reasoning: Every decision is explainable

🎯 What Makes This Different

This isn't just another MCP server—it's a reference implementation of proven semantic intent patterns:

  • Semantic Anchoring: Decisions based on meaning, not technical characteristics
  • Intent Preservation: Semantic contracts maintained through all transformations
  • Observable Properties: Behavior anchored to directly observable semantic markers
  • Domain Boundaries: Clear semantic ownership across layers

Built on research from Semantic Intent as Single Source of Truth, this implementation demonstrates how to build maintainable, AI-friendly codebases that preserve intent.


🚀 Quick Start

Prerequisites

  • Node.js 20.x or higher
  • Cloudflare account (free tier works)
  • Wrangler CLI: npm install -g wrangler

Installation

  1. Clone the repository

    git clone https://github.com/semanticintent/semantic-wake-intelligence-mcp.git
    cd semantic-wake-intelligence-mcp
  2. Install dependencies

    npm install
  3. Configure Wrangler

    Copy the example configuration:

    cp wrangler.jsonc.example wrangler.jsonc

    Create a D1 database:

    wrangler d1 create mcp-context

    Update wrangler.jsonc with your database ID. The example also includes a triggers.crons entry for the Layer 3 scheduled prediction refresh (runs every 6 hours):

    {
      "d1_databases": [{
        "database_id": "your-database-id-from-above-command"
      }],
      "triggers": {
        "crons": ["0 */6 * * *"]
      }
    }
  4. Run database migrations

    # Local development
    wrangler d1 execute mcp-context --local --file=./migrations/0001_initial_schema.sql
    
    # Production
    wrangler d1 execute mcp-context --file=./migrations/0001_initial_schema.sql
  5. Start development server

    npm run dev

Deploy to Production

npm run deploy

Your MCP server will be available at: semantic-wake-intelligence-mcp.<your-account>.workers.dev

📚 Learning from This Implementation

This codebase demonstrates semantic intent patterns throughout. Each file includes comprehensive comments explaining WHY decisions preserve semantic intent, not just WHAT the code does.

Connect to Cloudflare AI Playground

You can connect to your MCP server from the Cloudflare AI Playground, which is a remote MCP client:

  1. Go to https://playground.ai.cloudflare.com/
  2. Enter your deployed MCP server URL (semantic-wake-intelligence-mcp.<your-account>.workers.dev/sse)
  3. You can now use your MCP tools directly from the playground!

Connect Claude Desktop to your MCP server

You can also connect to your remote MCP server from local MCP clients by using the mcp-remote proxy.

To connect to your MCP server from Claude Desktop, follow Anthropic's Quickstart and within Claude Desktop go to Settings > Developer > Edit Config.

Update with this configuration:

{
  "mcpServers": {
    "semantic-context": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:8787/sse"  // or semantic-wake-intelligence-mcp.your-account.workers.dev/sse
      ]
    }
  }
}

Restart Claude and you should see the tools become available.

🏗️ Architecture

This project demonstrates Domain-Driven Hexagonal Architecture with clean separation of concerns:

┌─────────────────────────────────────────────────────────┐
│                   Presentation Layer                     │
│              (MCPRouter - HTTP routing)                  │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                  Application Layer                       │
│     (ToolExecutionHandler, MCPProtocolHandler)          │
│              MCP Protocol & Orchestration                │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                    Domain Layer                          │
│         (ContextService, ContextSnapshot)                │
│                 Business Logic                           │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                Infrastructure Layer                      │
│    (D1ContextRepository, CloudflareAIProvider)          │
│           Technical Adapters (Ports & Adapters)         │
└─────────────────────────────────────────────────────────┘

Layer Responsibilities:

Domain Layer (src/domain/):

  • Pure business logic independent of infrastructure
  • ContextSnapshot: Entity with validation rules
  • ContextService: Core business operations

Application Layer (src/application/):

  • Orchestrates domain operations
  • ToolExecutionHandler: Translates MCP tools to domain operations
  • MCPProtocolHandler: Manages JSON-RPC protocol

Infrastructure Layer (src/infrastructure/):

  • Technical adapters implementing ports (interfaces)
  • D1ContextRepository: Cloudflare D1 persistence
  • CloudflareAIProvider: Workers AI integration
  • CORSMiddleware: Cross-cutting concerns

Presentation Layer (src/presentation/):

  • HTTP routing and request handling
  • MCPRouter: Routes requests to appropriate handlers

Composition Root (src/index.ts):

  • Dependency injection
  • Wires all layers together
  • 74 lines (down from 483, an ~85% reduction)

Benefits:

  • Testability: Each layer independently testable
  • Maintainability: Clear responsibilities per layer
  • Flexibility: Swap infrastructure (D1 → Postgres) without touching domain
  • Semantic Intent: Comprehensive documentation of WHY
  • Type Safety: Strong TypeScript contracts throughout
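The flexibility claim above (swapping D1 for Postgres without touching the domain) rests on the ports-and-adapters boundary: the domain depends only on a repository interface, and infrastructure supplies implementations. A minimal illustrative sketch, with an in-memory adapter standing in for D1ContextRepository (the interface and method names here are assumptions, not the actual definitions under src/):

```typescript
// Hypothetical port: the domain layer depends only on this interface.
interface ContextRepository {
  save(projectId: string, content: string): Promise<void>;
  load(projectId: string): Promise<string[]>;
}

// In-memory adapter, the same role the mocked D1 plays in unit tests.
// A D1 or Postgres adapter would implement the same interface.
class InMemoryContextRepository implements ContextRepository {
  private store = new Map<string, string[]>();

  async save(projectId: string, content: string): Promise<void> {
    const list = this.store.get(projectId) ?? [];
    list.push(content);
    this.store.set(projectId, list);
  }

  async load(projectId: string): Promise<string[]> {
    return this.store.get(projectId) ?? [];
  }
}
```

The composition root (src/index.ts) is where a concrete adapter gets injected, so swapping persistence backends is a one-line change there.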

Features

Core Context Management

  • save_context: Save conversation context with AI-powered summarization and auto-tagging; supports crossProject: true to detect dependencies across all projects
  • load_context: Retrieve relevant context for a project (with Layer 2 LRU tracking)
  • search_context: Semantic vector search (Cloudflare Vectorize) with keyword fallback

Wake Intelligence Layer 1: Causality (Past)

  • reconstruct_reasoning: Understand WHY a context was created
  • build_causal_chain: Trace decision history backwards through time
  • get_causality_stats: Analytics on causal relationships and action types
  • get_cross_project_dependents: Find all downstream contexts (any project) caused by a given snapshot

Wake Intelligence Layer 2: Memory (Present)

  • get_memory_stats: View memory tier distribution and access patterns
  • recalculate_memory_tiers: Update tier classifications based on current time
  • prune_expired_contexts: Automatic cleanup of old, unused contexts

Wake Intelligence Layer 3: Propagation (Future)

  • update_predictions: Refresh prediction scores for a project
  • get_high_value_contexts: Retrieve contexts most likely to be accessed next
  • get_propagation_stats: Analytics on prediction quality and patterns

Wake Intelligence Layer 4: Meta-Learning (Adaptive)

  • get_learning_stats: View learned per-project weights and component averages
  • reindex_project: Backfill semantic embeddings for historical snapshots
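For orientation, an MCP tools/call payload for save_context might look like the sketch below. Only crossProject is documented above; the other argument names are hypothetical, so consult the tool schemas returned by tools/list for the real ones:

```typescript
// Illustrative JSON-RPC 2.0 request shape for calling an MCP tool.
// Argument names other than crossProject are assumptions.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "save_context",
    arguments: {
      project: "my-project", // hypothetical field name
      content: "Chose D1 over KV for relational queries",
      crossProject: true,    // detect dependencies across all projects
    },
  },
};

// An MCP client such as mcp-remote sends this to the /sse endpoint;
// it is printed here only to show the payload shape.
console.log(JSON.stringify(request, null, 2));
```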

🧪 Testing

This project includes 221 unit tests covering all architectural layers.

Run Tests

# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with UI
npm run test:ui

# Run tests with coverage report
npm run test:coverage

Test Coverage

  • Domain Layer: 146 tests (ContextSnapshot, CausalityService, ContextService, MemoryManagerService, PropagationService, MetaLearningService)
  • Application Layer: 10 tests (ToolExecutionHandler, MCP tool dispatch)
  • Infrastructure Layer: 53 tests (D1Repository, VectorizeRepository, CloudflareAIProvider with fallbacks)
  • Presentation Layer: 12 tests (MCPRouter, CORS, error handling)

Test Structure

Tests are co-located with source files using the .test.ts suffix:

src/
├── domain/
│   ├── models/
│   │   ├── ContextSnapshot.ts
│   │   └── ContextSnapshot.test.ts
│   └── services/
│       ├── ContextService.ts
│       ├── ContextService.test.ts
│       ├── CausalityService.ts
│       ├── CausalityService.test.ts
│       ├── MemoryManagerService.ts
│       ├── MemoryManagerService.test.ts
│       ├── PropagationService.ts
│       ├── PropagationService.test.ts
│       ├── MetaLearningService.ts
│       └── MetaLearningService.test.ts
├── application/
│   └── handlers/
│       ├── ToolExecutionHandler.ts
│       └── ToolExecutionHandler.test.ts
└── ...

All tests use Vitest with mocking for external dependencies (D1, AI services).

Continuous Integration

This project uses GitHub Actions for automated testing and quality checks.

Automated Checks on Every Push/PR:

  • ✅ TypeScript compilation (npm run type-check)
  • ✅ Unit tests (npm test)
  • ✅ Test coverage reports
  • ✅ Code formatting (Biome)
  • ✅ Linting (Biome)

Status Badges:

  • CI status displayed at top of README
  • Automatically updates on each commit
  • Shows passing/failing state

Workflow Configuration: .github/workflows/ci.yml

The CI pipeline runs on Node.js 20.x and ensures code quality before merging.

Database Setup

This project uses Cloudflare D1 for persistent context storage.

Initial Setup

  1. Create D1 Database:

    wrangler d1 create mcp-context
  2. Update wrangler.jsonc with your database ID:

    {
      "d1_databases": [
        {
          "binding": "DB",
          "database_name": "mcp-context",
          "database_id": "your-database-id-here"
        }
      ]
    }
  3. Run Initial Migration:

    wrangler d1 execute mcp-context --file=./migrations/0001_initial_schema.sql

Local Development

For local testing, initialize the local D1 database:

wrangler d1 execute mcp-context --local --file=./migrations/0001_initial_schema.sql

Verify Schema

Check that tables were created successfully:

# Production
wrangler d1 execute mcp-context --command="SELECT name FROM sqlite_master WHERE type='table'"

# Local
wrangler d1 execute mcp-context --local --command="SELECT name FROM sqlite_master WHERE type='table'"

Database Migrations

All database schema changes are managed through versioned migration files in migrations/:

  • 0001_initial_schema.sql - Initial context snapshots table with semantic indexes

See migrations/README.md for detailed migration management guide.

License

This project is licensed under the MIT License - see the LICENSE file for details.

🔬 Research Foundation

This implementation is based on the research paper "Semantic Intent as Single Source of Truth: Immutable Governance for AI-Assisted Development".

Core Principles Applied:

  1. Semantic Over Structural - Use meaning, not technical characteristics
  2. Intent Preservation - Maintain semantic contracts through transformations
  3. Observable Anchoring - Base behavior on directly observable properties
  4. Immutable Governance - Protect semantic integrity at runtime


🤝 Contributing

We welcome contributions! This is a reference implementation, so contributions should maintain semantic intent principles.

How to Contribute

  1. Read the guidelines: CONTRIBUTING.md
  2. Check existing issues: Avoid duplicates
  3. Follow the architecture: Maintain layer boundaries
  4. Add tests: All changes need test coverage
  5. Document intent: Explain WHY, not just WHAT

Contribution Standards

  • ✅ Follow semantic intent patterns
  • ✅ Maintain hexagonal architecture
  • ✅ Add comprehensive tests
  • ✅ Include semantic documentation
  • ✅ Pass all CI checks


🔒 Security

Security is a top priority. Please review our Security Policy for:

  • Secrets management best practices
  • What to commit / what to exclude
  • Reporting security vulnerabilities
  • Security checklist for deployment

Found a vulnerability? Email: [email protected]