
@echoes-io/mcp-server

v8.1.0

📚 Model Context Protocol server for AI-powered storytelling with a Narrative Knowledge Graph: extract characters, locations, and relationships, and search your stories semantically.

Echoes MCP Server


Model Context Protocol server for AI integration with the Echoes storytelling platform.

Features

  • Narrative Knowledge Graph: Automatically extracts characters, locations, events, and their relationships using Gemini AI
  • Semantic Search: Find relevant chapters using natural language queries
  • Entity Search: Search for characters, locations, and events
  • Relation Search: Explore relationships between entities
  • Arc Isolation: Each arc is a separate narrative universe - no cross-arc contamination
  • Statistics: Aggregate word counts, POV distribution, and more
  • Dynamic Prompts: Reusable prompt templates with placeholder substitution

Installation

npm install -g @echoes-io/mcp-server

Or run directly with npx:

npx @echoes-io/mcp-server --help

Requirements

  • Node.js 20+
  • Gemini API key (for entity extraction)

Usage

CLI

# Count words in a markdown file
echoes words-count ./content/arc1/ep01/ch001.md

# Index timeline content
echoes index ./content

# Index only a specific arc
echoes index ./content --arc bloom

# Get statistics
echoes stats
echoes stats --arc arc1 --pov Alice

# Search (filters by arc to avoid cross-arc contamination)
echoes search "primo incontro" --arc bloom
echoes search "Alice" --type entities --arc bloom

# Check narrative consistency
echoes check-consistency bloom
echoes check-consistency bloom --rules kink-firsts,outfit-claims

MCP Server

Configure in your MCP client (e.g., Claude Desktop, Kiro):

{
  "mcpServers": {
    "echoes": {
      "command": "npx",
      "args": ["@echoes-io/mcp-server"],
      "cwd": "/path/to/timeline",
      "env": {
        "GEMINI_API_KEY": "your_api_key"
      }
    }
  }
}

Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| GEMINI_API_KEY | Yes | - | API key for Gemini entity extraction |
| ECHOES_GEMINI_MODEL | No | gemini-2.5-flash | Gemini model for extraction |
| ECHOES_EMBEDDING_MODEL | No | Xenova/e5-small-v2 | HuggingFace embedding model |
| ECHOES_EMBEDDING_DTYPE | No | fp32 | Quantization level: fp32, q8, q4 (see Performance Notes) |
| HF_TOKEN | No | - | HuggingFace token for gated models |
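For a shell session, the variables above can be set as follows. The API-key value is a placeholder, and the optional lines restate the documented defaults except for the dtype, which is set to the recommended q8:

```shell
# Required for entity extraction (placeholder value)
export GEMINI_API_KEY="your_api_key"

# Optional overrides (model values below are the documented defaults)
export ECHOES_GEMINI_MODEL="gemini-2.5-flash"
export ECHOES_EMBEDDING_MODEL="Xenova/e5-small-v2"

# Default is fp32; q8 trades a little quality for speed and memory
export ECHOES_EMBEDDING_DTYPE="q8"

echo "dtype: $ECHOES_EMBEDDING_DTYPE"
```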

Available Tools

| Tool | Description |
|------|-------------|
| words-count | Count words and statistics in a markdown file |
| index | Index timeline content into LanceDB |
| search | Search chapters, entities, or relations |
| stats | Get aggregate statistics |
| check-consistency | Analyze arc for narrative inconsistencies |
| timeline-overview | Quick overview of all arcs: status, chapters, words, POVs |
| graph-export | Export knowledge graph in various formats |
| history | Query character/arc history (kinks, outfits, locations, relations) |
| review-generate | Generate review file for pending entity/relation extractions |
| review-status | Show review statistics for an arc |
| review-apply | Apply corrections from review file to database |

Available Prompts

| Prompt | Arguments | Description |
|--------|-----------|-------------|
| arc-resume | arc, episode?, lastChapters? | Load complete context for resuming work on an arc |
| new-chapter | arc, chapter | Create a new chapter |
| revise-chapter | arc, chapter | Revise an existing chapter |
| expand-chapter | arc, chapter, target | Expand chapter to target word count |
| new-character | name | Create a new character sheet |
| new-episode | arc, episode | Create a new episode outline |
| new-arc | name | Create a new story arc |
| revise-arc | arc | Review and fix an entire arc |

Architecture

Content Hierarchy

Timeline (content directory)
└── Arc (story universe)
    └── Episode (story event)
        └── Chapter (individual .md file)
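A minimal skeleton matching this hierarchy can be created like so. The arc/episode/chapter names reuse the CLI examples above, while the frontmatter fields (title, pov) are illustrative assumptions, not a documented schema:

```shell
# One arc, one episode, one chapter
# (frontmatter fields are hypothetical examples)
mkdir -p content/bloom/ep01
cat > content/bloom/ep01/ch001.md <<'EOF'
---
title: "First Meeting"
pov: Alice
---

Chapter text goes here.
EOF

ls content/bloom/ep01
```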

Arc Isolation

Each arc is treated as a separate narrative universe:

  • Entities are scoped to arcs: bloom:CHARACTER:Alice β‰  work:CHARACTER:Alice
  • Relations are internal to arcs
  • Searches can be filtered by arc to avoid cross-arc contamination

Data Flow

┌─────────────────────────────────────────────────────────────┐
│                     INDEXING PHASE                          │
├─────────────────────────────────────────────────────────────┤
│  1. Scan content/*.md (filesystem scanner)                  │
│  2. Parse frontmatter + content (gray-matter)               │
│  3. For each chapter:                                       │
│     a. Extract entities/relations with Gemini API           │
│     b. Generate embeddings (Transformers.js ONNX)           │
│     c. Calculate word count and statistics                  │
│  4. Save everything to LanceDB                              │
└─────────────────────────────────────────────────────────────┘

Development

# Install dependencies
npm install

# Run tests
npm test

# Run tests with coverage
npm run test:coverage

# Lint
npm run lint

# Type check
npm run typecheck

# Build
npm run build

Tech Stack

| Purpose | Tool |
|---------|------|
| Runtime | Node.js 20+ |
| Language | TypeScript |
| Vector DB | LanceDB |
| Embeddings | @huggingface/transformers (ONNX) |
| Entity Extraction | Gemini AI |
| MCP SDK | @modelcontextprotocol/sdk |
| Testing | Vitest |
| Linting | Biome |

Performance Notes

Embedding Quantization

The default embedding model (Xenova/e5-small-v2) supports different quantization levels via ECHOES_EMBEDDING_DTYPE:

| Level | Speed | Quality | Memory | Recommendation |
|-------|-------|---------|--------|----------------|
| fp32 | Baseline | Best (100%) | High | Production with ample resources |
| q8 | 2-3x faster | Excellent (99.6%) | 50% less | Recommended - optimal balance |
| q4 | 3-4x faster | Good (99.1%) | 75% less | Resource-constrained environments |

Note: Some models like onnx-community/embeddinggemma-300m-ONNX don't support fp16. Always check model documentation.

Recommended setting:

export ECHOES_EMBEDDING_DTYPE=q8

License

MIT


Part of the Echoes project - a multi-POV digital storytelling platform.