Log Intelligence MCP Server

An AI-powered log analysis server built on the Model Context Protocol (MCP). It ingests structured logs from multiple sources, detects anomalies, clusters similar errors, summarises recurring failures, and identifies regressions after deployments.

Features

  • Multi-source ingestion -- Serilog JSON/CLEF, Elasticsearch dumps, SQLite databases, plain text log files
  • Error fingerprinting -- Automatically normalises and clusters similar error messages
  • Anomaly detection -- Statistical spike detection using z-score analysis on time-bucketed error rates
  • Regression detection -- Compares error patterns before/after a deployment date
  • Full-text search -- Search across all ingested log entries
  • Timeline visualisation -- Error frequency bucketed over time with ASCII bar charts

Install

# From npm (recommended)
npm install -g log-intelligence-mcp

# Or run without installing
npx log-intelligence-mcp

Quick Start (from source)

# Install dependencies
npm install

# Build
npm run build

# Run (stdio transport for MCP clients)
npm start

MCP Client Configuration

Cursor (recommended: npx)

Add to your .cursor/mcp.json:

{
  "mcpServers": {
    "log-intelligence": {
      "command": "npx",
      "args": ["log-intelligence-mcp"]
    }
  }
}

Cursor (global install)

If you installed globally with npm install -g log-intelligence-mcp:

{
  "mcpServers": {
    "log-intelligence": {
      "command": "log-intelligence-mcp"
    }
  }
}

Cursor (local development)

When developing from source:

{
  "mcpServers": {
    "log-intelligence": {
      "command": "node",
      "args": ["path/to/log-intelligence-mcp/dist/index.js"]
    }
  }
}

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "log-intelligence": {
      "command": "npx",
      "args": ["log-intelligence-mcp"]
    }
  }
}

Tools

| Tool | Description |
|------|-------------|
| ingest_logs | Load logs from a source (serilog, elastic, sql, flatfile) |
| summarise_errors | Cluster and summarise error-level entries |
| detect_new_error_pattern | Find errors in a comparison window absent from a baseline |
| regression_after_date | Compare error patterns before/after a deployment |
| search_logs | Full-text search across all ingested entries |
| get_error_timeline | Error frequency over time with optional anomaly detection |
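
Tool calls follow the same shape as the ingest examples shown under Log Sources below. For instance, a regression check after a deploy might look like this (the parameter name is a guess for illustration, not taken from the table above):

# Hypothetical invocation -- the date parameter name is assumed
regression_after_date(date="2024-05-01T12:00:00Z")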

Resources

| URI | Description |
|-----|-------------|
| logs://sources | List all ingested log sources with entry counts |
| logs://summary | Overall stats: total entries, error count, unique patterns |

Prompts

| Prompt | Description |
|--------|-------------|
| analyse-logs | Guided workflow: ingest, summarise, detect anomalies |
| investigate-error | Deep dive into a specific error pattern |

Log Sources

Serilog JSON (.clef)

Reads Compact Log Event Format files -- one JSON object per line with @t, @l, @mt, @m, @x fields. Also supports standard Serilog JSON output.

# Example: ingest a Serilog file
ingest_logs(source="serilog", path="./logs/app.clef")
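
For reference, a CLEF file carries one JSON object per line, with the timestamp (@t), level (@l), message template (@mt), rendered message (@m), and exception (@x); the values below are invented sample data:

{"@t":"2024-05-01T12:03:41.512Z","@l":"Error","@mt":"Failed to connect to {Host}","@m":"Failed to connect to \"redis:6379\"","@x":"System.Net.Sockets.SocketException: Connection refused"}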

Elasticsearch Dumps

Accepts a JSON array of _source documents, the full ES response format (hits.hits[]._source), or NDJSON. Can also query a live Elasticsearch endpoint.

# File dump
ingest_logs(source="elastic", path="./dumps/errors.json")

# Live query
ingest_logs(source="elastic", endpoint="http://localhost:9200", index="app-logs", filter="level:error")
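
For reference, the three accepted file shapes look like this (the fields inside _source are examples, not required names):

# JSON array of _source documents
[{ "@timestamp": "2024-05-01T12:03:41Z", "level": "error", "message": "Connection refused" }]

# Full ES response format -- documents nested under hits.hits[]._source
{ "hits": { "hits": [{ "_source": { "level": "error", "message": "Connection refused" } }] } }

# NDJSON -- one document per line
{ "level": "error", "message": "Connection refused" }
{ "level": "error", "message": "Timeout after 30s" }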

SQLite Debug Tables

Reads from SQLite database files. Auto-maps columns by convention (Timestamp, Level, Message, Exception).

ingest_logs(source="sql", path="./debug.db", table="Logs", filter="Level = 'Error'")

Flat Files

Parses common formats with ISO timestamps, log levels, and messages. Handles multi-line stack traces. Supports custom regex patterns.

ingest_logs(source="flatfile", path="./logs/app.log")
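
As a rough illustration, lines shaped like the following are the kind of format described above, with indented stack-trace lines folded into the preceding entry (this sample is invented; the exact built-in patterns may differ):

2024-05-01 12:03:41,512 ERROR Connection to redis:6379 refused
java.net.ConnectException: Connection refused
	at com.example.RedisClient.connect(RedisClient.java:42)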

Configuration

Optionally place a log-intelligence.config.json in your working directory to pre-configure sources:

{
  "sources": [
    { "name": "app-logs", "type": "serilog", "path": "./logs/app.clef" },
    { "name": "db-errors", "type": "sql", "path": "./debug.db", "table": "Logs", "filter": "Level = 'Error'" }
  ],
  "defaults": {
    "bucketMinutes": 60,
    "anomalyThreshold": 2.0,
    "maxSamples": 3
  }
}

Pre-configured sources are automatically ingested when the server starts.

How It Works

Error Fingerprinting

Messages are normalised by stripping variable tokens (UUIDs, numbers, timestamps, paths, URLs, IPs, quoted strings), then SHA-256 hashed to produce a 16-character fingerprint. Entries sharing a fingerprint are grouped into error clusters.
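
A minimal TypeScript sketch of that idea; the regex set and helper names here are illustrative, not the package's actual implementation:

import { createHash } from "node:crypto";

// Illustrative normalisation -- replace specific tokens (UUIDs, timestamps,
// URLs, IPs, quoted strings) before the generic number rule, so order matters.
function normalise(message: string): string {
  return message
    .replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi, "<uuid>")
    .replace(/\d{4}-\d{2}-\d{2}[T ][\d:.]+Z?/g, "<timestamp>")
    .replace(/https?:\/\/\S+/g, "<url>")
    .replace(/\b\d{1,3}(?:\.\d{1,3}){3}\b/g, "<ip>")
    .replace(/(["']).*?\1/g, "<string>")
    .replace(/\d+/g, "<num>");
}

// SHA-256 of the normalised message, truncated to 16 hex characters.
function fingerprint(message: string): string {
  return createHash("sha256").update(normalise(message)).digest("hex").slice(0, 16);
}

// Both lines normalise to the same template, so they share a fingerprint:
fingerprint("Timeout calling order 4821 from 10.0.0.7");
fingerprint("Timeout calling order 977 from 10.0.0.12");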

Anomaly Detection

Error entries are bucketed into time windows. The baseline mean and standard deviation are computed, and buckets where count > mean + threshold * stddev are flagged as anomalous. Each bucket gets a z-score for ranking.
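
A sketch of that flagging step (bucketing omitted; this assumes one error count per time bucket, and the names are illustrative):

// Flag buckets whose error count exceeds mean + threshold * stddev,
// and attach a z-score to each bucket for ranking.
function flagAnomalies(counts: number[], threshold = 2.0) {
  const mean = counts.reduce((a, b) => a + b, 0) / counts.length;
  const variance =
    counts.reduce((a, b) => a + (b - mean) ** 2, 0) / counts.length;
  const stddev = Math.sqrt(variance);
  return counts.map((count, bucket) => ({
    bucket,
    count,
    zScore: stddev === 0 ? 0 : (count - mean) / stddev,
    anomalous: count > mean + threshold * stddev,
  }));
}

// A quiet baseline with one spike: bucket 4 is flagged (z ≈ 2.2).
flagAnomalies([2, 3, 2, 4, 30, 3]);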

Regression Detection

Given a deployment date, the detector splits logs into before/after windows, clusters errors independently in each, then performs a set-diff to identify:

  • New errors -- appeared only after deployment
  • Increased errors -- existed before, but the post-deployment rate is more than 2x the baseline
  • Resolved errors -- disappeared after deployment
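
A minimal sketch of that set-diff, assuming each window has already been reduced to a map from fingerprint to error count (all names here are illustrative, not the package's internals):

// Classify fingerprints by comparing the two windows. Counts are assumed
// to come from equal-length windows; if the windows differ in duration,
// normalise to rates (count per hour) before comparing.
function diffWindows(
  before: Map<string, number>,
  after: Map<string, number>,
) {
  const newErrors: string[] = [];
  const increased: string[] = [];
  const resolved: string[] = [];
  for (const [fp, afterCount] of after) {
    const beforeCount = before.get(fp);
    if (beforeCount === undefined) {
      newErrors.push(fp); // appeared only after deployment
    } else if (afterCount > 2 * beforeCount) {
      increased.push(fp); // rate more than doubled
    }
  }
  for (const fp of before.keys()) {
    if (!after.has(fp)) resolved.push(fp); // disappeared after deployment
  }
  return { newErrors, increased, resolved };
}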

Sample Data

The samples/ directory contains example log files for testing:

  • app.clef -- Serilog CLEF format with a simulated deployment + Redis outage
  • webserver.log -- Flat file format with Java stack traces
  • elastic-dump.json -- Elasticsearch JSON dump
  • log-intelligence.config.json -- Config to auto-ingest all samples

To test with sample data, copy the config to your working directory:

cp samples/log-intelligence.config.json .
npm start

Development

# Run in dev mode with hot reload
npm run dev

# Type-check without emitting
npx tsc --noEmit

License

MIT