
@suppleaardvark/csv-explorer-mcp v1.0.0

MCP server for exploring and analyzing CSV files with streaming support


CSV Explorer MCP Server

A Model Context Protocol (MCP) server for exploring and analyzing CSV files. Provides tools for inspection, sampling, schema inference, statistics, filtering, and more.

Installation

# Install dependencies and build from source
npm install
npm run build

Usage

Add to your MCP configuration:

{
  "mcpServers": {
    "csv-explorer": {
      "command": "node",
      "args": ["path/to/dist/index.js"]
    }
  }
}

Tools

csv_inspect

Get an overview of a CSV file including size, row/column count, detected delimiter, and a preview of the data. Large field values are automatically truncated with content-type hints.

csv_inspect({ file: "/path/to/data.csv", previewRows: 5 })
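The content-type hints for truncated values could work along these lines. This is an illustrative sketch only; the function name and the exact heuristics are assumptions, not the server's actual code.

```typescript
// Hypothetical sketch of deriving a content-type hint for a large field
// value; the real server's heuristics may differ.
function contentTypeHint(value: string): string {
  const v = value.trim();
  // Values that parse as JSON get a "json" hint
  if (v.startsWith("{") || v.startsWith("[")) {
    try { JSON.parse(v); return "json"; } catch { /* fall through */ }
  }
  // Markup-like values get an "html" hint
  if (/^<[a-zA-Z!/]/.test(v)) return "html";
  // Long runs of base64 alphabet characters suggest encoded binary data
  if (v.length > 100 && /^[A-Za-z0-9+/]+={0,2}$/.test(v)) return "base64";
  return "text";
}
```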

csv_sample

Get sample records using various sampling strategies.

csv_sample({ file: "/path/to/data.csv", mode: "random", count: 10 })
// modes: "first", "last", "random", "range"

csv_schema

Infer the schema by sampling records. Returns column names, types, and nullability.

csv_schema({ file: "/path/to/data.csv", sampleSize: 1000 })
// outputFormat: "inferred", "json-schema", "formatted"
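Sampling-based type inference could look roughly like the sketch below: check the sampled values of a column against progressively looser type predicates. The rules and names here are assumptions for illustration, not the tool's actual implementation.

```typescript
// Illustrative per-column type inference from sampled string values.
type ColumnType = "integer" | "number" | "boolean" | "string";

function inferType(values: string[]): { type: ColumnType; nullable: boolean } {
  const nonEmpty = values.filter((v) => v !== "");
  // Empty cells in the sample mark the column as nullable
  const nullable = nonEmpty.length < values.length;
  const all = (p: (v: string) => boolean) =>
    nonEmpty.length > 0 && nonEmpty.every(p);
  if (all((v) => /^-?\d+$/.test(v))) return { type: "integer", nullable };
  if (all((v) => v.trim() !== "" && !Number.isNaN(Number(v))))
    return { type: "number", nullable };
  if (all((v) => /^(true|false)$/i.test(v))) return { type: "boolean", nullable };
  return { type: "string", nullable };
}
```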

csv_stats

Collect aggregate statistics for fields. Includes min/max, mean, median, stdDev for numeric fields, and top values for categorical fields.

csv_stats({ file: "/path/to/data.csv", fields: ["price", "category"] })

csv_search

Search for records where a field matches a regex pattern.

csv_search({ file: "/path/to/data.csv", field: "email", pattern: "@example\\.com$" })

csv_filter

Filter records using query expressions. Supports comparisons (==, !=, <, >, <=, >=), text operations (contains, startswith, endswith, matches), and compound queries (AND, OR).

csv_filter({ file: "/path/to/data.csv", query: 'status == "active" AND age > 30' })
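Evaluating a single comparison clause against a record might look like the sketch below, with string equality for `==`/`!=` and numeric coercion for ordering operators. The real query engine (with AND/OR and the full set of text operations) is necessarily more complete; this is a minimal assumption-laden sketch.

```typescript
// Minimal sketch of evaluating one comparison clause against a CSV record;
// not the server's actual query engine.
type CsvRecord = { [field: string]: string };

function evalComparison(
  rec: CsvRecord, field: string, op: string, rhs: string
): boolean {
  const lhs = rec[field];
  const ln = Number(lhs), rn = Number(rhs);
  const numeric = !Number.isNaN(ln) && !Number.isNaN(rn);
  switch (op) {
    case "==": return lhs === rhs;
    case "!=": return lhs !== rhs;
    case ">":  return numeric && ln > rn;   // ordering ops require numbers
    case "<":  return numeric && ln < rn;
    case ">=": return numeric && ln >= rn;
    case "<=": return numeric && ln <= rn;
    case "contains": return lhs.includes(rhs);
    default: return false;
  }
}
```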

csv_validate

Validate a CSV file for syntax errors and optionally against a schema.

csv_validate({
  file: "/path/to/data.csv",
  schema: {
    columns: [
      { name: "id", type: "integer", required: true },
      { name: "email", type: "string", pattern: "^[^@]+@[^@]+$" }
    ]
  }
})

csv_tail

Read new records appended since a cursor position. Useful for monitoring files that are being actively written.

csv_tail({ file: "/path/to/data.csv", cursor: 1024, maxRecords: 100 })

csv_get_cursor

Get the current end-of-file position for use with csv_tail.

csv_get_cursor({ file: "/path/to/data.csv" })
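The cursor in the csv_tail/csv_get_cursor pair is naturally implemented as a byte offset into the file. A sketch of that mechanism with Node's fs module, assuming the cursor is simply the end-of-file offset (function names here are illustrative, not the server's internals):

```typescript
// Sketch of cursor-based tailing: the cursor is a byte offset, and only
// bytes appended after it are read on the next call.
import * as fs from "node:fs";

function getCursor(file: string): number {
  return fs.statSync(file).size; // current end-of-file offset
}

function readSince(file: string, cursor: number): { text: string; cursor: number } {
  const size = fs.statSync(file).size;
  if (size <= cursor) return { text: "", cursor: size }; // nothing new
  const buf = Buffer.alloc(size - cursor);
  const fd = fs.openSync(file, "r");
  fs.readSync(fd, buf, 0, buf.length, cursor); // read only the appended bytes
  fs.closeSync(fd);
  return { text: buf.toString("utf8"), cursor: size };
}
```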

csv_diff

Compare two CSV files and report differences.

csv_diff({ file1: "/path/to/old.csv", file2: "/path/to/new.csv", keyField: "id" })
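A key-field diff can be sketched as two map lookups: rows present only in the new file are additions, rows present only in the old file are removals, and rows whose values differ under the same key are changes. The output shape below is an assumption for illustration, not the tool's actual report format.

```typescript
// Hypothetical sketch of diffing two record sets by a key field.
type Row = { [k: string]: string };

function diffByKey(oldRows: Row[], newRows: Row[], key: string) {
  const oldMap = new Map<string, Row>(oldRows.map((r) => [r[key], r]));
  const newMap = new Map<string, Row>(newRows.map((r) => [r[key], r]));
  const added: string[] = [], removed: string[] = [], changed: string[] = [];
  for (const k of newMap.keys()) {
    if (!oldMap.has(k)) added.push(k); // key only in the new file
  }
  for (const [k, oldRow] of oldMap) {
    const newRow = newMap.get(k);
    if (!newRow) removed.push(k); // key only in the old file
    else if (JSON.stringify(newRow) !== JSON.stringify(oldRow)) changed.push(k);
  }
  return { added, removed, changed };
}
```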

csv_extract

Extract a specific field value from a CSV record. Use for retrieving large/truncated field data. Can write to file for binary data (e.g., base64 images).

// Get field value inline
csv_extract({ file: "/path/to/data.csv", field: "description", line: 5 })

// Decode base64 and write to file
csv_extract({
  file: "/path/to/data.csv",
  field: "screenshot",
  line: 1,
  decode: "base64",
  outputFile: "/tmp/screenshot.png"
})

csv_large_fields

List fields containing large values (e.g., base64 images, JSON blobs). Helps identify which fields were truncated in csv_inspect.

csv_large_fields({ file: "/path/to/data.csv", threshold: 1000, sampleRows: 100 })

Features

  • Streaming Architecture: Memory-efficient processing of large files
  • Auto-Detection: Automatically detects delimiters (comma, tab, semicolon, pipe) and encoding
  • Smart Truncation: Large field values are truncated with content-type hints (base64, JSON, HTML)
  • Query Engine: Filter records with SQL-like expressions supporting AND/OR logic
  • Schema Inference: Detect column types (string, integer, number, boolean, date, email, url)
  • Online Statistics: Uses Welford's algorithm for efficient single-pass statistics
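Welford's algorithm, named in the feature list above, maintains a running mean and sum of squared deviations so that variance and standard deviation come out of a single pass with good numerical stability. This is the textbook form of the algorithm, not necessarily the server's exact implementation:

```typescript
// Welford's single-pass algorithm for mean and (sample) variance.
class RunningStats {
  private n = 0;
  private mean = 0;
  private m2 = 0; // running sum of squared deviations from the mean

  push(x: number): void {
    this.n += 1;
    const delta = x - this.mean;
    this.mean += delta / this.n;
    // Uses the updated mean, which is what keeps the update stable
    this.m2 += delta * (x - this.mean);
  }

  get count(): number { return this.n; }
  get average(): number { return this.mean; }
  get variance(): number { return this.n > 1 ? this.m2 / (this.n - 1) : 0; }
  get stdDev(): number { return Math.sqrt(this.variance); }
}
```

Because each `push` is O(1) and no values are retained, this pairs naturally with the streaming architecture: statistics for a multi-gigabyte file cost the same memory as for a ten-row one.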

Development

# Run tests
npm test

# Build
npm run build

# Watch mode
npm run dev

License

MIT