# CSV Explorer MCP Server
A Model Context Protocol (MCP) server for exploring and analyzing CSV files. Provides tools for inspection, sampling, schema inference, statistics, filtering, and more.
## Installation

```bash
npm install
npm run build
```

## Usage
Add to your MCP configuration:
```json
{
  "mcpServers": {
    "csv-explorer": {
      "command": "node",
      "args": ["path/to/dist/index.js"]
    }
  }
}
```

## Tools
### csv_inspect
Get an overview of a CSV file including size, row/column count, detected delimiter, and a preview of the data. Large field values are automatically truncated with content-type hints.
```javascript
csv_inspect({ file: "/path/to/data.csv", previewRows: 5 })
```

### csv_sample
Get sample records using various sampling strategies.
```javascript
csv_sample({ file: "/path/to/data.csv", mode: "random", count: 10 })
// modes: "first", "last", "random", "range"
```

### csv_schema
Infer the schema by sampling records. Returns column names, types, and nullability.
```javascript
csv_schema({ file: "/path/to/data.csv", sampleSize: 1000 })
// outputFormat: "inferred", "json-schema", "formatted"
```

### csv_stats
Collect aggregate statistics for fields. Includes min/max, mean, median, stdDev for numeric fields, and top values for categorical fields.
```javascript
csv_stats({ file: "/path/to/data.csv", fields: ["price", "category"] })
```

### csv_search
Search for records where a field matches a regex pattern.
```javascript
csv_search({ file: "/path/to/data.csv", field: "email", pattern: "@example\\.com$" })
```

### csv_filter
Filter records using query expressions. Supports comparisons (`==`, `!=`, `<`, `>`, `<=`, `>=`), text operations (`contains`, `startswith`, `endswith`, `matches`), and compound queries (`AND`, `OR`).
```javascript
csv_filter({ file: "/path/to/data.csv", query: 'status == "active" AND age > 30' })
```
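Text operations and `OR` combine in the same expression; for instance (the field names here are illustrative, not from any sample data):

```javascript
csv_filter({
  file: "/path/to/data.csv",
  query: 'email endswith "@example.com" OR name contains "smith"'
})
```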
### csv_validate
Validate a CSV file for syntax errors and optionally against a schema.
```javascript
csv_validate({
  file: "/path/to/data.csv",
  schema: {
    columns: [
      { name: "id", type: "integer", required: true },
      { name: "email", type: "string", pattern: "^[^@]+@[^@]+$" }
    ]
  }
})
```

### csv_tail
Read new records appended since a cursor position. Useful for monitoring actively written files.
```javascript
csv_tail({ file: "/path/to/data.csv", cursor: 1024, maxRecords: 100 })
```

### csv_get_cursor
Get the current end-of-file position for use with csv_tail.
```javascript
csv_get_cursor({ file: "/path/to/data.csv" })
```
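csv_get_cursor and csv_tail pair naturally into a polling loop. A minimal sketch, assuming each csv_tail result returns its records along with an updated cursor (check the actual output shape):

```javascript
// Hypothetical monitoring loop: record the current EOF position,
// then periodically fetch whatever was appended after it.
let { cursor } = csv_get_cursor({ file: "/path/to/data.csv" })
setInterval(() => {
  const result = csv_tail({ file: "/path/to/data.csv", cursor, maxRecords: 100 })
  cursor = result.cursor   // assumed field: the new end-of-file position
  // ...process result.records (assumed field name)
}, 5000)
```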
### csv_diff
Compare two CSV files and report differences.
```javascript
csv_diff({ file1: "/path/to/old.csv", file2: "/path/to/new.csv", keyField: "id" })
```

### csv_extract
Extract a specific field value from a CSV record. Useful for retrieving large or truncated field data. Can write to a file for binary data (e.g., base64-encoded images).
```javascript
// Get field value inline
csv_extract({ file: "/path/to/data.csv", field: "description", line: 5 })

// Decode base64 and write to file
csv_extract({
  file: "/path/to/data.csv",
  field: "screenshot",
  line: 1,
  decode: "base64",
  outputFile: "/tmp/screenshot.png"
})
```

### csv_large_fields
List fields containing large values (e.g., base64 images, JSON blobs). Helps identify which fields were truncated in csv_inspect.
```javascript
csv_large_fields({ file: "/path/to/data.csv", threshold: 1000, sampleRows: 100 })
```
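These tools compose: when csv_inspect truncates a value, csv_large_fields identifies the heavy columns and csv_extract retrieves the full contents. A sketch of that flow (the "payload" field name is hypothetical):

```javascript
csv_inspect({ file: "/path/to/data.csv" })       // preview shows a truncated field
csv_large_fields({ file: "/path/to/data.csv" })  // confirm which columns hold large values
csv_extract({ file: "/path/to/data.csv", field: "payload", line: 3 })  // fetch the full value
```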
## Features

- **Streaming Architecture**: Memory-efficient processing of large files
- **Auto-Detection**: Automatically detects delimiters (comma, tab, semicolon, pipe) and encoding
- **Smart Truncation**: Large field values are truncated with content-type hints (base64, JSON, HTML)
- **Query Engine**: Filter records with SQL-like expressions supporting AND/OR logic
- **Schema Inference**: Detect column types (string, integer, number, boolean, date, email, url)
- **Online Statistics**: Uses Welford's algorithm for efficient single-pass statistics
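For reference, Welford's method keeps a running mean and a running sum of squared deviations, so mean and standard deviation come out of a single pass without buffering the column. A minimal sketch of the idea (not the server's actual code):

```javascript
// One-pass mean/variance accumulator (Welford's algorithm).
class RunningStats {
  constructor() {
    this.n = 0
    this.mean = 0
    this.m2 = 0 // sum of squared deviations from the running mean
  }
  push(x) {
    this.n += 1
    const delta = x - this.mean
    this.mean += delta / this.n
    this.m2 += delta * (x - this.mean) // second factor uses the updated mean
  }
  stdDev() {
    return this.n > 1 ? Math.sqrt(this.m2 / (this.n - 1)) : 0
  }
}
```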
## Development

```bash
# Run tests
npm test

# Build
npm run build

# Watch mode
npm run dev
```

## License
MIT
