json-mcp
Ergonomic MCP tools for efficient JSON manipulation in Claude Code.
Stop wrestling with bash/jq pipelines and multi-turn operations. json-mcp provides four simple tools that handle JSON files of any size with intuitive syntax and zero setup overhead.
Why json-mcp?
Before (without MCP tools):
- 4+ turns to filter, transform, or update JSON
- Complex jq syntax and bash pipelines
- Frequent errors with large files
- Setup overhead (npm install, script writing)
After (with json-mcp):
- 1 turn for most operations
- Simple filter syntax: {"price": ">1000"} (see the example below)
- Handles 100MB+ files with automatic streaming
- Zero setup - just install and go
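As a concrete illustration, a filter that takes a multi-step jq round-trip in the shell becomes a single call (the jq line is just one plausible baseline, not the only way to do it):
// Before: shell round-trip
//   jq '[.[] | select(.price > 1000)]' products.json > expensive.json
// After: one tool call
json_query({
  file: "products.json",
  filter: { "price": ">1000" }
})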
Proven Results
Rigorous testing across 5 real-world scenarios showed:
- 76% reduction in turns per task
- 83% reduction in tool calls
- 100% error elimination
- Perfect success rate maintained
Installation
Option 1: Global Installation (Recommended)
npm install -g json-mcp
Verify installation:
json-mcp --version
Note: Ensure your npm global bin directory is on your PATH. Find the install prefix with (the bin directory is <prefix>/bin on macOS/Linux, or the prefix itself on Windows):
npm config get prefix
Option 2: One-Off Usage
npx json-mcp@latest
Configuration
Global Configuration
Add to ~/.claude.json:
{
"mcpServers": {
"json-mcp": {
"command": "json-mcp"
}
}
}
After configuring, restart Claude Code for changes to take effect.
Per-Project Configuration (Portable)
Create .mcp.json in your project root:
{
"mcpServers": {
"json-mcp": {
"command": "json-mcp"
}
}
}
This makes the MCP server automatically available when working in that directory. Commit .mcp.json to share the configuration with your team.
Tools
json_query
Extract and filter data from JSON files with simple syntax.
Features:
- Simple comparison operators: >, <, >=, <=, =
- Grouping and aggregation: count
- Multiple output formats: json, table, csv, keys
- Automatic JSONL streaming for large files
- Preview mode with limit
Examples:
// Find expensive products
json_query({
file: "products.json",
filter: { "price": ">1000" },
output: "table"
})
// Group error logs by type
json_query({
file: "logs.jsonl",
filter: { "level": "error" },
groupBy: "error_type",
aggregate: "count"
})
// Preview first 5 results
json_query({
file: "large-dataset.json",
filter: { "status": "active" },
limit: 5
})
json_transform
Transform JSON structure and convert between formats.
Features:
- JSON → CSV and JSON → JSONL conversion
- Pre-filtering before transformation
- Field mapping and renaming
- Handles large files efficiently
Examples:
// Convert JSON to CSV
json_transform({
file: "products.json",
output: "products.csv",
format: "csv"
})
// Filter then transform
json_transform({
file: "users.json",
output: "active-users.csv",
format: "csv",
filter: { "status": "active" }
})
// Field mapping
json_transform({
file: "data.json",
output: "mapped.json",
format: "json",
template: {
"id": "sku",
"name": "title"
}
})
json_update
Update or create fields in JSON files atomically with wildcard paths.
Features:
- Create new fields: Automatically creates nested paths that don't exist
- Wildcard path syntax: **.fieldName (no jq expertise required)
- Conditional updates with a where clause
- Dry-run preview mode
- Automatic backup creation
- Atomic writes (no broken JSON)
Examples:
// Update existing field
json_update({
file: "config.json",
updates: [{
path: "$.server.port",
value: 8080
}]
})
// Create new nested field (creates intermediate objects automatically)
json_update({
file: "settings.json",
updates: [{
path: "$.mcpServers.vision",
value: {
command: "npx",
args: ["-y", "mcp-gemini-vision"]
}
}]
})
// Update all timeout values conditionally
json_update({
file: "config.json",
updates: [{
path: "**.timeout",
value: 60000,
where: { oldValue: 30000 }
}]
})
// Preview changes first
json_update({
file: "config.json",
updates: [{
path: "$.server.port",
value: 8080
}],
dryRun: true
})
// Multiple updates at once
json_update({
file: "settings.json",
updates: [
{ path: "$.theme", value: "dark" },
{ path: "$.notifications.enabled", value: true },
{ path: "$.newSection.subsection.field", value: "creates full path" },
{ path: "$.permissions.allow[0]", value: "mcp__chrome-devtools__*" } // creates array as needed
]
})
json_validate
Validate JSON files against schemas with detailed error reporting.
Features:
- JSON Schema validation using Ajv
- Batch validation of multiple files
- Detailed or summary output modes
- Clear violation reporting
- 100% accuracy
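Schemas are standard JSON Schema documents. As an illustration, a minimal product-schema.json for the examples below might look like:
{
  "type": "object",
  "properties": {
    "sku": { "type": "string" },
    "price": { "type": "number", "minimum": 0 }
  },
  "required": ["sku", "price"]
}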
Examples:
// Validate single file
json_validate({
files: "data.json",
schema: "schema.json"
})
// Validate multiple files
json_validate({
files: ["file1.json", "file2.json", "file3.json"],
schema: "schema.json",
output: "detailed"
})
// Summary mode
json_validate({
files: "products.json",
schema: "product-schema.json",
output: "summary"
})
Use Cases
Query & Filter
# Find all error logs from the last hour
json_query logs.jsonl --filter '{"level":"error","timestamp":">=2024-10-22T10:00:00Z"}' --groupBy error_type --aggregate count
Transform & Export
# Convert product catalog to CSV for Excel
json_transform products.json --output products.csv --format csv --filter '{"category":"Electronics"}'
Update Configuration
# Update all API timeouts across nested config
json_update config.json --path "**.timeout" --value 60000
Validate Data Quality
# Validate multiple data files against schema
json_validate data-*.json --schema product-schema.json --output detailed
json_audit
Explore schema and structure of large JSON/JSONL files with streaming inference.
Features:
- Streaming inference for arrays/objects/JSONL
- JSON Schema generation (2020-12)
- Field statistics (types, null rates, examples)
- Sampling and item limits for very large datasets
- Map/dictionary detection: Objects with many dynamic keys (author names, dates, IDs) are automatically collapsed to an additionalProperties pattern instead of listing thousands of keys, as illustrated below
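For example, an object keyed by thousands of user IDs would be summarized roughly as:
{
  "type": "object",
  "additionalProperties": { "type": "object" }
}
rather than enumerating every key as its own property.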
Examples:
// Audit file structure
json_audit({ file: "data.json" })
// Sample large files
json_audit({ file: "large.json", sampling: 0.25, maxItems: 50000 })
// Disable map collapsing for full schema
json_audit({ file: "data.json", collapseMapThreshold: 0 })Performance
json-mcp is designed to handle files of any size:
- Automatic streaming for JSONL files (tested with 100MB+)
- Memory-efficient operations with large JSON arrays
- Fast filtering without loading entire file into memory
- Atomic writes prevent data corruption
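The JSONL path is conceptually simple: records are read one line at a time, so memory use stays flat regardless of file size. A minimal sketch of the idea (not the package's actual code), assuming Node's readline:
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Stream a JSONL file and yield only records matching a predicate,
// without ever holding the whole file in memory.
async function* streamMatches(path: string, matches: (row: unknown) => boolean) {
  const rl = createInterface({ input: createReadStream(path), crlfDelay: Infinity });
  for await (const line of rl) {
    if (!line.trim()) continue;   // skip blank lines
    const row = JSON.parse(line); // one JSON record per line
    if (matches(row)) yield row;
  }
}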
Tested on:
- 5MB product catalogs (51,000+ records)
- 100MB+ JSONL logs (500,000+ entries)
- 189MB GeoJSON files (deeply nested structures)
Architecture
json-mcp uses a declarative tool pattern that makes it easy to add new tools:
export const tools: Tool[] = [
{
name: "json_query",
description: "Extract and query JSON data...",
inputSchema: { /* JSON Schema */ },
handler: jsonQueryHandler
},
// Add more tools...
];
See ARCHITECTURE.md for implementation details.
Development
Setup
npm install
npm run build
Development Mode
npm run dev
Testing
The project includes comprehensive test scenarios with real-world datasets. See ARCHIVE/ for testing methodology and results.
Contributing
Contributions welcome! The declarative architecture makes adding new tools straightforward:
- Add a tool definition to src/tools/index.ts
- Implement the handler function (see the sketch below)
- Update tests
- Submit a PR
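For orientation, a new entry might look like this; the tool, its schema, and the import path are hypothetical, following the declarative pattern shown in Architecture:
import type { Tool } from "./types"; // assumed location of the Tool type

// Hypothetical tool: count top-level records in a JSON file.
async function jsonCountHandler(args: { file: string }) {
  const { readFile } = await import("node:fs/promises");
  const data = JSON.parse(await readFile(args.file, "utf8"));
  return { count: Array.isArray(data) ? data.length : 1 };
}

export const jsonCountTool: Tool = {
  name: "json_count",
  description: "Count top-level records in a JSON file",
  inputSchema: {
    type: "object",
    properties: { file: { type: "string" } },
    required: ["file"]
  },
  handler: jsonCountHandler
};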
Research & Validation
This project underwent rigorous comparative testing to validate its effectiveness:
- 5 real-world test scenarios (query, transform, update, validate, large files)
- Parallel baseline testing (without MCP tools)
- Comparative testing (with json-mcp tools)
- Quantified improvements (76% turn reduction, 83% tool call reduction)
For detailed methodology and results, see ARCHIVE/.
Challenge Results
We completed multiple JSON-MCP challenges to validate real-world usage:
- JSONTestSuite parsing compatibility: see CHALLENGE_RESULTS.md
- Real dataset filtering/aggregation/transform: see REAL_CHALLENGE_RESULTS.md
Commands used and prompts are captured in JSON_CHALLENGES.md.
License
MIT
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with the Model Context Protocol for Claude Code
