# TOON MCP Server
MCP (Model Context Protocol) server for TOON (Token-Oriented Object Notation) encoding. Reduce LLM token usage by 50-70% when sending structured data.
## What is TOON?
TOON is a compact data format optimized for LLM input. Instead of repeating field names for every object, it uses a header-based format:
JSON (1,041 tokens):

```json
[
  {"id": 1, "name": "Product A", "price": 99.99},
  {"id": 2, "name": "Product B", "price": 149.99}
]
```

TOON (389 tokens):

```
[id,name,price]
1,Product A,99.99
2,Product B,149.99
```

Result: 62% fewer tokens = 62% lower input cost
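Conceptually, the tabular encoding above can be sketched in a few lines. This is a hypothetical helper for illustration only, not the package's actual implementation (the real encoder handles more input formats and edge cases such as commas inside values):

```javascript
// Sketch of tabular TOON encoding for arrays of uniform objects:
// emit the shared keys once as a [key,...] header, then one
// comma-separated row per object. Illustrative only.
function encodeToonSketch(rows) {
  const keys = Object.keys(rows[0]);
  const header = `[${keys.join(',')}]`;
  const lines = rows.map((row) => keys.map((k) => row[k]).join(','));
  return [header, ...lines].join('\n');
}

console.log(encodeToonSketch([
  { id: 1, name: 'Product A', price: 99.99 },
  { id: 2, name: 'Product B', price: 149.99 },
]));
// [id,name,price]
// 1,Product A,99.99
// 2,Product B,149.99
```

The savings come from stating each field name once in the header instead of once per object.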
## Installation

### As MCP Server (Claude Desktop / Claude Code)

1. Clone or download this repository.
2. Install dependencies:

   ```bash
   cd toon-mcp
   npm install
   ```

3. Add to your MCP settings:

Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "toon": {
      "command": "node",
      "args": ["/path/to/toon-mcp/src/index.js"]
    }
  }
}
```

Claude Code (`~/.claude/settings.json`):
```json
{
  "mcpServers": {
    "toon": {
      "command": "node",
      "args": ["/path/to/toon-mcp/src/index.js"]
    }
  }
}
```

### As Claude Code Skill

Copy `skills/toon.md` to your Claude Code skills directory:

```bash
cp skills/toon.md ~/.claude/skills/
```

Then use `/toon` in Claude Code.
## Available Tools

### toon_encode
Convert data to TOON format.
Supported formats: JSON, CSV, TSV, XML, HTML tables, YAML
Input: `[{"name":"Alice","age":30},{"name":"Bob","age":25}]`

Output:

```
[name,age]
Alice,30
Bob,25
```

### toon_decode

Convert TOON back to JSON.
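Decoding is the inverse operation: parse the bracketed header into field names, then zip each data row back into an object. Again, this is a simplified sketch, not the package's decoder — note that here all values come back as strings, whereas the real tool would restore JSON types:

```javascript
// Simplified TOON decode sketch (hypothetical helper): split the
// [key1,key2,...] header into field names, then pair each row's
// values with them. Type restoration is deliberately omitted.
function decodeToonSketch(toon) {
  const [header, ...rows] = toon.trim().split('\n');
  const keys = header.slice(1, -1).split(',');
  return rows.map((row) => {
    const values = row.split(',');
    return Object.fromEntries(keys.map((key, i) => [key, values[i]]));
  });
}

console.log(decodeToonSketch('[name,age]\nAlice,30\nBob,25'));
// [ { name: 'Alice', age: '30' }, { name: 'Bob', age: '25' } ]
```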
### toon_analyze
Analyze data and show potential token/cost savings.
### toon_optimize_prompt
Find data sections in a prompt and convert them to TOON automatically.
## Usage Examples

### In Claude Desktop/Code (with MCP)
Just ask Claude to use the tools:
- "Encode this JSON to TOON: [...]"
- "Analyze how much I'd save converting this data to TOON"
- "Optimize this prompt for token efficiency"
Programmatic (Node.js)
const { ToonEncoder } = require('./src/toon-encoder.js');
// Encode
const data = [
{ id: 1, name: 'Test', price: 99.99 },
{ id: 2, name: 'Test 2', price: 149.99 },
];
const toon = ToonEncoder.encode(data);
// Get stats
const json = JSON.stringify(data);
const stats = ToonEncoder.getStats(json, toon);
console.log(stats.savings.percent); // "64.5%"
// Decode
const decoded = ToonEncoder.decode(toon);Benchmarks
Tested with OpenAI GPT-4o-mini:
| Dataset Size | JSON Tokens | TOON Tokens | Savings |
|--------------|-------------|-------------|---------|
| 5 items      | 383         | 192         | 49.9%   |
| 20 items     | 1,394       | 530         | 62%     |
| 50 items     | 3,412       | 1,204       | 64.7%   |
| 100 items    | 6,800       | 2,400       | ~65%    |
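The savings column is plain arithmetic over the measured token counts: (JSON tokens − TOON tokens) / JSON tokens. For example:

```javascript
// Savings percentage as reported in the benchmark table:
// tokens saved divided by the original JSON token count.
function savingsPercent(jsonTokens, toonTokens) {
  return (((jsonTokens - toonTokens) / jsonTokens) * 100).toFixed(1) + '%';
}

console.log(savingsPercent(383, 192));   // "49.9%" (5-item benchmark)
console.log(savingsPercent(3412, 1204)); // "64.7%" (50-item benchmark)
```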
## Cost Savings at Scale

| Volume       | GPT-4o-mini  | GPT-4o        | Claude Sonnet |
|--------------|--------------|---------------|---------------|
| 1M requests  | $489 saved   | $8,158 saved  | $9,789 saved  |
| 10M requests | $4,890 saved | $81,580 saved | $97,890 saved |
## When to Use TOON
✅ Best for:
- Arrays of objects with same structure (tables, lists, records)
- API responses, database results
- Large datasets sent to LLMs
- Cost optimization at scale
⚠️ Less effective for:
- Deeply nested, non-uniform data
- Small payloads (<5 items)
- Data with many unique field structures
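The guidelines above boil down to a quick uniformity check. The heuristic below is an illustration of this advice, not part of the package's API — the item threshold is an assumption:

```javascript
// Heuristic mirroring the guidelines above (not a package API):
// TOON tends to pay off when the array has enough items and every
// object shares exactly the same set of keys.
function toonLikelyHelps(data, minItems = 5) {
  if (!Array.isArray(data) || data.length < minItems) return false;
  const shape = JSON.stringify(Object.keys(data[0]).sort());
  return data.every((row) => JSON.stringify(Object.keys(row).sort()) === shape);
}

const uniform = Array.from({ length: 10 }, (_, i) => ({ id: i, name: `Item ${i}` }));
console.log(toonLikelyHelps(uniform));              // true  (large, uniform)
console.log(toonLikelyHelps([{ a: 1 }, { b: 2 }])); // false (small, mixed keys)
```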
## License
MIT
