@npiesco/toonify
v1.1.7
Published
<div align="center"> <img src="toonify-logo.png" alt="TOONify Logo" width="200"/> <h1>TOONify</h1> <p><strong>High-performance JSON ↔ TOON converter built with Rust</strong></p>
Readme
What is TOONify?
TOON
ify converts between JSON and TOON (Token-Oriented Object Notation), a compact data format that reduces LLM token usage by 30-60%. Use it to cut AI API costs, optimize prompts, and speed up data pipelines.
Example:
// JSON (25 tokens)
{
"users": [
{
"id": 1,
"name": "Alice",
"role": "admin"
}
]
}# TOON (3 tokens - 88% reduction!)
users[1]{id,name,role}:
1,Alice,adminWhy TOON?
- Cut AI Costs: 30-60% fewer tokens = 30-60% lower API bills (GPT-4, Claude, etc.)
- Blazing Fast: Rust-powered with microsecond conversions
- Lossless: Perfect bidirectional JSON ↔ TOON roundtrips
- Multi-Platform: Python, TypeScript/WASM, VS Code extension
- Production Ready: REST API, gRPC, rate limiting, caching
Quick Start
Python
pip install toonifypyNote: Install as
toonifypy, import astoonify
from toonify import json_to_toon, toon_to_json, CachedConverter
import json
# Basic conversion
data = {"users": [{"id": 1, "name": "Alice", "role": "admin"}]}
toon = json_to_toon(json.dumps(data))
print(toon)
# Output: users[1]{id,name,role}:
# 1,Alice,admin
# High-performance cached converter (330x faster on hits!)
converter = CachedConverter(
cache_size=100,
cache_ttl_secs=3600,
persistent_path="./cache.db"
)
toon = converter.json_to_toon(json.dumps(data)) # <100ns on cache hit!See Python Documentation for full API reference.
TypeScript/WASM
npm install @npiesco/toonifyimport { json_to_toon, toon_to_json, WasmCachedConverter } from '@npiesco/toonify';
// Basic conversion
const json = JSON.stringify({ users: [{ id: 1, name: "Bob" }] });
const toon = json_to_toon(json);
console.log(toon);
// Cached converter (500x faster on hits!)
const converter = new WasmCachedConverter(100);
const toon1 = converter.jsonToToon(json); // ~0.5ms
const toon2 = converter.jsonToToon(json); // <1μs (500x faster!)REST API
# Start server
./target/release/toonify serve --cache-size 1000 --rate-limit 100
# Convert JSON to TOON
curl -X POST http://localhost:5000/json-to-toon \
-H "Content-Type: application/json" \
-d '{"data": "{\"users\": [{\"id\": 1, \"name\": \"Alice\"}]}"}'
# Convert TOON to JSON
curl -X POST http://localhost:5000/toon-to-json \
-H "Content-Type: application/json" \
-d '{"data": "users[1]{id,name}:\n1,Alice"}'CLI Tool
# Build from source
cargo build --release
# Convert JSON file to TOON
./target/release/toonify convert data.json --output data.toon
# Convert from stdin
echo '{"users":[{"id":1,"name":"Alice"}]}' | ./target/release/toonify convert -
# Batch convert directory
./target/release/toonify batch --input-dir ./json_files --output-dir ./toon_files --parallel
# Watch directory for changes
./target/release/toonify watch --input-dir ./source --output-dir ./outputVS Code Extension
Install the TOONify extension from the marketplace:
VS Code Marketplace | Open VSX (for Cursor/VSCodium)
Features:
- Convert JSON ↔ TOON with
Cmd+Alt+T/Cmd+Alt+J - Syntax highlighting for
.toonfiles - Intelligent caching (500x speedup on repeated conversions)
- Cache management commands
Installation:
# VS Code / Cursor
code --install-extension NicholasPiesco.toonify
# Or use Cursor URL
cursor:extension/NicholasPiesco.toonifyUse Cases
LLM API Cost Reduction
Before (JSON):
import openai
prompt = {"users": [...]} # 1000 tokens
response = openai.ChatCompletion.create(
model="gpt-4",
messages=[{"role": "user", "content": json.dumps(prompt)}]
)
# Cost: $0.03 per 1K tokens = $0.03After (TOON):
from toonify import json_to_toon
toon_prompt = json_to_toon(json.dumps(prompt)) # 350 tokens (65% reduction!)
response = openai.ChatCompletion.create(
model="gpt-4",
messages=[{"role": "user", "content": toon_prompt}]
)
# Cost: $0.0105 (65% savings = $19.50/month on 1M tokens!)Prompt Engineering Workflows
# Reduce context window usage
system_context = large_json_data # 5000 tokens
toon_context = json_to_toon(system_context) # 1750 tokens
# Fit 3x more examples in your prompt
messages = [
{"role": "system", "content": toon_context},
{"role": "user", "content": user_query}
]Data Pipeline Optimization
# ETL pipeline with TOON compression
cat large_export.json | \
toonify convert - | \
gzip | \
aws s3 cp - s3://bucket/data.toon.gz
# 60% smaller payloads = faster transfers, lower storage costsFeatures
Core Capabilities
- JSON ↔ TOON Conversion: Bidirectional lossless conversion
- REST API (Axum 0.8): HTTP endpoints on port 5000
- gRPC Service (Tonic 0.14): Binary protocol on port 50051
- CLI Tool: Batch processing, watch mode, compression
- Schema Validation: Type checking, regex patterns, ranges, enums
- Compression: Built-in gzip support
Performance
| Operation | Input Size | Time | Throughput | |-----------|------------|------|------------| | JSON → TOON | 1KB | 0.05ms | 20 MB/s | | JSON → TOON | 100KB | 2.5ms | 40 MB/s | | TOON → JSON | 1KB | 0.08ms | 12 MB/s | | TOON → JSON | 100KB | 4ms | 25 MB/s |
Cached Performance (CachedConverter):
| Platform | Cache Hit | Cache Miss | Speedup | |----------|-----------|------------|---------| | Python (Moka) | <100ns | ~1ms | 330x | | Python (Sled) | <1ms | ~1ms | 2x | | WASM (Browser) | <1μs | ~0.5ms | 500x | | VS Code | <1μs | ~0.5ms | 500x |
Caching Architecture
Python (UniFFI):
CachedConverter
├─ Moka: Lock-free concurrent cache (hot path, TinyLFU eviction)
└─ Sled: Persistent embedded database (survives restarts)
TypeScript/WASM:
WasmCachedConverter
└─ HashMap: In-memory browser-compatible cache
VS Code Extension:
Uses WasmCachedConverter (100 entries per session)Advanced Features
- Rate Limiting: Token bucket algorithm (Tower Governor 0.8)
- Distributed Processing: Job queue with async workers
- Schema Validation: Advanced constraints (regex, ranges, formats)
- Batch Processing: Parallel multi-file conversions
- Watch Mode: Real-time file system monitoring
- Python Bindings (UniFFI 0.29): Native performance with zero overhead
- WASM Bindings: Browser and Node.js support
- VS Code Extension: Editor integration with caching
API Reference
Python (UniFFI)
from toonify import json_to_toon, toon_to_json, CachedConverter, ToonError
# Basic conversion
toon = json_to_toon('{"users":[{"id":1}]}')
json_str = toon_to_json(toon)
# Cached converter (Moka + Sled)
converter = CachedConverter(
cache_size=100,
cache_ttl_secs=3600,
persistent_path="./cache.db"
)
toon = converter.json_to_toon(json_str)
print(converter.cache_stats())
converter.clear_cache()TypeScript/WASM
import { json_to_toon, toon_to_json, WasmCachedConverter } from 'toonify';
// Basic conversion
const toon = json_to_toon('{"users":[{"id":1}]}');
const json = toon_to_json(toon);
// Cached converter (HashMap)
const converter = new WasmCachedConverter(100);
const toon = converter.jsonToToon(json);
const stats = JSON.parse(converter.cacheStats());
converter.clearCache();REST API
| Endpoint | Method | Purpose |
|----------|--------|---------|
| / | GET | Health check |
| /json-to-toon | POST | Convert JSON → TOON |
| /toon-to-json | POST | Convert TOON → JSON |
| /jobs/submit | POST | Submit async conversion job |
| /jobs/{id}/status | GET | Check job status |
| /jobs/{id}/result | GET | Retrieve job result |
gRPC Service
service ConverterService {
rpc JsonToToon (ConvertRequest) returns (ConvertResponse);
rpc ToonToJson (ConvertRequest) returns (ConvertResponse);
}Architecture
System Overview
┌────────────────────────────────────────────────────────┐
│ Client Layer │
│ Python (UniFFI) | TypeScript (WASM) | VS Code │
└─────────────────────┬──────────────────────────────────┘
│
┌─────────────────────▼──────────────────────────────────┐
│ API Layer (Axum 0.8) │
│ REST (port 5000) | gRPC (port 50051) │
│ ├─ Rate Limiting (Tower Governor) │
│ ├─ Moka Cache (concurrent, lock-free) │
│ └─ Job Queue (async workers) │
└─────────────────────┬──────────────────────────────────┘
│
┌─────────────────────▼──────────────────────────────────┐
│ Core Conversion Engine │
│ ├─ Parser: nom 7.1 (combinator-based) │
│ ├─ Serializer: Custom TOON format │
│ └─ Validator: Schema checking & constraints │
└────────────────────────────────────────────────────────┘Installation
Prerequisites
- Rust 1.70+ (for building from source)
- Python 3.8+ (for Python bindings)
- Node.js 12+ (for WASM bindings)
Build from Source
git clone https://github.com/npiesco/TOONify.git
cd TOONify
cargo build --release
# Binary at target/release/toonify
./target/release/toonify --helpPython Bindings
# Build library and generate bindings
cargo build --lib --release --features cache,persistent-cache
cargo run --bin uniffi-bindgen -- generate \
--library target/release/libtoonify.dylib \
--language python \
--out-dir bindings/python
# Copy native library
cp target/release/libtoonify.dylib bindings/python/
# Install package
pip install -e bindings/python/WASM Bindings
# Install wasm-pack
cargo install wasm-pack
# Build WASM package
wasm-pack build --target web --out-dir pkg --no-default-features
# Package is ready at pkg/
ls pkg/
# toonify_bg.wasm toonify.js toonify.d.ts package.jsonVS Code Extension
cd vscode-extension
npm install
npm run compile
npm run package
# Install extension
code --install-extension toonify-0.1.0.vsixToken Savings Examples
| Data Type | JSON Tokens | TOON Tokens | Savings | |-----------|-------------|-------------|---------| | User list (3 items) | 45 | 12 | 73% | | Product catalog (10 items) | 180 | 48 | 73% | | API response (nested) | 120 | 35 | 71% | | Time series (100 points) | 600 | 150 | 75% |
Monthly Cost Savings (GPT-4 @ $0.03/1K tokens):
JSON: 1M tokens/month = $30/month
TOON: 350K tokens/month = $10.50/month
SAVINGS: $19.50/month (65% reduction)Tech Stack
Core:
- Rust - Systems programming language
- Axum 0.8 - Web framework
- Tonic 0.14 - gRPC framework
- Nom 7.1 - Parser combinators
- Tokio 1.0 - Async runtime
Bindings:
- UniFFI 0.29 - FFI bindings (Python/Swift/Kotlin)
- wasm-bindgen - WASM bindings
Caching:
Infrastructure:
- Tower Governor 0.8 - Rate limiting
- Serde 1.0 - JSON serialization
- Rayon 1.10 - Data parallelism
License
MIT License
Copyright (c) 2024 TOONify Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
See LICENSE for full terms.
Contributing
Contributions welcome! Please follow these guidelines:
- Write tests first - TDD approach for all features
- Run full suite - Ensure
cargo testpasses - Update docs - Keep README and examples up to date
- Commit messages - Use conventional commits (e.g.,
feat: Add Swift bindings)
Acknowledgments
Built with:
- Rust - Systems programming language
- Axum - Web framework (0.8)
- Tonic - gRPC framework (0.14)
- Nom - Parser combinators
- UniFFI - FFI bindings by Mozilla (0.29)
- Moka - High-performance concurrent cache (0.12)
- Sled - Embedded database (0.34)
- Tower Governor - Rate limiting (0.8)
Questions? Open an issue or check the documentation.
Like this project? Star the repo and share with your AI engineering team!
Need help? Join the discussion or reach out on Twitter.
