# @sriinnu/pakt

v0.9.0
PAKT compression engine — lossless-first L1-L3 compression with opt-in L4 semantic packing for LLM token optimization
PAKT (Pipe-Aligned Kompact Text) converts JSON, YAML, CSV, and markdown documents with embedded structured blocks into a compact pipe-delimited format that reduces LLM token counts by 30-50% on structured payloads while preserving data fidelity across its core L1-L3 layers. An optional budgeted L4 layer trades fidelity for additional savings only when explicitly requested.
**JSON (28 tokens)**

```json
{
  "users": [
    { "name": "Alice", "role": "dev" },
    { "name": "Bob", "role": "dev" }
  ]
}
```

**PAKT (15 tokens)**

```
@from json
@dict
$a: dev
@end

users [2]{name|role}:
Alice|$a
Bob|$a
```

| Input Type | Savings | Round-trip |
| --- | --- | --- |
| JSON, 10 records | 27% | Lossless |
| JSON, 50 records | 33% | Lossless |
| Log lines (duplicates) | 57% | Lossless |
| Repetitive text | 38-69% | Lossless |
| Normal prose (no repetition) | 0% (passthrough) | Safe |
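The `@dict` block aliases repeated values behind short `$` tokens. A minimal TypeScript sketch of that idea follows; it is illustrative only, not PAKT's implementation (the real layer is tokenizer-aware and only aliases when doing so saves tokens):

```typescript
// Replace repeated values with short $-aliases, assigning aliases in
// lexicographic order of their expansions so identical payloads always
// produce the same dictionary block (illustrative sketch only; breaks
// past 26 aliases).
function buildDict(values: string[]): Map<string, string> {
  const counts = new Map<string, number>();
  for (const v of values) counts.set(v, (counts.get(v) ?? 0) + 1);
  // Only alias values that repeat; sort expansions lexicographically.
  const repeated = [...counts.keys()].filter((v) => counts.get(v)! > 1).sort();
  const dict = new Map<string, string>();
  repeated.forEach((v, i) => dict.set(v, `$${String.fromCharCode(97 + i)}`));
  return dict;
}

function encode(values: string[]): { header: string[]; rows: string[] } {
  const dict = buildDict(values);
  const header = [...dict.entries()].map(([v, alias]) => `${alias}: ${v}`);
  const rows = values.map((v) => dict.get(v) ?? v);
  return { header, rows };
}
```

For `encode(['dev', 'dev', 'ops'])` this produces the header `['$a: dev']` and rows `['$a', '$a', 'ops']`, mirroring the shape of the example above.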
## Install
```bash
npm install @sriinnu/pakt
```

Requires Node 18+.
## Quick Usage

### Compress and decompress
```typescript
import { compress, decompress } from '@sriinnu/pakt';

const result = compress('{"name":"Alice","age":30,"role":"engineer"}');
console.log(result.compressed); // PAKT-encoded string
console.log(`Saved ${result.savings.totalPercent}% tokens`);

const original = decompress(result.compressed, 'json');
console.log(original.text); // original JSON restored
```

### Mixed content (markdown with embedded data blocks)
````typescript
import { compressMixed } from '@sriinnu/pakt';

const markdown = '# Report\n```json\n{"users":[{"name":"Alice"}]}\n```';
const result = compressMixed(markdown);
console.log(result.compressed); // prose untouched, structured blocks compressed
````

### Detect format + count tokens
```typescript
import { detect, countTokens } from '@sriinnu/pakt';

const fmt = detect('name: Alice\nage: 30');
console.log(fmt.format); // 'yaml'

const n = countTokens('{"hello":"world"}', 'gpt-4o');
console.log(n); // token count
```

### Supported tokenizers
PAKT counts tokens (and runs L3's merge-savings gate) using the tokenizer family that matches the target model. Use `getTokenizerFamily(model)` to align downstream consumers (playground, desktop, extension) with the same encoding the core uses.
| Target model | Family | Notes |
| --- | --- | --- |
| `gpt-4o`, `gpt-4o-mini`, `o1`, `o3`, `o4` | `o200k_base` | Exact. |
| `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo` | `cl100k_base` | Exact. |
| `claude-sonnet`, `claude-opus`, `claude-haiku` | `cl100k_base` | Approximate; see caveat below. |
| `llama-3`, `llama-3.1` | `cl100k_base` | Approximate; see caveat below. |
| Unknown model strings | `cl100k_base` | Fallback; `exact: false` in the info. |
Exact Claude counts require Anthropic's tokenizer, which is not publicly available, and Llama ships a 128k SentencePiece vocab that `gpt-tokenizer` does not bundle. For both, PAKT uses `cl100k_base` as the closest publicly available BPE, so expect small drift from the provider's own counts. Register a custom `TokenCounter` via `registerTokenCounter(...)` if you need exact counts for those families.
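The custom-counter hook follows a familiar registry pattern, sketched below as a self-contained illustration. The `TokenCounter` shape and registration signature here are assumptions for the sake of the sketch; check the package's exported types for the real interface.

```typescript
// Illustrative registry pattern: map a tokenizer family to a counting
// function. The actual PAKT API shape may differ (hypothetical sketch).
type TokenCounter = (text: string) => number;

const counters = new Map<string, TokenCounter>();

function registerTokenCounter(family: string, counter: TokenCounter): void {
  counters.set(family, counter);
}

function countWith(family: string, text: string): number {
  const counter = counters.get(family);
  if (!counter) throw new Error(`no counter registered for ${family}`);
  return counter(text);
}

// A crude whitespace-based stand-in; a real counter would wrap the
// provider's own tokenizer.
registerTokenCounter('claude', (text) => text.split(/\s+/).filter(Boolean).length);
```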
```typescript
import { getTokenizerFamily, getTokenizerFamilyInfo } from '@sriinnu/pakt';

getTokenizerFamily('gpt-4o');      // 'o200k_base'
getTokenizerFamily('claude-opus'); // 'cl100k_base'

const info = getTokenizerFamilyInfo('claude-sonnet');
if (!info.exact) console.warn(info.approximationNote);
```

### Compressibility scoring
```typescript
import { estimateCompressibility } from '@sriinnu/pakt';

const score = estimateCompressibility(myJson);
console.log(score.score);   // 0.72
console.log(score.label);   // 'high'
console.log(score.profile); // 'tokenizer' (recommended layer profile)
```

### LLM round-trip: detect PAKT on the way back
```typescript
import { PAKT_SYSTEM_PROMPT, compress, interpretModelOutput } from '@sriinnu/pakt';

const packed = compress(largeJsonPayload).compressed;

// send `${PAKT_SYSTEM_PROMPT}` + `packed` to your model
const modelReply = await runModel(packed);

const resolved = interpretModelOutput(modelReply, { outputFormat: 'json' });
if (resolved.action === 'decompressed' || resolved.action === 'repaired-decompressed') {
  console.log(resolved.data); // structured JSON object
} else {
  console.log(resolved.text); // raw model response
}
```

### Opt-in L4 semantic compression
```typescript
import { compress } from '@sriinnu/pakt';

const result = compress(largeJsonPayload, {
  fromFormat: 'json',
  layers: { semantic: true },
  semanticBudget: 120,
});

console.log(result.reversible); // false
console.log(result.compressed); // includes @compress semantic + @warning lossy
```

## MCP Server
Add 5 lines to your MCP config. This is the agent integration path for stdio-based MCP hosts:
```json
{
  "mcpServers": {
    "pakt": {
      "command": "npx",
      "args": ["-y", "@sriinnu/pakt", "serve", "--stdio"]
    }
  }
}
```

Your AI agent gets `pakt_compress`, `pakt_auto`, `pakt_inspect`, and `pakt_stats` automatically. The tools accept an optional `semanticBudget` for opt-in lossy L4, and `pakt_inspect` helps agents decide whether compression is worth it before they call it.
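For orientation, an MCP host invokes these tools with a standard `tools/call` request. The argument names below (`text` in particular) are illustrative guesses, not the documented schema; a host should read the actual input schema from the server's `tools/list` response.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "pakt_compress",
    "arguments": {
      "text": "{\"users\":[{\"name\":\"Alice\",\"role\":\"dev\"}]}",
      "semanticBudget": 120
    }
  }
}
```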
If you are embedding PAKT into your own MCP host, register the tools directly:
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { registerPaktTools } from '@sriinnu/pakt';

const server = new McpServer({ name: 'my-agent', version: '1.0.0' });
registerPaktTools(server);
```

## CLI
```bash
npm install -g @sriinnu/pakt

pakt compress data.json                        # compress to PAKT
pakt compress data.json --semantic-budget 120  # opt into lossy L4
pakt decompress data.pakt --to json            # decompress
cat data.json | pakt auto                      # auto-detect + compress or decompress
pakt inspect data.json --model gpt-4o          # inspect before packing
pakt savings data.json --model gpt-4o          # token savings report
pakt stats                                     # aggregate session stats
pakt stats --today                             # filter to today
pakt serve --stdio                             # start MCP server
```

## Key Features
- 4-layer compression pipeline -- Structural (L1), Dictionary (L2), Tokenizer-Aware (L3), and opt-in budgeted Semantic (L4)
- Delta encoding -- adjacent rows sharing values are replaced with `~` sentinels, plus `+N`/`-N` numeric deltas for monotonic columns (ids, timestamps, counters), saving 20-40% on repetitive tabular data
- Cache-stable dictionary -- `@dict` aliases are assigned in lex order of their expansions so related payloads produce the same block, preserving prompt-cache hits on Anthropic and OpenAI caching APIs
- Tokenizer-family aware -- `getTokenizerFamily(model)` / `countTokens(text, model)` align the L3 merge-savings gate and downstream token counts with the target model (`o200k_base`, `cl100k_base`; fallback documented for Claude / Llama)
- 10 MB input cap -- `compress()` throws a typed error for oversize inputs, with an allocation-free byte counter so the check does not materialise the input
- Auto context compression -- content-addressed dedup, text line dedup, word n-gram dictionary, whitespace normalization
- Compressibility scoring -- `estimateCompressibility()` returns a 0-1 score and recommended profile before you compress
- Session stats -- `pakt_stats` MCP tool and `pakt stats` CLI for real-time token savings tracking
- Multi-format support -- JSON, YAML, CSV, Markdown, and plain text with auto-detection
- Lossless round-tripping -- L1-L3 preserve data fidelity; L4 is explicitly lossy. Property-based fuzzers run on every build
- MCP server + embeddable tools -- `pakt serve --stdio` or `registerPaktTools()` for agent workflows
- Small runtime -- `gpt-tokenizer`, the MCP SDK, and `zod`
- Full TypeScript support -- all types exported, dual ESM/CJS builds
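The delta-encoding feature above can be sketched in a few lines of TypeScript. This is an illustrative reimplementation of the technique as described, not PAKT's actual encoder; the `~` sentinel and `+N`/`-N` notation follow the feature list.

```typescript
// Encode rows column-by-column against the previous row:
// - repeat of the previous value       -> "~"
// - numeric change from previous value -> "+N" / "-N"
// - anything else                      -> the literal value
function deltaEncode(rows: string[][]): string[][] {
  const out: string[][] = [];
  let prev: string[] | null = null;
  for (const row of rows) {
    if (prev === null) {
      out.push([...row]); // first row is emitted verbatim
    } else {
      const before = prev;
      out.push(row.map((cell, i) => {
        if (cell === before[i]) return '~';
        const a = Number(before[i]);
        const b = Number(cell);
        if (before[i].trim() !== '' && cell.trim() !== '' &&
            Number.isFinite(a) && Number.isFinite(b)) {
          const d = b - a;
          return d >= 0 ? `+${d}` : `${d}`;
        }
        return cell;
      }));
    }
    prev = row;
  }
  return out;
}
```

For rows like `[['100','alice','dev'], ['101','bob','dev']]` this yields `[['100','alice','dev'], ['+1','bob','~']]`: the monotonic id column becomes a delta and the repeated role collapses to the sentinel.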
## Part of ClipForge
This is the core library inside the ClipForge monorepo. The desktop tray app, browser extension, and playground live alongside it as separate product surfaces.
## Documentation

## License

MIT -- Srinivas Pendela
