@sriinnu/pakt

v0.9.0

PAKT compression engine — lossless-first L1-L3 compression with opt-in L4 semantic packing for LLM token optimization


PAKT (Pipe-Aligned Kompact Text) converts JSON, YAML, CSV, and markdown documents with embedded structured blocks into a compact pipe-delimited format that reduces LLM token counts by 30-50% on structured payloads while preserving data fidelity across its core L1-L3 layers. An optional budgeted L4 layer trades fidelity for additional savings only when explicitly requested.

JSON (28 tokens):

{
  "users": [
    { "name": "Alice", "role": "dev" },
    { "name": "Bob", "role": "dev" }
  ]
}

PAKT (15 tokens):

@from json
@dict
  $a: dev
@end

users [2]{name|role}:
  Alice|$a
  Bob|$a

| Input Type | Savings | Round-trip |
|---|---|---|
| JSON 10 records | 27% | Lossless |
| JSON 50 records | 33% | Lossless |
| Log lines (duplicates) | 57% | Lossless |
| Repetitive text | 38-69% | Lossless |
| Normal prose (no repetition) | 0% (passthrough) | Safe |


Install

npm install @sriinnu/pakt

Requires Node 18+.


Quick Usage

Compress and decompress

import { compress, decompress } from '@sriinnu/pakt';

const result = compress('{"name":"Alice","age":30,"role":"engineer"}');
console.log(result.compressed);          // PAKT-encoded string
console.log(`Saved ${result.savings.totalPercent}% tokens`);

const original = decompress(result.compressed, 'json');
console.log(original.text);             // original JSON restored

Mixed content (markdown with embedded data blocks)

import { compressMixed } from '@sriinnu/pakt';

const markdown = '# Report\n```json\n{"users":[{"name":"Alice"}]}\n```';
const result = compressMixed(markdown);
console.log(result.compressed);         // prose untouched, structured blocks compressed

Detect format + count tokens

import { detect, countTokens } from '@sriinnu/pakt';

const fmt = detect('name: Alice\nage: 30');
console.log(fmt.format);     // 'yaml'

const n = countTokens('{"hello":"world"}', 'gpt-4o');
console.log(n);              // token count

Supported tokenizers

PAKT counts tokens — and runs L3's merge-savings gate — using the tokenizer family that matches the target model. Use getTokenizerFamily(model) to align downstream consumers (playground, desktop, extension) with the same encoding the core uses.

| Target model | Family | Notes |
|---|---|---|
| gpt-4o, gpt-4o-mini, o1, o3, o4 | o200k_base | Exact. |
| gpt-4, gpt-4-turbo, gpt-3.5-turbo | cl100k_base | Exact. |
| claude-sonnet, claude-opus, claude-haiku | cl100k_base | Approximate; see caveat below. |
| llama-3, llama-3.1 | cl100k_base | Approximate; see caveat below. |
| Unknown model strings | cl100k_base | Fallback; exact: false in the info. |

Exact Claude counts require Anthropic's tokenizer, which is not publicly available. Llama ships a 128k SentencePiece vocab that gpt-tokenizer does not bundle. For both, PAKT uses cl100k_base as the closest publicly-available BPE — expect small drift from the provider's own counts. Register a custom TokenCounter via registerTokenCounter(...) if you need exact counts for those families.

import { getTokenizerFamily, getTokenizerFamilyInfo } from '@sriinnu/pakt';

getTokenizerFamily('gpt-4o');            // 'o200k_base'
getTokenizerFamily('claude-opus');       // 'cl100k_base'

const info = getTokenizerFamilyInfo('claude-sonnet');
if (!info.exact) console.warn(info.approximationNote);

Compressibility scoring

import { estimateCompressibility } from '@sriinnu/pakt';

const score = estimateCompressibility(myJson);
console.log(score.score);    // 0.72
console.log(score.label);    // 'high'
console.log(score.profile);  // 'tokenizer' — recommended layer profile

LLM round-trip: detect PAKT on the way back

import { PAKT_SYSTEM_PROMPT, compress, interpretModelOutput } from '@sriinnu/pakt';

const packed = compress(largeJsonPayload).compressed;

// send `${PAKT_SYSTEM_PROMPT}` + `packed` to your model
const modelReply = await runModel(packed);

const resolved = interpretModelOutput(modelReply, { outputFormat: 'json' });

if (resolved.action === 'decompressed' || resolved.action === 'repaired-decompressed') {
  console.log(resolved.data); // structured JSON object
} else {
  console.log(resolved.text); // raw model response
}

Opt-in L4 semantic compression

import { compress } from '@sriinnu/pakt';

const result = compress(largeJsonPayload, {
  fromFormat: 'json',
  layers: { semantic: true },
  semanticBudget: 120,
});

console.log(result.reversible); // false
console.log(result.compressed); // includes @compress semantic + @warning lossy

MCP Server

Add 5 lines to your MCP config. This is the agent integration path for stdio-based MCP hosts:

{
  "mcpServers": {
    "pakt": {
      "command": "npx",
      "args": ["-y", "@sriinnu/pakt", "serve", "--stdio"]
    }
  }
}

Your AI agent gets pakt_compress, pakt_auto, pakt_inspect, and pakt_stats automatically. The tools accept optional semanticBudget for opt-in lossy L4, and pakt_inspect helps agents decide whether compression is worth it before they call it.
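Under the hood, an MCP host invokes these tools via standard JSON-RPC tools/call requests. A hypothetical pakt_compress invocation might look like the following (the text argument name is an assumption; only semanticBudget is documented above):

```
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "pakt_compress",
    "arguments": {
      "text": "{\"users\":[{\"name\":\"Alice\"}]}",
      "semanticBudget": 120
    }
  }
}
```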

If you are embedding PAKT into your own MCP host, register the tools directly:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { registerPaktTools } from '@sriinnu/pakt';

const server = new McpServer({ name: 'my-agent', version: '1.0.0' });
registerPaktTools(server);

CLI

npm install -g @sriinnu/pakt

pakt compress data.json                       # compress to PAKT
pakt compress data.json --semantic-budget 120  # opt into lossy L4
pakt decompress data.pakt --to json           # decompress
cat data.json | pakt auto                     # auto-detect + compress or decompress
pakt inspect data.json --model gpt-4o         # inspect before packing
pakt savings data.json --model gpt-4o         # token savings report
pakt stats                                    # aggregate session stats
pakt stats --today                            # filter to today
pakt serve --stdio                            # start MCP server

Key Features

  • 4-layer compression pipeline -- Structural (L1), Dictionary (L2), Tokenizer-Aware (L3), and opt-in budgeted Semantic (L4)
  • Delta encoding -- Values repeated from the adjacent row are replaced with ~ sentinels, plus +N / -N numeric deltas for monotonic columns (ids, timestamps, counters), saving 20-40% on repetitive tabular data
  • Cache-stable dictionary -- @dict aliases are assigned in lex order of their expansions so related payloads produce the same block, preserving prompt-cache hits on Anthropic and OpenAI caching APIs
  • Tokenizer-family aware -- getTokenizerFamily(model) / countTokens(text, model) align the L3 merge-savings gate and downstream token counts with the target model (o200k_base, cl100k_base, fallback documented for Claude / Llama)
  • 10 MB input cap -- compress() throws a typed error for oversize inputs with an allocation-free byte counter so the check does not materialise the input
  • Auto context compression -- Content-addressed dedup, text line dedup, word n-gram dictionary, whitespace normalization
  • Compressibility scoring -- estimateCompressibility() returns a 0-1 score and recommended profile before you compress
  • Session stats -- pakt_stats MCP tool and pakt stats CLI for real-time token savings tracking
  • Multi-format support -- JSON, YAML, CSV, Markdown, Plain Text with auto-detection
  • Lossless round-tripping -- L1-L3 preserve data fidelity; L4 is explicitly lossy. Property-based fuzzers run on every build
  • MCP server + embeddable tools -- pakt serve --stdio or registerPaktTools() for agent workflows
  • Small runtime -- gpt-tokenizer, MCP SDK, and zod
  • Full TypeScript support -- All types exported, dual ESM/CJS builds
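The delta-encoding idea above can be sketched in a few lines. This is an illustration of the technique, not PAKT's actual wire format; only the ~ sentinel and +N / -N delta notation are taken from the feature description:

```typescript
type Row = (string | number)[];

// Encode each row against the previous one: repeated cells become "~",
// numeric cells become signed deltas, everything else passes through.
function deltaEncode(rows: Row[]): string[][] {
  let prev: Row | null = null;
  const out: string[][] = [];
  for (const row of rows) {
    const encoded = row.map((cell, i) => {
      if (prev !== null) {
        const prevCell = prev[i];
        if (cell === prevCell) return '~'; // value repeated from row above
        if (typeof cell === 'number' && typeof prevCell === 'number') {
          const d = cell - prevCell; // monotonic ids/timestamps compress well
          return d >= 0 ? `+${d}` : `${d}`;
        }
      }
      return String(cell);
    });
    out.push(encoded);
    prev = row;
  }
  return out;
}

const rows: Row[] = [
  [1001, 'alice', 'dev'],
  [1002, 'bob', 'dev'],
  [1003, 'bob', 'ops'],
];
console.log(deltaEncode(rows).map((r) => r.join('|')).join('\n'));
// 1001|alice|dev
// +1|bob|~
// +1|~|ops
```

Runs of identical or incrementing cells collapse to one or two characters each, which is where the quoted 20-40% savings on repetitive tabular data come from.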

Part of ClipForge

This is the core library inside the ClipForge monorepo. The desktop tray app, browser extension, and playground live alongside it as separate product surfaces.


Documentation


License

MIT -- Srinivas Pendela