
zstdify

v1.4.0

Pure TypeScript zstd compression library

Pure JavaScript/TypeScript zstd compression/decompression library. No native dependencies; works in Node.js and browsers.

Features

  • Pure JS/TS zstd in Node and browsers: No native dependencies, portable by default.
  • Decoder support across real-world zstd frames:
    • Raw, RLE, and compressed blocks (including Huffman/FSE-based paths).
    • Single and concatenated frames, plus skippable frame support.
    • Content checksum validation.
    • Dictionary-aware decompression, including dictionary ID checks.
  • Encoder support with adaptive strategy:
    • Raw blocks, RLE blocks, and compressed blocks.
    • Compression-level driven behavior with automatic raw fallback when compressed output is not smaller.
    • Optional frame content checksums.
    • Optional dictionary-aware frame headers (dictID) for dictionary workflows.
  • Dictionary generation:
    • Pure TypeScript dictionary training from sample payloads.
    • Zstd-inspired training options (fastcover/cover/legacy style knobs).
  • Tree-shaken bundle size (Rollup + Terser, compressed):
    • zstdify/compress: ~5.27 KiB gzip / ~4.75 KiB brotli.
    • zstdify/decompress: ~8.63 KiB gzip / ~7.67 KiB brotli.
  • Interop-focused: zstdify output is decoded by the official zstd CLI and by the zstddec npm package; zstd CLI output is decoded by zstdify.
  • Extensively tested:
    • Round-trip and property-based tests.
    • Conformance fixtures from known-good archives generated by the official zstd tool.
    • Differential tests against the official zstd CLI (both directions).
    • Corruption, boundary, and compression-regression coverage.

Usage

import { compress, decompress } from 'zstdify';

const data = new TextEncoder().encode('hello world');
const compressed = compress(data);
const restored = decompress(compressed);
// restored equals data

Tree-shaking friendly imports

The default import path remains unchanged:

import { compress, decompress, generateDictionary } from 'zstdify';

For stricter bundling control, you can import feature-specific subpaths:

import { compress } from 'zstdify/compress';
import { decompress } from 'zstdify/decompress';
import { generateDictionary } from 'zstdify/dictionary';

API

  • compress(input: Uint8Array, options?: { level?: number; checksum?: boolean; dictionary?: Uint8Array | { bytes: Uint8Array; id?: number }; noDictId?: boolean }): Uint8Array
  • decompress(input: Uint8Array, options?: { maxSize?: number; dictionary?: Uint8Array | { bytes: Uint8Array; id?: number }; validateChecksum?: boolean }): Uint8Array
  • generateDictionary(samples: Uint8Array[], options?: { maxDictSize?: number; dictId?: number; algorithm?: "fastcover" | "cover" | "legacy"; k?: number; d?: number; steps?: number; split?: number; f?: number; accel?: number; selectivity?: number; shrink?: boolean | number }): Uint8Array

Dictionary generation outputs a raw-content dictionary. If you want a specific dictID written into compressed frames, pass it to compress() via dictionary: { bytes, id }.

Dictionary workflow example

import { compress, decompress, generateDictionary } from 'zstdify';

const encoder = new TextEncoder();
const samples = [
  encoder.encode('alpha beta gamma delta'),
  encoder.encode('header vertex texture normal index'),
  encoder.encode('offset match literal sequence table'),
];
const dictionary = generateDictionary(samples, { maxDictSize: 2048, algorithm: 'fastcover' });

const payload = encoder.encode('header vertex texture offset match literal');
const compressed = compress(payload, { dictionary: { bytes: dictionary, id: 42 } });
const restored = decompress(compressed, { dictionary: { bytes: dictionary, id: 42 } });

CLI Tool

The zstdify-cli package is a command-line tool for compressing and decompressing files with zstd. Install from npm:

pnpm add -g zstdify-cli
zstdify compress input.txt output.zst
zstdify extract output.zst restored.txt

See packages/cli/README.md for full CLI documentation.

Development

pnpm install
pnpm build
pnpm --filter zstdify-tests run bench:fetch-data
pnpm test
pnpm check

Divergence debug tool

When a Node zstd -> zstdify decode mismatch appears, use the divergence tool to quickly locate where decoded output first diverges and which decode paths are involved (literals mode, sequence modes, repeat-offset candidates, and nearby block context).

Run it from zstdify-tests:

pnpm --filter zstdify-tests run debug:node-zstd-divergence -- --payload-id corpus-linux-kernel-tar --pass-level 3 --fail-level 5

You can also enable debug output directly in the interop test via ZSTDIFY_INTEROP_DEBUG=1 (with optional payload/level env vars) to print the same high-level divergence report during test runs.

How we validate

All of the following run as part of the test suite (pnpm test / pnpm vitest):

  • Round-trip: decompress(compress(x)) === x for a variety of payloads and levels, plus property-based tests with fast-check.
  • Conformance fixtures: Pre-generated .zst files from the official zstd CLI (legacy fixtures and a committed decodecorpus-style corpus with manifest); we decompress and compare. See packages/zstdify-tests/fixtures/README.md.
  • Differential (zstd ↔ zstdify): We test zstd compress → zstdify decompress and zstdify compress → zstd decompress across payloads and levels.
  • Differential (Node zstd ↔ zstdify): We test Node node:zlib zstd compress → zstdify decompress and zstdify compress → Node zstd decompress across synthetic plus local corpus-backed payloads.
  • zstddec: zstdify compress → zstddec decode at all compression levels (0–9) and with content checksum.
  • Node zstd ratio: Compare compression ratio of Node's built-in zstd vs zstdify on representative synthetic plus local corpus-backed payloads and levels; assert zstdify is within ~10% of Node's compressed size.
  • Corruption: Truncation, checksum mismatch, invalid header bits, and related error paths.
  • Compression regression: Compressed sizes for fixed payloads are checked against golden values (ratio stability).
  • Decompress robustness: Each corpus fixture is decompressed in its own test (one test per file), so the suite tracks decompress behavior per input. See upstream zstd TESTING.md for comparison.

Benchmark: zstdify vs Node built-in zstd

Throughput and compression ratio are compared against Node’s built-in node:zlib zstd on a downloaded, local real-world corpus at compression level 6 (one entry per corpus file).

Throughput (MB/s)
| Payload | Category | Level | Compress zstdify | Compress Node | Decompress zstdify | Decompress Node | Decompress fzstd | Decompress zstddec |
|---|---|---:|---:|---:|---:|---:|---:|---:|
| war-and-peace-txt | text | 6 | 1.52 | 102.63 | 150.75 | 1019.58 | 256.78 | 922.89 |
| shakespeare-complete-txt | text | 6 | 1.35 | 96.08 | 142.93 | 970.29 | 240.50 | 887.62 |
| enwik8 | text | 6 | 1.70 | 120.36 | 149.69 | 1112.28 | 247.97 | 953.65 |
| linux-kernel-tar | archive | 6 | 2.72 | 177.01 | 232.84 | 1749.27 | 337.85 | 1448.23 |
| apollo17-flightplan-pdf | document | 6 | 5.18 | 272.59 | 444.49 | 2968.65 | 497.71 | 2417.05 |

Compression ratio (compressed/original)

| Payload | Category | Level | zstdify | Node |
|---|---|---:|---:|---:|
| war-and-peace-txt | text | 6 | 0.4002 | 0.3280 |
| shakespeare-complete-txt | text | 6 | 0.4171 | 0.3480 |
| enwik8 | text | 6 | 0.3724 | 0.3248 |
| linux-kernel-tar | archive | 6 | 0.2259 | 0.1995 |
| apollo17-flightplan-pdf | document | 6 | 0.1315 | 0.1176 |

Before benchmarking, fetch local corpus files (downloaded and stored locally, not committed):

pnpm --filter zstdify-tests run bench:fetch-data

To regenerate the chart and tables (fetch corpus if needed, run benchmarks, render SVG):

pnpm --filter zstdify-tests run bench:update

Bundle size benchmark (Rollup)

| Target | Raw | Gzip | Brotli |
|---|---:|---:|---:|
| zstdify/compress | 28.82 KiB | 9.78 KiB | 8.83 KiB |
| zstdify/decompress | 35.13 KiB | 10.66 KiB | 9.41 KiB |
| zstddec decoder + wasm | 127.37 KiB | 49.69 KiB | 40.66 KiB |

To regenerate this snapshot:

pnpm --filter zstdify-tests run bench:bundle-size

Publishing

Publish the npm packages (library first, then CLI so it gets the correct zstdify version):

pnpm make-release:zstdify
pnpm make-release:cli

Project structure

  • packages/zstdify - Core library
  • packages/zstdify-tests - Integration tests
  • packages/cli - CLI tool (zstdify-cli on npm)
  • packages/cli-tests - Tests of the CLI tool

Acknowledgements

This project is made possible by the original zstd project by Meta and its contributors. The monorepo, project, and CLI structure were bootstrapped from hdrify, which made this project much easier to build. Many JavaScript optimization strategies were inspired by fzstd. We use simple-zstd for validation against the zstd CLI tool.

License

MIT

Author

Ben Houston, Sponsored by Land of Assets