# zstdify

Pure JavaScript/TypeScript zstd compression/decompression library. No native dependencies, works in Node.js and browsers.
## Features
- Pure JS/TS zstd in Node and browsers: no native dependencies, portable by default.
- Decoder support across real-world zstd frames:
  - Raw, RLE, and compressed blocks (including Huffman/FSE-based paths).
  - Single and concatenated frames, plus skippable frame support.
  - Content checksum validation.
  - Dictionary-aware decompression, including dictionary ID checks.
- Encoder support with adaptive strategy:
  - Raw blocks, RLE blocks, and compressed blocks.
  - Compression-level-driven behavior with automatic raw fallback when compressed output is not smaller.
  - Optional frame content checksums.
  - Optional dictionary-aware frame headers (`dictID`) for dictionary workflows.
- Dictionary generation:
  - Pure TypeScript dictionary training from sample payloads.
  - Zstd-inspired training options (`fastcover`/`cover`/legacy-style knobs).
- Tree-shaken bundle size (Rollup + Terser, compressed):
  - `zstdify/compress`: ~5.27 KiB gzip / ~4.75 KiB brotli.
  - `zstdify/decompress`: ~8.63 KiB gzip / ~7.67 KiB brotli.
- Interop-focused: `zstdify` output is decoded by the official `zstd` CLI and by the zstddec npm package; `zstd` CLI output is decoded by `zstdify`.
- Extensively tested:
  - Round-trip and property-based tests.
  - Conformance fixtures from known-good archives generated by the official `zstd` tool.
  - Differential tests against the official `zstd` CLI (both directions).
  - Corruption, boundary, and compression-regression coverage.
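For context on the decoder's frame handling (concatenated and skippable frames), the zstd frame format (RFC 8878) prefixes each frame with a 4-byte little-endian magic: `0xFD2FB528` for a zstd frame, or a value in `0x184D2A50`–`0x184D2A5F` for a skippable frame. An illustrative sketch of the distinction, not zstdify's internal code:

```ts
// Illustrative sketch (not zstdify internals): classify the frame that starts
// at `offset` in a possibly-concatenated zstd stream, per RFC 8878.
const ZSTD_MAGIC = 0xfd2fb528;
const SKIPPABLE_MAGIC_MIN = 0x184d2a50;
const SKIPPABLE_MAGIC_MAX = 0x184d2a5f;

type FrameKind = 'zstd' | 'skippable' | 'unknown';

function classifyFrame(data: Uint8Array, offset = 0): FrameKind {
  if (data.length - offset < 4) return 'unknown';
  const view = new DataView(data.buffer, data.byteOffset + offset, 4);
  const magic = view.getUint32(0, true); // zstd magic numbers are little-endian
  if (magic === ZSTD_MAGIC) return 'zstd';
  if (magic >= SKIPPABLE_MAGIC_MIN && magic <= SKIPPABLE_MAGIC_MAX) return 'skippable';
  return 'unknown';
}
```

A skippable frame's magic is followed by a 4-byte little-endian length, which is how a decoder knows how many bytes to skip before the next frame.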
## Usage

```ts
import { compress, decompress } from 'zstdify';

const data = new TextEncoder().encode('hello world');
const compressed = compress(data);
const restored = decompress(compressed);
// restored equals data
```

### Tree-shaking friendly imports

The default import path remains unchanged:

```ts
import { compress, decompress, generateDictionary } from 'zstdify';
```

For stricter bundling control, you can import feature-specific subpaths:

```ts
import { compress } from 'zstdify/compress';
import { decompress } from 'zstdify/decompress';
import { generateDictionary } from 'zstdify/dictionary';
```

## API
```ts
compress(
  input: Uint8Array,
  options?: {
    level?: number;
    checksum?: boolean;
    dictionary?: Uint8Array | { bytes: Uint8Array; id?: number };
    noDictId?: boolean;
  }
): Uint8Array

decompress(
  input: Uint8Array,
  options?: {
    maxSize?: number;
    dictionary?: Uint8Array | { bytes: Uint8Array; id?: number };
    validateChecksum?: boolean;
  }
): Uint8Array

generateDictionary(
  samples: Uint8Array[],
  options?: {
    maxDictSize?: number;
    dictId?: number;
    algorithm?: "fastcover" | "cover" | "legacy";
    k?: number;
    d?: number;
    steps?: number;
    split?: number;
    f?: number;
    accel?: number;
    selectivity?: number;
    shrink?: boolean | number;
  }
): Uint8Array
```
Dictionary generation outputs a raw-content dictionary. If you want a specific `dictID` written into compressed frames, pass it to `compress()` via `dictionary: { bytes, id }`.
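For context on what that `dictID` means on the wire: per RFC 8878, the frame header stores the dictionary ID in 0, 1, 2, or 4 little-endian bytes, selected by the low two bits of the Frame_Header_Descriptor byte. A minimal illustrative parser (not zstdify's implementation) looks like:

```ts
// Sketch: extract the Dictionary_ID from a zstd frame header (RFC 8878).
// Illustrative parsing only, not zstdify's internal code.
const ZSTD_MAGIC = 0xfd2fb528;

function readDictId(frame: Uint8Array): number {
  const view = new DataView(frame.buffer, frame.byteOffset, frame.byteLength);
  if (view.getUint32(0, true) !== ZSTD_MAGIC) throw new Error('not a zstd frame');
  const descriptor = frame[4];
  const didFlag = descriptor & 0b11;           // bits 0-1: Dictionary_ID_flag
  const singleSegment = (descriptor >> 5) & 1; // bit 5: Single_Segment_flag
  const didSize = [0, 1, 2, 4][didFlag];       // field size in bytes
  // Window_Descriptor occupies 1 byte unless the single-segment flag is set.
  const offset = 5 + (singleSegment ? 0 : 1);
  let dictId = 0;
  for (let i = 0; i < didSize; i++) dictId += frame[offset + i] * 2 ** (8 * i); // little-endian
  return dictId; // 0 means no dictionary ID was recorded
}
```

This is why the decoder can perform dictionary ID checks: the ID travels inside the frame header itself.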
### Dictionary workflow example

```ts
import { compress, decompress, generateDictionary } from 'zstdify';

const encoder = new TextEncoder();
const samples = [
  encoder.encode('alpha beta gamma delta'),
  encoder.encode('header vertex texture normal index'),
  encoder.encode('offset match literal sequence table'),
];

const dictionary = generateDictionary(samples, { maxDictSize: 2048, algorithm: 'fastcover' });

const payload = encoder.encode('header vertex texture offset match literal');
const compressed = compress(payload, { dictionary: { bytes: dictionary, id: 42 } });
const restored = decompress(compressed, { dictionary: { bytes: dictionary, id: 42 } });
```

## CLI Tool
The `zstdify-cli` package is a command-line tool for compressing and decompressing files with zstd. Install from npm:

```sh
pnpm add -g zstdify-cli
```

```sh
zstdify compress input.txt output.zst
zstdify extract output.zst restored.txt
```

See `packages/cli/README.md` for full CLI documentation.
## Development

```sh
pnpm install
pnpm build
pnpm --filter zstdify-tests run bench:fetch-data
pnpm test
pnpm check
```

### Divergence debug tool
When a Node `zstd` → zstdify decode mismatch appears, use the divergence tool to quickly locate where the decoded output first diverges and which decode paths are involved (literals mode, sequence modes, repeat-offset candidates, and nearby block context).

Run it from `zstdify-tests`:

```sh
pnpm --filter zstdify-tests run debug:node-zstd-divergence -- --payload-id corpus-linux-kernel-tar --pass-level 3 --fail-level 5
```

You can also enable debug output directly in the interop test via `ZSTDIFY_INTEROP_DEBUG=1` (with optional payload/level env vars) to print the same high-level divergence report during test runs.
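The core of such a report is locating the first byte at which two decodes disagree; the rest (literals mode, sequence modes, offsets) is context gathered around that index. A trivial standalone sketch of the locator (a hypothetical helper, not the tool's actual code):

```ts
// Return the index of the first mismatching byte between an expected and an
// actual decode, or -1 if they are byte-identical (illustrative helper only).
function firstDivergence(expected: Uint8Array, actual: Uint8Array): number {
  const n = Math.min(expected.length, actual.length);
  for (let i = 0; i < n; i++) {
    if (expected[i] !== actual[i]) return i;
  }
  // Same prefix but different lengths: the divergence is at the shorter end.
  return expected.length === actual.length ? -1 : n;
}
```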
## How we validate

All of the following run as part of the test suite (`pnpm test` / `pnpm vitest`):

- Round-trip: `decompress(compress(x)) === x` for a variety of payloads and levels, plus property-based tests with fast-check.
- Conformance fixtures: pre-generated `.zst` files from the official zstd CLI (legacy fixtures and a committed decodecorpus-style corpus with manifest); we decompress and compare. See `packages/zstdify-tests/fixtures/README.md`.
- Differential (zstd ↔ zstdify): we test zstd compress → zstdify decompress and zstdify compress → zstd decompress across payloads and levels.
- Differential (Node zstd ↔ zstdify): we test Node `node:zlib` zstd compress → zstdify decompress and zstdify compress → Node zstd decompress across synthetic plus local corpus-backed payloads.
- zstddec: zstdify compress → zstddec decode at all compression levels (0–9) and with content checksum.
- Node zstd ratio: compare the compression ratio of Node's built-in zstd vs zstdify on representative synthetic plus local corpus-backed payloads and levels; assert zstdify is within ~10% of Node's compressed size.
- Corruption: truncation, checksum mismatch, invalid header bits, and related error paths.
- Compression regression: compressed sizes for fixed payloads are checked against golden values (ratio stability).
- Decompress robustness: each corpus fixture is decompressed in its own test (one test per file), so the suite tracks decompress behavior per input. See upstream zstd TESTING.md for comparison.
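The "within ~10% of Node's compressed size" assertion boils down to a one-line tolerance check; a sketch under that assumption (a hypothetical helper, not the suite's exact code):

```ts
// Illustrative sketch: does a candidate compressor's output size stay within
// `slack` (default 10%) of a reference compressor's size for the same payload?
function withinRatio(candidateBytes: number, referenceBytes: number, slack = 0.1): boolean {
  return candidateBytes <= referenceBytes * (1 + slack);
}
```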
## Benchmark: zstdify vs Node built-in zstd

Throughput and compression ratio are compared against Node's built-in `node:zlib` zstd on a downloaded, local real-world corpus at compression level 6 (one entry per corpus file).

### Throughput (MB/s)

| Payload | Category | Level | Compress zstdify | Compress Node | Decompress zstdify | Decompress Node | Decompress fzstd | Decompress zstddec |
|---|---|---:|---:|---:|---:|---:|---:|---:|
| war-and-peace-txt | text | 6 | 1.52 | 102.63 | 150.75 | 1019.58 | 256.78 | 922.89 |
| shakespeare-complete-txt | text | 6 | 1.35 | 96.08 | 142.93 | 970.29 | 240.50 | 887.62 |
| enwik8 | text | 6 | 1.70 | 120.36 | 149.69 | 1112.28 | 247.97 | 953.65 |
| linux-kernel-tar | archive | 6 | 2.72 | 177.01 | 232.84 | 1749.27 | 337.85 | 1448.23 |
| apollo17-flightplan-pdf | document | 6 | 5.18 | 272.59 | 444.49 | 2968.65 | 497.71 | 2417.05 |
### Compression ratio (compressed/original)

| Payload | Category | Level | zstdify | Node |
|---|---|---:|---:|---:|
| war-and-peace-txt | text | 6 | 0.4002 | 0.3280 |
| shakespeare-complete-txt | text | 6 | 0.4171 | 0.3480 |
| enwik8 | text | 6 | 0.3724 | 0.3248 |
| linux-kernel-tar | archive | 6 | 0.2259 | 0.1995 |
| apollo17-flightplan-pdf | document | 6 | 0.1315 | 0.1176 |
Before benchmarking, fetch local corpus files (downloaded and stored locally, not committed):

```sh
pnpm --filter zstdify-tests run bench:fetch-data
```

To regenerate the chart and tables (fetch the corpus if needed, run benchmarks, render the SVG):

```sh
pnpm --filter zstdify-tests run bench:update
```

### Bundle size benchmark (Rollup)
| Target | Raw | Gzip | Brotli |
|---|---:|---:|---:|
| zstdify/compress | 28.82 KiB | 9.78 KiB | 8.83 KiB |
| zstdify/decompress | 35.13 KiB | 10.66 KiB | 9.41 KiB |
| zstddec decoder + wasm | 127.37 KiB | 49.69 KiB | 40.66 KiB |
To regenerate this snapshot:

```sh
pnpm --filter zstdify-tests run bench:bundle-size
```

## Publishing

Publish the npm packages (library first, then the CLI so it gets the correct zstdify version):

```sh
pnpm make-release:zstdify
pnpm make-release:cli
```

## Project structure

- `packages/zstdify` - Core library
- `packages/zstdify-tests` - Integration tests
- `packages/cli` - CLI tool (`zstdify-cli` on npm)
- `packages/cli-tests` - Tests of the CLI tool
## Acknowledgements

This project is made possible by the original zstd project by Meta and its contributors. The monorepo, project, and CLI structure were bootstrapped from hdrify, which made this project much easier to build. Many JavaScript optimization strategies were inspired by fzstd. We use simple-zstd for validation against the `zstd` CLI tool.

## License

MIT

## Author

Ben Houston, sponsored by Land of Assets
