
@derodero24/comprs v2.0.0

comprs

Rust-powered universal compression for JavaScript/TypeScript. zstd, gzip, brotli, and lz4 in one package.


→ Try the live playground


Why comprs?

The JavaScript compression ecosystem is fragmented across 12+ packages with inconsistent APIs, mixed maintenance status, and no streaming support. comprs consolidates this into a single, fast, well-typed library:

  • Native performance — Rust core compiled via napi-rs, with WASM fallback for browsers
  • Unified API — Same interface for zstd, gzip, brotli, and lz4
  • Streaming — Web Streams API (TransformStream) and Node.js Transform streams for processing large data with bounded memory
  • Universal — Node.js (native), browsers, Deno, and Bun (WASM)
  • Zero JS dependencies — Only Rust and the platform
  • Interactive playground — Try any algorithm live in your browser, no install needed

Comparison with Alternatives

| Feature | comprs | pako | fflate | node:zlib |
| --- | :---: | :---: | :---: | :---: |
| zstd | ✅ | ❌ | ❌ | ⚠️ Experimental* |
| gzip/deflate | ✅ | ✅ | ✅ | ✅ |
| brotli | ✅ | ❌ | ❌ | ✅ |
| lz4 | ✅ | ❌ | ❌ | ❌ |
| Web Streams API | ✅ | ❌ | ❌ | ❌ |
| Node.js Transform | ✅ | ❌ | ❌ | ✅ |
| Streaming | ✅ | Chunked† | ✅ | ✅ |
| Browser | ✅ | ✅ | ✅ | ❌ |
| Deno/Bun | ✅ | ✅ | ✅ | ❌ |
| Native performance | ✅ | ❌ | ❌ | ✅ |
| TypeScript | ✅ | ✅ | ✅ | ✅ |
| Dictionary | ✅ (zstd + brotli) | ❌ | ❌ | ❌ |
| Zero JS deps | ✅ | ✅ | ✅ | ✅ |

\* node:zlib zstd support requires Node.js ≥ 22.15 and is experimental
† pako uses chunked Inflate/Deflate classes, not the Web Streams API

Installation

npm install @derodero24/comprs
# or
pnpm add @derodero24/comprs
# or
yarn add @derodero24/comprs
# or
bun add @derodero24/comprs

Quick Start

Try it live: derodero24.github.io/comprs

import { zstdCompress, zstdDecompress } from '@derodero24/comprs';

const data = Buffer.from('Hello, comprs!');
const compressed = zstdCompress(data);
const decompressed = zstdDecompress(compressed);

All algorithms use the same pattern:

import { gzipCompress, brotliCompress, lz4Compress } from '@derodero24/comprs';

const gzipped = gzipCompress(data);    // gzip
const brotlied = brotliCompress(data); // brotli
const lz4ed = lz4Compress(data);       // lz4

Streaming (Web Streams API)

import { createGzipCompressStream, createGzipDecompressStream } from '@derodero24/comprs/streams';

// Pipe through compression/decompression TransformStreams
const compressed = response.body
  .pipeThrough(createGzipCompressStream());

Streaming (Node.js Transform)

import { createGzipCompressTransform } from '@derodero24/comprs/node';
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';

await pipeline(
  createReadStream('input.txt'),
  createGzipCompressTransform(),
  createWriteStream('output.gz'),
);

Auto-detect

import { decompress } from '@derodero24/comprs';

// Works with any supported format — no need to know the algorithm
const decompressed = decompress(compressedData);

Async

import { gzipCompressAsync, gzipDecompressAsync } from '@derodero24/comprs';

// Runs on the libuv thread pool — keeps the event loop free
const compressed = await gzipCompressAsync(largeData);
const decompressed = await gzipDecompressAsync(compressed);

Compression levels

import { zstdCompress } from '@derodero24/comprs';

zstdCompress(data, 1);   // fast compression
zstdCompress(data);      // default (level 3)
zstdCompress(data, 22);  // best compression
zstdCompress(data, -1);  // fast mode with negative levels
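The level knob trades CPU for output size with any codec. A rough illustration using Node's built-in zlib (so it runs without comprs installed); comprs uses the same 0-9 scale for gzip, while zstd extends the range to -131072..22:

```javascript
// Higher levels spend more CPU to shrink the output further.
import { gzipSync } from 'node:zlib';

const data = Buffer.from('abcabcabc'.repeat(10_000));

const fast = gzipSync(data, { level: 1 }); // larger output, least CPU
const best = gzipSync(data, { level: 9 }); // smallest output, most CPU

console.log(fast.length >= best.length); // level 9 is never larger here
```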

Dictionary compression

import { zstdTrainDictionary, zstdCompressWithDict, zstdDecompressWithDict } from '@derodero24/comprs';

// Train from samples of similar data
const dict = zstdTrainDictionary(samples);

// Compress/decompress with dictionary
const compressed = zstdCompressWithDict(data, dict);
const decompressed = zstdDecompressWithDict(compressed, dict);

// Brotli dictionary (no training step — provide raw dictionary bytes)
import { brotliCompressWithDict, brotliDecompressWithDict } from '@derodero24/comprs';

const dict = Buffer.from('{"id":0,"name":"","email":"@example.com"}'.repeat(10));
const compressed = brotliCompressWithDict(data, dict);
const decompressed = brotliDecompressWithDict(compressed, dict);
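Why dictionaries pay off: the compressor starts pre-seeded with bytes it expects to see, which matters most for many small, similar records. A concept sketch using Node's built-in zlib deflate dictionaries — the same idea comprs exposes for zstd and brotli:

```javascript
// Dictionary compression demo with node:zlib raw deflate (zlib format).
import { deflateSync, inflateSync } from 'node:zlib';

const dictionary = Buffer.from('{"id":0,"name":"","email":"@example.com"}');
const record = Buffer.from('{"id":42,"name":"ada","email":"ada@example.com"}');

const plain = deflateSync(record);
const withDict = deflateSync(record, { dictionary });
console.log(withDict.length < plain.length); // dictionary wins on small, similar records

// Decompression must use the same dictionary
const roundTrip = inflateSync(withDict, { dictionary });
console.log(roundTrip.equals(record)); // true
```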

Deno / Bun

// Deno
import { gzipCompress } from 'npm:@derodero24/comprs';

// Bun (same as Node.js)
import { gzipCompress } from '@derodero24/comprs';

Choosing an Algorithm

| Use case | Recommended | Why |
| --- | --- | --- |
| Web asset delivery (CDN, HTTP) | brotli | Best compression ratio; native browser Accept-Encoding support |
| General-purpose file compression | zstd | Fastest all-round with excellent compression ratio |
| Real-time data / logging / IPC | lz4 | Lowest latency; optimized for speed over ratio |
| Legacy systems / maximum compatibility | gzip | Universal support; every tool and platform can decompress |
| Many small similar records (JSON, logs) | zstd + dictionary | Dictionary pre-seeds the compressor with expected patterns |
| Brotli with domain-specific data | brotli + dictionary | Custom dictionary for repeated structures without training |

Choosing an API mode

| Mode | When to use |
| --- | --- |
| Sync (zstdCompress) | Small data (< 1 MB), low-latency requirements, scripts |
| Async (zstdCompressAsync) | Large data or when the event loop must stay free (servers, UIs) |
| Streaming (createZstdCompressStream) | Unknown/unbounded data size, memory-constrained environments |
| Dictionary (zstdCompressWithDict) | Compressing many small, structurally similar items |

API

One-shot

zstd

| Function | Description |
| --- | --- |
| zstdCompress(data, level?) | Compress with zstd. Level: -131072 to 22 (default: 3) |
| zstdDecompress(data) | Decompress zstd data (max 256 MB output) |
| zstdDecompressWithCapacity(data, capacity) | Decompress with explicit output size limit |

gzip / deflate

| Function | Description |
| --- | --- |
| gzipCompress(data, level?) | Compress with gzip. Level: 0-9 (default: 6) |
| gzipCompressWithHeader(data, header, level?) | Compress with custom gzip header (filename, mtime) |
| gzipReadHeader(data) | Read gzip header metadata without decompressing |
| gzipDecompress(data) | Decompress gzip data |
| gzipDecompressWithCapacity(data, capacity) | Decompress with explicit output size limit |
| deflateCompress(data, level?) | Compress with raw deflate. Level: 0-9 (default: 6) |
| deflateDecompress(data) | Decompress raw deflate data |
| deflateDecompressWithCapacity(data, capacity) | Decompress with explicit output size limit |
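For orientation on what gzipReadHeader-style metadata looks like on the wire: every gzip stream starts with a fixed 10-byte header defined by RFC 1952. This hand-rolled parser is a hypothetical sketch (not the library's implementation) using Node's built-in zlib to produce a stream to inspect:

```javascript
// Sketch: parse the fixed 10-byte gzip header (RFC 1952).
// Field layout: ID1 ID2 CM FLG MTIME(4) XFL OS
import { gzipSync } from 'node:zlib';

function readGzipHeader(buf) {
  if (buf[0] !== 0x1f || buf[1] !== 0x8b) throw new Error('not gzip');
  return {
    method: buf[2],             // 8 = deflate, the only method in practice
    flags: buf[3],              // bit 3 (FNAME) set when a filename follows
    mtime: buf.readUInt32LE(4), // seconds since epoch, 0 if unset
    os: buf[9],                 // OS code of the producer
  };
}

const header = readGzipHeader(gzipSync(Buffer.from('hello')));
console.log(header.method); // 8
```

The real gzipReadHeader additionally decodes the optional filename/comment fields that follow the fixed part when the corresponding flag bits are set.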

brotli

| Function | Description |
| --- | --- |
| brotliCompress(data, quality?) | Compress with brotli. Quality: 0-11 (default: 6) |
| brotliDecompress(data) | Decompress brotli data (max 256 MB output) |
| brotliDecompressWithCapacity(data, capacity) | Decompress with explicit output size limit |

lz4

| Function | Description |
| --- | --- |
| lz4Compress(data) | Compress with LZ4 frame format |
| lz4Decompress(data) | Decompress LZ4 data (max 256 MB output) |
| lz4DecompressWithCapacity(data, capacity) | Decompress with explicit output size limit |

Auto-detect

| Function | Description |
| --- | --- |
| decompress(data) | Auto-detect format and decompress (zstd, gzip, brotli, lz4) |
| detectFormat(data) | Detect compression format. Returns 'zstd', 'gzip', 'brotli', 'lz4', or 'unknown' |

Utilities

| Function | Description |
| --- | --- |
| crc32(data, initialValue?) | Compute CRC32 checksum (supports incremental computation) |
| version() | Returns the library version |
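To make the initialValue parameter concrete: it lets you checksum a stream chunk-by-chunk by feeding each result back in. A minimal pure-JS CRC32 with the same chaining behavior (illustration only; the library computes this in Rust):

```javascript
// Sketch: bitwise CRC-32 (IEEE polynomial) with incremental chaining.
function crc32(buf, initial = 0) {
  let crc = ~initial >>> 0; // recover internal state from a previous result
  for (const byte of buf) {
    crc ^= byte;
    for (let i = 0; i < 8; i++) crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
  }
  return ~crc >>> 0;
}

const whole = crc32(Buffer.from('123456789')); // 0xcbf43926, the standard check value
const part = crc32(Buffer.from('6789'), crc32(Buffer.from('12345')));
console.log(whole === part); // incremental result matches one-shot
```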

zstd Dictionary

| Function | Description |
| --- | --- |
| zstdTrainDictionary(samples, maxDictSize?) | Train a dictionary from sample data (default max: 110 KB) |
| zstdCompressWithDict(data, dict, level?) | Compress with pre-trained dictionary |
| zstdDecompressWithDict(data, dict) | Decompress dictionary-compressed data |
| zstdDecompressWithDictWithCapacity(data, dict, capacity) | Decompress with dictionary and explicit output size limit |

Brotli Dictionary

| Function | Description |
| --- | --- |
| brotliCompressWithDict(data, dict, quality?) | Compress with custom dictionary |
| brotliDecompressWithDict(data, dict) | Decompress dictionary-compressed data |
| brotliDecompressWithDictWithCapacity(data, dict, capacity) | Decompress with explicit capacity |

Async

All one-shot functions have async variants that run on the libuv thread pool. Append Async to any function name:

const compressed = await zstdCompressAsync(data, level);
const decompressed = await gzipDecompressAsync(compressed);

| Function | Description |
| --- | --- |
| zstdCompressAsync(data, level?) | Async zstd compression |
| zstdDecompressAsync(data) | Async zstd decompression |
| zstdDecompressWithCapacityAsync(data, capacity) | Async zstd decompression with explicit size limit |
| zstdCompressWithDictAsync(data, dict, level?) | Async zstd compression with dictionary |
| zstdDecompressWithDictAsync(data, dict) | Async zstd decompression with dictionary |
| zstdDecompressWithDictWithCapacityAsync(data, dict, capacity) | Async zstd decompression with dictionary and size limit |
| zstdTrainDictionaryAsync(samples, maxDictSize?) | Async dictionary training |
| gzipCompressAsync(data, level?) | Async gzip compression |
| gzipDecompressAsync(data) | Async gzip decompression |
| gzipDecompressWithCapacityAsync(data, capacity) | Async gzip decompression with explicit size limit |
| deflateCompressAsync(data, level?) | Async deflate compression |
| deflateDecompressAsync(data) | Async deflate decompression |
| deflateDecompressWithCapacityAsync(data, capacity) | Async deflate decompression with explicit size limit |
| brotliCompressAsync(data, quality?) | Async brotli compression |
| brotliDecompressAsync(data) | Async brotli decompression |
| brotliDecompressWithCapacityAsync(data, capacity) | Async brotli decompression with explicit size limit |
| brotliCompressWithDictAsync(data, dict, quality?) | Async brotli compression with dictionary |
| brotliDecompressWithDictAsync(data, dict) | Async brotli decompression with dictionary |
| brotliDecompressWithDictWithCapacityAsync(data, dict, capacity) | Async brotli decompression with dictionary and size limit |
| lz4CompressAsync(data) | Async LZ4 compression |
| lz4DecompressAsync(data) | Async LZ4 decompression |
| lz4DecompressWithCapacityAsync(data, capacity) | Async LZ4 decompression with explicit size limit |
| decompressAsync(data) | Async auto-detect format and decompress |

Streaming

Web Streams API (TransformStream) for all algorithms. Import from @derodero24/comprs/streams:

import { createGzipCompressStream } from '@derodero24/comprs/streams';

| Function | Description |
| --- | --- |
| createZstdCompressStream(level?) | Create a zstd compression TransformStream |
| createZstdDecompressStream() | Create a zstd decompression TransformStream |
| createGzipCompressStream(level?) | Create a gzip compression TransformStream |
| createGzipDecompressStream() | Create a gzip decompression TransformStream |
| createDeflateCompressStream(level?) | Create a raw deflate compression TransformStream |
| createDeflateDecompressStream() | Create a raw deflate decompression TransformStream |
| createBrotliCompressStream(quality?) | Create a brotli compression TransformStream |
| createBrotliDecompressStream() | Create a brotli decompression TransformStream |
| createLz4CompressStream() | Create an LZ4 compression TransformStream |
| createLz4DecompressStream() | Create an LZ4 decompression TransformStream |
| createZstdCompressDictStream(dict, level?) | Streaming zstd compression with dictionary |
| createZstdDecompressDictStream(dict) | Streaming zstd decompression with dictionary |
| createBrotliCompressDictStream(dict, quality?) | Streaming brotli compression with dictionary |
| createBrotliDecompressDictStream(dict) | Streaming brotli decompression with dictionary |
| createDecompressStream() | Auto-detect format and create a decompression TransformStream |
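The pipeThrough() composition these factories are built for can be previewed with the platform's built-in CompressionStream/DecompressionStream (gzip only); the assumption is that a comprs stream slots into the same position while adding zstd, brotli, and lz4:

```javascript
// Round trip a ReadableStream through paired TransformStreams (built-in gzip).
const source = new Blob(['hello '.repeat(1000)]).stream();

const roundTripped = source
  .pipeThrough(new CompressionStream('gzip'))    // a comprs factory would go here
  .pipeThrough(new DecompressionStream('gzip')); // ...and here, e.g. for zstd

const text = await new Response(roundTripped).text();
console.log(text.length); // 6000
```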

Node.js Transform Streams

For Node.js stream.pipeline() compatibility, import from @derodero24/comprs/node:

import { createGzipCompressTransform } from '@derodero24/comprs/node';
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';

await pipeline(
  createReadStream('input.txt'),
  createGzipCompressTransform(),
  createWriteStream('output.gz'),
);

| Function | Description |
| --- | --- |
| createZstdCompressTransform(level?) | Node.js Transform for zstd compression |
| createZstdDecompressTransform() | Node.js Transform for zstd decompression |
| createGzipCompressTransform(level?) | Node.js Transform for gzip compression |
| createGzipDecompressTransform() | Node.js Transform for gzip decompression |
| createDeflateCompressTransform(level?) | Node.js Transform for deflate compression |
| createDeflateDecompressTransform() | Node.js Transform for deflate decompression |
| createBrotliCompressTransform(quality?) | Node.js Transform for brotli compression |
| createBrotliDecompressTransform() | Node.js Transform for brotli decompression |
| createLz4CompressTransform() | Node.js Transform for LZ4 compression |
| createLz4DecompressTransform() | Node.js Transform for LZ4 decompression |
| createZstdCompressDictTransform(dict, level?) | Node.js Transform for zstd dict compression |
| createZstdDecompressDictTransform(dict) | Node.js Transform for zstd dict decompression |
| createBrotliCompressDictTransform(dict, quality?) | Node.js Transform for brotli dict compression |
| createBrotliDecompressDictTransform(dict) | Node.js Transform for brotli dict decompression |
| createDecompressTransform() | Auto-detect format and create a decompression Transform |
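These Transforms compose like node:zlib's stream classes. The same pipeline shape with built-ins, round-tripping a buffer entirely in memory (a comprs Transform is assumed to replace createGzip()/createGunzip() one-for-one):

```javascript
// In-memory gzip round trip through a Node stream pipeline.
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import { createGzip, createGunzip } from 'node:zlib';

const chunks = [];
await pipeline(
  Readable.from([Buffer.from('stream me')]),
  createGzip(),
  createGunzip(),
  async (source) => {
    for await (const chunk of source) chunks.push(chunk); // collect the output
  },
);
console.log(Buffer.concat(chunks).toString()); // 'stream me'
```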

Supported Algorithms

| Algorithm | One-shot | Streaming | Status |
| --- | --- | --- | --- |
| zstd | ✅ | ✅ | Available |
| gzip / deflate | ✅ | ✅ | Available |
| brotli | ✅ | ✅ | Available |
| lz4 | ✅ | ✅ | Available |

Platform Support

| Platform | Backend | Status |
| --- | --- | --- |
| Node.js ≥ 22 | Native (napi-rs) | ✅ |
| Browsers | WASM | ✅ |
| Deno | WASM | ✅ |
| Bun | WASM | ✅ |

Build targets

| OS | Architectures |
| --- | --- |
| macOS | Intel (x64), Apple Silicon (ARM64) |
| Linux | x64, ARM64 (glibc & musl) |
| Windows | x64, ARM64 |
| WASM | wasm32-unknown-unknown (wasm-bindgen) |

WASM bundle size

The browser WASM binary (wasm32-unknown-unknown) is built via wasm-pack and optimized with wasm-opt -O3 during CI builds.

| | Size |
| --- | --- |
| comprs-wasm_bg.wasm (optimized) | ~1.5 MB |
| Gzip-compressed (typical CDN transfer) | ~600 KB |

Exact sizes are tracked on every build — see the latest CI run summary.

Browser Usage

comprs works in browsers via WASM. Use a bundler like Vite, webpack, or esbuild:

import { gzipCompress, gzipDecompress } from '@derodero24/comprs';

const data = new TextEncoder().encode('Hello from the browser!');
const compressed = gzipCompress(data);
const decompressed = gzipDecompress(compressed);

[!TIP] WASM initialization happens automatically on first use. For performance-critical applications, consider warming up the module by calling any function once during app startup.

Framework Integration (SSR)

Native modules need to be externalized in SSR frameworks:

Next.js

// next.config.js
const nextConfig = {
  serverExternalPackages: ['@derodero24/comprs'],
};

Vite SSR

// vite.config.js
export default {
  ssr: {
    external: ['@derodero24/comprs'],
  },
};

On the client side, comprs automatically falls back to WASM — no additional configuration needed.

Migration

From pako

- import pako from 'pako';
- const compressed = pako.gzip(data);
- const decompressed = pako.ungzip(compressed);
+ import { gzipCompress, gzipDecompress } from '@derodero24/comprs';
+ const compressed = gzipCompress(data);
+ const decompressed = gzipDecompress(compressed);

From fflate

- import { gzipSync, gunzipSync } from 'fflate';
- const compressed = gzipSync(data);
- const decompressed = gunzipSync(compressed);
+ import { gzipCompress, gzipDecompress } from '@derodero24/comprs';
+ const compressed = gzipCompress(data);
+ const decompressed = gzipDecompress(compressed);

comprs adds zstd, lz4, brotli, dictionary compression, and Web Streams API — none of which are available in fflate.

From node:zlib

- import { gzipSync, gunzipSync } from 'node:zlib';
- const compressed = gzipSync(data);
- const decompressed = gunzipSync(compressed);
+ import { gzipCompress, gzipDecompress } from '@derodero24/comprs';
+ const compressed = gzipCompress(data);
+ const decompressed = gzipDecompress(compressed);

Benchmarks

Benchmarks run on Apple M2, Node.js v22. Run locally with pnpm run bench. Numbers vary by machine and data type.

Compression (ops/sec, higher is better)

| Size | comprs | pako | fflate | node:zlib |
| --- | ---: | ---: | ---: | ---: |
| 150B patterned | 1,352 | 1,578 | 7,785 | 13,078 |
| 10KB patterned | 3,605 | 133 | 345 | 1,878 |
| 1MB patterned | 246 | 14 | 11 | 100 |
| 150B random | 29,914 | 4,653 | 7,471 | 1,423 |
| 10KB random | 400 | 42 | 652 | 3,459 |
| 1MB random | 13 | 5 | 6 | 10 |

Decompression (ops/sec, higher is better)

| Size | comprs | pako | fflate | node:zlib |
| --- | ---: | ---: | ---: | ---: |
| 150B patterned | 162,220 | 65,141 | 521,933 | 308,493 |
| 10KB patterned | 95,300 | 19,235 | 46,402 | 102,585 |
| 1MB patterned | 903 | 123 | 310 | 1,451 |
| 150B random | 29,040 | 1,952 | 560,840 | 140,243 |
| 10KB random | 7,004 | 17,616 | 407,789 | 271,245 |
| 1MB random | 1,508 | 278 | 19,341 | 4,282 |

Compression (ops/sec, higher is better)

| Size | comprs | pako | fflate | node:zlib |
| --- | ---: | ---: | ---: | ---: |
| 150B patterned | 106,183 | 13,666 | 55,434 | 4,107 |
| 10KB patterned | 2,746 | 5,160 | 3,218 | 7,161 |
| 1MB patterned | 1,963 | 91 | 217 | 472 |
| 150B random | 56,537 | 18,763 | 51,817 | 89,617 |
| 10KB random | 7,331 | 1,500 | 510 | 1,941 |
| 1MB random | 21 | 8 | 24 | 22 |

Decompression (ops/sec, higher is better)

| Size | comprs | pako | fflate | node:zlib |
| --- | ---: | ---: | ---: | ---: |
| 150B patterned | 124,501 | 21,119 | 8,938 | 34,967 |
| 10KB patterned | 23,416 | 5,808 | 3,061 | 9,753 |
| 1MB patterned | 306 | 65 | 70 | 116 |
| 150B random | 35,178 | 11,449 | 204,088 | 91,795 |
| 10KB random | 7,986 | 13,079 | 62,782 | 114,130 |
| 1MB random | 610 | 350 | 238 | 176 |

| Benchmark | comprs | node:zlib | Speedup |
| --- | ---: | ---: | ---: |
| compress 150B patterned | 24,811 | 2,258 | 11.0x |
| compress 10KB patterned | 15,357 | 1,498 | 10.2x |
| compress 1MB patterned | 659 | 70 | 9.5x |
| compress 150B random | 22,914 | 1,197 | 19.1x |
| compress 10KB random | 8,188 | 61 | 134.7x |
| compress 1MB random | 336 | 2 | 155.4x |
| decompress 150B patterned | 105,187 | 223,191 | 0.5x |
| decompress 10KB patterned | 28,509 | 46,967 | 0.6x |
| decompress 1MB patterned | 636 | 401 | 1.6x |
| decompress 150B random | 113,559 | 53,013 | 2.1x |
| decompress 10KB random | 1,503 | 1,156 | 1.3x |
| decompress 1MB random | 738 | 5,219 | 0.1x |

Compression (ops/sec, higher is better)

| Size | zstd | gzip | brotli | lz4 |
| --- | ---: | ---: | ---: | ---: |
| 150B patterned | 515,284 | 78,073 | 24,300 | 274,935 |
| 10KB patterned | 161,706 | 31,941 | 15,146 | 165,821 |
| 1MB patterned | 5,607 | 2,252 | 642 | 4,153 |
| 150B random | 643,393 | 69,616 | 23,634 | 97,111 |
| 10KB random | 129,576 | 31,525 | 13,411 | 74,162 |
| 1MB random | 4,320 | 2,262 | 478 | 5,419 |
| JSON 84KB | 8,914 | 1,828 | 758 | 4,531 |
| text 45KB | 52,262 | 31,111 | 8,022 | 22,953 |

Decompression (ops/sec, higher is better)

| Size | zstd | gzip | brotli | lz4 |
| --- | ---: | ---: | ---: | ---: |
| 150B patterned | 510,595 | 433,472 | 132,960 | 291,921 |
| 10KB patterned | 224,034 | 135,437 | 24,028 | 84,465 |
| 1MB patterned | 3,206 | 2,561 | 303 | 1,806 |
| 150B random | 461,787 | 441,290 | 30,870 | 371,571 |
| 10KB random | 245,513 | 135,314 | 23,516 | 89,173 |
| 1MB random | 3,498 | 2,309 | 656 | 1,792 |
| JSON 84KB | 17,345 | 9,737 | 4,644 | 20,356 |
| text 45KB | 82,783 | 29,115 | 3,415 | 41,811 |

Key takeaways

  • zstd is the fastest all-round: highest throughput for both compression and decompression across most data sizes
  • lz4 excels at raw speed: competitive with zstd for compression, especially on random/incompressible data
  • gzip/deflate compression: comprs (Rust flate2) is significantly faster than pako and fflate on larger data (10KB+), competitive on small payloads
  • gzip/deflate decompression: performance varies by data size; comprs leads on patterned data, while fflate can be faster on small random payloads due to lower call overhead
  • brotli compression: comprs is 10–155x faster than node:zlib thanks to the Rust brotli implementation
  • Native vs WASM: these numbers are from the native (napi-rs) backend; WASM throughput is lower due to the execution overhead but still outperforms pure-JS libraries on large payloads

Notes

[!NOTE] Default decompression limit: All decompression functions cap output at 256 MB by default. Use *WithCapacity() variants for larger data:

const decompressed = zstdDecompressWithCapacity(data, 1024 * 1024 * 1024); // 1 GB

[!NOTE] Small payloads on WASM: For data under ~1 KB, the WASM runtime overhead may exceed compression time. Consider batching small items or using the native Node.js backend where possible.

[!NOTE] Brotli decompression: While comprs brotli compression is 10–155x faster than node:zlib, decompression performance varies by data type and size. For decompression-heavy workloads on Node.js, benchmark your specific data.

Ecosystem

@derodero24/comprs-middleware

HTTP compression middleware for Express, Fastify, and Hono.

npm install @derodero24/comprs @derodero24/comprs-middleware

// Express
import { comprs } from '@derodero24/comprs-middleware/express';
app.use(comprs());

// Fastify
import { comprs } from '@derodero24/comprs-middleware/fastify';
app.register(comprs);

// Hono
import { comprs } from '@derodero24/comprs-middleware/hono';
app.use(comprs());

See the middleware README for full documentation.

Contributing

See CONTRIBUTING.md for development setup and guidelines.

Security

See SECURITY.md for vulnerability reporting.

License

MIT