pdfpressor-client
v0.2.52
Self-contained client-side PDF compressor. Uses npm dependencies for maximum compatibility with React and modern bundlers.
A standalone client-side PDF compressor with advanced performance optimizations. It loads a PDF in the browser, rasterizes each page to a canvas at a chosen DPI, recompresses pages as JPEG with configurable quality, and rebuilds a new PDF. Returns multiple handy artifacts (Uint8Array bytes, Blob, object URL, Buffer if available) plus detailed statistics.
Features
- ✅ Pure browser operation (no server round-trip required)
- ✅ Adjustable DPI (affects rendered page resolution)
- ✅ Adjustable JPEG quality (0.1–1.0 floating value)
- ✅ Parallel page processing - Process multiple pages simultaneously
- ✅ Canvas pooling - Reuse canvases for better memory efficiency
- ✅ Chunk processing - Process pages in configurable batches
- ✅ Progress tracking - Real-time progress callbacks
- ✅ Optional detailed logging (opt-in)
- ✅ Side‑by‑side preview (original vs compressed first page)
- ✅ Download links for original and compressed output
- ✅ Simple API for React / vanilla JS
Installation (for bundlers / React)
npm install pdfpressor-client
🎯 Zero Setup Required!
This package is 100% self-contained with all dependencies bundled directly. No configuration, no file copying, no CDN fallbacks needed.
Just install and use:
import { compressPdfClient } from 'pdfpressor-client';
const { compressFile } = await compressPdfClient(pdfBytes, 'output.pdf');What's included:
- ✅ pdf-lib.min.js (525 KB) - bundled
- ✅ pdf.min.js (353 KB) - bundled
- ✅ pdf.worker.min.js (1.38 MB) - bundled
All dependencies load automatically from the package. Works offline, no external requests.
Direct Browser Usage (CDN fallback)
You can copy the contents of this folder and serve it; dependencies are loaded from CDN automatically if module imports fail.
Serve locally (example):
npx http-server . -c-1
# then open http://localhost:8080/client-compressor/index.html
How It Works
This package is completely self-contained - all PDF libraries are bundled directly with the package as native ES modules.
Loading Strategy:
- Check if already loaded - Reuses existing window globals if present (for compatibility)
- Load via ES module wrappers - Uses dedicated wrapper modules that properly export UMD bundles as ES modules
- Dynamic imports - Uses import() to load wrapper modules from node_modules/pdfpressor-client/
- Automatic worker resolution - Worker path is resolved relative to the module using import.meta.url
- Ready to compress! - No external dependencies, no network requests, no public folder setup
The bundled files (pdf-lib.min.js, pdf.min.js, pdf.worker.min.js) are UMD bundles wrapped by ES module files (pdf-lib-wrapper.js, pdfjs-wrapper.js) that properly export their globals. This ensures clean ES module imports that work everywhere.
Works everywhere: Browser, offline, corporate firewalls, webpack, vite, rollup, etc.
Eager Initialization (Fail Fast)
You can preload and validate all dependencies early (e.g. on app startup) to avoid first‑interaction delays:
import { initPdfCompressor } from 'pdfpressor-client';
await initPdfCompressor({ logs: true, retries: 2 });
If initialization fails, it throws a PdfCompressorError with a .code value such as PDF_LIB_LOAD_FAILED, PDFJS_LOAD_FAILED, or INIT_FAILED.
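The error codes above can be translated into user-facing messages. A minimal sketch, assuming the .code values documented here; describeInitError and its message strings are hypothetical, not part of the package:

```javascript
// Hypothetical helper: map PdfCompressorError codes (documented above)
// to user-facing messages. The message wording is illustrative.
function describeInitError(code) {
  const messages = {
    PDF_LIB_LOAD_FAILED: 'Could not load pdf-lib; check your bundler or network setup.',
    PDFJS_LOAD_FAILED: 'Could not load pdf.js; check your bundler or network setup.',
    INIT_FAILED: 'PDF compressor failed to initialize.',
  };
  return messages[code] || `Unknown initialization error (${code}).`;
}

// Usage sketch (initPdfCompressor comes from pdfpressor-client):
// try {
//   await initPdfCompressor({ logs: true, retries: 2 });
// } catch (err) {
//   showBanner(describeInitError(err.code)); // showBanner is your own UI code
// }
```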
Offline Usage
Because local minified copies are included, you can operate fully offline after the first install:
- Disconnect network
- Call initPdfCompressor() (optional), then compressPdfClient(...)
- The loader will skip failed CDN attempts and transparently fall back to local scripts.
Environment Customization
- Set logs=true when calling either initPdfCompressor or compressPdfClient for detailed timing and fallback logs.
- Adjust retry attempts via initPdfCompressor({ retries: 3 }).
API
Basic Usage
compressPdfClient(
inputBytes: Uint8Array,
outputName: string,
dpi?: number, // default 150 (rendering DPI)
quality?: number, // default 0.7 (JPEG quality 0.1–1.0)
logs?: boolean, // default false; enable console timing + diagnostic output
options?: { // Advanced options (v0.2.0+)
parallel?: boolean, // default true - parallel processing
chunkSize?: number, // default 5 - pages per chunk
reuseCanvas?: boolean, // default true - canvas pooling
progressCallback?: (progress: {
current: number, // Current page number
total: number, // Total pages
progress: number // Progress percentage (0-100)
}) => void
}
) => Promise<{ compressFile: {
bytes: Uint8Array;
blob: Blob;
objectUrl: string|null;
buffer: Buffer|null;
stats: {
outputName: string;
originalSize: number;
compressedSize: number;
reduction: number;
pageCount: number;
dpi: number;
quality: number;
};
}
}>
initPdfCompressor(options?: {
logs?: boolean; // default false
retries?: number; // default 2 (script tag retry count per URL)
pdfLibUrls?: string[]; // override CDN list for pdf-lib
pdfjsUrls?: string[]; // override CDN list for pdfjs-dist main script
pdfjsWorkerUrls?: string[];// override CDN list for worker script
}): Promise<boolean>
Parameters
- inputBytes (Uint8Array): Raw PDF file bytes (e.g. from file.arrayBuffer())
- outputName (string): Suggested filename for download
- dpi (number, default 150): Render scale; higher DPI = larger images & potentially bigger output
- quality (number, default 0.7): JPEG quality (0.1–1.0). Lower means smaller size (more compression)
- logs (boolean, default false): When true, enables detailed console.log, timing, and per-page diagnostics
Returns
Single object { compressFile } where compressFile contains:
- bytes: Compressed PDF as Uint8Array
- blob: Compressed PDF Blob for browser download / upload
- objectUrl: Created via URL.createObjectURL(blob) for previews
- buffer: Node Buffer version (if Buffer exists)
- stats: Metadata for comparison & display
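The stats fields lend themselves to a one-line summary for the UI. A sketch assuming the documented field names (originalSize, compressedSize, reduction, pageCount); formatKB and summarizeStats are hypothetical helpers, not exports of this package:

```javascript
// Hypothetical display helpers built on the documented stats object.
function formatKB(bytes) {
  return (bytes / 1024).toFixed(1) + ' KB';
}

function summarizeStats(stats) {
  return `${stats.pageCount} pages: ${formatKB(stats.originalSize)} -> ` +
    `${formatKB(stats.compressedSize)} (${stats.reduction.toFixed(1)}% smaller)`;
}

// e.g. after compression:
// const { compressFile } = await compressPdfClient(bytes, 'out.pdf');
// statusEl.textContent = summarizeStats(compressFile.stats);
```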
Examples
Example 1: Basic Usage (React)
import { useState } from 'react';
import { compressPdfClient } from 'pdfpressor-client';
function PdfCompressor() {
const [stats, setStats] = useState(null);
const handleFile = async (e) => {
const file = e.target.files[0];
if (!file) return;
const bytes = new Uint8Array(await file.arrayBuffer());
// Enable logs for development
const { compressFile } = await compressPdfClient(bytes, 'compressed_' + file.name, 150, 0.7, true);
setStats(compressFile.stats);
const a = document.createElement('a');
a.href = compressFile.objectUrl;
a.download = compressFile.stats.outputName;
a.click();
};
return (
<div>
<input type="file" accept="application/pdf" onChange={handleFile} />
{stats && <pre>{JSON.stringify(stats, null, 2)}</pre>}
</div>
);
}
Example 2: With Progress Tracking (React)
import { useState } from 'react';
import { compressPdfClient } from 'pdfpressor-client';
function PdfCompressorWithProgress() {
const [progress, setProgress] = useState(0);
const [status, setStatus] = useState('');
const handleFile = async (e) => {
const file = e.target.files[0];
if (!file) return;
setStatus('Loading PDF...');
const bytes = new Uint8Array(await file.arrayBuffer());
setStatus('Compressing...');
const { compressFile } = await compressPdfClient(
bytes,
'compressed_' + file.name,
150,
0.7,
false,
{
progressCallback: ({ current, total, progress }) => {
setProgress(progress);
setStatus(`Processing page ${current} of ${total}...`);
}
}
);
setStatus('Done!');
setProgress(100);
// Download
const a = document.createElement('a');
a.href = compressFile.objectUrl;
a.download = compressFile.stats.outputName;
a.click();
console.log(`Reduced by ${compressFile.stats.reduction.toFixed(1)}%`);
};
return (
<div>
<input type="file" accept="application/pdf" onChange={handleFile} />
<div>{status}</div>
<progress value={progress} max="100">{progress}%</progress>
</div>
);
}
Example 3: Custom Batch Size (Process 15 pages at a time)
import { compressPdfClient } from 'pdfpressor-client';
async function compressWithLargeBatches(file) {
const bytes = new Uint8Array(await file.arrayBuffer());
const { compressFile } = await compressPdfClient(
bytes,
'compressed.pdf',
150,
0.7,
true, // Enable logs to see chunk processing
{
parallel: true,
chunkSize: 15, // Process 15 pages at a time instead of default 5
progressCallback: ({ current, total, progress }) => {
console.log(`Progress: ${current}/${total} (${progress}%)`);
}
}
);
return compressFile;
}
Example 4: High Quality Compression
// For documents with important text/graphics
const { compressFile } = await compressPdfClient(
pdfBytes,
'high-quality.pdf',
200, // Higher DPI for better quality
0.85, // Higher JPEG quality
false,
{
parallel: true,
chunkSize: 10
}
);
Example 5: Maximum Compression
// For when file size is critical
const { compressFile } = await compressPdfClient(
pdfBytes,
'tiny.pdf',
96, // Lower DPI
0.5, // Lower quality = smaller file
false
);
Example 6: Low-Memory Devices (Sequential Processing)
// Disable parallel processing for devices with limited RAM
const { compressFile } = await compressPdfClient(
pdfBytes,
'output.pdf',
150,
0.7,
false,
{
parallel: false, // Process pages one at a time
reuseCanvas: true, // Still use canvas pooling
progressCallback: ({ current, total }) => {
console.log(`Page ${current}/${total}`);
}
}
);
Example 7: Vanilla JavaScript with Progress Bar
const inputEl = document.querySelector('#pdfInput');
const progressBar = document.querySelector('#progress');
const statusEl = document.querySelector('#status');
inputEl.onchange = async () => {
const file = inputEl.files[0];
if (!file) return;
statusEl.textContent = 'Loading...';
const bytes = new Uint8Array(await file.arrayBuffer());
statusEl.textContent = 'Compressing...';
const { compressFile } = await compressPdfClient(
bytes,
'compressed_' + file.name,
150,
0.7,
false,
{
chunkSize: 15, // Process 15 pages per batch
progressCallback: ({ current, total, progress }) => {
progressBar.value = progress;
statusEl.textContent = `Page ${current}/${total} (${progress}%)`;
}
}
);
statusEl.textContent = `Done! Reduced by ${compressFile.stats.reduction.toFixed(1)}%`;
// Auto-download
const link = document.createElement('a');
link.href = compressFile.objectUrl;
link.download = compressFile.stats.outputName;
link.click();
};
Example 8: Upload to Server After Compression
import { compressPdfClient } from 'pdfpressor-client';
async function compressAndUpload(file) {
const bytes = new Uint8Array(await file.arrayBuffer());
const { compressFile } = await compressPdfClient(
bytes,
file.name,
150,
0.7,
false
);
// Upload the compressed blob
const formData = new FormData();
formData.append('pdf', compressFile.blob, compressFile.stats.outputName);
const response = await fetch('/api/upload', {
method: 'POST',
body: formData
});
console.log(`Uploaded! Original: ${compressFile.stats.originalSize}, Compressed: ${compressFile.stats.compressedSize}`);
return response.json();
}
What It Accepts
- Input: Any standard PDF file (binary bytes as Uint8Array)
- Parameters: outputName, dpi, quality, logs
What It Outputs
- Compressed PDF (rasterized pages) via compressFile.bytes
- Browser-friendly Blob and objectUrl
- Statistics comparing original vs compressed sizes
Internals / How It Works
- Load PDF bytes (pdfjs-dist)
- Render each page to a canvas at dpi/72 scale (with optional canvas pooling for efficiency)
- Export canvas as JPEG with specified quality
- Embed compressed images into a fresh PDF (pdf-lib)
- Process pages in parallel chunks (default) or sequentially
- Produce output artifacts and stats
- If logs is true, output timing for load, render, encode, embed, and final save
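The dpi/72 scaling step above can be illustrated with a small sketch (PDF page sizes are in points, 72 per inch; renderSize is a hypothetical helper, not part of the package):

```javascript
// Sketch of the render-scale math: a page measured in PDF points is
// rendered at scale = dpi / 72, so pixel dimensions grow linearly
// with the chosen DPI. A US Letter page is 612 x 792 points.
function renderSize(widthPts, heightPts, dpi) {
  const scale = dpi / 72;
  return {
    scale,
    width: Math.round(widthPts * scale),
    height: Math.round(heightPts * scale),
  };
}
```

At the default 150 DPI a Letter page rasterizes to 1275 x 1650 pixels, while 96 DPI yields 816 x 1056; this is why raising dpi increases both canvas memory use and output size.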
Performance Optimizations (v0.2.0+)
- Parallel Processing: Process multiple pages simultaneously for faster compression
- Canvas Pooling: Reuse canvas elements to reduce memory allocation
- Chunk Processing: Process pages in configurable batches to balance speed and memory
- Progress Tracking: Real-time callbacks for UI updates
Limitations
- Text selectable content is lost (pages become images)
- Very large PDFs may cause memory spikes (each page rendered to canvas)
- JPEG only (no WebP/AVIF fallback due to pdf-lib restrictions)
- Compression dependent on browser's JPEG encoder
Tips
- Lower dpi (e.g. 96) and quality (e.g. 0.5) for smaller outputs
- Keep the original PDF if you need selectable text
- Use parallel processing (default) for faster compression on modern browsers
- Disable parallel processing on low-memory devices with parallel: false
- Use progress callbacks to show loading indicators for large PDFs
- Enable logs during development, disable in production for minimal console noise
- Call initPdfCompressor() on startup to surface any blocked CDN issues early
Credits / Acknowledgements
- Inspired by Ralph Joseph Lagumen.
- Inspired by and gives credit to the original pdfpressor project (https://www.npmjs.com/package/pdfpressor) for conceptual guidance on PDF compression approaches.
License
MIT
