@serve.zone/containerarchive (v0.1.3)

A high-performance, content-addressed incremental backup engine with built-in deduplication, encryption, compression, and Reed-Solomon error correction — powered by a Rust core with a clean TypeScript API.

Issue Reporting and Security

For reporting bugs, issues, or security vulnerabilities, please visit community.foss.global/, the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and complete identity verification can also get a code.foss.global/ account to submit pull requests directly.

Install

pnpm install @serve.zone/containerarchive

🏗️ Architecture

containerarchive uses a hybrid Rust + TypeScript architecture. The heavy lifting — chunking, hashing, compression, encryption, pack file I/O, and parity — runs in a compiled Rust binary. The TypeScript layer provides a clean, idiomatic Node.js API and manages data streaming via Unix sockets through the @push.rocks/smartrust RustBridge IPC.

┌──────────────────────────────────────┐
│  Your Application  (TypeScript/JS)   │
│  ┌────────────────────────────────┐  │
│  │   ContainerArchive API         │  │
│  │   .init() .ingest() .restore() │  │
│  └────────────┬───────────────────┘  │
│               │ Unix Socket + JSON   │
│  ┌────────────▼───────────────────┐  │
│  │   Rust Engine (compiled bin)   │  │
│  │   FastCDC │ SHA-256 │ AES-GCM  │  │
│  │   gzip/zstd │ Reed-Solomon     │  │
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘

✨ Features

| Feature | Details |
|---|---|
| Content-Defined Chunking | FastCDC with gear-based rolling hash — insertions/deletions only affect nearby boundaries |
| Deduplication | SHA-256 chunk addressing — identical data is stored only once across all snapshots |
| Compression | gzip or zstd per-chunk compression |
| Encryption | AES-256-GCM with Argon2id key derivation — passphrase-protected repositories |
| Pack Files | Chunks are batched into binary pack files with binary .idx indexes for fast lookup |
| Snapshots | Immutable point-in-time snapshots with metadata tags and multi-item support |
| Reed-Solomon Parity | RS(20,1) erasure coding — recover any single lost pack from a group of 20 |
| Incremental | Only new/changed chunks are stored on each ingest |
| Streaming | Unix socket streaming between TypeScript and Rust for zero-copy data transfer |
| Multi-Item Snapshots | Bundle multiple data streams (DB dumps, config tarballs, etc.) into a single snapshot |
| Verification | Three-level integrity checks: quick, standard, full |
| Pruning | Retention policies (keep last N, days, weeks, months) with garbage collection |
| Repair | Automatic index rebuild, stale lock removal, and parity-based pack recovery |
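Of the features above, the parity scheme is the easiest to see concretely: with a single parity share, Reed-Solomon coding reduces to plain XOR parity. The sketch below is illustrative only (the pack contents and group size are made up, and the real engine implements this in Rust over 8 MB pack files), but it shows why losing any one pack in a group is recoverable.

```typescript
// Single-parity erasure coding: the RS(n,1) case degenerates to XOR.
// parity = p1 ^ p2 ^ ... ^ pn, so any one lost block is the XOR of
// the surviving blocks and the parity block.
function xorBlocks(blocks: Buffer[]): Buffer {
  const out = Buffer.alloc(blocks[0].length);
  for (const b of blocks) {
    for (let i = 0; i < out.length; i++) out[i] ^= b[i];
  }
  return out;
}

// Three equal-size "packs" stand in for a parity group of 20.
const packs = [Buffer.from('pack-one'), Buffer.from('pack-two'), Buffer.from('pack-3!!')];
const parity = xorBlocks(packs);

// Simulate losing the second pack and reconstructing it from the rest.
const recovered = xorBlocks([packs[0], packs[2], parity]);
console.log(recovered.toString()); // 'pack-two'
```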

📖 Usage

Initialize a New Repository

import { ContainerArchive } from '@serve.zone/containerarchive';

// Unencrypted repository
const repo = await ContainerArchive.init('/path/to/backup-repo');

// Encrypted repository (AES-256-GCM + Argon2id)
const encryptedRepo = await ContainerArchive.init('/path/to/secure-repo', {
  passphrase: 'my-strong-passphrase',
});

Open an Existing Repository

const repo = await ContainerArchive.open('/path/to/backup-repo');

// With passphrase for encrypted repos
const encryptedRepo = await ContainerArchive.open('/path/to/secure-repo', {
  passphrase: 'my-strong-passphrase',
});

Ingest Data (Single Stream)

import * as fs from 'node:fs';

const inputStream = fs.createReadStream('/path/to/database-dump.sql');
const snapshot = await repo.ingest(inputStream, {
  tags: { service: 'postgres', environment: 'production' },
  items: [{ name: 'database.sql', type: 'database-dump' }],
});

console.log(`Snapshot ${snapshot.id} created`);
console.log(`Original: ${snapshot.originalSize} bytes`);
console.log(`Stored: ${snapshot.storedSize} bytes`);
console.log(`New chunks: ${snapshot.newChunks}, Reused: ${snapshot.reusedChunks}`);

Multi-Item Ingest

Bundle multiple data streams into one snapshot:

import * as fs from 'node:fs';

const dbDump = fs.createReadStream('/tmp/pg_dump.sql');
const configTar = fs.createReadStream('/tmp/config-volumes.tar');

const snapshot = await repo.ingestMulti([
  { stream: dbDump, name: 'database.sql', type: 'database-dump' },
  { stream: configTar, name: 'config.tar', type: 'volume-tar' },
], {
  tags: { service: 'myapp', type: 'full-backup' },
});

console.log(`Items stored: ${snapshot.items.map(i => i.name).join(', ')}`);

Restore Data

// Restore an entire snapshot
const restoreStream = await repo.restore(snapshot.id);
const writeStream = fs.createWriteStream('/tmp/restored-dump.sql');
restoreStream.pipe(writeStream);

// Restore a specific item from a multi-item snapshot
const configStream = await repo.restore(snapshot.id, { item: 'config.tar' });
configStream.pipe(fs.createWriteStream('/tmp/restored-config.tar'));

List & Filter Snapshots

// List all snapshots
const allSnapshots = await repo.listSnapshots();

// Filter by tags
const prodSnapshots = await repo.listSnapshots({
  tags: { environment: 'production' },
});

// Filter by date range
const recentSnapshots = await repo.listSnapshots({
  after: '2026-03-01T00:00:00Z',
  before: '2026-03-22T00:00:00Z',
});

// Get a specific snapshot
const snap = await repo.getSnapshot('snapshot-id-here');

Verify Repository Integrity

// Quick check — validates index consistency
const quick = await repo.verify({ level: 'quick' });

// Standard — reads pack headers and validates checksums
const standard = await repo.verify({ level: 'standard' });

// Full — decompresses and re-hashes every chunk
const full = await repo.verify({ level: 'full' });

console.log(`OK: ${full.ok}`);
console.log(`Packs checked: ${full.stats.packsChecked}`);
console.log(`Chunks checked: ${full.stats.chunksChecked}`);

Prune Old Snapshots

// Dry run first
const preview = await repo.prune({ keepLast: 5, keepDays: 30 }, true);
console.log(`Would remove ${preview.removedSnapshots} snapshots, free ${preview.freedBytes} bytes`);

// Execute for real
const result = await repo.prune({
  keepLast: 5,
  keepDays: 30,
  keepWeeks: 12,
  keepMonths: 6,
});
console.log(`Removed ${result.removedSnapshots} snapshots, ${result.removedPacks} packs`);
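As a rough illustration of how a keepLast rule picks prune candidates, here is a minimal sketch. This is not the package's internals (the actual selection runs in the Rust engine and combines all four retention rules); the interface and function names below are made up for the example.

```typescript
// Hypothetical shape for this sketch; only id and createdAt matter here.
interface Snap {
  id: string;
  createdAt: string;
}

// keepLast semantics: the newest N snapshots survive, everything
// older becomes a prune candidate.
function selectPruneCandidates(snaps: Snap[], keepLast: number): Snap[] {
  const sorted = [...snaps].sort(
    (a, b) => Date.parse(b.createdAt) - Date.parse(a.createdAt),
  );
  return sorted.slice(keepLast);
}

const snaps: Snap[] = [
  { id: 'a', createdAt: '2026-03-01T00:00:00Z' },
  { id: 'b', createdAt: '2026-03-10T00:00:00Z' },
  { id: 'c', createdAt: '2026-03-20T00:00:00Z' },
];
console.log(selectPruneCandidates(snaps, 2).map((s) => s.id)); // [ 'a' ]
```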

Repair & Maintenance

// Repair — rebuild index, remove stale locks, attempt parity recovery
const repairResult = await repo.repair();
console.log(`Index rebuilt: ${repairResult.indexRebuilt}`);
console.log(`Packs repaired via parity: ${repairResult.packsRepaired}`);

// Rebuild global index from pack .idx files
await repo.reindex();

// Remove stale locks
await repo.unlock();
await repo.unlock({ force: true }); // force-remove all locks

Event Subscriptions

Monitor ingest progress and errors with RxJS-based event streams:

// Track ingest progress
const sub = repo.on('ingest:progress', (data) => {
  console.log(`${data.operation}: ${data.percentage}% — ${data.message}`);
});

// Track completed ingests
repo.on('ingest:complete', (data) => {
  console.log(`Snapshot ${data.snapshotId} complete — ${data.newChunks} new chunks`);
});

// Track verification errors
repo.on('verify:error', (error) => {
  console.error(`Verification error in ${error.pack || error.chunk}: ${error.error}`);
});

// Unsubscribe when done
sub.unsubscribe();

Close the Repository

await repo.close();

🗂️ Repository Structure

When initialized, a repository has the following on-disk layout:

backup-repo/
├── config.json          # Repository config (chunking, compression, encryption, parity)
├── packs/
│   ├── data/            # Binary pack files (.pack) and indexes (.idx)
│   └── parity/          # Reed-Solomon parity packs
├── snapshots/           # JSON snapshot manifests
├── index/               # Global chunk index (hash → pack location)
├── keys/                # Encrypted key files (for passphrase-protected repos)
└── locks/               # Advisory locks for concurrent access

🔧 How It Works

  1. Chunking — Incoming data is split into variable-size chunks using FastCDC with a gear-based rolling hash. Chunk sizes range from 64 KB to 1 MB (avg 256 KB). Content-defined boundaries mean that insertions or edits only affect nearby chunks, maximizing dedup across versions.

  2. Hashing — Each chunk is hashed with SHA-256 for content addressing. If a chunk's hash already exists in the global index, it's deduplicated — only a reference is stored.

  3. Compression — New chunks are compressed with gzip (default) or zstd before storage. Per-chunk compression flags are stored in the index.

  4. Encryption — If a passphrase is set, a random 256-bit master key is generated, wrapped with an Argon2id-derived key, and stored in a key file. Every chunk is encrypted with AES-256-GCM using a unique nonce.

  5. Packing — Compressed (and optionally encrypted) chunks are appended into binary pack files (~8 MB target). Each pack has an associated .idx file with chunk offsets, sizes, and flags for O(1) lookup.

  6. Parity — After every group of 20 data packs, a Reed-Solomon RS(20,1) parity pack is generated. If any single pack in the group is lost or corrupted, it can be fully reconstructed.

  7. Snapshots — A JSON manifest records the chunk list, tags, sizes, and item metadata. Snapshots are immutable — pruning removes snapshots but never alters existing pack data in-place.

  8. Restore — The snapshot manifest is read, chunks are looked up in the global index, fetched from pack files, decompressed, decrypted if needed, and streamed back in order via a Unix socket.
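Steps 1 and 2 can be sketched in miniature. In this illustrative example, fixed-size chunks stand in for FastCDC and an in-memory Map stands in for the on-disk global index; it is not the package's internals, only the content-addressing idea.

```typescript
import { createHash } from 'node:crypto';

const CHUNK_SIZE = 4; // tiny for demonstration; real chunks are 64 KB to 1 MB

// Fixed-size splitting stands in for FastCDC's content-defined boundaries.
function chunkify(data: Buffer, size: number): Buffer[] {
  const chunks: Buffer[] = [];
  for (let i = 0; i < data.length; i += size) {
    chunks.push(data.subarray(i, i + size));
  }
  return chunks;
}

// Content-addressed store: identical chunks are stored exactly once.
const index = new Map<string, Buffer>();

function ingest(data: Buffer): { newChunks: number; reusedChunks: number } {
  let newChunks = 0;
  let reusedChunks = 0;
  for (const chunk of chunkify(data, CHUNK_SIZE)) {
    const hash = createHash('sha256').update(chunk).digest('hex');
    if (index.has(hash)) {
      reusedChunks++; // dedup hit: only a reference would be stored
    } else {
      index.set(hash, chunk); // new content: store the chunk itself
      newChunks++;
    }
  }
  return { newChunks, reusedChunks };
}

const first = ingest(Buffer.from('AAAABBBBCCCC'));
const second = ingest(Buffer.from('AAAABBBBDDDD')); // shares two chunks
console.log(first);  // { newChunks: 3, reusedChunks: 0 }
console.log(second); // { newChunks: 1, reusedChunks: 2 }
```

Because chunks are addressed by hash, the second ingest stores only the one chunk that changed; this is the dedup counted in snapshot.newChunks and snapshot.reusedChunks.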

📋 API Reference

ContainerArchive

| Method | Description |
|---|---|
| static init(path, options?) | Create a new repository. Returns Promise&lt;ContainerArchive&gt; |
| static open(path, options?) | Open an existing repository. Returns Promise&lt;ContainerArchive&gt; |
| ingest(stream, options?) | Ingest a single data stream. Returns Promise&lt;ISnapshot&gt; |
| ingestMulti(items, options?) | Ingest multiple streams as one snapshot. Returns Promise&lt;ISnapshot&gt; |
| restore(snapshotId, options?) | Restore a snapshot. Returns Promise&lt;ReadableStream&gt; |
| listSnapshots(filter?) | List snapshots with optional tag/date filtering. Returns Promise&lt;ISnapshot[]&gt; |
| getSnapshot(id) | Get a specific snapshot. Returns Promise&lt;ISnapshot&gt; |
| verify(options?) | Verify repository integrity (quick/standard/full). Returns Promise&lt;IVerifyResult&gt; |
| prune(retention, dryRun?) | Remove old snapshots and garbage-collect packs. Returns Promise&lt;IPruneResult&gt; |
| repair() | Rebuild index, remove stale locks, attempt parity recovery. Returns Promise&lt;IRepairResult&gt; |
| reindex() | Rebuild the global index from pack .idx files. Returns Promise&lt;void&gt; |
| unlock(options?) | Remove advisory locks. Returns Promise&lt;void&gt; |
| on(event, handler) | Subscribe to events. Returns Subscription |
| close() | Close the repository and terminate the Rust process. Returns Promise&lt;void&gt; |

Key Interfaces

interface ISnapshot {
  id: string;
  version: number;
  createdAt: string;
  tags: Record<string, string>;
  originalSize: number;
  storedSize: number;
  chunkCount: number;
  newChunks: number;
  reusedChunks: number;
  items: ISnapshotItem[];
}

interface IRetentionPolicy {
  keepLast?: number;
  keepDays?: number;
  keepWeeks?: number;
  keepMonths?: number;
}

interface IVerifyResult {
  ok: boolean;
  errors: IVerifyError[];
  stats: {
    packsChecked: number;
    chunksChecked: number;
    snapshotsChecked: number;
  };
}

License and Legal Information

This repository contains open-source code licensed under the MIT License. A copy of the license can be found in the LICENSE file.

Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.

Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and used only in a descriptive manner, e.g. for an implementation of an API or similar.

Company Information

Task Venture Capital GmbH, registered at the District Court of Bremen, HRB 35230 HB, Germany.

For any legal inquiries or further information, please contact us via email at [email protected].

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.