@nolawnchairs/file-chunker

A memory-efficient TypeScript library for splitting large files into manageable chunks and reassembling them with built-in checksum validation.

This is an internal library and isn't really intended for public use.

Features

  • 🚀 Streaming: Uses Node.js streams for minimal memory footprint
  • 🔒 Checksum Validation: SHA256 checksums for individual chunks and the complete file
  • 📦 TypeScript: Full TypeScript support
  • Async Iterators: Modern async/await patterns with for/await loops
  • Error Handling: Comprehensive error types for validation failures

Installation

npm install @nolawnchairs/file-chunker

Usage

Chunking Files

Split a large file into chunks with automatic checksum calculation:

import { FileChunker } from '@nolawnchairs/file-chunker'
import { writeFile } from 'node:fs/promises'

const chunker = new FileChunker('large-file.bin', {
  chunkSize: 1024 * 1024, // 1MB chunks
})

for await (const result of chunker.chunk()) {
  if (!result.done) {
    // Intermediate chunk
    console.log(`Chunk ${result.index}: ${result.checksum}`)
    
    // Save chunk to disk
    await writeFile(`chunk-${result.index}.bin`, result.chunk)
  } else {
    console.log(`Final checksum: ${result.finalChecksum}`)
  }
}
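
The per-chunk checksums and the final whole-file checksum yielded above are exactly what FileJoiner needs later, so it can be convenient to persist them next to the chunk files. A minimal sketch of one way to do that, using the same FileChunker API shown above; the manifest.json file and its shape are a convention of this example, not part of the library:

import { FileChunker } from '@nolawnchairs/file-chunker'
import { writeFile } from 'node:fs/promises'

// Write each chunk to disk and record its checksum, then save a manifest
// so the joining side knows which checksums to expect.
const manifest: { finalChecksum?: string, chunks: Array<{ index: number, checksum: string }> } = { chunks: [] }
const chunker = new FileChunker('large-file.bin', { chunkSize: 1024 * 1024 })

for await (const result of chunker.chunk()) {
  if (!result.done) {
    await writeFile(`chunk-${result.index}.bin`, result.chunk)
    manifest.chunks.push({ index: result.index, checksum: result.checksum })
  } else {
    manifest.finalChecksum = result.finalChecksum
  }
}

await writeFile('manifest.json', JSON.stringify(manifest, null, 2))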

Joining Files

Reassemble chunks back into the original file with validation:

import { FileJoiner, InvalidChecksumError, InvalidFinalChecksumError } from '@nolawnchairs/file-chunker'
import { readFile, writeFile } from 'node:fs/promises'
import { createHash } from 'node:crypto'

// Placeholder inputs for this example: the ordered chunk files and the
// expected SHA256 of the original file (recorded when the file was chunked)
const chunkFiles = ['chunk-0.bin', 'chunk-1.bin', 'chunk-2.bin']
const expectedFileChecksum = '...' // SHA256 hex digest of the original file

// Load chunks from disk
const chunks = await Promise.all(
  chunkFiles.map(async (file, index) => {
    const buffer = await readFile(file)
    const checksum = createHash('sha256').update(buffer).digest('hex')
    return {
      buffer,
      index,
      checksum,
    }
  })
)

const joiner = new FileJoiner({
  finalChecksum: expectedFileChecksum,
  chunks,
})

try {
  for await (const result of joiner.join()) {
    if (!result.done) {
      // Write chunks to disk as they're validated
      await writeFile('reconstructed-file.bin', result.output, { flag: 'a' })
      console.log(`Validated chunk checksum: ${result.checksum}`)
    } else {
      // Final validation complete
      console.log(`File checksum validated: ${result.finalChecksum}`)
      console.log(`Processed ${result.sourceChunks.length} chunks`)
    }
  }
} catch (error) {
  if (error instanceof InvalidChecksumError) {
    console.error('Chunk checksum validation failed:', error.message)
  } else if (error instanceof InvalidFinalChecksumError) {
    console.error('File checksum validation failed:', error.message)
  }
}

API Reference

FileChunker

Constructor

new FileChunker(filePath: string, config: FileChunkerConfig)

  • filePath: Path to the file to chunk
  • config.chunkSize: Size of each chunk in bytes

Method

async *chunk(): AsyncGenerator<ChunkResult, void, unknown>

Returns an async generator that yields a ChunkResult for each chunk of the file, followed by a final result (done: true) that carries the whole-file checksum.

Types

type FileChunkerConfig = {
  chunkSize: number
}

type ChunkResult =
  | {
      done: false
      index: number
      chunk: Buffer
      checksum: string
    }
  | {
      done: true
      chunk: Buffer
      finalChecksum: string
    }
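
ChunkResult is a discriminated union on done, so TypeScript narrows it automatically inside an if check. A small sketch; whether the package exports the ChunkResult type is not stated in this README, so the import below is an assumption:

import type { ChunkResult } from '@nolawnchairs/file-chunker'

// Narrowing on the `done` discriminant selects the right branch of the union
function describeResult(result: ChunkResult): string {
  if (result.done) {
    // done: true exposes `chunk` and `finalChecksum`
    return `final checksum ${result.finalChecksum}`
  }
  // done: false exposes `index`, `chunk` and `checksum`
  return `chunk ${result.index} (${result.chunk.length} bytes): ${result.checksum}`
}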

FileJoiner

Constructor

new FileJoiner(config: FileJoinerConfig)

  • config.finalChecksum: Expected SHA256 checksum of the complete file
  • config.chunks: Array of file chunks to join

Method

async *join(): AsyncGenerator<JoinResult, void, unknown>

Returns an async generator that yields a JoinResult for each validated chunk and, after all chunks have been processed, a final result confirming the checksum of the reassembled file.
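
Because join() yields each validated chunk as it is produced, the output can also be piped to a single write stream rather than re-opening the destination file for every chunk, as the writeFile(..., { flag: 'a' }) calls in the Usage section do. A minimal sketch, assuming a chunks array and expectedFileChecksum prepared as shown above:

import { createWriteStream } from 'node:fs'
import { once } from 'node:events'
import { FileJoiner } from '@nolawnchairs/file-chunker'

// Assumed to be prepared as in the Usage section above (placeholders here)
declare const chunks: Array<{ buffer: Buffer, index: number, checksum: string }>
declare const expectedFileChecksum: string

const joiner = new FileJoiner({ finalChecksum: expectedFileChecksum, chunks })
const out = createWriteStream('reconstructed-file.bin')

for await (const result of joiner.join()) {
  if (!result.done) {
    // Respect backpressure: wait for 'drain' when the stream's buffer is full
    if (!out.write(result.output)) {
      await once(out, 'drain')
    }
  } else {
    console.log(`File checksum validated: ${result.finalChecksum}`)
  }
}

out.end()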

Types

type FileChunk = {
  buffer: Buffer
  index: number
  checksum: string
}

type FileJoinerConfig = {
  finalChecksum: string
  chunks: FileChunk[]
}

type JoinResult =
  | {
      done: false
      output: Buffer
      checksum: string
    }
  | {
      done: true
      finalChecksum: string
      sourceChunks: Array<{
        index: number
        checksum: string
      }>
    }

Error Types

class InvalidChunkError extends Error
class InvalidChecksumError extends Error
class InvalidFinalChecksumError extends Error

Memory Efficiency

This library is designed for handling very large files with minimal memory usage:

  • Streaming: Files are read using Node.js streams, never loaded entirely into memory
  • Incremental Processing: Chunks are processed and yielded immediately
  • Single Buffer: Only one buffer (max chunkSize) is kept in memory at a time
  • Incremental Hashing: File checksums are calculated incrementally as the data streams (see the sketch below)
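
For context on the incremental hashing point, Node's crypto.Hash accepts data piecewise, so a whole-file SHA256 can be computed while streaming without ever buffering the file. The sketch below only illustrates that general technique with node:fs and node:crypto; it is not this library's internal implementation:

import { createReadStream } from 'node:fs'
import { createHash } from 'node:crypto'

// Streams the file through a SHA256 hash, holding at most one
// highWaterMark-sized buffer in memory at a time.
async function sha256File(path: string, chunkSize = 1024 * 1024): Promise<string> {
  const hash = createHash('sha256')
  for await (const chunk of createReadStream(path, { highWaterMark: chunkSize })) {
    hash.update(chunk)
  }
  return hash.digest('hex')
}

console.log(await sha256File('large-file.bin'))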

License

ISC