
@substrate-system/baowser

v0.0.2

baowser


Streaming hash-based verification in the browser.

This is based on the bao library, which is where the name comes from. See also bab.

This is incremental verification for the browser. You can start a streaming download and verify the data is correct before the entire file has arrived. You only need to know the "root hash" ahead of time; each piece of the download can then be verified as it arrives.

See a live demo

Install

npm i -S @substrate-system/baowser

Example

Encoding (Server-side)

Option 1: Use the write helper

Encode your data and save it to a file named by its hash.

import { write } from '@substrate-system/baowser/fs'

// Write data to a content-addressed file
const { rootHash, filePath } = await write(
  './data',  // Directory
  Buffer.from('hello world'),  // Data (Buffer, Uint8Array, or Node.js Readable)
  { chunkSize: 1024 }  // default = 1024
)

console.log(`File: ${filePath}`)  // ./data/abc123...
console.log(`Hash: ${rootHash}`)  // abc123...

// Publish rootHash via trusted channel (IPFS CID, database, etc.)
// Serve the file at filePath to clients

Option 2: Manual encoding

import {
  createEncoder,
  getRootLabel
} from '@substrate-system/baowser'

// Your file data
const fileData = new Uint8Array([/* ... */])
const chunkSize = 1024  // 1KB chunks, default

// Get the root hash - this is the CID
const rootHash = await getRootLabel(fileData, chunkSize)

// Create the encoded stream with interleaved metadata
const encodedStream = createEncoder(chunkSize, fileData)

// Publish the root hash via a trusted channel (IPFS CID, database, etc.)
// Stream the encodedStream to clients

Verification (Client-side)

The root hash is the only trusted input. This 32-byte hash is sufficient for incremental verification. At each chunk in the stream, you can prove that the data corresponds to the root hash you requested.

import { createVerifier, verify } from '@substrate-system/baowser'

// The root hash you're requesting (received via trusted channel)
const rootHash = 'abc123...'
const chunkSize = 1024

const response = await fetch('/data.abc')

// Option 1 - TransformStream API
// createVerifier returns a TransformStream that you pipe through
const verifier = createVerifier(rootHash, chunkSize, {
  onChunkVerified: (i, total) => console.log(`Verified ${i}/${total}`)
})

const verifiedStream = response.body.pipeThrough(verifier)

// Read from verifiedStream...
const reader = verifiedStream.getReader()
while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // Use verified chunk...
}

// Option 2 - Promise-based API
const response2 = await fetch('/data.bab')
const verifiedData = await verify(response2.body, rootHash, chunkSize)
// verifiedData is the complete Uint8Array, fully verified
// it will throw if verification fails at any point

Modules

This package exposes both ESM and CommonJS builds via the package.json exports field.
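An exports field that maps both entry points might look something like the following. This is an illustrative sketch only; the file paths (`./dist/...`) are assumptions, not the package's actual layout.

```json
{
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    },
    "./fs": {
      "import": "./dist/fs.js",
      "require": "./dist/fs.cjs"
    }
  }
}
```

The `.` entry backs the bare `@substrate-system/baowser` import, and `./fs` backs the Node-only `@substrate-system/baowser/fs` subpath.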

ESM

import {
  createEncoder,
  getRootLabel,
  createVerifier,
  verify
} from '@substrate-system/baowser'

Node only

import { write } from '@substrate-system/baowser/fs'

CommonJS

const {
  createEncoder,
  getRootLabel,
  createVerifier,
  verify
} = require('@substrate-system/baowser')
const { write } = require('@substrate-system/baowser/fs')

How It Works

The encoding is a Merkle tree where hash labels are interleaved with data chunks in depth-first order. The root hash (32 bytes) is your only trusted input.

The stream contains all verification metadata (child node hashes), and that metadata is itself verified against the root hash during decoding. This enables incremental verification: at each step, you check that subtrees match their expected labels, and those labels in turn chain up to the root hash.

If any hash does not match during verification, the stream throws an error and aborts immediately, before the download is complete.
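The chained-verification idea can be illustrated with a plain Merkle root over fixed-size chunks. This is a sketch only: it uses SHA-256 from `node:crypto` and a simple pairwise tree, whereas the library's actual hash function and layout (labels interleaved with data in depth-first order) differ. The helper names `chunks`, `sha256`, and `merkleRoot` are illustrative, not part of this package's API.

```javascript
import { createHash } from 'node:crypto'

// Split data into fixed-size chunks (the last chunk may be shorter).
function chunks (data, size) {
  const out = []
  for (let i = 0; i < data.length; i += size) {
    out.push(data.subarray(i, i + size))
  }
  return out
}

// Hash one or more byte arrays together with SHA-256.
// (Illustrative; the real library uses a different hash.)
function sha256 (...parts) {
  const h = createHash('sha256')
  for (const p of parts) h.update(p)
  return h.digest()
}

// Compute a Merkle root: leaves are chunk hashes, and each parent
// hashes the concatenation of its two children. An odd node at the
// end of a level is carried up unchanged.
function merkleRoot (data, chunkSize) {
  let level = chunks(data, chunkSize).map(c => sha256(c))
  while (level.length > 1) {
    const next = []
    for (let i = 0; i < level.length; i += 2) {
      next.push(i + 1 < level.length
        ? sha256(level[i], level[i + 1])
        : level[i])
    }
    level = next
  }
  return level[0].toString('hex')
}

// Flipping any byte changes that leaf's hash, which changes every
// parent on the path up, and therefore the root.
const data = new Uint8Array(4096).fill(7)
console.log(merkleRoot(data, 1024))
```

This is why a single 32-byte root is enough: a mismatch anywhere in the data propagates up to the root, so a verifier holding only the root can reject a tampered chunk as soon as its subtree label fails to match.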

Node API

write(dir, data, options)

Encode data and write it to a file in the given directory, using the root hash as the filename.

async function write (
    dir:string,
    data:Buffer|Uint8Array|Readable,
    { chunkSize = 1024 }:{ chunkSize?:number } = {}
):Promise<{ rootHash:string; filePath:string }>

write Example

import { write } from '@substrate-system/baowser/fs'
import { createReadStream } from 'node:fs'

// Write with Buffer
const result1 = await write('./data', Buffer.from('hello'))

// Write with Uint8Array
const data = new Uint8Array([1, 2, 3, 4, 5])
const result2 = await write('./data', data, { chunkSize: 512 })

// Write with Node.js Readable stream
const stream = createReadStream('./input.txt')
const result3 = await write('./data', stream)

console.log(result3.rootHash)  // "abc123..."
console.log(result3.filePath)  // "./data/abc123..."

Error Handling

When a hash mismatch is detected:

  1. An error is thrown from the stream
  2. The stream aborts, and no more data is downloaded

try {
  const verifiedData = await verify(stream, rootHash, chunkSize)
  // Use verified data...
} catch (error) {
  console.error('Stream verification failed:', error.message)
  // Handle error - stream has been aborted
}

See Also

Prior Art

Some Important Dependencies