@guilledk/arrowbatch-nodejs

v1.0.0-rc8

Arrow Batch Storage protocol

arrowbatch-nodejs

yarn add @guilledk/arrowbatch-nodejs

Protocol

The ArrowBatch v1 protocol is a binary format for streaming new rows to a file in a way that lets files grow arbitrarily large while retaining fast random access. It achieves this by sequentially appending Apache Arrow random access tables of a fixed batch size, each preceded by a small header, so that together they form one larger logical table.

File Structure

The ArrowBatch v1 file structure consists of a global header followed by a sequence of batches, each containing a batch header and the Arrow random access file bytes.

+-----------------+
|  Global Header  |
+-----------------+
|  Batch 0 Header |
+-----------------+
|    Arrow Table  |
|     Batch 0     |
+-----------------+
|  Batch 1 Header |
+-----------------+
|    Arrow Table  |
|     Batch 1     |
+-----------------+
|      ...        |
+-----------------+
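
As a rough writer-side sketch of this layout (assuming Node's fs API and little-endian encoding for the size field, neither of which is stated here; Arrow serialization of the batch itself is out of scope), appending one batch might look like:

import * as fs from 'node:fs';

const GLOBAL_HEADER = Buffer.from('ARROW-BATCH1', 'ascii');
const BATCH_CONSTANT = Buffer.from('ARROW-BATCH-TABLE', 'ascii');

// Append one Arrow random access table (already serialized to bytes, and
// already compressed if compression != 0) to an ArrowBatch v1 file.
function appendBatch(path: string, arrowBytes: Buffer, compression: 0 | 1): void {
    if (!fs.existsSync(path))
        fs.writeFileSync(path, GLOBAL_HEADER);       // new file: write global header first

    const header = Buffer.alloc(BATCH_CONSTANT.length + 8 + 1);
    BATCH_CONSTANT.copy(header, 0);
    header.writeBigUInt64LE(BigInt(arrowBytes.length), BATCH_CONSTANT.length);
    header.writeUInt8(compression, BATCH_CONSTANT.length + 8);

    fs.appendFileSync(path, header);                 // batch header
    fs.appendFileSync(path, arrowBytes);             // Arrow random access file bytes
}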

Global Header

The global header contains the following information:

  • Version constant: ASCII string "ARROW-BATCH1"
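
A minimal sketch for checking this constant when opening a file (using Node's fs directly, not the package's actual reader API):

import * as fs from 'node:fs';

const GLOBAL_HEADER = Buffer.from('ARROW-BATCH1', 'ascii');

// Read the first bytes of the file and confirm it is an ArrowBatch v1 file.
function checkGlobalHeader(fd: number): void {
    const buf = Buffer.alloc(GLOBAL_HEADER.length);
    fs.readSync(fd, buf, 0, buf.length, 0);
    if (!buf.equals(GLOBAL_HEADER))
        throw new Error('Not an ArrowBatch v1 file');
}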

Batch Header

Each batch header contains the following fields (a decoding sketch follows the list):

  • Batch header constant: ASCII string "ARROW-BATCH-TABLE"
  • Batch byte size: 64-bit unsigned integer representing the size of the Arrow table batch in bytes
  • Compression: 8-bit unsigned integer indicating the compression method used (0 for uncompressed, 1 for zstd)
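
A hedged sketch of decoding this header, with the fields in the order listed above; little-endian byte order is an assumption, since it is not stated here:

const BATCH_CONSTANT = Buffer.from('ARROW-BATCH-TABLE', 'ascii');
const BATCH_HEADER_SIZE = BATCH_CONSTANT.length + 8 + 1;

interface BatchHeader {
    batchByteSize: bigint;   // size of the Arrow table batch, in bytes
    compression: number;     // 0 = uncompressed, 1 = zstd
}

function decodeBatchHeader(buf: Buffer): BatchHeader {
    if (!buf.subarray(0, BATCH_CONSTANT.length).equals(BATCH_CONSTANT))
        throw new Error('Invalid ArrowBatch batch header');
    return {
        batchByteSize: buf.readBigUInt64LE(BATCH_CONSTANT.length),
        compression: buf.readUInt8(BATCH_CONSTANT.length + 8),
    };
}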

Streaming

The ArrowBatch v1 format can be streamed easily: read each batch header as it arrives, then read the full Arrow random access batch whose size the header declares.
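
A hedged sketch of a streaming consumer, reusing decodeBatchHeader and BATCH_HEADER_SIZE from the Batch Header sketch above (this is not the package's actual API):

// Feed raw chunks (e.g. from a network stream) into push(); it yields every
// complete Arrow batch payload once enough bytes have accumulated.
class BatchStreamReader {
    private pending: Buffer = Buffer.alloc(0);

    *push(chunk: Buffer): Generator<{ compression: number; payload: Buffer }> {
        this.pending = Buffer.concat([this.pending, chunk]);
        while (this.pending.length >= BATCH_HEADER_SIZE) {
            const header = decodeBatchHeader(this.pending.subarray(0, BATCH_HEADER_SIZE));
            const total = BATCH_HEADER_SIZE + Number(header.batchByteSize);
            if (this.pending.length < total)
                break;  // wait for the rest of this batch to arrive
            yield {
                compression: header.compression,
                payload: this.pending.subarray(BATCH_HEADER_SIZE, total),
            };
            this.pending = this.pending.subarray(total);
        }
    }
}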

Random Access

To perform random access on a disk ArrowBatch v1 file (a code sketch follows these steps):

  1. Read the global header.
  2. Before reading any actual Arrow table data, read every batch header in order, using the batch byte size recorded in each header to seek directly to the next header.
  3. Once the batch that contains the desired row is located, read only that batch and run the query against that small batch.
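
A sketch of this index-then-seek approach, again reusing the Batch Header helpers above; mapping a row to a batch index through a fixed rows-per-batch count is an assumption based on the fixed batch size mentioned earlier:

import * as fs from 'node:fs';

const GLOBAL_HEADER_SIZE = 'ARROW-BATCH1'.length;

// Return the (possibly compressed) Arrow bytes of the batch holding `row`.
function readBatchContaining(fd: number, row: number, rowsPerBatch: number): Buffer {
    const targetBatch = Math.floor(row / rowsPerBatch);
    let offset = GLOBAL_HEADER_SIZE;                 // skip the global header
    const headerBuf = Buffer.alloc(BATCH_HEADER_SIZE);

    for (let batch = 0; ; batch++) {
        const read = fs.readSync(fd, headerBuf, 0, BATCH_HEADER_SIZE, offset);
        if (read < BATCH_HEADER_SIZE)
            throw new Error('Row is beyond the end of the file');
        const header = decodeBatchHeader(headerBuf);
        const payloadStart = offset + BATCH_HEADER_SIZE;
        if (batch === targetBatch) {
            const payload = Buffer.alloc(Number(header.batchByteSize));
            fs.readSync(fd, payload, 0, payload.length, payloadStart);
            return payload;                          // decompress and open with Arrow next
        }
        offset = payloadStart + Number(header.batchByteSize);   // seek past this batch
    }
}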

Compression

The ArrowBatch v1 protocol supports compression of the Arrow table batches. The supported compression methods are:

  • 0: Uncompressed
  • 1: zstd (Zstandard compression)

The compression method is specified in the batch header.
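
A minimal dispatch sketch; zstdDecompress stands in for whichever Zstandard binding a reader uses and is not part of this package's documented API:

// Hypothetical zstd binding; any Zstandard decompressor would do here.
declare function zstdDecompress(data: Buffer): Buffer;

function decodePayload(compression: number, payload: Buffer): Buffer {
    switch (compression) {
        case 0: return payload;                 // uncompressed
        case 1: return zstdDecompress(payload); // zstd (Zstandard)
        default: throw new Error(`Unknown compression method: ${compression}`);
    }
}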

Data Model

The ArrowBatch v1 protocol defines a data model using Arrow table mappings. Each table mapping specifies the name, data type, and additional properties of the fields in the table.

The supported data types include:

  • Unsigned integers (u8, u16, u32, u64, uintvar)
  • Signed integers (i64)
  • Bytes (string, bytes, base64)
  • Digests (checksum160, checksum256)

The table mappings also allow specifying optional fields, fixed-length fields, arrays, and references to other tables.
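
A purely illustrative mapping for a hypothetical blocks table; the field names and option keys below are assumptions meant to show the idea, not this package's actual mapping syntax:

const blockTableMapping = [
    { name: 'block_num',  type: 'u64' },
    { name: 'timestamp',  type: 'u64' },
    { name: 'block_hash', type: 'checksum256' },
    { name: 'extra',      type: 'bytes', optional: true },
    { name: 'tx_ids',     type: 'checksum256', array: true },
];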

Conclusion

The ArrowBatch v1 protocol provides an efficient way to store and access large amounts of structured data while supporting streaming and random access. By leveraging the Apache Arrow format and incorporating compression, it offers flexibility and performance for various data processing scenarios.