

@veksa/big-json

An optimized fork of https://github.com/DonutEspresso/big-json


A stream based implementation of JSON.parse and JSON.stringify for big data

There exist many stream-based implementations of JSON parsing or stringifying for large data sets. These implementations typically target time-series data, newline-delimited data, or other array-like data, e.g., logging records or other continuously flowing data.

This module hopes to fill a gap in the ecosystem: parsing a single JSON object that is itself really big. With large in-memory objects, it is possible to run up against the V8 string length limit, which is currently 512MB. Thus, if your large object has enough keys or values, calling JSON.stringify can exceed the string length limit.

Similarly, when retrieving stored JSON from disk or over the network, if the JSON stringified representation of the object exceeds the string length limit, the process will throw when attempting to convert the Buffer into a string.

The only way to work with such large objects is to use a streaming implementation of both JSON.parse and JSON.stringify. This module does just that by normalizing the APIs of different modules that have been previously published, combining both parse and stringify functions into a single module. These underlying modules are subject to change at any time.
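As a minimal sketch of the stringify side of that idea (illustrative only, not the library's actual code): emitting a big object's JSON one top-level key at a time means the full string is never materialized unless the consumer chooses to join the pieces.

```javascript
// Illustrative sketch of chunked stringification (not the library's code):
// emit an object's JSON one top-level key at a time, so no single huge
// string is ever built unless the consumer joins the pieces itself.
function* stringifyChunks(obj) {
  yield '{';
  const keys = Object.keys(obj);
  for (let i = 0; i < keys.length; i++) {
    if (i > 0) yield ',';
    yield JSON.stringify(keys[i]);      // quoted, escaped key
    yield ':';
    yield JSON.stringify(obj[keys[i]]); // this key's value only
  }
  yield '}';
}

const pieces = [...stringifyChunks({ a: 1, b: [2, 3] })];
console.log(pieces.join('')); // {"a":1,"b":[2,3]}
```

A real implementation such as json-stream-stringify also recurses into nested values and respects stream backpressure; the point here is only that output can be produced piecewise.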

The major caveat is that the reconstructed POJO must be able to fit in memory. If the reconstructed POJO cannot be stored in memory, then it may be time to reconsider the way these large objects are being transported and processed.

This module currently uses JSONStream for parsing, and json-stream-stringify for stringification.

Getting Started

Install the module with: npm install @veksa/big-json

Usage

To parse a big JSON coming from an external source:

import fs from 'fs';
import {createParseStream} from '@veksa/big-json';

const readStream = fs.createReadStream('big.json');
const parseStream = createParseStream();

parseStream.on('data', function(data) {
    // => receive reconstructed data
});

readStream.pipe(parseStream);

To stringify JSON:

import {createStringifyStream} from '@veksa/big-json';

const stringifyStream = createStringifyStream({
    body: BIG_DATA
});

stringifyStream.on('data', function(strChunk) {
    // => BIG_DATA will be sent out in JSON chunks as the object is traversed
});

Promise-based API for parsing:

import {parse} from '@veksa/big-json';

const result = await parse({
    body: jsonString
});
// => result contains the parsed JSON object/array

Promise-based API for stringifying:

import {stringify} from '@veksa/big-json';

const jsonString = await stringify({
    body: bigObject
});
// => jsonString contains the stringified representation of bigObject

API

createParseStream()

Create a JSON.parse equivalent that uses a stream interface. The underlying implementation is handled by JSONStream. This is merely a thin wrapper for convenience that handles the reconstruction/accumulation of each individually parsed field.

The advantage of this approach is that, by using a stream interface, parsing or stringifying large objects won't block the event loop for long stretches.

Returns: Stream - A stream interface for parsing JSON
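To illustrate the reconstruction step (using a hypothetical event shape, not the wrapper's actual internals): per-field events carrying a key path and a value can be folded back into a plain object like this.

```javascript
// Illustrative only: fold per-field (path, value) events back into a POJO,
// similar in spirit to what the wrapper does on top of JSONStream.
// The event shape here is hypothetical.
function accumulate(events) {
  const result = {};
  for (const { path, value } of events) {
    let node = result;
    // Create intermediate objects along the key path as needed.
    for (let i = 0; i < path.length - 1; i++) {
      if (!(path[i] in node)) node[path[i]] = {};
      node = node[path[i]];
    }
    node[path[path.length - 1]] = value;
  }
  return result;
}

const obj = accumulate([
  { path: ['user', 'name'], value: 'ada' },
  { path: ['user', 'id'], value: 7 },
  { path: ['active'], value: true },
]);
console.log(JSON.stringify(obj)); // {"user":{"name":"ada","id":7},"active":true}
```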

createStringifyStream(params)

Create a JSON.stringify readable stream.

Parameters:

  • params Object - An options object
    • params.body Object - The JS object to JSON.stringify
    • params.space number (optional) - Number of spaces for indentation
    • params.bufferSize number (optional) - Size of the internal buffer (default: 512)

Returns: Stream - A readable stream with JSON string chunks

parse(params)

Stream-based JSON.parse with a promise-based API.

Parameters:

  • params Object - Options to pass to parse stream
    • params.body String|Buffer - String or buffer to parse

Returns: Promise<Object|Array> - The parsed JSON

stringify(params)

Stream-based JSON.stringify with a promise-based API.

Parameters:

  • params Object - Options to pass to stringify stream
    • params.body Object - The object to be stringified
    • params.space number (optional) - Number of spaces for indentation

Returns: Promise<string> - The stringified JSON

Contributing

This project welcomes contributions and suggestions. To contribute:

  1. Fork and clone the repository
  2. Install dependencies: yarn install
  3. Make your changes
  4. Run linting: yarn lint
  5. Run tests: yarn test
  6. Submit a pull request

Available Scripts

  • yarn clean - Remove the dist directory
  • yarn build - Clean and build the project
  • yarn compile - Run TypeScript compiler without emitting files
  • yarn lint - Run ESLint
  • yarn test - Run tests

License

MIT