
split-into-streams

v0.0.18

Split a stream into multiple streams by defining a flexible delimiter

split-into-streams

Split a stream into multiple streams by defining a flexible delimiter, or a delimiting function that returns the index of separation. Each resulting substream starts once the previous one has finished being read. There are two ways to use it: a stream of streams split by a delimiter, or an explicit function that returns a substream ending at the next delimiter.

Installation

$ yarn add split-into-streams
$ npm i split-into-streams

First way: stream of streams

const SplitStream = require('split-into-streams');

const rs = new SplitStream(readableStream, {
  explicitRead: false,  // non-explicit mode
  splitAt: '\n',        // split at each newline
});
rs.on('data', stream => {
  // this substream will end at the next line break
  stream.on('data', data => { ... });
});

Second way: explicit function

const SplitStream = require('split-into-streams');

const rs = new SplitStream(readableStream, {
  explicitRead: true,   // explicit mode
});

(async () => {
  const stream = await rs.readUntil('\n');
  // the returned stream will end at the next line break
  stream.on('data', data => { ... });
})();

With this method you can also pass a different delimiter to each subsequent readUntil() call.

NOTE: this method automatically pauses the given stream on creation, and resumes and pauses it while reading each chunk. This keeps the main stream alive until everything is read, which is useful when reading from the stdout of a spawned process, for example.

Options

explicitRead

default: false

Selects one of the two modes described above.

splitAt (or the argument to readUntil())

Required.

The delimiter that separates the streams. It can be a string, a regex, an array of numbers, or a function that returns the point of separation.

  • when a string, splits where the toString() values of the bytes in the buffer match the string.
  • when a regex, splits where the toString() values of the bytes in the buffer match the regex.
  • when an array of numbers, splits where the bytes match those values.
  • when a function, calls that function on each chunk of data and expects the index of separation to be returned.

Example: to split immediately after a line break, you can pass '\n', /\n/, [10], or provide a function:

splitAt: nextChunkData => nextChunkData.toString().indexOf('\n')

To split before the delimiter, simply subtract 1 from the index:

splitAt: nextChunkData => nextChunkData.toString().indexOf('\n') - 1
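As a quick sanity check, the two one-line functions above can be run standalone on a sample Buffer chunk, no library needed:

```javascript
// The one-line splitAt variants above, exercised on a sample chunk.
const splitAfterNewline = chunk => chunk.toString().indexOf('\n');
const splitBeforeNewline = chunk => chunk.toString().indexOf('\n') - 1;

const chunk = Buffer.from('ab\ncd');
console.log(splitAfterNewline(chunk));  // → 2, the index of the '\n' byte
console.log(splitBeforeNewline(chunk)); // → 1, one position earlier
```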

To split the next stream by a different delimiter than the first, you can keep a counter inside this function and switch implementations on later calls. Return -1 if you don't want to split yet and want to keep passing chunks to the currently read substream.
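One possible shape for such a stateful function is a small factory; the name and the specific delimiters here are purely illustrative:

```javascript
// Hypothetical sketch: end the first substream at ';', every later one at '\n'.
// The counter advances only when a split actually happens.
function makeAlternatingSplitAt() {
  let splits = 0;
  return nextChunkData => {
    const delimiter = splits === 0 ? ';' : '\n';
    const index = nextChunkData.toString().indexOf(delimiter);
    if (index === -1) return -1; // no split yet; keep feeding the current substream
    splits += 1;
    return index;
  };
}
```

Passing `splitAt: makeAlternatingSplitAt()` would then split the first substream at a semicolon and subsequent ones at newlines.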

maxPrevMemory

default: 30

Long delimiters can begin at the end of one internally-read chunk and end at the start of the next. To handle this, the library doesn't push an entire chunk into the substream as soon as it is read from the main stream; instead it holds back some bytes at the end, which are pushed before the next chunk. The length of that held-back tail is defined by maxPrevMemory. If you are dealing with fairly long delimiters, set it to at least the maximum possible length of your delimiter.
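To see why that carry-over matters, here is a standalone sketch (not the library's internals) of matching a delimiter that straddles two chunks:

```javascript
// Without carry-over, a delimiter split across two chunks is missed;
// searching the saved tail joined with the new chunk finds it.
function findAcrossChunks(prevTail, chunk, delimiter) {
  return Buffer.concat([prevTail, chunk]).toString().indexOf(delimiter);
}

const tail = Buffer.from('abc--PA'); // end of the previous chunk
const next = Buffer.from('RT--def'); // start of the next chunk

console.log(next.toString().indexOf('--PART--'));      // → -1, missed
console.log(findAcrossChunks(tail, next, '--PART--')); // → 3, found
```

With an 8-byte delimiter like `'--PART--'`, a maxPrevMemory of at least 8 guarantees the marker is matched wherever the chunk boundary falls.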