
level-bufferstreams v0.1.1

Pure buffer (multibuffer) streams for leveldb. Faster/less memory overhead than the default streams, useful for bulk operations.


level-bufferstreams provides lower-level pure-buffer read and write stream interfaces to a LevelUP instance.

Why do this? objectMode streams and key/value conversion add considerable CPU and memory overhead. This allows you to skip the advanced parsing the LevelUP streams do and use pure buffer streams to optimize bulk stream operations with LevelUP.

If all you want to do is create live copies from LevelUP->LevelUP, you should use level-rawcopy which uses this library.

var level = require("level")
var source = level("./db_with_stuff_in_it")
var target = level("./empty_db")

var bufferstreams = require("level-bufferstreams")

// A faster/lower memory footprint way to copy an open LevelUP instance!
bufferstreams.rawReader(source)
  .pipe(bufferstreams.rawWriter(target))

// For cases like the above, see http://npm.im/level-rawcopy

API

bufferstreams(db)

Instantiate a factory for raw read and write streams for the LevelUP instance db. The returned instance has .rawReader(options) and .rawWriter(hint) methods that behave identically to the functions below.

.rawReader(db, options)

Create a raw pure-buffer Readable stream that streams the contents of the database. The options object is passed directly to the LevelDOWN iterator, so it supports options such as start, end, limit and others.

The stream content uses multibuffers to avoid objectMode streams. Each streamed record is a multibuffer of the key and value; to consume a record, use multibuffer's unpack(chunk) method.

Emits standard Readable stream events.
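To illustrate the record format, here is a simplified, hypothetical pack/unpack pair. It is not wire-compatible with the real multibuffer package (which uses varint length prefixes); it only sketches the idea of length-prefixed key/value tuples using fixed 4-byte lengths.

```javascript
// Simplified sketch of a multibuffer-style tuple encoding.
// NOT wire-compatible with the multibuffer package: real multibuffers
// use varint length prefixes, this uses fixed 4-byte big-endian lengths.
function pack(buffers) {
  var parts = []
  buffers.forEach(function (buf) {
    var len = Buffer.alloc(4)
    len.writeUInt32BE(buf.length, 0)
    parts.push(len, buf)
  })
  return Buffer.concat(parts)
}

function unpack(chunk) {
  var out = []
  var offset = 0
  while (offset < chunk.length) {
    var len = chunk.readUInt32BE(offset)
    offset += 4
    out.push(chunk.slice(offset, offset + len))
    offset += len
  }
  return out
}

// A streamed record is a packed [key, value] tuple.
var record = pack([Buffer.from("key1"), Buffer.from("value1")])
var pair = unpack(record)
console.log(pair[0].toString(), pair[1].toString()) // → key1 value1
```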

.rawWriter(db, hint)

Create a raw pure-buffer Writable stream that accepts multibuffers of key/value pairs and saves them in batches to the provided LevelUP db instance.

The stream content uses multibuffers to avoid objectMode streams. Each streamed record is expected to be a tuple of Key+Value as would be encoded via multibuffer.pack([key, value]).

The hint options allow you to tune the write operations:

  • sync: [default: false] Pass the 'sync' flag to LevelUP's chained batch implementation.
  • batchSize: [default: undefined] If specified, automatically force batches with this many records.
  • writeBufferBytes: [default: 4194304] If specified (and no batchSize is specified), attempt to keep batches no larger than this many bytes. It does this by tracking the average bytes per record so far; if current_batch_bytes + average > writeBufferBytes, it sends the current batch and starts a new one. This means it cannot guarantee that every batch will be <= writeBufferBytes.
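The writeBufferBytes heuristic above can be sketched as a standalone function. This is a hypothetical illustration of the described behavior, not the library's actual code; it assumes the average is tracked per record:

```javascript
// Sketch of the writeBufferBytes batching heuristic: flush the current
// batch when adding an average-sized record would exceed the threshold.
function batchByBytes(records, writeBufferBytes) {
  var batches = []
  var current = []
  var currentBytes = 0
  var totalBytes = 0
  var totalRecords = 0

  records.forEach(function (record) {
    totalBytes += record.length
    totalRecords += 1
    var average = totalBytes / totalRecords // running average record size

    // Flush if the running batch plus an average record would overflow.
    if (current.length > 0 && currentBytes + average > writeBufferBytes) {
      batches.push(current)
      current = []
      currentBytes = 0
    }
    current.push(record)
    currentBytes += record.length
  })
  if (current.length > 0) batches.push(current)
  return batches
}

// Example: ten 100-byte records with a 300-byte threshold.
var records = []
for (var i = 0; i < 10; i++) records.push(Buffer.alloc(100))
var batches = batchByBytes(records, 300)
console.log(batches.map(function (b) { return b.length })) // → [ 3, 3, 3, 1 ]
```

Because the estimate is an average, a single unusually large record can still push one batch past the threshold, which is why the guarantee above is soft.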

Events: when complete, it emits a stats event containing statistics about the write operation, along with the standard Writable stream events.

Setting your LevelUP instance's writeBufferSize to match writeBufferBytes is suggested (both default to 4194304).
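As a sketch, a hypothetical setup that keeps the two sizes in lockstep might look like the following (the shared constant is an illustrative choice, not a requirement):

```javascript
var level = require("level")
var bufferstreams = require("level-bufferstreams")

// One constant so LevelUP's write buffer and the writer's batch
// threshold stay in sync (both default to 4194304 bytes, i.e. 4 MiB).
var BUFFER_BYTES = 4 * 1024 * 1024

var db = level("./db", { writeBufferSize: BUFFER_BYTES })
var writer = bufferstreams.rawWriter(db, { writeBufferBytes: BUFFER_BYTES })
```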

LICENSE

MIT