taskcluster-azure-blobstream

v0.3.0

Stream interface built on top of azure for incrementally pushing buffers and committing them. Designed for "live" logging and bursty streams of data that are expected to end eventually.

There are many common cases that the azure client handles much better than this library. If you are doing any of the following, use the azure client directly:

  • writing never ending data (that may be rolled over)
  • randomly accessing or updating blocks/pages
  • uploading files already on disk
  • uploading a stream indefinitely

Strategy

The algorithm is very simple (dumb):

  • let node streams handle buffering/backpressure
  • write a block (BlobBlock) and commit it in the same write operation (_write in node streams)
  • the block is now readable

Due to how node streams work, while a block is being written the stream will buffer additional writes up to the high water mark.

Note about performance for node 0.10

This stream does not do any special internal buffering to optimize backed up writes. If your goal is to "append" as fast as possible without much concern for memory, node 0.10 will be much slower than 0.11, because writes are done in order and each buffer is written separately, without merging the buffers that have yet to be written. Node 0.11 introduces _writev, which is used here to merge any pending buffers before the write, avoiding most of the latency issues.

Example

var AzureStream = require('taskcluster-azure-blobstream');

var azure = require('azure');
var blobService = azure.createBlobService();

var azureWriter = new AzureStream(
  blobService,
  'mycontainer',
  'myfile.txt'
);

// any kind of node readable stream here
var nodeStream;

nodeStream.pipe(azureWriter);
azureWriter.once('finish', function() {
  // the data was written; log the blob url
  console.log(blobService.getBlobUrl('mycontainer', 'myfile.txt'));
});

Random Notes

The azure-storage module is slow to load (147ms) and takes up 19mb of memory (as of 0.3). We don't use very many azure blob api calls, so ideally we could extract (or help the primary lib extract) the url-signing part of authentication into its own lib and then call http directly for our operations. The ultimate goal is to consume around 5mb of memory (including https overhead) and load in under 20ms.

To correctly consume the url from azure, the x-ms-version header must be set to something like 2013-08-15; this allows open-ended range requests (range: bytes=500-). In combination with etags (and if conditions) we can build a very fast client (even a fast polling client).
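As a sketch, the headers for such a polling read might be assembled like this. The helper name, the host/path/offset arguments, and the choice of If-Range as the conditional header are illustrative assumptions, not part of this library:

```javascript
// Hypothetical helper: build node http request options for an open-ended
// range read of a blob. Header values follow the note above.
function rangeRequestOptions(host, path, offset, etag) {
  var headers = {
    'x-ms-version': '2013-08-15',    // enables open-ended range requests
    'range': 'bytes=' + offset + '-' // from offset to the current end
  };
  if (etag) {
    // if the blob changed since we saw this etag, fall back to the full body
    headers['if-range'] = etag;
  }
  return { host: host, path: path, headers: headers };
}
```

A polling client would remember the number of bytes read so far, pass it as the offset, and reissue the request to pick up only the newly appended data.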