ProofmeQr

Envelop a big blob of data into frames that can be displayed as a series of QR codes.

NB: this library is generic enough that it does not strictly have to be used with QR codes, but its optimization decisions are informed by how QR codes work and by empirical tests.

Install

npm i proofmeqr

API

The library has two parts: the "exporter", which exports the data via QR codes, and the "importer", which scans those QR codes and accumulates the frames until it reaches the final result.

exporter

The exporter exposes a single function: dataToFrames.

import { dataToFrames } from "proofmeqr";

// examples
const textFrames: string[] = dataToFrames("hello world");
const bufferFrames = dataToFrames(Buffer.from([ 0x00, 0x01, ... ]));
const tunedFrames = dataToFrames(data, 140, 2);

// dataToFrames( data[, dataSize, loops ])
// data: the complete data to encode in a series of QR code frames
// dataSize: the number of bytes of data to put in each frame
// loops: (>= 1) the total number of times the frames are repeated, with varying nonces and fountain-code frames. The more loops, the better the chance of not getting stuck on a hard-to-scan frame.

importer

There are a few functions you can use to consume and accumulate the frames over time.

The main function is parseFramesReducer: you feed it each scanned QR code's data and it accumulates a state. Treat that state as a black box and use the utility functions to extract information from it.

import {
  parseFramesReducer,
  areFramesComplete,
  framesToData,
  progressOfFrames
} from "proofmeqr";

const onResult = finalResult => console.log({ finalResult });

let frames = null;

const onBarCodeScanned = (data: string) => {
  try {
    frames = parseFramesReducer(frames, data);
    if (areFramesComplete(frames)) {
      onResult(framesToData(frames).toString());
    } else {
      console.log("Progress:", progressOfFrames(frames));
    }
  } catch (e) {
    console.warn(e); // a QR code might fail to parse: the data may be corrupted, or you may have scanned something irrelevant.
  }
};

Trade-offs

You do not need this if...

  • Your data can always fit in one big QR code (check QR code capacity limits and test on real phones); see the sketch below.
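
As a rough guide, here is a minimal sketch of that decision. It assumes the commonly cited byte-mode capacity of a version 40 / error-correction-level-L QR code (2953 bytes) as the upper bound for a single code; real scanners are reliable well below that, so test on the phones you target. The framesFor helper and the 140-byte frame size are illustrative choices, not part of the library.

import { dataToFrames } from "proofmeqr";

// Byte-mode capacity of a version 40, error-correction-level-L QR code.
// Treat this as a hard upper bound; real-world scanning works best well below it.
const SINGLE_QR_MAX_BYTES = 2953;

// Illustrative helper: use one plain QR code when the data fits,
// otherwise fall back to a looping series of frames.
function framesFor(data: Buffer): string[] {
  if (data.length <= SINGLE_QR_MAX_BYTES) {
    return [data.toString("base64")]; // render this single string with any QR library
  }
  return dataToFrames(data, 140, 2);
}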

Troubleshooting frames not getting caught

Since this is a unidirectional data stream, we can't tell the emitter to slow down or inform it which frames are missing. Therefore, the emitter simply loops over all the frames until they have all been parsed.

In practice, this means the phone catches many frames at the beginning and it gets harder and harder to catch the remaining ones. Statistically, the phone will eventually get every frame, but being stuck on one last missing frame can be a frustrating experience.

To troubleshoot this, you can try different frame rates. Experience has shown that phones can scan about 30 frames per second (depending on the implementation), but in practice it's better to stay at 5 fps or below.
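
For instance, a minimal sketch of an emitter loop at 5 fps could look like the following; renderFrame is a placeholder for whatever QR code renderer you use (a React component, a canvas call, a terminal QR printer) and is not part of this library.

import { dataToFrames } from "proofmeqr";

// Placeholder renderer: in a real app this would draw the frame string as a QR code.
const renderFrame = (frame: string) => {
  console.log("displaying frame", frame.slice(0, 16) + "...");
};

const frames = dataToFrames(Buffer.from("a payload too big for one QR code"), 140, 2);

// Cycle through all frames at ~5 fps (200 ms per frame) until the
// receiver signals, out of band, that it has everything.
let index = 0;
const timer = setInterval(() => {
  renderFrame(frames[index]);
  index = (index + 1) % frames.length;
}, 200);

// clearInterval(timer) once the receiver is done.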

We have also found empirically that some frames are randomly harder for phones to catch. Therefore, the library has a concept of "replicas", which repeats frames with a varying nonce: changing a single byte of the QR code data produces a completely different QR code image, increasing the chance of landing on an "easy" frame.

Finally, we have implemented "fountain codes", inspired by Luby transform codes, which allow missing frames to be recovered faster.

base64 on each frame

Even though binary data can technically be put in a QR code, some reader implementations do not support it properly (for instance on iOS, unless you rely on an undocumented hack: https://stackoverflow.com/questions/32429480/read-binary-qr-code-with-avfoundation ). We have therefore chosen to encode each frame in base64 (which is built into Buffer). The overhead is acceptable.
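
For reference, base64 encodes every 3 bytes as 4 ASCII characters, so the size overhead is roughly 33%. The snippet below just illustrates that with Node's Buffer; it is not part of the library API.

// base64 turns every 3 bytes into 4 characters (~33% overhead).
const raw = Buffer.from([0xde, 0xad, 0xbe, 0xef, 0x00, 0x01]);
const encoded = raw.toString("base64");         // "3q2+7wAB"
const decoded = Buffer.from(encoded, "base64"); // identical to raw
console.log(raw.length, encoded.length);        // 6 8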

Data validation using a checksum

On top of the QR code's built-in checksums, we add a data-length check and an MD5 checksum over the data to make sure frames are not corrupted. The library is also able to recover from any frame-corruption state (if you keep scanning, it should eventually correct itself).

Encoding complex objects

To encode complex objects such as plain JavaScript objects, you can simply use JSON.stringify. Since the output of JSON.stringify is not particularly compact, you can then compress it with any compression algorithm, such as GZIP or node-lzw (my preference, because it is concise).
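
A minimal sketch of that round trip, using Node's built-in zlib for GZIP (node-lzw would slot in the same way); the helper names objectToFrames and framesToObject are illustrative, not part of the library.

import { gzipSync, gunzipSync } from "zlib";
import { dataToFrames, framesToData } from "proofmeqr";

// Exporter side: serialize the object, compress it, then split it into frames.
function objectToFrames(obj: unknown): string[] {
  const compressed = gzipSync(Buffer.from(JSON.stringify(obj)));
  return dataToFrames(compressed, 140, 2);
}

// Importer side: once areFramesComplete(state) is true, decompress the
// accumulated data and parse it back into the original object.
// (state is the accumulator built with parseFramesReducer, as shown above.)
function framesToObject(state: any): unknown {
  return JSON.parse(gunzipSync(framesToData(state)).toString());
}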