

flumelog-offset

A flumelog where the offset into the file is the key. Each value is appended to the log with double-ended framing, and the "sequence" is the position in the physical file where the value starts. This means you can do a read in O(1) time!

Also, this is built on top of aligned-block-file so that caching works very well.

Usage

Initialize with a file and a codec, and wrap with flumedb.

var OffsetLog = require('flumelog-offset')
var codec = require('flumecodec')
var Flume = require('flumedb')

var db = Flume(OffsetLog(filename, {codec: codec.json}))
  .use(...) // also add some flumeviews

db.append({greets: 'hello!'}, function (err, seq) {
  // seq is the byte offset at which the record was written
})
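The sequence passed to the append callback is the byte offset where the record starts, so the value can be read straight back. A minimal sketch, assuming the usual flumedb get(seq, cb) callback signature:

db.append({greets: 'hello again!'}, function (err, seq) {
  if (err) throw err
  // look the value up again by its offset
  db.get(seq, function (err, value) {
    if (err) throw err
    console.log(value) // => { greets: 'hello again!' }
  })
})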

Options

var OffsetLog = require('flumelog-offset')
var log = OffsetLog('/data/log', {
  blockSize: 1024,         // default is 1024*16
  codec: {encode, decode}, // defaults to no codec, expects buffers. for json use flumecodec/json
  flags: 'r',              // default is 'r+' (from aligned-block-file)
  cache: {set, get},       // default is require('hashlru')(1024)
  offsetCodec: {           // default is require('./frame/offset-codecs')[32]
    byteWidth,             // with the default offset-codec, the file can have
    encode,                // a size of 4GB max.
    decodeAsync
  }
})
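For example, to open an existing log read-only with the JSON codec (a sketch using the options above; the path is just a placeholder):

var OffsetLog = require('flumelog-offset')
var codecs = require('flumecodec')

// 'r' opens the file read-only instead of the default 'r+'
var readOnlyLog = OffsetLog('/data/log', {
  codec: codecs.json,
  flags: 'r'
})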

legacy

If you used flumelog-offset before version 3 and want to read your old data, use require('flumelog-offset/legacy').
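For example (a sketch that assumes the legacy module takes the same arguments as the current one):

var LegacyLog = require('flumelog-offset/legacy')
var oldLog = LegacyLog('/data/old-log', {codec: require('flumecodec').json})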

recovery

If your system crashes while an append is in progress, it's unlikely but possible to end up with a partially written state. flumelog-offset will rewind to the last good state on the next startup.

After running this for several months (in my personal secure-scuttlebutt instance) I eventually got an error, which led to the changes in this version.

format

Data is stored in an append-only log, where the byte index of the start of a record is the primary key (offset).

offset-><data.length (UInt32BE)>
        <data ...>
        <data.length (UInt32BE)>
        <file_length (UInt32BE or UInt48BE or UInt53BE)>

By writing the length of the data both before and after each record, it becomes possible to scan forward and backward (like a doubly linked list).

It's very handy to be able to scan backwards, as you often want the last N items, and this way you don't need an index for it.
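As an illustration (not part of the library's API), here is a sketch of reading the last record using only the framing, assuming the default 32-bit offset codec so that every framing field is a UInt32BE:

var fs = require('fs')

// read the last record of a log by walking backwards from the end of the file
function readLastRecord (filename, cb) {
  fs.open(filename, 'r', function (err, fd) {
    if (err) return cb(err)
    fs.fstat(fd, function (err, stat) {
      if (err) return cb(err)
      var fileLength = stat.size
      // the final 8 bytes of the file are <data.length><file_length>
      var tail = Buffer.alloc(8)
      fs.read(fd, tail, 0, 8, fileLength - 8, function (err) {
        if (err) return cb(err)
        var dataLength = tail.readUInt32BE(0)
        // the record is <data.length><data><data.length><file_length>,
        // so the data starts dataLength + 8 bytes before the end of the file
        var data = Buffer.alloc(dataLength)
        fs.read(fd, data, 0, dataLength, fileLength - 8 - dataLength, function (err) {
          cb(err, data)
        })
      })
    })
  })
}

This returns the raw record bytes; decoding them is up to whatever codec the log was written with.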

future ideas

  • secured file (hashes etc)
  • encrypted file
  • make the end of the record be the primary key. This might make other code nicer...

License

MIT