
ipfs-hyperlog

IPFS-compatible Merkle DAG that replicates based on scuttlebutt logs and causal linking

npm install ipfs-hyperlog

Background

ipfs-hyperlog is a drop-in replacement for @mafintosh's hyperlog. Its key difference is that it creates a Merkle DAG that is binary compatible with IPFS objects. This means any node of any DAG built using ipfs-hyperlog can be replicated to and from the IPFS network as well!

Why IPFS?

The peer-to-peer IPFS network excels at serving and replicating immutable, highly available, permanent data.

Hyperlog DAGs can now be replicated to IPFS for permanent storage!

Why hyperlog?

Hyperlog is great for quick replication over a transport-agnostic stream!

In addition, it has a great ecosystem of powerful modules that IPFS can now take advantage of:

  1. hyperlog-index - forking indexes for hyperlog
  2. hyperkv - p2p key/value store over a hyperlog using a multi-value register conflict strategy
  3. swarmlog - create a p2p webrtc swarm around a hyperlog
  4. and many more!

Create and link nodes

var hyperlog = require('ipfs-hyperlog')
var level = require('level') // any levelup-compatible database works here

var db = level('./graph.db')
var log = hyperlog(db)

// add a node with value 'hello' and no links
log.add(null, 'hello', function(err, node) {
  console.log('inserted node', node)

  // insert 'world' with a link back to the above node
  log.add([node.key], 'world', function(err, node) {
    console.log('inserted new node', node)
  })
})

Replicate graph

To replicate this log with another one, call log.replicate() and pipe it together with a replication stream from the other log.

var l1 = hyperlog(db1)
var l2 = hyperlog(db2)

var s1 = l1.replicate()
var s2 = l2.replicate()

s1.pipe(s2).pipe(s1)

s1.on('end', function() {
  console.log('replication ended')
})

A detailed write-up on how this replication protocol works will be added to this repo in the near future. For now, see the source code.

API

log = hyperlog(db, [options])

Create a new log instance. Options include:

{
  id: 'a-globally-unique-peer-id',
  valueEncoding: 'a levelup-style encoding property' // example: 'json'
}
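
For instance, a log whose values are encoded as JSON might be created like this (a minimal sketch using the options above; the id string is just a placeholder):

var log = hyperlog(db, {
  id: 'my-unique-peer-id', // placeholder id
  valueEncoding: 'json'
})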

You can also pass in an identity plus sign and verify functions, which can be used to create a signed log:

{
  identity: aPublicKeyBuffer, // will be added to all nodes you insert
  sign: function (node, cb) {
    // will be called with all nodes you add
    var signatureBuffer = someCrypto.sign(node.key, mySecretKey)
    cb(null, signatureBuffer)
  },
  verify: function (node, cb) {
    // will be called with all nodes you receive
    if (!node.signature) return cb(null, false)
    cb(null, someCrypto.verify(node.key, node.signature, node.identity))
  }
}

log.add(links, value, opts={}, [cb])

Add a new node to the graph. links should be an array of node keys that this node links to. If it doesn't link to any nodes, use null or an empty array. value is the value that you want to store in the node. This should be a string or a buffer. The callback is called with the inserted node:

log.add([link], value, function(err, node) {
  // node looks like this
  {
    change: ..., // the change number for this node in the local log
    key:    ..., // the hash of the node; this is also the key of the node
    value:  ..., // the value (as a buffer) you inserted
    log:    ..., // the peer log this node was appended to
    seq:    ..., // the peer log seq number
    links:  ['hash-of-link-1', ...]
  }
})

Optionally supply an opts.valueEncoding.
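
For example, a JSON value can be stored on a single call without changing the log's default encoding (illustrative sketch):

log.add(null, { hello: 'world' }, { valueEncoding: 'json' }, function (err, node) {
  console.log('inserted json node', node)
})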

log.append(value, opts={}, [cb])

Add a value in a new node that links to all the current heads of the graph.

Optionally supply an opts.valueEncoding.
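
A minimal sketch of appending on top of whatever the current heads are:

log.append('latest entry', function (err, node) {
  // node.links contains the keys of the previous heads
  console.log('appended', node.key, 'linking to', node.links)
})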

log.get(hash, opts={}, cb)

Look up a node by its hash. The callback receives a node similar to the one .add returns above.

Optionally supply an opts.valueEncoding.
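
For instance, fetching back a node right after inserting it (sketch):

log.add(null, 'hello', function (err, node) {
  log.get(node.key, function (err, fetched) {
    console.log('fetched value:', fetched.value.toString())
  })
})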

log.heads(opts={}, cb)

Get the heads of the graph as a list. A head is a node that no other node links to.

log.heads(function(err, heads) {
  console.log(heads) // prints an array of nodes
})

The method also returns a stream of heads, which is useful if, for some reason, your graph has a lot of heads:

var headsStream = log.heads()

headsStream.on('data', function(node) {
  console.log('head:', node)
})

headsStream.on('end', function() {
  console.log('(no more heads)')
})

Optionally supply an opts.valueEncoding.

changesStream = log.createReadStream([options])

Tail the changes feed from the log. Every time you add a node to the graph, the changes feed is updated with that node.

var changesStream = log.createReadStream({live:true})

changesStream.on('data', function(node) {
  console.log('change:', node)
})

Options include:

{
  since: changeNumber,    // only returns changes AFTER the number
  live: false,            // never close the change stream
  tail: false,            // since = lastChange
  limit: number,          // only return up to `limit` changes
  until: number,          // (for non-live streams) only returns changes BEFORE the number
  valueEncoding: 'binary'
}
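
As a sketch, a consumer could store the last node.change it has processed and resume from there on restart (the checkpoint variable below is hypothetical):

var lastProcessed = 42 // hypothetical checkpoint loaded from your own storage

log.createReadStream({ since: lastProcessed, live: true })
  .on('data', function (node) {
    console.log('new change', node.change)
    lastProcessed = node.change // persist this somewhere to resume later
  })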

replicationStream = log.replicate([options])

Replicate the log to another one using a replication stream. Simply pipe your replication stream together with another log's replication stream.

var l1 = hyperlog(db1)
var l2 = hyperlog(db2)

var s1 = l1.replicate()
var s2 = l2.replicate()

s1.pipe(s2).pipe(s1)

s1.on('end', function() {
  console.log('replication ended')
})

Options include:

{
  mode: 'push' | 'pull' | 'sync', // set replication mode. defaults to sync
  live: true, // keep the replication stream open. defaults to false
  metadata: someBuffer, // send optional metadata as part of the handshake
  frame: true // frame the data with length prefixes. defaults to true
}

If you send metadata it will be emitted as a metadata event on the stream. A detailed write-up on how the graph replicates will be added later.
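
For example, a live sync that also exchanges a small handshake buffer might look like this (sketch based on the options above):

var stream = log.replicate({ live: true, metadata: Buffer.from('peer-info') })

stream.on('metadata', function (buf) {
  console.log('remote peer sent metadata:', buf.toString())
})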

log.on('preadd', function (node) {})

On the same tick as log.add() is called, this event fires with the node about to be inserted into the log. At this stage of the add process, node has these properties:

  • node.log
  • node.key
  • node.value
  • node.links

log.on('add', function (node) {})

After a node has been successfully added to the log, this event fires with the full node object that the callback to .add() gets.
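
A small sketch listening for both events:

log.on('preadd', function (node) {
  console.log('about to insert', node.key)
})

log.on('add', function (node) {
  console.log('inserted', node.key, 'as change', node.change)
})

log.add(null, 'hello')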

License

MIT