
kappa-sparse-indexer

v0.7.4

Published

A sparse indexer for hypercores and kappa-core

Downloads

10

Readme

kappa-sparse-indexer

An indexer for hypercores with support for sparse indexing and exposing the indexed data to Kappa views.

The indexer listens on download and append events for all feeds in a set of feeds. It then builds a local materialized log of all downloaded or appended messages. Thus, the indexer provides local total ordering for a set of sparsely synced hypercores. The local ordering gives each message from all feeds a unique, sequential "local sequence number", or lseq.

This greatly simplifies state handling for subscriptions and derived views, as their state consists of a single integer: their cursor into the local materialized log. Without it, to track indexing state for sparsely synced feeds, each view would have to maintain a bitfield for each feed and compare those to the feed's own bitfield.
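The lseq idea can be illustrated with a minimal sketch (this is not the library's internals; feed keys and the createLocalLog helper are made-up for illustration):

```javascript
// Interleave messages from several feeds into one local log,
// assigning each message a sequential "lseq" on arrival.
function createLocalLog () {
  const log = []
  return {
    // Record that message `seq` of feed `key` is now available locally.
    append (key, seq) {
      const lseq = log.length
      log.push({ key, seq, lseq })
      return lseq
    },
    read (start, end) {
      return log.slice(start, end)
    }
  }
}

const log = createLocalLog()
// Downloads arrive in arbitrary order across feeds...
log.append('feed-a', 0)
log.append('feed-b', 4) // sparse: block 4 arrived before blocks 0-3
log.append('feed-a', 1)

// ...but the local log gives them a single total order. A view's entire
// state is just its integer cursor into this log.
console.log(log.read(0, 3).map(m => `${m.key}@${m.seq} -> lseq ${m.lseq}`))
```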

Works great with an in-progress kappa-core version.

See example.js for a full example on how this can be used with hypercore-query-extension and multifeed to do efficient sparse syncing of collections of hypercores.

API

const Indexer = require('kappa-sparse-indexer')

const indexer = new Indexer(leveldb, opts)

Create a new indexer. leveldb must be a level instance (or compatible). opts are:

  • name: string A name (for debugging purposes only)
  • loadValue: function (message, next) A callback to load a value for a message object { key, seq, lseq }. Call next with the updated message object. If unset, and the feed's key was added to the indexer, the block is fetched from that feed and added to the message object as its value. If set to false, value loading is skipped.
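The loadValue contract can be sketched as follows (emitWithValue here is a made-up stand-in for how the indexer invokes the hook, not part of the library's API):

```javascript
// A loadValue hook receives { key, seq, lseq } and passes the
// message on via next, with a value attached.
function loadValue (message, next) {
  // Attach a value — in real use, e.g. the block fetched from the feed.
  next({ ...message, value: `block ${message.seq} of ${message.key}` })
}

// Stand-in for the indexer calling the hook before emitting a message:
function emitWithValue (message, hook, emit) {
  if (hook === false) return emit(message) // value loading skipped
  hook(message, emit)
}

emitWithValue({ key: 'aa11', seq: 2, lseq: 7 }, loadValue, msg => {
  console.log(msg.value)
})
```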

indexer.add(feed, opts)

Add a feed to the indexer. Opts are:

  • scan: false Set to true to scan for undownloaded blocks initially. This is required if you cannot ensure that the feed has always been added to the indexer before appending or replicating.

TODO: An onwrite hook set at feed construction would be the safest way to never need scan. If scan is never used, the log's deduplicate option could be set to false, improving performance.

indexer.createReadStream({ start: 0, end: Infinity, limit, live: false })

Create a read stream on the local materialized log. Messages emitted look like this:

{
  key: "hex-encoded key of a feed",
  seq: Number, // The seq of this message in its feed
  lseq: Number, // The "local seq" of this message in the materialized log
  value: object // The value if opts.loadValue isn't false
}

indexer.read({ start, end, limit }, cb)

Similar to createReadStream, but collects the messages and calls cb with (err, result), where result is:

{
  messages, // Array of messages
  cursor: Number, // The last lseq of the batch of messages,
  finished: bool, // true if there are no more messages to read after this batch
}
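The cursor and finished fields support draining the log in batches. A minimal sketch of that pattern, using an in-memory stand-in for the indexer (makeFakeIndexer and its internals are illustrative, not the library's implementation):

```javascript
// Stand-in indexer exposing read({ start, limit }, cb) with the
// { messages, cursor, finished } result shape described above.
function makeFakeIndexer (total) {
  const log = Array.from({ length: total }, (_, lseq) => ({ lseq }))
  return {
    read ({ start = 0, limit = 10 }, cb) {
      const messages = log.slice(start, start + limit)
      const cursor = messages.length
        ? messages[messages.length - 1].lseq
        : start - 1
      cb(null, { messages, cursor, finished: start + messages.length >= total })
    }
  }
}

const indexer = makeFakeIndexer(5)
const collected = []

// Read batch after batch, resuming each time at cursor + 1.
function drain (start) {
  indexer.read({ start, limit: 2 }, (err, result) => {
    if (err) throw err
    collected.push(...result.messages)
    if (!result.finished) drain(result.cursor + 1)
  })
}
drain(0)
console.log(collected.length) // all 5 messages, read in batches of 2
```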

indexer.createSubscription()

Create a stateful subscription. Each read call returns the same result as above, plus an ack function that, when called, advances the cursor so that the next read returns the next batch of messages.

indexer.source()

Create a source for a kappa-core@experimental. Similar to createSubscription, but with a little boilerplate so that it can be passed directly into kappa.use.

Subscriptions

indexer.createSubscription returns a stateful subscription onto the local log. Stateful means that the subscription tracks its cursor, making it easy to receive each message only once. The cursor is not advanced automatically. When calling pull, the next set of messages is returned together with an ack callback that advances the cursor to the lseq of the last message of the current message set, so the next read call starts from there. When using createPullStream, users have to advance the cursor themselves after each message by calling subscription.setCursor(message.lseq).
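The pull/ack cycle can be sketched conceptually with an in-memory log standing in for a real indexer (createSubscription here is a simplified illustration, not the library's implementation):

```javascript
// A subscription tracks a cursor: the lseq of the last acknowledged message.
function createSubscription (log, batchSize) {
  let cursor = -1 // nothing acknowledged yet
  return {
    pull (cb) {
      const messages = log.filter(m => m.lseq > cursor).slice(0, batchSize)
      const ack = () => {
        if (messages.length) cursor = messages[messages.length - 1].lseq
      }
      cb(null, messages, ack)
    },
    getCursor: () => cursor
  }
}

const log = [{ lseq: 0 }, { lseq: 1 }, { lseq: 2 }]
const sub = createSubscription(log, 2)

sub.pull((err, msgs, ack) => {
  // Without calling ack(), the same batch would be returned again.
  ack() // advance the cursor past lseq 1
})
sub.pull((err, msgs) => {
  console.log(msgs.map(m => m.lseq)) // only the not-yet-acknowledged messages
})
```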