
swarm-peer v1.2.0 · 2 downloads

Swarm peer
==========

Swarm is a sync-centric database implementing Commutative Replicated Data Types (CmRDTs). CmRDTs are op-based, hence Swarm is built on top of a partially ordered log of operations, very much like classic databases are built on top of totally ordered logs.

A peer is the part of Swarm that does "synchronization" per se. Peers disseminate and store the op log. They also serve it to clients, which implement the actual CRDT data types (Hosts; see swarm-syncable). A peer is mostly oblivious to data types and logic; its mission is to get all the ops delivered to all of an object's replicas, preferably exactly once.

This Peer implementation keeps its data in a storage engine accessed through the LevelUp interface. Normally, that is LevelDB on the server and IndexedDB on the client (in case you'd like to download a full db into the browser).

  • [x] LevelOp - op database implemented on top of LevelDOWN
  • [x] {OpStream} LogOpStream - implements partially ordered log handling
  • [ ] {OpStream} PatchOpStream - manages patches
    • pass-through for mutations
    • accepts ons/offs
    • reads the db, produces patches
    • optionally produces snapshots and writes them to the db
    • emits mutations (pass-through), patches (scoped), reciprocal ons/offs
  • [x] {OpStream} SwitchOpStream - sends/receives ops to/from clients, manages subscription tables

OpStreams

First, some clarification of the Swarm domain model and its terms: what are users, sessions, clocks, databases and subscriptions (ons)?

First of all, a user is an "end user" identified by a login: an alphanumeric string up to 32 characters long, underscores permitted, like gritzko. A user may have an arbitrary number of sessions (apps on mobile devices, browser tabs, desktop applications). Sessions have unique identifiers too, like gritzko~1kz (a tilde followed by a Base64 serial). Session ids may be recursive, like gritzko~1kz~2. Each session has a clock that produces a monotonic sequence of Lamport timestamps, or simply stamps, which consist of a local time value and the session id, like 2Ax1k+gritzko~1kz. Every op is timestamped at its originating process.
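The stamp scheme above can be sketched as a small Lamport-style clock. This is a minimal illustration, not Swarm's actual implementation: the Base64 encoding here is a plain base-64 integer encoding, not Swarm's exact Base64x64 time format, and the class and method names are hypothetical.

```javascript
// Illustrative 64-character alphabet (digits, upper, '_', lower, '~').
const BASE64 =
  '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~';

// Encode a non-negative integer in the 64-character alphabet.
function toBase64(n) {
  let s = '';
  do {
    s = BASE64[n % 64] + s;
    n = Math.floor(n / 64);
  } while (n > 0);
  return s;
}

class LamportClock {
  constructor(sessionId) {
    this.sessionId = sessionId; // e.g. 'gritzko~1kz'
    this.last = 0;              // last issued logical time
  }
  // Issue a new stamp, strictly greater than any previously issued one.
  issue() {
    this.last += 1;
    return toBase64(this.last) + '+' + this.sessionId;
  }
  // On receiving a remote op, advance past its logical time (the Lamport rule),
  // so locally issued stamps stay ahead of everything this session has seen.
  see(remoteTime) {
    if (remoteTime > this.last) this.last = remoteTime;
  }
}

const clock = new LamportClock('gritzko~1kz');
console.log(clock.issue()); // '1+gritzko~1kz'
clock.see(100);
console.log(clock.issue()); // '1a+gritzko~1kz' (101 in this alphabet)
```

Because the session id is globally unique and the time value only moves forward, every stamp is unique and stamps from one session compare in issue order.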

Swarm's synchronized CRDT objects are packed into databases identified by alphanumeric strings. A session may use multiple databases, but as the relative order of operations in different databases does not matter, each db is subscribed to in a separate connection. The implementation may or may not guarantee to preserve the relative order of changes to different objects in the same database. The client's session is linked to a de-facto local process (and storage), so it is likely to be shared between dbs (same as clocks).

Per-object subscriptions are optional (that depends on a particular database). Similarly, access control policies are set at per-db granularity (sample policy: the object's owner can write, others can read).

The most common interface in the system is an OpStream. That is a single-database op stream going from one session to another.

TODO

Rework (1.1)

Goals: manageable state snapshotting, general clean-up and simplification. Storage/network/subscriptions go to Replica entirely; Entry becomes passive, merges with EntryState.

Method: rewire refactoring. Stages:

  • [ ] send ~ done ~ save
  • [ ] read db by callbacks
  • [ ] O queue
  • [ ] I queue
  • [ ] Entry ~ EntryState

Full job list:

  • [ ] move subscribers, write to Replica (replica.appendNewOp(op))
    • [ ] subscribers
    • [ ] append new op
    • [ ] send
    • [ ] save
    • [ ] Q selective ack, error -- mailbox ? entry.error
  • [ ] replica.snapshotting[typeid] -> [stream_id], check on relaying new ops
  • [ ] replica to save meta, Entry to stay passive
    • [ ] replica.op_queue, backpressure
    • [ ] unite Entry/EntryState
    • [ ] replica.readTail(fn, end) -> fn(op)* fn(null)|end()
    • [ ] replica to maintain a common pending queue
  • [ ] descending state hooks (last snapshot size, tail size)