@filecoin-station/spark-evaluate

v1.3.0

spark-evaluate

Evaluate service

Dry-run evaluation

You can evaluate a round locally by running the script bin/dry-run.js.

Remember to obtain a Glif API token first. You can store the token in the GLIF_TOKEN environment variable or in the .env file in the root directory of your local clone of this repository.

GLIF_TOKEN="<value>"

IMPORTANT

The script needs to query the chain to list all historic MeasurementsAdded events. Glif, the RPC API provider we use, keeps only ~16 hours of event history. As a result, if you want to evaluate an older round, you must provide the list of all CIDs containing measurements submitted for that round.

CACHING

The dry-run script caches the list of MeasurementsAdded events and the content of measurement CIDs in the .cache directory. This speeds up subsequent invocations of the script at the expense of increased disk usage. Feel free to delete any files in the cache directory to reclaim disk space.
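For example, to drop the whole cache and reclaim the space (run from the repository root; the script rebuilds the cache on its next invocation):

```shell
# Remove the dry-run cache directory; it will be recreated
# automatically the next time bin/dry-run.js runs.
rm -rf .cache
```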

Evaluate the round before the last one

$ node bin/dry-run.js

Evaluate a round of the current smart contract version

To evaluate round index 123:

$ node bin/dry-run.js 123

Evaluate a round of a given smart contract version

To evaluate round index 123 of the smart contract with address 0xabc:

$ node bin/dry-run.js 0xabc 123

Specify CIDs of measurements

$ node bin/dry-run.js [contract] round [list of CIDs]

Save evaluated measurements

You can also save the evaluated measurements for further processing by running the script with the environment variable DUMP set to a non-empty value. The script will write the evaluated measurements to a CSV file. This CSV file can be easily converted to a spreadsheet, which makes it easy to perform further data analysis.

Save all measurements

$ DUMP=1 node bin/dry-run.js 7970
(...lots of logs...)
Evaluated measurements saved to measurements-7970-all.csv

Save measurements of one miner

Set DUMP to the miner ID you are interested in (f0123 in the example below):

$ DUMP=f0123 node bin/dry-run.js 7970
(...lots of logs...)
Storing measurements for miner id f0123
Evaluated measurements saved to measurements-7970-f0123.csv

Save measurements from one participant

Set DUMP to the participant address you are interested in (0xdead in the example below):

$ DUMP=0xdead node bin/dry-run.js 7970
(...lots of logs...)
Storing measurements from participant address 0xdead
Evaluated measurements saved to measurements-7970-0xdead.csv

Development

Set up PostgreSQL with default settings:

  • Port: 5432
  • User: your system user name
  • Password: blank
  • Database: spark_evaluate

Alternatively, set the environment variable DATABASE_URL to a connection string of the form postgres://${USER}:${PASS}@${HOST}:${PORT}/${DATABASE}.
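For example, with hypothetical credentials (substitute your own user, password, host, and port):

```shell
# Hypothetical credentials; adjust to match your Postgres setup.
export DATABASE_URL="postgres://alice:s3cret@localhost:5432/spark_evaluate"
```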

The Postgres user and database need to exist already, and the user needs full management permissions for the database.

You can also run the following command to set up the PostgreSQL server via Docker:

docker run -d --name spark-db \
  -e POSTGRES_HOST_AUTH_METHOD=trust \
  -e POSTGRES_USER=$USER \
  -e POSTGRES_DB=spark_evaluate \
  -p 5432:5432 \
  postgres

If you are sharing the same Postgres instance for multiple projects, run the following command to create a new spark_evaluate database for this project:

psql postgres://localhost:5432/ -c "CREATE DATABASE spark_evaluate;"

Run the tests

$ npm test

Run the service

$ WALLET_SEED=$(cat secrets/mnemonic) npm start
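The command above assumes your wallet seed phrase is stored in secrets/mnemonic. A sketch for creating that file with a placeholder value (never commit the real phrase to version control):

```shell
# Placeholder only; replace the string with your real seed phrase.
mkdir -p secrets
printf '%s\n' "replace with your wallet seed phrase" > secrets/mnemonic
chmod 600 secrets/mnemonic   # keep the secret readable by you only
```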

Troubleshooting

You can perform a dry-run evaluation of a given Meridian round using the script bin/dry-run.js.

  1. Get your GLIF API access token at https://api.node.glif.io/

  2. Save the token to the .env file in the project's root directory:

    GLIF_TOKEN="...your-token..."

  3. Run the dry-run script. By default, the script evaluates the last round of the current smart contract version.

    node bin/dry-run.js

You can optionally specify the smart contract address, round index and list of CIDs of measurements to load. For example, run the following command to evaluate round 273 of the Meridian version 0x3113b83ccec38a18df936f31297de490485d7b2e with measurements from CID bafybeie5rekb2jox77ow64wjjd2bjdsp6d3yeivhzzd234hnbpscfjarv4:

node bin/dry-run.js \
  0x3113b83ccec38a18df936f31297de490485d7b2e \
  273 \
  bafybeie5rekb2jox77ow64wjjd2bjdsp6d3yeivhzzd234hnbpscfjarv4

Deployment

$ git push

Publish

Publish a new version of @filecoin-station/spark-evaluate:

$ npm run release