strobbery

hand over your source code, this is a strobbery

a simple runtime to download single-page web apps and execute them locally

usage

to create a strobbery archive, you first need to write a yaml descriptor file for the site you intend to capture. for the sake of example, let's download https://gsplat.tech, a cool tech demo for a 3D rendering technology that would be nice to preserve:

strobbery: 1 # required to indicate the file signature

entryPoint: https://gsplat.tech # the url at which we enter the app
originGuard:
  resource: # all origins we'd like to capture. you'll have to dig into the network tab for these
    - https://gsplat.tech
    - https://kit.fontawesome.com
    - https://cdn.jsdelivr.net
    - https://ka-f.fontawesome.com
    - https://f005.backblazeb2.com
  navigable: # all origins you want to see in the address bar, usually a very short list
    - https://gsplat.tech
  blacklisted: # all origins that should never be loaded. useful for removing trackers
    - https://getinsights.io

allowNetworkFallback: false # whether resources outside the resource origin list can be loaded at all
networkFallbackMode: whitelist # how origins not covered by the lists above are treated when fallback is allowed
continuousCapture: false # whether we'd like the fallback responses to be captured as well

allowOutlinks: true # whether we allow the app to open links (these will open in the default browser)

# if the app you're capturing is a client to something, this allows for undisturbed api access
allowApiHosts: false
apiHosts:
  - https://api.example.com
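
to make the origin guard rules concrete, here's a rough sketch in typescript of how a single request url could be checked against the lists above. this only illustrates the semantics described in the comments -- it is not strobbery's actual code, and all names are made up:

// the three origin lists from the descriptor's originGuard section
interface OriginGuard {
  resource: string[];    // origins whose responses get captured
  navigable: string[];   // origins allowed to appear in the address bar
  blacklisted: string[]; // origins that must never load
}

type Verdict = "capture" | "block" | "network-fallback";

// hypothetical per-request check, applied while browsing in capture mode
function classifyRequest(url: string, guard: OriginGuard, allowNetworkFallback: boolean): Verdict {
  const origin = new URL(url).origin; // e.g. "https://gsplat.tech"
  if (guard.blacklisted.includes(origin)) return "block"; // trackers etc.
  if (guard.resource.includes(origin)) return "capture";
  return allowNetworkFallback ? "network-fallback" : "block";
}

// top-level navigation is a separate, stricter check
function canNavigate(url: string, guard: OriginGuard): boolean {
  return guard.navigable.includes(new URL(url).origin);
}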

once your descriptor is ready, open strobbery, add it as a new capture (file -> new capture; you'll need to select the yaml descriptor file you created, and capture mode will be enabled by default), and navigate through the site. strobbery captures all the documents in the background, and when you press "save as", it creates an archive of everything it saw.
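
capturing "in the background" presumably boils down to recording every response the embedded browser receives while you click around. here's a minimal sketch of that idea, assuming a hypothetical onResponse hook exposed by the runtime (strobbery's real internals aren't documented here):

// one captured response, keyed by url -- this mirrors the per-file
// metadata described in the file format section below
interface CapturedEntry {
  url: string;
  headers: Record<string, string>;
  body: Uint8Array;
}

const captureStore = new Map<string, CapturedEntry>();

// assumed hook: fires once for every response while capture mode is on
function onResponse(url: string, headers: Record<string, string>, body: Uint8Array): void {
  captureStore.set(url, { url, headers, body });
}

// "save as" would then serialize the store into the zip-based archive
function entriesToSave(): CapturedEntry[] {
  return [...captureStore.values()];
}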

if you already have an archive, you can go to file -> open and open the archive. strobbery will load the settings and the captured data from the file and you can browse the site as if it were live.

if you ever need to extend a capture because you realize you haven't fully explored the site in capture mode, you can continue your work by

  • loading up the file (file -> open)
  • switching to capture mode (runtime -> capture mode)
  • refreshing the site (runtime -> refresh site)

whenever you refresh or reload, strobbery clears all site data in-browser, so the site will behave as if you were visiting it for the first time. if this creates issues for you, report a bug -- handling of local site data is currently in the ideation phase.

if you'd like to test out your capture, just save it (file -> save as) and reload from disk (runtime -> reload from disk). this will turn capture mode off and load the site from the archive. if you have it configured with network fallback, consider turning it off and refreshing the site to see how it behaves without network access.

file format

strobbery files are just fancy zip files. if you'd like to look inside, just open the file in your favorite archive manager. the file structure is as follows:

  • strobbery.yaml: the descriptor file, as discussed above. it is copied into the strobbery archive verbatim, so if you'd like to make any changes, you can do it here.
  • captured/: a folder with the captured files. the autogenerated folder structure follows the file structure online, so for example https://example.com/dog.jpg would be found under captured/example.com/dog.jpg (see the example layout after this list). this is done to help make the file structure navigable, but it holds no semantic meaning: files are matched according to their url, not their location in the folder structure.
    • captured/{file} holds the content of the file. this is the raw binary, so if the file is a format that's useful on its own (like a jpeg image) you can export it directly.
    • captured/{file}.strb.yaml is the per-file metadata descriptor. this holds data like original headers and url at the time of capture.
  • continuous/: a folder with files captured during continuous capture. the structure of this folder is identical to captured/, with the main distinction being the lower importance of continuous capture -- this allows the "clear continuous capture" setting to work between sessions.
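
as a concrete illustration, a small capture of https://gsplat.tech might unpack to a layout like this (the jpg name is taken from the metadata example below; the rest of the names are made up):

strobbery.yaml
captured/
  gsplat.tech/
    index.html
    index.html.strb.yaml
    5cbcc55e748139370334.jpg
    5cbcc55e748139370334.jpg.strb.yaml
continuous/
  ...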

here's an example of a per-file metadata descriptor:

url: https://gsplat.tech/5cbcc55e748139370334.jpg
headers:
  accept-ranges: bytes
  access-control-allow-origin: "*"
  # ...

internally, strobbery takes these and replays the captured responses to the browser, keeping the response body and headers intact. matching is done based on the url field of the descriptor.
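
as a rough typescript sketch of that matching step (names invented for illustration), replay is essentially a url-keyed lookup over the per-file descriptors:

// an index built at load time from every captured/{file}.strb.yaml descriptor
interface ArchivedResponse {
  url: string;                     // the descriptor's url field
  headers: Record<string, string>; // the descriptor's headers field
  body: Uint8Array;                // contents of the sibling captured/{file}
}

const byUrl = new Map<string, ArchivedResponse>();

// intercept a request and serve the archived response verbatim, if present
function replay(requestUrl: string): ArchivedResponse | undefined {
  // matched on the descriptor's url field, not the file's location on disk
  return byUrl.get(requestUrl);
}

a real implementation would also have to decide what a miss means -- network fallback, continuous capture, or a blocked request, depending on the descriptor settings.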