
swarpc

v0.20.0

Published

Full type-safe RPC library for service worker -- move things off of the UI thread with ease!

Readme

RPC for Service (and other types of) Workers -- move that heavy computation off of your UI thread!



Features

  • Fully typesafe
  • Lightweight: no dependencies, less than 5 kB (minified+gzipped)
  • Supports any Standard Schema-compliant validation library (ArkType, Zod, Valibot, etc.)
  • Cancelable requests
  • Parallelization with multiple worker instances
  • Automatic transfer of transferable values to and from worker code
  • Ability to polyfill a pre-filled localStorage for access within worker code
  • First-class support for signaling progress updates (and e.g. display a progress bar)
  • Supports Service workers, Shared workers and Dedicated workers

Installation

npm add swarpc

Also add a Standard-Schema-compliant validation library of your choosing:

# For example
npm add arktype

If you want to use the latest commit instead of a published version, you can, either by using the Git URL:

npm add git+https://github.com/gwennlbh/swarpc.git

Or by straight up cloning the repository and pointing to the local directory (very useful to hack on sw&rpc while testing out your changes on a more substantial project):

mkdir -p vendored
git clone https://github.com/gwennlbh/swarpc.git vendored/swarpc
npm add file:vendored/swarpc

This works because dist/ is committed to the repository (and kept up to date by a CI workflow).

Usage

[!NOTE] We use ArkType in the following examples, but, as stated above, any validation library is a-okay (provided that it is Standard Schema v1-compliant)

1. Declare your procedures in a shared file

import type { ProceduresMap } from "swarpc";
import { type } from "arktype";

export const procedures = {
  searchIMDb: {
    // Input for the procedure
    input: type({ query: "string", "pageSize?": "number" }),
    // Schema of progress updates reported while the procedure runs -- long
    // computations are a first-class concern here (e.g. via the fetch-progress npm package)
    progress: type({ transferred: "number", total: "number" }),
    // Output of a successful procedure call
    success: type({
      id: "string",
      primary_title: "string",
      genres: "string[]",
    }).array(),
  },
} as const satisfies ProceduresMap;

2. Register your procedures in the worker

In your worker file:

import fetchProgress from "fetch-progress"
import { Server } from "swarpc"
import { procedures } from "./procedures.js"

// 1. Give yourself a server instance
const swarpc = Server(procedures)

// 2. Implement your procedures
swarpc.searchIMDb(async ({ query, pageSize = 10 }, onProgress) => {
  const queryParams = new URLSearchParams({
    page_size: pageSize.toString(),
    query,
  })

  return fetch(`https://rest.imdbapi.dev/v2/search/titles?${queryParams}`)
    .then(fetchProgress({ onProgress }))
    .then((response) => response.json())
    .then(({ titles }) => titles)
})

// ...

// 3. Start the event listener
swarpc.start(self)

3. Call your procedures from the client

Here's a Svelte example!

<script>
    import { Client } from "swarpc"
    import { procedures } from "./procedures.js"

    const swarpc = Client(procedures)

    let query = $state("")
    let results = $state([])
    let progress = $state(0)
</script>

<search>
    <input type="text" bind:value={query} placeholder="Search IMDb" />
    <button onclick={async () => {
        results = await swarpc.searchIMDb({ query }, (p) => {
            progress = p.transferred / p.total
        })
    }}>
        Search
    </button>
</search>

{#if progress > 0 && progress < 1}
    <progress value={progress} max="1" />
{/if}

<ul>
    {#each results as { id, primary_title, genres } (id)}
        <li>{primary_title} - {genres.join(", ")}</li>
    {/each}
</ul>

4. Register your worker

Service Workers

If you use SvelteKit, just name your service worker file src/service-worker.ts

If you use any other (meta) framework, please contribute usage documentation here :)
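
Outside of a framework, registering the service worker is just the standard navigator.serviceWorker.register browser API. A minimal sketch, where both the /service-worker.js path and the { type: "module" } option are assumptions about your build setup:

```typescript
import { Client } from "swarpc";
import { procedures } from "./procedures.js";

if ("serviceWorker" in navigator) {
  // The path and module type are assumptions about your build
  // output -- adjust them to match it.
  await navigator.serviceWorker.register("/service-worker.js", {
    type: "module",
  });
}

// With no worker option, the Client talks to the active service worker
const swarpc = Client(procedures);
```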

Dedicated or Shared Workers

Preferred over service workers for heavy computations, since you can run multiple instances of them (see Configure parallelism)

If you use Vite, you can import files as Web Worker classes:

import { Client } from "swarpc";
import { procedures } from "$lib/off-thread/procedures.ts";
import OffThreadWorker from "$lib/off-thread/worker.ts?worker";

const client = Client(procedures, {
  worker: OffThreadWorker, // don't instantiate the class, sw&rpc does it
});

Configure parallelism

By default, when a worker is passed to the Client's options, the client automatically spins up navigator.hardwareConcurrency worker instances and distributes requests among them. You can customize this by setting Client:options.nodes to the number of nodes (worker instances) you want.

When Client:options.worker is not set, the client will use the Service worker (and thus only a single instance).
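
For instance, capping the pool at a fixed size instead of navigator.hardwareConcurrency might look like this (a sketch; the worker import path is illustrative):

```typescript
import { Client } from "swarpc";
import { procedures } from "./procedures.js";
// Vite-style worker import; the path is illustrative
import HeavyWorker from "./worker.ts?worker";

const client = Client(procedures, {
  worker: HeavyWorker, // the class itself, not an instance
  nodes: 4, // spin up exactly 4 worker instances
});
```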

Send to multiple nodes

Use Client#(method name).broadcast to send the same request to all nodes at once. It returns a Promise that resolves to an array of PromiseSettledResults, one per node, each carrying an additional node property: the ID of the node the request was sent to.

For example:

const client = Client(procedures, {
  worker: MyWorker,
  nodes: 4,
});

for (const result of await client.initDB.broadcast("localhost:5432")) {
  if (result.status === "rejected") {
    console.error(
      `Could not initialize database on node ${result.node}`,
      result.reason,
    );
  }
}

You also have a very convenient way to aggregate the results of all nodes, if you don't need to handle errors in a fine-grained way:

const userbase = await client.tableSize.broadcast
  .orThrow("users")
  .then((counts) => sum(counts))
  .catch((e) => {
    // e is an AggregateError with every failing node's error
    console.error("Could not get total user count:", e);
  });

Otherwise, you have access to a handful of convenience properties on the returned array, to help you narrow down what happened on each node:

async function userbase() {
  const counts = await client.tableSize.broadcast("users");

  if (counts.ko) {
    throw new Error(
      `All nodes failed to get table size: ${counts.failureSummary}`,
    );
  }

  return {
    exact: counts.ok,
    count:
      sum(counts.successes) +
      average(counts.successes) * counts.failures.length,
  };
}

Make cancelable requests

Implementation

To make your procedures meaningfully cancelable, you have to make use of the AbortSignal API. This is passed as a third argument when implementing your procedures:

server.searchIMDb(async ({ query }, onProgress, { abortSignal }) => {
  // If you're doing heavy computation without fetch:
  // Use `abortSignal?.throwIfAborted()` within hot loops and at key points
  for (...) {
    abortSignal?.throwIfAborted();
    ...
  }

  // When using fetch:
  await fetch(..., { signal: abortSignal })
})

Call sites

Instead of calling await client.myProcedure() directly, call client.myProcedure.cancelable(). You'll get back an object with

  • async cancel(reason): a function to cancel the request
  • request: a Promise that resolves to the result of the procedure call. await it to wait for the request to finish.

Example:

// Normal call:
const result = await swarpc.searchIMDb({ query });

// Cancelable call:
const { request, cancel } = swarpc.searchIMDb.cancelable({ query });
setTimeout(() => cancel().then(() => console.warn("Took too long!!")), 5_000);
await request;

Call in "once" mode

The "once" mode allows you to automatically cancel any previous ongoing call before running a new one. This is useful for scenarios like search-as-you-type, where you only care about the latest request.

Method-scoped once mode

Cancel any previous call of the same method:

// If any previous call of searchIMDb is ongoing, it gets cancelled beforehand
const result = await swarpc.searchIMDb.once({ query });

Method-scoped once mode with key

Cancel any previous call of the same method with the same key:

// If any previous call of searchIMDb with "foo" as the key is ongoing,
// it gets cancelled beforehand
const result = await swarpc.searchIMDb.onceBy("foo", { query });

This allows multiple concurrent calls with different keys:

// These two calls can run concurrently
const result1 = await swarpc.searchIMDb.onceBy("search-bar", {
  query: "action",
});
const result2 = await swarpc.searchIMDb.onceBy("sidebar", { query: "comedy" });

Global once mode

Cancel any ongoing call with the same global key, across all methods:

// Any call from ANY procedure with "global-search" key gets cancelled beforehand
const result = await swarpc.onceBy("global-search").searchIMDb({ query });

This is useful when you want to ensure only one operation of a certain type is running at a time, regardless of which procedure is being called.

With broadcasting

You can combine "once" mode with broadcasting as well, just use .broadcast.once or .broadcast.onceBy instead of .once or .onceBy:

// Load the inference model on all nodes. If we call this again before the previous model finishes loading,
// the previous load requests get cancelled.
await swarpc.loadInferenceModel.broadcast.once({ url });

Polyfill a localStorage for the Server to access

You might call third-party code that accesses localStorage from within your procedures.

Some workers don't have access to the browser's localStorage, so such calls will throw.

You can work around this by telling swarpc which localStorage items to define on the Server; it will then create a polyfilled localStorage with your data.

An example use case is using Paraglide, an i18n library, with the localStorage strategy:

// In the client
import { getLocale } from "./paraglide/runtime.js";

const swarpc = Client(procedures, {
  localStorage: {
    PARAGLIDE_LOCALE: getLocale(),
  },
});

await swarpc.myProcedure(1, 0);

// In the server
import { m } from "./paraglide/runtime.js";
const swarpc = Server(procedures);

swarpc.myProcedure(async (a, b) => {
  if (b === 0) throw new Error(m.cannot_divide_by_zero());
  return a / b;
});