
@volynets/reflex

v0.3.4

Public Reflex facade with a connected runtime

Small signal-style reactivity on top of the Reflex runtime.

@volynets/reflex is the product-facing API for building reactive state, derived values, effects, and event-driven state without dropping down to the lower-level runtime primitives.

This package is intentionally the facade/scheduler layer. Policy such as flush(), effectStrategy, batch() behavior, and event delivery semantics lives here, not in @reflex/runtime.

It gives you:

  • a compact signal-style API
  • runtime-backed execution with explicit effect flushing
  • disposable typed models with batched untracked actions
  • event sources plus composition helpers like map(), filter(), merge(), scan(), and hold()
  • predictable semantics for lazy derived values and scheduled effects

Under the hood it is built on the lower-level @reflex/runtime primitives; this package layers scheduling and policy on top of them.

Install

npm install @volynets/reflex

Quick Start

import { computed, createRuntime, effect, signal } from "@volynets/reflex";

const rt = createRuntime();

const [count, setCount] = signal(1);
const doubled = computed(() => count() * 2);

effect(() => {
  console.log("doubled =", doubled());
});

setCount(2);
rt.flush();

What is happening here:

  • createRuntime() configures the active Reflex runtime and returns runtime controls
  • signal(), computed(), and effect() use that active runtime
  • effect() runs once immediately when created
  • later effect re-runs are scheduled, so with the default runtime you call rt.flush() to run them
  • computed() stays lazy and does not need flush() just to produce the latest value on read
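To make the flush model concrete, here is a tiny standalone sketch of flush-scheduled effects in plain TypeScript. It is a simplified illustration of the semantics described above, not the library's implementation; signal, effect, and flush are local stand-ins for the real primitives.

```typescript
// Miniature model of "effects run once, then wait for flush()".
// No dependency graph; just enough to show why invalidated effects
// are queued rather than run on write.
type EffectFn = () => void;

const queue = new Set<EffectFn>();
let current: EffectFn | null = null;

function signal<T>(initial: T): [() => T, (next: T) => void] {
  let value = initial;
  const subscribers = new Set<EffectFn>();
  const read = () => {
    const cur = current;
    if (cur) subscribers.add(cur); // track the running effect
    return value;
  };
  const write = (next: T) => {
    if (Object.is(next, value)) return; // same-value writes do nothing
    value = next;
    for (const fn of subscribers) queue.add(fn); // schedule, don't run
  };
  return [read, write];
}

function effect(fn: EffectFn): void {
  const run = () => {
    current = run;
    try { fn(); } finally { current = null; }
  };
  run(); // runs once immediately on creation
}

function flush(): void {
  const pending = [...queue];
  queue.clear();
  for (const fn of pending) fn(); // re-run invalidated effects
}

const log: number[] = [];
const [count, setCount] = signal(1);
effect(() => { log.push(count() * 2); });
setCount(2); // invalidates the effect, but does not run it yet
flush();     // the scheduled re-run happens here
console.log(log); // [2, 4]
```

In this model, an eager strategy would simply drain the queue inside write; keeping that step explicit is what rt.flush() exposes in the default strategy.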

Mental Model

Think of the package as one connected model:

  1. Call createRuntime() once during setup.
  2. Build state with signal(), computed(), memo(), and effect().
  3. Create event sources with rt.event().
  4. Compose event sources with map(), filter(), merge(), and use scan() / hold() to turn events into readable state.
  5. Call rt.flush() when you want scheduled effects to run in the default "flush" mode.

The top-level primitives are not methods on rt, but they are still runtime-backed. createRuntime() is not only for event() and flush(): it configures the runtime that the public primitives run against.

Design Goals

  • Keep the public API small and easy to teach.
  • Preserve explicit runtime control instead of hiding scheduling.
  • Make derived state cheap to read through lazy cached computeds.
  • Support both state-style and event-style reactive flows.
  • Provide a small ownership boundary for feature-local reactive state.
  • Expose low-level escape hatches only when needed, without forcing them into normal usage.

Core Primitives

Signals and derived values

import { computed, createRuntime, memo, signal } from "@volynets/reflex";

createRuntime();

const [price, setPrice] = signal(100);
const [tax, setTax] = signal(20);

const total = computed(() => price() + tax());
const warmed = memo(() => total() * 2);

console.log(total());  // 120
console.log(warmed()); // 240

setPrice(120);
setTax(25);

console.log(total());  // 145
console.log(warmed()); // 290

Events and accumulated state

import { computed, createRuntime, effect, hold, scan } from "@volynets/reflex";

const rt = createRuntime();
const updates = rt.event<number>();

const [total, disposeTotal] = scan(updates, 0, (acc, value) => acc + value);
const [latest, disposeLatest] = hold(updates, 0);
const summary = computed(() => `${latest()} / ${total()}`);

effect(() => {
  console.log(summary());
});

updates.emit(1);
updates.emit(2);
rt.flush();

disposeTotal();
disposeLatest();

scan() and hold() return tuples on purpose:

  • the first item is the accessor you read from
  • the second item is a disposer that unsubscribes from the event source and releases the internal node
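As a standalone illustration of that tuple contract, here is a minimal sketch of scan() over a hand-rolled event source. These are simplified local stand-ins, not the library's internals:

```typescript
interface EventSource<T> {
  emit(value: T): void;
  subscribe(fn: (value: T) => void): () => void; // returns unsubscribe
}

function createEvent<T>(): EventSource<T> {
  const listeners = new Set<(value: T) => void>();
  return {
    emit: (value) => { for (const fn of listeners) fn(value); },
    subscribe: (fn) => {
      listeners.add(fn);
      return () => { listeners.delete(fn); };
    },
  };
}

// scan(): fold event values into state, return [read, dispose].
function scan<T, A>(
  source: EventSource<T>,
  seed: A,
  reducer: (acc: A, value: T) => A,
): [() => A, () => void] {
  let acc = seed;
  const unsubscribe = source.subscribe((value) => { acc = reducer(acc, value); });
  return [() => acc, unsubscribe]; // accessor + disposer
}

const clicks = createEvent<number>();
const [total, dispose] = scan(clicks, 0, (acc, v) => acc + v);

clicks.emit(1);
clicks.emit(2);
console.log(total()); // 3
dispose();            // unsubscribes from the source
clicks.emit(10);      // no longer accumulated
console.log(total()); // still 3
```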

Models

import {
  computed,
  createModel,
  createRuntime,
  own,
  signal,
} from "@volynets/reflex";

createRuntime();

const createCounterModel = createModel((ctx, initial = 0) => {
  const [count, setCount] = signal(initial);
  const doubled = computed(() => count() * 2);

  const timer = own(ctx, {
    [Symbol.dispose]() {
      console.log("timer disposed");
    },
  });

  return {
    count,
    doubled,
    inc: ctx.action(() => setCount((value) => value + 1)),
    reset: ctx.action(() => setCount(initial)),
    timer,
  };
});

const counter = createCounterModel(1);

counter.inc();
console.log(counter.doubled()); // 4

counter[Symbol.dispose]();

Model rules:

  • return only readable reactive values, ctx.action(...), and nested objects
  • model actions run untracked and inside the active batch()
  • use ctx.onDispose(...) or own(ctx, value) for owned resources
  • do not return effect() from a model; effects are rejected by both types and runtime validation
  • in dev builds, own() warns if the resource looks already disposed

Full contract: docs/models.md

Model Semantics (Contract)

  1. Lifecycle
    • created when you call the factory returned by createModel(...)
    • disposed when model[Symbol.dispose]() is called
    • disposal is idempotent: calling model[Symbol.dispose]() multiple times has no additional effect
    • a disposed model is "dead": actions throw, and cleanup hooks will not run again
    • dead models are not reusable; construct a new instance instead
  2. Ownership contract
    • own(ctx, value) registers one disposal; sharing the same resource across multiple models will dispose it multiple times
    • passing an already-disposed resource is allowed but discouraged; own() does not guard against it
    • dispose order is guaranteed LIFO (last registered cleanup runs first)
    • nested models can be owned: own(ctx, createChildModel()) is valid
  3. Action semantics
    • actions can be nested; each action participates in the active batch()
    • if no batch is active, the outermost action creates one; nested actions reuse it and do not flush independently
    • actions run untracked; dependency tracking is suspended during the action
    • if an action throws, the error is rethrown and tracking/batch state is restored
    • return values pass through unchanged
    • actions are synchronous for reactive correctness; async work runs outside the batch/untracked scope
  4. Post-dispose behavior
    • actions always throw after disposal
    • during disposal, the model is already marked dead; actions invoked from cleanups throw the same as after disposal
    • reads from previously returned accessors are outside the model contract: they may appear to work, but are not guaranteed to be valid or stable
    • effects are not allowed in models; subscriptions or external resources must be torn down via ctx.onDispose()/own()
  5. Visibility
    • anything returned from the model is part of its public API
    • own(ctx, value) and ctx.onDispose(...) are lifecycle primitives, not public surface
    • keep internal details private by not returning them, or document them explicitly

Error policy for dispose:

  • all cleanups run in LIFO order
  • cleanup errors are logged and do not prevent remaining cleanups from running
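The ownership and error rules above can be pictured with a small standalone dispose bag. This is a hypothetical helper showing the stated policy, not the library's code:

```typescript
// Minimal dispose bag: cleanups run in LIFO order, disposal is
// idempotent, and a cleanup that throws does not prevent the
// remaining cleanups from running.
function createDisposeBag() {
  const cleanups: Array<() => void> = [];
  let disposed = false;
  return {
    onDispose(fn: () => void) { cleanups.push(fn); },
    dispose() {
      if (disposed) return; // second call is a no-op
      disposed = true;
      while (cleanups.length > 0) {
        const fn = cleanups.pop()!; // last registered runs first
        try {
          fn();
        } catch (err) {
          console.error("cleanup failed:", err); // logged, not rethrown
        }
      }
    },
  };
}

const order: string[] = [];
const bag = createDisposeBag();
bag.onDispose(() => order.push("first registered"));
bag.onDispose(() => { throw new Error("boom"); }); // isolated
bag.onDispose(() => order.push("last registered"));

bag.dispose();
console.log(order); // ["last registered", "first registered"]
bag.dispose();      // idempotent
```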

Event composition

import {
  createRuntime,
  filter,
  map,
  merge,
  subscribeOnce,
} from "@volynets/reflex";

const rt = createRuntime();
const clicks = rt.event<number>();
const submits = rt.event<string>();

const importantClicks = filter(clicks, (value) => value > 0);
const labels = merge(
  map(importantClicks, (value) => `click:${value}`),
  map(submits, (value) => `submit:${value}`),
);

subscribeOnce(labels, (value) => {
  console.log("first label =", value);
});

Guarantees

  • computed(fn) is lazy. It does not run until the first read.
  • computed(fn) is cached. Repeated clean reads reuse the last value.
  • memo(fn) is computed(fn) plus one eager warm-up read.
  • effect(fn) runs once immediately on creation.
  • If an effect returns cleanup, that cleanup runs before the next effect run and on dispose.
  • With the default runtime, invalidated effects run on rt.flush().
  • With createRuntime({ effectStrategy: "sab" }), invalidated effects stay lazy during a batch and auto-deliver when the outermost batch exits.
  • With createRuntime({ effectStrategy: "eager" }), invalidated effects flush automatically.
  • Pure signal and computed reads do not require flush().
  • Same-value signal writes do not force recomputation.
  • Derived events created with map(), filter(), and merge() are lazy and subscribe upstream only while observed.
  • scan() and hold() update only from event deliveries.
  • Nested event emits are delivered after the current delivery finishes, preserving order.
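The nested-emit guarantee can be modeled with a small delivery queue. The sketch below is a standalone illustration of the described semantics, not the runtime's actual code:

```typescript
// Re-entrancy-safe event source: an emit() that happens while a
// delivery is in progress is queued and delivered afterwards,
// preserving order instead of interleaving subscriber calls.
function createEvent<T>() {
  const listeners = new Set<(value: T) => void>();
  const pending: T[] = [];
  let delivering = false;

  function emit(value: T) {
    pending.push(value);
    if (delivering) return; // nested emit: deliver after current finishes
    delivering = true;
    try {
      while (pending.length > 0) {
        const next = pending.shift()!;
        for (const fn of listeners) fn(next);
      }
    } finally {
      delivering = false;
    }
  }

  return {
    emit,
    subscribe(fn: (value: T) => void) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}

const seen: number[] = [];
const ev = createEvent<number>();
ev.subscribe((v) => {
  seen.push(v);
  if (v === 1) ev.emit(2); // nested emit from inside a delivery
});
ev.emit(1);
console.log(seen); // [1, 2] — nested value delivered after, not during
```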

Runtime

createRuntime() returns an object with the runtime-facing pieces of the model:

const rt = createRuntime({
  effectStrategy: "flush", // or "sab" / "eager"
  hooks: {
    onSinkInvalidated(node) {
      // low-level integration hook
    },
  },
});

Options:

  • effectStrategy: "flush" | "sab" | "eager" controls whether invalidated effects wait for rt.flush(), stabilize after batch(), or run automatically
  • hooks.onSinkInvalidated(node) is the low-level hook for integrations that want to observe sink invalidation

Returned API:

  • rt.event<T>() creates an event source with emit(value) and subscribe(fn)
  • rt.flush() runs queued effects
  • rt.ctx exposes the underlying runtime context for low-level integration, debugging, or tests

Important notes:

  • For normal app code, create one runtime near startup and keep using the top-level primitives.
  • ctx is low-level. Most users should not need it.
  • Creating a new runtime resets the shared runtime state. It is best treated as app setup or test isolation, not as something you create repeatedly inside feature code.

Unstable

Experimental helpers live under @volynets/reflex/unstable.

optimistic(valueOrFn)

Creates a temporary optimistic overlay on top of either a fixed fallback value or a tracked derived fallback.

import { createRuntime, signal } from "@volynets/reflex";
import { optimistic, transition } from "@volynets/reflex/unstable";

createRuntime();

const [serverTitle, setServerTitle] = signal("Draft");
const [title, setTitle] = optimistic(() => serverTitle());

await transition(async () => {
  setTitle("Saving...");
  setServerTitle("Published");
  await Promise.resolve();
});

console.log(title()); // "Published"

Useful patterns:

  • fixed fallback with automatic microtask revert

const [status, setStatus] = optimistic("idle");

setStatus("saving");
console.log(status()); // "saving"

await Promise.resolve();
console.log(status()); // "idle"

  • updater functions build on the latest optimistic value

const [count, setCount] = optimistic(10);

setCount((prev) => prev + 5);
setCount((prev) => prev * 2);

console.log(count()); // 30

transition(fn)

Keeps optimistic layers created during fn alive until the transition settles. For sync callbacks that means until fn returns. For async callbacks that means until the returned promise resolves or rejects.
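One way to picture that keep-alive behavior is a pending-transition counter that gates the revert. The sketch below is a hypothetical standalone model of the described semantics (the real mechanism may differ), and this toy optimistic supports only the fixed-fallback form:

```typescript
// Hypothetical model: reverts are deferred while a transition is pending.
let pendingTransitions = 0;
const revertQueue: Array<() => void> = [];

function optimistic<T>(fallback: T): [() => T, (next: T) => void] {
  let overlay: { value: T } | null = null;
  const read = () => (overlay ? overlay.value : fallback);
  const write = (next: T) => {
    overlay = { value: next };
    const revert = () => { overlay = null; };
    if (pendingTransitions > 0) {
      revertQueue.push(revert); // kept alive until the transition settles
    } else {
      queueMicrotask(revert);   // automatic microtask revert
    }
  };
  return [read, write];
}

function transition(fn: () => void | Promise<void>): void | Promise<void> {
  pendingTransitions++;
  const settle = () => {
    pendingTransitions--;
    if (pendingTransitions === 0) {
      for (const revert of revertQueue.splice(0)) revert();
    }
  };
  let result: unknown;
  try {
    result = fn();
  } catch (err) {
    settle();
    throw err;
  }
  if (result instanceof Promise) return result.finally(settle) as Promise<void>;
  settle(); // sync callback: settled as soon as fn returns
}

const [status, setStatus] = optimistic("idle");
const log: string[] = [];
transition(() => {
  setStatus("saving");
  log.push(status()); // optimistic value visible inside the transition
});
log.push(status());   // reverted to the fallback once it settled
console.log(log);     // ["saving", "idle"]
```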

API Reference

signal(initialValue)

Creates writable reactive state.

const [value, setValue] = signal(0);

  • value() reads the current value
  • setValue(next) writes a new value
  • setValue((prev) => next) supports updater functions

computed(fn)

Creates a lazy derived accessor.

const doubled = computed(() => count() * 2);

  • tracks dependencies dynamically while fn runs
  • caches the last computed value
  • recomputes on demand when dirty and read again

memo(fn)

Creates a computed accessor and warms it once immediately.

const total = memo(() => price() + tax());

Use it when you want computed semantics with an eager first read.
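The relationship can be sketched directly in code. This standalone toy computed uses an explicit invalidate() in place of real dependency tracking, just to show the lazy/cached reads and the memo warm-up:

```typescript
// Simplified lazy/cached computed: fn runs on first read, then the
// cached value is reused until invalidate() marks it dirty.
function computed<T>(fn: () => T): (() => T) & { invalidate(): void } {
  let dirty = true;
  let cached!: T;
  const read = () => {
    if (dirty) { cached = fn(); dirty = false; }
    return cached;
  };
  return Object.assign(read, { invalidate() { dirty = true; } });
}

// memo(): a computed warmed by one eager read at creation time.
function memo<T>(fn: () => T): (() => T) & { invalidate(): void } {
  const c = computed(fn);
  c(); // eager warm-up read
  return c;
}

let runs = 0;
const lazy = computed(() => { runs++; return 42; });
console.log(runs); // 0 — has not run yet
lazy();
console.log(runs); // 1 — first read runs fn
lazy();
console.log(runs); // 1 — clean reads reuse the cache

let warmRuns = 0;
memo(() => { warmRuns++; return 7; });
console.log(warmRuns); // 1 — memo ran once eagerly at creation
```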

effect(fn)

Creates a reactive effect.

const stop = effect(() => {
  console.log(count());
});

  • runs immediately once
  • tracks reactive reads
  • may return a cleanup function
  • cleanup runs before the next execution and on dispose
  • returns a callable disposer with .dispose()

createModel(factory)

Creates a typed disposable model factory.

const createTodoModel = createModel((ctx) => {
  const [title, setTitle] = signal("");

  return {
    title,
    rename: ctx.action((next: string) => setTitle(next)),
  };
});

  • returned model instances are disposable via model[Symbol.dispose]()
  • ctx.action(...) creates the only supported function values inside the model shape
  • actions are batched and untracked
  • nested objects are allowed
  • effect() values are forbidden inside the returned model shape

own(ctx, value)

Registers a nested disposable so it is disposed together with the model.

const socket = own(ctx, {
  [Symbol.dispose]() {
    ws.close();
  },
});

isModel(value)

Returns true when value exposes the Reflex model disposal surface, including models created by createModel().

rt.event<T>()

Creates an event source.

const clicks = rt.event<number>();

  • clicks.emit(value) delivers an event
  • clicks.subscribe(fn) subscribes to events

subscribeOnce(source, fn)

Subscribes to the next value from source, then unsubscribes automatically.

subscribeOnce(clicks, (value) => {
  console.log("first click =", value);
});

map(source, project)

Projects each event value into a new event stream.

const labels = map(clicks, (value) => `click:${value}`);

filter(source, predicate)

Forwards only the values that satisfy predicate.

const positive = filter(clicks, (value) => value > 0);

merge(...sources)

Combines multiple event sources into one event stream.

const all = merge(clicks, submits);

scan(source, seed, reducer)

Accumulates event values over time and returns [read, dispose].

const [total, dispose] = scan(clicks, 0, (acc, value) => acc + value);

reducer should be a pure event reducer. If you want to combine the accumulated value with reactive state, do that outside the reducer via computed().

hold(source, initial)

Stores the latest event payload and returns [read, dispose].

const [latest, dispose] = hold(updates, "idle");

Equivalent in behavior to:

scan(updates, "idle", (_, value) => value);

FAQ

Are signal(), computed(), and effect() global, or tied to rt?

They are exported as top-level functions, but they run against the currently configured Reflex runtime. createRuntime() sets up that runtime and gives you the runtime controls such as event(), flush(), and ctx.

Do I always need to call flush()?

No. You need flush() for scheduled effects when using the default effectStrategy: "flush". In effectStrategy: "sab", effects auto-deliver after the outermost batch(). You do not need flush() just to read up-to-date signal() or computed() values.

Is computed() lazy or eager?

Lazy. It does not run until the first read. After that it behaves like a cached derived value that recomputes only when dirty and read again.

What is the difference between computed() and memo()?

memo() is a warmed computed(). It performs one eager read immediately after creation, then keeps the same accessor semantics.

Does effect() run immediately?

Yes. It runs once on creation. Future re-runs happen after invalidation, either on rt.flush(), at the end of an outermost batch in sab, or automatically when using the eager effect strategy.

Why do scan() and hold() return tuples instead of only an accessor?

Because they own an event subscription. The accessor lets you read the current accumulated state, and the disposer lets you unsubscribe and clean up explicitly.

Should I use rt.ctx?

Usually no. ctx is a low-level escape hatch for integration code, tests, and runtime debugging.

Can I create multiple runtimes?

Treat createRuntime() as creating the active runtime for an app instance or test. Creating a new runtime resets shared runtime state, so this is not intended as a pool of concurrently active runtimes inside one reactive graph.

License

MIT