
graphenix

v2.0.0

Graphenix (v2)

Graphenix is a graph execution planner.

It does not run your code for you — it tells you which node(s) should run next, and it helps you track execution state (history, variables, outputs).

This version supports two execution semantics:

  • Flow / workflow execution (forward chaining): "do step 1 → step 2 → step 3a+3b → step 4"
  • Dependency / goal execution (backward chaining): "to answer the goal, resolve these prerequisites first"

Install

npm i graphenix

The package is ESM ("type": "module"); import it using ESM syntax.


Core idea

Graphenix separates the problem into two layers:

  1. Graph definition (nodes + edges)
  2. Execution semantics
    • edge.semantic = "flow" (default): forward / workflow DAG
    • edge.semantic = "dependency": backward / goal decomposition DAG

You can mix both in the same graph (executionMode: "hybrid").


Terminology: Job, Task, Skill

Graphenix terms (how you should think about it):

  • Job: one full graph execution (one run of a graph instance)
  • Task: one node execution (each node is a task)
  • Skill: internal implementation step(s) inside a task (inside your node runner)

How this maps to @athenices/execution-memory-manager

Graphenix stores the Job (graph run) as a RunHistory object, and each Task (node run) becomes a MemoryObject appended on engine.commit(...).

This keeps the memory timeline aligned with your execution model:

  • Graphenix Job (graph execution) => RunHistory
  • Graphenix Task (node execution) => one MemoryObject per commit
  • Graphenix Skill (internal) => optional additional MemoryObjects added by the node runner

Data model

Nodes

Nodes can be any type string you want. A few common ones are built in:

  • start, end
  • standard, decision, parallel, merge
  • goal, aggregation

For goal/dependency execution, nodes can optionally declare:

  • requires?: string[] — keys that must exist before this node is runnable
  • provides?: string[] — keys that become satisfied when this node commits successfully
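As a sketch, a pair of nodes using these optional fields might look like this (node ids and keys below are invented for illustration; the field names follow this README):

```javascript
// Hypothetical nodes using requires/provides. The 'research' node
// satisfies the 'facts' key on commit; the 'answer' goal becomes
// runnable only once 'facts' exists.
const nodes = [
  {
    id: 'research',
    type: 'standard',
    coordinates: { default: '0,0' },
    provides: ['facts']          // satisfied when this node commits
  },
  {
    id: 'answer',
    type: 'goal',
    coordinates: { default: '0,1' },
    requires: ['facts']          // runnable only after 'facts' exists
  }
];
```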

Edges

Edges have two orthogonal concepts:

  • edge.semantic:

    • flow (default) — workflow edges
    • dependency — prerequisite edges (from is required for to)
  • edge.type (how the edge is chosen):

    • direct
    • conditional (uses edge.condition)
    • probabilistic (uses edge.weight)
    • dimensional (placeholder for future dimension filtering)
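Since the two fields are orthogonal, they combine freely. The edges below are an illustrative sketch (node ids are invented; semantic, type, and condition are the documented fields):

```javascript
const edges = [
  // workflow edge, always taken
  { from: 'start', to: 'fetch', semantic: 'flow', type: 'direct' },
  // workflow edge, taken only when its condition holds
  {
    from: 'fetch',
    to: 'retry',
    semantic: 'flow',
    type: 'conditional',
    condition: {
      type: 'simple',
      simple: { left: '$state.failed', operator: '==', right: true }
    }
  },
  // prerequisite edge: 'fetch' must resolve before 'report'
  { from: 'fetch', to: 'report', semantic: 'dependency', type: 'direct' }
];
```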

GraphBuilder

import { GraphBuilder } from 'graphenix';

const graph = new GraphBuilder()
  .id('example')
  .dimensions('default')
  .variables({})
  .executionMode('forward')          // 'forward' | 'backward' | 'hybrid'
  .outputStore('both')               // 'byNodeId' | 'byKey' | 'both'
  .goals('main-goal')                // optional default goal(s)
  .entryPoint('start')
  .addNode({ id: 'start', type: 'start', coordinates: { default: '0,0' } })
  // ... add nodes/edges ...
  .build();

Notes:

  • If you don’t set .goals(...), goals are inferred from nodes where node.type === "goal".

GraphEngine

import { GraphEngine } from 'graphenix';

const engine = new GraphEngine(graph, /* initialState */ {});

Planning vs committing

Graphenix separates:

  • plan(...) — ask “what should run next?”
  • commit(...) — record the result of a node run

For backward compatibility with classic workflow usage, getNextNodes(result) is still provided.

API

  • engine.plan({ mode?, fromNodeId?, goalNodeId?, dimension? })
  • engine.commit({ nodeId, output?, error?, metadata? })
  • engine.getNextNodes(result) => commit(result) + plan({ mode: 'forward', fromNodeId: result.nodeId })

Introspection:

  • engine.getContext() → state, variables, outputs, history
  • engine.getHistory()
  • engine.getVariable(name) / engine.setVariable(name, value)
  • engine.reset(initialState?)

If memory integration is enabled (see below):

  • engine.getJobHistory() => the RunHistory object
  • engine.getJobMemories(options?) => all MemoryObjects (includes clips + objects by default)
  • engine.getJobMemoriesMD(options?) => markdown formatted history
  • engine.finalizeJobHistory(metadata?) => set endTime and merge metadata

Execution memory integration

Graphenix can automatically write each node execution into a RunHistory object using @athenices/execution-memory-manager.

Enable it via graph.config.memory.enabled:

import { GraphBuilder, GraphEngine } from 'graphenix';

const graph = new GraphBuilder()
  .id('my-graph')
  .dimensions('default')
  .variables({})
  .entryPoint('start')
  .config({
    memory: {
      enabled: true,
      autoAddOnCommit: true,
      // Role strategy (recommended): makes every node commit queryable by its nodeId
      // e.g. $memory.last.task:planner
      roleStrategy: 'nodeId',
      rolePrefix: 'task',
      clipOnErrorLabel: 'errors',
      historyContext: 'My Graph Job',
      historyObjective: 'Execute my-graph'
    }
  })
  // ... nodes/edges ...
  .build();

const engine = new GraphEngine(graph, {});

What gets written

Each engine.commit({ nodeId, output, error }) appends one MemoryObject to the job history:

  • context: Task <nodeId>
  • objective: node.metadata.objective/label/title (fallback: node.type)
  • role (Task role):
    • if memory.useNodeMetadataRole !== false and node.metadata.role is set => use it
    • else if memory.roleStrategy is set => derived role (see below)
    • else if memory.defaultRole is set => use it (legacy)
    • else => task:<nodeId> (default)
  • memory: { nodeId, nodeType, status, timestamp, provides, output, error, metadata }

If error is present, the memory is clipped under clipOnErrorLabel.

Role strategies (Task roles)

Graphenix uses the memory role as the primary way to query task executions via $memory.*.

Default behavior (recommended): each task (node) is recorded with a role in the form:

  • task:<nodeId>

This makes it easy to reference a specific node’s latest run:

  • $memory.last.task:planner — last memory for nodeId planner
  • $memory.last.task:planner.memory.output — output of the last planner run

Role configuration

You can control role derivation in graph.config.memory:

  • roleStrategy: 'nodeId' (default) + rolePrefix: 'task' => task:<nodeId>
  • roleStrategy: 'nodeType' + rolePrefix: 'task' => taskType:<nodeType>
  • roleStrategy: 'custom' + customRole: '...' => always uses that role
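A minimal sketch of this derivation order, combining the strategy table above with the priority rules from "What gets written" (deriveRole is a hypothetical helper, not part of the graphenix API):

```javascript
// Derive the memory role for a node commit, per the documented rules:
// node.metadata.role wins (unless disabled), then roleStrategy,
// then defaultRole, then the task:<nodeId> fallback.
function deriveRole(memoryConfig, node) {
  if (memoryConfig.useNodeMetadataRole !== false && node.metadata?.role) {
    return node.metadata.role;                 // node-level override wins
  }
  const prefix = memoryConfig.rolePrefix ?? 'task';
  switch (memoryConfig.roleStrategy) {
    case 'nodeId':   return `${prefix}:${node.id}`;
    case 'nodeType': return `${prefix}Type:${node.type}`;
    case 'custom':   return memoryConfig.customRole;
    default:         return memoryConfig.defaultRole ?? `task:${node.id}`;
  }
}
```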

Node override

If memory.useNodeMetadataRole !== false (default) and node.metadata.role is set, it takes priority.

Note about dots in node IDs

$memory.* uses . as a path separator, so roles cannot contain dots. If your nodeId contains dots, set node.metadata.role explicitly (e.g. task:my_node) or use a customRole.


Using memory in conditions ($memory.*)

Graphenix conditions can reference the current job memory using $memory.* paths:

  • $memory.last — last task memory (any role)
  • $memory.last.<role> — last task memory for a role
  • $memory.last.<role>[N] — last N task memories for a role (array)
  • $memory.first / $memory.first.<role> — same, from the start
  • $memory.clip.<label> — clipped memories for a label (array)
  • $memory.all — all memories (includes clips + objects)
  • $memory.all.<role> — all memories filtered by role

Example edge condition:

{
  from: 'decide',
  to: 'fallback',
  semantic: 'flow',
  type: 'conditional',
  condition: {
    type: 'simple',
    simple: {
      // Example: check whether the last run of node "decide" had an error
      left: '$memory.last.task:decide.memory.error',
      operator: 'exists',
      right: true
    }
  }
}

Execution mode A — dependency / goal execution (backward chaining)

This models Graph A: a main goal depends on sub-questions, which may depend on sub-sub-questions.

How to model it

  • Use edge.semantic: "dependency"
  • Direction is prerequisite → dependent:
    • subQuestion -> mainGoal
    • subSub -> subQuestion

Optional (recommended):

  • Use node.type: "goal" for your goal node.
  • Use node.provides to declare what a node satisfies.
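Graph A could be laid out as plain node/edge data like this (all ids and keys are invented for illustration); note that every dependency edge points prerequisite → dependent:

```javascript
// A goal with two sub-questions, one of which has its own prerequisite.
const nodes = [
  { id: 'main',   type: 'goal',     coordinates: { default: '0,0' } },
  { id: 'subQ1',  type: 'standard', coordinates: { default: '0,1' }, provides: ['answer:subQ1'] },
  { id: 'subQ2',  type: 'standard', coordinates: { default: '1,1' }, provides: ['answer:subQ2'] },
  { id: 'subSub', type: 'standard', coordinates: { default: '0,2' } }
];
const edges = [
  { from: 'subQ1',  to: 'main',  semantic: 'dependency', type: 'direct' },
  { from: 'subQ2',  to: 'main',  semantic: 'dependency', type: 'direct' },
  { from: 'subSub', to: 'subQ1', semantic: 'dependency', type: 'direct' }
];
```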

How it runs

  • engine.plan({ mode: 'backward', goalNodeId }) returns the set of runnable unmet prerequisites (often parallelizable).
  • You execute those nodes (your code), then call engine.commit(...).
  • Repeat planning until status is complete.

Example

let plan = engine.plan({ mode: 'backward', goalNodeId: 'main' });
while (plan.status === 'continue') {
  // Run leaves (often parallel)
  for (const n of plan.nextNodes) {
    const output = await runNode(n.nodeId);
    engine.commit({ nodeId: n.nodeId, output });
  }

  plan = engine.plan({ mode: 'backward', goalNodeId: 'main' });
}

if (plan.status === 'complete') {
  // goal is resolved (its prerequisites are satisfied and/or it ran)
}

When is a node considered “resolved”?

A node is resolved if any of these are true:

  • it was committed successfully (nodeStatus[nodeId] === 'success')
  • an output exists in outputs.byNodeId[nodeId]
  • OR it has provides, and all those keys exist in outputs.byKey
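The three conditions above can be sketched as a small predicate (isResolved is a hypothetical helper, not the engine's actual implementation; ctx mirrors the context shape documented below):

```javascript
// A node counts as resolved if it committed successfully, has a stored
// output, or all of its provides keys already exist in outputs.byKey.
function isResolved(node, ctx) {
  if (ctx.nodeStatus[node.id] === 'success') return true;
  if (node.id in ctx.outputs.byNodeId) return true;
  return Array.isArray(node.provides) && node.provides.length > 0 &&
    node.provides.every((key) => key in ctx.outputs.byKey);
}
```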

Execution mode B — workflow execution (forward chaining)

This models Graph B: a procedural pipeline with optional branching and fork–join.

How to model it

  • Use edge.semantic: "flow" (or omit it; flow is default)
  • Use edge type to control routing (direct, conditional, ...)

Typical usage

// After you run node "step2" in your own executor:
const plan = engine.getNextNodes({ nodeId: 'step2', output: { ok: true } });

// plan.nextNodes tells you what to execute next
for (const n of plan.nextNodes) {
  await runNode(n.nodeId);
}

Conditional edges

{
  from: 'decision',
  to: 'pathA',
  semantic: 'flow',
  type: 'conditional',
  condition: {
    type: 'simple',
    simple: { left: '$state.score', operator: '>=', right: 80 }
  }
}

If a decision node has no matching outgoing flow edge, planForward() returns status: 'waiting'.


Hybrid execution

Set graph.config.executionMode = 'hybrid' (or .executionMode('hybrid') in the builder).

Default hybrid behavior:

  • If you call plan({ goalNodeId }) → backward planning
  • Otherwise → forward planning

This lets you mix:

  • a workflow pipeline that contains subgoal decomposition graphs
  • or a goal decomposition graph that triggers small workflows once leaves are runnable

Variables, state, and outputs

context.state

Your mutable application state (passed in to new GraphEngine(graph, initialState))

context.variables

Graph variables (starts as graph.variables, then changes via assignments)

context.outputs

Stored results recorded via commit():

  • outputs.byNodeId[nodeId] = output
  • outputs.byKey[key] = output (for each key in node.provides)

The storage behavior is controlled by graph.config.outputStore:

  • byNodeId
  • byKey
  • both (default)
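As a sketch of how outputStore selects the stores written on commit (storeOutput is a hypothetical helper, not the engine's actual implementation):

```javascript
// Record a node's output under the stores selected by outputStore.
function storeOutput(outputs, node, output, mode = 'both') {
  if (mode === 'byNodeId' || mode === 'both') {
    outputs.byNodeId[node.id] = output;
  }
  if ((mode === 'byKey' || mode === 'both') && Array.isArray(node.provides)) {
    for (const key of node.provides) outputs.byKey[key] = output;
  }
  return outputs;
}
```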

LogicEngine helpers

Graphenix includes a small logic engine used by conditional edges and variable assignments.

Value references

Inside conditions/assignments, you can reference values using $... paths:

  • $foo.bar => context.variables.foo.bar
  • $state.x.y => context.state.x.y
  • $outputs.byKey.answer:subQ1 => context.outputs.byKey["answer:subQ1"]
  • $outputs.byNodeId.nodeA => context.outputs.byNodeId["nodeA"]
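A minimal sketch of this resolution rule (resolveRef is a hypothetical helper; the real LogicEngine may differ in details such as error handling):

```javascript
// Resolve a $-path against the execution context: $state.* and
// $outputs.* hit those roots; any other head names a graph variable.
function resolveRef(path, context) {
  const [head, ...rest] = path.replace(/^\$/, '').split('.');
  const roots = { state: context.state, outputs: context.outputs };
  const base = head in roots ? roots[head] : context.variables[head];
  return rest.reduce((obj, key) => (obj == null ? undefined : obj[key]), base);
}
```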

Loops (forward mode)

A node can loop (re-run itself) using node.config.loop:

{
  id: 'poll',
  type: 'standard',
  coordinates: { default: '0,0' },
  config: {
    loop: {
      enabled: true,
      counterVar: 'pollCount',
      maxIterations: 5,
      condition: {
        type: 'simple',
        simple: { left: '$state.done', operator: '==', right: false }
      }
    }
  }
}

The engine increments counterVar each time it schedules the loop.
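The scheduling decision implied above can be sketched as follows (shouldLoop and evalCondition are hypothetical names, not the engine's actual internals):

```javascript
// Decide whether a node should re-run: the loop must be enabled, the
// counter variable must be under maxIterations, and the loop condition
// must evaluate to true.
function shouldLoop(loopCfg, variables, evalCondition) {
  if (!loopCfg || !loopCfg.enabled) return false;
  const count = variables[loopCfg.counterVar] ?? 0;
  if (count >= loopCfg.maxIterations) return false;
  return evalCondition(loopCfg.condition);
}
```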


Design constraints (intentional)

  • Graphenix is planner/orchestrator only — you provide the node executor.
  • Backward planning uses dependency edges and optional requires/provides.
  • Forward planning uses flow edges and edge routing conditions.