Graphenix (v2)
Graphenix is a graph execution planner.
It does not run your code for you — it tells you which node(s) should run next, and it helps you track execution state (history, variables, outputs).
This version supports two execution semantics:
- Flow / workflow execution (forward chaining) — “do step 1 → step 2 → step 3a+3b → step 4”
- Dependency / goal execution (backward chaining) — “to answer the goal, resolve these prerequisites first”
Install
```sh
npm i graphenix
```

The package is ESM (`"type": "module"`). Import using ESM syntax.
Core idea
Graphenix separates the problem into two layers:
- Graph definition (nodes + edges)
- Execution semantics
edge.semantic = "flow"(default): forward / workflow DAGedge.semantic = "dependency": backward / goal decomposition DAG
You can mix both in the same graph (executionMode: "hybrid").
Terminology: Job, Task, Skill
Graphenix terms (how you should think about it):
- Job: one full graph execution (one run of a graph instance)
- Task: one node execution (each node is a task)
- Skill: internal implementation step(s) inside a task (inside your node runner)
How this maps to @athenices/execution-memory-manager
Graphenix stores the Job (graph run) as a RunHistory object, and each Task (node run) becomes a MemoryObject appended on engine.commit(...).
This keeps the memory timeline aligned with your execution model:
- Graphenix Job (graph execution) => `RunHistory`
- Graphenix Task (node execution) => one `MemoryObject` per commit
- Graphenix Skill (internal) => optional additional `MemoryObject`s added by the node runner
Data model
Nodes
Nodes can be any type string you want. A few common ones are built in:
`start`, `end`, `standard`, `decision`, `parallel`, `merge`, `goal`, `aggregation`
For goal/dependency execution, nodes can optionally declare:
- `requires?: string[]` — keys that must exist before this node is runnable
- `provides?: string[]` — keys that become satisfied when this node commits successfully
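For example, a sketch of a dependency-mode node (the id and key names here are illustrative):

```js
// Illustrative node: runnable only once "context:docs" exists in outputs.byKey,
// and satisfying "answer:subQ1" when it commits successfully.
const subQuestion = {
  id: 'subQ1',
  type: 'standard',
  coordinates: { default: '0,0' },
  requires: ['context:docs'],
  provides: ['answer:subQ1']
};
```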
Edges
Edges have two orthogonal concepts:
- `edge.semantic` (what the edge means):
  - `flow` (default) — workflow edges
  - `dependency` — prerequisite edges (`from` is required for `to`)
- `edge.type` (how the edge is chosen):
  - `direct`
  - `conditional` (uses `edge.condition`)
  - `probabilistic` (uses `edge.weight`)
  - `dimensional` (placeholder for future dimension filtering)
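A sketch of both kinds of edge as plain objects (node ids are illustrative):

```js
// A workflow edge, and a prerequisite edge (prerequisite -> dependent):
const edges = [
  { from: 'start', to: 'step1', semantic: 'flow', type: 'direct' },
  { from: 'subQ1', to: 'mainGoal', semantic: 'dependency', type: 'direct' }
];
```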
GraphBuilder
```js
import { GraphBuilder } from 'graphenix';

const graph = new GraphBuilder()
  .id('example')
  .dimensions('default')
  .variables({})
  .executionMode('forward') // 'forward' | 'backward' | 'hybrid'
  .outputStore('both')      // 'byNodeId' | 'byKey' | 'both'
  .goals('main-goal')       // optional default goal(s)
  .entryPoint('start')
  .addNode({ id: 'start', type: 'start', coordinates: { default: '0,0' } })
  // ... add nodes/edges ...
  .build();
```

Notes:

- If you don't set `.goals(...)`, goals are inferred from nodes where `node.type === "goal"`.
GraphEngine
```js
import { GraphEngine } from 'graphenix';

const engine = new GraphEngine(graph, /* initialState */ {});
```

Planning vs committing

Graphenix separates:

- `plan(...)` — ask "what should run next?"
- `commit(...)` — record the result of a node run
For backward compatibility with classic workflow usage, `getNextNodes(result)` is still provided.
API
- `engine.plan({ mode?, fromNodeId?, goalNodeId?, dimension? })`
- `engine.commit({ nodeId, output?, error?, metadata? })`
- `engine.getNextNodes(result)` → `commit(result)` + `plan({ mode: 'forward', fromNodeId: result.nodeId })`
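For example, the legacy helper expands to a commit plus a forward plan ('step2' is a placeholder node id):

```js
// Classic one-call form:
const plan = engine.getNextNodes({ nodeId: 'step2', output: { ok: true } });

// Equivalent two-step form:
// engine.commit({ nodeId: 'step2', output: { ok: true } });
// const plan = engine.plan({ mode: 'forward', fromNodeId: 'step2' });
```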
Introspection:
- `engine.getContext()` → state, variables, outputs, history
- `engine.getHistory()`
- `engine.getVariable(name)` / `engine.setVariable(name, value)`
- `engine.reset(initialState?)`
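A quick sketch of the introspection calls ('retries' is a hypothetical variable name):

```js
const { state, variables, outputs, history } = engine.getContext();

engine.setVariable('retries', 1);
engine.getVariable('retries'); // 1

engine.reset(); // start over (optionally pass a fresh initialState)
```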
If memory integration is enabled (see below):
- `engine.getJobHistory()` => the `RunHistory` object
- `engine.getJobMemories(options?)` => all `MemoryObject`s (includes clips + objects by default)
- `engine.getJobMemoriesMD(options?)` => markdown-formatted history
- `engine.finalizeJobHistory(metadata?)` => set `endTime` and merge metadata
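For example, wrapping up a job at the end of a run (the metadata payload here is illustrative):

```js
engine.finalizeJobHistory({ outcome: 'success' }); // sets endTime, merges metadata

const job = engine.getJobHistory();       // the RunHistory object
const memories = engine.getJobMemories(); // all MemoryObjects (clips + objects)
const report = engine.getJobMemoriesMD(); // the same timeline as markdown
```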
Execution memory integration
Graphenix can automatically write each node execution into a RunHistory object using @athenices/execution-memory-manager.
Enable it via graph.config.memory.enabled:
```js
import { GraphBuilder, GraphEngine } from 'graphenix';

const graph = new GraphBuilder()
  .id('my-graph')
  .dimensions('default')
  .variables({})
  .entryPoint('start')
  .config({
    memory: {
      enabled: true,
      autoAddOnCommit: true,
      // Role strategy (recommended): makes every node commit queryable by its nodeId
      // e.g. $memory.last.task:planner
      roleStrategy: 'nodeId',
      rolePrefix: 'task',
      clipOnErrorLabel: 'errors',
      historyContext: 'My Graph Job',
      historyObjective: 'Execute my-graph'
    }
  })
  // ... nodes/edges ...
  .build();

const engine = new GraphEngine(graph, {});
```

What gets written
Each engine.commit({ nodeId, output, error }) appends one MemoryObject to the job history:
- context: `Task <nodeId>`
- objective: `node.metadata.objective` / `label` / `title` (fallback: `node.type`)
- role (Task role):
  - if `memory.useNodeMetadataRole !== false` and `node.metadata.role` is set => use it
  - else if `memory.roleStrategy` is set => derived role (see below)
  - else if `memory.defaultRole` is set => use it (legacy)
  - else => `task:<nodeId>` (default)
- memory: `{ nodeId, nodeType, status, timestamp, provides, output, error, metadata }`

If `error` is present, the memory is clipped under `clipOnErrorLabel`.
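Putting that together, an illustrative memory for a successful commit of a node with id `planner` (field values are examples, not guaranteed output):

```js
const taskMemory = {
  context: 'Task planner',
  objective: 'standard',          // no metadata objective/label/title, so node.type
  role: 'task:planner',           // roleStrategy 'nodeId' + rolePrefix 'task'
  memory: {
    nodeId: 'planner',
    nodeType: 'standard',
    status: 'success',
    timestamp: 1700000000000,     // example value
    provides: [],
    output: { next: 'executor' }, // whatever you passed to commit()
    error: undefined,
    metadata: undefined
  }
};
```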
Role strategies (Task roles)
Graphenix uses the memory role as the primary way to query task executions via $memory.*.
Default behavior (recommended): each task (node) is recorded with a role in the form:
task:<nodeId>
This makes it easy to reference a specific node’s latest run:
- `$memory.last.task:planner` — last memory for nodeId `planner`
- `$memory.last.task:planner.memory.output` — output of the last `planner` run
Role configuration
You can control role derivation in graph.config.memory:
- `roleStrategy: 'nodeId'` (default) + `rolePrefix: 'task'` => `task:<nodeId>`
- `roleStrategy: 'nodeType'` + `rolePrefix: 'task'` => `taskType:<nodeType>`
- `roleStrategy: 'custom'` + `customRole: '...'` => always uses that role
Node override
If memory.useNodeMetadataRole !== false (default) and node.metadata.role is set, it takes priority.
Note about dots in node IDs
`$memory.*` uses `.` as a path separator, so roles cannot contain `.`. If your nodeId contains dots, set `node.metadata.role` explicitly (e.g. `task:my_node`) or use a `customRole`.
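For instance, a node whose id contains a dot, with the role set explicitly:

```js
// Without this override, the derived role "task:my.node" would break $memory paths.
const node = {
  id: 'my.node',
  type: 'standard',
  coordinates: { default: '0,0' },
  metadata: { role: 'task:my_node' } // takes priority over roleStrategy
};
```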
Using memory in conditions ($memory.*)
Graphenix conditions can reference the current job memory using $memory.* paths:
- `$memory.last` — last task memory (any role)
- `$memory.last.<role>` — last task memory for a role
- `$memory.last.<role>[N]` — last N task memories for a role (array)
- `$memory.first` / `$memory.first.<role>` — same, from the start
- `$memory.clip.<label>` — clipped memories for a label (array)
- `$memory.all` — all memories (includes clips + objects)
- `$memory.all.<role>` — all memories filtered by role
Example edge condition:
```js
{
  from: 'decide',
  to: 'fallback',
  semantic: 'flow',
  type: 'conditional',
  condition: {
    type: 'simple',
    simple: {
      // Example: check whether the last run of node "decide" had an error
      left: '$memory.last.task:decide.memory.error',
      operator: 'exists',
      right: true
    }
  }
}
```

Execution mode A — dependency / goal execution (backward chaining)
This models Graph A: a main goal depends on sub-questions, which may depend on sub-sub-questions.
How to model it
- Use `edge.semantic: "dependency"`.
- Direction is prerequisite → dependent:
  - `subQuestion -> mainGoal`
  - `subSub -> subQuestion`
Optional (recommended):
- Use `node.type: "goal"` for your goal node.
- Use `node.provides` to declare what a node satisfies.
How it runs
- `engine.plan({ mode: 'backward', goalNodeId })` returns the set of runnable unmet prerequisites (often parallelizable).
- You execute those nodes (your code), then call `engine.commit(...)`.
- Repeat planning until status is `complete`.
Example
```js
let plan = engine.plan({ mode: 'backward', goalNodeId: 'main' });

while (plan.status === 'continue') {
  // Run leaves (often parallel)
  for (const n of plan.nextNodes) {
    const output = await runNode(n.nodeId);
    engine.commit({ nodeId: n.nodeId, output });
  }
  plan = engine.plan({ mode: 'backward', goalNodeId: 'main' });
}

if (plan.status === 'complete') {
  // goal is resolved (its prerequisites are satisfied and/or it ran)
}
```

When is a node considered "resolved"?
A node is resolved if any of these are true:
- it was committed successfully (`nodeStatus[nodeId] === 'success'`)
- an output exists in `outputs.byNodeId[nodeId]`
- it has `provides`, and all those keys exist in `outputs.byKey`
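A sketch of that check, mirroring the rules above (this is not the library's actual internals, and the context shape is assumed):

```js
function isResolved(node, ctx) {
  if (ctx.nodeStatus[node.id] === 'success') return true;          // committed OK
  if (node.id in ctx.outputs.byNodeId) return true;                // output recorded
  if (node.provides && node.provides.length > 0) {
    return node.provides.every((key) => key in ctx.outputs.byKey); // all keys satisfied
  }
  return false;
}
```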
Execution mode B — workflow execution (forward chaining)
This models Graph B: a procedural pipeline with optional branching and fork–join.
How to model it
- Use `edge.semantic: "flow"` (or omit it; `flow` is the default).
- Use `edge.type` to control routing (`direct`, `conditional`, ...).
Typical usage
// After you run node "step2" in your own executor:
const plan = engine.getNextNodes({ nodeId: 'step2', output: { ok: true } });
// plan.nextNodes tells you what to execute next
for (const n of plan.nextNodes) {
await runNode(n.nodeId);
}Conditional edges
```js
{
  from: 'decision',
  to: 'pathA',
  semantic: 'flow',
  type: 'conditional',
  condition: {
    type: 'simple',
    simple: { left: '$state.score', operator: '>=', right: 80 }
  }
}
```

If a decision node has no matching outgoing flow edge, `planForward()` returns `status: 'waiting'`.
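A sketch of handling that status (the `score` field matches the condition above; how you update state is up to you):

```js
// `graph` is built as above; conditions read from the shared state object.
const state = { score: 0 };
const engine = new GraphEngine(graph, state);

let plan = engine.plan({ mode: 'forward', fromNodeId: 'decision' });
if (plan.status === 'waiting') {
  state.score = 95; // now the `$state.score >= 80` edge matches
  plan = engine.plan({ mode: 'forward', fromNodeId: 'decision' });
}
```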
Hybrid execution
Set `graph.config.executionMode = 'hybrid'` (or `.executionMode('hybrid')` in the builder).
Default hybrid behavior:
- If you call `plan({ goalNodeId })` → backward planning
- Otherwise → forward planning
This lets you mix:
- a workflow pipeline that contains subgoal decomposition graphs
- or a goal decomposition graph that triggers small workflows once leaves are runnable
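In code, the same engine serves both calls (node ids are placeholders):

```js
// goalNodeId present -> backward planning
const goalPlan = engine.plan({ goalNodeId: 'main-goal' });

// no goalNodeId -> forward planning
const flowPlan = engine.plan({ fromNodeId: 'step2' });
```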
Variables, state, and outputs
context.state
Your mutable application state (passed in to new GraphEngine(graph, initialState))
context.variables
Graph variables (starts as graph.variables, then changes via assignments)
context.outputs
Stored results recorded via commit():
- `outputs.byNodeId[nodeId] = output`
- `outputs.byKey[key] = output` (for each key in `node.provides`)
The storage behavior is controlled by `graph.config.outputStore`:

- `byNodeId`
- `byKey`
- `both` (default)
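For example, with the default `both` store, a commit from a node declaring `provides: ['answer:subQ1']` lands in both maps (names are illustrative):

```js
engine.commit({ nodeId: 'subQ1', output: { text: '42' } });

const { outputs } = engine.getContext();
outputs.byNodeId['subQ1'];     // { text: '42' }
outputs.byKey['answer:subQ1']; // { text: '42' }, one entry per key in provides
```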
LogicEngine helpers
Graphenix includes a small logic engine used by conditional edges and variable assignments.
Value references
Inside conditions/assignments, you can reference values using $... paths:
- `$foo.bar` → `context.variables.foo.bar`
- `$state.x.y` → `context.state.x.y`
- `$outputs.byKey.answer:subQ1` → `context.outputs.byKey["answer:subQ1"]`
- `$outputs.byNodeId.nodeA` → `context.outputs.byNodeId["nodeA"]`
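For example, a condition that gates an edge on a stored output (the key name is illustrative; the path follows the rules above):

```js
const condition = {
  type: 'simple',
  simple: {
    left: '$outputs.byKey.answer:subQ1', // resolves via context.outputs.byKey
    operator: 'exists',
    right: true
  }
};
```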
Loops (forward mode)
A node can loop (re-run itself) using node.config.loop:
```js
{
  id: 'poll',
  type: 'standard',
  coordinates: { default: '0,0' },
  config: {
    loop: {
      enabled: true,
      counterVar: 'pollCount',
      maxIterations: 5,
      condition: {
        type: 'simple',
        simple: { left: '$state.done', operator: '==', right: false }
      }
    }
  }
}
```

The engine increments `counterVar` each time it schedules the loop.
Design constraints (intentional)
- Graphenix is planner/orchestrator only — you provide the node executor.
- Backward planning uses dependency edges and optional `requires`/`provides`.
- Forward planning uses flow edges and edge routing conditions.
