foxflow
v0.0.2
Published
Foxflow workflow library. Runtime distro only: contact for license.
FoxFlow
A modern, TypeScript-first workflow engine that makes AI-native, file-centric automation approachable.
Define flows in a pragmatic DSL (a subset of the CNCF Serverless Workflow spec with focused extensions), run them in-process as a library, and get powerful extras: an in-memory virtual filesystem, jq expressions, LLM "agent" steps, and MCP tool integrations.
- CNCF Serverless Workflow alignment (subset + extensions)
- First-class AI agents (LLM configuration steps) with schema enforcement, context injection sugar, and tool calling
- In-memory virtual filesystem with workspaces, safe merges, and spread writes
- jq expressions with rich context and state mutation
- Model Context Protocol (MCP) tool discovery and execution
- Structured events with optional OpenTelemetry exporting
- JSON Schema validation for inputs/outputs and agent results
Table of contents
- Overview
- Installation
- Quick start
- Workflow DSL
- Agents
- Virtual filesystem
- Expressions
- MCP integrations and custom functions
- Events and telemetry
- Resilience (retry, wait)
- Data flow and state
- Sub-workflows
- API surface (exports)
- Configuration
- Limitations and gotchas
- Troubleshooting
- License
1) Overview
FoxFlow is a library. You provide a workflow definition (JSON or YAML) plus input, then run it in your application. The DSL aims to be familiar if you've seen the Cloud Native Computing Foundation's Serverless Workflow spec, while adding pragmatic features commonly needed for AI workflows.
What you get:
- A clean workflow runner that validates the DSL, processes input, executes tasks, and finalizes output.
- A lightweight state manager that tracks workflow state, context, and a virtual filesystem.
- An "agent" function that wraps LLMs, schema enforcement, context/knowledge injection, and file outputs.
- A function registry that discovers and runs MCP tools, and executes simple (static) HTTP or JavaScript functions.
- Emitted events you can subscribe to; optional OpenTelemetry exporting for traces/metrics/logs.
2) Installation
npm install foxflow
Node 18+ is recommended.
Requirements:
- jq must be available on your system PATH for expression evaluation (via node-jq).
- isolated-vm is used to execute `run.script` safely. It is a native module and requires a C/C++ toolchain at install time.
  - macOS: install Xcode Command Line Tools
  - Ubuntu/Debian: `sudo apt-get install -y python3 make g++ build-essential`
  - Windows: follow the official node-gyp setup instructions
Node 20+ note: isolated-vm requires running Node with --no-node-snapshot. When running FoxFlow locally or running tests, ensure NODE_OPTIONS=--no-node-snapshot is set (the included package scripts already do this).
3) Quick start
TypeScript
import { runWorkflow, type WorkflowDefinition } from 'foxflow';
const wf: WorkflowDefinition = {
document: { dsl: '1.0.0', namespace: 'demo', name: 'hello', version: '1.0.0' },
do: [
{
greet: {
call: 'agent',
with: {
// Constant schema => no model call; the value is validated/returned
schema: {
type: 'object',
properties: {
message: { const: 'Hello from FoxFlow!' }
}
}
}
}
}
],
output: { as: '${.}' }
};
const result = await runWorkflow(wf, {});
console.log(result); // { message: 'Hello from FoxFlow!' }
Configured runner and telemetry
import { createWorkflowRunner, initTelemetry } from 'foxflow';
await initTelemetry(); // optional; enables OpenTelemetry exporters (console by default)
const runner = createWorkflowRunner({
apiKeys: {
openai: process.env.OPENAI_API_KEY,
anthropic: process.env.ANTHROPIC_API_KEY,
provider: process.env.PROVIDER_API_KEY // e.g., Nebius (OpenAI-compatible)
},
telemetry: true // sets an env flag; initTelemetry() starts the SDK/exporters
});
const out = await runner.execute(wf, {});
4) Workflow DSL
FoxFlow's DSL is grounded in the CNCF Serverless Workflow spec:
Alignment (subset)
- document: dsl | namespace | name | version
- input/output with JSON Schema validation (Draft 7)
- Control flow: do, for, switch, set, try, wait, raise
- Task-level retry policy (delay/backoff/jitter/limits)
- RFC 7807 Problem Details error model (typed errors)
Extensions
- call: "agent" (AI step with schema enforcement, tools, VFS writes)
- jq expressions with recursive evaluation and state mutation
- Virtual filesystem with workspaces, safe merges, and spread writes
- MCP tool discovery and execution (integration://)
- ViewSchema support for HTML-based schema definitions
Intentional differences/limitations
- Expressions are jq-based (not FEEL or JSONPath)
- try.catch.retry is accepted by schema but not executed today (see "Limitations")
- task timeout is defined by schema but not enforced; wait is implemented
- Sub-workflows are resolved from in-memory context maps (no registry/imports)
Minimal YAML example
```yaml
document:
  dsl: '1.0.0'
  namespace: 'docs'
  name: 'pipeline'
  version: '1.0.0'
input:
  schema:
    document:
      type: object
      properties:
        city: { type: string }
  from: "${.}"
metadata:
  agents:
    writer:
      systemPrompt: "You write clear technical notes"
      model: "gpt-4o-mini"
  # integrations: {} # optional MCP servers (see section 8)
do:
  - write:
      call: agent
      with:
        agentId: writer
        instructions: "Write a short note about ${.city}"
        outputPath: "/out/note.md"
output:
  as: "${.}"
```
Top-level properties
- document: workflow identity and DSL version
- input?: { schema.document?, from? }
- metadata?: { agents?, integrations?, workspaces? }
- use?: { functions? } // for HTTP or js scripts (see section 8)
- do: ordered list of named tasks
- output?: { schema.document?, as? }
Task types (shared fields)
- if?: string // jq expression; skip task if falsy
- input?: { schema?, from? }
- output?: { schema?, as? }
- export?: { schema?, as? } // write into $context
- retry?: RetryPolicy // task-level retry (supported)
- then?: 'continue' | 'exit' | 'end' | 'taskName'
- timeout?: Duration // present in schema; not enforced today
Supported tasks
call
- 'agent' → AI agent (see section 5)
- 'integration://discover'
- 'integration://{integration}/{tool}'
- custom function name → use.functions entry (HTTP or js)
run
- run.script: { language: "js", code, arguments? }
- run.workflow: { namespace, name, version, input? }
- Subflows are resolved from in-memory context maps (see section 12)
do: sequential list of tasks
for: iteration over collections
```yaml
- processItems:
    for:
      each: item
      in: "${.items}"
      at: index
    while: "${ $item.active == true }"
    do:
      - processOne:
          set: { result: "${ $item.value * 2 }" }
```
switch: conditional branching; the first matching case runs, else default
set: mutate current data with an object or expression
wait: pause by ISO 8601 or inline duration (implemented)
try: run tasks, catch errors conditionally, and run fallback tasks
raise: throw a typed problem details error (or simple string detail)
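As a sketch of `switch` and `raise` working together (the case/when/then shape follows the CNCF Serverless Workflow 1.0 DSL that FoxFlow aligns with; task names and the error type URI are illustrative):

```yaml
do:
  - route:
      switch:
        - known:
            when: "${ .tier != null }"
            then: handle
        - default:
            then: reject
  - reject:
      raise:
        error:
          type: "https://example.com/errors/missing-tier"
          status: 400
          title: "Missing tier"
  - handle:
      set: { plan: "${ .tier }" }
```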
5) Agents
Agents provide LLM steps with schema enforcement and file output, plus optional tool calling and knowledge/context ingestion.
with fields (selected)
- agentId?: string // reuse metadata.agents entry
- instructions?: string // prompt instructions
- systemPrompt?: string // system prompt
- model?: string // model name (e.g., 'gpt-4o-mini')
- schema?: JSONSchema // enforce output structure; const values skip LLM calls
- outputPath?: string | OutputPathMap // write files to virtual filesystem
- tools?: string[] | 'none' // tool allowlist; omit for auto-discovery
- workspace?: string // VFS workspace (default: 'default')
- searchContext?: SearchConfig // inject knowledge/context
- inject?: Record<string, any> // additional template variables
Prompts and templating
- Three distinct templating mechanisms:
  - jq expressions: `"${.expression}"` — full context, anywhere expressions are accepted
  - Path templating: `/docs/{version}/...` — uses `with.inject` values only, in outputPath keys
  - Agent prompt injection: `<name/>` — inserts values in instructions; objects render as YAML
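To illustrate how the three mechanisms combine in one step (a sketch; `module`, `audience`, and `overview` are illustrative names):

```yaml
do:
  - document:
      call: agent
      with:
        inject:
          module: "auth"                                   # available as {module} and <module/>
        instructions: "Describe the <module/> module for ${.audience}"  # prompt injection + jq
        outputPath:
          "/docs/{module}/overview.md": "${ .overview }"   # path templating + jq content
```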
Schema shortcuts
- When schema uses only `const` values, the LLM call is skipped and the constant structure is returned
- This provides fast, validated outputs for deterministic steps
ViewSchema support
Schemas can be defined using viewschema HTML templates instead of inline JSON Schema. This enables visual, UI-centric schema definitions where the same template serves as:
- Schema definition — JSON Schema is automatically extracted for validation
- UI template — The HTML renders task output in a human-friendly format
- Editing guidance — `view-prompt` attributes guide LLMs and provide field descriptions
Per-task output schemas
Each task can define its own output.schema with format: "viewschema". This is the canonical approach for defining structured outputs:
```yaml
do:
  - research:
      call: agent
      with:
        agent: "@agents/researcher"
        instructions: "Research the topic: ${.topic}"
      output:
        schema:
          format: viewschema
          document: |
            <section class="research-output" view-prompt="Research findings">
              <h2>Research Summary</h2>
              <div class="sources">
                <h3>Sources</h3>
                <template view-each="sources">
                  <div class="source-item">
                    <a view-attr:href="url" view-text="title" target="_blank">Source</a>
                    <span view-text="relevance">high</span>
                  </div>
                </template>
              </div>
              <div class="key-findings">
                <h3>Key Findings</h3>
                <ul>
                  <template view-each="keyFindings">
                    <li view-text=".">Finding</li>
                  </template>
                </ul>
              </div>
            </section>
  - draft:
      call: agent
      with:
        agent: "@agents/writer"
      output:
        schema:
          format: viewschema
          document: |
            <article class="draft" view-prompt="Full article draft">
              <div view-markdown="content" view-prompt="Article content in markdown">
                Draft content here...
              </div>
              <footer>
                <p>Word count: <span view-text="wordCount" view-type="number">0</span></p>
              </footer>
            </article>
```
The JSON Schema is automatically extracted from the template at validation time. This approach:
- Provides a visual representation of your data structure
- Enables rich UI generation from the same schema (the Workflows app renders these automatically)
- Supports all viewschema directives (`view-text`, `view-attr:*`, `view-object`, `view-each`, `view-markdown`, etc.)
- Uses `view-prompt` to provide descriptions for both LLMs and UI form labels
- Uses `view-type` to specify JSON Schema types (`number`, `boolean`, `integer`)
UI integration
When using the Workflows app (apps/workflows), task outputs with format: "viewschema" are automatically rendered using the ViewSchema template. This means:
- Task outputs display as rich HTML instead of raw JSON
- Human editors can modify structured outputs via auto-generated forms
- The same template defines both the data structure and its presentation
This "single source of truth" approach eliminates the gap between data schema and UI representation.
Agent reuse
```yaml
metadata:
  agents:
    writer:
      systemPrompt: "You write clear technical notes"
      model: "gpt-4o-mini"
do:
  - writeNote:
      call: agent
      with:
        agentId: writer
        instructions: "Create documentation for ${.topic}"
```
Output path examples
```yaml
with:
  inject:
    name: "auth"
    version: "v1"
  outputPath:
    "/docs/{version}/overview.md": "${ .overview }"
    "/docs/...<name>.md": "${ .modules }"      # spread (see section 6)
    "/docs/index.json": "fs://docs/**/*.md"    # forward VFS contents (literal fs://, not ${...})
```
6) Virtual filesystem
A per-run, in-memory filesystem powers file-centric workflows. It's isolated by "workspace" and is primarily written from workflow steps (especially call: agent) and read via fs:// expressions.
Key ideas for DSL/API consumers
- Lifetime and isolation
- The VFS exists only for the duration of a single workflow run.
- Workspaces provide isolation across sets of files within the same run. If you don't set a workspace explicitly,
defaultis used.
- How files get in
  - input seeding: Workflow input may include a special `files` map. These files are written to the VFS before any transforms.
  - agent output: `call: agent` with `with.outputPath` writes files (see below).
  - scripts: `run.script` can also write by returning content into subsequent steps that write via agents or by feeding into `fs://`-based expressions.
- How files get out
  - Expressions can read the VFS using literal `fs://` URIs (not inside `${ ... }`), returning either the file's content or a path→content map for globs.
  - Agents can return the collection of written files when `outputPath` is used (e.g., reading them back via `output: { as: "fs://**" }`).
- Path normalization and returned keys
  - Paths are normalized internally (leading slashes are removed, multiple slashes collapsed). For example, writing `/docs/file.md` returns as `docs/file.md` in `fs://**` results.
  - Treat paths as workspace-rooted; there is no concept of an absolute disk path.
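The normalization rules above can be sketched in a few lines (an illustrative helper, not the library's internal API):

```typescript
// Hypothetical helper mirroring the documented normalization rules:
// runs of slashes are collapsed, then the leading slash is dropped.
function normalizeVfsPath(path: string): string {
  return path
    .replace(/\/+/g, "/")  // collapse repeated slashes
    .replace(/^\//, "");   // remove the leading slash
}

console.log(normalizeVfsPath("/docs/file.md"));  // "docs/file.md"
console.log(normalizeVfsPath("//docs///a.md"));  // "docs/a.md"
```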
Persistence across runs (host app examples / sketches)
- Even though the VFS is designed to be per-run and in-memory, you can persist and restore it between runs by:
  - Persisting a snapshot of files after a run (configure the workflow to return files via `output: { as: "fs://**" }`).
  - Restoring by seeding `input.files` on the next invocation of runWorkflow().
Minimal example:
```yaml
# In your workflow (any flow where you want to capture files)
output:
  as: "fs://**"
```
```js
// App code (persist + restore)
const previousSnapshot = yourJSONRetrievalFunction(workflowId) // -> { [path]: content } or null
```
Example of a previousSnapshot value via `"output": { "as": "fs://**" }`:
```json
{
  "docs/about.md": "# About",
  "docs/home.md": "# Home"
}
```
```js
// Run with prior files injected
const result = await runWorkflow(workflowDefinition, {
  ...(userInput || {}),
  files: previousSnapshot || {}
})

// Persist a new snapshot of files for this workflow
await yourJSONInsertFunction(workflowId, result)
// result is the fs://** map when configured as above (it can also be nested
// alongside other, more complex outputs via output.as)
```
- You can store snapshots as JSON (e.g., a JSONB column), keyed by workflow/workspace if you use multiple workspaces.
- To isolate multiple workspaces, run separate snapshots per workspace (set `with.workspace` in agent steps).
Reading files with expressions
- Use literal `fs://` URIs where an expression value is accepted:
- Exact file → the file's content (JSON auto-parsed when possible)
- Glob → a map of `path: content`
- Examples:
- Workflow output: return everything that exists
```yaml
output:
as: "fs://**"
```
- Return all Markdown docs under docs/
```yaml
output:
as: "fs://docs/**/*.md"
```
- Read a single file (auto-parses JSON if the content is JSON)
```yaml
output:
as: "fs://data/config.json"
```
Writing files from agents
The `agent` function writes files via `with.outputPath`. You can specify:
- A single path: `"/path/to/file.ext"`
- A map of path → content-expression:
```yaml
with:
  outputPath:
    "/docs/index.md": "${ .readme }"
    "/data/config.json": "${ .config }"
```
- An array mixing strings and path→content maps:
```yaml
with:
  outputPath:
    - "/docs/index.md"
    - "/data/config.json": "${ .config }"
```
Modifiers and merge behavior
Use modifiers on the file path to control behavior when the file or target directory already exists.
- Default (no modifier): write only if the file does not already exist
- `!` (force):
  - For a single file: always overwrite.
  - For spread writes (see below): clear the target folder first, then write fresh files.
- `+` (append/merge) for single-file writes:
  - Strings: appended with smart newline handling (preserves a single newline boundary).
  - Arrays: concatenated.
  - Objects: deep-merged (nested keys merged, arrays concatenated).
- `+` with spread writes:
  - Existing files are kept; new files are added if they don't exist. Existing files are not modified by spread writes.
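The `+` merge rules can be sketched as follows (an illustrative model of the documented behavior, not the library's implementation):

```typescript
// Sketch of "+" (append/merge) semantics for single-file writes.
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

function mergeAppend(existing: Json, incoming: Json): Json {
  if (typeof existing === "string" && typeof incoming === "string") {
    // Smart newline handling: keep exactly one newline at the boundary.
    return existing.replace(/\n*$/, "\n") + incoming;
  }
  if (Array.isArray(existing) && Array.isArray(incoming)) {
    return [...existing, ...incoming]; // arrays are concatenated
  }
  if (existing && incoming &&
      typeof existing === "object" && typeof incoming === "object" &&
      !Array.isArray(existing) && !Array.isArray(incoming)) {
    const out: { [k: string]: Json } = { ...existing };
    for (const [k, v] of Object.entries(incoming)) {
      out[k] = k in out ? mergeAppend(out[k], v) : v; // deep-merge nested keys
    }
    return out;
  }
  return incoming; // mismatched types: take the new value
}

console.log(mergeAppend("original content\n", "new line"));
// "original content\nnew line"
console.log(mergeAppend(["original"], ["a", "b"]));
// ["original", "a", "b"]
```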
Examples (single-file with merge)
```yaml
do:
  - mergeExamples:
      call: agent
      with:
        instructions: "Merge example content"
        outputPath:
          "/notes/append.txt+": "${ .note }"
          "/data/list.json+": "${ .items }"
          "/data/config.json+": "${ .partialConfig }"
        schema:
          type: object
          properties:
            note: { const: "new line" }
            items: { const: ["a", "b"] }
            partialConfig: { const: { nested: { k: "v" } } }
# when invoking:
# execute(workflow, {
#   files: {
#     "/notes/append.txt": "original content\n",
#     "/data/list.json": ["original"],
#     "/data/config.json": { "originalKey": "value" }
#   }
# })
```
Spread writes (bulk creation)
Use a spread pattern on the final path segment: /folder/...<prop>.ext
- The agent evaluates the right-hand expression to an array of items.
- Each item must include the property named in the spread (e.g., `prop`) to form the file name.
- Content selection per item:
  - If the item has exactly two keys and one is the spread property, the other key's value becomes the file content.
  - Otherwise, the item minus the spread property is written (objects are JSON-stringified).
- Existing files and modifiers:
  - No modifier: if the target folder already has any files, the spread write is skipped entirely (safety).
  - `!`: the target folder is emptied first, then files are written.
  - `+`: new files are added when they don't already exist; existing files are left untouched.
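The per-item content selection rule can be sketched like this (illustrative only; `spreadProp` stands for the property named in `/folder/...<prop>.ext`):

```typescript
// Sketch of the documented content selection for spread writes.
function spreadEntry(
  item: Record<string, unknown>,
  spreadProp: string,
): [fileName: string, content: string] {
  const name = String(item[spreadProp]);
  const otherKeys = Object.keys(item).filter((k) => k !== spreadProp);
  if (otherKeys.length === 1) {
    // Exactly two keys, one being the spread property:
    // the other key's value becomes the file content.
    const v = item[otherKeys[0]];
    return [name, typeof v === "string" ? v : JSON.stringify(v)];
  }
  // Otherwise: write the item minus the spread property, JSON-stringified.
  const rest: Record<string, unknown> = { ...item };
  delete rest[spreadProp];
  return [name, JSON.stringify(rest)];
}

console.log(spreadEntry({ slug: "home", content: "# Home" }, "slug"));
// [ "home", "# Home" ]
```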
Examples (spread writes)
- Basic spread:
```yaml
do:
  - makePages:
      call: agent
      with:
        instructions: "Create simple Markdown pages"
        outputPath:
          "/docs/...<slug>.md": "${ .pages }"
        schema:
          type: object
          properties:
            pages:
              const:
                - { slug: "home", content: "# Home" }
                - { slug: "about", content: "# About" }
```
- Directory-level `!` (force refresh):
```yaml
do:
  - refreshArtifacts:
      call: agent
      with:
        instructions: "Refresh artifacts"
        outputPath:
          "/artifacts/...<name>.txt!": "${ .items }"
        schema:
          type: object
          properties:
            items:
              const:
                - { name: "a", content: "one" }
                - { name: "b", content: "two" }
# when invoking:
# execute(workflow, {
#   files: {
#     "/artifacts/existing.txt": "will be cleared by the ! modifier"
#   }
# })
```
- Directory-level `+` (additive, keep existing):
```yaml
do:
  - addIfMissing:
      call: agent
      with:
        instructions: "Add missing snippets"
        outputPath:
          "/snippets/...<slug>.txt+": "${ .snips }"
        schema:
          type: object
          properties:
            snips:
              const:
                - { slug: "tip1", content: "first" }
                - { slug: "tip2", content: "second" }
# when invoking:
# execute(workflow, {
#   files: {
#     "/snippets/existing.txt": "keep me (the + modifier preserves existing files)"
#   }
# })
```
Skipping agent generation when files already exist
If an agent's outputPath is fully satisfied by existing files (and you did not use + or !), the agent step is skipped and the existing files are returned. This is useful for idempotent workflows.
- If any path still needs to be generated (e.g., a file is missing or a spread target is empty and allowed), the agent runs.
Content typing and serialization
- Strings: written as-is.
- Objects: serialized as pretty JSON by default.
- Arrays: JSON arrays.
- JSON readback: when reading with `fs://`, JSON content is auto-parsed into objects/arrays; other types are returned as strings.
Conditional and contextual content
- The right-hand content expression is full jq:
  - Use `${ ... }` to compute the content.
  - `$currentPath` is available inside expressions for content, indicating the path being written.
- You can skip a specific file by returning `null` for its content:
```yaml
with:
  outputPath:
    "/out/conditional.json": "${ if .shouldWrite then .data else null end }"
```
Workspaces
- Target a workspace by setting `with.workspace` in the agent step:
```yaml
do:
  - writeToOther:
      call: agent
      with:
        instructions: "Write to secondary workspace"
        workspace: "secondary"
        outputPath:
          "/secondary/data.json": "${ .payload }"
        schema:
          type: object
          properties:
            payload: { const: { ok: true } }
```
- Workspaces are isolated; files written in one are not visible in another.
- Agents include a lightweight ASCII tree overview of the active workspace in their prompt context (to help models reason about the current project structure).
Knowledge index (auto-generated)
- Any write under `references/**/*` updates a workspace-root `README.md` that:
  - Lists "Categories" derived from filenames (the token before the extension in `basename.slug.topic.ext` layouts).
  - Groups files by folder, with columns for file, description, and word count (if available).
  - Honors frontmatter fields when present:
    - `author` → shown as author
    - `excerpt` or `title` → description
    - `word_count` → word count
- This index is per-workspace (the workspace root README.md is updated).
Common patterns (quick reference)
- Seed files via input:
```yaml
input:
  from: "${ . }"
# later when invoking
# execute(workflow, {
#   files: {
#     "seed/info.txt": "Hello",
#     "seed/config.json": { enabled: true }
#   }
# })
```
- Read all files back:
```yaml
output:
  as: "fs://**"
```
- Mix of writes with merging and spreading:
```yaml
do:
  - generate:
      call: agent
      with:
        instructions: "Generate mixed outputs"
        outputPath:
          "/docs/intro.md": "${ .intro }"
          "/docs/appendix.md+": "${ .appendix }"
          "/docs/...<slug>.md": "${ .pages }"
        schema:
          type: object
          properties:
            intro: { const: "# Intro" }
            appendix: { const: "Appendix A" }
            pages:
              const:
                - { slug: "a", content: "# A" }
                - { slug: "b", content: "# B" }
# when invoking:
# execute(workflow, {
#   files: {
#     "/docs/appendix.md": "Existing appendix\n"
#   }
# })
```
Gotchas and tips
- Don't wrap `fs://` URIs in `${ ... }`. Use them as literal string expressions when a value is accepted.
- For spread writes without `!` or `+`, if the target folder is not empty, the write is skipped (safety). Use `!` to refresh or `+` to add missing files.
- Paths are normalized; prefer consistent paths and be aware that leading slashes are removed in returned maps.
- Objects are JSON-stringified when written; if you need Markdown/CSV/etc., provide strings for those files.
7) Expressions
Expressions are jq-based with recursive evaluation across objects, arrays, and strings.
- String interpolation
  - `"Hello ${.name}"` // string result
  - `"${ .value * 2 }"` // raw evaluated value (number/object/array/string)
- Recursive evaluation
{ x: "${.a}", y: "${.b}" }
- Context variables available to jq
  - `$workflow`, `$context`, `$input`, `$output`, `$task`, `$secrets`
  - Plus any variables passed via `with` for a task (e.g. `$item`, `$index`, `$currentPath`)
- Filesystem reads
- "fs://docs/info.json" // exact path → file content (parse JSON if possible)
- "fs://docs/**/*.md" // glob → `{ path: content, ... }` map
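The recursive evaluation over objects, arrays, and strings can be sketched as follows. This is illustrative only: real FoxFlow evaluates full jq; the sketch substitutes a simple dot-path lookup so it stays self-contained.

```typescript
// Sketch: recursive "${ ... }" evaluation with a toy dot-path lookup
// standing in for jq. Whole-string expressions return the raw value;
// embedded expressions interpolate into the string.
type Ctx = Record<string, unknown>;

function lookup(ctx: Ctx, path: string): unknown {
  return path.split(".").filter(Boolean).reduce<any>((v, k) => v?.[k], ctx);
}

function evaluate(value: unknown, ctx: Ctx): unknown {
  if (typeof value === "string") {
    const whole = value.match(/^\$\{\s*\.([\w.]*)\s*\}$/);
    if (whole) return lookup(ctx, whole[1]); // raw value, not a string
    return value.replace(/\$\{\s*\.([\w.]*)\s*\}/g, (_, p) => String(lookup(ctx, p)));
  }
  if (Array.isArray(value)) return value.map((v) => evaluate(v, ctx));
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, evaluate(v, ctx)]),
    );
  }
  return value;
}

console.log(evaluate({ x: "${.a}", y: "Hello ${.name}" }, { a: 42, name: "Fox" }));
// { x: 42, y: "Hello Fox" }
```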
8) MCP integrations and custom functions
JS Scripting Tasks
JavaScript scripts (run.script.language: "js") are executed inside an isolated V8 context with significant limitations for safety.
Isolation model
- Runs in a sandboxed V8 isolate with no Node.js APIs
- No `require`, `import`, `process`, `fetch`, filesystem, or network access.
Arguments
- `script.arguments` are evaluated via expressions before execution and injected as immutable constants into your script.
- Example: `arguments: { a: "${.left}", b: "${.right}" }` → your code sees `const a = ...; const b = ...;`
Synchronous code only
- Your script must compute and return a value synchronously.
- Top‑level `await`, returning Promises, or scheduling async work is not supported here.
- For async operations (I/O, network, long‑running tasks), prefer:
  - An MCP tool/integration, or
  - Restructuring logic so `run.script` performs synchronous shaping/combining of already‑available data.
Return value must be JSON‑serializable
- Allowed: numbers, strings, booleans, null, arrays, and plain objects composed of these types.
- Not allowed: functions, class instances, Dates, Maps/Sets, BigInt, cyclic structures, or any object that can’t be JSON‑encoded.
- If you need to return richer data:
- Convert to plain objects first (e.g., extract fields).
- Serialize to a string (e.g., JSON string or base64 for binary) and return that string.
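A quick pre-check for "will this survive the JSON boundary?" can be written outside the library (an illustrative helper, not part of the FoxFlow API):

```typescript
// Sketch: a value is safe to return from run.script iff it is built only
// from JSON primitives, arrays, and plain objects, with no cycles.
function isJsonSafe(value: unknown, seen = new Set<object>()): boolean {
  if (value === null) return true;
  const t = typeof value;
  if (t === "string" || t === "boolean") return true;
  if (t === "number") return Number.isFinite(value as number);
  if (t !== "object") return false; // function, undefined, symbol, bigint
  const obj = value as object;
  if (seen.has(obj)) return false;  // cyclic structure
  seen.add(obj);
  const ok = Array.isArray(obj)
    ? obj.every((v) => isJsonSafe(v, seen))
    : Object.getPrototypeOf(obj) === Object.prototype && // rejects Dates, Maps, class instances
      Object.values(obj).every((v) => isJsonSafe(v, seen));
  seen.delete(obj);
  return ok;
}

console.log(isJsonSafe({ sum: 3, tags: ["a"] }));  // true
console.log(isJsonSafe({ when: new Date() }));     // false
```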
Error behavior
- `throw new Error("message")` will surface as a workflow/runtime error: `Script execution failed: message`.
- Throwing non‑Error values is converted to a string and surfaced similarly.
Resource limits (per script)
- You can control memory/time limits via `script.environment`:
  - `memoryMB` (default 128, min 8, max 2048)
  - `timeoutMs` (default 1000)
  - `filename` (used in stack traces)
Example:
```yaml
use:
  functions:
    compute:
      run:
        script:
          language: "js"
          code: "return { sum: a + b }"
          arguments:
            a: "${.left}"
            b: "${.right}"
          environment:
            memoryMB: 128
            timeoutMs: 1000
            filename: "user-script.js"
```
Use‑cases:
- Simple numeric/string operations and transformations.
- Shaping objects from evaluated arguments.
- Lightweight, synchronous data munging to be consumed by later tasks.
If you need dynamic HTTP, filesystem, or other I/O, use an MCP tool/integration or model those steps as separate tasks rather than inside run.script.
MCP integrations (metadata)
```yaml
metadata:
  integrations:
    myserver:
      command: "node"
      args: ["./path/to/mcp-server.js"]
      env:
        NODE_ENV: "production"
```
- Tools are discovered and registered as `{integration}/{tool}`.
- As an agent tool: set `with.tools` to a list (simple or full names), or omit to allow all discovered tools by default (see "Limitations").
- Direct calls:
  - `call: "integration://discover"` with `{ type: 'integrations' | 'tools' | 'all' }`
  - `call: "integration://{integration}/{tool}"` with the tool's input schema
Example:
```yaml
do:
  - listTools:
      call: "integration://discover"
      with: { type: "tools" }
  - runExampleTool:
      call: "integration://myserver/myTool"
      with:
        param1: "value"
        # ... provide fields matching the tool's input schema
```
Custom functions (use.functions)
HTTP call (static only; no expression evaluation within the request object):
```yaml
use:
  functions:
    getInfo:
      call:
        url: "https://api.example.com/info"
        method: "POST"
        headers: { "content-type": "application/json" }
        body: { fixed: "payload" } # static
```
For dynamic HTTP, prefer an MCP tool or a custom integration. `run.script` executes in an isolated V8 context with no Node.js APIs and no `fetch` by default.
JavaScript script:
```yaml
use:
  functions:
    compute:
      run:
        script:
          language: "js"
          code: "return a + b"
          arguments:
            a: "${.left}"
            b: "${.right}"
```
Notes:
- Scripts run in an isolated V8 context (via isolated-vm), with no Node.js APIs and no `require`/`import`/`fetch` by default.
- The value your code returns must be JSON-serializable (plain objects/arrays/numbers/strings/booleans/null). If you need to return non-JSON types (e.g., BigInt, class instances), convert or serialize them yourself before returning.
- You can set per-script environment limits:
```yaml
use:
  functions:
    compute:
      run:
        script:
          language: "js"
          code: "return { sum: a + b }"
          arguments:
            a: "${.left}"
            b: "${.right}"
          environment:
            memoryMB: 128              # default 128, min 8, max 2048
            timeoutMs: 1000            # default 1000
            filename: "user-script.js" # used in stack traces
```
9) Events and telemetry
Operation wrapper
- Internal actions run inside an operation that:
- Creates a span (OpenTelemetry API)
- Emits a workflow event
- Records success/failure/metadata
- Normalizes errors to RFC 7807
Subscribe to events
import { workflowEvents } from 'foxflow';
const off = workflowEvents.on('workflow.**', (evt) => {
// evt.type, evt.status, evt.metadata, evt.timestamp, evt.error?
});
OpenTelemetry
import { initTelemetry } from 'foxflow';
// With no options → console exporters
await initTelemetry();
// Or with OTLP HTTP endpoint:
// await initTelemetry({ endpoint: 'https://otel-collector.example', serviceName: 'my-app' });
Note: setting telemetry: true in the runner config toggles an env var; you still need to call initTelemetry() to start the SDK/exporters.
10) Resilience (retry, wait)
Task-level retry (supported)
```yaml
do:
  - retryableTask:
      call: agent
      with:
        instructions: "Create something"
        schema: { type: object, properties: { ok: { const: true } } }
      retry:
        delay: { seconds: 1 }
        backoff: { exponential: {} } # or linear | constant
        limit:
          attempt: { count: 5, duration: "PT1M" } # stop after count (optionally stop by time)
        jitter:
          from: { milliseconds: 100 }
          to: { milliseconds: 1000 }
```
- Works for any task that fails (agent, run.script, etc.).
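To see how such a policy typically plays out, here is a sketch of common exponential-backoff-with-jitter semantics (illustrative math, not FoxFlow's exact scheduler):

```typescript
// Sketch: delay before retry attempt n under exponential backoff with jitter.
// baseMs mirrors retry.delay; the jitter range mirrors retry.jitter.from/to.
function retryDelayMs(
  attempt: number, // 1-based retry attempt
  baseMs: number,
  jitter: { fromMs: number; toMs: number },
): number {
  const backoff = baseMs * 2 ** (attempt - 1); // exponential growth per attempt
  const spread = jitter.toMs - jitter.fromMs;
  return backoff + jitter.fromMs + Math.random() * spread;
}

// With delay = 1s and jitter 100-1000ms, attempt 3 waits ~4s plus jitter.
const d = retryDelayMs(3, 1000, { fromMs: 100, toMs: 1000 });
console.log(d >= 4100 && d < 5000); // true
```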
Wait
- The `wait` task supports ISO 8601 ("PT10S") or inline object durations.
Timeout
- A `timeout` field exists in the schema but is not enforced at runtime yet.
11) Data flow and state
State surfaces
- workflow: { id, definition, input, output?, startedAt }
- context: mutable global state (your working context)
- input: current task input
- output: current task output
- fileSystem: virtual filesystem API
- runtime/secrets/task: engine/runtime info
Input processing
- Input may include a special `files` map that is written to the VFS before any transforms.
- `input.schema.document` validates input (excluding `files`).
- `input.from` transforms input; the result is written to `$context`, `$input`, and `$output`.
Task I/O and export
- input: validate and/or transform before task execution
- output: transform and/or validate after task execution
- export: write to `$context` (with optional schema validation)
Flow directives
- Any task may return a directive; `then` may override:
  - 'continue' | 'exit' | 'end' | 'taskName' (jump)
- Both `exit` and `end` immediately terminate the workflow with the current output
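For example, a sketch of a jump plus an early end (task names are illustrative):

```yaml
do:
  - validate:
      set: { ok: true }
      then: finalize   # jump: skip the audit task entirely
  - audit:
      set: { audited: true }
  - finalize:
      set: { done: true }
      then: end        # terminate with the current output
```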
12) Sub-workflows
run.workflow executes an in-memory subflow:
```yaml
- runChild:
    run:
      workflow:
        namespace: "ns"
        name: "child"
        version: "1.0.0"
        input:
          x: "${.value}"
```
- Subflows are resolved from `$context.workflows['ns/name@version']` (or `$context.subflows[...]`).
- Execution is in-process and synchronous; the task output is `{ workflow: subOutput }`.
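The resolution key follows a simple `ns/name@version` convention; a tiny sketch of building it (an illustrative helper, not a FoxFlow export):

```typescript
// Build the context-map key a subflow is looked up under.
function subflowKey(namespace: string, name: string, version: string): string {
  return `${namespace}/${name}@${version}`;
}

console.log(subflowKey("ns", "child", "1.0.0")); // "ns/child@1.0.0"
```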
13) API surface (exports)
From foxflow:
- Classes/functions
- Flow, executeWorkflow, runWorkflow, createWorkflowRunner
- StateManager, VirtualFileSystem, FunctionRegistry, AgentManager
- validateJSONSchema, validateWorkflowSchema, validateAgainstSchema
- WorkflowError, isWorkflowError
- workflowEvents, initTelemetry
- Types (selection)
- WorkflowDefinition, WorkflowDocument, Task, TaskList, NamedTask
- InputConfig, OutputConfig, ExportConfig, SchemaConfig
- CallTask, DoTask, ForTask, RaiseTask, RunTask, SetTask, SwitchTask, TryTask, WaitTask
- RetryPolicy, Duration, FlowDirective
- AgentConfig, FunctionDefinition, IntegrationConfig
- FileSystemFile, FileSystemOperations
- WorkflowState, ProblemDetails, Expression, ExprOptions
- DSL schema export
- `import schema from 'foxflow/schema'` (JSON)
14) Configuration
Runner configuration
import { createWorkflowRunner } from 'foxflow';
const runner = createWorkflowRunner({
apiKeys: {
openai: process.env.OPENAI_API_KEY,
anthropic: process.env.ANTHROPIC_API_KEY,
provider: process.env.PROVIDER_API_KEY // e.g., Nebius (OpenAI-compatible)
},
telemetry: true // toggles env flag
});
Note: Call initTelemetry() to actually start OpenTelemetry exporting. MCP integrations should be configured on the workflow itself under metadata.integrations (see section 8). The runner's configuration does not apply MCP integrations or a default workspace.
15) Limitations and gotchas
CNCF Serverless Workflow
- This is a subset with extensions (agents, MCP, VFS, jq). Where behavior differs, FoxFlow prioritizes pragmatic AI workflows.
Custom HTTP functions
- `use.functions.*.call` (HTTP) is static today. Expressions inside the URL/headers/body are not evaluated/substituted.
- For dynamic HTTP, use an MCP tool/integration (`run.script` has no network access).
try.catch.retry
- Present in the schema but not executed today. Prefer task-level retry or model a retry loop explicitly.
timeout
- The `timeout` field is not enforced yet. Use `wait` where appropriate or manage timeouts externally.
Agent path/content interpolation
- Left side (paths) supports simple `{var}` interpolation from `with.inject` only; other variables (e.g., workflow, workspace) are not interpolated unless injected.
- Right side (content) supports full `${...}` jq; `$currentPath` is available.
Agent tools default
- When MCP integrations are configured and `tools` is omitted (and not 'none'), all discovered tools are available by default.
returnToolResult
- Behavior depends on the generation path. For non-schema tool usage, tool results are returned by default; for schema output, tool results may be incorporated into structured output. The `returnToolResult` flag is not consistently applied across paths yet.
Telemetry
- Setting `telemetry: true` on the runner only toggles an env flag; call `initTelemetry()` to start the SDK/exporters.
Sub-workflows
- Resolved only from in-memory context maps (`$context.workflows` / `$context.subflows`) in this version.
16) Troubleshooting
This section lists common issues and their fixes, grounded in the current engine behavior and tests.
isolated-vm on Node 20+
- Set `NODE_OPTIONS=--no-node-snapshot` (see Installation). Without it you may see initialization errors when running tests or the runner.
Script return value not JSON-serializable
- `run.script` executes in an isolated context and returns values via JSON. Returning functions, class instances, BigInt, or circular structures will fail.
- Return plain JSON-compatible data or serialize to strings yourself inside the script.
Expression errors (JQ or script)
- Symptoms:
- type: https://serverlessworkflow.io/spec/1.0.0/errors/expression
- title: "JQ Expression Error" or "Script Execution Error"
- Causes:
- Invalid jq (e.g., indexing into non-array/object, bad syntax)
- Throwing non-Error values in run.script (e.g., throw "string error")
- Fixes:
- Validate expressions in small steps
- For run.script, prefer throwing Error objects and ensure code compiles
- Use minimal test flows to isolate the failing expression or code
Task retry stops early or never succeeds
- Symptoms:
- "Retry Limit Reached" (status 500) when attempt.count is exhausted
- "Retry Time Limit Reached" (status 408) when limit.duration is exceeded
- Fixes:
- Increase limit.attempt.count or limit.duration to allow more retries
- Adjust delay/backoff/jitter to your expected timing
- For deterministic tests, use milliseconds delays and avoid large jitter
Invalid flow directives
- Symptoms: Validation error "Invalid Flow Directive" when using an unknown task name
- Fixes:
- Use only 'continue' | 'exit' | 'end' or a valid task name present in the workflow
- When jumping, ensure the destination task exists later in the workflow
File system reads/writes in expressions
- Behavior:
- Use fs:// literal expressions where the engine accepts expressions directly (e.g., workflow.output.as: "fs://**")
- Returning a string beginning with fs:// in output.as will return a map of matching files
- Fixes:
- Do not wrap fs:// URIs in "${ ... }" — literal fs:// strings are resolved by the engine
- Provide input.files if you need seed files at the start of a run
Try/catch not catching expected errors
- Symptoms: Errors bubble despite a catch block
- Causes: Catch filters do not match (e.g., errors.with.type differs from actual error.type)
- Fixes:
- Ensure errors.with matches the thrown error's properties (type/status/title)
- Use 'if' on catch to further scope catch behavior only when desired
Sub-workflows not found
- Symptoms: "Sub-Workflow Not Found" (status 404)
- Fixes:
- Provide subflows via $context.workflows['ns/name@version'] or $context.subflows[...] before run.workflow
- Ensure namespace/name/version match exactly
Switch default path behavior
- Behavior: If no cases match and no explicit default is provided, execution continues to the next task with unchanged input
- Fixes:
- Add an explicit default case if you need a fallback branch
jq availability
- Symptoms: Expression evaluation failures or missing jq capability
- Fixes:
- Ensure jq is installed and available on your system PATH (node-jq relies on it)
17) License
This software is proprietary and confidential. All rights reserved.
See the LICENSE file for details. For licensing inquiries, contact [email protected].
