# @agenv/workstreams

v0.8.0

Workstream management library and CLI.
## Install

```
npm install -g @agenv/workstreams
# or
bun install -g @agenv/workstreams
```

## Quick Start
```
work init --sqlite
work create --name "my-feature"
work current --set "001-my-feature"
work validate requirements
work plan create --stages 2
work validate plan
work check plan
work approve plan
```

## Core Workflow
- Create a draft workstream container: `work create --name "my-feature"`
- Fill `REQUIREMENTS.md` and add extra inputs under `resources/`
- Set the current workstream (or pass `--stream`): `work current --set "001-my-feature"`
- Validate requirements: `work validate requirements`
- Scaffold plan stages: `work plan create --stages 2`
- Edit `PLAN.md`
- Validate/check the plan:
  - `work validate plan` warns but succeeds for an empty draft plan
  - `work check plan` highlights open questions and missing inputs
- Approve the plan: `work approve plan` (user role, requires at least one stage)
- Fill `TASKS.md`
- Approve tasks: `work approve tasks` (user role)
- Manually `/fork` the session and ask the forked session to supervise the approved work
- The supervision branch uses `work supervise`, `work status`, `work tree`, and `work batch-status` to drive the next batch and review loop
- Implementation agents use `implementing-workstreams` to inspect assigned scope and update task state with `work update`
- The supervisor reports back; the user approves the completed stage with `work approve stage <n>`
- Repeat the supervision loop for the next stage
- If new stages are needed after the original plan, use the revision flow: `work revision --name "follow-up" [--after-stage N]`, then `work approve revision`, then `work approve tasks`
- Finalize the report with the `evaluating-workstreams` skill: `work report validate`
The optional managed install profile preserves the older Root Agent management-launch workflow. The default manual profile omits the management skill and launch tool so the user controls the `/fork` handoff.

Shortcut: `work create --name "my-feature" --stages 2` still creates the draft container and scaffolds stages immediately.
Generated files on `work create`:

- `REQUIREMENTS.md` for the human-authored summary, deliverables, dependencies, and resources
- `resources/` for supplemental inputs referenced from `REQUIREMENTS.md`
- `PLAN.md` for staged execution planning
- `tasks.json` compatibility JSON for projected machine state
- `docs/` for extra workstream notes
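As a rough sketch of that generated layout (illustrative only; the exact paths come from the CLI, and later sections show these files living under `work/<stream-id>/`):

```
work/<stream-id>/
├── REQUIREMENTS.md   # human-authored summary, deliverables, dependencies, resources
├── PLAN.md           # staged execution planning
├── tasks.json        # compatibility JSON for projected machine state
├── resources/        # supplemental inputs referenced from REQUIREMENTS.md
└── docs/             # extra workstream notes
```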
## Useful Commands

```
work status
work tree
work tree --batch "01.01"
work batch-status --batch "01.01" --format json
work list --tasks
work list --tasks --thread "01.01.01"
work update --task "01.01.01.01" --status in_progress
work update --task "01.01.01.01" --status completed --report "Implemented X"
work report metrics --blockers
work export --format json
```

## Sqlite-authoritative bootstrap and migration
For new repositories, prefer sqlite-authoritative initialization:

```
work init --sqlite
```

That bootstraps `work/db.sqlite` as the canonical structured store. `agents.yaml` and `github.json` are still created on disk, and markdown documents plus `resources/` remain filesystem content.
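The schema of `work/db.sqlite` is not part of the documented interface, but because it is a plain sqlite file you can inspect it read-only with any sqlite client. A minimal sketch (the helper name is ours, and it makes no assumptions about table names):

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """List user tables in a sqlite file without taking a write lock."""
    # mode=ro opens the database read-only so inspection cannot mutate
    # the canonical structured store
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' AND name NOT LIKE 'sqlite_%' "
            "ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()

# Example: list_tables("work/db.sqlite")
```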
For an existing repository that already has `work/index.json`, `work/<stream-id>/tasks.json`, or legacy runtime artifacts, run the same command:

```
work init --sqlite
```

During initialization the CLI hydrates legacy filesystem state into sqlite, keeps the current-stream pointer, imports approval/runtime history, and rewrites compatibility projections when legacy data is present.
After cutover, normal operator commands read canonical sqlite state first:

```
work status
work tree --batch "01.01"
work list --tasks --thread "01.01.01"
work batch-status --batch "01.01" --format json
```

Rebuild compatibility JSON from sqlite whenever you need fresh `index.json` / `tasks.json` rollback or old-tooling projections:

```
work rebuild-compat
work rebuild-compat --stream current
work rebuild-compat --output-root /tmp/sqlite-compat-snapshot
```

- The default rebuild writes `work/index.json` plus projected `work/<stream-id>/tasks.json` files back into the repo
- `--stream` only refreshes that workstream's `tasks.json`
- `--output-root` writes a rollback-safe snapshot of `index.json`/`tasks.json` somewhere else so you can inspect the projection before replacing live compatibility files
- `work rebuild-compat` does not currently rebuild legacy runtime compatibility artifacts like `threads.json`, `supervisor-state.json`, or `batch-status/*.json`
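One way to sanity-check an `--output-root` snapshot before promoting it is a structural diff of the projected JSON against the live compatibility files. A minimal sketch, assuming only that both sides are plain JSON objects (the function name and example paths are illustrative):

```python
import json
from pathlib import Path

def compare_projection(live_path: str, snapshot_path: str) -> list[str]:
    """Return the top-level keys whose values differ between two JSON files."""
    live = json.loads(Path(live_path).read_text())
    snap = json.loads(Path(snapshot_path).read_text())
    # Union of keys so additions and removals both show up
    keys = sorted(set(live) | set(snap))
    return [k for k in keys if live.get(k) != snap.get(k)]

# Example (illustrative paths):
# compare_projection("work/index.json",
#                    "/tmp/sqlite-compat-snapshot/work/index.json")
```

An empty result means the snapshot matches the live projection at the top level, which is a reasonable pre-flight check before rerunning the rebuild without `--output-root`.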
For a direct sqlite-vs-compatibility parity check on one workstream:

```
STREAM_ID=001-my-feature bun --eval '
import { inspectCriticalWorkflowDualWriteParitySync } from "@agenv/workstreams"
const inspection = inspectCriticalWorkflowDualWriteParitySync(process.cwd(), process.env.STREAM_ID)
console.log(JSON.stringify({
  parity: inspection?.parity,
  summary: inspection?.divergences.summary,
  intentionalMismatches: inspection?.intentionalMismatches,
}, null, 2))
'
```

Rollback-safe inspection flow:
- Inspect the live sqlite-backed view with `work status`, `work tree`, `work list`, or `work batch-status`.
- Run `work rebuild-compat --output-root /tmp/sqlite-compat-snapshot`.
- Inspect `/tmp/sqlite-compat-snapshot/work/index.json` and `/tmp/sqlite-compat-snapshot/work/<stream-id>/tasks.json` without mutating the live repo.
- Treat that snapshot as an `index.json`/`tasks.json` compatibility snapshot only; it does not currently include `threads.json`, `supervisor-state.json`, or `batch-status/*.json`.
- If you need to hand legacy JSON back to older tooling, rerun `work rebuild-compat` without `--output-root` after you are satisfied with the snapshot.
When legacy files still matter:
- during one-time hydration of pre-sqlite repositories
- when you need compatibility JSON for rollback drills, audits, or older tooling that still reads `index.json`/`tasks.json`
- when inspecting a rollback-safe snapshot produced by `work rebuild-compat --output-root ...`
When they do not matter:
- for normal sqlite-authoritative command execution after `work init --sqlite` has completed successfully
- as the source of truth for structured workstream state while `work/db.sqlite` is present and healthy
- for legacy runtime wrappers like `threads.json`, `supervisor-state.json`, and `batch-status/*.json`, except as migration inputs or explicit compatibility artifacts
Current sqlite-native runtime note:
- `work batch-status` is authoritative and sqlite-backed; do not treat `work/<stream-id>/batch-status/*.json` as a required live runtime surface.
- sqlite-native/new workstreams no longer normally project `threads.json` or `batch-status/*.json`.
- `supervisor-state.json` may still appear as a compatibility/runtime artifact during the current transition period.
## Storage architecture notes

- `work/db.sqlite` is the canonical structured source of truth in sqlite-authoritative repos.
- `work/index.json` and `work/<stream-id>/tasks.json` are compatibility projections rebuilt from sqlite for legacy tooling and inspection workflows.
- `threads.json` and `batch-status/*.json` are no longer normal projected runtime artifacts for sqlite-native workstreams.
- `supervisor-state.json` remains transitional compatibility/runtime output where current supervision flows still expect it.
- Markdown workstream docs, `resources/`, and artifact-like outputs remain filesystem-based.
- Use `work rebuild-compat` when you need to regenerate compatibility JSON from canonical sqlite state.
- Local-first sqlite architecture: `../../docs/LOCAL_FIRST_SQLITE_ARCHITECTURE.md`
- Storage adapter package/refactor recommendation: `../../docs/STORAGE_PACKAGE_BOUNDARIES.md`
## Supervision Workflow (manual default; managed profile optional)

Use `work supervise` as the branch execution/recovery primitive. In the default manual profile, the user creates that branch with `/fork`; in the optional managed profile, the Root Agent can launch it with the management tool:

```
work supervise
work supervise --batch "01.01"
work supervise --dry-run
```

`work supervise` launches `work multi --headless --async`, waits for the batch to become terminal, and produces deterministic review evidence from canonical execution state (task status/report fields, runtime thread metadata, and persisted batch status).
The supervisor branch then either:
- continues automatically to the next incomplete batch,
- runs one automatic fix cycle (default), or
- escalates to the user based on escalation/stage-boundary policy.
Within that supervised batch execution, implementation agents commonly inspect scope with:

```
work status
work tree --batch "01.01"
work list --tasks --thread "01.01.01"
```

They are expected to keep task state accurate while they work:

```
work update --task "01.01.01.01" --status in_progress
work update --task "01.01.01.01" --status completed --report "1-2 sentence summary"
```

In practice, orchestration continues only when the supervising branch decides the batch is safe to continue. It stops when the supervising branch decides user input is needed, a stage boundary is reached, there is no next batch, or execution/wait fails.
If `--timeout-ms` is reached before the batch becomes terminal, the wait fails and `work supervise` exits without reviewing the incomplete batch. That interrupted run remains resumable and is preferred on the next `work supervise` rerun.
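Those timeout semantics amount to polling for a terminal batch state until a deadline. A minimal sketch with an injected status fetcher (the callable stands in for reading `work batch-status --format json`; the function itself is illustrative, not the library's implementation):

```python
import time

TERMINAL = {"completed", "failed"}

def wait_for_terminal(fetch_status, timeout_ms: int, poll_ms: int = 500) -> str:
    """Poll fetch_status() until a terminal batch state or the deadline.

    Raises TimeoutError so the caller can exit without reviewing the
    incomplete batch, leaving the interrupted run resumable.
    """
    deadline = time.monotonic() + timeout_ms / 1000
    while True:
        status = fetch_status()
        if status in TERMINAL:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"batch still {status!r} at timeout")
        # Sleep the poll interval, but never past the deadline
        time.sleep(min(poll_ms / 1000, max(deadline - time.monotonic(), 0)))
```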
After any stop, inspect the unified persisted state before resuming:

```
# 1) task/thread snapshot for the batch that just ran
work tree --batch "01.01"
# 2) persisted execution state for that batch
work batch-status --batch "01.01" --format json
# 3) projected tasks.json compatibility view (includes runtime_state.supervision)
cat work/<stream-id>/tasks.json
```

Interpretation quick-guide (what success looks like vs what to inspect):
- Terminal success / safe resume: `work batch-status` is `completed` and `tasks.json` → `runtime_state.supervision` includes the batch under `reviewed_batches` plus persisted finalization evidence for the run decision.
- Timeout/wait failure: batch status remains non-terminal; inspect the batch plus `tasks.json` → `runtime_state.supervision`, then rerun plain `work supervise` to resume that same interrupted batch before later incomplete batches.
- Escalation/stage stop: inspect `escalations` and `stage_stops` to confirm what operator action is required.
- Terminal failed run: `work batch-status` is `failed`; inspect failed thread summaries before retrying.
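The quick-guide above can be sketched as a small classifier over persisted state. This is illustrative only: the field names (`reviewed_batches`, `escalations`, `stage_stops`) come from this README, but the exact `tasks.json` shape is an assumption.

```python
def classify_batch(batch_status: str, supervision: dict, batch: str) -> str:
    """Classify a stopped supervision run from persisted state.

    batch_status mirrors `work batch-status`; supervision mirrors the
    assumed shape of tasks.json -> runtime_state.supervision.
    """
    if batch_status == "completed" and batch in supervision.get("reviewed_batches", []):
        return "terminal-success"        # safe to resume with the next batch
    if batch_status == "failed":
        return "terminal-failed"         # inspect failed thread summaries
    if supervision.get("escalations") or supervision.get("stage_stops"):
        return "operator-action-needed"  # escalation or stage-boundary stop
    return "interrupted"                 # non-terminal: rerun `work supervise`
```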
Timeout resume smoke checklist (short drill):

- Force interruption: `work supervise --batch "01.01" --timeout-ms 100`
- Inspect persisted state: `work batch-status --batch "01.01" --format json` and `cat work/<stream-id>/tasks.json`
- Resume normally: run plain `work supervise` and confirm it resumes `01.01` first (does not skip to later incomplete batches)
- Verify outcome class:
  - Resumed success: batch becomes terminal `completed`, `reviewed_batches` contains `01.01`, and persisted finalization evidence for that same batch appears in `tasks.json` → `runtime_state.supervision`
  - Still interrupted/non-terminal: batch remains non-terminal and no new `reviewed_batches` entry exists yet (wait/investigate before treating as complete)
Persisted-state note: prefer `tasks.json` runtime ordering/evidence to verify same-batch resume, rather than relying only on transient console logs.
Escalation policy: branch runs escalate to the Root Agent; the Root Agent escalates to the user.
Key projected inspection file: `work/<stream-id>/tasks.json` (`tasks[]` plus `runtime_state.{threads,batches,supervision}`)
## Reporting model (v1)
For v1 supervision, task-level report text plus canonical workstream state are the primary review inputs.
This is sufficient for current automated follow-up decisions, but reporting may evolve in a later revision toward a richer structured format if operator workflows require more granular machine-readable evidence.
Quick post-fix verification checklist:
- Successful completion path: batch status is terminal and `tasks.json` → `runtime_state.supervision` records both review evidence (`reviewed_batches`) and persisted finalization evidence for that batch/run.
- Timeout/failure path: batch status remains non-terminal at timeout; no new reviewed entry is recorded for the incomplete batch, and supervisor state keeps the interrupted run resumable until review can continue.
Resume examples:

```
# timeout/resume drill: force interruption on a known batch
work supervise --batch "01.01" --timeout-ms 100
# resume interrupted work first (same batch), otherwise continue from next incomplete batch
work supervise
# rerun a specific batch after manual fixes or policy edits
work supervise --batch "01.01"
```

For the timeout/resume drill, verify recovery by confirming `01.01` reaches reviewed/finalized persisted state before Root Agent orchestration progresses to any later incomplete batch.
## Validation checks for Root Agent ownership (drift reduction)

To confirm reduced drift vs the previous self-contained `work supervise` model:

- Verify each fix/escalate decision is grounded in persisted state inside `tasks.json` (`runtime_state.batches` + `runtime_state.supervision`), not transient logs alone.
- Confirm `reviewed_batches`, `fix_cycles`, and `escalations` entries match the Root Agent decision taken for that batch.
- Run regression tests from `packages/workstreams`:

```
bun run test tests/supervise.test.ts tests/supervisor-state.test.ts
```

These tests validate deterministic review evidence, escalation/fix-cycle persistence, and the interruption-safe resume behavior relied on by Root Agent orchestration.
## Prompt-first branch handoff live validation

For the current one-level prompt-first experiment:

- treat supervision branches as single-hop children of the Root Agent only
- if a branch tries to launch another supervision branch, the launch must fail with a guardrail error and the branch should yield back upward
- validate lineage and branch status from `work/<stream-id>/tasks.json` → `runtime_state.supervision.branch_sessions[]`
- validate the final branch report by exporting the child native session transcript and reading the last completed assistant message
- classify drift when the branch acts like the Root Agent or proposes deeper branching instead of reporting its `work supervise` outcome

For full operator guidance, verification drills, and branching background, see `../../docs/SUPERVISOR.md`, `../../docs/supervision-manual-verification-checklist.md`, and `../../docs/ROOT_AGENT_BRANCHING_ARCHITECTURE.md`.
