stable-harness
v0.0.16
Stable runtime and operator control plane for agent applications.
stable-harness lets a team keep its chosen agent framework while adding the
production surfaces that real workspaces need: YAML inventory, runtime requests,
sessions, event traces, artifacts, memory lifecycle, governance hooks, recovery,
tool repair, and protocol access.
It is not another agent execution framework. Upstream frameworks own execution semantics. Stable Harness owns the runtime boundary around them.
Why Use It
Agent frameworks are good at deciding what an agent should do next. Production applications also need a stable layer that can be inspected, governed, resumed, replayed, and called through predictable APIs.
Stable Harness gives you that layer without rewriting the backend:
- define agents, tools, models, memory, workflows, and protocol exposure in YAML
- run the same workspace through CLI, SDK, HTTP, and OpenAI-compatible clients
- keep DeepAgents and LangGraph behavior upstream-native through thin adapters
- validate and repair tool calls at the runtime gateway before execution
- observe upstream tool, planning, delegation, progress, memory, and artifact events
- keep downstream product logic in the workspace, not inside the framework
Install
npm install stable-harness

Stable Harness currently targets Node.js >=24 <25.
Create a workspace without cloning this repo:
npx stable-harness init ./my-agent-app
stable-harness -w ./my-agent-app
stable-harness -w ./my-agent-app --agent orchestra --tool echo_tool --tool-args-json '{"value":"hello"}'

First Run
Clone the repo when developing the framework itself:
git clone [email protected]:botbotgo/stable-harness.git
cd stable-harness
npm install
npm run build
npm run check:rules
npm test
npm run example:minimal

Run an existing Stable Harness workspace:
stable-harness -w ./examples/minimal-deepagents "hello stable harness"

Inspect the workspace without running an agent:
stable-harness -w ./examples/minimal-deepagents
stable-harness agent render orchestra -w ./examples/minimal-deepagents
stable-harness workflow render review-shell -w ./examples/minimal-deepagents

Start the OpenAI-compatible facade:
stable-harness start -w ./examples/minimal-deepagents --port 8642

Then point compatible clients at:
http://127.0.0.1:8642/v1

Embed In An App
import { createStableHarnessRuntime } from "stable-harness";

const runtime = await createStableHarnessRuntime("/path/to/workspace");
const response = await runtime.request({
  input: "Review the current release evidence.",
  agentId: "orchestra",
});
console.log(response.output);

The runtime also exposes subscribe, inspect, getRun, listRequests,
listSessions, inspectRequest, cancel, and stop so applications can build
operator workflows around the same execution surface.
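As a self-contained sketch of that operator surface, the loop below uses a mock in place of a real workspace runtime; the request, listRequests, and cancel shapes are assumptions for illustration, not the published API:

```javascript
// Mock stand-in for createStableHarnessRuntime() so the operator pattern
// can be shown without a workspace; response shapes are assumptions.
function createMockRuntime() {
  const requests = new Map();
  return {
    async request({ input, agentId }) {
      const id = `req-${requests.size + 1}`;
      const record = { id, agentId, status: "completed", output: `echo: ${input}` };
      requests.set(id, record);
      return record;
    },
    async listRequests() {
      return [...requests.values()];
    },
    async cancel(id) {
      const record = requests.get(id);
      if (record?.status === "running") record.status = "cancelled";
      return record;
    },
  };
}

const runtime = createMockRuntime();
const response = await runtime.request({ input: "hello", agentId: "orchestra" });

// Operator sweep: cancel anything still running (nothing is, in this mock).
const open = (await runtime.listRequests()).filter((r) => r.status === "running");
for (const r of open) await runtime.cancel(r.id);

console.log(response.output); // "echo: hello"
```

The same sweep pattern applies to the real runtime handle returned by createStableHarnessRuntime.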
Workspace Shape
A workspace is a folder with Kubernetes-style YAML documents:
config/
  runtime/workspace.yaml
  agents/orchestra.yaml
  catalogs/models.yaml
  catalogs/tools.yaml
  workflows/review-shell.yaml
resources/
  tools/
  skills/

Minimal runtime:
apiVersion: stable-harness.dev/v1
kind: Runtime
metadata:
  name: app-runtime
spec:
  routing:
    defaultAgentId: orchestra
  protocols:
    inProcess: true
    openaiCompatible:
      bearerToken: ${env:STABLE_HARNESS_API_KEY}

Minimal agent:
apiVersion: stable-harness.dev/v1
kind: Agent
metadata:
  name: orchestra
spec:
  backend: deepagents
  modelRef: local-dev
  systemPrompt: You are a concise workspace agent.
  tools:
    - shell
  subagents:
    - reviewer

Runtime Boundary
flowchart LR
  Client["CLI / SDK / HTTP / OpenAI-compatible client"]
  Runtime["Stable Harness runtime"]
  Inventory["YAML workspace inventory"]
  Gateway["Tool gateway + repair policy"]
  Adapter["Thin backend adapter"]
  Backend["DeepAgents / LangGraph / future backend"]
  Ops["Events / runs / memory / approvals / artifacts"]
  Client --> Runtime
  Inventory --> Runtime
  Runtime --> Gateway
  Runtime --> Adapter
  Adapter --> Backend
  Runtime --> Ops

Stable Harness owns lifecycle, governance, observability, persistence, recovery, protocol access, and tool-gateway policy. It does not infer routing from user keywords, synthesize upstream planning calls, or rebuild backend-native agent semantics.
Current Backends
| Backend | Status | Boundary |
| --- | --- | --- |
| DeepAgents | Primary adapter | Upstream execution, skills, planning, delegation, and built-ins are passed through; Stable Harness observes and governs the runtime edge. |
| LangGraph | Runtime and workflow adapter | Stable Harness can compile explicit workflow topology and expose LangGraph-compatible protocol surfaces. |
| Custom adapters | Supported through SDK | Implement RuntimeAdapter and declare the backend in workspace YAML. |
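As an illustration of the shape a custom adapter might take (the method names and payloads here are assumptions, not the published RuntimeAdapter contract), a trivial echo backend could be wrapped like:

```javascript
// Hypothetical adapter shape -- execute() and events() are illustrative
// assumptions, not the published RuntimeAdapter interface.
class EchoBackendAdapter {
  constructor(options) {
    this.options = options;
  }

  // Translate a harness-level request into a backend invocation.
  async execute({ input, agentId }) {
    return { agentId, output: `[echo-backend] ${input}` };
  }

  // Surface backend events so the runtime can trace them.
  async *events() {
    yield { type: "backend.started", backend: this.options.name };
  }
}

const adapter = new EchoBackendAdapter({ name: "echo" });
const result = await adapter.execute({ input: "hi", agentId: "orchestra" });
console.log(result.output); // "[echo-backend] hi"
for await (const event of adapter.events()) console.log(event.type); // "backend.started"
```

The workspace YAML would then name this backend in the agent spec, mirroring the backend: deepagents field shown earlier.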
Tool Reliability
Stable Harness uses @botbotgo/better-call at the tool-gateway boundary. The
default CLI path configures repair mode for registered tools, so malformed or
near-miss tool calls can be repaired before execution while agent inventory,
schema validation, semantic validators, and governance policy still define what
is allowed.
This is constrained repair, not silent magic:
- unknown or unauthorized tools stay blocked
- semantic validators remain authoritative
- upstream built-ins stay upstream-owned
- repaired calls are observable through runtime events and traces
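The constrained-repair policy above can be sketched in miniature. This is not the @botbotgo/better-call API, only a toy illustration of the idea: registered schemas define what may be repaired, and everything else stays blocked:

```javascript
// Toy gateway sketch of constrained repair -- NOT the @botbotgo/better-call
// API. Only registered tools are eligible, and only near-miss args are fixed.
const registry = {
  echo_tool: { required: ["value"] },
};

function repairCall(call) {
  const tool = registry[call.name];
  // Unknown or unauthorized tools stay blocked -- never repaired into existence.
  if (!tool) return { ok: false, reason: "unknown tool" };
  const args = { ...call.args };
  let repaired = false;
  for (const key of tool.required) {
    if (key in args) continue;
    // Near-miss repair: a case-mismatched key is renamed; nothing is invented.
    const near = Object.keys(args).find((k) => k.toLowerCase() === key.toLowerCase());
    if (!near) return { ok: false, reason: `missing required arg: ${key}` };
    args[key] = args[near];
    delete args[near];
    repaired = true;
  }
  return { ok: true, name: call.name, args, repaired };
}

console.log(repairCall({ name: "echo_tool", args: { Value: "hello" } }));
// → { ok: true, name: 'echo_tool', args: { value: 'hello' }, repaired: true }
console.log(repairCall({ name: "rm_rf", args: {} }));
// → { ok: false, reason: 'unknown tool' }
```

In the real gateway, the `repaired` flag corresponds to the runtime events and traces that make repairs observable.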
Protocols
- OpenAI-compatible facade: docs/protocols/openai-compatible.md
- LangGraph-compatible facade: docs/protocols/langgraph-compatible.md
- HTTP runtime protocol: docs/protocols/http-runtime.md
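A client request against the OpenAI-compatible facade might look like the sketch below. The /v1 base URL comes from the Install section above; the /chat/completions route and the model-to-agent mapping are assumptions based on the standard OpenAI wire format, so check docs/protocols/openai-compatible.md for the facade's actual behavior:

```javascript
// Chat-completions payload sketch for the facade started with
// `stable-harness start`; route and model mapping are assumptions.
const body = {
  model: "orchestra", // assumed to map to an agent id in the workspace
  messages: [{ role: "user", content: "hello stable harness" }],
};

const url = "http://127.0.0.1:8642/v1/chat/completions";
const headers = {
  "content-type": "application/json",
  authorization: `Bearer ${process.env.STABLE_HARNESS_API_KEY ?? "dev"}`,
};

console.log(JSON.stringify(body));
// To send for real (facade must be running):
// const res = await fetch(url, { method: "POST", headers, body: JSON.stringify(body) });
```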
Documentation
- Documentation index
- Getting started
- Workspace authoring
- Integration guide
- Operator runbook
- Adoption playbook
- Market positioning
Product Boundary
Read the product-boundary documents before adding public runtime behavior.
The short rule: pass through upstream execution semantics first, then add only small, typed, replaceable runtime capabilities around them.
