
@botbotgo/agent-harness

v0.0.53

Workspace runtime for multi-agent applications

Downloads: 2,861

@botbotgo/agent-harness

Product Overview

@botbotgo/agent-harness is a workspace-shaped application runtime for real agent products.

It is not a new agent framework. It is the runtime layer around LangChain v1 and DeepAgents that turns one workspace into one operable application runtime.

The boundary is strict:

  • LangChain v1 and DeepAgents own agent execution semantics
  • agent-harness owns application-level orchestration and lifecycle management

That means:

  • public API stays small
  • complex setup and operating policy live in YAML
  • application-level orchestration and lifecycle management stays in the harness
  • runtime lifecycle stays stable even if backend implementations change

What the runtime provides:

  • createAgentHarness(...), run(...), subscribe(...), inspection methods, and stop(...)
  • YAML-defined workspace assembly for routing, models, tools, stores, backends, MCP, recovery, and maintenance
  • backend-adapted execution with current LangChain v1 and DeepAgents adapters
  • local resources/tools/ and resources/skills/ discovery
  • persisted threads, runs, approvals, events, queue state, and recovery metadata

Quick Start

Install:

npm install @botbotgo/agent-harness

Workspace layout:

your-workspace/
  config/
    workspace.yaml
    agent-context.md
    models.yaml
    embedding-models.yaml
    vector-stores.yaml
    stores.yaml
    backends.yaml
    tools.yaml
    mcp.yaml
    agents/
      direct.yaml
      orchestra.yaml
  resources/
    package.json
    tools/
    skills/

Minimal usage:

import { createAgentHarness, run, stop } from "@botbotgo/agent-harness";

const runtime = await createAgentHarness("/absolute/path/to/workspace");

try {
  const result = await run(runtime, {
    agentId: "auto",
    input: "Explain what this workspace is for.",
  });

  console.log(result.output);
} finally {
  await stop(runtime);
}

Feature List

  • Workspace runtime for multi-agent applications
  • Small public runtime contract
  • YAML-defined host routing and runtime policy
  • LangChain v1 and DeepAgents backend adaptation
  • Auto-discovered local tools and SKILL packages
  • Provider-native tools, MCP tools, and workspace-local tool modules
  • Persisted threads, runs, approvals, lifecycle events, and queued runs
  • Runtime-managed recovery and checkpoint maintenance
  • Structured output and multimodal content preservation in run results
  • MCP bridge support for agent-declared MCP servers
  • MCP server support for exposing harness tools outward

How To Use

Create A Runtime

import { AgentHarnessRuntime, createAgentHarness } from "@botbotgo/agent-harness";

const runtime: AgentHarnessRuntime = await createAgentHarness("/absolute/path/to/workspace");

createAgentHarness(...) loads one workspace, resolves resources/, initializes persistence under runRoot, and starts runtime maintenance.

Run A Request

import { run } from "@botbotgo/agent-harness";

const result = await run(runtime, {
  agentId: "orchestra",
  input: "Summarize the runtime design.",
  invocation: {
    context: {
      requestId: "req-123",
    },
    inputs: {
      visitCount: 1,
    },
    attachments: {
      "/tmp/spec.md": { content: "draft" },
    },
  },
});

run(runtime, { ... }) creates or continues a persisted thread and returns threadId, runId, state, and a simple text output. When upstream returns richer output, the runtime also preserves outputContent, contentBlocks, and structuredResponse without making the basic API larger.

Use invocation as the runtime-facing request envelope:

  • invocation.context for request-scoped execution context
  • invocation.inputs for additional structured runtime inputs
  • invocation.attachments for attachment-like payloads that the active backend can interpret

Let The Runtime Route

const result = await run(runtime, {
  agentId: "auto",
  input: "Inspect the repository and explain the release flow.",
});

agentId: "auto" evaluates ordered routing.rules, then routing.defaultAgentId, and only falls back to model routing when routing.modelRouting: true.
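As a sketch, that routing policy might be declared in config/workspace.yaml as follows. Only the field names themselves (routing.defaultAgentId, routing.rules, routing.systemPrompt, routing.modelRouting) are documented; the shape of an individual rule is an assumption.

```yaml
# Illustrative routing sketch — per-rule fields are assumptions.
routing:
  defaultAgentId: direct
  modelRouting: true
  systemPrompt: Route each request to the most suitable agent.
  rules:
    - match: repository     # hypothetical matcher field
      agentId: orchestra
```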

Stream Output And Events

const result = await run(runtime, {
  agentId: "orchestra",
  input: "Inspect the workspace and explain the available tools.",
  listeners: {
    onChunk(chunk) {
      process.stdout.write(chunk);
    },
    onContentBlocks(blocks) {
      console.log(blocks);
    },
    onEvent(event) {
      console.log(event.eventType, event.payload);
    },
  },
});

In addition to per-run listeners, subscribe(...) is a read-only observer surface over stored lifecycle events.

The runtime event stream includes:

  • run.created
  • run.queued
  • run.dequeued
  • run.state.changed
  • approval.requested
  • approval.resolved
  • output.delta
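A hedged sketch of observing those events with subscribe(...); the exact listener and unsubscribe shape is an assumption, not a documented signature:

```typescript
import { createAgentHarness, subscribe, stop } from "@botbotgo/agent-harness";

const runtime = await createAgentHarness("/absolute/path/to/workspace");

// Hypothetical listener shape — the real subscribe(...) signature may differ.
const unsubscribe = subscribe(runtime, (event) => {
  console.log(event.eventType, event.payload);
});

// ... later, detach the observer before shutting down.
unsubscribe();
await stop(runtime);
```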

Inspect Threads And Approvals

import {
  getApproval,
  getThread,
  listApprovals,
  listThreads,
} from "@botbotgo/agent-harness";

const threads = await listThreads(runtime);
const thread = await getThread(runtime, threads[0]!.threadId);
const approvals = await listApprovals(runtime, { status: "pending" });
const approval = approvals[0] ? await getApproval(runtime, approvals[0].approvalId) : null;

These methods return runtime-facing records, not raw persistence or backend checkpoint objects.

Bridge MCP Servers Into Agents

apiVersion: agent-harness/v1alpha1
kind: Agent
metadata:
  name: orchestra
spec:
  execution:
    backend: deepagent
    modelRef: model/default
    mcpServers:
      - name: browser
        command: node
        args: ["./mcp-browser-server.mjs"]

The runtime discovers MCP tools, filters them through agent configuration, and exposes them like other tools.

Expose Runtime Tools As An MCP Server

import { createToolMcpServer, serveToolsOverStdio } from "@botbotgo/agent-harness";

const server = await createToolMcpServer(runtime, { agentId: "orchestra" });
await serveToolsOverStdio(runtime, { agentId: "orchestra" });

Stop The Runtime

import { stop } from "@botbotgo/agent-harness";

await stop(runtime);

How To Configure

Use Kubernetes-style YAML:

  • collection files use apiVersion, plural kind, and spec: []
  • single-object files use apiVersion, singular kind, metadata, and spec
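Both shapes side by side, using objects shown elsewhere in this README:

```yaml
# Collection file (config/models.yaml): plural kind, list under spec
apiVersion: agent-harness/v1alpha1
kind: Models
spec:
  - name: default
    provider: openai
    model: gpt-4.1
---
# Single-object file (config/agents/direct.yaml): singular kind, metadata + spec
apiVersion: agent-harness/v1alpha1
kind: Agent
metadata:
  name: direct
spec:
  execution:
    backend: langchain-v1
```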

Core workspace files:

  • config/workspace.yaml
  • config/agent-context.md
  • config/models.yaml
  • config/embedding-models.yaml
  • config/vector-stores.yaml
  • config/stores.yaml
  • config/backends.yaml
  • config/tools.yaml
  • config/mcp.yaml
  • config/agents/direct.yaml
  • config/agents/orchestra.yaml
  • resources/tools/
  • resources/skills/

There are three configuration layers:

  • runtime policy in config/workspace.yaml
  • reusable object catalogs in config/*.yaml
  • agent assembly in config/agents/*.yaml

The repository-owned default config layer is intentionally full-shaped. The shipped YAML keeps explicit default values for the main runtime knobs so teams can start from concrete config instead of reconstructing adapter defaults from code.

config/workspace.yaml

Use this file for runtime-level policy shared by the whole workspace.

Primary fields:

  • runRoot
  • concurrency.maxConcurrentRuns
  • routing.defaultAgentId
  • routing.rules
  • routing.systemPrompt
  • routing.modelRouting
  • maintenance.checkpoints
  • recovery.enabled
  • recovery.resumeResumingRunsOnStartup
  • recovery.maxRecoveryAttempts

If runRoot is omitted, the runtime defaults to <workspace-root>/run-data.

Queued runs are persisted under runRoot and continue after a process restart. Runs in the running state are replayed on startup only when their bound tools are retryable.
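Putting the primary fields together, a minimal config/workspace.yaml could look like the following. The kind name and exact nesting are assumptions based on the field list above:

```yaml
# Illustrative sketch — kind name and nesting are assumptions.
apiVersion: agent-harness/v1alpha1
kind: Workspace
spec:
  runRoot: ./run-data
  concurrency:
    maxConcurrentRuns: 2
  routing:
    defaultAgentId: direct
    modelRouting: false
  recovery:
    enabled: true
    resumeResumingRunsOnStartup: true
    maxRecoveryAttempts: 3
```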

config/agent-context.md

Use this file for shared bootstrap context loaded into agents at construction time.

Put stable project context here. Do not use it as mutable long-term memory.

config/models.yaml

Use named chat-model presets:

apiVersion: agent-harness/v1alpha1
kind: Models
spec:
  - name: default
    provider: openai
    model: gpt-4.1
    temperature: 0.2

These load as model/<name>.

config/embedding-models.yaml

Use named embedding-model presets for retrieval-oriented tools.
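No schema is spelled out here, so as a sketch mirroring models.yaml (the kind name and model value are assumptions):

```yaml
apiVersion: agent-harness/v1alpha1
kind: EmbeddingModels   # kind name is an assumption
spec:
  - name: default
    provider: openai
    model: text-embedding-3-small
```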

config/vector-stores.yaml

Use named vector-store presets referenced by retrieval tools.
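As an illustrative sketch only; the kind name and per-entry fields are assumptions:

```yaml
apiVersion: agent-harness/v1alpha1
kind: VectorStores      # kind name is an assumption
spec:
  - name: default
    storeKind: MemoryVectorStore   # hypothetical built-in kind
```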

config/stores.yaml

Use reusable store and checkpointer presets:

apiVersion: agent-harness/v1alpha1
kind: Stores
spec:
  - kind: Store
    name: default
    storeKind: FileStore
    path: store.json
  - kind: Checkpointer
    name: default
    checkpointerKind: MemorySaver
  - kind: Checkpointer
    name: sqlite
    checkpointerKind: SqliteSaver
    path: checkpoints.sqlite

Built-in store kinds today:

  • FileStore
  • InMemoryStore

Built-in checkpointer kinds today:

  • MemorySaver
  • FileCheckpointer
  • SqliteSaver

If you need other store or checkpointer implementations, inject them through runtime resolvers instead of treating them as built-in harness features.

config/backends.yaml

Use reusable DeepAgent backend presets so filesystem and long-term memory topology stays in YAML instead of application code:

apiVersion: agent-harness/v1alpha1
kind: Backends
spec:
  - kind: Backend
    name: default
    backendKind: CompositeBackend
    state:
      kind: VfsSandbox
      rootDir: .
      virtualMode: true
      timeout: 600
    routes:
      /memories/:
        kind: StoreBackend

config/tools.yaml

Use this file for reusable tool objects.

Supported tool families in the built-in runtime include:

  • function tools
  • backend tools
  • MCP tools
  • provider-native tools
  • bundles

Provider-native tools are declared in YAML and resolved directly to upstream provider tool factories such as OpenAI and Anthropic tool objects.

Use retryable carefully. Mark a tool retryable only when repeated execution is safe or intentionally idempotent.
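The tool object schema is not documented above, so the following is only an illustrative sketch of where the retryable flag might sit; every field beyond apiVersion and retryable is an assumption:

```yaml
apiVersion: agent-harness/v1alpha1
kind: Tools             # kind name is an assumption
spec:
  - kind: Tool
    name: fetch-url     # hypothetical function tool
    retryable: false    # repeated execution is not assumed safe
```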

config/mcp.yaml

Use this file for named MCP server presets.
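As a sketch mirroring the inline mcpServers shape shown in the agent example (the kind name is an assumption):

```yaml
apiVersion: agent-harness/v1alpha1
kind: McpServers        # kind name is an assumption
spec:
  - name: browser
    command: node
    args: ["./mcp-browser-server.mjs"]
```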

config/agents/*.yaml

Agents are always declared with kind: Agent and spec.execution.backend.

Use two nested sections inside each agent:

  • spec.runtime for harness-owned runtime placement such as spec.runtime.runRoot
  • spec.execution for upstream execution semantics and adapter-facing config

This keeps the public product model small while letting LangChain v1 and DeepAgents concepts pass through with minimal translation.

Example lightweight host:

apiVersion: agent-harness/v1alpha1
kind: Agent
metadata:
  name: direct
spec:
  runtime:
    runRoot: ./.agent
  execution:
    backend: langchain-v1
    modelRef: model/default
    tools: []
    skills: []
    memory: []
    subagents: []
    mcpServers: []
    config:
      checkpointer:
        ref: checkpointer/default
      store:
        ref: store/default
      interruptOn: {}
      middleware: []
      systemPrompt: Answer simple requests directly.

Example main execution host:

apiVersion: agent-harness/v1alpha1
kind: Agent
metadata:
  name: orchestra
spec:
  runtime:
    runRoot: ./.agent
  execution:
    backend: deepagent
    modelRef: model/default
    memory:
      - path: config/agent-context.md
    tools: []
    skills: []
    subagents: []
    mcpServers: []
    config:
      store:
        ref: store/default
      checkpointer:
        ref: checkpointer/default
      backend:
        ref: backend/default
      interruptOn: {}
      middleware: []
      generalPurposeAgent: true
      taskDescription: Complete delegated sidecar work and return concise results.

Client-configurable agent fields include:

  • metadata.name
  • metadata.description
  • spec.execution.backend
  • spec.runtime.runRoot
  • spec.execution.modelRef
  • spec.execution.tools
  • spec.execution.skills
  • spec.execution.memory
  • spec.execution.subagents
  • spec.execution.mcpServers
  • spec.execution.config.systemPrompt
  • spec.execution.config.checkpointer
  • spec.execution.config.store
  • spec.execution.config.backend
  • spec.execution.config.middleware
  • spec.execution.config.responseFormat
  • spec.execution.config.contextSchema
  • spec.execution.config.stateSchema
  • spec.execution.config.interruptOn
  • spec.execution.config.filesystem
  • spec.execution.config.taskDescription
  • spec.execution.config.generalPurposeAgent
  • spec.execution.config.includeAgentName
  • spec.execution.config.version

For backend-specific agent options, prefer passing the upstream concept directly inside spec.execution.config. The loader keeps a small stable product shape, but it also preserves adapter-facing passthrough fields so new LangChain v1 or DeepAgents parameters can flow into adapters without expanding the public API surface.

Upstream feature coverage is tracked in docs/upstream-feature-matrix.md.

resources/

Use resources/ for executable local extensions:

  • resources/tools/ for tool modules
  • resources/skills/ for SKILL packages

Tool modules are discovered from resources/tools/*.js, resources/tools/*.mjs, and resources/tools/*.cjs.

The preferred tool module format is exporting tool({...}).
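A minimal resources/tools/ module might look like the sketch below. This assumes the harness accepts a default export created with LangChain's tool helper; the exact discovery contract and export convention may differ.

```javascript
// resources/tools/uppercase.mjs — hypothetical example module.
// Assumes a default export built with LangChain's `tool` helper.
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export default tool(
  async ({ text }) => text.toUpperCase(),
  {
    name: "uppercase",
    description: "Return the input text in upper case.",
    schema: z.object({ text: z.string() }),
  }
);
```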

SKILL packages are discovered from resources/skills/ and attached to agents through YAML.

Design Notes

  • public runtime contract stays generic and small
  • application-level orchestration and lifecycle management stays in the harness
  • upstream LangChain v1 and DeepAgents concepts should be expressed as directly as possible in YAML
  • recovery, approvals, threads, runs, and events are runtime concepts, not backend-specific escape hatches
  • backend implementation details should stay internal unless product requirements force exposure
  • new LangChain v1 or DeepAgents public config should land in YAML passthrough and compatibility tests before adding new public runtime APIs

In short: agent-harness is a public runtime contract generic enough to survive backend changes, while the deep execution semantics stay upstream.

API Summary

Primary exports:

  • createAgentHarness
  • run
  • subscribe
  • listThreads
  • getThread
  • deleteThread
  • listApprovals
  • getApproval
  • createToolMcpServer
  • serveToolsOverStdio
  • stop