# duoduo

**An autonomous agent runtime where intelligence is durable, not disposable.**
Most agent stacks are request/response wrappers around an LLM: prompt in, answer out, state gone. duoduo is built as a long-lived runtime with a durable body (filesystem), explicit event history (WAL), and a dual-loop cognitive model (foreground + subconscious).
## Why duoduo
duoduo is opinionated around one idea:

> An agent should behave like a living system with continuity, not a stateless function call.
That means:
- durable event history before execution
- recoverable session actors with mailbox scheduling
- background reflection that updates shared memory
- prompt topology the system can evolve over time
## Core Innovations

### 1. Filesystem-First, Event-Sourced Runtime
Durable state is not hidden in process memory. duoduo persists execution-critical data to files:
- Spine WAL: append-only canonical events
- Mailbox: session work queue
- Outbox: durable delivery records
- Registry: resumable runtime/session state
If the process dies, the system rehydrates from files and continues.
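The rehydrate-from-files contract can be sketched as follows. This is a minimal illustration, not duoduo's actual code: `appendEvent`, `replay`, and the event shape are hypothetical stand-ins for the Spine WAL's JSONL format.

```typescript
import { appendFileSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical canonical event shape; the real Spine schema is richer.
interface SpineEvent {
  seq: number;
  kind: string;
  payload: unknown;
}

// Append one event as a JSONL line. Append-only: history is never rewritten.
function appendEvent(walPath: string, evt: SpineEvent): void {
  appendFileSync(walPath, JSON.stringify(evt) + "\n", "utf8");
}

// Rehydrate state purely from the file. If the process died, this is
// all that is needed to continue from the last durable event.
function replay(walPath: string): SpineEvent[] {
  if (!existsSync(walPath)) return [];
  return readFileSync(walPath, "utf8")
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as SpineEvent);
}

const wal = join(tmpdir(), `spine-demo-${process.pid}.jsonl`);
appendEvent(wal, { seq: 1, kind: "channel.ingress", payload: { text: "hi" } });
appendEvent(wal, { seq: 2, kind: "session.output", payload: { text: "hello" } });
const events = replay(wal); // a fresh process would start exactly here
```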
### 2. WAL-Before-Execute at the Gateway Boundary
Ingress follows a strict contract:
```
Channel input -> canonical Spine event -> mailbox enqueue -> session wake
```
Events are durably written before any agent turn executes. This gives replayability, auditability, and deterministic crash recovery.
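A minimal sketch of that ordering, modeled on the paths described later in this README (the function name, file names, and mailbox item shape are illustrative assumptions, not the real implementation):

```typescript
import {
  openSync, writeSync, fsyncSync, closeSync,
  writeFileSync, mkdirSync, existsSync,
} from "node:fs";
import { createHash } from "node:crypto";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical ingress step: the event is made durable BEFORE any
// mailbox item exists, so a crash between steps never loses input.
function ingest(root: string, sessionKey: string, text: string): string {
  const eventsDir = join(root, "var", "events");
  mkdirSync(eventsDir, { recursive: true });

  // 1. WAL append, flushed to disk first.
  const fd = openSync(join(eventsDir, "wal.jsonl"), "a");
  writeSync(fd, JSON.stringify({ kind: "channel.ingress", sessionKey, text }) + "\n");
  fsyncSync(fd); // the event is durable from this point on
  closeSync(fd);

  // 2. Only then enqueue a mailbox item under sha256(session_key).
  const hash = createHash("sha256").update(sessionKey).digest("hex");
  const inbox = join(root, "var", "sessions", hash, "inbox");
  mkdirSync(inbox, { recursive: true });
  const item = join(inbox, `${Date.now()}.pending`);
  writeFileSync(item, JSON.stringify({ ref: "wal.jsonl" }));
  return item; // 3. a session.wake signal would follow in the real runtime
}

const itemPath = ingest(
  join(tmpdir(), `duo-ingest-demo-${process.pid}`),
  "stdio:default",
  "hello",
);
```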
### 3. One External Identity, Many Internal Sessions
Externally, duoduo can present one coherent agent identity. Internally, it orchestrates multiple session actors (channel sessions, job sessions, meta session), each with explicit lifecycle and concurrency control.
This separates user-facing continuity from execution scalability.
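The session-actor pattern behind this can be sketched in a few lines. This is a generic mailbox/drain illustration under stated assumptions (the class and method names are hypothetical), showing why a session never runs two turns concurrently even when messages arrive in parallel:

```typescript
// Hypothetical session actor: one mailbox, strictly serial drain.
class SessionActor {
  private mailbox: string[] = [];
  private draining = false;
  public handled: string[] = [];

  constructor(private runTurn: (msg: string) => Promise<string>) {}

  // Enqueue + wake. Concurrent wakes collapse into the active drain loop,
  // so at most one turn executes per session at any time.
  async deliver(msg: string): Promise<void> {
    this.mailbox.push(msg);
    if (this.draining) return; // an active drain will pick it up
    this.draining = true;
    try {
      while (this.mailbox.length > 0) {
        const next = this.mailbox.shift()!;
        this.handled.push(await this.runTurn(next));
      }
    } finally {
      this.draining = false;
    }
  }
}
```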
### 4. Dual-Loop Cognition (Cortex + Subconscious)
- Cortex (foreground): handles live user/task turns.
- Subconscious (background): cadence-driven, playlist-scheduled partition execution.
Subconscious partitions are stateless per tick and file-defined under `kernel/subconscious/*/CLAUDE.md`, enabling periodic reflection, maintenance, and memory consolidation without blocking foreground work.
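One way the playlist-driven tick could look, as a sketch: partitions exist only as files, and the playlist decides which run this tick and in what order. The `Partition` interface and `playlistOrder` helper are hypothetical illustrations, not duoduo's real scheduler.

```typescript
// Illustrative: partitions are defined purely by files on disk
// (kernel/subconscious/<name>/CLAUDE.md) and hold no in-process
// state between ticks.
interface Partition {
  name: string;       // e.g. "memory-weaver"
  promptPath: string; // kernel/subconscious/<name>/CLAUDE.md
}

function playlistOrder(playlist: string[], available: Partition[]): Partition[] {
  // The playlist file decides which partitions run this tick, in order.
  const byName = new Map(available.map((p) => [p.name, p]));
  return playlist.flatMap((name) => {
    const p = byName.get(name);
    return p ? [p] : []; // unknown entries are skipped, not fatal
  });
}
```

Because each tick re-reads the playlist and prompt files, the agent can reorder or rewrite its own subconscious between ticks without touching process state.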
### 5. Self-Programming Cognitive Topology
Subconscious partitions can evolve their own prompt layer:
- modify partition prompts
- add/remove partitions
- adjust execution cadence
The runtime ships a scaffold; long-term behavior is increasingly authored by the agent through durable files.
### 6. Radically Minimal Runtime Layer
duoduo intentionally keeps application code thin and delegates reasoning/tool orchestration to the foundation model + SDK. The runtime owns only what the model cannot reliably own by itself: durability, lifecycle, scheduling, and concurrency boundaries.
## How It Works

```
Channel -> Gateway -> Spine WAL -> Mailbox -> Session Manager/Runner -> SDK -> Outbox -> Channel
                                      |
                                      +-> Cadence -> Meta Session -> Playlist Partitions -> Memory updates
```

### Foreground Loop

`channel.ingress` / `channel.command` -> Gateway normalization -> Spine append -> session mailbox -> runner drain -> outbox emission -> channel delivery.
### Background Loop
Cadence rhythm triggers subconscious execution (playlist + inbox merge + partition run), producing durable artifacts and shared-memory updates for future turns.
## 10-Minute Architecture Walkthrough
Scenario: a user on session `stdio:default` sends:

> "Summarize what changed in this repo and highlight risky parts."
1. Client calls `channel.ingress` with `session_key=stdio:default`.
2. Gateway normalizes input and appends a canonical event to `~/.aladuo/var/events/<yyyy-mm-dd>.jsonl`.
3. Gateway enqueues a mailbox item to `~/.aladuo/var/sessions/<sha256(session_key)>/inbox/*.pending`.
4. Session Manager receives `session.wake`, starts/rehydrates the actor, and begins mailbox drain.
5. Runner merges inbox into `mailbox.md`, resolves `@evt(...)` from Spine, and executes one SDK turn.
6. During execution, stream chunks are emitted as `session.stream`; final output is persisted to `~/.aladuo/var/outbox/<channel_kind>/obx_*.json` and emitted as `session.output`.
7. Client receives output through WebSocket notifications or `channel.pull`; `channel.ack` advances the delivery cursor.
8. On cadence ticks, Meta Session runs playlist partitions; memory-weaver writes fragments/dossiers and curates `~/aladuo/memory/CLAUDE.md`; memory-committer reviews allowlisted kernel changes and commits meaningful memory/subconscious/config evolution into the kernel Git history.
9. The next foreground turn loads updated broadcast memory automatically, so behavior evolves with durable context, while Git preserves how that cognition changed over time.
What this demonstrates in one pass: deterministic ingest, recoverable execution, durable delivery, and subconscious memory consolidation feeding back into future turns.
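Step 1 of the walkthrough can be sketched as a JSON-RPC 2.0 payload. The method name and `session_key` come from this README; the exact parameter schema and transport details (HTTP POST to the daemon at `http://127.0.0.1:20233`, or the WebSocket connection) are assumptions for illustration.

```typescript
// Hypothetical JSON-RPC 2.0 request builder for channel.ingress.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

function buildIngress(sessionKey: string, text: string, id = 1): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "channel.ingress",
    params: { session_key: sessionKey, text },
  };
}

// A client would send this to the daemon, then listen for session.stream
// (token chunks) and session.output (final record) notifications.
const req = buildIngress("stdio:default", "Summarize what changed in this repo...");
```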
## Prompt Loading Chain
Every SDK turn combines prompt sources from the runner and the filesystem:
| Source | Role |
| --- | --- |
| `bootstrap/meta-prompt.md` | Identity prompt source (runner append chain) |
| `kernel/config/<kind>.md` | Kind prompt (runner systemPrompt layer) |
| `descriptor.md` | Channel prompt (runner systemPrompt layer) |
| `memory/CLAUDE.md` | Broadcast board (agent-curated, via additionalDirectories) |
| `cwd/CLAUDE.md` | Work/partition prompt (SDK auto-load) |
| `cwd/CLAUDE.local.md` | Local runtime notes (SDK auto-load) |

Ownership model:

- `CLAUDE.md`: code-owned scaffold
- `CLAUDE.local.md`: agent-managed runtime notes
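The chain above composes in load order. A minimal sketch of that composition, assuming a simple "skip missing layers, append the rest" rule (the `composePrompt` helper and loader callback are hypothetical; the real runner splits these between the systemPrompt layer and the append chain):

```typescript
// Prompt sources in load order, as listed in this README.
const chain: Array<[path: string, role: string]> = [
  ["bootstrap/meta-prompt.md", "identity"],
  ["kernel/config/<kind>.md", "kind prompt"],
  ["descriptor.md", "channel prompt"],
  ["memory/CLAUDE.md", "broadcast board"],
  ["cwd/CLAUDE.md", "work prompt"],
  ["cwd/CLAUDE.local.md", "local notes"],
];

// load() returns the file's text, or null if the layer is absent.
function composePrompt(load: (path: string) => string | null): string {
  return chain
    .map(([path]) => load(path))
    .filter((t): t is string => t !== null)
    .join("\n\n");
}
```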
## Memory Model (Dossier-First)
```
memory/
├── CLAUDE.md      Cross-session broadcast board (auto-loaded)
├── index.md       Bounded memory index
├── entities/      Entity dossiers
├── topics/        Topic dossiers
├── fragments/     Raw per-tick observations
└── state/         Memory cursor/state
```

Memory formation path:

```
Spine events -> memory-weaver partition -> fragments -> dossiers -> broadcast board
```

Memory evolution path:

```
dossiers/broadcast/config delta -> memory-committer partition -> kernel git commit
```
## Runtime Filesystem

```
~/.aladuo/          Runtime state (ephemeral)
  var/events/       Spine WAL (JSONL)
  var/registry/     Session/runtime snapshots
  var/outbox/       Durable outbound records
  var/sessions/     Session mailbox + metadata
  var/jobs/active/  Job definitions + sidecars
  var/cadence/      Cadence queue + inbox
  run/locks/        Session lease locks

~/aladuo/           Kernel (persistent agent state)
  .git/             Memory evolution history
  .gitignore        Runtime-noise exclusions for kernel history
  CLAUDE.md         System-plane prompt
  CLAUDE.local.md   Runtime notes
  subconscious/     Partition system
  memory/           Shared memory surface
```

The kernel is not just a directory of current state. Runtime initialization ensures it is also a dedicated Git repository with a genesis commit, so duoduo's durable memory has both a current surface and an auditable evolution path. The tracked history is intentionally curated:

- `memory/CLAUDE.md`, `memory/index.md`, `memory/entities/`, `memory/topics/`
- `subconscious/*/CLAUDE.md`, `subconscious/playlist.md`
- `config/**/*.md`

Runtime noise stays out of that history via `.gitignore`, including `memory/state/`, `memory/fragments/`, `subconscious/inbox/`, `.claude/`, `*.tmp`, and `CLAUDE.local.md`.
## Engineering Invariants
- Container-first truth: production behavior is defined by container runtime.
- Deterministic I/O: atomic durable writes around critical transitions.
- Boundary clarity: JSON schema at machine boundaries, Markdown for agent cognition.
- No hidden side effects: explicit lifecycle, explicit wake/schedule semantics.
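The "atomic durable writes" invariant is commonly implemented on POSIX filesystems as write-temp, fsync, then rename over the target, so readers see either the old file or the new one, never a torn write. A sketch of that pattern (not duoduo's actual helper):

```typescript
import {
  writeFileSync, renameSync, openSync, fsyncSync, closeSync, readFileSync,
} from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Atomic durable write: temp file + fsync + rename.
function atomicWrite(target: string, data: string): void {
  const tmp = `${target}.tmp-${process.pid}`;
  writeFileSync(tmp, data, "utf8");
  const fd = openSync(tmp, "r");
  fsyncSync(fd); // make the bytes durable before they become visible
  closeSync(fd);
  renameSync(tmp, target); // atomic replacement on the same filesystem
}

const snapshot = join(tmpdir(), `registry-demo-${process.pid}.json`);
atomicWrite(snapshot, '{"v":1}');
atomicWrite(snapshot, '{"v":2}'); // a crash mid-write leaves {"v":1} intact
```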
## Project Structure (pnpm Monorepo)
```
.                          Root workspace (duoduo kernel)
├── src/                   Kernel source code
├── packages/
│   ├── protocol/          @openduo/protocol — shared types + validators
│   └── channel-feishu/    @openduo/channel-feishu — Feishu/Lark gateway
├── pnpm-workspace.yaml    Workspace definition
├── tsconfig.base.json     Shared TS compiler options
└── docker-compose.dev.yml Dev container with hot-reload
```

The project uses pnpm workspaces. Always use pnpm (not npm) for all operations.

- `@openduo/protocol` — pure types and validators shared between kernel and channel packages. Zero runtime dependencies.
- `@openduo/channel-feishu` — independent Feishu gateway process that communicates with the daemon via JSON-RPC/WebSocket. The kernel does not depend on it.
## Getting Started

### Super Entry (Host Quick Start)
```
npx @openduo/duoduo
```

Default behavior:

- Try an existing daemon (`ALADUO_DAEMON_URL` / `127.0.0.1:20233`).
- If unavailable, start an embedded host daemon in the same process.
- Attach the current workspace session and enter chat immediately.

Advanced commands:

```
duoduo daemon status
duoduo daemon start
duoduo daemon stop
duoduo channel list
duoduo channel install ./duoduo-channel-feishu-<ver>.tgz
duoduo channel feishu start --gateway
```

Where to use `npx @openduo/duoduo`:

- In any terminal directory where you want to chat/attach that workspace session.
- No prior global install is required.

Install options:

- Run directly from a local tarball (offline-friendly, no global install):

  ```
  npx --yes --package /absolute/path/to/duoduo-<version>.tgz duoduo
  ```

- Install globally, then use `duoduo` everywhere:

  ```
  npm install -g /absolute/path/to/duoduo-<version>.tgz
  duoduo
  ```

Note: this repo package is private by default, so `npx @openduo/duoduo` from the public npm registry is not available unless you publish to a reachable private registry and remove `private: true`.
## Public Distribution Direction
For external distribution, the practical split is:
- Publish JavaScript packages to the public npm registry.
- Publish the production container image to GHCR.
Rationale:

- npm is the frictionless default for `npx @openduo/duoduo`, global installs, and plugin tarballs.
- GitHub's npm registry is workable for private/internal installs, but public package installs still add consumer-side auth friction, which is a poor fit for a zero-friction CLI entry.
- GHCR public containers are a better fit for private-source delivery because users can pull and run built images without checking out the repository.

The release build already defaults to minified artifacts:

```
pnpm run build:release
```

If you need inspectable artifacts during local debugging:

```
pnpm run build:release:plain
```

Minification raises reverse-engineering cost, but it does not make Node code secret. The real boundary is "ship built artifacts only" plus "do not publish `src/`".

See also: `docs/40-ops/DistributionStrategy.md`
## Private Tarball Distribution (Core + Feishu Plugin)
```
pnpm run release:offline
```

Outputs (default: `dist/offline/`):

- `duoduo-<version>.tgz`
- `duoduo-channel-feishu-<version>.tgz`
- `install-offline.sh` (prints offline install/run commands)

Scope note: this flow packages private duoduo code as local tarballs. It does not vendor all public npm dependencies; install hosts still need access to npm (or a reachable internal mirror/cache) for public packages such as `@anthropic-ai/claude-agent-sdk`.

With pre-pack verification:

```
bash scripts/release-offline.sh --verify
```

## Development (Container-First)
```
pnpm install              # Install all workspace dependencies
pnpm run dev              # Start daemon in dev container (hot-reload)
pnpm run start:stdio      # Connect CLI to running daemon
pnpm run start:acp        # Start ACP gateway over stdio
pnpm test                 # Local tests (fast)
pnpm run test:container   # Container test suite (source of truth)
```

For machine-specific mounts, use a local override file that is not committed:

```
cp docker-compose.dev.local.example.yml docker-compose.dev.local.yml
```

The ACP gateway emits structured `session/update` tool lifecycle events (`tool_call` + `tool_call_update`) with pending/completed status, parsed `rawInput`/`rawOutput`, and text content for client rendering compatibility. When streaming chunks are available, ACP suppresses duplicated final assistant text and keeps full tool payloads (`rawInput`/tool output summaries) without fixed 200-character truncation. Real-time Claude thinking deltas are forwarded as `agent_thought_chunk` (not only `<think>...</think>` post-processing).

`scripts/container-acp.sh` can be invoked via absolute path from any working directory, for example:

```
bash /abs/path/to/aladuo/scripts/container-acp.sh
```

## Feishu Channel Gateway
```
# Set credentials in .env:
# FEISHU_APP_ID=cli_xxx
# FEISHU_APP_SECRET=xxx
pnpm run start:feishu       # Foreground (interactive)
pnpm run start:feishu:bg    # Background (detached)
pnpm run stop:feishu        # Stop background instance
pnpm run logs:feishu        # View logs
pnpm run logs:feishu:tail   # Tail logs (live)
```

Host plugin mode (super entry + plugin manager):

```
# Build plugin artifact first (creates packages/channel-feishu/dist/plugin.js)
pnpm run build:channel:feishu
# Pack plugin tarball
pnpm --dir packages/channel-feishu pack
# Install and run from core CLI
duoduo channel install ./packages/channel-feishu/duoduo-channel-feishu-<ver>.tgz
duoduo channel feishu start --gateway
duoduo channel feishu status
duoduo channel feishu logs
duoduo channel feishu stop
```

The Feishu gateway supports inbound/outbound media bridging (image/file/audio/video/sticker and post-embedded media) via `im.messageResource` + `channel.file.upload/download`. Uploaded files are persisted at `${ALADUO_WORK_DIR}/inbox/<original-filename>/<sha256>.<ext>` with source metadata in `meta.d/<sha256>.yaml`.

When a user replies to an earlier Feishu image/file message, the gateway rehydrates the referenced attachment, re-uploads it idempotently to recover the same local path, includes that path in the reply context text, and forwards recovered parent attachments through ingress for multimodal turns.

The gateway also persists observed Feishu session keys at `~/.cache/feishu-channel/watched-sessions.json` and re-opens pull subscriptions on startup, so job callbacks can resume proactive push after channel process restarts.
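The content-addressed upload layout explains why re-uploading is idempotent: the path depends only on the original filename and the bytes' hash. A sketch of that derivation (the `inboxPath` helper and the `bin` extension fallback are illustrative assumptions; the layout itself is from this README):

```typescript
import { createHash } from "node:crypto";
import { extname, basename, join } from "node:path";

// Derives ${ALADUO_WORK_DIR}/inbox/<original-filename>/<sha256>.<ext>
function inboxPath(workDir: string, originalName: string, bytes: Buffer): string {
  const sha = createHash("sha256").update(bytes).digest("hex");
  // Fallback extension for extensionless uploads is an assumption here.
  const ext = extname(originalName).replace(/^\./, "") || "bin";
  return join(workDir, "inbox", basename(originalName), `${sha}.${ext}`);
}
```

Since identical content always maps to the same path, recovering a replied-to attachment and re-uploading it lands on the file already on disk rather than creating a duplicate.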
## Production (Single duoduo Repo)
This repo now includes a production compose path for a single independent duoduo instance:
```
pnpm run build:release        # Build dist/release (default minify=true)
pnpm run build:release:plain  # Build dist/release without minify
pnpm run prod:build           # Build runtime app image (uses container/Dockerfile.app)
pnpm run prod                 # Start daemon only
pnpm run prod:feishu          # Start daemon + feishu-gateway (profile=feishu)
pnpm run prod:logs            # Tail logs
pnpm run prod:down            # Stop
```

Minify is enabled by default in the release build. Override with:

- local build: `ALADUO_MINIFY=false pnpm run build:release`
- image build: `ALADUO_MINIFY=false pnpm run prod:build`

Files:

- `docker-compose.prod.yml`
- `container/Dockerfile.app`
- `scripts/prod.sh`

For npm tarball distribution, `package.json.files` includes `bootstrap/` so runtime seeding assets are shipped with the package.
## Container QuickStart Without Repo Source
Once the production image is published, users can launch a daemon container directly from the installed CLI:
```
duoduo container plan --name demo
duoduo container up --name demo --env-file .env
duoduo --daemon-url http://127.0.0.1:20233
```

Default instance data lives under `~/.aladuo/containers/<name>/`:

- `workspace/`
- `kernel/`
- `runtime/`
- `claude/`

Users can fully override those host mounts:

```
duoduo container up \
  --name prod \
  --workspace-dir ~/data/duoduo/workspace \
  --kernel-dir ~/data/duoduo/kernel \
  --runtime-dir ~/data/duoduo/runtime \
  --claude-dir ~/data/duoduo/claude \
  --env-file ~/data/duoduo/prod.env
```

Operational commands:

```
duoduo container ps --name prod
duoduo container logs --name prod --follow
duoduo container down --name prod
```

## One Gateway, Multiple Independent duoduo Instances (Claim/Routing)
Each duoduo is still an independent runtime volume/process. To run multiple isolated instances on one host:
- Start each daemon with a unique env file and project/container names:

  ```
  pnpm run prod -- --env-file .env.owner --project-name aladuo-owner --no-build
  pnpm run prod -- --env-file .env.tenant-a --project-name aladuo-tenant-a --no-build
  ```

  When `--project-name` is set, `scripts/prod.sh` auto-derives unique container names:

  - daemon: `<project-name>-daemon`
  - feishu: `<project-name>-feishu`

- Start only one Feishu gateway (usually the owner/root side):

  ```
  pnpm run prod -- --env-file .env.owner --project-name aladuo-owner --with-feishu --no-build
  ```

  In production compose, `feishu-gateway` defaults to `--gateway` mode. You can override mode/args through env:

  ```
  FEISHU_CLI_ARGS=--channel
  FEISHU_CLI_ARGS="--channel --bootstrap-session-key lark:oc_xxx:ou_xxx --root-chat-id oc_root"
  ```

- In CS mode, use the bind flow to claim a target chat session for a target daemon URL: `/cs request <target_session_key> <daemon_url>`
- The user confirms in the target chat: `/bind <code>`

Example: route one chat to a tenant daemon

```
/cs request lark:oc_xxx:ou_xxx http://aladuo-tenant-a-daemon:20233
```

`docker-compose.prod.yml` uses the shared external network `${ALADUO_NETWORK:-aladuo}` so the gateway can route to other daemon containers by URL.
## Key Environment Variables
```
ALADUO_DAEMON_URL=http://127.0.0.1:20233
ALADUO_LOG_LEVEL=debug               # debug|info|warn|error
ALADUO_LOG_RUNNER_THOUGHT_CHUNKS=0   # default 0; set 1 to show runner thought_chunk debug logs
ALADUO_LOG_SESSION_LIFECYCLE=0       # default 0; set 1 to show session-manager/registry lifecycle debug logs
ACP_LOG_LEVEL=warn                   # ACP gateway log level (default: warn; stdio-safe)
ALADUO_CADENCE_INTERVAL_MS=600000    # compose default dev+prod (runtime fallback in code: 300000)
ALADUO_RUNTIME_MODE=yolo             # yolo|container (host mode uses yolo)
SYSTEM_PROMPT=...                    # daemon-level custom system prompt
APPEND_SYSTEM_PROMPT=...             # append to Claude Code preset prompt
ALADUO_META_PROMPT_PATH=...          # optional meta-prompt file to prepend into append chain
FEISHU_APP_ID=                       # Feishu app credentials (optional)
FEISHU_APP_SECRET=
```

Runtime notes:

- `container` mode syncs `~/.claude/settings.json` and `${ALADUO_KERNEL_DIR}/.claude/settings.json`.
- Runtime init does not install/overwrite `~/.claude/CLAUDE.md`.
- Runner prompt injection uses the append chain: `meta-prompt` + `APPEND_SYSTEM_PROMPT`.
- `runner` debug logs suppress high-frequency `thought_chunk` entries by default; set `ALADUO_LOG_RUNNER_THOUGHT_CHUNKS=1` when debugging streaming thoughts.
- `session-manager`/`registry` lifecycle debug logs are suppressed by default; set `ALADUO_LOG_SESSION_LIFECYCLE=1` when debugging wakes, idle transitions, or registry session upserts.
- Container mode rejects both provided and previously bound workspaces outside `ALADUO_WORK_DIR`.
JSON-RPC Surface
| Method | Purpose |
| --- | --- |
| `channel.ingress` | Ingest user message |
| `channel.command` | Ingest channel/system command |
| `channel.file.upload/download` | File transfer |
| `channel.pull` | Pull/replay outputs |
| `channel.ack` | Advance delivery cursor |
| `job.create/get/list` | Job lifecycle management |
WebSocket notifications: `session.stream` (token chunks), `session.output` (final records).

`channel.pull` supports channel outbound capability declaration:

- `channel_capabilities.outbound.accept_mime: string[]` (required when declared)
- `channel_capabilities.outbound.max_bytes?: number`
- MIME wildcard patterns: exact (`image/png`), `type/*` (`image/*`), `*/*`. `*.*` is not protocol-valid; adapters may normalize it to `*/*` before a request.
- If omitted, the daemon defaults to `accept_mime: []` (no outbound attachment support).
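The wildcard rules above can be sketched as a small matcher. This is an illustrative implementation of the documented pattern grammar, not the daemon's actual code:

```typescript
// Matches a concrete MIME type against accept_mime patterns:
// exact ("image/png"), "type/*" ("image/*"), and "*/*".
function mimeAccepted(mime: string, acceptMime: string[]): boolean {
  const [type] = mime.split("/");
  return acceptMime.some((pattern) => {
    if (pattern === "*/*") return true;                       // accept anything
    if (pattern.endsWith("/*")) return pattern.slice(0, -2) === type; // type wildcard
    return pattern === mime;                                  // exact match
  });
}

// An empty list (the default when capabilities are omitted) accepts nothing,
// matching the daemon's "no outbound attachment support" fallback.
```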
## Documentation
- Architecture overview
- Offline-first session model
- Filesystem spec
- Runner spec
- Memory spec
- Gateway spec
- Channel protocol
## License
Private. All rights reserved.
