
oomi-ai

OpenClaw channel plugin and bridge tooling for Oomi managed chat and voice.

Current Focus

0.2.39 keeps the persona automation lane, adds a stable local persona runtime manager, upgrades the Docker dev harness from a package simulator to a real OpenClaw runtime, and introduces a shared OpenClaw profile contract so that local onboarding, Docker bootstrap, and future hosted agents all use the same setup model. This release includes:

  • WebSpatial-based persona scaffolding for generated Oomi apps
  • a high-level oomi personas create-managed command for agent-driven persona creation
  • a stable oomi personas launch-managed flow for local persona hosting under OPENCLAW_WORKSPACE/personas
  • a matching oomi personas delete flow that stops managed runtimes and removes the persona workspace from the OpenClaw machine
  • shared OpenClaw path handling for isolated local or containerized dev roots
  • versioned oomi openclaw profile init|apply commands for deterministic local/dev or hosted setup flows
  • explicit model auth modes so onboarding can default to oomi-managed while internal testing can still opt into direct provider auth
  • a repo-local openclaw debug persona-runtime smoke test for managed persona runtime launch/reuse/stop
  • a Docker-based OpenClaw dev harness that runs a real openclaw gateway inside an isolated container
  • device-authenticated persona runtime registration and job callbacks
  • automatic bridge-side polling for queued persona_job control messages
  • one shared spoken-metadata normalizer used by both the extension and the bridge
  • a repo-backed local tts-pipeline replay that can validate assistant-final -> backend -> real Qwen TTS before publishing
  • spoken-metadata handling that preserves natural pauses like ... and keeps the managed voice contract valid on the real chat session path

This package is for two audiences:

  • OpenClaw operators who need to connect a machine to Oomi and keep chat or voice healthy
  • Developers evaluating the plugin on npm and deciding whether it matches their OpenClaw + Oomi setup

What This Package Ships

The npm package contains two Oomi integration surfaces:

  1. OpenClaw channel extension
  • File: openclaw.extension.js
  • Purpose: stable managed text transport through the Oomi backend channel API
  • This is the preferred integration surface for normal chat
  2. Local bridge + CLI
  • Files: bin/oomi-ai.js, bin/sessionBridgeState.js
  • Purpose: pair a device, manage the OpenClaw bridge worker, and support managed gateway traffic needed by device-backed chat and voice
  • This is the part that deals with broker sockets, local gateway sessions, challenge auth, and bridge health

In practical terms:

  • If you only need a clean managed chat channel, the extension is the main reason to install this package
  • If you need Oomi device-backed chat or voice on an OpenClaw machine, you also need the bridge tooling in this package

When To Install It

Install oomi-ai if all of the following are true:

  • you use OpenClaw
  • you want Oomi as a managed channel inside OpenClaw
  • you want device-backed Oomi chat, Oomi voice, or both

Do not install it just to use the Oomi web app by itself.

Install And Upgrade

Global install:

pnpm add -g oomi-ai@latest

Fallback:

npm install -g oomi-ai@latest

Install the OpenClaw plugin:

openclaw plugins install oomi-ai@latest

Upgrade an existing machine:

pnpm add -g oomi-ai@latest
openclaw plugins install oomi-ai@latest

Operator Quick Start

The packaged operator instructions live in agent_instructions.md. That is the primary reference for:

  • pairing a device
  • installing the plugin
  • configuring channels.oomi.accounts.default
  • running or supervising the bridge
  • checking whether the system is healthy
  • troubleshooting chat and voice failures

Fast-path install flow:

oomi openclaw pair --app-url https://www.oomi.ai --no-start
openclaw plugins install oomi-ai@latest
oomi openclaw plugin --show-secrets --backend-url https://api.oomi.ai

Then apply the printed channels.oomi.accounts.default config and restart OpenClaw.

Configuration

OpenClaw channel config lives under:

{
  "channels": {
    "oomi": {
      "defaultAccountId": "default",
      "accounts": {
        "default": {
          "backendUrl": "https://api.oomi.ai",
          "deviceToken": "...",
          "defaultSessionKey": "agent:main:webchat:channel:oomi",
          "requestTimeoutMs": 15000
        }
      }
    }
  }
}

Required fields:

  • backendUrl
  • deviceToken

Optional fields:

  • defaultSessionKey
  • requestTimeoutMs
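The required/optional split above can be sketched as a small validation helper. This is illustrative only: the helper name is hypothetical, and the 15000 ms default mirrors the sample config rather than a documented package default.

```javascript
// Hypothetical sketch: validate one channels.oomi account entry.
// backendUrl and deviceToken are required; defaultSessionKey and
// requestTimeoutMs are optional (15000 ms default taken from the sample above).
function validateAccount(account) {
  for (const field of ["backendUrl", "deviceToken"]) {
    if (!account[field]) {
      throw new Error(`channels.oomi account missing required field: ${field}`);
    }
  }
  return { requestTimeoutMs: 15000, ...account };
}
```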

Runtime Model

There are two runtime contracts worth understanding.

Managed Text Chat

Managed text chat uses the OpenClaw channel extension and the Oomi backend channel API. This path is the more stable contract and should be preferred when evaluating the plugin for normal chat.

Device-Backed Chat And Voice

Device-backed chat and voice use the local bridge. That bridge:

  • keeps a broker socket open to Oomi
  • opens local gateway sessions on demand
  • enforces connect-first request ordering
  • preserves or synthesizes idempotencyKey for chat.send
  • keeps voice-session faults from poisoning normal provider health where possible

This is the part of the package most likely to matter when debugging voice turn failures.

For managed cloned-voice replies, the canonical contract is:

  • visible assistant content stays user-facing
  • hidden metadata.spoken carries the backend TTS payload
  • the shared helper in lib/spokenMetadata.js is used by both the extension and the local bridge to preserve or normalize that sidecar before it reaches the backend

The backend cloned-voice path is intentionally strict. If metadata.spoken does not reach Oomi, backend TTS fails instead of speaking a flat fallback voice.
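A minimal sketch of that sidecar contract, assuming a simple frame shape; the function name and fallback behavior are illustrative, not the actual lib/spokenMetadata.js API.

```javascript
// Hypothetical sketch: keep visible content user-facing and make sure
// metadata.spoken carries a usable TTS payload, preserving pause markers
// like "..." while collapsing stray whitespace.
function normalizeSpoken(frame) {
  const meta = frame.metadata || {};
  // Fall back to the visible content so the managed voice contract stays valid.
  const spoken =
    typeof meta.spoken === "string" && meta.spoken.trim().length > 0
      ? meta.spoken
      : frame.content;
  const normalized = spoken.replace(/[ \t]+/g, " ").trim();
  return { ...frame, metadata: { ...meta, spoken: normalized } };
}
```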

Bridge Logging

The bridge is intentionally quiet by default in production so normal deploys do not spam logs with frame-level transport noise.

To enable verbose bridge tracing temporarily, set:

OOMI_BRIDGE_DEBUG=1

With that flag enabled, the bridge will emit low-level session, frame, and spoken-metadata debug logs again.
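The quiet-by-default behavior can be pictured as an env-gated logger; the helper below is a sketch of the pattern, not the bridge's actual logging code.

```javascript
// Hypothetical sketch: emit frame-level debug lines only when
// OOMI_BRIDGE_DEBUG=1 is set, so production deploys stay quiet.
function makeDebugLogger(env = process.env) {
  const enabled = env.OOMI_BRIDGE_DEBUG === "1";
  return (tag, detail) => {
    if (!enabled) return null;
    const line = `[bridge:${tag}] ${detail}`;
    console.error(line); // transport noise goes to stderr, away from normal logs
    return line;
  };
}
```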

Local TTS Validation

If you are developing this package inside the Oomi repo, you can now validate the managed voice path locally before publishing.

This local gate does three things:

  • replays an assistant chat.final frame through the same spoken-metadata normalization path used by the OpenClaw extension and the bridge
  • feeds that normalized frame into the Rails backend replay harness
  • optionally calls the real Qwen cloned-voice provider and confirms that audio deltas come back

Important:

  • this is a repo developer workflow, not a generic npm-only operator command
  • it expects the Oomi repo checkout, the Rails backend, and local provider env vars
  • the real-provider replay can auto-enroll a disposable default sample voice profile from assets/voice/source/nemu-enrollment-sample.mp3

Assistant-final contract only:

oomi openclaw debug assistant-final --text "Hey Justin! How is the testing going?" --json

Full local backend replay:

oomi openclaw debug tts-pipeline --text "When your voice reaches me, it gets turned into text, I read it and think about it, then I speak back through the managed chat session." --json

Real Qwen provider replay:

oomi openclaw debug tts-pipeline --text "When your voice reaches me, it gets turned into text, I read it and think about it, then I speak back through the managed chat session." --live-provider --env-file .env.local --provider-timeout-ms 20000 --json

What a good result looks like:

  • backend.success = true
  • managed.assistantSpeechFinal.present = true
  • qwen.errorCode = null
  • qwen.audioDeltaCount > 0 when --live-provider is used

This is the preferred pre-publish gate for managed voice regressions, because it is much faster than publishing to npm and testing through a live OpenClaw machine first.
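The pass criteria above can be checked mechanically against the `--json` output. The field paths below mirror the bullets (backend.success, managed.assistantSpeechFinal.present, qwen.errorCode, qwen.audioDeltaCount); treat this as a reading of the contract, not the exact report schema.

```javascript
// Hypothetical sketch: decide whether a tts-pipeline replay report passed.
function replayPassed(report, { liveProvider = false } = {}) {
  if (!report.backend || report.backend.success !== true) return false;
  if (
    !report.managed ||
    !report.managed.assistantSpeechFinal ||
    report.managed.assistantSpeechFinal.present !== true
  ) return false;
  if (report.qwen && report.qwen.errorCode !== null) return false;
  // audioDeltaCount only matters when the real provider was exercised
  if (liveProvider && !(report.qwen && report.qwen.audioDeltaCount > 0)) return false;
  return true;
}
```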

Local OpenClaw Dev Harness

For plugin/runtime work, the preferred pre-publish loop is:

  1. run the repo-local CLI directly from source
  2. run the same flow inside the Dockerized OpenClaw dev harness using a local packed tarball
  3. only then update a real OpenClaw machine

Fast source smoke from the repo checkout:

node packages/oomi-ai/bin/oomi-ai.js openclaw debug persona-runtime --name "Chef Dev" --json

Containerized real-runtime smoke:

docker compose -f docker/openclaw-dev/compose.yml build openclaw-dev
docker compose -f docker/openclaw-dev/compose.yml up -d openclaw-dev
docker compose -f docker/openclaw-dev/compose.yml exec -T openclaw-dev openclaw gateway health --url ws://127.0.0.1:18789 --token dev-gateway-token --json
docker compose -f docker/openclaw-dev/compose.yml exec -T openclaw-dev oomi-local openclaw debug persona-runtime --name "Chef Dev" --json

The local managed-chat smoke uses a dedicated session key separate from the browser shell so repeated sentinel prompts do not leak into the interactive conversation history.

oomi-local is a deterministic container wrapper that executes the installed packed oomi-ai artifact directly with Node. In the Docker harness, it is only the package wrapper. The agent itself is the real OpenClaw runtime running in the foreground.

Shared profile contract smoke:

node packages/oomi-ai/bin/oomi-ai.js openclaw profile init --profile-id oomi-dev-local --label "Oomi Local Dev" --backend-url http://127.0.0.1:3001 --device-token dev-device-token --json
node packages/oomi-ai/bin/oomi-ai.js openclaw profile apply --profile ~/.openclaw/oomi-openclaw-profile.json --openclaw-home ~/.openclaw --json

What the harness does:

  • bootstraps an isolated OpenClaw home rooted at HOME/.openclaw
  • runs openclaw onboard --non-interactive ...
  • writes and applies HOME/.openclaw/oomi-dev-profile.json using the same shared profile contract the future onboarding UI and hosted-agent bootstrap should use
  • enables the Oomi channel account through that applied profile and relies on local OpenClaw plugin auto-discovery for the installed oomi-ai plugin
  • writes device identity material used by the oomi-ai bridge tooling
  • packs the local packages/oomi-ai checkout into a .tgz
  • installs that tarball globally in the container
  • installs the same tarball as a real OpenClaw plugin
  • defaults model auth to oomi-managed so onboarding/bootstrap does not require end-user provider keys
  • runs openclaw gateway as the foreground container process

Useful env overrides for local integration:

  • OOMI_DEV_BACKEND_URL
  • OOMI_DEV_DEVICE_TOKEN
  • OOMI_DEV_MODEL_AUTH_MODE
  • OPENCLAW_GATEWAY_TOKEN
  • OPENCLAW_GATEWAY_PASSWORD

Recommended local modes:

  • onboarding/runtime checks without provider keys
    • OOMI_DEV_MODEL_AUTH_MODE=oomi-managed
  • internal real-response smoke before publish
    • OPENROUTER_API_KEY=...
    • optional explicit override: OOMI_DEV_MODEL_AUTH_MODE=provider-env

The default container config is intentionally safe for onboarding and runtime testing. It does not require a published npm version, and it does not require end-user provider keys.

To make the Dockerized OpenClaw runtime actually answer managed chat locally today, add this to the repo .env.local:

OOMI_DEV_MODEL_AUTH_MODE=provider-env
OPENROUTER_API_KEY=<your-openrouter-key>

The local harness uses the openrouter-free preset for direct-provider smoke. If OPENROUTER_API_KEY is present in .env.local, pnpm run dev:openclaw-local automatically uses the provider-backed testing path. Without that key, it boots in oomi-managed mode and waits on a future Oomi-managed provider relay.
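The mode selection described above reduces to a small resolution rule: an explicit OOMI_DEV_MODEL_AUTH_MODE wins, otherwise the presence of OPENROUTER_API_KEY picks the provider-backed path. The resolution order is an assumption based on this README, not the harness source.

```javascript
// Hypothetical sketch: resolve the dev harness model auth mode from env vars.
function resolveModelAuthMode(env) {
  if (env.OOMI_DEV_MODEL_AUTH_MODE) return env.OOMI_DEV_MODEL_AUTH_MODE; // explicit override
  return env.OPENROUTER_API_KEY ? "provider-env" : "oomi-managed";
}
```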

Persona Scaffolding

Use the scaffold flow when OpenClaw needs to build a managed persona app that will live inside Oomi:

oomi personas scaffold market-analyst --name "Market Analyst" --description "Private app for reviewing my broker positions and risk." --out ~/.openclaw/workspace/personas/market-analyst

Use:

  • oomi personas create <id> for repo-local manifest work
  • oomi personas create-managed --name "Cooking Persona" --description "Private cooking workspace" for the end-to-end Oomi-managed persona flow
  • oomi personas scaffold <slug> for a WebSpatial-based Oomi app shell with runtime metadata and health documents
  • oomi persona-jobs execute --message-file <job.json> when OpenClaw receives a structured persona orchestration job from Oomi

Additional persona runtime commands:

oomi personas launch-managed market-analyst --name "Market Analyst" --description "Private app for reviewing my broker positions and risk."
oomi personas status market-analyst
oomi personas stop market-analyst
oomi personas delete market-analyst
oomi personas runtime-register market-analyst --local-port 4789
oomi personas heartbeat market-analyst --local-port 4789
oomi persona-jobs start pj_123
oomi persona-jobs succeed pj_123 --workspace-path ~/.openclaw/workspace/personas/market-analyst --local-port 4789
oomi persona-jobs fail pj_123 --code JOB_FAILED --message "Scaffold generation failed."

Recommended agent flow:

oomi personas create-managed --name "Cooking Persona" --description "Private cooking workspace for recipes, meal planning, and kitchen notes."

That command creates the managed persona record in Oomi using the linked device identity. The backend then enqueues the persona_job, and the running bridge consumes that job automatically. The poll path is filtered to metadata.type = persona_job, so it does not consume normal queued chat traffic.
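The poll filter can be sketched as a simple partition over queued messages; the message shape is assumed for illustration.

```javascript
// Hypothetical sketch: consume only messages tagged metadata.type === "persona_job",
// leaving normal queued chat traffic untouched.
function splitQueued(messages) {
  const personaJobs = [];
  const rest = [];
  for (const msg of messages) {
    if (msg.metadata && msg.metadata.type === "persona_job") personaJobs.push(msg);
    else rest.push(msg);
  }
  return { personaJobs, rest };
}
```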

If you want to explicitly host or reuse the persona app on the OpenClaw machine outside the queued-job path, use:

oomi personas launch-managed cooking-persona --entry-url https://your-relay.example/oomi/cooking-persona

This command:

  • reuses OPENCLAW_WORKSPACE/personas/<slug> as the stable workspace
  • scaffolds only when the workspace is missing
  • installs dependencies only when needed or forced
  • allocates or reuses a free local port
  • starts or reuses the local runtime
  • registers the runtime URL back to Oomi unless --no-register is set

For existing managed personas that are already open in Oomi, the safe edit flow is:

oomi personas status <slug> --json

The agent should use editableWorkspacePath from that output as the authoritative directory for reads, edits, and verification. compatibilityWorkspacePath is only a fallback for older installs.
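That selection rule can be sketched directly over the parsed status output; the two field names come from this README, while the helper itself is illustrative.

```javascript
// Hypothetical sketch: pick the authoritative workspace directory from
// `oomi personas status <slug> --json` output, preferring editableWorkspacePath.
function pickEditableWorkspace(statusJson) {
  if (statusJson.editableWorkspacePath) return statusJson.editableWorkspacePath;
  if (statusJson.compatibilityWorkspacePath) return statusJson.compatibilityWorkspacePath;
  throw new Error("status output has no workspace path");
}
```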

Bridge Health States

The bridge status file is written locally and should roughly be interpreted as:

  • starting: process booting or waiting for managed subscription
  • connected: broker connected and managed subscription confirmed
  • reconnecting: broker or gateway transport dropped and reconnect is scheduled
  • degraded: bridge is still alive but hit a runtime fault that needs attention
  • error: startup or auth-level failure that prevents useful operation
  • stopped: bridge is not running or was intentionally stopped

For voice support, a voice_session_* failure should be treated as narrower than a full provider outage.
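That narrower treatment can be sketched as a fault classifier keyed on the voice_session_ prefix; any fault codes beyond that prefix are assumptions for the example.

```javascript
// Hypothetical sketch: voice_session_* faults degrade voice only, instead of
// marking the whole provider unhealthy.
function classifyFault(code) {
  if (code.startsWith("voice_session_")) {
    return { scope: "voice", providerHealthy: true };
  }
  return { scope: "provider", providerHealthy: false };
}
```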

Troubleshooting

invalid handshake: first request must be connect

Meaning:

  • a gateway request was forwarded before the session had accepted connect

What to check:

  • update to the latest oomi-ai
  • restart the bridge worker
  • confirm only one active bridge worker exists for the device

duplicate plugin id detected

Meaning:

  • OpenClaw can see more than one oomi-ai plugin source

What to check:

  • ensure there is only one active install under OpenClaw plugin discovery paths
  • remove stale local extension copies before reinstalling

Bridge keeps flipping between reconnecting, degraded, or stopped

What to check:

  • oomi openclaw bridge ps
  • oomi openclaw bridge service status
  • tail -f ~/.openclaw/logs/oomi-bridge-live.log
  • tail -f ~/.openclaw/logs/gateway.log

If the process is alive but runtime faults are being caught, expect degraded rather than an immediate hard stop.

Voice STT works but the agent does not answer

This usually means one of these:

  • the managed gateway/device side is not actually ready
  • the bridge or agent run failed after delivery
  • the OpenClaw run stopped with an upstream provider network_error

In that situation, inspect:

  • ~/.openclaw/logs/gateway.log
  • ~/.openclaw/logs/gateway.err.log
  • the relevant session JSONL in ~/.openclaw/agents/main/sessions/

Voice text works but cloned TTS fails with MISSING_SPOKEN_METADATA

Meaning:

  • the assistant text arrived
  • the backend voice relay never received valid hidden metadata.spoken

What to check:

  • run the local replay gate before publishing:
    • oomi openclaw debug assistant-final --text "..."
    • oomi openclaw debug tts-pipeline --text "..."
  • if the package local replay succeeds but the live machine fails, verify the OpenClaw machine is actually running the updated bridge binary
  • if the local replay fails, fix the assistant-final contract first instead of debugging the browser or backend deployment

Developer Notes

If you are inspecting this package on npm, the main architectural points are:

  • the extension path is the stable managed text contract
  • the local bridge exists because Oomi also needs device-backed and voice-capable flows
  • the bridge has been hardened for:
    • strict connect-first forwarding
    • method-specific request shaping
    • idempotencyKey handling
    • bridge status that does not report connected before managed subscription is ready
    • runtime fault isolation so local session failures are less likely to crash the whole provider
    • one shared hidden managed-voice speech metadata helper used by both the extension and the local bridge

If you are developing the plugin, test the packaged surface with:

cd packages/oomi-ai
node --test test/*.test.mjs
npm pack --dry-run

For managed voice changes, do not stop at the package tests. Run the local replay gate from the repo root as well, especially before publishing:

oomi openclaw debug tts-pipeline --text "Local managed voice validation text." --json
oomi openclaw debug tts-pipeline --text "Local managed voice validation text." --live-provider --env-file .env.local --provider-timeout-ms 20000 --json

Release Process

Before publishing:

cd packages/oomi-ai
node --test test/*.test.mjs
npm pack --dry-run

For voice-related changes, also run the repo-backed local replay gate before publish:

oomi openclaw debug tts-pipeline --text "Local managed voice validation text." --json
oomi openclaw debug tts-pipeline --text "Local managed voice validation text." --live-provider --env-file .env.local --provider-timeout-ms 20000 --json

Then publish the bumped version:

pnpm check
pnpm publish --dry-run --no-git-checks --access public
pnpm publish --access public