@emotion-machine/em-voice
v0.3.5
Emotion Machine workspace + hot_context integration for OpenClaw
# Emotion Machine Voice Plugin for OpenClaw
This plugin connects OpenClaw to Emotion Machine's voice workspace API so long-running tasks can update `hot_context` and be spoken by the Fast Brain.
## What It Does
- Provides an `em_voice_task` tool for logging task lifecycle events to EM `/voice/workspace`
- Exposes gateway RPC methods (`em-voice.start`, `em-voice.done`, etc.)
- Optionally syncs contacts from `USER.md` into EM relationships
## Install
```bash
# From npm
openclaw plugins install @emotion-machine/em-voice

# From local folder (development)
openclaw plugins install ./openclaw-plugin
```

Restart the OpenClaw gateway after install or config changes.
## Publish
```bash
cd openclaw-plugin
npm run clean
npm run build
npm publish
```

Notes:

- The package includes compiled `dist/` output via `prepare`/`prepublishOnly`.
- The scoped package is configured for public publish (`publishConfig.access = "public"`).
## Configure
Add to your OpenClaw config:
```json5
{
  plugins: {
    entries: {
      "em-voice": {
        enabled: true,
        config: {
          em: {
            apiKey: "em_live_...",
            companionId: "comp_abc123...",
            workspaceApiToken: "voice_ws_...",
            // baseUrl: "https://api.emotionmachine.ai"
          },
          workspace: {
            path: "~/.openclaw/workspace"
          }
        }
      }
    }
  }
}
```

Notes:
- `workspaceApiToken` must match `VOICE_WORKSPACE_API_TOKEN` on the EM server.
- `relationshipId` is required in production when calling `em_voice_task`.
- If you want a single global hot_context for non-phone tasks, set `em.defaultRelationshipId` and omit `relationshipId` in those tool calls.
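For example, a global fallback might look like this inside the `em` block (the `rel_...` value is a placeholder, as elsewhere in this README):

```json5
em: {
  apiKey: "em_live_...",
  workspaceApiToken: "voice_ws_...",
  // Used whenever a tool call omits relationshipId:
  defaultRelationshipId: "rel_..."
}
```

With this set, non-phone tool calls can omit `relationshipId` and still land in the global hot_context.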
## Usage
### Tool call from the agent
```json
{
  "action": "done",
  "taskId": "abc123",
  "relationshipId": "rel_...",
  "result": "Email draft ready."
}
```

### Gateway RPC (manual / debugging)
```bash
openclaw rpc em-voice.start --params '{"relationshipId":"rel_...","taskId":"abc123","query":"Draft email"}'
openclaw rpc em-voice.done --params '{"relationshipId":"rel_...","taskId":"abc123","result":"Draft ready"}'
openclaw rpc em-voice.status --params '{"relationshipId":"rel_...","taskId":"abc123"}'
```

## One-Command Setup
```bash
openclaw em-voice init
```

This will:

- Open a browser to authorize Emotion Machine (OAuth + PKCE)
- Derive a fast-brain prompt (and pick a voice) from your local `SOUL.md` / `IDENTITY.md`
- Create a default `global` relationship
- Print a single `openclaw config set ...` command to apply
- Print the Twilio inbound webhook URL for this companion
The init is idempotent: it will reuse an existing companion if found in config or workspace.
Use `--reinstall` to force a new companion.
You will be prompted to apply the config and restart the gateway automatically.
The companion/relationship IDs are also stored in:
`~/.openclaw/workspace/.em/em-voice.json`
If the browser cannot reach your CLI, you can copy the code from the page and paste it back into the terminal.
For local dev:
```bash
openclaw em-voice init --base-url http://localhost:8100 --auth-url http://localhost:3000
```

Flags:

- `--auth-url` to override the dashboard origin.
- `--manual` to skip the localhost callback and paste the code manually.
- `--base-url` is only needed for dev/staging; the default is production (https://api.emotionmachine.ai).
- `--companion-id` to reuse an existing companion.
- `--reinstall` to force re-creating the companion.
- `--voice-name`, `--preset`, `--stt-provider`, `--tts-provider` to control the voice pipeline.
- `--caller-phone` to create a `phone:<digits>` relationship for your own phone.
- `--twilio-account-sid`, `--twilio-auth-token`, `--twilio-phone-number` to enable BYO Twilio.
## What Gets Synced
| OpenClaw File | → | EM Entity |
|---------------|---|-----------|
| `SOUL.md` / `IDENTITY.md` | → | `companion.config.system_prompt` |
| `USER.md` (contacts) | → | `relationship.profile` |
| `memory/contacts.md` | → | `relationship.profile` |
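As a rough illustration of this mapping, the sketch below (a hypothetical helper, not the plugin's actual sync code) shows how contact fields parsed from `USER.md` might be folded into the relationship profile shape:

```python
import json
from datetime import datetime, timezone

def build_profile(contact: dict, source: str) -> dict:
    """Map contact fields (already parsed from USER.md) into the EM profile shape."""
    return {
        "user": {
            "name": contact.get("name"),
            "email": contact.get("email"),
            "phone": contact.get("phone"),
            "timezone": contact.get("timezone"),
        },
        "relationship": {
            # Which OpenClaw file this profile came from.
            "source": source,
            # When the sync happened, UTC ISO-8601.
            "synced_at": datetime.now(timezone.utc).isoformat(),
        },
    }

profile = build_profile({"name": "Sample User", "timezone": "America/Los_Angeles"}, "USER.md")
print(json.dumps(profile, indent=2))
```

Fields missing from the source file simply come through as `null`, so the EM side can treat the profile as sparse.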
## Profile Structure in EM
```json
{
  "user": {
    "name": "Sample User",
    "email": "[email protected]",
    "phone": "+1 555 010 0000",
    "timezone": "America/Los_Angeles"
  },
  "relationship": {
    "role": "employee_1",
    "trust_level": "full",
    "source": "USER.md",
    "synced_at": "2026-02-06T10:00:00Z"
  }
}
```

## How It Works
1. **Webhook received** → Task logged to `hot_context.md`
2. **Fast brain** → Reads context, generates immediate acknowledgment (~200ms)
3. **Response sent** → User hears "Got it, checking..." via TTS
4. **Slow brain** → OpenClaw agent processes the full request (async)
5. **Callback** → Full result POSTed back to your voice app
6. **TTS** → Your app speaks the complete answer
## Race Condition Safety
The `hot_context.md` file uses an append-only event log pattern:
- No locking required (POSIX atomic appends)
- Fast brain can read while slow brain writes
- System naturally converges on next read
Inspired by Cursor's self-driving codebase research.
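A minimal sketch of the pattern, assuming a simple line-per-event format (the path and event strings here are illustrative, not the plugin's real ones):

```python
import os
import tempfile

# Hypothetical location; the real file lives in the OpenClaw workspace.
HOT_CONTEXT = os.path.join(tempfile.mkdtemp(), "hot_context.md")

def append_event(line: str) -> None:
    # O_APPEND positions every write at the current end of file atomically,
    # so concurrent writers never interleave within a line and no lock is needed.
    fd = os.open(HOT_CONTEXT, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, (line.rstrip("\n") + "\n").encode())
    finally:
        os.close(fd)

def read_events() -> list[str]:
    # Readers take a point-in-time snapshot; the next read picks up any newer
    # appends, which is how the system "converges on next read".
    with open(HOT_CONTEXT) as f:
        return f.read().splitlines()

append_event("task abc123 started: Draft email")
append_event("task abc123 done: Draft ready")
print(read_events()[-1])
```

The slow brain only ever appends, and the fast brain only ever reads, so stale reads are harmless: the acknowledgment reflects whatever events had landed at read time.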
## License
MIT © Emotion Machine, Inc.
