# @avatar-state-machine-interface/runtime
Runtime state-machine evaluator for ASMI avatars (Avatar State Machine Interface). ASMI is a design-time tool — you build the avatar at [broen.tech/apps/asmi](https://broen.tech/apps/asmi), then embed the resulting `AvatarDefinition` into your own site with this package.

At runtime your site owns the chat UI, the LLM call, and the session state. This package evaluates the state machine: it decides what the avatar "does" when the user sends a message (intent classification, response generation, expression swaps, outbound events, proactive triggers, awareness context). It has zero runtime dependencies on ASMI's backend — if you un-deploy the avatar in ASMI, an implementation that already shipped keeps working.
```bash
npm install @avatar-state-machine-interface/runtime
```

## Building for React?
For React sites, use the higher-level `@avatar-state-machine-interface/react` package instead — it wraps this runtime with a drop-in hook (`useAsmiSession`), a transparent animated face primitive (`<AsmiFace>`), and a turnkey widget (`<AsmiAvatar>`). It encapsulates the correctness-critical details (live mid-turn expression swaps, animation playback with all four trigger types, idle auto-return, face-outside-shell transparency) that coding AIs consistently get wrong when wiring this runtime by hand.
This lower-level runtime is the right choice for:

- Non-React stacks (Vue, Svelte, vanilla JS, backend scripts; see the face-swap sketch after this list)
- Server-side evaluation (your backend processes each turn, your client just renders the face)
- Custom React setups where `useAsmiSession` is too opinionated
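If you're wiring a non-React stack, the only render duty the runtime hands you each turn is an expression name. A minimal vanilla-DOM face renderer, assuming the expression image URLs were fetched via the `get_avatar_assets` MCP tool (the element id and asset paths below are hypothetical):

```ts
// Minimal sketch of a vanilla-DOM face renderer. Asset paths are
// hypothetical; in practice they come from get_avatar_assets.
const faceImg = document.querySelector<HTMLImageElement>("#avatar-face")!;

const expressionAssets: Record<string, string> = {
  neutral: "/avatar/neutral.png",
  happy: "/avatar/happy.png",
};

// The integration example below calls this after each turn with
// result.newState.expression.
function setFaceExpression(expression: string): void {
  faceImg.src = expressionAssets[expression] ?? expressionAssets.neutral;
}
```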
## Minimum viable integration
```ts
import { processMessage, type LlmProvider } from "@avatar-state-machine-interface/runtime";
// 1. Your LLM provider. Bring your own API key.
const llmProvider: LlmProvider = {
async generate({ systemPrompt, userPrompt, history, temperature, maxTokens }) {
// Call OpenAI / Anthropic / Gemini / whatever your site already uses.
// Return the assistant's text response as a string.
const response = await yourLlmClient.complete({
system: systemPrompt,
user: userPrompt,
history,
temperature,
maxTokens,
});
return response.text;
},
};
// 2. Session state — you persist this per user (localStorage, Redis, DB,
// whatever). A fresh session starts in the idle / neutral states.
let sessionState = {
currentState: { conversation: "idle", expression: "neutral" },
history: [] as Array<{ role: string; content: string }>,
context: {},
metadata: {
turnCount: 0,
topicHistory: [],
intentHistory: [],
clarificationCount: 0,
handoffOffered: false,
sessionStartedAt: Date.now(),
},
};
// 3. Every user message goes through processMessage.
async function handleUserMessage(userMessage: string) {
const result = await processMessage(definition, sessionState.currentState, userMessage, {
history: sessionState.history,
sessionContext: { visitorTimezone: Intl.DateTimeFormat().resolvedOptions().timeZone },
metadata: sessionState.metadata,
}, llmProvider);
// Render the response in your chat UI
appendAssistantMessage(result.response);
// Swap the avatar face to the new expression
setFaceExpression(result.newState.expression);
// Fire outbound events your app cares about
// (e.g. 'asmi:handoff', 'asmi:satisfaction_pulse')
for (const event of result.outboundEvents ?? []) {
dispatchAppEvent(event);
}
// Persist the updated state
sessionState = {
currentState: result.newState,
history: [
...sessionState.history,
{ role: "user", content: userMessage },
{ role: "model", content: result.response },
],
context: sessionState.context,
metadata: result.updatedMetadata ?? sessionState.metadata,
};
}
```

## Where to get `definition`
Your coding AI fetches it via the ASMI MCP server:
- `get_avatar` → full `AvatarDefinition` JSON
- `get_avatar_markdown` → human-readable spec
- `get_avatar_assets` → expression image URLs
- `get_embedding_guide` → per-avatar, step-by-step recipe
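However you fetch it, the definition ends up as plain JSON you bundle with your app. A minimal loading sketch, assuming the `get_avatar` output was saved as `avatar-definition.json` (hypothetical filename) and that the package exports an `AvatarDefinition` type:

```ts
// Hypothetical filename: any way of bundling the exported JSON works.
import definitionJson from "./avatar-definition.json";
import type { AvatarDefinition } from "@avatar-state-machine-interface/runtime";

const definition = definitionJson as AvatarDefinition;
```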
See the public SKILLS doc at broen.tech/skills/asmi-implementation.md for the full per-client connector config (Lovable, v0, Cursor, Claude Code, Claude Desktop, Windsurf, Zed, ChatGPT Developer Mode, Replit).
## What `processMessage` does for you
- Intent + sentiment classification via the `llm_classify` action.
- State-machine transition based on guards (`isAnswerableIntent`, `isFrustratedAnswerable`, `isLowConfidenceOrUnclear`, etc.).
- Entry/exit action execution — `emit_expression`, `llm_generate`, `emit_app_event`.
- Response generation using the avatar's compiled system prompt (brand voice + awareness context + identity anchor).
- Awareness resolution — time of day, business hours, holidays, visitor locale.
- Metadata tracking — turn count, topic history, intent history.
## API
### `processMessage(definition, currentState, userMessage, context, llmProvider)`

Processes one user message through the state machine. Returns:

```ts
{
response: string; // Assistant's text reply to render
newState: SessionState; // Updated state (persist this)
expressionChanges: string[]; // All expressions emitted during this turn
intent: string; // Classified intent
sentiment: string; // Classified sentiment
classification?: { topic, confidence, … };
outboundEvents?: string[]; // App-level events to dispatch
updatedMetadata?: SessionMetadata;
trace?: TraceEntry[]; // Debug trace (when ctx.debug = true)
traceStartedAt?: number;
}
```
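The `trace` field is the debug hook. A minimal tracing sketch, assuming the context argument accepts `debug: true` alongside `history` and `metadata` (per the comment on `trace` above):

```ts
// Sketch: enable tracing for one turn to inspect classification, guard
// evaluation, and transitions. Passing `debug: true` on the context
// argument is inferred from the `trace` comment in the return shape above.
const result = await processMessage(
  definition,
  sessionState.currentState,
  userMessage,
  { history: sessionState.history, metadata: sessionState.metadata, debug: true },
  llmProvider,
);
for (const entry of result.trace ?? []) {
  console.log(entry);
}
```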
### LlmProvider

The interface you implement to connect any LLM. Just one method:

```ts
interface LlmProvider {
generate(params: {
systemPrompt: string;
userPrompt: string;
history: Array<{ role: string; content: string }>;
temperature: number;
maxTokens: number;
}): Promise<string>;
}
```
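As one concrete, non-normative example, here is a provider backed by the OpenAI Node SDK. The model name is illustrative; note that the integration example above stores assistant turns with role `"model"`, which OpenAI calls `"assistant"`:

```ts
import OpenAI from "openai";
import type { LlmProvider } from "@avatar-state-machine-interface/runtime";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const llmProvider: LlmProvider = {
  async generate({ systemPrompt, userPrompt, history, temperature, maxTokens }) {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // example model; use whatever your site already uses
      temperature,
      max_tokens: maxTokens,
      messages: [
        { role: "system", content: systemPrompt },
        // Map the runtime's "model" role to OpenAI's "assistant" role.
        ...history.map((m) => ({
          role: m.role === "model" ? ("assistant" as const) : ("user" as const),
          content: m.content,
        })),
        { role: "user", content: userPrompt },
      ],
    });
    return completion.choices[0]?.message?.content ?? "";
  },
};
```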
### Additional exports

- `resolveAwareness(awareness, now, visitorTimezone)` — compute time-of-day, business-hours, and holiday context.
- `compileAwarenessPrompt(awareness, ctx)` — turn awareness context into prompt prefix text.
- `compilePrompt(template, context)` — template expander with nested `{{ context.foo }}` support.
- `summarizeHistory(history)` — last-N-turns summary for long contexts.
- `evaluateGuard(guard, context)` — evaluate a structured guard.
- `buildGuardContext(classification, metadata, state)` — build the guard evaluator's context object.
- `getUpcomingHolidays(calendar, now, windowDays)` — holiday helper.
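Of these, the awareness helpers are the ones you're most likely to call outside a chat turn (e.g. to drive a proactive greeting). A minimal sketch, assuming `definition.awareness` holds the avatar's awareness config and `now` accepts a `Date`; both assumptions are illustrative, not confirmed API:

```ts
import { resolveAwareness, compileAwarenessPrompt } from "@avatar-state-machine-interface/runtime";

// Assumptions: definition.awareness is the avatar's awareness config,
// and the `now` parameter accepts a Date.
const awarenessCtx = resolveAwareness(
  definition.awareness,
  new Date(),
  Intl.DateTimeFormat().resolvedOptions().timeZone,
);
const promptPrefix = compileAwarenessPrompt(definition.awareness, awarenessCtx);
```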
## License
MIT. See LICENSE.
