@avatar-state-machine-interface/runtime

v0.1.8


Runtime state-machine evaluator for ASMI avatars (Avatar State Machine Interface). ASMI is a design-time tool — you build the avatar at broen.tech/apps/asmi, then embed the resulting AvatarDefinition into your own site with this package.

At runtime your site owns the chat UI, the LLM call, and the session state. This package evaluates the state machine: it decides what the avatar "does" when the user sends a message (intent classification, response generation, expression swaps, outbound events, proactive triggers, awareness context). It has zero runtime dependencies on ASMI's backend — if you un-deploy the avatar in ASMI, an implementation that already shipped keeps working.

npm install @avatar-state-machine-interface/runtime

Building for React?

For React sites, use the higher-level @avatar-state-machine-interface/react package instead — it wraps this runtime with a drop-in hook (useAsmiSession), a transparent animated face primitive (<AsmiFace>), and a turnkey widget (<AsmiAvatar>). It encapsulates the correctness-critical details (live mid-turn expression swaps, animation playback with all four trigger types, idle auto-return, face-outside-shell transparency) that coding AIs consistently get wrong when wiring this runtime by hand.

This lower-level runtime is the right choice for:

  • Non-React stacks (Vue, Svelte, vanilla JS, backend scripts)
  • Server-side evaluation (your backend processes each turn, your client just renders the face)
  • Custom React setups where useAsmiSession is too opinionated

Minimum viable integration

import { processMessage, type LlmProvider } from "@avatar-state-machine-interface/runtime";

// 1. Your LLM provider. Bring your own API key.
const llmProvider: LlmProvider = {
  async generate({ systemPrompt, userPrompt, history, temperature, maxTokens }) {
    // Call OpenAI / Anthropic / Gemini / whatever your site already uses.
    // Return the assistant's text response as a string.
    const response = await yourLlmClient.complete({
      system: systemPrompt,
      user: userPrompt,
      history,
      temperature,
      maxTokens,
    });
    return response.text;
  },
};

// 2. Session state — you persist this per user (localStorage, Redis, DB,
//    whatever). A fresh session starts in the idle / neutral states.
let sessionState = {
  currentState: { conversation: "idle", expression: "neutral" },
  history: [] as Array<{ role: string; content: string }>,
  context: {},
  metadata: {
    turnCount: 0,
    topicHistory: [],
    intentHistory: [],
    clarificationCount: 0,
    handoffOffered: false,
    sessionStartedAt: Date.now(),
  },
};

// 3. Every user message goes through processMessage.
async function handleUserMessage(userMessage: string) {
  const result = await processMessage(definition, sessionState.currentState, userMessage, {
    history: sessionState.history,
    sessionContext: { visitorTimezone: Intl.DateTimeFormat().resolvedOptions().timeZone },
    metadata: sessionState.metadata,
  }, llmProvider);

  // Render the response in your chat UI
  appendAssistantMessage(result.response);

  // Swap the avatar face to the new expression
  setFaceExpression(result.newState.expression);

  // Fire outbound events your app cares about
  // (e.g. 'asmi:handoff', 'asmi:satisfaction_pulse')
  for (const event of result.outboundEvents ?? []) {
    dispatchAppEvent(event);
  }

  // Persist the updated state
  sessionState = {
    currentState: result.newState,
    history: [
      ...sessionState.history,
      { role: "user", content: userMessage },
      { role: "model", content: result.response },
    ],
    context: sessionState.context,
    metadata: result.updatedMetadata ?? sessionState.metadata,
  };
}
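Step 2 above leaves persistence up to you. Below is a minimal sketch of the browser localStorage option; the storage key, the KVStore interface, and the helper names are this sketch's assumptions, not part of the runtime's API:

```typescript
// Illustrative persistence helpers -- the storage key, KVStore interface,
// and helper names are assumptions of this sketch, not the runtime's API.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface StoredSession {
  currentState: { conversation: string; expression: string };
  history: Array<{ role: string; content: string }>;
  context: Record<string, unknown>;
  metadata: {
    turnCount: number;
    topicHistory: string[];
    intentHistory: string[];
    clarificationCount: number;
    handoffOffered: boolean;
    sessionStartedAt: number;
  };
}

const STORAGE_KEY = "asmi:session";

// A fresh session starts in the idle / neutral states, mirroring step 2.
function freshSession(): StoredSession {
  return {
    currentState: { conversation: "idle", expression: "neutral" },
    history: [],
    context: {},
    metadata: {
      turnCount: 0,
      topicHistory: [],
      intentHistory: [],
      clarificationCount: 0,
      handoffOffered: false,
      sessionStartedAt: Date.now(),
    },
  };
}

function saveSession(store: KVStore, session: StoredSession): void {
  store.setItem(STORAGE_KEY, JSON.stringify(session));
}

function loadSession(store: KVStore): StoredSession {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as StoredSession) : freshSession();
}
```

In the browser you would pass window.localStorage as the store; on a server, any object exposing the same two methods (a Redis wrapper, an in-memory map) works equally well.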

Where to get definition

Your coding AI fetches it via the ASMI MCP server:

  • get_avatar → full AvatarDefinition JSON
  • get_avatar_markdown → human-readable spec
  • get_avatar_assets → expression image URLs
  • get_embedding_guide → per-avatar, step-by-step recipe

See the public SKILLS doc at broen.tech/skills/asmi-implementation.md for the full per-client connector config (Lovable, v0, Cursor, Claude Code, Claude Desktop, Windsurf, Zed, ChatGPT Developer Mode, Replit).

What processMessage does for you

  1. Intent + sentiment classification via llm_classify action.
  2. State-machine transition based on guards (isAnswerableIntent, isFrustratedAnswerable, isLowConfidenceOrUnclear, etc.).
  3. Entry/exit action execution: emit_expression, llm_generate, emit_app_event.
  4. Response generation using the avatar's compiled system prompt (brand voice + awareness context + identity anchor).
  5. Awareness resolution — time of day, business hours, holidays, visitor locale.
  6. Metadata tracking — turn count, topic history, intent history.
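The outbound events from step 3 (surfaced as result.outboundEvents) are plain strings, so dispatch is left to the host app. One minimal pattern is a handler registry; the registry and function names here are an illustrative sketch, not part of the runtime:

```typescript
// Minimal dispatcher for outbound events. The event names below come from the
// integration example ('asmi:handoff', 'asmi:satisfaction_pulse'); the handler
// registry itself is an illustrative pattern, not part of the runtime.
type EventHandler = () => void;

const handlers = new Map<string, EventHandler[]>();

function onAsmiEvent(name: string, handler: EventHandler): void {
  const list = handlers.get(name) ?? [];
  list.push(handler);
  handlers.set(name, list);
}

function dispatchAppEvent(name: string): number {
  const list = handlers.get(name) ?? [];
  for (const handler of list) handler();
  return list.length; // how many handlers ran, useful for logging misses
}
```

Register handlers once at startup (for example, onAsmiEvent("asmi:handoff", openSupportWidget)), then feed each entry of result.outboundEvents through dispatchAppEvent in your turn handler.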

API

processMessage(definition, currentState, userMessage, context, llmProvider)

Processes one user message through the state machine. Returns:

{
  response: string;              // Assistant's text reply to render
  newState: SessionState;        // Updated state (persist this)
  expressionChanges: string[];   // All expressions emitted during this turn
  intent: string;                // Classified intent
  sentiment: string;             // Classified sentiment
  classification?: { topic, confidence, … };
  outboundEvents?: string[];     // App-level events to dispatch
  updatedMetadata?: SessionMetadata;
  trace?: TraceEntry[];          // Debug trace (when ctx.debug = true)
  traceStartedAt?: number;
}
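Note that expressionChanges lists every expression emitted during the turn, while newState.expression is where the avatar ends up. If you want the face to step through the intermediate swaps instead of jumping straight to the final one, a sketch like the following works; the 600 ms dwell time and the setFace callback are illustrative choices, not runtime requirements:

```typescript
// Sketch: replay mid-turn expression swaps, then settle on the final state.
// The 600 ms dwell time and the setFace callback are illustrative choices.
async function playExpressions(
  changes: string[],
  finalExpression: string,
  setFace: (expression: string) => void,
  wait: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<void> {
  for (const expression of changes) {
    setFace(expression); // show each intermediate face...
    await wait(600);     // ...long enough to be visible
  }
  setFace(finalExpression); // always end on the state machine's final expression
}
```

Call it as playExpressions(result.expressionChanges, result.newState.expression, setFaceExpression) after rendering the reply; the injectable wait parameter also makes the timing easy to stub in tests.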

LlmProvider

The interface you implement to connect any LLM. Just one method:

interface LlmProvider {
  generate(params: {
    systemPrompt: string;
    userPrompt: string;
    history: Array<{ role: string; content: string }>;
    temperature: number;
    maxTokens: number;
  }): Promise<string>;
}
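As a concrete sketch, here is what an implementation against an OpenAI-compatible chat-completions endpoint could look like. The endpoint URL, model name, and key handling are placeholders to swap for whatever your site already uses; the message-assembly helper is factored out because that part is the same for most chat APIs:

```typescript
// Sketch of an LlmProvider backed by an OpenAI-compatible chat-completions
// endpoint. URL, model name, and key handling are placeholders -- substitute
// whatever your site already uses.
interface ChatMessage {
  role: string;
  content: string;
}

// Pure helper: flatten system prompt + history + current user prompt into the
// messages array the chat-completions format expects.
function buildMessages(
  systemPrompt: string,
  history: ChatMessage[],
  userPrompt: string,
): ChatMessage[] {
  return [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: userPrompt },
  ];
}

const API_KEY = "YOUR_API_KEY"; // placeholder -- load from your secret store

const llmProvider = {
  async generate(params: {
    systemPrompt: string;
    userPrompt: string;
    history: ChatMessage[];
    temperature: number;
    maxTokens: number;
  }): Promise<string> {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // placeholder model name
        messages: buildMessages(params.systemPrompt, params.history, params.userPrompt),
        temperature: params.temperature,
        max_tokens: params.maxTokens,
      }),
    });
    const data = (await res.json()) as {
      choices: Array<{ message: { content: string } }>;
    };
    return data.choices[0].message.content;
  },
};
```

Because generate receives temperature and maxTokens from the runtime, the avatar definition stays in control of sampling behavior; the provider only translates the call into your API's wire format.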

Additional exports

  • resolveAwareness(awareness, now, visitorTimezone) — compute time-of-day, business-hours, holiday context.
  • compileAwarenessPrompt(awareness, ctx) — turn awareness context into prompt prefix text.
  • compilePrompt(template, context) — template expander with nested {{ context.foo }} support.
  • summarizeHistory(history) — last-N-turns summary for long contexts.
  • evaluateGuard(guard, context) — evaluate a structured guard.
  • buildGuardContext(classification, metadata, state) — build the guard evaluator's context object.
  • getUpcomingHolidays(calendar, now, windowDays) — holiday helper.
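To illustrate what the nested {{ context.foo }} expansion in compilePrompt means, here is a toy re-implementation of the idea. This is not the library's code, only a sketch of the behavior the export's name and signature suggest; in real code, import compilePrompt from the package:

```typescript
// Toy illustration of {{ ... }} template expansion -- NOT the library's
// compilePrompt, just a sketch of the behavior its signature suggests.
function expandTemplate(template: string, context: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match: string, path: string) => {
    // Walk dotted paths like "context.foo" through the context object.
    let value: unknown = { context };
    for (const key of path.split(".")) {
      if (value === null || typeof value !== "object") return match;
      value = (value as Record<string, unknown>)[key];
    }
    // Leave unknown placeholders intact rather than emitting "undefined".
    return value === undefined ? match : String(value);
  });
}
```

For example, expandTemplate("Hi {{ context.name }}", { name: "Sam" }) yields "Hi Sam", and placeholders whose path resolves to nothing are left as-is.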

License

MIT. See LICENSE.
