@adia-ai/a2ui-compose

v0.5.6

@adia-ai/a2ui-compose

Framework-agnostic UI generation engine. Constitution doc: docs/specs/compose-constitution.md. Takes a natural-language intent + an A2UI component catalog and produces a tree of A2UI protocol messages ready for a renderer.

This package is the pipeline runtime only. UI components live in @adia-ai/web-components; the A2UI runtime (renderer, registry, streams, wiring) lives in @adia-ai/a2ui-runtime; the pattern corpus in @adia-ai/a2ui-corpus; and the MCP server in @adia-ai/a2ui-mcp.

Published to the public @adia-ai scope on 2026-04-24 alongside a2ui-corpus, a2ui-mcp, a2ui-retrieval, and a2ui-validator.

Install

npm install @adia-ai/a2ui-compose

Typically paired with @adia-ai/a2ui-corpus (the pattern corpus the engine reads from) and @adia-ai/llm (the LLM client; runtime peer-dep):

npm install @adia-ai/a2ui-compose @adia-ai/a2ui-corpus @adia-ai/llm

What it does

intent  ─▶  classify  ─▶  retrieve  ─▶  compose / adapt  ─▶  validate  ─▶  A2UI
           (concepts)    (patterns)     (engine-specific)     (score ≥70)    JSON

One entry point, two generation strategies, pluggable LLM back-end.

import { generateUI } from '@adia-ai/a2ui-compose/core';

const result = await generateUI({
  intent: 'login form with email, password, and remember-me',
  engine: 'zettel',        // 'monolithic' | 'zettel'
  mode:   'pro',           // monolithic only: instant | pro | thinking
  model:  'claude-sonnet-4-7',
});

// result.components   — A2UI message array
// result.validation   — { score, checks, warnings }
// result.debug        — pattern matches, LLM prompt, token usage, …

Layout

a2ui-compose/
├── core/                   orchestrator — state, dispatch, pipeline (formerly engine/)
│   ├── generator.js        generateUI() — the one public entry point
│   ├── state.js            ArtifactStore + PipelineEngine singletons
│   └── pipeline/           6-stage pipeline engine
│
├── strategies/             pluggable engines via registerEngine() (formerly engines/)
│   ├── registry.js         engine selector + reserved-name guard
│   ├── monolithic/         pattern-match + LLM-adapt (3 modes)
│   │   ├── generate-instant.js   no LLM — pattern-match only
│   │   ├── generate-pro.js       pattern + LLM adaptation, non-streaming
│   │   └── generate-thinking.js  streaming LLM + repair loop
│   └── zettel/             fragment-graph composition
│       ├── generator-adapter.js  entry point
│       ├── composer.js           assembles fragments → compositions
│       └── session-store.js      multi-turn state (Phase A)
│
├── retrieval/
│   ├── catalog.js          loads component schemas from sibling .a2ui.json
│   ├── pattern-library.js  keyword-ranked pattern search (corpus + embeddings)
│   ├── fragments.js        atomic-shape lookup for zettel
│   ├── anti-patterns.js    catalog of canonical anti-patterns
│   └── feedback-store.js   accumulates user feedback → disk
│
├── llm/
│   ├── llm-bridge.js       unified adapter (Anthropic / OpenAI / Gemini)
│   ├── env.js              Vite + Node env-var routing
│   └── prompts/            system prompts per engine mode
│
├── validation/
│   └── validator.js        15-check A2UI validator, weighted 0–100 score
│
├── intelligence/           intent classification + concept extraction
│   ├── classifier.js
│   ├── concepts.js
│   └── steelman.js
│
└── evals/
    └── harness.mjs         held-out intent benchmark runner

Engines

Monolithic — pattern-match against full-canvas templates, optionally adapt via LLM. Three modes:

| Mode       | LLM? | Speed | Use for                                        |
|------------|------|-------|------------------------------------------------|
| instant    | no   | <50ms | High-confidence intents with exact pattern hit |
| pro        | yes  | ~2s   | Most requests — adapt a template to the intent |
| thinking   | yes  | ~5s   | Complex requests; streams + runs a repair loop |

Zettel — fragment-graph composition. Retrieves atomic fragments (form-field, card-header, action-row, …) by keyword + concept-tag overlap and assembles them into compositions. Verbatim retrieval above a threshold (score ≥ 40); LLM synthesis from fragments below. Preserves session state across multi-turn iterations.
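The verbatim-vs-synthesis cutoff can be pictured with a small sketch. This is a hypothetical helper, not the actual composer.js API, and the fragment shape (`id`, `score`) is assumed:

```javascript
// Hypothetical sketch of the score-threshold routing described above;
// the real composer.js logic may differ.
function routeFragments(matches, threshold = 40) {
  if (matches.length === 0) return { strategy: 'synthesize', fragments: [] };
  const best = matches.reduce((a, b) => (b.score > a.score ? b : a));
  return best.score >= threshold
    ? { strategy: 'verbatim', fragment: best } // reuse the top fragment as-is
    : { strategy: 'synthesize', fragments: matches }; // LLM synthesis from fragments
}
```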

import { registerEngine } from '@adia-ai/a2ui-compose/strategies/registry';

registerEngine('my-engine', async (ctx) => {
  // ctx: intent, catalog, patterns, concepts, session, llm, …
  return { components: [...], validation: {...}, debug: {...} };
});

Reserved names: monolithic, monolithic-*, zettel, mcp.
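The guard itself is simple enough to sketch in isolation. This is a hypothetical reimplementation for illustration, not the actual registry.js:

```javascript
// Hypothetical sketch of the reserved-name guard; registry.js may differ.
const RESERVED = ['monolithic', 'zettel', 'mcp'];

function isReservedName(name) {
  // 'monolithic-*' is a wildcard: any 'monolithic-' prefix is also reserved.
  return RESERVED.includes(name) || name.startsWith('monolithic-');
}

const engines = new Map();

function registerEngineSketch(name, generate) {
  if (isReservedName(name)) {
    throw new Error(`Engine name "${name}" is reserved`);
  }
  engines.set(name, generate);
}
```

Note that a prefixed variant like `zettel-v2` passes the guard, while the bare reserved names and anything under `monolithic-` do not.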

Validation

Every generated output runs through validation/validator.js — 15 weighted checks covering structural validity, card/grid conventions, intent alignment (F1), and anti-patterns. Result:

{
  score: 92,                    // 0-100
  passed: true,                 // score ≥ 70
  checks: [
    { id: 'structure',   ok: true,  weight: 10 },
    { id: 'intent-f1',   ok: true,  weight: 8, value: 0.84 },
    { id: 'card-grid',   ok: true,  weight: 6 },
    { id: 'anti-pattern', ok: false, weight: 4, hit: 'chart-legend' },
    …
  ]
}

A `_fallback` result surfaces score 0 by design; make sure your engine returns real output rather than leaning on the safety net.
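A downstream consumer might gate on that result like this. The helper is hypothetical and assumes the result shape shown above:

```javascript
// Hypothetical consumer-side gate over the validation result shape above.
function gateOutput(result, threshold = 70) {
  const { score, checks } = result.validation;
  const failedChecks = checks.filter((c) => !c.ok).map((c) => c.id);
  return {
    accepted: score >= threshold, // below 70, treat the tree as advisory
    score,
    failedChecks,
  };
}

// Example with a result shaped like the one above:
const sample = {
  validation: {
    score: 92,
    checks: [
      { id: 'structure', ok: true, weight: 10 },
      { id: 'anti-pattern', ok: false, weight: 4 },
    ],
  },
};
const verdict = gateOutput(sample);
// verdict.accepted === true, verdict.failedChecks === ['anti-pattern']
```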

LLM bridge

Multi-provider adapter with a common interface:

import { getAdapter } from '@adia-ai/llm';

const adapter = getAdapter('anthropic');     // or 'openai', 'gemini'
const stream = await adapter.streamChat({ model, messages, tools });

Env-var routing via llm/env.js — works under Node (process.env) and Vite (import.meta.env). Browser calls proxy through server.js at the repo root (holds API keys); Node calls go direct.
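The routing idea reduces to checking an ordered list of sources. A minimal sketch with a hypothetical `readEnv` helper, not the actual llm/env.js API:

```javascript
// Hypothetical sketch of ordered env-source lookup; llm/env.js may differ.
function readEnv(key, sources) {
  for (const source of sources) {
    if (source && key in source) return source[key];
  }
  return undefined;
}

// Under Node the sources would be [process.env];
// under Vite, [import.meta.env, process.env].
const provider = readEnv('LLM_PROVIDER', [process.env]) ?? 'anthropic';
```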

Evals

npm run evals              # held-out intent benchmark
npm run eval:diff -- --engine zettel    # diff against baseline

The held-out fixture lives in gen-ui-training/evals/held-out.jsonl. Regression thresholds the pipeline must hold:

  • Zettel coverage 100%, avgScore ≥ 88, MRR ≥ 0.94
  • Monolithic coverage 100%, avgScore ≥ 95
  • Fragment reuse ratio ≥ 29.9% (167 refs / 559 nodes)

Full gate sweep: see AGENTS.md at repo root, or run /verification-sweep.
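Taken together, the thresholds amount to a single boolean gate. The metrics shape here is hypothetical; the real harness may report differently:

```javascript
// Hypothetical regression gate using the thresholds listed above.
function passesRegressionGates(m) {
  return (
    m.zettel.coverage === 1 && m.zettel.avgScore >= 88 && m.zettel.mrr >= 0.94 &&
    m.monolithic.coverage === 1 && m.monolithic.avgScore >= 95 &&
    m.fragmentReuseRatio >= 0.299
  );
}
```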

Gotchas

  • Component catalog is read-only. .a2ui.json sidecars in web-components/components/*/ are build outputs; edit the sibling YAML instead.
  • Zettel loading is lazy. The corpus is only parsed on first zettel call — avoids Node fs/path imports reaching the browser bundle.
  • Validator score ≥ 70 is required for downstream consumers to trust the output. Below that, callers should treat the tree as advisory.
  • Engine name reservations are enforced at registration time — registerEngine('zettel-v2', …) passes; registerEngine('zettel', …) throws.

License

MIT