
basion-ai-sdk

v0.16.0

Published

TypeScript SDK for building AI agents in the Basion AI platform

Downloads

794

Readme

Basion Agent SDK (TypeScript)

TypeScript SDK for building AI agents on the Basion platform. Agents register themselves, receive messages through Kafka, and stream responses back to users in real time.

Installation

npm install basion-ai-sdk

Quick Start

import { BasionAgentApp, Message } from 'basion-ai-sdk'

const app = new BasionAgentApp({
    gatewayUrl: process.env.GATEWAY_URL!,
    apiKey: process.env.GATEWAY_API_KEY!,
})

const agent = await app.registerMe({
    name: 'genetic-disease-assistant',
    about: 'Answers questions about genetic diseases, symptoms, and treatments',
    document: 'Genetic disease specialist. Abilities: disease lookup, symptom analysis, genetic variant interpretation, finding similar conditions. Keywords: genetic disease, genetics, orphan disease, diagnosis.',
})

agent.onMessage(async (message: Message, sender: string) => {
    const history = await message.conversation!.getHistory({ limit: 10 })

    const s = agent.streamer(message)
    s.stream(`You asked: ${message.content}\n\n`)
    s.stream('Let me look into that for you...')
    await s.finish()
})

await app.run()

That's all you need. When you call app.run(), the SDK registers your agent with the gateway, subscribes to {agent-name}.inbox on Kafka, starts a heartbeat loop, and begins consuming messages. Your onMessage handler fires for each incoming message.

Configuration

const app = new BasionAgentApp({
    gatewayUrl: 'agent-gateway:8080',   // Required
    apiKey: 'your-api-key',             // Required
    secure: false,                      // TLS (default: false)
    heartbeatInterval: 60,              // Seconds (default: 60)
    maxConcurrentTasks: 100,            // Concurrent handlers (default: 100)

    // Auto-reconnection
    reconnection: {
        maxRetries: 10,
        initialDelayMs: 1000,
        maxDelayMs: 30000,
        backoffMultiplier: 2,
        autoReconnect: true,
    },

    // Remote logging (Loki)
    enableRemoteLogging: false,
    remoteLogLevel: 'info',             // 'debug' | 'info' | 'warn' | 'error'
    remoteLogBatchSize: 100,
    remoteLogFlushInterval: 5.0,
})
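The reconnection settings combine in the standard exponential-backoff pattern: each retry waits longer than the last, up to a cap. As an illustrative sketch of how these parameters are assumed to interact (not the SDK's internal code):

```typescript
// Illustrative sketch of the assumed backoff behavior -- not the SDK's
// actual implementation.
interface ReconnectionConfig {
    maxRetries: number
    initialDelayMs: number
    maxDelayMs: number
    backoffMultiplier: number
}

// Delay before retry `attempt` (0-based), capped at maxDelayMs.
function retryDelayMs(cfg: ReconnectionConfig, attempt: number): number {
    const delay = cfg.initialDelayMs * Math.pow(cfg.backoffMultiplier, attempt)
    return Math.min(delay, cfg.maxDelayMs)
}

const cfg = { maxRetries: 10, initialDelayMs: 1000, maxDelayMs: 30000, backoffMultiplier: 2 }
// With the defaults above, attempts 0..4 wait 1s, 2s, 4s, 8s, 16s;
// later attempts are capped at 30s.
```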

Agent Registration

const agent = await app.registerMe({
    name: 'genetic-disease-assistant',             // Unique name (kebab-case, used as Kafka topic)
    about: 'Genetic disease medical assistant',    // Short description for agent selection (max 500 chars)
    document: `Genetic disease specialist. Abilities: disease lookup, symptom analysis,
        genetic variant interpretation, finding similar conditions, drug info.
        Keywords: genetic disease, genetics, orphan disease, diagnosis, phenotype, genotype.`,
    representationName: 'Dr. Assistant',        // Display name (defaults to name)
    version: '1.0.0',                           // Version string
    lifecycleStage: 'public',                   // 'dev' | 'test' | 'private-preview' | 'public-preview' | 'public'
    welcomeMessage: 'Hello! I specialize in genetic diseases. How can I help?',
    detailedDescription: 'A comprehensive genetic disease assistant...',
    baseUrl: 'http://...',                      // Base URL for artifacts (optional)
    metadata: { specialty: 'genetic-diseases' },   // Free-form metadata (optional)
    categoryNames: ['my-body', 'medical'],      // Categories in kebab-case (auto-created)
    prompts: [                                  // Suggested prompts shown to users
        { label: 'Huntington', prompt: 'What is Huntington disease?' },
        { label: 'Similar diseases', prompt: 'Find diseases similar to Cystic Fibrosis' },
    ],
    waitlist: false,                            // Not publicly available yet (optional)
    relatedPages: [                             // Related pages (optional)
        { name: 'Resources', endpoint: '/resources' }
    ],
    forceUpdate: false,                         // Bypass content hash check (optional)
})

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | string | required | Unique agent name in kebab-case (used for Kafka routing) |
| about | string | required | Short description (max 500 characters, used for agent selection) |
| document | string | required | Used by the router to match user messages to your agent. Include keywords describing abilities and key capabilities so the router can properly route to your agent |
| representationName | string | name | Human-readable display name |
| version | string | undefined | Version string (e.g., "1.0.0"). Included in content hash — changing version triggers re-registration |
| lifecycleStage | string | 'dev' | One of: dev, test, private-preview, public-preview, public |
| welcomeMessage | string | undefined | Welcome text shown when a user starts a conversation |
| detailedDescription | string | undefined | Extended description beyond about |
| baseUrl | string | undefined | Base URL for agent's frontend service (used for iframe artifact URLs) |
| metadata | object | undefined | Free-form metadata object |
| categoryNames | string[] | undefined | Categories in kebab-case (auto-created if they don't exist) |
| prompts | Prompt[] | undefined | Array of { label, prompt } objects shown to users |
| waitlist | boolean | undefined | If true, agent is not publicly available |
| relatedPages | RelatedPage[] | undefined | Pages with name and endpoint (supports {conversation_id} placeholder) |
| forceUpdate | boolean | undefined | Bypass content hash check and always update |

Message Handling

Basic Handler

agent.onMessage(async (message: Message, sender: string) => {
    const s = agent.streamer(message)
    s.stream('Hello! How can I help?')
    await s.finish()
})

Sender Filtering

// Handle messages from users only
agent.onMessage(async (message, sender) => {
    // ...
}, { senders: ['user'] })

// Handle messages from a specific agent
agent.onMessage(async (message, sender) => {
    // ...
}, { senders: ['triage-agent'] })

// Exclude a specific sender
agent.onMessage(async (message, sender) => {
    // ...
}, { senders: ['~notification-agent'] })
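A sender entry with a `~` prefix reads as "everyone except". A sketch of the assumed matching semantics, inferred from the three examples above (not the SDK's internals):

```typescript
// Sketch of the assumed sender-filter semantics: plain names form an
// allow-list; names prefixed with '~' exclude that sender. Illustrative
// only -- not the SDK's actual matching code.
function senderMatches(sender: string, senders: string[]): boolean {
    const excluded = senders.filter(s => s.startsWith('~')).map(s => s.slice(1))
    const allowed = senders.filter(s => !s.startsWith('~'))
    if (excluded.includes(sender)) return false
    if (allowed.length > 0) return allowed.includes(sender)
    return true // only exclusions given: everyone else matches
}
```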

Designing Handlers for Agents vs Users

When your agent is called by other agents via agent.call(), the calling agent expects a focused, structured response — not a full conversational flow. Use senders filtering to serve both audiences from the same agent.

// ---- Pharmacology agent with separate user and agent interfaces ----

// User handler — full conversational flow with history, memory, thinking UI
pharmacologyAgent.onMessage(async (message, sender) => {
    const history = await message.conversation!.getHistory({ limit: 10 })
    const mem = message.memoryV2!
    await mem.ingest('user', message.content)

    const s = pharmacologyAgent.streamer(message)
    const thinking = new TextBlock()
    s.streamBy(thinking).setVariant('thinking')
    s.streamBy(thinking).streamTitle('Researching medication...')
    // ... full conversational response with context ...
    s.streamBy(thinking).done()
    s.stream(generateDetailedDrugInfo(message.content, history))
    await s.finish()
}, { senders: ['user'] })

// Agent handler — focused, structured response for agent.call()
pharmacologyAgent.onMessage(async (message, sender) => {
    const s = pharmacologyAgent.streamer(message)
    // No history, no memory, no thinking UI — just the facts
    s.stream(generateDrugInteractionReport(message.content))
    await s.finish()
}, { senders: ['coordinator-agent', 'care-planning-agent'] })

Error Handling

If a handler throws, the SDK automatically sends an error message to the user. Customize or disable this:

agent.errorMessageTemplate = 'Sorry, something went wrong. Please try again.'
agent.sendErrorResponses = false  // Disable auto error responses
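Conceptually, the SDK wraps each handler invocation in a try/catch along these lines. This is an illustrative sketch of the assumed dispatch behavior, not the actual internals:

```typescript
// Sketch of the assumed behavior: if the handler rejects, a message built
// from errorMessageTemplate is sent instead (unless auto responses are
// disabled). Illustrative only.
type Handler = (content: string) => Promise<string>

async function dispatch(
    handler: Handler,
    content: string,
    errorMessageTemplate: string,
    sendErrorResponses: boolean,
): Promise<string | undefined> {
    try {
        return await handler(content)
    } catch {
        // Handler threw: fall back to the template, or stay silent.
        return sendErrorResponses ? errorMessageTemplate : undefined
    }
}
```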

Message

The message object provides access to content, conversation history, memory, and attachments.

agent.onMessage(async (message: Message, sender: string) => {
    message.content              // Message text
    message.conversationId       // Conversation UUID
    message.userId               // User UUID
    message.metadata             // Optional metadata from frontend
    message.schema               // Optional JSON schema (for form responses)
    message.conversation         // Conversation helper (history, metadata)
    message.memoryV2             // mem0 memory (ingest + search)
    message.memory               // Deprecated memory (use memoryV2)
    message.attachments          // AttachmentInfo[]

    // Inter-agent communication
    message.handOff('agent-name')              // Forward to another agent
    message.handOff('agent-name', 'content')   // Forward with custom content
})

Streaming Responses

Basic Streaming

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)
    s.stream('Looking up information on Huntington disease...\n\n')
    s.stream('Huntington disease is a progressive neurodegenerative disorder...')
    await s.finish()
})

s.write(content) is available as an alias for s.stream(content).

Auto-Dispose (TypeScript 5.2+)

agent.onMessage(async (message, sender) => {
    await using s = agent.streamer(message)
    s.stream('Response...')
    // finish() called automatically when s goes out of scope
})

Streamer Options

// Set awaiting — next user reply routes back to this agent
const s = agent.streamer(message, { awaiting: true })

When to Use awaiting: true

Use awaiting: true when you want to keep the user talking to your agent and bypass the router for the next message. This is powerful but requires care — you need to release the conversation at some point, or the user gets stuck with your agent forever.

Scenario: Multi-turn data collection. A diagnostic agent asks the user a series of questions. Each reply must come back to the same agent rather than being rerouted to a different specialist.

diagnosticAgent.onMessage(async (message, sender) => {
    const history = await message.conversation!.getAgentHistory({ limit: 10 })

    if (history.length === 0) {
        // First interaction — ask the first question
        const s = diagnosticAgent.streamer(message, { awaiting: true })
        s.stream("I'll help assess your symptoms. How long have you been experiencing this?")
        await s.finish()
        return
    }

    if (history.length < 4) {
        // Still collecting — keep awaiting
        const s = diagnosticAgent.streamer(message, { awaiting: true })
        s.stream(askNextQuestion(history, message.content))
        await s.finish()
        return
    }

    // Collection done — respond WITHOUT awaiting to release back to router
    const s = diagnosticAgent.streamer(message)
    s.stream(generateAssessment(history, message.content))
    await s.finish()
})

Scenario: Awaiting + hand-off for off-topic messages. You hold the conversation with awaiting: true, but if the user asks something unrelated, you hand off to the guide agent instead of trying to answer it yourself.

nutritionAgent.onMessage(async (message, sender) => {
    if (!isNutritionRelated(message.content)) {
        // Not our topic — hand off to the guide agent who can reroute properly
        message.handOff('guide-agent', message.content)
        return
    }

    // On-topic — keep the conversation
    const s = nutritionAgent.streamer(message, { awaiting: true })
    s.stream(generateNutritionAdvice(message.content))
    await s.finish()
})

Tip: Always have an exit path. If you set awaiting: true indefinitely without a way out, the user cannot reach other agents. Common exits: (1) stop setting awaiting after your task is done, (2) hand off to guide-agent when the topic doesn't match yours, or (3) use the agent inventory to find a better agent and hand off.

Non-Persisted Content

Chunks that appear in real time but are not saved to conversation history:

s.stream('Searching knowledge graph...', { persist: false, eventType: 'thinking' })
s.stream('Here are the results:')  // This gets persisted

Message Metadata

Attach metadata to the response message:

s.setMessageMetadata({ source: 'knowledge_graph', confidence: 0.92 })
s.stream('Based on the knowledge graph, ...')
await s.finish()

Generative UI — Forms

Type-safe forms with a builder pattern. The frontend renders a form that replaces the chat input, and the user's submission arrives as JSON in the next message. Parse submissions with full type inference.

Form

import { Form, text, number, select, multiSelect, slider, switchField, option } from 'basion-ai-sdk'

const symptomForm = Form.create({
    name: text({ label: 'Full Name', placeholder: 'John Doe' }),
    severity: slider({ label: 'Pain Level', min: 1, max: 10, step: 1 }),
    location: select({
        label: 'Pain Location',
        options: [
            option('head', 'Head'),
            option('chest', 'Chest'),
            option('abdomen', 'Abdomen'),
            option('back', 'Back'),
        ],
        allowCustom: true,
    }),
    symptoms: multiSelect({
        label: 'Other Symptoms',
        options: [
            option('headache', 'Headache'),
            option('nausea', 'Nausea'),
            option('fatigue', 'Fatigue'),
        ],
    }),
    chronic: switchField({ label: 'Is this chronic?' }),
})
    .title('Symptom Report')
    .description('Please describe your current symptoms')
    .submitLabel('Submit Report')
    .allowCancel()                          // Show a cancel button
    .cancelLabel('Dismiss')                 // Customize cancel button text

agent.onMessage(async (message, sender) => {
    // Check if this is a form submission
    if (message.schema) {
        const data = symptomForm.parse(message)
        // data is fully typed: { name: string, severity: number, location: string, symptoms: string[], chronic: boolean }

        const result = symptomForm.validate(data)
        if (!result.isValid) {
            const s = agent.streamer(message)
            s.stream('Validation errors:\n')
            for (const [field, error] of Object.entries(result.errors)) {
                s.stream(`- ${field}: ${error}\n`)
            }
            await s.finish()
            return
        }

        const s = agent.streamer(message)
        s.stream(`Thank you, ${data.name}. Pain level: ${data.severity}/10.`)
        await s.finish()
        return
    }

    // Send the form
    const s = agent.streamer(message, { awaiting: true })
    s.setResponseSchema(symptomForm)
    s.stream('Please fill out the symptom report:')
    await s.finish()
})

Field Types

| Factory | Parses To | Key Options |
|---|---|---|
| text(config) | string | placeholder, minLength, maxLength, multiline |
| number(config) | number | min, max, step |
| select(config) | string | options, placeholder, allowCustom, isSearchable |
| multiSelect(config) | string[] | options, minSelections, maxSelections, allowCustom, isSearchable |
| checkbox(config) | boolean | default |
| checkboxGroup(config) | string[] | options, minSelections, maxSelections |
| switchField(config) | boolean | default |
| slider(config) | number | min, max, step, default |
| datePicker(config) | string | minDate, maxDate, format |
| hidden(config) | unknown | value |
| file(config) | AttachmentInfo \| null | accept, maxSize |

All fields accept label (required) and required (default: true, except hidden which defaults to false).

Use option(value, label) to create options for select, multiSelect, and checkboxGroup.

MultiStepForm

A wizard-style form with multiple steps. Fields are flattened across steps — field names must be unique.

import { MultiStepForm, step, text, number, slider, datePicker } from 'basion-ai-sdk'

const intakeForm = MultiStepForm.create({
    demographics: step({
        label: 'Basic Info',
        description: 'Tell us about yourself',
        fields: {
            name: text({ label: 'Full Name' }),
            dob: datePicker({ label: 'Date of Birth' }),
        },
    }),
    assessment: step({
        label: 'Pain Assessment',
        fields: {
            painLevel: slider({ label: 'Pain Level', min: 0, max: 10 }),
            painLocation: text({ label: 'Where does it hurt?' }),
        },
    }),
})
    .title('Patient Intake')
    .submitLabel('Complete Intake')

// Parse all fields at once (flattened):
const data = intakeForm.parse(message)
data.name       // string
data.dob        // string
data.painLevel  // number

Confirmation

A yes/no confirmation card with no input fields.

import { confirmation } from 'basion-ai-sdk'

const deleteConfirm = confirmation({
    title: 'Delete Account',
    message: 'Are you sure you want to delete your account?',
    description: 'This action cannot be undone.',
    confirmLabel: 'Delete',
    cancelLabel: 'Keep Account',
    variant: 'destructive',  // 'default' | 'success' | 'warning' | 'error' | 'info' | 'destructive'
})

// Send:
s.setResponseSchema(deleteConfirm)

// Parse:
const result = deleteConfirm.parse(message)
result.confirmed  // boolean

Raw JSON Schema

You can also pass raw JSON Schema objects directly to setResponseSchema():

s.setResponseSchema({
    type: 'object',
    title: 'Quick Feedback',
    properties: {
        rating: { type: 'integer', minimum: 1, maximum: 5, title: 'Rating' },
        comment: { type: 'string', title: 'Comment' },
    },
    required: ['rating'],
})

Form Cancellation

When a form has allowCancel(), the user can dismiss it and type a plain text message instead. The provider detects this automatically: if a pending_response_schema exists but the message content is not valid JSON, it marks the message with formCancelled: true in metadata.
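The detection rule can be sketched as a JSON-parse check (an illustrative sketch of the assumed provider logic, not its actual code):

```typescript
// Sketch of the assumed cancellation check: a pending response schema plus
// content that is not valid JSON means the user dismissed the form and
// typed plain text instead. Illustrative only.
function isFormCancelled(pendingSchema: unknown, content: string): boolean {
    if (!pendingSchema) return false
    try {
        JSON.parse(content) // valid JSON: treat as a form submission
        return false
    } catch {
        return true // plain text: the form was cancelled
    }
}
```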

agent.onMessage(async (message, sender) => {
    if (message.schema) {
        if (message.formCancelled) {
            // User cancelled the form and typed plain text
            const s = agent.streamer(message)
            s.stream('No problem! How can I help you?')
            await s.finish()
            return
        }

        // Normal form submission — parse and validate
        const data = symptomForm.parse(message)
        const result = symptomForm.validate(data)
        if (!result.isValid) {
            const s = agent.streamer(message)
            s.stream('Some fields are invalid:\n')
            for (const [field, error] of Object.entries(result.errors)) {
                s.stream(`- ${field}: ${error}\n`)
            }
            await s.finish()
            return
        }
        // ... handle valid data
    }
})

Generative UI — Message Components

Rich UI components persisted as message content. Sent via generateUi() on the streamer.

Components are mutually exclusive with stream() — you cannot mix text streaming and generateUi() in the same message. However, structural events (Stepper, TextBlock) can be used alongside generateUi().

Card

Structured information display with optional sections, image, and button.

import { Card } from 'basion-ai-sdk'

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)
    s.generateUi(new Card({
        title: 'Patient Summary',
        body: "Overview of the patient's current status and recent lab results.",
        variant: 'info',  // 'default' | 'success' | 'warning' | 'error' | 'info'
        sections: [
            { label: 'Name', value: 'John Doe' },
            { label: 'Age', value: '42' },
            { label: 'Condition', value: 'Huntington Disease' },
        ],
        image: 'https://example.com/patient-chart.png',
        button: { label: 'View Full Report', url: 'https://example.com/report' },
    }))
    await s.finish()
})

Accordion

Collapsible sections for FAQ-style or detailed content.

import { Accordion } from 'basion-ai-sdk'

s.generateUi(new Accordion({
    title: 'Frequently Asked Questions',
    items: [
        { title: 'What is Huntington Disease?', body: 'A progressive neurodegenerative disorder...' },
        { title: 'What are the symptoms?', body: 'Motor symptoms include chorea, dystonia...' },
        { title: 'How is it diagnosed?', body: 'Genetic testing can confirm the diagnosis...' },
    ],
}))
await s.finish()

Multiple Components

You can call generateUi() multiple times to send multiple components in a single message:

s.generateUi(new Card({ title: 'Summary', body: '...', variant: 'info' }))
s.generateUi(new Accordion({ title: 'Details', items: [...] }))
await s.finish()

Combining with Structural Events

Structural events (Stepper, TextBlock) are non-persisted, so they can run alongside generateUi() components, which become the persisted message content:

const stepper = new Stepper({ steps: ['Search', 'Analyze', 'Report'] })
s.streamBy(stepper).startStep(0)
// ... do work ...
s.streamBy(stepper).completeStep(0)
s.streamBy(stepper).done()

// Then send genui component as the persisted content
s.generateUi(new Card({ title: 'Results', body: '...', variant: 'success' }))
await s.finish()

Conversation History

agent.onMessage(async (message, sender) => {
    const conv = message.conversation!

    // All messages
    const history = await conv.getHistory({ limit: 20 })

    // Filter by role
    const userMsgs = await conv.getUserMessages({ limit: 20 })
    const assistantMsgs = await conv.getAssistantMessages({ limit: 20 })

    // Messages to/from this agent only
    const agentHistory = await conv.getAgentHistory({ limit: 10 })
    const agentHistory2 = await conv.getAgentHistory({ agentName: 'triage-agent', limit: 10 })

    // Shortcut for recent messages
    const last5 = await conv.getLastMessages(5)

    // Each message is a ConversationMessage
    for (const msg of history) {
        msg.content       // string
        msg.role          // 'user' | 'assistant' | 'system'
        msg.from          // sender name
        msg.createdAt     // Date
        msg.isUser()      // boolean
        msg.isAssistant() // boolean
    }
})

Memory V2 (mem0)

Long-term memory powered by mem0.ai. Semantically search across a user's history and optionally ingest custom content.

Note: User and assistant messages are automatically ingested by the provider service. You do not need to call ingest() for regular messages. Only use ingest() if you need to store custom or transformed content (e.g. summaries, tool outputs, extracted data).

agent.onMessage(async (message, sender) => {
    const mem = message.memoryV2!

    // Search for relevant past context
    const results = await mem.search('diagnosis history')
    for (const r of results) {
        const memoryText = (r as any).memory  // Extracted memory text
    }

    const s = agent.streamer(message)
    if (results.length > 0) {
        s.stream('Based on what I remember:\n')
        for (const r of results) {
            s.stream(`- ${(r as any).memory}\n`)
        }
        s.stream('\n')
    }
    s.stream(`Regarding your question: ${message.content}`)
    await s.finish()

    // Only ingest custom content — regular messages are auto-ingested
    await mem.ingest('assistant', 'Summary: patient prefers morning appointments')
})

| Method | Description |
|---|---|
| ingest(role, content) | Ingest custom content. Role: 'user', 'assistant', or 'system'. Regular messages are auto-ingested by the provider — only use this for custom/transformed content. |
| search(query) | Semantic search across the user's memories. Returns an array of results. |

Attachments

Download and process file attachments (images, PDFs, etc.).

import { isImageAttachment, isPdfAttachment } from 'basion-ai-sdk'

agent.onMessage(async (message, sender) => {
    if (!message.hasAttachments()) return

    const count = message.getAttachmentCount()
    const s = agent.streamer(message)
    s.stream(`Received ${count} file(s).\n\n`)

    for (let i = 0; i < count; i++) {
        const att = message.attachments[i]
        s.stream(`**${att.filename}** (${att.contentType}, ${att.size} bytes)\n`)

        if (isImageAttachment(att)) {
            const base64 = await message.getAttachmentBase64At(i)
            // Send to vision model...
        } else if (isPdfAttachment(att)) {
            const bytes = await message.getAttachmentBytesAt(i)
            // Parse PDF...
        }
    }

    // Or download everything at once
    const allBytes = await message.getAllAttachmentBytes()
    const allBase64 = await message.getAllAttachmentBase64()

    await s.finish()
})

Attachment Methods

| Method | Returns | Description |
|---|---|---|
| hasAttachments() | boolean | Whether the message has any attachments |
| getAttachmentCount() | number | Number of attachments |
| getAttachmentBytes() | Promise<Buffer> | Download first attachment as Buffer |
| getAttachmentBase64() | Promise<string> | Download first attachment as base64 |
| getAttachmentBytesAt(i) | Promise<Buffer> | Download attachment at index i |
| getAttachmentBase64At(i) | Promise<string> | Download attachment at index i as base64 |
| getAllAttachmentBytes() | Promise<Buffer[]> | Download all attachments |
| getAllAttachmentBase64() | Promise<string[]> | Download all as base64 |

AttachmentInfo Properties

| Property | Type | Example |
|---|---|---|
| filename | string | "genetic_report.pdf" |
| contentType | string | "application/pdf" |
| size | number | 524288 |
| url | string | Download URL |

Type Guards

import { isImageAttachment, isPdfAttachment } from 'basion-ai-sdk'

isImageAttachment(att)  // true for image/* MIME types
isPdfAttachment(att)    // true for application/pdf

Uploading Attachments

Agents can upload files and attach them to responses using attachFile() (from a buffer) or attachFilePath() (from a local file path). Both methods upload the file to the attachment service and automatically add it to the streamer's attachment list.

From a Buffer

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)

    // Generate content in memory and upload
    const csvContent = 'Name,Age\nAlice,30\nBob,25\n'
    const csvBuffer = Buffer.from(csvContent)
    const info = await s.attachFile(csvBuffer, 'users.csv', 'text/csv')

    s.stream(`Uploaded: ${info.filename} (${info.size} bytes)\n`)
    await s.finish()
})

From a File Path

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)

    // Upload a local file — content type is auto-detected from extension
    const info = await s.attachFilePath('/tmp/report.pdf')

    // Or specify the content type explicitly
    const info2 = await s.attachFilePath('/tmp/data.bin', 'application/octet-stream')

    s.stream(`Uploaded: ${info.filename}\n`)
    await s.finish()
})

Upload Methods

| Method | Parameters | Returns | Description |
|---|---|---|---|
| attachFile(buffer, filename, contentType) | Buffer \| Uint8Array, string, string | Promise<AttachmentInfo> | Upload from an in-memory buffer |
| attachFilePath(filePath, contentType?) | string, string? | Promise<AttachmentInfo> | Upload from a local file path. Content type is auto-detected from extension if not provided |

Auto-detected MIME types: .pdf, .png, .jpg, .jpeg, .gif, .webp, .csv, .json, .txt, .html, .xml. Falls back to application/octet-stream for unknown extensions.
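The extension lookup can be pictured as a simple map with the documented fallback. An illustrative sketch (the exact MIME values are assumptions, not the SDK's internal table):

```typescript
// Illustrative sketch of extension-based MIME detection with the documented
// application/octet-stream fallback -- not the SDK's actual implementation.
const MIME_TYPES: Record<string, string> = {
    '.pdf': 'application/pdf',
    '.png': 'image/png',
    '.jpg': 'image/jpeg',
    '.jpeg': 'image/jpeg',
    '.gif': 'image/gif',
    '.webp': 'image/webp',
    '.csv': 'text/csv',
    '.json': 'application/json',
    '.txt': 'text/plain',
    '.html': 'text/html',
    '.xml': 'application/xml',
}

function guessContentType(filePath: string): string {
    const dot = filePath.lastIndexOf('.')
    const ext = dot === -1 ? '' : filePath.slice(dot).toLowerCase()
    return MIME_TYPES[ext] ?? 'application/octet-stream'
}
```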

Report Generation

Render React components to HTML or PDF server-side and attach the output to a message — all in a single method call. The report-generator service handles server-side rendering (via isolated-vm), optional Tailwind CSS generation, and optional PDF conversion (via Puppeteer).

Prerequisites

  • Report Generator service running and accessible via the gateway (PROXY_REPORT_GENERATOR_TARGET)
  • Attachment Service running and accessible via the gateway

For the .tsx file path overload:

npm install esbuild react react-dom

esbuild is an optional peer dependency — only needed if you use file path rendering. The pre-bundled Buffer overload works without it.

From a .tsx Component

Pass a file path to a React component. The SDK bundles it with esbuild, sends it to the report-generator, downloads the result, uploads it to the attachment service, and attaches it to the message.

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)
    s.stream('Generating your report...\n\n')

    const componentPath = new URL('./reports/PatientSummary.tsx', import.meta.url).pathname

    const info = await s.attachReport(componentPath, {
        title: 'Patient Summary',
        date: '2026-02-22',
        rows: [
            { label: 'Name', value: 'Jane Doe' },
            { label: 'Diagnosis', value: 'Huntington Disease' },
        ],
        notes: 'Patient is responding well to treatment.',
    })

    s.stream(`Report attached: **${info.filename}** (${info.size} bytes)`)
    await s.finish()
})

The component must export a default React component:

// reports/PatientSummary.tsx
import React from 'react'

interface Props {
    title: string
    date: string
    rows: { label: string; value: string }[]
    notes: string
}

export default function PatientSummary({ title, date, rows, notes }: Props) {
    return (
        <div className="font-sans p-10 max-w-3xl mx-auto">
            <h1 className="text-heading-1 text-brand-primary">{title}</h1>
            <p className="text-body-sm-regular text-text-tertiary">Generated on: {date}</p>
            {/* ... */}
        </div>
    )
}

From a Pre-Bundled Buffer

If you've already built the CJS bundle yourself (or want to use inline React.createElement calls), pass a Buffer or Uint8Array instead. No esbuild needed.

const bundleCode = `
    var React = require('react');
    function Report(props) {
        return React.createElement('div', null,
            React.createElement('h1', null, props.title),
            React.createElement('p', null, props.body)
        );
    }
    module.exports.default = Report;
`

const info = await s.attachReport(Buffer.from(bundleCode), {
    title: 'Quick Report',
    body: 'Generated from inline CJS.',
})

Tailwind CSS + March Design Tokens

Set tailwind: true to generate on-demand Tailwind CSS with March design system tokens. The report-generator scans the rendered HTML for Tailwind class names and generates only the CSS that's actually used (JIT) — no bloated 300KB stylesheet.

// File path + Tailwind
const info = await s.attachReport(componentPath, props, {
    tailwind: true,
})

// Buffer + Tailwind
const info = await s.attachReport(Buffer.from(bundleCode), props, {
    tailwind: true,
})

Available design tokens in reports (same as the main app):

| Token Category | Examples |
|---|---|
| Brand colors | text-brand-primary, bg-brand-secondary-lightest, border-brand-secondary |
| Text colors | text-text-primary, text-text-secondary, text-text-tertiary, text-text-quaternary |
| Background | bg-background, bg-background-surface, bg-background-on-surface |
| Typography | text-heading-1 through text-heading-6, text-body-md-regular, text-body-sm-medium, etc. |
| Assistive | text-assistive-error, bg-assistive-success-light, text-assistive-warning-darkest |
| Borders | border-level-1 through border-level-4 |
| Font | font-sans (HK Grotesk stack) |

PDF Output

Set output: 'pdf' to convert the rendered HTML to a PDF via Puppeteer. Can be combined with Tailwind and custom CSS.

// PDF from .tsx component with Tailwind styling
const info = await s.attachReport(componentPath, props, {
    output: 'pdf',
    tailwind: true,
})

// PDF with page options
const info = await s.attachReport(componentPath, props, {
    output: 'pdf',
    tailwind: true,
    pdfOptions: {
        pageSize: 'A4',
        landscape: false,
        margin: { top: '20mm', bottom: '20mm', left: '15mm', right: '15mm' },
    },
})

// PDF from buffer with custom CSS
const info = await s.attachReport(Buffer.from(bundleCode), props, {
    output: 'pdf',
    cssContent: 'body { margin: 0; font-family: Arial, sans-serif; }',
})

RenderOptions Reference

File path overload (string first argument):

| Option | Type | Default | Description |
|---|---|---|---|
| output | 'html' \| 'pdf' | 'html' | Output format |
| css | string | undefined | Path to a CSS file to inject |
| pdfOptions | PdfOptions | undefined | PDF page size, orientation, margins |
| tailwind | boolean | false | Generate Tailwind CSS with March design tokens |

Buffer overload (Uint8Array first argument):

| Option | Type | Default | Description |
|---|---|---|---|
| output | 'html' \| 'pdf' | 'html' | Output format |
| cssContent | string | undefined | Inline CSS string to inject |
| pdfOptions | PdfOptions | undefined | PDF page size, orientation, margins |
| tailwind | boolean | false | Generate Tailwind CSS with March design tokens |

PdfOptions:

| Option | Type | Default | Description |
|---|---|---|---|
| pageSize | 'A4' \| 'Letter' \| 'Legal' | 'Letter' | Page size |
| landscape | boolean | false | Landscape orientation |
| margin | { top?, right?, bottom?, left? } | undefined | Page margins (CSS units: '20mm', '1in') |

Template Registry

The template registry lets you pre-register report templates at agent startup so render calls become a simple name + version lookup — no esbuild bundling at render time.

Why Use the Template Registry?

| | attachReport() (ad-hoc) | attachTemplateReport() (registry) |
|---|---|---|
| Bundle at | Every render call | Startup only (once per version) |
| Render time | ~2-5s (esbuild + render) | ~1-2s (render only) |
| Same template, many users | Bundles on every request | Bundle once, render N times |
| Best for | One-off or rarely-used reports | High-volume or frequently called reports |

Best practice: Register all your report templates during agent startup (main(), before app.run()). Registration is idempotent — if the version already exists in MinIO, the call is a fast no-op.

Register at Startup

import { BasionAgentApp, reportPath } from 'basion-ai-sdk'

const app = new BasionAgentApp({ gatewayUrl: '...', apiKey: '...' })

async function main() {
    const agent = await app.registerMe({ name: 'my-agent', ... })

    // Register templates once at startup — idempotent, safe to call on every restart
    await app.registerTemplate(
        'patient-summary',   // Template name (arbitrary string)
        '1.0.0',             // Version (semantic versioning recommended)
        reportPath(import.meta.url, 'reports/PatientSummary.tsx'),
    )

    // Register a second template
    await app.registerTemplate(
        'care-report',
        '2.1.0',
        reportPath(import.meta.url, 'reports/CareReport.tsx'),
    )

    console.log('Templates ready')
    await app.run()
}

main()

On first startup: the SDK bundles the .tsx file with esbuild and uploads it to the report-generator's MinIO under templates/{name}/{version}.js.

On subsequent startups: the SDK checks if that key exists and skips registration if it does — no bundling, instant.

reportPath() helper: Resolves a path relative to the current file using import.meta.url:

import { reportPath } from 'basion-ai-sdk'

// Equivalent to: new URL('./reports/PatientSummary.tsx', import.meta.url).pathname
const path = reportPath(import.meta.url, 'reports/PatientSummary.tsx')

Pre-built buffer: You can also register from a pre-built Uint8Array instead of a .tsx path:

await app.registerTemplate('my-report', '1.0.0', myPreBuiltBuffer)

Retry on startup: registerTemplate automatically retries on 502/503/504 responses (up to 5 times, exponential backoff starting at 2s) in case the report-generator service isn't ready yet when the agent starts.
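The retry policy described above can be sketched as follows. This is a hedged illustration of the pattern, not the SDK's actual implementation — the `sleep` function is injectable here purely so the backoff can be exercised without real delays:

```typescript
// Illustrative sketch: retry on HTTP 502/503/504, up to 5 attempts,
// with exponential backoff starting at 2s (2s, 4s, 8s, ...).
type HttpResult = { status: number }

async function retryOnGatewayErrors(
    attempt: () => Promise<HttpResult>,
    maxAttempts = 5,
    baseDelayMs = 2000,
    sleep: (ms: number) => Promise<void> = ms =>
        new Promise<void>(resolve => setTimeout(resolve, ms)),
): Promise<HttpResult> {
    let last: HttpResult = { status: 0 }
    for (let i = 0; i < maxAttempts; i++) {
        last = await attempt()
        // Anything other than a gateway error is returned immediately
        if (![502, 503, 504].includes(last.status)) return last
        // Back off exponentially before the next attempt
        if (i < maxAttempts - 1) await sleep(baseDelayMs * 2 ** i)
    }
    return last
}
```

The injectable `sleep` is an assumption of this sketch; the SDK handles the waiting internally.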

Render by Name + Version

Once registered, render from any message handler using attachTemplateReport():

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)
    s.stream('Generating your care report...\n\n')

    const info = await s.attachTemplateReport(
        'care-report',   // Must match what you registered
        '2.1.0',         // Must match the registered version
        {
            patientName: 'Jane Doe',
            patientId: 'MRN-20260101-0042',
            diagnosis: 'Parkinson Disease (Stage II)',
            status: 'active',
            reportDate: new Date().toISOString().split('T')[0],
        },
        { output: 'pdf', tailwind: true },
    )

    s.stream(`Report attached: **${info.filename}** (${info.size} bytes)`)
    await s.finish()
})

The call path is: attachTemplateReport → POST /templates/{name}/{version}/render (JSON body, no bundle upload) → report-generator downloads the bundle from MinIO → renders → returns a URL → SDK downloads the file and uploads it to the attachment-service.

attachTemplateReport signature:

s.attachTemplateReport(
    name: string,                    // Template name
    version: string,                 // Template version
    props: Record<string, unknown>,  // Component props
    options?: BundleRenderOptions,   // output, tailwind, pdfOptions
): Promise<AttachmentInfo>

Same BundleRenderOptions as attachReport:

| Option | Type | Default | Description |
|---|---|---|---|
| output | 'html' \| 'pdf' | 'html' | Output format |
| tailwind | boolean | false | Generate Tailwind CSS with March design tokens |
| pdfOptions | PdfOptions | undefined | PDF page size, orientation, margins |

Multi-Component Templates

Templates support multi-file React components — esbuild resolves all relative imports automatically. Just pass the entry component path:

reports/
├── CareReport.tsx        ← entry point (import this)
├── Badge.tsx             ← imported by CareReport
└── StatCard.tsx          ← imported by CareReport
// CareReport.tsx imports Badge and StatCard — esbuild bundles all three
await app.registerTemplate('care-report', '1.0.0',
    reportPath(import.meta.url, 'reports/CareReport.tsx')
)

The .tsx files used as report templates are bundled by esbuild, not compiled by TypeScript. Exclude them from your tsconfig.json to avoid JSX compilation errors:

{
  "include": ["src/**/*"],
  "exclude": ["src/reports/**/*"]
}

Template Registry Error Handling

try {
    const info = await s.attachTemplateReport('care-report', '1.0.0', props, { output: 'pdf' })
    s.stream(`PDF attached: **${info.filename}**`)
} catch (e) {
    s.streamError(e, 'Failed to render care report.')
}
await s.finish()

attachTemplateReport() throws on:

| Error | Cause |
|---|---|
| Error('RenderClient not available...') | Report-generator not configured |
| Error('AttachmentClient not available...') | Attachment service not configured |
| Error('Template name cannot be empty.') | Empty name string |
| Error('Template version cannot be empty.') | Empty version string |
| Error('Props must be a non-null object.') | Invalid props argument |
| APIException('Template {name}@{version} not registered', 404) | Template was never registered |
| APIException('Template render failed: HTTP 422', 422) | Component threw during render |
| APIException('Failed to upload attachment: ...', ...) | Attachment service upload failed |

Error Handling

attachReport() throws on:

| Error | Cause |
|---|---|
| Error('RenderClient not available...') | Report-generator service not configured in gateway |
| Error('AttachmentClient not available...') | Attachment service not configured in gateway |
| Error('Component path cannot be empty.') | Empty string passed as component path |
| Error('Bundle buffer cannot be empty.') | Empty buffer passed |
| Error('Props must be a non-null object.') | Invalid props argument |
| APIException('esbuild is required...') | esbuild not installed (file path overload only) |
| APIException('File not found: ...') | Component file doesn't exist |
| APIException('Bundle failed: ...') | esbuild compilation error |
| APIException('Bundle size ... exceeds 5MB limit') | Bundle too large |
| APIException('Render failed: ...') | Report-generator returned an error (400, 408, 413, 422) |
| APIException('Report Generator unavailable...') | Report-generator service unreachable |
| APIException('Failed to upload attachment: ...') | Attachment service upload failed |

Use s.streamError() to gracefully handle errors:

try {
    const info = await s.attachReport(componentPath, props, { output: 'pdf', tailwind: true })
    s.stream(`Report ready: **${info.filename}**`)
} catch (e) {
    s.streamError(e, 'Failed to generate the report. Please try again.')
}
await s.finish()

Structural Streaming

Structural streaming lets you send rich UI components alongside regular text. Bind a structural component to the streamer with streamBy().

Artifact

Files, images, or embeds with generation progress. Artifact data is persisted to the database.

import { Artifact } from 'basion-ai-sdk'

agent.onMessage(async (message, sender) => {
    const s = agent.streamer(message)
    const artifact = new Artifact()

    // Show progress
    s.streamBy(artifact).generating('Generating genetic pathway diagram...', 0.3)
    // ... do work ...
    s.streamBy(artifact).generating('Rendering...', 0.8)

    // Complete with result
    s.streamBy(artifact).done({
        url: 'https://example.com/pathway-diagram.png',
        type: 'image',
        title: 'HTT Gene Pathway',
        description: 'Huntingtin protein interaction network',
        metadata: { width: 1200, height: 800 },
    })

    // Or signal an error
    // s.streamBy(artifact).error('Failed to generate diagram')

    s.stream("Here's the pathway diagram for the HTT gene.")
    await s.finish()
})

Artifact types: image, iframe, document, video, audio, code, link, file

Surface

Interactive embedded components (iframes, widgets).

import { Surface } from 'basion-ai-sdk'

const surface = new Surface()
s.streamBy(surface).generating('Loading appointment scheduler...')
s.streamBy(surface).done({
    url: 'https://cal.com/embed/dr-smith',
    type: 'iframe',
    title: 'Schedule Genetic Counseling',
    description: 'Book a session with a genetic counselor',
})
s.stream('You can schedule your genetic counseling session above.')

TextBlock

Collapsible text blocks with streaming title/body and visual variants. TextBlock events are not persisted.

import { TextBlock } from 'basion-ai-sdk'

const block = new TextBlock()

// Set visual variant
s.streamBy(block).setVariant('thinking')

// Stream title (appends)
s.streamBy(block).streamTitle('Analyzing ')
s.streamBy(block).streamTitle('symptoms...')

// Stream body (appends)
s.streamBy(block).streamBody('Checking symptom database...\n')
s.streamBy(block).streamBody('Cross-referencing with HPO ontology...\n')
s.streamBy(block).streamBody('Matching against known phenotypes...\n')

// Replace title/body entirely
s.streamBy(block).updateTitle('Analysis Complete')
s.streamBy(block).updateBody('Found 3 matching conditions.')

// Mark as done
s.streamBy(block).done()

s.stream('Based on the symptoms, here are possible conditions...')

Variants: thinking, note, warning, error, success

Stepper

Multi-step progress indicators. Stepper events are not persisted.

import { Stepper } from 'basion-ai-sdk'

const stepper = new Stepper({
    steps: ['Search diseases', 'Analyze phenotypes', 'Find similar conditions', 'Generate report'],
})

s.streamBy(stepper).startStep(0)
const diseases = await kg.searchDiseases({ name: 'Huntington' })
s.streamBy(stepper).completeStep(0)

s.streamBy(stepper).startStep(1)
const phenotypes = await kg.searchPhenotypes({ name: 'chorea' })
s.streamBy(stepper).completeStep(1)

s.streamBy(stepper).startStep(2)
const similar = await kg.findSimilarDiseases('Huntington Disease')
s.streamBy(stepper).completeStep(2)

// Add a step dynamically
s.streamBy(stepper).addStep('Cross-reference')
s.streamBy(stepper).startStep(4)
s.streamBy(stepper).updateStepLabel(4, 'Cross-reference (final)')
s.streamBy(stepper).completeStep(4)

s.streamBy(stepper).startStep(3)
// Or mark a step as failed
// s.streamBy(stepper).failStep(3, 'Report generation timed out')
s.streamBy(stepper).completeStep(3)

s.streamBy(stepper).done()
s.stream("Here's your genetic disease report...")

Knowledge Graph

Query biomedical knowledge graphs for diseases, proteins, phenotypes, drugs, and pathways. Accessed via agent.tools.knowledgeGraph.

agent.onMessage(async (message, sender) => {
    const kg = agent.tools.knowledgeGraph

    // Search diseases
    const diseases = await kg.searchDiseases({ name: 'Huntington', limit: 5 })
    const disease = await kg.getDisease(123)

    // Search proteins/genes
    const proteins = await kg.searchProteins({ symbol: 'HTT', limit: 10 })

    // Search phenotypes (HPO terms)
    const phenotypes = await kg.searchPhenotypes({ name: 'chorea', limit: 10 })
    const phenotypes2 = await kg.searchPhenotypes({ hpoId: 'HP:0002072' })

    // Search drugs
    const drugs = await kg.searchDrugs({ name: 'tetrabenazine', limit: 5 })

    // Search pathways
    const pathways = await kg.searchPathways({ name: 'apoptosis', limit: 5 })

    // Find similar diseases by shared phenotypes
    const similar = await kg.findSimilarDiseases('Huntington Disease', 10)
    for (const sim of similar) {
        sim.diseaseName        // 'Spinocerebellar Ataxia Type 17'
        sim.similarityScore    // 0.85
        sim.sharedCount        // 12
    }

    // Find similar diseases by shared genes
    const byGenes = await kg.findSimilarDiseasesByGenes('Huntington Disease', 10)

    // Get all connections for an entity
    const edges = await kg.getEntityNetwork('HTT', 'protein')
    for (const e of edges) {
        e.sourceId, e.sourceType
        e.targetId, e.targetType
        e.relationType
    }

    // k-hop graph traversal (BFS subgraph)
    const subgraph = await kg.kHopTraversal('HTT', 'protein', 2, 100)

    // Shortest path between two entities
    const path = await kg.findShortestPath(
        'HTT', 'protein',
        'Huntington Disease', 'disease',
        5,  // maxHops
    )
    for (const step of path) {
        step.nodeName, step.nodeType, step.relation
    }
})

Entity types: protein, phenotype, disease, pathway, drug, molecular_function, cellular_component, biological_process

Agent Inventory

Query the AI Inventory service to discover active agents and their capabilities. Accessed via agent.tools.agentInventory.

agent.onMessage(async (message, sender) => {
    const inv = agent.tools.agentInventory

    // Get all active agents
    const agents = await inv.getActiveAgents()
    for (const a of agents) {
        a.id                  // Agent UUID
        a.name                // 'genetic-disease-assistant'
        a.representationName  // 'Dr. Assistant'
        a.about               // Short description
        a.document            // Full documentation
        a.examplePrompts      // ['What is Huntington disease?', ...]
        a.categories          // [{ id: '...', name: 'medical' }]
        a.tags                // [{ id: '...', name: 'genetic-disease' }]
    }

    // Get agents accessible to a specific user (filtered by role/permissions)
    const userAgents = await inv.getUserAgents('user-uuid')
})

| Method | Returns | Description |
|---|---|---|
| getActiveAgents() | Promise<AgentInfo[]> | All agents with status=active and lifeStatus=active |
| getUserAgents(userId) | Promise<AgentInfo[]> | Active agents accessible to a specific user |
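One common use of the inventory is picking a hand-off target dynamically. The sketch below is illustrative, not part of the SDK — the field names (name, tags) follow the AgentInfo shape shown above, and the helper itself is a hypothetical utility:

```typescript
// Illustrative helper: find the first active agent carrying a given tag.
interface Tag { id: string; name: string }
interface AgentInfoLite { name: string; tags: Tag[] }

function findAgentByTag(agents: AgentInfoLite[], tag: string): string | undefined {
    return agents.find(a => a.tags.some(t => t.name === tag))?.name
}
```

For example, `findAgentByTag(await inv.getActiveAgents(), 'genetic-disease')` would return the name of a tagged specialist (or undefined), which could then be passed to message.handOff().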

Agent-Initiated (Proactive) Conversations

Agents can proactively start new conversations with users — without waiting for them to message first. Use cases: health check-ins, medication reminders, appointment follow-ups, new research alerts.

How It Works

  1. Agent creates a conversation via the conversation store API (isNew: true, currentRoute, lockedBy set atomically)
  2. Agent streams the first message through the normal Kafka pipeline (router → provider → Centrifugo → user)
  3. Provider persists the assistant message and unlocks the conversation on done=true
  4. User sees a new bold conversation in their sidebar (isNew flag)
  5. If awaiting: true, the user's reply routes back to the agent's onMessage handler

Basic Usage

agent.onMessage(async (message, sender) => {
    // User replied to the proactive conversation
    const s = agent.streamer(message)
    s.stream('Thanks for responding to the check-in!')
    await s.finish()
})

// Trigger from a scheduler, webhook, or API endpoint
async function sendWeeklyCheckIn(userId: string) {
    const [convId, streamer] = await agent.startConversation({
        userId,
        title: 'Weekly Health Check-in',
        awaiting: true,
    })
    streamer.stream('Hi! Time for your weekly check-in.\n\n')
    streamer.stream('How have you been feeling this week?')
    await streamer.finish()
}

With Response Schema

async function requestSymptomLog(userId: string) {
    const [convId, streamer] = await agent.startConversation({
        userId,
        title: 'Daily Symptom Log',
        awaiting: true,
        responseSchema: {
            type: 'object',
            title: 'How are you feeling today?',
            properties: {
                painLevel: { type: 'integer', minimum: 0, maximum: 10, title: 'Pain Level' },
                fatigue: { type: 'integer', minimum: 0, maximum: 10, title: 'Fatigue Level' },
                notes: { type: 'string', title: 'Additional Notes' },
            },
            required: ['painLevel'],
        },
    })
    streamer.stream('Please fill out today\'s symptom log:')
    await streamer.finish()
}

With Metadata

const [convId, streamer] = await agent.startConversation({
    userId,
    title: 'New Research Alert',
    awaiting: true,
    messageMetadata: { alertType: 'research', paperId: 'PMC12345' },
    metadata: { source: 'pubmed_monitor', triggeredAt: '2025-01-15T09:00:00Z' },
})
streamer.stream('A new paper about your condition was published today...')
await streamer.finish()

API Reference

async startConversation(options: {
    userId: string                              // Target user UUID
    title?: string                              // Sidebar title (default: "Agent-initiated conversation")
    awaiting?: boolean                          // Route user reply back to this agent (default: false)
    responseSchema?: Record<string, unknown>    // JSON Schema for structured response
    messageMetadata?: Record<string, unknown>   // Metadata on the streamed message
    metadata?: Record<string, unknown>          // Metadata on the conversation itself
}): Promise<[string, Streamer]>                 // Returns [conversationId, streamer]

Inter-Agent Communication

Two patterns for agents to communicate with each other, each suited to different use cases.

Why Inter-Agent Communication?

In a genetic disease platform, no single agent can cover everything. A patient might ask a general wellness agent about fatigue, but the underlying cause is Ehlers-Danlos syndrome — something only a genetics specialist would recognize. The general agent needs to either hand off the conversation to the specialist entirely, or call the specialist to get a quick answer and weave it into its own response.

Without inter-agent communication, every agent would need to be an expert in everything, or the user would have to manually switch between agents mid-conversation.

Overview

| Pattern | Method | Blocking? | Use Case |
|---|---|---|---|
| Hand-off | message.handOff(agent) | No | Fire-and-forget forward; your agent is done, another takes over |
| Call | agent.call(agent, convId, content) | Yes (await) | Request → response between agents; your agent stays in control |

Hand-Off (message.handOff)

Forward a message to another agent with a single Kafka message. No streamer, no done flag — the target agent's onMessage handler fires exactly once. The calling agent can optionally stream to the user before handing off.

Scenario: Out-of-expertise escalation. A patient asks the general wellness agent about joint hypermobility and chronic pain. The wellness agent recognizes this sounds like Ehlers-Danlos syndrome — far outside its expertise. It tells the user what's happening and hands off to the connective tissue specialist, who takes over the conversation entirely.

wellnessAgent.onMessage(async (message, sender) => {
    const content = message.content.toLowerCase()

    // Detect genetic disease indicators outside our expertise
    const rareIndicators = ['hypermobility', 'joint laxity', 'stretchy skin', 'ehlers-danlos']
    if (rareIndicators.some(term => content.includes(term))) {
        // Stream a brief explanation to the user before handing off
        const s = wellnessAgent.streamer(message)
        s.stream('Your symptoms suggest a connective tissue condition that requires specialist input.\n\n')
        s.stream('I\'m connecting you with our genetic disease specialist who can help.')

        // Hand off — the specialist takes over the conversation
        message.handOff('genetic-disease-specialist',
            `Patient reports: ${message.content}\n\n` +
            'Wellness agent assessment: Symptoms consistent with possible Ehlers-Danlos syndrome. ' +
            'Patient needs specialist evaluation for hypermobility spectrum disorders.'
        )
        return
    }

    // Normal wellness handling...
    const s = wellnessAgent.streamer(message)
    s.stream(generateWellnessResponse(message.content))
    await s.finish()
})

Scenario: Agent has completed its task and knows the best next agent. A lab results agent finishes analyzing blood work and hands off to the treatment planning agent, because the natural next step is treatment recommendations — and the treatment agent is the right one for the job.

labResultsAgent.onMessage(async (message, sender) => {
    const s = labResultsAgent.streamer(message)
    const analysis = analyzeLabResults(message.content)
    s.stream(`Here's your lab analysis:\n\n${analysis}\n\n`)
    s.stream('Now let me connect you with our treatment planning specialist ' +
             'to discuss next steps based on these results.')

    // Our job is done — hand off to the treatment agent with context
    message.handOff('treatment-planning-agent',
        `Lab analysis complete. Results summary:\n${analysis}\n\n` +
        `Patient's original message: ${message.content}`
    )
})

When to hand off vs let the router decide: Use handOff when your agent has domain knowledge about which agent should come next. If you're unsure, hand off to guide-agent (or simply don't set awaiting and let the router handle the next message).
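This decision can be sketched as a simple routing function. The keyword lists below are illustrative placeholders (the agent names come from the surrounding examples), not a real classifier:

```typescript
// Illustrative sketch: hand off to a specialist only when we recognize the
// domain; otherwise fall back to 'guide-agent' and let it triage.
function pickNextAgent(content: string): string {
    const text = content.toLowerCase()
    if (['hypermobility', 'ehlers-danlos'].some(t => text.includes(t))) {
        return 'genetic-disease-specialist'
    }
    if (text.includes('lab results')) return 'treatment-planning-agent'
    return 'guide-agent' // unsure — the guide agent decides the next step
}
```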

Scenario: Multi-step diagnostic pipeline. A general intake agent collects symptoms, hands off to a diagnostic agent, which may further hand off to a condition-specific agent.

intakeAgent.onMessage(async (message, sender) => {
    const symptoms = extractSymptoms(message.content)

    const s = intakeAgent.streamer(message)
    s.stream('Thank you for describing your symptoms. Let me connect you with the right specialist.')

    // Hand off with structured context for the diagnostic agent
    message.handOff('diagnostic-agent',
        `Extracted symptoms: ${symptoms.join(', ')}\n` +
        `Original message: ${message.content}`
    )
})

How it works:

  1. Produces a single Kafka message to router.inbox with the original message headers
  2. Provider API (router module) forwards to the target agent's inbox and updates currentRoute to the target agent
  3. Target agent's onMessage handler fires with the forwarded message
  4. The calling agent does not send done=true — the target agent is responsible for responding to the user
  5. The conversation lock is not left stuck — the target agent responds to the user via the normal streamer flow, which sends done=true back to router.inbox, and Provider API finalizes in-process (persists, unlocks, publishes to Centrifugo)

handOff(agentName: string, content?: string): void

| Parameter | Type | Default | Description |
|---|---|---|---|
| agentName | string | required | Target agent name |
| content | string | undefined | Override content. If omitted, forwards the original message.content |

Call (agent.call)

Synchronous agent-to-agent communication. Call another agent and await the response. The calling agent keeps conversation ownership — currentRoute is not changed.

Scenario: Cross-referencing with a specialist. A patient asks the care coordinator about drug interactions for their rare condition (Gaucher disease). The coordinator needs specific pharmacogenomics information but should remain the patient's primary contact. It calls the pharmacology agent behind the scenes, gets the answer, and incorporates it into its own response.

coordinatorAgent.onMessage(async (message, sender) => {
    const s = coordinatorAgent.streamer(message)
    s.stream('Let me check on that for you...\n\n')

    try {
        // Call the pharmacology agent — patient doesn't see this exchange
        const drugInfo = await coordinatorAgent.call(
            'pharmacology-agent',
            message.conversationId,
            `Patient has Gaucher disease (Type 1, on ERT). Question: ${message.content}`,
            15000,
        )
        s.stream(`Here's what I found about your medication:\n\n${drugInfo}`)
    } catch (error) {
        const msg = error instanceof Error ? error.message : String(error)
        if (msg.includes('timed out')) {
            s.stream("I wasn't able to get the pharmacology details right now. " +
                     "Let me note this and follow up with you shortly.")
        } else {
            s.stream(`I encountered an issue checking that: ${msg}`)
        }
    }

    await s.finish()
})

Scenario: Gathering information from multiple specialists. A genetic disease coordinator needs input from both a genetics agent and a clinical trials agent to give the patient a complete answer.

coordinatorAgent.onMessage(async (message, sender) => {
    const s = coordinatorAgent.streamer(message)
    s.stream('Looking into this from multiple angles...\n\n')

    // Call two specialists in parallel
    const [geneticsResult, trialsResult] = await Promise.allSettled([
        coordinatorAgent.call(
            'genetics-agent', message.conversationId,
            `Patient with suspected Wilson's disease. ${message.content}`,
        ),
        coordinatorAgent.call(
            'clinical-trials-agent', message.conversationId,
            `Find active clinical trials for Wilson's disease relevant to: ${message.content}`,
        ),
    ])

    if (geneticsResult.status === 'fulfilled') {
        s.stream(`**Genetic perspective:**\n${geneticsResult.value}\n\n`)
    }
    if (trialsResult.status === 'fulfilled') {
        s.stream(`**Clinical trials:**\n${trialsResult.value}\n\n`)
    }

    s.stream('Is there anything else you\'d like to know?')
    await s.finish()
})

How it works:

  1. The calling agent produces a message to router.inbox with isCall: 'true' and a unique callId header
  2. Router forwards to the target agent's inbox — currentRoute is not updated (the caller keeps ownership)
  3. The target agent's onMessage handler fires — the streamer auto-detects isCall and routes the response back to the calling agent (not the user)
  4. The calling agent's message interceptor captures the response chunks, accumulates them, and resolves the call() promise on done=true
  5. call() returns the full accumulated response as a string
  6. Call messages are not persisted — they are transient, like internal tool calls. The conversationId is only a Kafka partition key. This keeps the conversation history clean for LLMs (no confusing consecutive assistant messages from different agents)

async call(
    agentName: string,
    conversationId: string,
    content: string,
    timeout: number = 30000,
): Promise<string>

| Parameter | Type | Default | Description |
|---|---|---|---|
| agentName | string | required | Target agent to call |
| conversationId | string | required | Conversation ID (used as Kafka partition key only — not persisted) |
| content | string | required | Text content to send |
| timeout | number | 30000 | Max milliseconds to wait for a response |

Returns: Promise<string> — the target agent's full response.

Throws: An error with "timed out" in the message if the target agent doesn't respond within the timeout.
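The timeout behavior can be pictured with a standard Promise.race-style pattern. This is a hedged sketch of the general technique, not the SDK's internals:

```typescript
// Illustrative sketch: race a pending response against a timer that
// rejects with a "timed out" error, clearing the timer either way.
function withTimeout<T>(pending: Promise<T>, timeoutMs: number): Promise<T> {
    return new Promise<T>((resolve, reject) => {
        const timer = setTimeout(
            () => reject(new Error(`Call timed out after ${timeoutMs}ms`)),
            timeoutMs,
        )
        pending.then(
            value => { clearTimeout(timer); resolve(value) },
            err => { clearTimeout(timer); reject(err) },
        )
    })
}
```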

Target agent handler — basic: The target agent doesn't need special handling — its normal onMessage handler fires. The SDK's streamer auto-detects the isCall header and routes the response back to the caller.

// Target agent — no special code needed
pharmacologyAgent.onMessage(async (message, sender) => {
    const s = pharmacologyAgent.streamer(message)
    s.stream(generateDrugInteractionReport(message.content))
    await s.finish()
    // Response automatically routed back to the calling agent
})

Target agent handler — with mutual understanding. agent.call() works best when the two agents have a mutual understanding. The target agent knows it will be called by specific agents and has a dedicated handler optimized for those calls — structured input, structured output, no conversational overhead. While agent.call() works with any hand