basion-ai-sdk
v0.9.0
TypeScript SDK for building AI agents in the Basion AI platform
Basion Agent SDK (TypeScript)
TypeScript SDK for building AI agents on the Basion platform. Agents register themselves, receive messages through Kafka, and stream responses back to users in real time.
Table of Contents
- Installation
- Quick Start
- Configuration
- Agent Registration
- Message Handling
- Message
- Streaming Responses
- Conversation History
- Memory V2 (mem0)
- Attachments
- Structural Streaming
- Knowledge Graph
- Agent Inventory
- Agent-Initiated (Proactive) Conversations
- Inter-Agent Communication
- Remote Logging (Loki)
- Connection Management
- Extensions
- Full Example
- Architecture
- Message Properties Reference
Installation
npm install basion-ai-sdk
Quick Start
import { BasionAgentApp, Message } from 'basion-ai-sdk'
const app = new BasionAgentApp({
gatewayUrl: process.env.GATEWAY_URL!,
apiKey: process.env.GATEWAY_API_KEY!,
})
const agent = await app.registerMe({
name: 'rare-disease-assistant',
about: 'Answers questions about rare diseases, symptoms, and treatments',
document: 'Rare disease specialist. Abilities: disease lookup, symptom analysis, genetic variant interpretation, finding similar conditions. Keywords: rare disease, genetics, orphan disease, diagnosis.',
})
agent.onMessage(async (message: Message, sender: string) => {
const history = await message.conversation!.getHistory({ limit: 10 })
const s = agent.streamer(message)
s.stream(`You asked: ${message.content}\n\n`)
s.stream('Let me look into that for you...')
await s.finish()
})
await app.run()
That's all you need. When you call app.run(), the SDK registers your agent with the gateway, subscribes to {agent-name}.inbox on Kafka, starts a heartbeat loop, and begins consuming messages. Your onMessage handler fires for each incoming message.
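The quick start reads GATEWAY_URL and GATEWAY_API_KEY with non-null assertions (`!`), which silently passes an empty value through if the variable is unset. A small guard that fails fast is a common alternative — the helper below is illustrative, not part of the SDK:

```typescript
// Illustrative helper (not part of basion-ai-sdk): fail fast with a clear
// error instead of forwarding an undefined value via a non-null assertion.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Usage in place of the non-null assertions:
// const app = new BasionAgentApp({
//   gatewayUrl: requireEnv('GATEWAY_URL'),
//   apiKey: requireEnv('GATEWAY_API_KEY'),
// })
```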
Configuration
const app = new BasionAgentApp({
gatewayUrl: 'agent-gateway:8080', // Required
apiKey: 'your-api-key', // Required
secure: false, // TLS (default: false)
heartbeatInterval: 60, // Seconds (default: 60)
maxConcurrentTasks: 100, // Concurrent handlers (default: 100)
// Auto-reconnection
reconnection: {
maxRetries: 10,
initialDelayMs: 1000,
maxDelayMs: 30000,
backoffMultiplier: 2,
autoReconnect: true,
},
// Remote logging (Loki)
enableRemoteLogging: false,
remoteLogLevel: 'info', // 'debug' | 'info' | 'warn' | 'error'
remoteLogBatchSize: 100,
remoteLogFlushInterval: 5.0,
})
Agent Registration
const agent = await app.registerMe({
name: 'rare-disease-assistant', // Unique name (kebab-case, used as Kafka topic)
about: 'Rare disease medical assistant', // Short description for agent selection (max 500 chars)
document: `Rare disease specialist. Abilities: disease lookup, symptom analysis,
genetic variant interpretation, finding similar conditions, drug info.
Keywords: rare disease, genetics, orphan disease, diagnosis, phenotype, genotype.`,
representationName: 'Dr. Assistant', // Display name (defaults to name)
version: '1.0.0', // Version string
lifecycleStage: 'public', // 'dev' | 'test' | 'private-preview' | 'public-preview' | 'public'
welcomeMessage: 'Hello! I specialize in rare diseases. How can I help?',
detailedDescription: 'A comprehensive rare disease assistant...',
baseUrl: 'http://...', // Base URL for artifacts (optional)
metadata: { specialty: 'rare-diseases' }, // Free-form metadata (optional)
categoryNames: ['my-body', 'medical'], // Categories in kebab-case (auto-created)
prompts: [ // Suggested prompts shown to users
{ label: 'Huntington', prompt: 'What is Huntington disease?' },
{ label: 'Similar diseases', prompt: 'Find diseases similar to Cystic Fibrosis' },
],
waitlist: false, // Not publicly available yet (optional)
relatedPages: [ // Related pages (optional)
{ name: 'Resources', endpoint: '/resources' }
],
forceUpdate: false, // Bypass content hash check (optional)
})
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | string | required | Unique agent name in kebab-case (used for Kafka routing) |
| about | string | required | Short description (max 500 characters, used for agent selection) |
| document | string | required | Used by the router to match user messages to your agent. Include keywords describing abilities and key capabilities so the router can properly route to your agent |
| representationName | string | name | Human-readable display name |
| version | string | undefined | Version string (e.g., "1.0.0"). Included in content hash — changing version triggers re-registration |
| lifecycleStage | string | 'dev' | One of: dev, test, private-preview, public-preview, public |
| welcomeMessage | string | undefined | Welcome text shown when a user starts a conversation |
| detailedDescription | string | undefined | Extended description beyond about |
| baseUrl | string | undefined | Base URL for agent's frontend service (used for iframe artifact URLs) |
| metadata | object | undefined | Free-form metadata object |
| categoryNames | string[] | undefined | Categories in kebab-case (auto-created if they don't exist) |
| prompts | Prompt[] | undefined | Array of { label, prompt } objects shown to users |
| waitlist | boolean | undefined | If true, agent is not publicly available |
| relatedPages | RelatedPage[] | undefined | Pages with name and endpoint (supports {conversation_id} placeholder) |
| forceUpdate | boolean | undefined | Bypass content hash check and always update |
Message Handling
Basic Handler
agent.onMessage(async (message: Message, sender: string) => {
const s = agent.streamer(message)
s.stream('Hello! How can I help?')
await s.finish()
})
Sender Filtering
// Handle messages from users only
agent.onMessage(async (message, sender) => {
// ...
}, { senders: ['user'] })
// Handle messages from a specific agent
agent.onMessage(async (message, sender) => {
// ...
}, { senders: ['triage-agent'] })
// Exclude a specific sender
agent.onMessage(async (message, sender) => {
// ...
}, { senders: ['~notification-agent'] })
Designing Handlers for Agents vs Users
When your agent is called by other agents via agent.call(), the calling agent expects a focused, structured response — not a full conversational flow. Use senders filtering to serve both audiences from the same agent.
// ---- Pharmacology agent with separate user and agent interfaces ----
// User handler — full conversational flow with history, memory, thinking UI
pharmacologyAgent.onMessage(async (message, sender) => {
const history = await message.conversation!.getHistory({ limit: 10 })
const mem = message.memoryV2!
await mem.ingest('user', message.content)
const s = pharmacologyAgent.streamer(message)
const thinking = new TextBlock()
s.streamBy(thinking).setVariant('thinking')
s.streamBy(thinking).streamTitle('Researching medication...')
// ... full conversational response with context ...
s.streamBy(thinking).done()
s.stream(generateDetailedDrugInfo(message.content, history))
await s.finish()
}, { senders: ['user'] })
// Agent handler — focused, structured response for agent.call()
pharmacologyAgent.onMessage(async (message, sender) => {
const s = pharmacologyAgent.streamer(message)
// No history, no memory, no thinking UI — just the facts
s.stream(generateDrugInteractionReport(message.content))
await s.finish()
}, { senders: ['coordinator-agent', 'care-planning-agent'] })
Error Handling
If a handler throws, the SDK automatically sends an error message to the user. Customize or disable this:
agent.errorMessageTemplate = 'Sorry, something went wrong. Please try again.'
agent.sendErrorResponses = false // Disable auto error responses
Message
The message object provides access to content, conversation history, memory, and attachments.
agent.onMessage(async (message: Message, sender: string) => {
message.content // Message text
message.conversationId // Conversation UUID
message.userId // User UUID
message.metadata // Optional metadata from frontend
message.schema // Optional JSON schema (for form responses)
message.conversation // Conversation helper (history, metadata)
message.memoryV2 // mem0 memory (ingest + search)
message.memory // Deprecated memory (use memoryV2)
message.attachments // AttachmentInfo[]
// Inter-agent communication
message.handOff('agent-name') // Forward to another agent
message.handOff('agent-name', 'content') // Forward with custom content
})
Streaming Responses
Basic Streaming
agent.onMessage(async (message, sender) => {
const s = agent.streamer(message)
s.stream('Looking up information on Huntington disease...\n\n')
s.stream('Huntington disease is a progressive neurodegenerative disorder...')
await s.finish()
})
s.write(content) is available as an alias for s.stream(content).
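When a response is produced all at once (for example, from a non-streaming model call), you can still deliver it progressively by splitting it into pieces before passing each one to s.stream(). The helper below is a sketch, not part of the SDK; the chunk size is arbitrary:

```typescript
// Illustrative (not part of basion-ai-sdk): split text into word-bounded
// chunks so each piece can be fed to s.stream() for progressive rendering.
function toChunks(text: string, maxLen = 40): string[] {
  const chunks: string[] = []
  let current = ''
  for (const word of text.split(' ')) {
    if (current && current.length + 1 + word.length > maxLen) {
      chunks.push(current)
      current = word
    } else {
      current = current ? `${current} ${word}` : word
    }
  }
  if (current) chunks.push(current)
  return chunks
}

// Inside a handler:
// for (const chunk of toChunks(longAnswer)) s.stream(chunk + ' ')
```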
Auto-Dispose (TypeScript 5.2+)
agent.onMessage(async (message, sender) => {
await using s = agent.streamer(message)
s.stream('Response...')
// finish() called automatically when s goes out of scope
})
Streamer Options
// Set awaiting — next user reply routes back to this agent
const s = agent.streamer(message, { awaiting: true })
When to Use awaiting: true
Use awaiting: true when you want to keep the user talking to your agent and bypass the router for the next message. This is powerful but requires care — you need to release the conversation at some point, or the user gets stuck with your agent forever.
Scenario: Multi-turn data collection. A diagnostic agent asks the user a series of questions. Each reply must come back to the same agent, not get rerouted by the router to a different specialist.
diagnosticAgent.onMessage(async (message, sender) => {
const history = await message.conversation!.getAgentHistory({ limit: 10 })
if (history.length === 0) {
// First interaction — ask the first question
const s = diagnosticAgent.streamer(message, { awaiting: true })
s.stream("I'll help assess your symptoms. How long have you been experiencing this?")
await s.finish()
return
}
if (history.length < 4) {
// Still collecting — keep awaiting
const s = diagnosticAgent.streamer(message, { awaiting: true })
s.stream(askNextQuestion(history, message.content))
await s.finish()
return
}
// Collection done — respond WITHOUT awaiting to release back to router
const s = diagnosticAgent.streamer(message)
s.stream(generateAssessment(history, message.content))
await s.finish()
})
Scenario: Awaiting + hand-off for off-topic messages. You hold the conversation with awaiting: true, but if the user asks something unrelated, you hand off to the guide agent instead of trying to answer it yourself.
nutritionAgent.onMessage(async (message, sender) => {
if (!isNutritionRelated(message.content)) {
// Not our topic — hand off to the guide agent who can reroute properly
message.handOff('guide-agent', message.content)
return
}
// On-topic — keep the conversation
const s = nutritionAgent.streamer(message, { awaiting: true })
s.stream(generateNutritionAdvice(message.content))
await s.finish()
Tip: Always have an exit path. If you set awaiting: true indefinitely without a way out, the user cannot reach other agents. Common exits: (1) stop setting awaiting after your task is done, (2) hand off to guide-agent when the topic doesn't match yours, or (3) use the agent inventory to find a better agent and hand off.
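Exit option (3) can combine the agent inventory (see the Agent Inventory section) with a hand-off. The scoring below is a deliberately naive keyword overlap against each agent's routing document — purely illustrative; a real implementation might use embeddings or the router itself:

```typescript
// Illustrative exit-path helper: pick the inventory agent whose routing
// document best matches the user's message. Scoring here is a naive
// keyword-overlap sketch, not the platform's actual routing logic.
type AgentSummary = { name: string; document: string }

function findBetterAgent(agents: AgentSummary[], text: string): string | null {
  const words = text.toLowerCase().split(/\s+/).filter(w => w.length > 3)
  let best: { name: string; score: number } | null = null
  for (const a of agents) {
    const doc = a.document.toLowerCase()
    const score = words.filter(w => doc.includes(w)).length
    if (score > 0 && (best === null || score > best.score)) {
      best = { name: a.name, score }
    }
  }
  return best ? best.name : null
}

// Inside a handler:
// const agents = await agent.tools.agentInventory.getActiveAgents()
// const better = findBetterAgent(agents, message.content)
// if (better) message.handOff(better, message.content)
```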
Non-Persisted Content
Chunks that appear in real-time but don't get saved to conversation history:
s.stream('Searching knowledge graph...', { persist: false, eventType: 'thinking' })
s.stream('Here are the results:') // This gets persisted
Message Metadata
Attach metadata to the response message:
s.setMessageMetadata({ source: 'knowledge_graph', confidence: 0.92 })
s.stream('Based on the knowledge graph, ...')
await s.finish()
Generative UI — Forms
Type-safe forms with a builder pattern. The frontend renders a form that replaces the chat input, and the user's submission arrives as JSON in the next message. Parse submissions with full type inference.
Form
import { Form, text, number, select, multiSelect, slider, switchField, option } from 'basion-ai-sdk'
const symptomForm = Form.create({
name: text({ label: 'Full Name', placeholder: 'John Doe' }),
severity: slider({ label: 'Pain Level', min: 1, max: 10, step: 1 }),
location: select({
label: 'Pain Location',
options: [
option('head', 'Head'),
option('chest', 'Chest'),
option('abdomen', 'Abdomen'),
option('back', 'Back'),
],
allowCustom: true,
}),
symptoms: multiSelect({
label: 'Other Symptoms',
options: [
option('headache', 'Headache'),
option('nausea', 'Nausea'),
option('fatigue', 'Fatigue'),
],
}),
chronic: switchField({ label: 'Is this chronic?' }),
})
.title('Symptom Report')
.description('Please describe your current symptoms')
.submitLabel('Submit Report')
.allowCancel() // Show a cancel button
.cancelLabel('Dismiss') // Customize cancel button text
agent.onMessage(async (message, sender) => {
// Check if this is a form submission
if (message.schema) {
const data = symptomForm.parse(message)
// data is fully typed: { name: string, severity: number, location: string, symptoms: string[], chronic: boolean }
const result = symptomForm.validate(data)
if (!result.isValid) {
const s = agent.streamer(message)
s.stream('Validation errors:\n')
for (const [field, error] of Object.entries(result.errors)) {
s.stream(`- ${field}: ${error}\n`)
}
await s.finish()
return
}
const s = agent.streamer(message)
s.stream(`Thank you, ${data.name}. Pain level: ${data.severity}/10.`)
await s.finish()
return
}
// Send the form
const s = agent.streamer(message, { awaiting: true })
s.setResponseSchema(symptomForm)
s.stream('Please fill out the symptom report:')
await s.finish()
})
Field Types
| Factory | Parses To | Key Options |
|---|---|---|
| text(config) | string | placeholder, minLength, maxLength, multiline |
| number(config) | number | min, max, step |
| select(config) | string | options, placeholder, allowCustom, isSearchable |
| multiSelect(config) | string[] | options, minSelections, maxSelections, allowCustom, isSearchable |
| checkbox(config) | boolean | default |
| checkboxGroup(config) | string[] | options, minSelections, maxSelections |
| switchField(config) | boolean | default |
| slider(config) | number | min, max, step, default |
| datePicker(config) | string | minDate, maxDate, format |
| hidden(config) | unknown | value |
| file(config) | AttachmentInfo \| null | accept, maxSize |
All fields accept label (required) and required (default: true, except hidden which defaults to false).
Use option(value, label) to create options for select, multiSelect, and checkboxGroup.
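The factories not shown in the earlier example compose the same way. The sketch below uses datePicker, checkbox, file, and an optional field; the field names, labels, and option values (such as the accept string) are illustrative assumptions, not prescribed by the SDK:

```typescript
import { Form, text, checkbox, datePicker, file } from 'basion-ai-sdk'

// Illustrative form built from factories in the table above.
// Field names, labels, and the accept value are assumptions.
const followUpForm = Form.create({
  visitDate: datePicker({ label: 'Last Visit' }),
  consent: checkbox({ label: 'I consent to sharing my records', default: false }),
  notes: text({ label: 'Notes', multiline: true, required: false }), // optional field
  labReport: file({ label: 'Lab Report', accept: 'application/pdf', required: false }),
})
  .title('Follow-up Details')
  .submitLabel('Send')

// parse() infers: { visitDate: string, consent: boolean,
//                   notes: string, labReport: AttachmentInfo | null }
```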
MultiStepForm
A wizard-style form with multiple steps. Fields are flattened across steps — field names must be unique.
import { MultiStepForm, step, text, number, slider, datePicker } from 'basion-ai-sdk'
const intakeForm = MultiStepForm.create({
demographics: step({
label: 'Basic Info',
description: 'Tell us about yourself',
fields: {
name: text({ label: 'Full Name' }),
dob: datePicker({ label: 'Date of Birth' }),
},
}),
assessment: step({
label: 'Pain Assessment',
fields: {
painLevel: slider({ label: 'Pain Level', min: 0, max: 10 }),
painLocation: text({ label: 'Where does it hurt?' }),
},
}),
})
.title('Patient Intake')
.submitLabel('Complete Intake')
// Parse all fields at once (flattened):
const data = intakeForm.parse(message)
data.name // string
data.dob // string
data.painLevel // number
Confirmation
A yes/no confirmation card with no input fields.
import { confirmation } from 'basion-ai-sdk'
const deleteConfirm = confirmation({
title: 'Delete Account',
message: 'Are you sure you want to delete your account?',
description: 'This action cannot be undone.',
confirmLabel: 'Delete',
cancelLabel: 'Keep Account',
variant: 'destructive', // 'default' | 'success' | 'warning' | 'error' | 'info' | 'destructive'
})
// Send:
s.setResponseSchema(deleteConfirm)
// Parse:
const result = deleteConfirm.parse(message)
result.confirmed // boolean
Raw JSON Schema
You can also pass raw JSON Schema objects directly to setResponseSchema():
s.setResponseSchema({
type: 'object',
title: 'Quick Feedback',
properties: {
rating: { type: 'integer', minimum: 1, maximum: 5, title: 'Rating' },
comment: { type: 'string', title: 'Comment' },
},
required: ['rating'],
})
Form Cancellation
When a form has allowCancel(), the user can dismiss it and type a plain text message instead. The provider detects this automatically: if a pending_response_schema exists but the message content is not valid JSON, it marks the message with formCancelled: true in metadata.
agent.onMessage(async (message, sender) => {
if (message.schema) {
if (message.formCancelled) {
// User cancelled the form and typed plain text
const s = agent.streamer(message)
s.stream('No problem! How can I help you?')
await s.finish()
return
}
// Normal form submission — parse and validate
const data = symptomForm.parse(message)
const result = symptomForm.validate(data)
if (!result.isValid) {
const s = agent.streamer(message)
s.stream('Some fields are invalid:\n')
for (const [field, error] of Object.entries(result.errors)) {
s.stream(`- ${field}: ${error}\n`)
}
await s.finish()
return
}
// ... handle valid data
}
})
Generative UI — Message Components
Rich UI components persisted as message content. Sent via generateUi() on the streamer.
Components are mutually exclusive with stream() — you cannot mix text streaming and generateUi() in the same message. However, structural events (Stepper, TextBlock) can be used alongside generateUi().
Card
Structured information display with optional sections, image, and button.
import { Card } from 'basion-ai-sdk'
agent.onMessage(async (message, sender) => {
const s = agent.streamer(message)
s.generateUi(new Card({
title: 'Patient Summary',
body: "Overview of the patient's current status and recent lab results.",
variant: 'info', // 'default' | 'success' | 'warning' | 'error' | 'info'
sections: [
{ label: 'Name', value: 'John Doe' },
{ label: 'Age', value: '42' },
{ label: 'Condition', value: 'Huntington Disease' },
],
image: 'https://example.com/patient-chart.png',
button: { label: 'View Full Report', url: 'https://example.com/report' },
}))
await s.finish()
})
Accordion
Collapsible sections for FAQ-style or detailed content.
import { Accordion } from 'basion-ai-sdk'
s.generateUi(new Accordion({
title: 'Frequently Asked Questions',
items: [
{ title: 'What is Huntington Disease?', body: 'A progressive neurodegenerative disorder...' },
{ title: 'What are the symptoms?', body: 'Motor symptoms include chorea, dystonia...' },
{ title: 'How is it diagnosed?', body: 'Genetic testing can confirm the diagnosis...' },
],
}))
await s.finish()
Multiple Components
You can call generateUi() multiple times to send multiple components in a single message:
s.generateUi(new Card({ title: 'Summary', body: '...', variant: 'info' }))
s.generateUi(new Accordion({ title: 'Details', items: [...] }))
await s.finish()
Combining with Structural Events
Structural events (Stepper, TextBlock) work alongside genui components since they are non-persisted:
const stepper = new Stepper({ steps: ['Search', 'Analyze', 'Report'] })
s.streamBy(stepper).startStep(0)
// ... do work ...
s.streamBy(stepper).completeStep(0)
s.streamBy(stepper).done()
// Then send genui component as the persisted content
s.generateUi(new Card({ title: 'Results', body: '...', variant: 'success' }))
await s.finish()
Conversation History
agent.onMessage(async (message, sender) => {
const conv = message.conversation!
// All messages
const history = await conv.getHistory({ limit: 20 })
// Filter by role
const userMsgs = await conv.getUserMessages({ limit: 20 })
const assistantMsgs = await conv.getAssistantMessages({ limit: 20 })
// Messages to/from this agent only
const agentHistory = await conv.getAgentHistory({ limit: 10 })
const agentHistory2 = await conv.getAgentHistory({ agentName: 'triage-agent', limit: 10 })
// Shortcut for recent messages
const last5 = await conv.getLastMessages(5)
// Each message is a ConversationMessage
for (const msg of history) {
msg.content // string
msg.role // 'user' | 'assistant' | 'system'
msg.from // sender name
msg.createdAt // Date
msg.isUser() // boolean
msg.isAssistant() // boolean
}
})
Memory V2 (mem0)
Long-term memory powered by mem0.ai. Ingest messages and semantically search across a user's history.
agent.onMessage(async (message, sender) => {
const mem = message.memoryV2!
// Ingest the user's message
await mem.ingest('user', message.content)
// Search for relevant past context
const results = await mem.search('diagnosis history')
for (const r of results) {
const memoryText = (r as any).memory // Extracted memory text
}
const s = agent.streamer(message)
if (results.length > 0) {
s.stream('Based on what I remember:\n')
for (const r of results) {
s.stream(`- ${(r as any).memory}\n`)
}
s.stream('\n')
}
s.stream(`Regarding your question: ${message.content}`)
await s.finish()
// Ingest the assistant's response too
await mem.ingest('assistant', '...')
})
| Method | Description |
|---|---|
| ingest(role, content) | Ingest a message. Role: 'user', 'assistant', or 'system'. |
| search(query) | Semantic search across the user's memories. Returns an array of results. |
Attachments
Download and process file attachments (images, PDFs, etc.).
import { isImageAttachment, isPdfAttachment } from 'basion-ai-sdk'
agent.onMessage(async (message, sender) => {
if (!message.hasAttachments()) return
const count = message.getAttachmentCount()
const s = agent.streamer(message)
s.stream(`Received ${count} file(s).\n\n`)
for (let i = 0; i < count; i++) {
const att = message.attachments[i]
s.stream(`**${att.filename}** (${att.contentType}, ${att.size} bytes)\n`)
if (isImageAttachment(att)) {
const base64 = await message.getAttachmentBase64At(i)
// Send to vision model...
} else if (isPdfAttachment(att)) {
const bytes = await message.getAttachmentBytesAt(i)
// Parse PDF...
}
}
// Or download everything at once
const allBytes = await message.getAllAttachmentBytes()
const allBase64 = await message.getAllAttachmentBase64()
await s.finish()
})Attachment Methods
| Method | Returns | Description |
|---|---|---|
| hasAttachments() | boolean | Whether the message has any attachments |
| getAttachmentCount() | number | Number of attachments |
| getAttachmentBytes() | Promise<Buffer> | Download first attachment as Buffer |
| getAttachmentBase64() | Promise<string> | Download first attachment as base64 |
| getAttachmentBytesAt(i) | Promise<Buffer> | Download attachment at index i |
| getAttachmentBase64At(i) | Promise<string> | Download attachment at index i as base64 |
| getAllAttachmentBytes() | Promise<Buffer[]> | Download all attachments |
| getAllAttachmentBase64() | Promise<string[]> | Download all as base64 |
AttachmentInfo Properties
| Property | Type | Example |
|---|---|---|
| filename | string | "genetic_report.pdf" |
| contentType | string | "application/pdf" |
| size | number | 524288 |
| url | string | Download URL |
Type Guards
import { isImageAttachment, isPdfAttachment } from 'basion-ai-sdk'
isImageAttachment(att) // true for image/* MIME types
isPdfAttachment(att) // true for application/pdf
Structural Streaming
Rich UI components streamed alongside text. Bind a structural component to the streamer with streamBy().
Artifact
Files, images, or embeds with generation progress. Artifact data is persisted to the database.
import { Artifact } from 'basion-ai-sdk'
agent.onMessage(async (message, sender) => {
const s = agent.streamer(message)
const artifact = new Artifact()
// Show progress
s.streamBy(artifact).generating('Generating genetic pathway diagram...', 0.3)
// ... do work ...
s.streamBy(artifact).generating('Rendering...', 0.8)
// Complete with result
s.streamBy(artifact).done({
url: 'https://example.com/pathway-diagram.png',
type: 'image',
title: 'HTT Gene Pathway',
description: 'Huntingtin protein interaction network',
metadata: { width: 1200, height: 800 },
})
// Or signal an error
// s.streamBy(artifact).error('Failed to generate diagram')
s.stream("Here's the pathway diagram for the HTT gene.")
await s.finish()
})
Artifact types: image, iframe, document, video, audio, code, link, file
Surface
Interactive embedded components (iframes, widgets).
import { Surface } from 'basion-ai-sdk'
const surface = new Surface()
s.streamBy(surface).generating('Loading appointment scheduler...')
s.streamBy(surface).done({
url: 'https://cal.com/embed/dr-smith',
type: 'iframe',
title: 'Schedule Genetic Counseling',
description: 'Book a session with a genetic counselor',
})
s.stream('You can schedule your genetic counseling session above.')
TextBlock
Collapsible text blocks with streaming title/body and visual variants. TextBlock events are not persisted.
import { TextBlock } from 'basion-ai-sdk'
const block = new TextBlock()
// Set visual variant
s.streamBy(block).setVariant('thinking')
// Stream title (appends)
s.streamBy(block).streamTitle('Analyzing ')
s.streamBy(block).streamTitle('symptoms...')
// Stream body (appends)
s.streamBy(block).streamBody('Checking symptom database...\n')
s.streamBy(block).streamBody('Cross-referencing with HPO ontology...\n')
s.streamBy(block).streamBody('Matching against known phenotypes...\n')
// Replace title/body entirely
s.streamBy(block).updateTitle('Analysis Complete')
s.streamBy(block).updateBody('Found 3 matching conditions.')
// Mark as done
s.streamBy(block).done()
s.stream('Based on the symptoms, here are possible conditions...')
Variants: thinking, note, warning, error, success
Stepper
Multi-step progress indicators. Stepper events are not persisted.
import { Stepper } from 'basion-ai-sdk'
const stepper = new Stepper({
steps: ['Search diseases', 'Analyze phenotypes', 'Find similar conditions', 'Generate report'],
})
s.streamBy(stepper).startStep(0)
const diseases = await kg.searchDiseases({ name: 'Huntington' })
s.streamBy(stepper).completeStep(0)
s.streamBy(stepper).startStep(1)
const phenotypes = await kg.searchPhenotypes({ name: 'chorea' })
s.streamBy(stepper).completeStep(1)
s.streamBy(stepper).startStep(2)
const similar = await kg.findSimilarDiseases('Huntington Disease')
s.streamBy(stepper).completeStep(2)
// Add a step dynamically
s.streamBy(stepper).addStep('Cross-reference')
s.streamBy(stepper).startStep(4)
s.streamBy(stepper).updateStepLabel(4, 'Cross-reference (final)')
s.streamBy(stepper).completeStep(4)
s.streamBy(stepper).startStep(3)
// Or mark a step as failed
// s.streamBy(stepper).failStep(3, 'Report generation timed out')
s.streamBy(stepper).completeStep(3)
s.streamBy(stepper).done()
s.stream("Here's your rare disease report...")
Knowledge Graph
Query biomedical knowledge graphs for diseases, proteins, phenotypes, drugs, and pathways. Accessed via agent.tools.knowledgeGraph.
agent.onMessage(async (message, sender) => {
const kg = agent.tools.knowledgeGraph
// Search diseases
const diseases = await kg.searchDiseases({ name: 'Huntington', limit: 5 })
const disease = await kg.getDisease(123)
// Search proteins/genes
const proteins = await kg.searchProteins({ symbol: 'HTT', limit: 10 })
// Search phenotypes (HPO terms)
const phenotypes = await kg.searchPhenotypes({ name: 'chorea', limit: 10 })
const phenotypes2 = await kg.searchPhenotypes({ hpoId: 'HP:0002072' })
// Search drugs
const drugs = await kg.searchDrugs({ name: 'tetrabenazine', limit: 5 })
// Search pathways
const pathways = await kg.searchPathways({ name: 'apoptosis', limit: 5 })
// Find similar diseases by shared phenotypes
const similar = await kg.findSimilarDiseases('Huntington Disease', 10)
for (const sim of similar) {
sim.diseaseName // 'Spinocerebellar Ataxia Type 17'
sim.similarityScore // 0.85
sim.sharedCount // 12
}
// Find similar diseases by shared genes
const byGenes = await kg.findSimilarDiseasesByGenes('Huntington Disease', 10)
// Get all connections for an entity
const edges = await kg.getEntityNetwork('HTT', 'protein')
for (const e of edges) {
e.sourceId, e.sourceType
e.targetId, e.targetType
e.relationType
}
// k-hop graph traversal (BFS subgraph)
const subgraph = await kg.kHopTraversal('HTT', 'protein', 2, 100)
// Shortest path between two entities
const path = await kg.findShortestPath(
'HTT', 'protein',
'Huntington Disease', 'disease',
5, // maxHops
)
for (const step of path) {
step.nodeName, step.nodeType, step.relation
}
})
Entity types: protein, phenotype, disease, pathway, drug, molecular_function, cellular_component, biological_process
Agent Inventory
Query the AI Inventory service to discover active agents and their capabilities. Accessed via agent.tools.agentInventory.
agent.onMessage(async (message, sender) => {
const inv = agent.tools.agentInventory
// Get all active agents
const agents = await inv.getActiveAgents()
for (const a of agents) {
a.id // Agent UUID
a.name // 'rare-disease-assistant'
a.representationName // 'Dr. Assistant'
a.about // Short description
a.document // Full documentation
a.examplePrompts // ['What is Huntington disease?', ...]
a.categories // [{ id: '...', name: 'medical' }]
a.tags // [{ id: '...', name: 'rare-disease' }]
}
// Get agents accessible to a specific user (filtered by role/permissions)
const userAgents = await inv.getUserAgents('user-uuid')
})
| Method | Returns | Description |
|---|---|---|
| getActiveAgents() | Promise<AgentInfo[]> | All agents with status=active and lifeStatus=active |
| getUserAgents(userId) | Promise<AgentInfo[]> | Active agents accessible to a specific user |
Agent-Initiated (Proactive) Conversations
Agents can proactively start new conversations with users — without waiting for them to message first. Use cases: health check-ins, medication reminders, appointment follow-ups, new research alerts.
How It Works
- Agent creates a conversation via the conversation store API (isNew: true, currentRoute, and lockedBy set atomically)
- Agent streams the first message through the normal Kafka pipeline (router → provider → Centrifugo → user)
- Provider persists the assistant message and unlocks the conversation on done=true
- User sees a new bold conversation in their sidebar (isNew flag)
- If awaiting: true, the user's reply routes back to the agent's onMessage handler
Basic Usage
agent.onMessage(async (message, sender) => {
// User replied to the proactive conversation
const s = agent.streamer(message)
s.stream('Thanks for responding to the check-in!')
await s.finish()
})
// Trigger from a scheduler, webhook, or API endpoint
async function sendWeeklyCheckIn(userId: string) {
const [convId, streamer] = await agent.startConversation({
userId,
title: 'Weekly Health Check-in',
awaiting: true,
})
streamer.stream('Hi! Time for your weekly check-in.\n\n')
streamer.stream('How have you been feeling this week?')
await streamer.finish()
}
With Response Schema
async function requestSymptomLog(userId: string) {
const [convId, streamer] = await agent.startConversation({
userId,
title: 'Daily Symptom Log',
awaiting: true,
responseSchema: {
type: 'object',
title: 'How are you feeling today?',
properties: {
painLevel: { type: 'integer', minimum: 0, maximum: 10, title: 'Pain Level' },
fatigue: { type: 'integer', minimum: 0, maximum: 10, title: 'Fatigue Level' },
notes: { type: 'string', title: 'Additional Notes' },
},
required: ['painLevel'],
},
})
streamer.stream('Please fill out today\'s symptom log:')
await streamer.finish()
}
With Metadata
const [convId, streamer] = await agent.startConversation({
userId,
title: 'New Research Alert',
awaiting: true,
messageMetadata: { alertType: 'research', paperId: 'PMC12345' },
metadata: { source: 'pubmed_monitor', triggeredAt: '2025-01-15T09:00:00Z' },
})
streamer.stream('A new paper about your condition was published today...')
await streamer.finish()
API Reference
async startConversation(options: {
userId: string // Target user UUID
title?: string // Sidebar title (default: "Agent-initiated conversation")
awaiting?: boolean // Route user reply back to this agent (default: false)
responseSchema?: Record<string, unknown> // JSON Schema for structured response
messageMetadata?: Record<string, unknown> // Metadata on the streamed message
metadata?: Record<string, unknown> // Metadata on the conversation itself
}): Promise<[string, Streamer]> // Returns [conversationId, streamer]
Inter-Agent Communication
Two patterns for agents to communicate with each other, each suited to different use cases.
Why Inter-Agent Communication?
In a rare disease platform, no single agent can cover everything. A patient might ask a general wellness agent about fatigue, but the underlying cause is Ehlers-Danlos syndrome — something only a genetics specialist would recognize. The general agent needs to either hand off the conversation to the specialist entirely, or call the specialist to get a quick answer and weave it into its own response.
Without inter-agent communication, every agent would need to be an expert in everything, or the user would have to manually switch between agents mid-conversation.
Overview
| Pattern | Method | Blocking? | Use Case |
|---|---|---|---|
| Hand-off | message.handOff(agent) | No | Fire-and-forget forward; your agent is done, another takes over |
| Call | agent.call(agent, convId, content) | Yes (await) | Request → response between agents; your agent stays in control |
Hand-Off (message.handOff)
Forward a message to another agent with a single Kafka message. No streamer, no done flag — the target agent's onMessage handler fires exactly once. The calling agent can optionally stream to the user before handing off.
Scenario: Out-of-expertise escalation. A patient asks the general wellness agent about joint hypermobility and chronic pain. The wellness agent recognizes this sounds like Ehlers-Danlos syndrome — far outside its expertise. It tells the user what's happening and hands off to the connective tissue specialist, who takes over the conversation entirely.
wellnessAgent.onMessage(async (message, sender) => {
const content = message.content.toLowerCase()
// Detect rare disease indicators outside our expertise
const rareIndicators = ['hypermobility', 'joint laxity', 'stretchy skin', 'ehlers-danlos']
if (rareIndicators.some(term => content.includes(term))) {
// Stream a brief explanation to the user before handing off
const s = wellnessAgent.streamer(message)
s.stream('Your symptoms suggest a connective tissue condition that requires specialist input.\n\n')
s.stream('I\'m connecting you with our rare disease specialist who can help.')
// Hand off — the specialist takes over the conversation
message.handOff('rare-disease-specialist',
`Patient reports: ${message.content}\n\n` +
'Wellness agent assessment: Symptoms consistent with possible Ehlers-Danlos syndrome. ' +
'Patient needs specialist evaluation for hypermobility spectrum disorders.'
)
return
}
// Normal wellness handling...
const s = wellnessAgent.streamer(message)
s.stream(generateWellnessResponse(message.content))
await s.finish()
})
Scenario: Agent has completed its task and knows the best next agent. A lab results agent finishes analyzing blood work and hands off to the treatment planning agent, because the natural next step is treatment recommendations — and the treatment agent is the right one for the job.
labResultsAgent.onMessage(async (message, sender) => {
const s = labResultsAgent.streamer(message)
const analysis = analyzeLabResults(message.content)
s.stream(`Here's your lab analysis:\n\n${analysis}\n\n`)
s.stream('Now let me connect you with our treatment planning specialist ' +
'to discuss next steps based on these results.')
// Our job is done — hand off to the treatment agent with context
message.handOff('treatment-planning-agent',
`Lab analysis complete. Results summary:\n${analysis}\n\n` +
`Patient's original message: ${message.content}`
)
})
When to hand off vs let the router decide: Use handOff when your agent has domain knowledge about which agent should come next. If you're unsure, hand off to guide-agent (or simply don't set awaiting and let the router handle the next message).
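The fallback described in the tip can be sketched as a simple keyword gate. The `matchesExpertise` helper, the nutrition keywords, and the agent names in the comments are illustrative assumptions, not part of the SDK:

```typescript
// Hypothetical helper: does the incoming message fall inside this agent's domain?
function matchesExpertise(content: string, keywords: string[]): boolean {
  const lower = content.toLowerCase()
  return keywords.some((k) => lower.includes(k))
}

// Inside a nutrition agent's handler (names and keywords are illustrative):
//   if (!matchesExpertise(message.content, ['diet', 'nutrition', 'vitamin'])) {
//     message.handOff('guide-agent') // omitting content forwards the original message
//     return
//   }
//   ...normal nutrition handling follows
```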
Scenario: Multi-step diagnostic pipeline. A general intake agent collects symptoms, hands off to a diagnostic agent, which may further hand off to a condition-specific agent.
intakeAgent.onMessage(async (message, sender) => {
const symptoms = extractSymptoms(message.content)
const s = intakeAgent.streamer(message)
s.stream('Thank you for describing your symptoms. Let me connect you with the right specialist.')
// Hand off with structured context for the diagnostic agent
message.handOff('diagnostic-agent',
`Extracted symptoms: ${symptoms.join(', ')}\n` +
`Original message: ${message.content}`
)
})
How it works:
- Produces a single Kafka message to router.inbox with the original message headers
- Router forwards to the target agent's inbox and updates currentRoute to the target agent
- Target agent's onMessage handler fires with the forwarded message
- The calling agent does not send done=true — the target agent is responsible for responding to the user
- The conversation lock is not stuck — the target agent responds to the user via the normal streamer flow, which sends done=true to user.inbox, and the provider unlocks the conversation
handOff(agentName: string, content?: string): void
| Parameter | Type | Default | Description |
|---|---|---|---|
| agentName | string | required | Target agent name |
| content | string | undefined | Override content. If omitted, forwards the original message.content |
Call (agent.call)
Synchronous agent-to-agent communication. Call another agent and await the response. The calling agent keeps conversation ownership — currentRoute is not changed.
Scenario: Cross-referencing with a specialist. A patient asks the care coordinator about drug interactions for their rare condition (Gaucher disease). The coordinator needs specific pharmacogenomics information but should remain the patient's primary contact. It calls the pharmacology agent behind the scenes, gets the answer, and incorporates it into its own response.
coordinatorAgent.onMessage(async (message, sender) => {
const s = coordinatorAgent.streamer(message)
s.stream('Let me check on that for you...\n\n')
try {
// Call the pharmacology agent — patient doesn't see this exchange
const drugInfo = await coordinatorAgent.call(
'pharmacology-agent',
message.conversationId,
`Patient has Gaucher disease (Type 1, on ERT). Question: ${message.content}`,
15000,
)
s.stream(`Here's what I found about your medication:\n\n${drugInfo}`)
} catch (error) {
const msg = error instanceof Error ? error.message : String(error)
if (msg.includes('timed out')) {
s.stream("I wasn't able to get the pharmacology details right now. " +
"Let me note this and follow up with you shortly.")
} else {
s.stream(`I encountered an issue checking that: ${msg}`)
}
}
await s.finish()
})
Scenario: Gathering information from multiple specialists. A rare disease coordinator needs input from both a genetics agent and a clinical trials agent to give the patient a complete answer.
coordinatorAgent.onMessage(async (message, sender) => {
const s = coordinatorAgent.streamer(message)
s.stream('Looking into this from multiple angles...\n\n')
// Call two specialists in parallel
const [geneticsResult, trialsResult] = await Promise.allSettled([
coordinatorAgent.call(
'genetics-agent', message.conversationId,
`Patient with suspected Wilson's disease. ${message.content}`,
),
coordinatorAgent.call(
'clinical-trials-agent', message.conversationId,
`Find active clinical trials for Wilson's disease relevant to: ${message.content}`,
),
])
if (geneticsResult.status === 'fulfilled') {
s.stream(`**Genetic perspective:**\n${geneticsResult.value}\n\n`)
}
if (trialsResult.status === 'fulfilled') {
s.stream(`**Clinical trials:**\n${trialsResult.value}\n\n`)
}
s.stream('Is there anything else you\'d like to know?')
await s.finish()
})
How it works:
- The calling agent produces a message to router.inbox with isCall: 'true' and a unique callId header
- Router forwards to the target agent's inbox — currentRoute is not updated (the caller keeps ownership)
- The target agent's onMessage handler fires — the streamer auto-detects isCall and routes the response back to the calling agent (not the user)
- The calling agent's message interceptor captures the response chunks, accumulates them, and resolves the call() promise on done=true
- call() returns the full accumulated response as a string
- Call messages are not persisted — they are transient, like internal tool calls. The conversationId is only a Kafka partition key. This keeps the conversation history clean for LLMs (no confusing consecutive assistant messages from different agents)
async call(
agentName: string,
conversationId: string,
content: string,
timeout: number = 30000,
): Promise<string>
| Parameter | Type | Default | Description |
|---|---|---|---|
| agentName | string | required | Target agent to call |
| conversationId | string | required | Conversation ID (used as Kafka partition key only — not persisted) |
| content | string | required | Text content to send |
| timeout | number | 30000 | Max milliseconds to wait for a response |
Returns: Promise<string> — the target agent's full response.
Throws: An error with "timed out" in the message if the target agent doesn't respond within the timeout.
Target agent handler — basic: The target agent doesn't need special handling — its normal onMessage handler fires. The SDK's streamer auto-detects the isCall header and routes the response back to the caller.
// Target agent — no special code needed
pharmacologyAgent.onMessage(async (message, sender) => {
const s = pharmacologyAgent.streamer(message)
s.stream(generateDrugInteractionReport(message.content))
await s.finish()
// Response automatically routed back to the calling agent
})
Target agent handler — with mutual understanding. agent.call() works best when the two agents have a mutual understanding. The target agent knows it will be called by specific agents and has a dedicated handler optimized for those calls — structured input, structured output, no conversational overhead. While agent.call() works with any handler, dedicated handlers make the interaction more reliable and efficient.
// ---- Calling agent (coordinator) ----
coordinatorAgent.onMessage(async (message, sender) => {
const s = coordinatorAgent.streamer(message)
// The gene-lookup-agent expects a specific input format
// and returns structured data — both sides know the contract
const geneData = await coordinatorAgent.call(
'gene-lookup-agent',
message.conversationId,
'LOOKUP gene_symbol=HTT disease=Huntington',
10000,
)
s.stream(`Here's what we know about the gene:\n\n${geneData}`)
await s.finish()
}, { senders: ['user'] })
// ---- Target agent (gene-lookup) ----
// Dedicated handler for coordinator calls — expects structured input,
// returns focused data. This handler is NOT designed for end users.
geneLookupAgent.onMessage(async (message, sender) => {
const parsed = parseLookupRequest(message.content) // Parse the structured input
const result = await queryKnowledgeGraph(parsed)
const s = geneLookupAgent.streamer(message)
s.stream(formatGeneReport(result))
await s.finish()
}, { senders: ['coordinator-agent'] })
// Separate handler for direct user messages — full conversational flow
geneLookupAgent.onMessage(async (message, sender) => {
const s = geneLookupAgent.streamer(message)
// Natural language processing, history, memory, etc.
s.stream(generateConversationalResponse(message.content))
await s.finish()
}, { senders: ['user'] })
Tip: You might be tempted to use agent.call() to implement your own routing — fetching the agent list from the inventory, picking the best one, and calling it. This is not recommended. The router already does this with access to the full agent registry and document matching. Use agent.call() for targeted, structured agent-to-agent interactions where both sides know each other, not as a replacement for routing.
Combining Patterns
Real-world agents often combine awaiting, handOff, and call in a single flow.
Scenario: A symptom checker that collects data (awaiting), consults a specialist (call), and escalates when needed (handOff).
symptomAgent.onMessage(async (message, sender) => {
const history = await message.conversation!.getAgentHistory({ limit: 20 })
const stage = determineStage(history)
if (stage === 'collecting') {
// Still gathering symptoms — keep the user talking to us
const s = symptomAgent.streamer(message, { awaiting: true })
s.stream(askNextSymptomQuestion(history, message.content))
await s.finish()
return
}
if (stage === 'analyzing') {
const s = symptomAgent.streamer(message)
s.stream('Let me analyze your symptoms...\n\n')
// Call the diagnostic agent for analysis — we stay in control
const assessment = await symptomAgent.call(
'diagnostic-agent',
message.conversationId,
`Symptoms collected: ${summarizeSymptoms(history)}`,
)
if (assessment.toLowerCase().includes('urgent') ||
assessment.toLowerCase().includes('specialist')) {
// Serious finding — hand off to the appropriate specialist
s.stream('Based on the assessment, I\'m connecting you with a specialist.\n\n')
message.handOff('rare-disease-specialist',
`Symptom assessment: ${assessment}\nFull history available in conversation.`
)
return
}
// Non-urgent — share results and release back to router
s.stream(`Here's what I found:\n\n${assessment}`)
await s.finish()
}
})
When to Use Which
| Scenario | Pattern | Example |
|---|---|---|
| Topic is outside your expertise — another agent should take over | handOff | Wellness agent detects Ehlers-Danlos symptoms, hands off to rare disease specialist |
| Your agent has completed its task and knows the best next agent | handOff | Lab results agent finishes analysis, hands off to treatment planning agent |
| User is talking to you with awaiting but topic doesn't match | handOff to guide-agent | Nutrition agent with awaiting: true gets a genetics question, hands off to guide |
| Need information from a specialist to complete your response | call | Care coordinator calls pharmacology agent about Gaucher disease drug interactions |
| Two agents have a mutual understanding and structured contract | call + senders handler | Coordinator calls gene-lookup with LOOKUP gene_symbol=HTT, gets structured data back |
| Gather input from multiple agents in parallel | call + Promise.allSettled | Coordinator calls genetics + clinical trials agents simultaneously for Wilson's disease |
| Multi-turn data collection — keep user talking to your agent | awaiting: true | Diagnostic agent asks a series of symptom questions before generating assessment |
| Multi-step pipeline — each agent processes and passes forward | handOff | Intake → diagnostic → condition-specific agent |
| Collect data (awaiting), consult specialist (call), escalate if needed (handOff) | Combined | Symptom checker: collect → call diagnostic → hand off to specialist if urgent |
| Fact-check or validate your response before sending to user | call | General agent calls medical-qa agent to verify a dosage recommendation |
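The last row of the table (fact-check before sending) can be sketched like this. The `medical-qa-agent` name and its `OK:`/`REVISE:` reply convention are assumptions for illustration, not an SDK contract:

```typescript
// Hypothetical verdict convention: the QA agent replies "OK: ..." when the
// draft passes; any other reply means the draft needs revision.
function needsRevision(verdict: string): boolean {
  return !verdict.trim().toUpperCase().startsWith('OK')
}

// Inside the general agent's handler (assuming `draft` was generated earlier):
//   const verdict = await generalAgent.call(
//     'medical-qa-agent', message.conversationId,
//     `VERIFY dosage claims in:\n${draft}`, 10000)
//   s.stream(needsRevision(verdict) ? verdict : draft)
//   await s.finish()
```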
Conversation Locking and Persistence
Hand-off does not cause stuck locks. When Agent A hands off to Agent B, the conversation lock persists correctly through the transition. Agent B responds to the user via the normal streamer flow, which sends done=true to user.inbox. The provider receives it, persists the message, and unlocks the conversation. The lock is never orphaned — it simply transfers naturally as Agent B takes over.
Call messages are not persisted. Agent-to-agent call messages are transient by design — they never reach user.inbox and the provider never sees them. The conversationId passed to agent.call() serves only as a Kafka partition key for ordering guarantees. This is intentional: if call messages were persisted, they would pollute the conversation history with confusing consecutive assistant messages from different agents (the call question and the call response) that the user never saw. The calling agent already incorporates the call result into its own streamed response, keeping the LLM's conversation history clean and coherent.
Remote Logging (Loki)
Send agent logs to Loki for centralized monitoring.
const app = new BasionAgentApp({
...,
enableRemoteLogging: true,
remoteLogLevel: 'info',
remoteLogBatchSize: 100,
remoteLogFlushInterval: 5.0,
})
const agent = await app.registerMe({ ... })
// Use the logger
app.logger?.info('Processing symptom query', { userId: message.userId })
app.logger?.warn('Slow response', { latencyMs: '5000' })
app.logger?.error('LLM call failed', { model: 'gpt-4o' })
app.logger?.debug('Detailed info', { step: 'parsing' })
Logs are batched, gzip-compressed, and flushed automatically. Extra labels are Record<string, string>.
Connection Management
The SDK auto-reconnects with exponential backoff. Monitor state changes:
const unsubscribe = app.onConnectionStateChange((state, error) => {
// state: 'disconnected' | 'connecting' | 'connected' | 'reconnecting'
console.log(`Connection: ${state}`, error?.message)
})
app.isConnected() // boolean
app.getConnectionState() // ConnectionState
// Force reconnect manually
await app.forceReconnect()
// Cleanup
unsubscribe()
After reconnection, agents are automatically re-initialized (handlers re-registered, heartbeats resumed).
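One practical use of onConnectionStateChange is alerting on prolonged outages. The tracker below is a sketch; the 60-second threshold and the log message are arbitrary choices, not SDK behavior:

```typescript
type ConnState = 'disconnected' | 'connecting' | 'connected' | 'reconnecting'

// Returns a function reporting how long (ms) the connection has been
// unhealthy; resets to 0 whenever the state returns to 'connected'.
function makeOutageTracker(): (state: ConnState, now: number) => number {
  let since: number | null = null
  return (state, now) => {
    if (state === 'connected') {
      since = null
      return 0
    }
    if (since === null) since = now
    return now - since
  }
}

// Wiring it up (assuming `app` from the example above):
//   const outageMs = makeOutageTracker()
//   app.onConnectionStateChange((state) => {
//     if (outageMs(state, Date.now()) > 60_000) {
//       app.logger?.error('Gateway unreachable for over 60s', { state })
//     }
//   })
```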
Extensions
LangGraph
Persist LangGraph state via the Conversation Store checkpoint API.
npm install @langchain/langgraph @langchain/langgraph-checkpoint @langchain/core
import { BasionAgentApp, Message } from 'basion-ai-sdk'
import { HTTPCheckpointSaver } from 'basion-ai-sdk/extensions/langgraph'
import { StateGraph, START, END, Annotation } from '@langchain/langgraph'
import { ChatOpenAI } from '@langchain/openai'
import { HumanMessage, BaseMessage } from '@langchain/core/messages'
const app = new BasionAgentApp({ gatewayUrl: '...', apiKey: '...' })
const checkpointer = new HTTPCheckpointSaver(app)
const StateAnnotation = Annotation.Root({
messages: Annotation<BaseMessage[]>({ reducer: (x, y) => x.concat(y) }),
})
const llm = new ChatOpenAI({ model: 'gpt-4o-mini' })
const graph = new StateGraph(StateAnnotation)
.addNode('respond', async (state) => ({
messages: [await llm.invoke(state.messages)],
}))
.addEdge(START, 'respond')
.addEdge('respond', END)
.compile({ checkpointer })
const agent = await app.registerMe({ name: 'lg-agent', about: '...', document: '...' })
agent.onMessage(async (message: Message) => {
const config = { configurable: { thread_id: message.conversationId } }
const result = await graph.invoke({ messages: [new HumanMessage(message.content)] }, config)
const s = agent.streamer(message)
s.stream(result.messages.at(-1)!.content as string)
await s.finish()
})
await app.run()
Vercel AI SDK
Persist message history for Vercel AI SDK.
npm install ai @ai-sdk/openai
import { BasionAgentApp, Message } from 'basion-ai-sdk'
import { VercelAIMessageStore } from 'basion-ai-sdk/extensions/vercel-ai'
import { streamText, CoreMessage } from 'ai'
import { openai } from '@ai-sdk/openai'
const app = new BasionAgentApp({ gatewayUrl: '...', apiKey: '...' })
const store = new VercelAIMessageStore(app)
const agent = await app.registerMe({
name: 'rare-disease-agent',
about: 'Rare disease specialist powered by GPT-4o',
document: 'Rare disease specialist. Abilities: disease lookup, symptom analysis, genetic variant interpretation. Keywords: rare disease, genetics, orphan disease, diagnosis.',
})
agent.onMessage(async (message: Message) => {
const history = await store.load(message.conversationId)
const messages: CoreMessage[] = [
{ role: 'system', content: 'You are a rare disease specialist.' },
...history,
{ role: 'user', content: message.content },
]
const s = agent.streamer(message)
const result = await streamText({
model: openai('gpt-4o'),
messages,
onChunk: ({ chunk }) => {
if (chunk.type === 'text-delta') s.stream(chunk.textDelta)
},
})
const fullText = await result.text // result.text is a Promise; it resolves once the stream completes
await s.finish()
await store.save(message.conversationId, [
...history,
{ role: 'user', content: message.content },
{ role: 'assistant', content: fullText },
])
})
await app.run()
VercelAIMessageStore methods: load(id), save(id, msgs), append(id, msgs), clear(id), getLastMessages(id, n).
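For long conversations you may prefer append plus getLastMessages over full load/save round-trips: persist only the new turns and fetch a bounded window for the prompt. The buildPrompt helper and the window size of 6 below are illustrative assumptions:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string }

// System prompt + the last `window` stored turns + the new user message
function buildPrompt(
  system: string,
  history: ChatMessage[],
  userText: string,
  window = 6,
): ChatMessage[] {
  return [
    { role: 'system', content: system },
    ...history.slice(-window),
    { role: 'user', content: userText },
  ]
}

// In the handler (assuming `store` and `message` from the example above):
//   const recent = await store.getLastMessages(message.conversationId, 6)
//   const messages = buildPrompt('You are a rare disease specialist.', recent, message.content)
//   ...stream the LLM response, then persist only the two new turns:
//   await store.append(message.conversationId, [
//     { role: 'user', content: message.content },
//     { role: 'assistant', content: reply },
//   ])
```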
Full Example: Rare Disease Assistant
A complete agent that uses memory, knowledge graph, attachments, and structural streaming.
import { BasionAgentApp, Message, Stepper, TextBlock, Form, text, slider, select, option, isImageAttachment, isPdfAttachment } from 'basion-ai-sdk'
const app = new BasionAgentApp({
gatewayUrl: process.env.GATEWAY_URL!,
apiKey: process.env.GATEWAY_API_KEY!,
enableRemoteLogging: true,
})
// Define forms
const symptomLogForm = Form.create({
painLevel: slider({ label: 'Pain Level', min: 0, max: 10 }),
location: select({
label: 'Pain Location',
options: [option('head', 'Head'), option('chest', 'Chest'), option('back', 'Back')],
allowCustom: true,
}),
}).title('How are you feeling today?')
const agent = await app.registerMe({
name: 'rare-disease-assistant',
about: 'Helps patients understand rare diseases, find similar conditions, and track symptoms',
document: `
Rare disease specialist agent.
Abilities: disease lookup, gene and phenotype search, symptom analysis,
genetic variant interpretation, finding similar conditions by shared symptoms
or genes, drug and treatment information, processing genetic test reports.
Keywords: rare disease, genetics, orphan disease, diagnosis, phenotype,
genotype, Huntington, Cystic Fibrosis, Marfan syndrome, biomedical,
knowledge graph, patient support.
`,
})
// Handle user messages
agent.onMessage(async (message: Message, sender: string) => {
const kg = agent.tools.knowledgeGraph
const mem = message.memoryV2
// Ingest the user's message into long-term memory
if (mem) {
await mem.ingest('user', message.content)
}
// Check for form submissions
if (message.schema) {
const data = symptomLogForm.parse(message)
const s = agent.streamer(message)
s.stream(`Thank you for logging your symptoms.\n\n`)
s.stream(`- Pain level: ${data.painLevel}/10\n`)
s.stream(`- Location: ${data.location}\n`)
await s.finish()
return
}
// Check for attachments (e.g., genetic test PDF)
if (message.hasAttachments()) {
const s = agent.streamer(message)
for (let i = 0; i < message.getAttachmentCount(); i++) {
const att = message.attachments[i]
if (isPdfAttachment(att)) {
const bytes = await message.getAttachmentBytesAt(i)
s.stream(`Processing **${att.filename}** (${att.size} bytes)...\n`)
// Parse and analyze the PDF...
} else if (isImageAttachment(att)) {
const base64 = await message.getAttachmentBase64At(i)
s.stream(`Received image **${att.filename}**.\n`)
}
}
await s.finish()
return
}
// Main message handling with knowledge graph
const s = agent.streamer(message)
// Show thinking process
const thinking = new TextBlock()
s.streamBy(thinking).setVariant('thinking')
s.streamBy(thinking).streamTitle('Analyzing query...')
// Recall relevant memories
if (mem) {
const memories = await mem.search(message.content)
if (memories.length > 0) {
s.streamBy(thinking).streamBody('Found relevant patient history.\n')
}
}
// Search for diseases mentioned in the query
const stepper = new Stepper({ steps: ['Search diseases', 'Find connections', 'Compile results'] })
s.streamBy(stepper).startStep(0)
const diseases = await kg.searchDiseases({ name: message.content, limit: 5 })
s.streamBy(stepper).completeStep(0)
if (diseases.length > 0) {
s.streamBy(stepper).startStep(1)
const diseaseName = (diseases[0] as any).name ?? ''
const similar = await kg.findSimilarDiseases(diseaseName, 5)
s.streamBy(stepper).completeStep(1)
s.streamBy(stepper).startStep(2)
s.streamBy(thinking).done()
s.stream(`## ${diseaseName}\n\n`)
if (similar.length > 0) {
s.stream('### Similar Conditions\n\n')
for (const sim of similar) {
const score = Math.round(sim.similarityScore * 100)
s.stream(`- **${sim.diseaseName}** (${score}% similar, ${sim.sharedCount} shared phenotypes)\n`)
}
}
s.streamBy(stepper).completeStep(2)
} else {
s.streamBy(thinking).done()
s.stream(`I couldn't find a disease matching "${message.content}". Try searching for a specific disease name.`)
s.streamBy(stepper).completeStep(0)
}
s.streamBy(stepper).done()
await s.finish()
}, { senders: ['user'] })
// Proactive check-in (called from a scheduler or API)
async function dailyCheckIn(userId: string) {
const [convId, streamer] = await agent.startConversation({
userId,
title: 'Daily Symptom Check-in',
awaiting: true,
responseSchema: symptomLogForm.toDict(),
})
streamer.stream('Good morning! Time for your daily check-in.\n\n')
streamer.stream('Please fill out the form below:')
await streamer.finish()
}
await app.run()
Message Properties Reference
| Property / Method | Type | Description |
|---|---|---|
| content | string | Message text |
| conversationId | string | Conversation UUID |
| userId | string | Sender's user ID |
| metadata | Record<string, unknown> | Custom metadata from frontend |
| schema | Record<string, unknown> | JSON schema (for form responses) |
| conversation | Conversation | Conversation history helper |
| memoryV2 | MemoryV2 | mem0-backed long-term memory |
| memory | Memory | Long-term memory *(deprecated)* |
