# @agent-client/aa
A lightweight, type-safe streaming chat client SDK for AutoAgents (AA) conversations. Built with TypeScript and native Web APIs.
## Features
- 🔄 Streaming Support - Real-time SSE (Server-Sent Events) streaming
- 🎯 Type Safe - Full TypeScript support with comprehensive type definitions
- 🪶 Zero Dependencies - Uses native browser APIs only
- ⚡️ Async Generator - Modern AsyncGenerator pattern for clean stream handling
- 🎨 Rich Metadata - Supports reasoning content, thinking time, and custom metadata
- 🛡️ Error Handling - Robust error handling and stream cleanup
## Installation

```bash
npm install @agent-client/aa
# or
yarn add @agent-client/aa
pnpm add @agent-client/aa
bun add @agent-client/aa
```

## Quick Start
```typescript
import { chat } from '@agent-client/aa';

// Simple and clean API
const chatStream = chat('https://api.example.com/chat', {
  token: 'your-auth-token',
  body: {
    agentId: 'agent-123',
    userChatInput: 'Hello, how are you?',
  },
});

// Process streaming messages
for await (const { messages, conversationId, chatId, chunk, rawChunk, sseLine } of chatStream) {
  // Structured messages
  console.log('Current messages:', messages);
  console.log('Conversation ID:', conversationId);
  console.log('Chat ID:', chatId);

  // Raw data access (optional, for advanced use)
  if (chunk) console.log('Parsed chunk:', chunk);
  if (rawChunk) console.log('Raw JSON:', rawChunk);
  if (sseLine) console.log('SSE line:', sseLine);
}
```

## API Reference
### `chat(url, options)`

Main function to initiate a streaming chat conversation.

**Parameters:**

- `url` (`string`): The API endpoint URL
- `options` (object):
  - `token` (`string`): Authentication token
  - `body` (`ChatRequestBody`): Request payload
  - `signal` (`AbortSignal`, optional): Abort signal for cancellation
  - `onopen` (`function`, optional): Callback invoked when the connection opens

**Returns:** `AsyncGenerator<ChatMessageStreamYield>`
## Types

### ChatRequestBody

```typescript
interface ChatRequestBody {
  agentId: string;
  chatId?: string;
  userChatInput: string;
  files?: { fileId: string; fileName: string; fileUrl: string }[];
  images?: { url: string }[];
  kbIdList?: number[];
  database?: {
    databaseUuid: string;
    tableNames: string[];
  };
  state?: any;
  trialOperation?: boolean;
}
```

### ChatMessage
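For illustration, a populated request body might look like the following. All IDs and URLs are placeholders, and a trimmed copy of the interface is repeated so the snippet stands alone:

```typescript
// Trimmed copy of the interface above, so this snippet is self-contained.
interface ChatRequestBody {
  agentId: string;
  chatId?: string;
  userChatInput: string;
  files?: { fileId: string; fileName: string; fileUrl: string }[];
  kbIdList?: number[];
}

// Hypothetical values; replace with real IDs from your deployment.
const body: ChatRequestBody = {
  agentId: 'agent-123',
  userChatInput: 'Summarize the attached report',
  files: [
    { fileId: 'file-1', fileName: 'report.pdf', fileUrl: 'https://example.com/report.pdf' },
  ],
  kbIdList: [42],
};

console.log(body.files?.length); // 1
```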
```typescript
interface ChatMessage {
  content: string;
  role: "assistant" | "user";
  messageId: string;
  loading: boolean;
  contentType?: "q_file" | "q_image";
  metadata?: Record<string, {
    complete: boolean;
    result?: any[];
    type?: string;
  }>;
  type: "text" | "table" | "buttons" | "result_file";
  reasoningContent?: string;
  thinkingElapsedMillSecs?: number;
  __raw?: any;
}
```

### ChatMessageStreamYield
```typescript
interface ChatMessageStreamYield {
  messages: ChatMessage[];  // Array of accumulated messages
  conversationId?: string;  // Conversation identifier
  chatId?: string;          // Chat identifier for continuation
  chunk?: ChatStreamChunk;  // Parsed chunk object
  rawChunk?: string;        // Raw JSON text (without "data:" prefix)
  sseLine?: string;         // Complete SSE line (with "data:" prefix)
}
```

**Note - Data Layers:**

- `sseLine`: Complete SSE protocol line, e.g., `data: {"content":"hello"}`
- `rawChunk`: JSON text only (without the `data:` prefix), e.g., `{"content":"hello"}`
- `chunk`: Parsed JSON object, e.g., `{ content: "hello" }`

Use `sseLine` for protocol-level debugging or logging, `rawChunk` for custom JSON parsing, and `chunk` for direct data access.
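To make the relationship between the three layers concrete, here is a sketch (not the SDK's internal code) deriving the other two layers from an `sseLine`:

```typescript
// Sketch: deriving rawChunk and chunk from an sseLine.
// The field name "content" is taken from the example above.
const sseLine = 'data: {"content":"hello"}';

// rawChunk: strip the "data:" prefix and any following whitespace.
const rawChunk = sseLine.replace(/^data:\s*/, '');

// chunk: parse the remaining JSON text.
const chunk = JSON.parse(rawChunk) as { content: string };

console.log(rawChunk);      // '{"content":"hello"}'
console.log(chunk.content); // 'hello'
```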
## Advanced Usage

### With Abort Signal

```typescript
const controller = new AbortController();

const chatStream = chat('https://api.example.com/chat', {
  token: 'your-token',
  body: { agentId: 'agent-123', userChatInput: 'Hello!' },
  signal: controller.signal,
});

// Cancel the request after 5 seconds
setTimeout(() => controller.abort(), 5000);
```

### With Connection Callback

```typescript
const chatStream = chat('https://api.example.com/chat', {
  token: 'your-token',
  body: { agentId: 'agent-123', userChatInput: 'Hello!' },
  onopen: () => {
    console.log('Connection established!');
  },
});
```

### React Integration Example
```tsx
import { useState } from 'react';
import { chat, ChatMessage } from '@agent-client/aa';

function ChatComponent() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [conversationId, setConversationId] = useState<string>('');

  const sendMessage = async (input: string) => {
    const chatStream = chat('https://api.example.com/chat', {
      token: 'your-token',
      body: {
        agentId: 'agent-123',
        userChatInput: input,
        chatId: conversationId,
      },
    });

    for await (const { messages, conversationId: convId } of chatStream) {
      setMessages(messages);
      if (convId) setConversationId(convId);
    }
  };

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.messageId}>
          <strong>{msg.role}:</strong> {msg.content}
          {msg.loading && <span>...</span>}
        </div>
      ))}
    </div>
  );
}
```

## Low-Level APIs
### `createChatSSEStream(url, options)`

Creates a `ReadableStream` for SSE data.

```typescript
import { createChatSSEStream } from '@agent-client/aa';

const stream = await createChatSSEStream('https://api.example.com/chat', {
  token: 'your-token',
  body: { agentId: 'agent-123', userChatInput: 'Hello!' },
});
```

### `createChatMessageStream(stream)`

Processes a `ReadableStream` and yields structured messages.

```typescript
import { createChatSSEStream, createChatMessageStream } from '@agent-client/aa';

const stream = await createChatSSEStream(url, options);

for await (const result of createChatMessageStream(stream)) {
  console.log(result.messages);
}
```

## Protocol
The SDK expects Server-Sent Events (SSE) in the following format:

```
data: {"chatId":"123","conversationId":"456","content":"Hello","complete":false,"finish":false,...}
data: {"chatId":"123","conversationId":"456","content":" World","complete":true,"finish":false,...}
data: [DONE]
```

### Stream Markers

- `complete: true` - the current message is complete (the conversation may still emit more messages)
- `finish: true` - the entire conversation stream is finished
- `data: [DONE]` - alternative stream termination marker
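As a sketch of how a consumer might interpret these markers, here is a loop over a simulated sequence of SSE lines. The payload fields follow the protocol above; the network and line-splitting layers are omitted:

```typescript
// Simulated stream: two chunks of one message, then the [DONE] marker.
const lines = [
  'data: {"content":"Hello","complete":false,"finish":false}',
  'data: {"content":" World","complete":true,"finish":false}',
  'data: [DONE]',
];

let message = '';
let finished = false;

for (const line of lines) {
  const payload = line.replace(/^data:\s*/, '');

  if (payload === '[DONE]') {
    finished = true; // alternative termination marker
    break;
  }

  const chunk = JSON.parse(payload) as {
    content: string;
    complete: boolean;
    finish: boolean;
  };
  message += chunk.content;

  if (chunk.complete) {
    // Current message is done; later chunks may start a new message.
  }
  if (chunk.finish) {
    finished = true; // entire conversation stream is done
    break;
  }
}

console.log(message);  // 'Hello World'
console.log(finished); // true
```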
## Browser Support

This package uses native browser APIs:

- `fetch` API
- `ReadableStream` API
- `TextDecoder` API
- `AsyncGenerator` support

Requires modern browsers with ES2022 support. For older browsers, use appropriate polyfills.
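If you need to decide at runtime whether polyfills are required, a simple feature check along these lines can help (a generic sketch, not part of the SDK):

```typescript
// Collect the names of any required native APIs that are absent
// in the current environment, e.g. to conditionally load polyfills.
const missing: string[] = [];
if (typeof fetch !== 'function') missing.push('fetch');
if (typeof ReadableStream === 'undefined') missing.push('ReadableStream');
if (typeof TextDecoder === 'undefined') missing.push('TextDecoder');

if (missing.length > 0) {
  console.warn(`Missing required APIs: ${missing.join(', ')}`);
}
```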
## Publishing

To publish this package to npm:

```bash
# First time: Login to npm
npm run login

# Then: One-command publish
npm run publish:now
```

## License
Apache-2.0
## Contributing
Contributions are welcome! Please read our contributing guidelines and code of conduct.
## Support
- GitHub Issues: https://github.com/align-dev/align/issues
- Email: [email protected]
