@ainative/ai-kit-svelte
v0.2.0
AI Kit - Svelte stores and actions for building AI-powered applications
Svelte adapter for AI Kit with reactive stores for AI streaming.
Installation
pnpm add @ainative/ai-kit-svelte
Usage
Basic Example
<script lang="ts">
  import { onDestroy } from 'svelte'
  import { createAIStream } from '@ainative/ai-kit-svelte'

  const aiStream = createAIStream({
    endpoint: '/api/chat',
    model: 'gpt-4',
    systemPrompt: 'You are a helpful assistant.'
  })

  // The store object exposes readable sub-stores; destructure them so the
  // `$store` auto-subscription syntax works in the template.
  const { messages, isStreaming, error } = aiStream

  let userInput = ''

  async function handleSend() {
    if (!userInput.trim()) return
    await aiStream.send(userInput)
    userInput = ''
  }

  // Clean up when the component is destroyed
  onDestroy(() => {
    aiStream.destroy()
  })
</script>
<div class="chat-container">
  {#each $messages as message (message.id)}
    <div class="message {message.role}">
      <strong>{message.role}:</strong>
      <p>{message.content}</p>
    </div>
  {/each}
</div>

<form on:submit|preventDefault={handleSend}>
  <input
    type="text"
    bind:value={userInput}
    disabled={$isStreaming}
    placeholder="Type your message..."
  />
  <button type="submit" disabled={$isStreaming}>
    {$isStreaming ? 'Sending...' : 'Send'}
  </button>
</form>

{#if $error}
  <div class="error">Error: {$error.message}</div>
{/if}
Advanced Example with Callbacks
<script lang="ts">
  import { createAIStream } from '@ainative/ai-kit-svelte'

  const aiStream = createAIStream({
    endpoint: '/api/chat',
    model: 'gpt-4',
    onToken: (token) => {
      console.log('Received token:', token)
    },
    onCost: (usage) => {
      console.log('Token usage:', usage)
    },
    onError: (error) => {
      console.error('Stream error:', error)
    },
    retry: {
      maxRetries: 3,
      backoff: 'exponential',
      initialDelay: 1000,
      maxDelay: 10000
    }
  })

  // Usage statistics (readable sub-store)
  const { usage } = aiStream
  $: console.log('Total tokens:', $usage.totalTokens)

  function handleRetry() {
    aiStream.retry()
  }

  function handleReset() {
    aiStream.reset()
  }

  function handleStop() {
    aiStream.stop()
  }
</script>
<!-- UI implementation -->
API Reference
createAIStream(config: StreamConfig): AIStreamStore
Creates a new AI streaming store.
Parameters
- config.endpoint (string, required): The API endpoint for streaming
- config.model (string, optional): The AI model to use
- config.systemPrompt (string, optional): System prompt for the AI
- config.onToken (function, optional): Callback fired for each token
- config.onCost (function, optional): Callback fired when usage stats update
- config.onError (function, optional): Callback fired on errors
- config.retry (object, optional): Retry configuration
  - maxRetries (number): Maximum number of retries
  - backoff ('linear' | 'exponential'): Backoff strategy
  - initialDelay (number): Initial retry delay in ms
  - maxDelay (number): Maximum retry delay in ms
- config.headers (object, optional): Additional HTTP headers
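For illustration, retry delays under these settings could be computed as sketched below. This is an assumption about typical backoff behavior, not the library's internal code; `RetryConfig` and `backoffDelay` are hypothetical names.

```typescript
// Sketch of how retry delays might be derived from the retry config.
// RetryConfig and backoffDelay are illustrative names, not library exports.
interface RetryConfig {
  maxRetries: number
  backoff: 'linear' | 'exponential'
  initialDelay: number // ms
  maxDelay: number // ms
}

function backoffDelay(attempt: number, cfg: RetryConfig): number {
  const raw =
    cfg.backoff === 'exponential'
      ? cfg.initialDelay * 2 ** attempt // 1000, 2000, 4000, ...
      : cfg.initialDelay * (attempt + 1) // 1000, 2000, 3000, ...
  return Math.min(raw, cfg.maxDelay) // never wait longer than maxDelay
}

const cfg: RetryConfig = { maxRetries: 3, backoff: 'exponential', initialDelay: 1000, maxDelay: 10000 }
console.log([0, 1, 2].map((a) => backoffDelay(a, cfg))) // → [ 1000, 2000, 4000 ]
```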
Returns
An AIStreamStore object with the following properties:
Stores (Readable)
- messages: Readable store containing the message history
- isStreaming: Readable store indicating if currently streaming
- error: Readable store containing any errors
- usage: Readable store with token usage statistics
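Because these are standard Svelte readable stores, you can also subscribe manually outside a component, where the `$store` shorthand is unavailable. Below is a minimal sketch of the store contract they follow; `makeReadable` is illustrative, the library provides the stores themselves.

```typescript
// Minimal sketch of the Svelte store contract: subscribe() calls the callback
// immediately with the current value, then on every change, and returns an
// unsubscribe function. makeReadable is illustrative, not a library export.
type Subscriber<T> = (value: T) => void

function makeReadable<T>(initial: T) {
  let value = initial
  const subs = new Set<Subscriber<T>>()
  return {
    subscribe(fn: Subscriber<T>) {
      subs.add(fn)
      fn(value) // contract: immediate call with the current value
      return () => subs.delete(fn)
    },
    set(next: T) {
      value = next
      subs.forEach((fn) => fn(next))
    }
  }
}

const isStreaming = makeReadable(false)
const seen: boolean[] = []
const unsubscribe = isStreaming.subscribe((v) => seen.push(v))
isStreaming.set(true)
unsubscribe()
console.log(seen) // → [ false, true ]
```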
Methods
- send(content: string): Promise<void>: Send a user message
- reset(): void: Clear all messages and reset state
- retry(): Promise<void>: Retry the last message
- stop(): void: Stop the current stream
- destroy(): void: Clean up resources and event listeners
Types
interface Message {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: number
}
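As an aside, here is a sketch of how streamed tokens could be folded into a Message list of this shape. It is illustrative only; the library manages message history internally, and `appendToken` is a hypothetical name.

```typescript
// Illustrative reducer: append a streamed token to the in-progress assistant
// message, or start a new one. Not the library's actual implementation.
interface Message {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: number
}

function appendToken(messages: Message[], token: string): Message[] {
  const last = messages[messages.length - 1]
  if (last && last.role === 'assistant') {
    // extend the assistant message currently being streamed
    return [...messages.slice(0, -1), { ...last, content: last.content + token }]
  }
  // first token of a new assistant reply: start a fresh message
  return [
    ...messages,
    { id: String(messages.length + 1), role: 'assistant', content: token, timestamp: Date.now() }
  ]
}

let history: Message[] = []
for (const token of ['Hel', 'lo', '!']) history = appendToken(history, token)
console.log(history[0].content) // → "Hello!"
```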
interface Usage {
  promptTokens: number
  completionTokens: number
  totalTokens: number
  estimatedCost?: number
  latency?: number
  model?: string
  cacheHit?: boolean
}
Features
- ✅ Reactive Svelte stores for all state
- ✅ Automatic message history management
- ✅ Server-sent events (SSE) streaming
- ✅ Automatic retry with exponential backoff
- ✅ Token usage tracking
- ✅ Error handling
- ✅ TypeScript support
- ✅ Framework-agnostic core
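As an example of acting on the token-usage tracking, an onCost handler could turn a Usage report into a rough spend estimate. The per-token rates below are placeholders (real pricing depends on model and provider), and `estimateCost` is a hypothetical helper, not part of the package.

```typescript
// Illustrative cost estimate from a Usage report. Rates are placeholders.
interface Usage {
  promptTokens: number
  completionTokens: number
  totalTokens: number
  estimatedCost?: number
}

function estimateCost(
  usage: Usage,
  promptRatePerToken = 0.00003, // placeholder $/token
  completionRatePerToken = 0.00006 // placeholder $/token
): number {
  return (
    usage.promptTokens * promptRatePerToken +
    usage.completionTokens * completionRatePerToken
  )
}

const usage: Usage = { promptTokens: 1000, completionTokens: 500, totalTokens: 1500 }
console.log(estimateCost(usage).toFixed(2)) // → "0.06"
```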
Testing
# Run tests
pnpm test
# Run tests with coverage
pnpm test:coverage
# Type check
pnpm type-check
License
MIT
