@lanonasis/ai-sdk
v0.2.2
Drop-in AI SDK for browser and Node.js with persistent memory, chat completions, and vortexai-l0 orchestration
Version
Current Release: v0.2.1
Features
- 🔑 API Key Authentication - Secure `lano_xxx...` format keys
- 💾 Persistent Memory - Built-in integration with @lanonasis/memory-sdk-standalone
- 🌊 Streaming Support - Real-time response streaming
- 🎭 Orchestration - Complex multi-agent workflows via vortexai-l0
- ⚛️ React Hooks - First-class React integration
- 📦 Tree-shakeable - Only import what you need
- 🔒 Type-safe - Full TypeScript support
Installation
```bash
npm install @lanonasis/ai-sdk
# or
bun add @lanonasis/ai-sdk
```
Environment Setup
Add your API key to your environment:
```bash
# .env.local (Next.js) or .env
LANONASIS_API_KEY=lano_your_api_key_here

# For client-side usage in Next.js
NEXT_PUBLIC_LANONASIS_API_KEY=lano_your_api_key_here
```
Once configured, the SDK works with no additional setup required.
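Because keys use a fixed `lano_` prefix, it is easy to fail fast on a misconfigured environment before constructing the client. A minimal sketch (`isLanonasisKey` is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper (not an SDK export): checks the documented
// `lano_xxx...` key format so a misconfigured env var fails fast.
function isLanonasisKey(key: string | undefined): key is string {
  return typeof key === 'string' && key.startsWith('lano_') && key.length > 'lano_'.length;
}

console.log(isLanonasisKey('lano_abc123'));  // true
console.log(isLanonasisKey('sk-wrong-key')); // false
console.log(isLanonasisKey(undefined));      // false
```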
Quick Start
```ts
import { LanonasisAI } from '@lanonasis/ai-sdk';

// Initialize with API key from environment
const ai = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
  // baseUrl defaults to https://api.lanonasis.com
});

// Simple message
const response = await ai.send('Hello, who are you?');
console.log(response);

// Chat with full control
const result = await ai.chat({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is 2+2?' },
  ],
  model: 'gpt-4o-mini',
  temperature: 0.7,
  maxTokens: 1000,
});
console.log(result.response);
```
Detailed Examples
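The examples in this section all pass arrays of role/content message objects. A sketch of the shape they assume (the type names here are illustrative, not the SDK's exports):

```typescript
// Illustrative types (not SDK exports) matching the message objects
// passed to ai.chat() throughout these examples.
type Role = 'system' | 'user' | 'assistant';

interface ChatMessage {
  role: Role;
  content: string;
}

// Returns a new history array with one more turn appended.
function withTurn(history: ChatMessage[], role: Role, content: string): ChatMessage[] {
  return [...history, { role, content }];
}

let history: ChatMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
];
history = withTurn(history, 'user', 'What is 2+2?');
console.log(history.length); // 2
```

Keeping the helper pure (returning a new array) makes it easy to branch a conversation without mutating earlier history.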
Basic Chat Completion
```ts
import { LanonasisAI } from '@lanonasis/ai-sdk';

const ai = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
});

// Single-turn conversation
const response = await ai.chat({
  messages: [
    { role: 'user', content: 'Explain quantum computing in simple terms' }
  ],
  model: 'gpt-4o-mini',
});
console.log(response.response);
// Output: "Quantum computing is like having a super-powered calculator..."
```
Multi-turn Conversation
```ts
const ai = new LanonasisAI({ apiKey: process.env.LANONASIS_API_KEY! });

// Build conversation history
const messages = [
  { role: 'system', content: 'You are a coding assistant specializing in TypeScript.' },
  { role: 'user', content: 'How do I create a generic function?' },
];

const response1 = await ai.chat({ messages });
console.log(response1.response);

// Continue the conversation
messages.push({ role: 'assistant', content: response1.response });
messages.push({ role: 'user', content: 'Can you show me an example with arrays?' });

const response2 = await ai.chat({ messages });
console.log(response2.response);
```
With Persistent Memory
The SDK automatically integrates with @lanonasis/memory-sdk-standalone:
```ts
const ai = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
  memory: {
    enabled: true,
    autoSave: true,
    contextStrategy: 'relevance',
  },
});

// Store important information
await ai.memory.createMemory({
  title: 'User Preferences',
  content: 'User prefers dark mode and concise responses',
  status: 'active',
});

// Search memories
const memories = await ai.memory.searchMemories({
  query: 'user preferences',
  status: 'active',
  threshold: 0.7,
});

// Chat with memory context
const response = await ai.chat({
  messages: [{ role: 'user', content: 'Remember my preferences?' }],
  conversationId: 'user-123-session',
});

// Get context from memories
const context = await ai.memory.searchWithContext('previous conversations');
```
Streaming Responses
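The streaming examples below read OpenAI-style chunks via `chunk.choices[0]?.delta?.content`. Accumulating those deltas into one string can be sketched as follows (the `StreamChunk` type is an assumption derived from that access pattern, not the SDK's own type):

```typescript
// Chunk shape assumed from the examples' `chunk.choices[0]?.delta?.content`
// access; the real SDK type may differ.
interface StreamChunk {
  choices: Array<{ delta?: { content?: string } }>;
}

// Accumulates delta fragments from any async iterable of chunks.
async function collectStream(chunks: AsyncIterable<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of chunks) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Local stand-in for ai.chatStream(...) so the helper can be exercised offline.
async function* fakeStream(): AsyncGenerator<StreamChunk> {
  for (const piece of ['Hel', 'lo', '!']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

collectStream(fakeStream()).then((s) => console.log(s)); // "Hello!"
```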
```ts
// Async iterator style
for await (const chunk of ai.chatStream({
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

// Callback style
await ai.streamChat(
  { messages: [{ role: 'user', content: 'Count to 10' }] },
  (chunk) => console.log(chunk.choices[0]?.delta?.content)
);
```
Orchestration
Use vortexai-l0 for complex workflows:
```ts
// Remote API orchestration (default)
const result = await ai.orchestrate('Create a viral TikTok campaign');

// Local orchestration (no API call)
const localAi = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
  useLocalOrchestration: true,
});
const localResult = await localAi.orchestrate('analyze trending hashtags');
```
React Integration
```tsx
import { useLanonasis, useChat } from '@lanonasis/ai-sdk/react';

function ChatComponent() {
  const { client, isReady } = useLanonasis({
    apiKey: process.env.NEXT_PUBLIC_LANONASIS_API_KEY!,
  });

  const { messages, send, isLoading, sendWithStream } = useChat({
    client,
    systemPrompt: 'You are a helpful assistant.',
  });

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>{msg.content}</div>
      ))}
      {isLoading && <div>Thinking...</div>}
    </div>
  );
}
```
Configuration
```ts
new LanonasisAI({
  apiKey: string;                  // Required: lano_xxx format
  baseUrl?: string;                // Default: https://api.lanonasis.com
  memoryUrl?: string;              // Default: same as baseUrl
  timeout?: number;                // Default: 30000 (30s)
  maxRetries?: number;             // Default: 3
  debug?: boolean;                 // Default: false
  organizationId?: string;         // For multi-tenant setups
  useLocalOrchestration?: boolean; // Use vortexai-l0 locally
  memory?: {
    enabled?: boolean;             // Default: true
    autoSave?: boolean;            // Default: true
    contextStrategy?: string;      // 'relevance' | 'temporal' | 'hybrid'
    maxContextTokens?: number;     // Default: 4000
  };
});
```
API Endpoints
The SDK connects to these endpoints by default:
| Feature | Endpoint |
|---------|----------|
| Chat Completions | POST /v1/chat/completions |
| Memory API | GET/POST /api/v1/memory |
| Health Check | GET /health |
All requests are authenticated via the `Authorization: Bearer lano_xxx` header.
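Putting the table together, a raw HTTP call can be sketched without the SDK. Only the URL path, method, and Authorization header come from the documentation above; `buildChatRequest` itself is illustrative:

```typescript
// Illustrative helper: assembles fetch() arguments for the documented
// chat completions endpoint. Not part of the SDK.
function buildChatRequest(
  apiKey: string,
  messages: Array<{ role: string; content: string }>,
  baseUrl = 'https://api.lanonasis.com',
) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ messages }),
    },
  };
}

const req = buildChatRequest('lano_demo', [{ role: 'user', content: 'Hi' }]);
console.log(req.url); // https://api.lanonasis.com/v1/chat/completions
```

Splitting request construction from the actual `fetch(req.url, req.init)` call keeps the wire format easy to inspect and test.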
Architecture
This SDK integrates with existing Lanonasis infrastructure:
- @lanonasis/memory-sdk-standalone - Persistent memory and context building
- vortexai-l0 - Local orchestration engine
- Backend API - Chat completions and remote orchestration
Migration from v0.1.0
```ts
// Old (v0.1.0)
import { AiSDK } from '@lanonasis/ai-sdk';
const sdk = new AiSDK();
await sdk.orchestrate('query');

// New (v0.2.x)
import { LanonasisAI } from '@lanonasis/ai-sdk';
const ai = new LanonasisAI({ apiKey: process.env.LANONASIS_API_KEY! });
await ai.orchestrate('query');

// Or simply:
const message = await ai.send('query');
```
License
MIT
Advanced (Plugins)
```ts
import { LanonasisAI, createPluginManager } from '@lanonasis/ai-sdk';

const plugins = createPluginManager();
plugins.register({
  metadata: { name: 'demo', version: '1.0.0', description: 'Demo plugin' },
  triggers: ['demo'],
  handler: async (ctx) => ({ message: `Hello ${ctx.query}`, type: 'orchestration' })
});

const sdk = new LanonasisAI({ apiKey: process.env.LANONASIS_API_KEY!, plugins });
await sdk.orchestrate('demo request');
```
Smoke test (browser bundling)
Ensures no Node-only deps leak into browser builds.
```bash
cd packages/ai-sdk
bun run smoke:web   # esbuild bundle → dist-smoke/index.js
```
Build
```bash
cd packages/ai-sdk
bun run build
```
Notes
- Backed by the vortexai-l0 browser-safe entry; the CLI lives in `vortexai-l0/dist/node/cli.js`.
- Last validated: 2025-12-15 via `bun run build` and `bun run smoke:web`.
