# @onebrain/ai-sdk
OneBrain integration for the Vercel AI SDK — give your AI apps persistent, personalized memory.
This package wraps the `onebrain` JavaScript SDK and provides ready-to-use middleware and tools for the Vercel AI SDK (v3+).
## Features
- Context Middleware — Automatically inject user context (profile, memories, entities) into every LLM call
- Memory Middleware — Auto-save conversations to OneBrain after each generation
- AI Tools — Let your LLM search memories, write new ones, fetch context, and explore entities
- Streaming Support — Full support for `streamText()` and `generateText()`
- Self-Hosted Ready — Works with OneBrain Cloud and self-hosted instances
## Installation

```bash
npm install @onebrain/ai-sdk ai onebrain
```

## Quick Start (Next.js)
```ts
// app/api/chat/route.ts
import { streamText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { oneBrainContext, oneBrainMemory } from '@onebrain/ai-sdk';

const contextMiddleware = oneBrainContext({
  apiKey: process.env.ONEBRAIN_API_KEY!,
  scope: 'assistant',
});

const memoryMiddleware = oneBrainMemory({
  apiKey: process.env.ONEBRAIN_API_KEY!,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Apply both middleware by wrapping twice: context first, then memory
  const model = wrapLanguageModel({
    model: wrapLanguageModel({
      model: openai('gpt-4o'),
      middleware: contextMiddleware,
    }),
    middleware: memoryMiddleware,
  });

  const result = streamText({
    model,
    messages,
  });

  return result.toDataStreamResponse();
}
```

## Context Middleware
Injects OneBrain user context as a system message into every LLM call. The context includes the user's profile, recent memories, active projects, and known entities.
```ts
import { oneBrainContext } from '@onebrain/ai-sdk';
import { generateText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: oneBrainContext({
    apiKey: process.env.ONEBRAIN_API_KEY!,
    scope: 'assistant', // 'brief' | 'assistant' | 'project' | 'deep'
  }),
});

const { text } = await generateText({
  model,
  prompt: 'What are my current projects?',
});
```

### Scopes
| Scope | Description | Token Estimate |
|-------|-------------|----------------|
| brief | Minimal context — profile summary only | ~100 tokens |
| assistant | Standard — profile, recent memories, key entities | ~500 tokens |
| project | Project-focused — includes active project details | ~800 tokens |
| deep | Comprehensive — everything OneBrain knows | ~2000 tokens |
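If you choose a scope programmatically, the token estimates above can drive the decision. A minimal sketch, assuming you track a per-request context budget (the `scopeForBudget` helper is hypothetical, not part of the SDK):

```ts
// Hypothetical helper: pick the deepest scope that fits a token budget,
// using the rough estimates from the table above.
type Scope = 'brief' | 'assistant' | 'project' | 'deep';

function scopeForBudget(contextTokenBudget: number): Scope {
  if (contextTokenBudget >= 2000) return 'deep';
  if (contextTokenBudget >= 800) return 'project';
  if (contextTokenBudget >= 500) return 'assistant';
  return 'brief';
}

console.log(scopeForBudget(600)); // → assistant
```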
## Memory Middleware
Automatically saves each conversation turn (user message + assistant response) to OneBrain as a memory.
```ts
import { oneBrainMemory } from '@onebrain/ai-sdk';
import { generateText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: oneBrainMemory({
    apiKey: process.env.ONEBRAIN_API_KEY!,
    autoSave: true, // default: true
  }),
});

const { text } = await generateText({
  model,
  prompt: 'I just finished migrating to Next.js 15',
});
// This conversation is automatically saved to OneBrain
```

## Combining Middleware
You can use both middleware together by wrapping the model twice:
```ts
import { wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { oneBrainContext, oneBrainMemory } from '@onebrain/ai-sdk';

let model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: oneBrainContext({
    apiKey: process.env.ONEBRAIN_API_KEY!,
    scope: 'assistant',
  }),
});

model = wrapLanguageModel({
  model,
  middleware: oneBrainMemory({
    apiKey: process.env.ONEBRAIN_API_KEY!,
  }),
});
```

## Tools
Give your LLM direct access to OneBrain's memory, context, and entity features via Vercel AI SDK tools.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { oneBrainTools } from '@onebrain/ai-sdk';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: oneBrainTools({
    apiKey: process.env.ONEBRAIN_API_KEY!,
  }),
  maxSteps: 3,
  prompt: 'What do you remember about my work projects?',
});
```

### Available Tools
| Tool | Description |
|------|-------------|
| searchMemory | Semantic + keyword search across all user memories |
| writeMemory | Save a new memory (fact, preference, decision, goal, experience, skill) |
| getContext | Retrieve the user's full OneBrain context |
| searchEntities | Find people, organizations, tools, and topics |
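Because `oneBrainTools()` returns a plain record, you can also carve out a subset generically before handing it to the model. A sketch with a stand-in record (the `pickTools` helper and the inline objects are illustrative, not the real SDK tool objects):

```ts
// Illustrative helper (not part of the SDK): keep only the named tools.
function pickTools<T extends Record<string, unknown>>(
  tools: T,
  names: (keyof T)[],
): Partial<T> {
  const subset: Partial<T> = {};
  for (const name of names) subset[name] = tools[name];
  return subset;
}

// Stand-in record mimicking the shape returned by oneBrainTools()
const allTools = {
  searchMemory: { description: 'search memories' },
  writeMemory: { description: 'save a memory' },
  getContext: { description: 'fetch context' },
  searchEntities: { description: 'find entities' },
};

// Read-only subset: search and context, no writes
const readOnly = pickTools(allTools, ['searchMemory', 'getContext']);
console.log(Object.keys(readOnly)); // → [ 'searchMemory', 'getContext' ]
```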
### Selective Tool Usage
You can pick specific tools instead of providing all of them:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { oneBrainTools } from '@onebrain/ai-sdk';

const allTools = oneBrainTools({
  apiKey: process.env.ONEBRAIN_API_KEY!,
});

// Only give the LLM read access
const result = await generateText({
  model: openai('gpt-4o'),
  tools: {
    searchMemory: allTools.searchMemory,
    getContext: allTools.getContext,
  },
  prompt: 'Summarize what you know about me',
});
```

## Configuration
### Options
All functions accept these common options:
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| apiKey | string | Yes | Your OneBrain API key |
| baseUrl | string | No | Custom API URL for self-hosted instances |
### Context Middleware Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| scope | 'brief' \| 'assistant' \| 'project' \| 'deep' | 'assistant' | Context depth |
### Memory Middleware Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| autoSave | boolean | true | Auto-save conversations as memories |
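When `autoSave` needs to vary per deployment (for example, to let ephemeral sessions skip memory writes), the flag can be derived before constructing the middleware. A minimal sketch; `ONEBRAIN_AUTOSAVE` is a hypothetical variable name, not something the SDK itself reads:

```ts
// Sketch: derive the autoSave flag from the environment.
// ONEBRAIN_AUTOSAVE is illustrative; you pass the resulting flag
// in explicitly via the middleware options.
const autoSave = process.env.ONEBRAIN_AUTOSAVE !== 'false'; // default: true

const memoryOptions = {
  apiKey: process.env.ONEBRAIN_API_KEY ?? '',
  autoSave,
};

console.log(memoryOptions.autoSave);
```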
## Self-Hosted Setup
If you're running a self-hosted OneBrain instance, pass the `baseUrl` option:
```ts
import { oneBrainContext, oneBrainMemory, oneBrainTools } from '@onebrain/ai-sdk';

const options = {
  apiKey: process.env.ONEBRAIN_API_KEY!,
  baseUrl: 'https://onebrain.your-company.com',
};

const contextMiddleware = oneBrainContext({ ...options, scope: 'assistant' });
const memoryMiddleware = oneBrainMemory(options);
const tools = oneBrainTools(options);
```

## Environment Variables
```bash
# Required
ONEBRAIN_API_KEY=ob_your_api_key_here

# Optional: self-hosted instance
ONEBRAIN_BASE_URL=https://onebrain.your-company.com
```

## API Reference
### `oneBrainContext(options)`
Returns a Vercel AI SDK middleware that prepends OneBrain context as a system message.
### `oneBrainMemory(options)`
Returns a Vercel AI SDK middleware that saves conversation turns to OneBrain after generation.
### `oneBrainTools(options)`
Returns a `Record<string, CoreTool>` with `searchMemory`, `writeMemory`, `getContext`, and `searchEntities` tools.
### `VERSION`
The current package version string.
## Requirements
- Node.js 18+
- Vercel AI SDK v3+
- OneBrain JS SDK v1+
## License
MIT
