# @kognitivedev/vercel-ai-provider
Vercel AI SDK provider wrapper that integrates the Kognitive memory layer into your AI applications. Automatically injects memory context and logs conversations for memory processing.
## Installation

```bash
npm install @kognitivedev/vercel-ai-provider
```

### Peer Dependencies

This package requires the Vercel AI SDK:

```bash
npm install ai
```

## Quick Start
```ts
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// 1. Create the cognitive layer
const clModel = createCognitiveLayer({
  provider: openai,
  clConfig: {
    appId: "my-app",
    defaultAgentId: "assistant",
    baseUrl: "http://localhost:3001"
  }
});

// 2. Use it with Vercel AI SDK
const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc"
  }),
  prompt: "What's my favorite color?"
});
```

## Configuration
### CognitiveLayerConfig
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| appId | string | ✓ | - | Unique identifier for your application |
| defaultAgentId | string | - | "default" | Default agent ID when not specified per-request |
| baseUrl | string | - | "http://localhost:3001" | Kognitive backend API URL |
| apiKey | string | - | - | API key for authentication (if required) |
| processDelayMs | number | - | 500 | Delay in milliseconds before triggering memory processing (0 triggers processing immediately) |
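
Put together, a configuration using every option might look like this (values are illustrative, and the `KOGNITIVE_API_KEY` environment variable name is an assumption; `apiKey` only matters if your backend enforces authentication):

```ts
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";

const clModel = createCognitiveLayer({
  provider: openai,
  clConfig: {
    appId: "my-app",                       // required
    defaultAgentId: "assistant",           // falls back to "default"
    baseUrl: "https://api.kognitive.dev",  // falls back to "http://localhost:3001"
    apiKey: process.env.KOGNITIVE_API_KEY, // only if your backend requires it
    processDelayMs: 250                    // falls back to 500
  }
});
```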
## API Reference
### createCognitiveLayer(config)
Creates a model wrapper function that adds memory capabilities to any Vercel AI SDK provider.
**Parameters:**

```ts
createCognitiveLayer({
  provider: any, // Vercel AI SDK provider (e.g., openai, anthropic)
  clConfig: CognitiveLayerConfig
}): CLModelWrapper
```

**Returns:** `CLModelWrapper` - a function to wrap models with memory capabilities.
### CLModelWrapper

The function returned by `createCognitiveLayer`.
```ts
type CLModelWrapper = (
  modelId: string,
  settings?: {
    userId?: string;
    agentId?: string;
    sessionId?: string;
  },
  providerOptions?: Record<string, unknown>
) => LanguageModelV2;
```

**Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| modelId | string | ✓ | Model identifier (e.g., "gpt-4o", "claude-3-opus") |
| settings.userId | string | - | User identifier (required for memory features) |
| settings.agentId | string | - | Override default agent ID |
| settings.sessionId | string | - | Session identifier (required for logging) |
| providerOptions | Record<string, unknown> | - | Provider-specific options passed directly to the underlying provider |
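
For example, `settings.agentId` lets a single request target a different agent than the configured default (the `clModel` wrapper and identifiers below are illustrative, reusing the Quick Start setup):

```ts
// Override the configured defaultAgentId for this request only
const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    agentId: "support-agent", // overrides clConfig.defaultAgentId
    sessionId: "session-abc"
  }),
  prompt: "Where did we leave off?"
});
```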
## Usage Examples
### With OpenAI
```ts
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const clModel = createCognitiveLayer({
  provider: openai,
  clConfig: {
    appId: "my-app",
    baseUrl: "https://api.kognitive.dev"
  }
});

const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc"
  }),
  prompt: "Remember that my favorite color is blue"
});
```

### With OpenRouter (Provider Options)
Pass provider-specific options as the third parameter:
```ts
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { generateText } from "ai";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

const clModel = createCognitiveLayer({
  provider: openrouter.chat,
  clConfig: {
    appId: "my-app",
    baseUrl: "https://api.kognitive.dev"
  }
});

// Pass provider-specific options as the third parameter
const { text } = await generateText({
  model: clModel("moonshotai/kimi-k2-0905", {
    userId: "user-123",
    sessionId: "session-abc"
  }, {
    provider: {
      only: ["openai"]
    }
  }),
  prompt: "What's the weather like?"
});
```

### With Anthropic
```ts
import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";

const clModel = createCognitiveLayer({
  provider: anthropic,
  clConfig: {
    appId: "my-app",
    defaultAgentId: "claude-assistant"
  }
});

const result = await streamText({
  model: clModel("claude-3-5-sonnet-latest", {
    userId: "user-456",
    sessionId: "chat-xyz"
  }),
  prompt: "What did I tell you about my favorite color?"
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

### With System Prompts
The provider automatically injects memory context into your system prompts:
```ts
const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc"
  }),
  system: "You are a helpful assistant.",
  prompt: "What do you know about me?"
});
// Memory context is automatically appended to the system prompt
```

### Without Memory (Anonymous Users)
Skip memory features by omitting `userId`:

```ts
const { text } = await generateText({
  model: clModel("gpt-4o"),
  prompt: "General question without memory"
});
```

## How It Works
### Memory Injection Flow
1. **Request Interception**: When a request is made, the middleware fetches the user's memory snapshot.
2. **Context Injection**: The memory context is injected into the system prompt as a `<MemoryContext>` block.
3. **Response Processing**: After the response, the conversation is logged.
4. **Background Processing**: Memory extraction and management run asynchronously.
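
The sketch below approximates these four steps with plain `fetch` calls against the documented endpoints (see Backend API Integration). It is illustrative only: the provider performs all of this internally, and the request body shapes for `/log` and `/process` are assumptions.

```ts
// Illustrative only -- the provider does this for you.
const baseUrl = "http://localhost:3001";
const appId = "my-app";

async function illustrateFlow(userId: string, agentId: string, sessionId: string) {
  // 1. Request interception: fetch the user's memory snapshot
  const snapshot = await fetch(
    `${baseUrl}/api/cognitive/snapshot?userId=${userId}&agentId=${agentId}&appId=${appId}`
  ).then((r) => r.json());

  // 2. Context injection: the snapshot is rendered as a <MemoryContext>
  //    block and appended to the system prompt before the model is called.
  console.log("memory context source:", snapshot);

  // 3. Response processing: log the finished conversation (body shape assumed)
  await fetch(`${baseUrl}/api/cognitive/log`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ appId, userId, agentId, sessionId })
  });

  // 4. Background processing: trigger extraction after processDelayMs (default 500ms)
  setTimeout(() => {
    void fetch(`${baseUrl}/api/cognitive/process`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ appId, userId, agentId, sessionId })
    });
  }, 500);
}
```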
### Memory Context Format
The injected memory context follows this structure:
```xml
<MemoryContext>
Use the following memory to stay consistent. Prefer UserContext facts for answers; AgentHeuristics guide style, safety, and priorities.
<AgentHeuristics>
- User prefers concise responses
- Always greet user by name
</AgentHeuristics>
<UserContext>
<Facts>
- User's name is John
- Favorite color is blue
</Facts>
<State>
- Currently working on a project
</State>
</UserContext>
</MemoryContext>
```

## Backend API Integration
The provider communicates with your Kognitive backend via these endpoints:
| Endpoint | Method | Description |
|----------|--------|-------------|
| /api/cognitive/snapshot | GET | Fetches user's memory snapshot |
| /api/cognitive/log | POST | Logs conversation for processing |
| /api/cognitive/process | POST | Triggers memory extraction/management |
### Query Parameters for Snapshot
```
GET /api/cognitive/snapshot?userId={userId}&agentId={agentId}&appId={appId}
```
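
When debugging, you can query the snapshot endpoint directly. The exact response JSON shape isn't documented here, so treat this as a quick inspection sketch with illustrative identifiers:

```ts
const params = new URLSearchParams({
  userId: "user-123",
  agentId: "assistant",
  appId: "my-app"
});

// Log the raw JSON to see what the backend returns for this user
const snapshot = await fetch(
  `http://localhost:3001/api/cognitive/snapshot?${params}`
).then((r) => r.json());
console.log(snapshot);
```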
## Troubleshooting

### Memory not being injected

- Ensure `userId` and `sessionId` are provided
- Check that the backend is running at the configured `baseUrl`
- Verify the snapshot endpoint returns data
### Console warnings
```
CognitiveLayer: sessionId is required to log and process memories; skipping logging until provided.
```

This warning appears when `userId` is provided but `sessionId` is missing. Add `sessionId` to enable logging.
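
The fix is to pass both identifiers when wrapping the model (reusing the `clModel` wrapper from the earlier examples; identifiers are illustrative):

```ts
const { text } = await generateText({
  model: clModel("gpt-4o", {
    userId: "user-123",
    sessionId: "session-abc" // add this to enable logging
  }),
  prompt: "Remember that I prefer metric units"
});
```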
### Processing delay
The default 500ms delay before triggering memory processing allows database writes to settle. Adjust it with `processDelayMs`:
```ts
clConfig: {
  processDelayMs: 1000 // 1 second delay
  // processDelayMs: 0 // Immediate processing
}
```

## License
MIT
