
@kognitivedev/vercel-ai-provider

Vercel AI SDK provider wrapper that integrates the Kognitive memory layer into your AI applications. Automatically injects memory context and logs conversations for memory processing.

Installation

npm install @kognitivedev/vercel-ai-provider

Peer Dependencies

This package requires the Vercel AI SDK:

npm install ai

Quick Start

import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// 1. Create the cognitive layer
const clModel = createCognitiveLayer({
    provider: openai,
    clConfig: {
        appId: "my-app",
        defaultAgentId: "assistant",
        baseUrl: "http://localhost:3001"
    }
});

// 2. Use it with Vercel AI SDK
const { text } = await generateText({
    model: clModel("gpt-4o", {
        userId: "user-123",
        sessionId: "session-abc"
    }),
    prompt: "What's my favorite color?"
});

Configuration

CognitiveLayerConfig

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| appId | string | ✓ | - | Unique identifier for your application |
| defaultAgentId | string | - | "default" | Default agent ID when not specified per-request |
| baseUrl | string | - | "http://localhost:3001" | Kognitive backend API URL |
| apiKey | string | - | - | API key for authentication (if required) |
| processDelayMs | number | - | 500 | Delay before triggering memory processing (set to 0 to disable) |

API Reference

createCognitiveLayer(config)

Creates a model wrapper function that adds memory capabilities to any Vercel AI SDK provider.

Parameters:

createCognitiveLayer({
    provider: any,        // Vercel AI SDK provider (e.g., openai, anthropic)
    clConfig: CognitiveLayerConfig
}): CLModelWrapper

Returns: CLModelWrapper - A function to wrap models with memory capabilities.


CLModelWrapper

The function returned by createCognitiveLayer.

type CLModelWrapper = (
    modelId: string,
    settings?: {
        userId?: string;
        agentId?: string;
        sessionId?: string;
    },
    providerOptions?: Record<string, unknown>
) => LanguageModelV2;

Parameters:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| modelId | string | ✓ | Model identifier (e.g., "gpt-4o", "claude-3-opus") |
| settings.userId | string | - | User identifier (required for memory features) |
| settings.agentId | string | - | Override default agent ID |
| settings.sessionId | string | - | Session identifier (required for logging) |
| providerOptions | Record<string, unknown> | - | Provider-specific options passed directly to the underlying provider |

Usage Examples

With OpenAI

import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const clModel = createCognitiveLayer({
    provider: openai,
    clConfig: {
        appId: "my-app",
        baseUrl: "https://api.kognitive.dev"
    }
});

const { text } = await generateText({
    model: clModel("gpt-4o", {
        userId: "user-123",
        sessionId: "session-abc"
    }),
    prompt: "Remember that my favorite color is blue"
});

With OpenRouter (Provider Options)

Pass provider-specific options as the third parameter:

import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { generateText } from "ai";

const openrouter = createOpenRouter({
    apiKey: process.env.OPENROUTER_API_KEY
});

const clModel = createCognitiveLayer({
    provider: openrouter.chat,
    clConfig: {
        appId: "my-app",
        baseUrl: "https://api.kognitive.dev"
    }
});

// Pass provider-specific options as the third parameter
const { text } = await generateText({
    model: clModel("moonshotai/kimi-k2-0905", {
        userId: "user-123",
        sessionId: "session-abc"
    }, {
        provider: {
            only: ["openai"]
        }
    }),
    prompt: "What's the weather like?"
});

With Anthropic

import { createCognitiveLayer } from "@kognitivedev/vercel-ai-provider";
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";

const clModel = createCognitiveLayer({
    provider: anthropic,
    clConfig: {
        appId: "my-app",
        defaultAgentId: "claude-assistant"
    }
});

const result = await streamText({
    model: clModel("claude-3-5-sonnet-latest", {
        userId: "user-456",
        sessionId: "chat-xyz"
    }),
    prompt: "What did I tell you about my favorite color?"
});

for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
}

With System Prompts

The provider automatically injects memory context into your system prompts:

const { text } = await generateText({
    model: clModel("gpt-4o", {
        userId: "user-123",
        sessionId: "session-abc"
    }),
    system: "You are a helpful assistant.",
    prompt: "What do you know about me?"
});

// Memory context is automatically appended to system prompt

Without Memory (Anonymous Users)

Skip memory features by omitting userId:

const { text } = await generateText({
    model: clModel("gpt-4o"),
    prompt: "General question without memory"
});

How It Works

Memory Injection Flow

  1. Request Interception: When a request is made, the middleware fetches the user's memory snapshot
  2. Context Injection: Memory context is injected into the system prompt as a <MemoryContext> block
  3. Response Processing: After the response, the conversation is logged
  4. Background Processing: Memory extraction and management runs asynchronously
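
To make the flow concrete, here is a minimal sketch of how these four steps could be wired together with the Vercel AI SDK's wrapLanguageModel middleware (assuming AI SDK v5). This is an illustration, not this package's source: the cfg and ids placeholders, the plain fetch calls, and the assumption that the snapshot endpoint returns a ready-made <MemoryContext> string are all simplifications.

import { wrapLanguageModel, type LanguageModelV2Middleware } from "ai";
import { openai } from "@ai-sdk/openai";

// Placeholder config and identifiers for the sketch
const cfg = { baseUrl: "http://localhost:3001", appId: "my-app" };
const ids = { userId: "user-123", agentId: "assistant", sessionId: "session-abc" };

const memoryMiddleware: LanguageModelV2Middleware = {
    // Steps 1 & 2: fetch the user's snapshot and prepend it as a system message
    // (the real provider appends it to the existing system prompt)
    transformParams: async ({ params }) => {
        const res = await fetch(
            `${cfg.baseUrl}/api/cognitive/snapshot?userId=${ids.userId}` +
                `&agentId=${ids.agentId}&appId=${cfg.appId}`
        );
        const memoryContext = await res.text(); // assumed: the <MemoryContext> block
        return {
            ...params,
            prompt: [{ role: "system", content: memoryContext }, ...params.prompt]
        };
    },
    // Steps 3 & 4: log the exchange, then trigger processing after a short delay
    wrapGenerate: async ({ doGenerate, params }) => {
        const result = await doGenerate();
        await fetch(`${cfg.baseUrl}/api/cognitive/log`, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ ...ids, appId: cfg.appId, prompt: params.prompt })
        });
        setTimeout(() => {
            void fetch(`${cfg.baseUrl}/api/cognitive/process`, { method: "POST" });
        }, 500); // mirrors the default processDelayMs
        return result;
    }
};

const model = wrapLanguageModel({ model: openai("gpt-4o"), middleware: memoryMiddleware });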

Memory Context Format

The injected memory context follows this structure:

<MemoryContext>
Use the following memory to stay consistent. Prefer UserContext facts for answers; AgentHeuristics guide style, safety, and priorities.
<AgentHeuristics>
- User prefers concise responses
- Always greet user by name
</AgentHeuristics>
<UserContext>
<Facts>
- User's name is John
- Favorite color is blue
</Facts>
<State>
- Currently working on a project
</State>
</UserContext>
</MemoryContext>
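
For reference, a formatter that emits this structure might look like the sketch below. The MemorySnapshot shape and the buildMemoryContext helper are hypothetical, chosen only to make the format concrete; they are not exports of this package.

interface MemorySnapshot {
    agentHeuristics: string[];
    facts: string[];
    state: string[];
}

function buildMemoryContext(snapshot: MemorySnapshot): string {
    // Render each group as a "- item" bullet list, matching the format above
    const bullets = (items: string[]) => items.map((item) => `- ${item}`).join("\n");
    return [
        "<MemoryContext>",
        "Use the following memory to stay consistent. Prefer UserContext facts for answers; AgentHeuristics guide style, safety, and priorities.",
        `<AgentHeuristics>\n${bullets(snapshot.agentHeuristics)}\n</AgentHeuristics>`,
        `<UserContext>\n<Facts>\n${bullets(snapshot.facts)}\n</Facts>`,
        `<State>\n${bullets(snapshot.state)}\n</State>\n</UserContext>`,
        "</MemoryContext>"
    ].join("\n");
}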

Backend API Integration

The provider communicates with your Kognitive backend via these endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| /api/cognitive/snapshot | GET | Fetches user's memory snapshot |
| /api/cognitive/log | POST | Logs conversation for processing |
| /api/cognitive/process | POST | Triggers memory extraction/management |

Query Parameters for Snapshot

GET /api/cognitive/snapshot?userId={userId}&agentId={agentId}&appId={appId}
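
To verify the endpoint by hand (see Troubleshooting below), a direct call with placeholder values is a quick sanity check; this is a manual test, not part of the package's API:

const res = await fetch(
    "http://localhost:3001/api/cognitive/snapshot?userId=user-123&agentId=assistant&appId=my-app"
);
console.log(res.status, await res.text());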

Troubleshooting

Memory not being injected

  1. Ensure userId and sessionId are provided
  2. Check that the backend is running at the configured baseUrl
  3. Verify the snapshot endpoint returns data

Console warnings

CognitiveLayer: sessionId is required to log and process memories; skipping logging until provided.

This warning appears when userId is provided but sessionId is missing. Add sessionId to enable logging.
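
The fix is to pass both identifiers when wrapping the model, as in the Quick Start:

const model = clModel("gpt-4o", {
    userId: "user-123",       // enables memory features
    sessionId: "session-abc"  // enables conversation logging
});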

Processing delay

The default 500ms delay before triggering memory processing allows database writes to settle. Adjust with processDelayMs:

clConfig: {
    processDelayMs: 1000 // 1 second delay
    // processDelayMs: 0 // Immediate processing
}

License

MIT