# @kognitivedev/agents

v0.2.8
AI agent framework with guardrails, memory, and multi-agent networks — built on Vercel AI SDK.
## Installation
```bash
bun add @kognitivedev/agents ai @ai-sdk/openai zod
```

## Quick Start

```ts
import { createAgent, tokenLimiter, contentFilter } from "@kognitivedev/agents";
import { openai } from "@ai-sdk/openai";

const agent = createAgent({
  name: "support",
  instructions: "You are a helpful support agent.",
  model: openai("gpt-4o"),
  tools: [searchTool],
  guardrails: [
    tokenLimiter({ maxTokens: 4000 }),
    contentFilter({ patterns: [/password/i], mode: "block" }),
  ],
  maxSteps: 5,
});

const result = await agent.generate({
  messages: [{ role: "user", content: "Help me" }],
  resourceId: { projectId: "demo" },
});
```

## Declaring instructions
The `instructions` field supports three formats:
| Type | Purpose |
|------|---------|
| `string` | Static system prompt |
| `(ctx) => string \| Promise<string>` | Dynamic prompt built per run |
| `PromptHubConfig` | Runtime-resolved prompt from Prompt Hub |
### Static string instructions

```ts
const agent = createAgent({
  name: "writer",
  instructions: "You are a professional copywriter.",
  model: openai("gpt-4o"),
});
```

### Function-based instructions
```ts
const agent = createAgent({
  name: "localized-agent",
  instructions: async (ctx) => {
    const locale = ctx.resourceId.userId === "fr-user" ? "fr-FR" : "en-US";
    return `You are a support agent. Reply in ${locale}.`;
  },
  model: openai("gpt-4o"),
});
```

### Prompt Hub instructions
Use a `PromptHubConfig` object when instructions should be resolved from `@kognitivedev/prompthub` at runtime.
```ts
const agent = createAgent({
  name: "support",
  instructions: {
    slug: "support-v2",
    tag: "production",
    variables: {
      brand: "Acme",
      tone: "formal",
    },
  },
  model: openai("gpt-4o"),
  apiKey: process.env.KOGNITIVE_API_KEY,
  baseUrl: "https://api.kognitive.dev",
});
```

`slug` is required; `tag` and `variables` are optional.
### Prompt variables and precedence
Definition-level `variables` are optional and can be overridden per run by `promptVariables` on `generate`, `stream`, or `streamWithModes`.
```ts
await agent.generate({
  messages: [{ role: "user", content: "Draft a reply" }],
  resourceId: { projectId: "demo", userId: "user_1" },
  promptVariables: {
    brand: "Acme",
    tone: "casual",
  },
});
```

Merge order is:

1. definition `variables`
2. run-level `promptVariables` (overrides duplicates)
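The precedence can be pictured as a plain object spread. This is only an illustrative sketch (the variable names come from the examples above; the library's internal merge may differ):

```ts
// Illustrative sketch of the merge order: run-level promptVariables
// win over definition-level variables on duplicate keys.
const definitionVariables = { brand: "Acme", tone: "formal" };
const runPromptVariables = { tone: "casual" };

// Later spread entries override earlier ones.
const merged = { ...definitionVariables, ...runPromptVariables };

console.log(merged); // { brand: "Acme", tone: "casual" }
```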
### Resolved prompt metadata
When using Prompt Hub instructions, metadata from the backend resolution is attached to runtime context:

- `ctx.resolvedPrompt` inside agent hooks
- `prepare()` result metadata at `result.resolvedPrompt`
```ts
const result = await agent.prepare({
  resourceId: { projectId: "demo", userId: "user_1" },
});
console.log(result.resolvedPrompt);
// { promptId, slug, version, tag?, abTestId?, variant? }
```

### Prompt Hub requirements
Prompt Hub mode needs backend credentials:

- `apiKey` (required)
- `baseUrl` (optional, defaults to `http://localhost:3001`)

Set them directly on the agent, or inherit them from a Kognitive registry. Missing credentials throw:

```
Agent "<name>" uses prompt hub (slug: "<slug>") but no apiKey is configured...
```
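A minimal sketch of that credential check, assuming the env var name from the earlier example (`promptHubConfig` is a hypothetical helper, not a library export):

```ts
// Hypothetical helper: resolve Prompt Hub credentials, applying the
// documented default baseUrl and failing fast when apiKey is missing.
function promptHubConfig(env: Record<string, string | undefined>) {
  const apiKey = env.KOGNITIVE_API_KEY;
  if (!apiKey) {
    throw new Error("prompt hub is configured but no apiKey is set");
  }
  return {
    apiKey,
    baseUrl: env.KOGNITIVE_BASE_URL ?? "http://localhost:3001", // documented default
  };
}

console.log(promptHubConfig({ KOGNITIVE_API_KEY: "sk-test" }));
// { apiKey: "sk-test", baseUrl: "http://localhost:3001" }
```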
## Features
- `createAgent` — orchestrates AI SDK `streamText`/`generateText` with memory + tools
- Guardrails — 6 built-in (`tokenLimiter`, `contentFilter`, `maxMessageLength`, `outputContentFilter`, `asyncLogger`, `judgeGuardrail`) + composition (`chain`, `all`, `toAsync`)
- Networks — multi-agent routing via `createAgentNetwork()`
- `prepare()` — escape hatch returning raw AI SDK inputs + resolved prompt metadata (`result.resolvedPrompt`)
- Memory — automatic snapshot injection from cognitive backend
- Multi-mode streaming — `streamWithModes()` emits `values`, `updates`, `messages`, `custom`, `debug`
- Double-texting — 4 strategies for handling concurrent requests: reject, queue, interrupt, rollback
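Guardrail composition can be pictured as function chaining. A sketch under the assumption that a guardrail maps input text to (possibly transformed) output text; the real `Guardrail` type and `chain` signature may differ:

```ts
// Illustrative guardrail model, not the library's actual types.
type Guardrail = (input: string) => string;

// chain: run guardrails left to right, feeding each one's output
// into the next — the composition idea behind chain().
const chain = (...guards: Guardrail[]): Guardrail =>
  (input) => guards.reduce((text, guard) => guard(text), input);

const contentFilter = (pattern: RegExp): Guardrail =>
  (text) => text.replace(pattern, "[blocked]");

const maxMessageLength = (max: number): Guardrail =>
  (text) => text.slice(0, max);

const guarded = chain(contentFilter(/password/i), maxMessageLength(40));
console.log(guarded("what is the admin password?"));
// "what is the admin [blocked]?"
```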
## Multi-Mode Streaming
Beyond the default `stream()` (compatible with `useChat()`), use `streamWithModes()` for richer event streams:
```ts
const eventStream = await agent.streamWithModes({
  messages: [{ role: "user", content: "Hello" }],
  resourceId: { projectId: "demo" },
  streamModes: ["messages", "debug"],
});
// eventStream is ReadableStream<StreamEvent>
// Events: { event: "messages", data: { token: "Hi" } }
//         { event: "debug", data: { type: "tool_call", ... } }
```

Stream modes:
| Mode | Events |
|------|--------|
| `messages` | Token-by-token LLM output |
| `values` | Full state snapshot after each step |
| `updates` | State deltas only |
| `debug` | Tool calls, tool results, step lifecycle |
| `custom` | Application-specific events |
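Consuming the event stream is plain `ReadableStream` reading. A self-contained sketch with a mocked stream standing in for the result of `streamWithModes()`:

```ts
// StreamEvent shape as shown in the examples above.
type StreamEvent = { event: string; data: Record<string, unknown> };

// Mocked stream for illustration; the real one comes from streamWithModes().
const eventStream = new ReadableStream<StreamEvent>({
  start(controller) {
    controller.enqueue({ event: "messages", data: { token: "Hi" } });
    controller.enqueue({ event: "debug", data: { type: "tool_call" } });
    controller.close();
  },
});

// Standard reader loop: read until done, dispatching on event type.
const reader = eventStream.getReader();
const seen: string[] = [];
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  seen.push(value.event);
}
console.log(seen); // [ 'messages', 'debug' ]
```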
## Double-Texting
Handle concurrent user inputs on the same session via the runtime API. In the request body:

```json
{
  "messages": [...],
  "sessionId": "session-123",
  "doubleTexting": { "strategy": "reject" }
}
```

| Strategy | Behavior |
|----------|----------|
| `reject` | Return 409 if a run is already active |
| `queue` | Wait for current run, then execute sequentially |
| `interrupt` | Abort current run, start new one |
| `rollback` | Same as interrupt for agents |
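The `reject` strategy can be sketched as a per-session lock. This is purely illustrative; the runtime implements it internally:

```ts
// Illustrative per-session lock behind the "reject" strategy:
// a second request for a session with an active run gets a 409.
const activeRuns = new Set<string>();

function startRun(sessionId: string): number {
  if (activeRuns.has(sessionId)) return 409; // run already active
  activeRuns.add(sessionId);
  return 200;
}

function finishRun(sessionId: string): void {
  activeRuns.delete(sessionId);
}

console.log(startRun("session-123")); // 200
console.log(startRun("session-123")); // 409, concurrent request rejected
finishRun("session-123");
console.log(startRun("session-123")); // 200 again
```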
