
@kognitivedev/agents

v0.2.8

AI agent framework with guardrails, memory, and multi-agent networks — built on Vercel AI SDK.

Installation

bun add @kognitivedev/agents ai @ai-sdk/openai zod

Quick Start

import { createAgent, tokenLimiter, contentFilter } from "@kognitivedev/agents";
import { openai } from "@ai-sdk/openai";

const agent = createAgent({
  name: "support",
  instructions: "You are a helpful support agent.",
  model: openai("gpt-4o"),
  tools: [searchTool], // searchTool: a tool you define elsewhere, e.g. with the AI SDK's tool() helper
  guardrails: [
    tokenLimiter({ maxTokens: 4000 }),
    contentFilter({ patterns: [/password/i], mode: "block" }),
  ],
  maxSteps: 5,
});

const result = await agent.generate({
  messages: [{ role: "user", content: "Help me" }],
  resourceId: { projectId: "demo" },
});

Instructions can be declared in three ways

The instructions field supports three formats:

| Type | Purpose |
|------|---------|
| string | Static system prompt |
| (ctx) => string \| Promise<string> | Dynamic prompt built per run |
| PromptHubConfig | Runtime-resolved prompt from Prompt Hub |

Static string instructions

const agent = createAgent({
  name: "writer",
  instructions: "You are a professional copywriter.",
  model: openai("gpt-4o"),
});

Function-based instructions

const agent = createAgent({
  name: "localized-agent",
  instructions: async (ctx) => {
    const locale = ctx.resourceId.userId === "fr-user" ? "fr-FR" : "en-US";
    return `You are a support agent. Reply in ${locale}.`;
  },
  model: openai("gpt-4o"),
});

Prompt Hub instructions

Use a PromptHubConfig object when instructions should be resolved from @kognitivedev/prompthub at runtime.

const agent = createAgent({
  name: "support",
  instructions: {
    slug: "support-v2",
    tag: "production",
    variables: {
      brand: "Acme",
      tone: "formal",
    },
  },
  model: openai("gpt-4o"),
  apiKey: process.env.KOGNITIVE_API_KEY,
  baseUrl: "https://api.kognitive.dev",
});

slug is required. tag and variables are optional.
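For reference, the example above implies a config shape along these lines. This is a sketch inferred from the README's fields, not the package's exported type definition:

```typescript
// Assumed shape of PromptHubConfig, inferred from the example above.
// Field names follow the README; the value types are assumptions.
type PromptHubConfig = {
  slug: string;                       // required: identifies the prompt
  tag?: string;                       // optional: e.g. "production"
  variables?: Record<string, string>; // optional definition-level variables
};

const config: PromptHubConfig = {
  slug: "support-v2",
  tag: "production",
  variables: { brand: "Acme", tone: "formal" },
};
```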

Prompt variables and precedence

Definition-level variables are optional, and can be overridden per run by promptVariables on generate, stream, or streamWithModes.

await agent.generate({
  messages: [{ role: "user", content: "Draft a reply" }],
  resourceId: { projectId: "demo", userId: "user_1" },
  promptVariables: {
    brand: "Acme",
    tone: "casual",
  },
});

Merge order is:

  1. definition variables
  2. run-level promptVariables (overrides duplicates)
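The precedence behaves like a plain object spread, with run-level values winning on duplicate keys. A minimal sketch of the idea (illustrative only; the runtime's actual merge may differ in details):

```typescript
// Merge order sketch: definition-level variables first, then run-level
// promptVariables override any duplicate keys. Values are illustrative.
const definitionVariables = { brand: "Acme", tone: "formal" };
const runPromptVariables = { tone: "casual" };

const merged = { ...definitionVariables, ...runPromptVariables };
// merged is { brand: "Acme", tone: "casual" }
```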

Resolved prompt metadata

When using Prompt Hub instructions, metadata from the backend resolution is attached to runtime context:

  • ctx.resolvedPrompt inside agent hooks
  • prepare() result metadata at result.resolvedPrompt

const result = await agent.prepare({
  resourceId: { projectId: "demo", userId: "user_1" },
});

console.log(result.resolvedPrompt);
// { promptId, slug, version, tag?, abTestId?, variant? }

Prompt Hub requirements

Prompt Hub mode needs backend credentials:

  • apiKey (required)
  • baseUrl (optional, defaults to http://localhost:3001)

Set them directly on the agent, or inherit from a Kognitive registry. Missing credentials throw:

Agent "<name>" uses prompt hub (slug: "<slug>") but no apiKey is configured...

Features

  • createAgent — orchestrates AI SDK streamText/generateText with memory + tools
  • Guardrails — 6 built-in (tokenLimiter, contentFilter, maxMessageLength, outputContentFilter, asyncLogger, judgeGuardrail) + composition (chain, all, toAsync)
  • Networks — multi-agent routing via createAgentNetwork()
  • prepare() — escape hatch returning raw AI SDK inputs + resolved prompt metadata (result.resolvedPrompt)
  • Memory — automatic snapshot injection from cognitive backend
  • Multi-mode streaming — streamWithModes() emits values, updates, messages, custom, debug
  • Double-texting — 4 strategies for handling concurrent requests: reject, queue, interrupt, rollback
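To make the guardrail idea concrete, here is a conceptual sketch of a content filter in "block" mode: scan input against blocked patterns before the model runs. This illustrates the concept only; it is not the package's contentFilter implementation:

```typescript
// Conceptual sketch of a blocking content-filter guardrail. Names and the
// result shape are illustrative, not the package's API.
type GuardrailResult = { ok: true } | { ok: false; reason: string };

function contentFilterSketch(patterns: RegExp[]) {
  return (input: string): GuardrailResult => {
    for (const pattern of patterns) {
      if (pattern.test(input)) {
        return { ok: false, reason: `input matched blocked pattern ${pattern}` };
      }
    }
    return { ok: true };
  };
}

const filter = contentFilterSketch([/password/i]);
```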

Multi-Mode Streaming

Beyond the default stream() (compatible with useChat()), use streamWithModes() for richer event streams:

const eventStream = await agent.streamWithModes({
  messages: [{ role: "user", content: "Hello" }],
  resourceId: { projectId: "demo" },
  streamModes: ["messages", "debug"],
});

// eventStream is ReadableStream<StreamEvent>
// Events: { event: "messages", data: { token: "Hi" } }
//         { event: "debug", data: { type: "tool_call", ... } }

Stream modes:

| Mode | Events |
|------|--------|
| messages | Token-by-token LLM output |
| values | Full state snapshot after each step |
| updates | State deltas only |
| debug | Tool calls, tool results, step lifecycle |
| custom | Application-specific events |
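A consumer of such an event stream might look like the sketch below. The StreamEvent shape and the sample events are assumptions based on the comments above, not the package's exported types:

```typescript
// Illustrative consumer for a multi-mode event stream (assumed event shape).
type StreamEvent = { event: string; data: Record<string, unknown> };

async function collectTokens(stream: ReadableStream<StreamEvent>): Promise<string> {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value.event === "messages") text += value.data.token as string;
    // "debug" / "values" / "updates" events could be dispatched here instead
  }
  return text;
}

// Stand-in for the stream returned by agent.streamWithModes(...)
const sample = new ReadableStream<StreamEvent>({
  start(controller) {
    controller.enqueue({ event: "messages", data: { token: "Hi" } });
    controller.enqueue({ event: "debug", data: { type: "tool_call" } });
    controller.enqueue({ event: "messages", data: { token: " there" } });
    controller.close();
  },
});
```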

Double-Texting

Handle concurrent user inputs on the same session via the runtime API:

// In request body:
{
  "messages": [...],
  "sessionId": "session-123",
  "doubleTexting": { "strategy": "reject" }
}

| Strategy | Behavior |
|----------|----------|
| reject | Return 409 if a run is already active |
| queue | Wait for current run, then execute sequentially |
| interrupt | Abort current run, start new one |
| rollback | Same as interrupt for agents |
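The reject and queue strategies can be pictured as an in-memory per-session lock, as in the sketch below. This is illustrative only, not the runtime's implementation:

```typescript
// Sketch of "reject" and "queue" double-texting strategies using a
// per-session promise chain. Names are illustrative.
const activeRuns = new Map<string, Promise<unknown>>();

async function runWithStrategy<T>(
  sessionId: string,
  strategy: "reject" | "queue",
  run: () => Promise<T>,
): Promise<T> {
  const current = activeRuns.get(sessionId);
  if (current && strategy === "reject") {
    throw new Error("409 Conflict: a run is already active for this session");
  }
  // queue: chain after the current run (if any) so runs execute sequentially
  const next = (current ?? Promise.resolve())
    .catch(() => {}) // a failed predecessor should not poison the queue
    .then(run);
  activeRuns.set(sessionId, next);
  try {
    return await next;
  } finally {
    if (activeRuns.get(sessionId) === next) activeRuns.delete(sessionId);
  }
}
```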