@cogitator-ai/core (v0.17.5)

Core runtime for Cogitator AI agents. Build and run LLM-powered agents with tool calling, streaming, reflection, Tree-of-Thought reasoning, learning optimization, and time-travel debugging.

Installation

pnpm add @cogitator-ai/core

Quick Start

import { Cogitator, Agent, tool } from '@cogitator-ai/core';
import { z } from 'zod';

const calculator = tool({
  name: 'calculator',
  description: 'Evaluate a math expression',
  parameters: z.object({
    expression: z.string(),
  }),
  execute: async ({ expression }) => {
    // eval keeps this demo short; never use eval on untrusted input
    return { result: eval(expression) };
  },
});

const agent = new Agent({
  name: 'math-assistant',
  instructions: 'You are a helpful math assistant',
  model: 'openai/gpt-4o',
  tools: [calculator],
});

const cog = new Cogitator();
const result = await cog.run(agent, {
  input: 'What is 25 * 4?',
});

console.log(result.output);

Features

  • Multi-Provider LLM Support - Ollama, OpenAI, Anthropic, Google, vLLM
  • Type-Safe Tools - Zod-validated tool definitions
  • Streaming Responses - Real-time token streaming
  • Memory Integration - Redis, PostgreSQL, in-memory adapters
  • 26 Built-in Tools - Web search, SQL, email, GitHub, filesystem, and more
  • Reflection Engine - Self-improvement through tool call analysis
  • Tree-of-Thought - Advanced reasoning with branch exploration
  • Agent Optimizer - DSPy-style learning from traces
  • Time Travel - Checkpoint, replay, fork, and compare executions
  • Causal Reasoning - Pearl's do-calculus, counterfactuals, d-separation
  • Resilience - Retry, circuit breaker, and fallback patterns
  • Observability - Full tracing with spans and callbacks

LLM Backends

Supported Providers

// Ollama (local, default)
const cog = new Cogitator({ defaultModel: 'ollama/llama3.1:8b' });

// OpenAI
const cog = new Cogitator({ defaultModel: 'openai/gpt-4o' });

// Anthropic Claude
const cog = new Cogitator({ defaultModel: 'anthropic/claude-sonnet-4-5-20250929' });

// Google Gemini
const cog = new Cogitator({ defaultModel: 'google/gemini-2.5-flash' });

// vLLM
const cog = new Cogitator({ defaultModel: 'vllm/mistral-7b' });

Backend Configuration

import { Cogitator, OllamaBackend, OpenAIBackend, AnthropicBackend } from '@cogitator-ai/core';

const cog = new Cogitator({
  llm: {
    defaultProvider: 'openai',
    ollama: {
      baseUrl: 'http://localhost:11434',
    },
    openai: {
      apiKey: process.env.OPENAI_API_KEY,
      organization: 'org-xxx',
    },
    anthropic: {
      apiKey: process.env.ANTHROPIC_API_KEY,
    },
    google: {
      apiKey: process.env.GOOGLE_API_KEY,
    },
  },
});

Direct Backend Usage

import { createLLMBackend, parseModel } from '@cogitator-ai/core';

const backend = createLLMBackend('openai', { apiKey: process.env.OPENAI_API_KEY });

const response = await backend.chat({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are helpful.' },
    { role: 'user', content: 'Hello!' },
  ],
});

Agent Configuration

import { Agent } from '@cogitator-ai/core';

const agent = new Agent({
  id: 'custom-id',
  name: 'research-assistant',
  instructions: 'You research topics thoroughly',
  model: 'openai/gpt-4o',
  tools: [webSearch, calculator],

  temperature: 0.7,
  topP: 0.9,
  maxTokens: 4096,
  maxIterations: 15,
  timeout: 120_000,
  stopSequences: ['DONE'],
});

// Clone with modifications
const variant = agent.clone({
  name: 'fast-assistant',
  temperature: 0.3,
  maxTokens: 1024,
});

Tools

Creating Tools

import { tool } from '@cogitator-ai/core';
import { z } from 'zod';

const weatherTool = tool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: z.object({
    city: z.string().describe('City name'),
    units: z.enum(['celsius', 'fahrenheit']).optional(),
  }),
  execute: async ({ city, units = 'celsius' }, context) => {
    console.log(`Run ID: ${context.runId}`);
    return { temperature: 22, units, city };
  },
});

Tool Context

Every tool receives a context object:

interface ToolContext {
  agentId: string;
  runId: string;
  signal: AbortSignal;
}
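A sketch (not from the package docs) of an execute handler that honors `context.signal`, so long-running work stops when a run is cancelled or times out. The types mirror the ToolContext interface above; `slowLookup` is a hypothetical handler:

```typescript
interface ToolContext {
  agentId: string;
  runId: string;
  signal: AbortSignal;
}

async function slowLookup(
  { city }: { city: string },
  context: ToolContext
): Promise<{ city: string; cancelled: boolean }> {
  // Check before starting expensive work
  if (context.signal.aborted) return { city, cancelled: true };
  // In a real tool, forward the signal to downstream I/O,
  // e.g. fetch(url, { signal: context.signal })
  return { city, cancelled: false };
}
```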

Sandboxed Tools

Execute tools in isolated Docker or WASM environments:

const shellTool = tool({
  name: 'run_shell',
  description: 'Execute shell commands safely',
  parameters: z.object({
    command: z.string(),
  }),
  sandbox: {
    type: 'docker',
    image: 'ubuntu:22.04',
  },
  timeout: 30000,
  execute: async ({ command }) => command,
});

Tool Registry

import { ToolRegistry } from '@cogitator-ai/core';

const registry = new ToolRegistry();

registry.register(calculator);
registry.registerMany([datetime, webSearch, fileRead]);

const calc = registry.get('calculator'); // avoid shadowing the tool() factory
const names = registry.getNames();
const schemas = registry.getSchemas();

Built-in Tools

Utility Tools

| Tool | Description |
| --------------- | -------------------------------- |
| calculator | Evaluate math expressions |
| datetime | Get current date/time |
| uuid | Generate UUIDs |
| randomNumber | Random number generation |
| randomString | Random string generation |
| hash | Hash strings (md5, sha256, etc.) |
| base64Encode | Encode to base64 |
| base64Decode | Decode from base64 |
| sleep | Delay execution |
| jsonParse | Parse JSON strings |
| jsonStringify | Stringify to JSON |
| regexMatch | Match regex patterns |
| regexReplace | Replace with regex |
| fileRead | Read file contents |
| fileWrite | Write to files |
| fileList | List directory contents |
| fileExists | Check if file exists |
| fileDelete | Delete files |
| httpRequest | Make HTTP requests |
| exec | Execute shell commands |
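As a point of reference, the base64 pair behaves like a standard Buffer round-trip in Node.js (illustrative only, not the package's implementation):

```typescript
// Illustrative: what base64Encode/base64Decode correspond to in Node.js
const encoded = Buffer.from('hello', 'utf8').toString('base64'); // 'aGVsbG8='
const decoded = Buffer.from(encoded, 'base64').toString('utf8'); // 'hello'
```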

Web & Search Tools

| Tool | Description |
| ----------- | -------------------------------------- |
| webSearch | Search the web (Tavily, Brave, Serper) |
| webScrape | Extract content from web pages |

Database Tools

| Tool | Description |
| -------------- | ------------------------------------------ |
| sqlQuery | Execute SQL queries (PostgreSQL, SQLite) |
| vectorSearch | Semantic search with embeddings (pgvector) |

Communication Tools

| Tool | Description |
| ----------- | ------------------------------ |
| sendEmail | Send emails (Resend API, SMTP) |

Development Tools

| Tool | Description |
| ----------- | ------------------------------------------------ |
| githubApi | GitHub API (issues, PRs, files, commits, search) |

import { builtinTools, calculator, datetime } from '@cogitator-ai/core';

const agent = new Agent({
  name: 'utility-agent',
  instructions: 'Use your tools to help users',
  model: 'openai/gpt-4o',
  tools: builtinTools,
});

Web Search Tool

Search the web using Tavily, Brave, or Serper APIs:

import { webSearch } from '@cogitator-ai/core';

// Auto-detects from TAVILY_API_KEY, BRAVE_API_KEY, or SERPER_API_KEY
const agent = new Agent({
  tools: [webSearch],
});

// Or specify provider explicitly in tool call
// provider: 'tavily' | 'brave' | 'serper'

Web Scrape Tool

Extract content from web pages:

import { webScrape } from '@cogitator-ai/core';

const agent = new Agent({
  tools: [webScrape],
});

// Supports CSS selectors, text/markdown/html output, link/image extraction

SQL Query Tool

Execute SQL queries against PostgreSQL or SQLite:

import { sqlQuery } from '@cogitator-ai/core';

const agent = new Agent({
  tools: [sqlQuery],
});

// Uses DATABASE_URL env var by default
// Supports parameterized queries for safety
// Read-only by default (SELECT, WITH, SHOW, DESCRIBE, EXPLAIN)

Dependencies: pg for PostgreSQL, better-sqlite3 for SQLite

Vector Search Tool

Semantic search using embeddings with pgvector:

import { vectorSearch } from '@cogitator-ai/core';

const agent = new Agent({
  tools: [vectorSearch],
});

// Embedding providers: OpenAI, Ollama, Google
// Auto-detects from OPENAI_API_KEY, OLLAMA_BASE_URL, or GOOGLE_API_KEY

Dependencies: pg with pgvector extension

Email Tool

Send emails via Resend API or SMTP:

import { sendEmail } from '@cogitator-ai/core';

const agent = new Agent({
  tools: [sendEmail],
});

// Resend: Set RESEND_API_KEY
// SMTP: Set SMTP_HOST, SMTP_USER, SMTP_PASS
// Supports HTML, CC/BCC, reply-to

Dependencies: nodemailer for SMTP

GitHub API Tool

Interact with GitHub repositories:

import { githubApi } from '@cogitator-ai/core';

const agent = new Agent({
  tools: [githubApi],
});

// Set GITHUB_TOKEN env var
// Actions: get_repo, list_issues, get_issue, create_issue, update_issue,
//          list_prs, get_pr, create_pr, get_file, list_commits,
//          search_code, search_issues

Run Options

import { Cogitator, Agent } from '@cogitator-ai/core';

const cog = new Cogitator();
const agent = new Agent({
  /* ... */
});

const result = await cog.run(agent, {
  input: 'Analyze this data...',

  threadId: 'thread_123',
  context: { userId: 'user_456', task: 'analysis' },

  timeout: 60000,
  stream: true,
  onToken: (token) => process.stdout.write(token),

  useMemory: true,
  loadHistory: true,
  saveHistory: true,

  onToolCall: (call) => console.log('Tool:', call.name),
  onToolResult: (result) => console.log('Result:', result.result),
  onSpan: (span) => console.log('Span:', span.name),
  onRunStart: ({ runId }) => console.log('Started:', runId),
  onRunComplete: (result) => console.log('Completed'),
  onRunError: (error) => console.error('Error:', error),
  onMemoryError: (error, op) => console.warn(`Memory ${op} failed`),
});

Run Result

interface RunResult {
  output: string;
  runId: string;
  agentId: string;
  threadId: string;
  usage: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
    cost: number;
    duration: number;
  };
  toolCalls: ToolCall[];
  messages: Message[];
  trace: { traceId: string; spans: Span[] };
  reflections?: Reflection[];
  reflectionSummary?: ReflectionSummary;
}
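Since every run reports usage, a small helper can summarize it. The numbers below are hypothetical sample data, and `summarizeUsage` is not part of the package:

```typescript
interface Usage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  cost: number;
  duration: number; // milliseconds
}

function summarizeUsage(usage: Usage): string {
  const tokensPerSec = usage.totalTokens / (usage.duration / 1000);
  return `${usage.totalTokens} tokens ($${usage.cost.toFixed(4)}) at ${tokensPerSec.toFixed(1)} tok/s`;
}

// Hypothetical sample, shaped like result.usage
const usage: Usage = { inputTokens: 120, outputTokens: 380, totalTokens: 500, cost: 0.0075, duration: 2000 };
console.log(summarizeUsage(usage)); // 500 tokens ($0.0075) at 250.0 tok/s
```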

Memory Integration

Configure memory for conversation persistence:

const cog = new Cogitator({
  memory: {
    adapter: 'redis',
    redis: {
      url: 'redis://localhost:6379',
      keyPrefix: 'cogitator:',
    },
    contextBuilder: {
      maxTokens: 4000,
      strategy: 'recent',
    },
  },
});

// Or PostgreSQL
const cog = new Cogitator({
  memory: {
    adapter: 'postgres',
    postgres: {
      connectionString: 'postgresql://...',
    },
  },
});

// Or in-memory
const cog = new Cogitator({
  memory: {
    adapter: 'memory',
    inMemory: {
      maxEntries: 1000,
    },
  },
});

Reflection Engine

Enable self-improvement through reflection on tool calls and runs:

import { Cogitator, ReflectionEngine, InMemoryInsightStore } from '@cogitator-ai/core';

const cog = new Cogitator({
  reflection: {
    enabled: true,
    reflectionModel: 'openai/gpt-4o-mini',
    reflectAfterToolCall: true,
    reflectAtEnd: true,
    minConfidenceToStore: 0.7,
    maxInsightsPerAgent: 50,
  },
});

const result = await cog.run(agent, { input: 'Complete this task...' });

console.log('Reflections:', result.reflections);
console.log('Summary:', result.reflectionSummary);

const insights = await cog.getInsights(agent.id);
console.log('Learned insights:', insights);

Standalone Reflection Engine

import { ReflectionEngine, InMemoryInsightStore, createLLMBackend } from '@cogitator-ai/core';

const backend = createLLMBackend('openai', { apiKey: '...' });
const insightStore = new InMemoryInsightStore();

const engine = new ReflectionEngine({
  llm: backend,
  insightStore,
  config: {
    reflectAfterToolCall: true,
    minConfidenceToStore: 0.7,
  },
});

const result = await engine.reflectOnToolCall(action, agentContext);
if (result.shouldAdjustStrategy) {
  console.log('Suggested action:', result.suggestedAction);
}

Tree-of-Thought Reasoning

Explore multiple reasoning paths before deciding:

import { ThoughtTreeExecutor, BranchGenerator, BranchEvaluator } from '@cogitator-ai/core';

const executor = new ThoughtTreeExecutor(cogitator, {
  maxBranches: 5,
  maxDepth: 3,
  explorationStrategy: 'best-first',
  pruneThreshold: 0.3,
});

const result = await executor.run(agent, {
  input: 'Solve this complex problem...',
  explorationBudget: 10,
});

console.log('Best path:', result.bestPath);
console.log('All branches explored:', result.tree.branches.length);
console.log('Stats:', result.stats);

ToT Configuration

const executor = new ThoughtTreeExecutor(cogitator, {
  maxBranches: 5,
  maxDepth: 3,
  explorationStrategy: 'breadth-first',
  pruneThreshold: 0.3,
  branchTemperature: 0.8,
  evaluationModel: 'openai/gpt-4o-mini',
});

Agent Optimizer (Learning)

DSPy-style optimization through execution traces:

import {
  AgentOptimizer,
  InMemoryTraceStore,
  createSuccessMetric,
  createContainsMetric,
} from '@cogitator-ai/core';

const traceStore = new InMemoryTraceStore();
const optimizer = new AgentOptimizer(cogitator, {
  traceStore,
  optimizationModel: 'openai/gpt-4o',
  maxCandidates: 5,
  evaluationRuns: 3,
});

const result = await optimizer.compile(agent, {
  demos: [
    { input: 'Calculate 2+2', expectedOutput: '4' },
    { input: 'Calculate 10*5', expectedOutput: '50' },
  ],
  metric: createSuccessMetric(),
  maxIterations: 10,
});

console.log('Optimized instructions:', result.optimizedAgent.instructions);
console.log('Improvement:', result.improvement);

Built-in Metrics

import {
  createSuccessMetric,
  createExactMatchMetric,
  createContainsMetric,
  MetricEvaluator,
} from '@cogitator-ai/core';

const successMetric = createSuccessMetric();

const exactMatch = createExactMatchMetric();

const containsMetric = createContainsMetric(['error', 'failed'], { negate: true });

Demo Selection

import { DemoSelector } from '@cogitator-ai/core';

const selector = new DemoSelector({
  strategy: 'diverse',
  maxDemos: 5,
});

const selectedDemos = selector.select(allDemos, currentInput);

Prompt Auto-Optimization

Capture prompts, run A/B tests, monitor performance, and automatically optimize agent instructions.

Prompt Logger

Wrap any LLM backend to capture all prompts:

import { wrapWithPromptLogger, PostgresTraceStore } from '@cogitator-ai/core';

const store = new PostgresTraceStore({
  connectionString: process.env.DATABASE_URL!,
});

const wrappedBackend = wrapWithPromptLogger(openaiBackend, store, {
  captureSystemPrompt: true,
  captureTools: true,
  captureResponse: true,
});

A/B Testing Framework

Test instruction variants with statistical analysis:

import { ABTestingFramework } from '@cogitator-ai/core';

const abTesting = new ABTestingFramework({
  store: abTestStore,
  defaultConfidenceLevel: 0.95,
  defaultMinSampleSize: 50,
});

const test = await abTesting.createTest({
  agentId: 'agent-1',
  name: 'Instruction Experiment',
  controlInstructions: 'You are a helpful assistant.',
  treatmentInstructions: 'You are an expert assistant. Be concise.',
  treatmentAllocation: 0.5,
});

await abTesting.startTest(test.id);

const variant = abTesting.selectVariant(test);
const instructions = abTesting.getInstructionsForVariant(test, variant);

await abTesting.recordResult(test.id, variant, score, latency, cost);

const { test: completed, outcome } = await abTesting.completeTest(test.id);
console.log('Winner:', outcome.winner);
console.log('p-value:', outcome.pValue);
console.log('Effect size:', outcome.effectSize);

Prompt Monitor

Real-time performance monitoring with degradation detection:

import { PromptMonitor } from '@cogitator-ai/core';

const monitor = new PromptMonitor({
  windowSize: 60 * 60 * 1000,
  scoreDropThreshold: 0.15,
  latencySpikeThreshold: 2.0,
  errorRateThreshold: 0.1,
  enableAutoRollback: true,
  onAlert: (alert) => {
    console.log(`Alert: ${alert.type} (${alert.severity})`);
  },
});

const alerts = monitor.recordExecution(trace);

const metrics = monitor.getCurrentMetrics('agent-1');
console.log('Avg score:', metrics.avgScore);
console.log('P95 latency:', metrics.p95Latency);
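The check behind `scoreDropThreshold` can be pictured as a baseline comparison; this is a sketch of the idea, not the monitor's actual internals:

```typescript
// Alert when the windowed average score falls more than `threshold`
// below the established baseline.
function scoreDropped(baseline: number, current: number, threshold: number): boolean {
  return baseline - current > threshold;
}
```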

Rollback Manager

Version control for agent instructions:

import { RollbackManager } from '@cogitator-ai/core';

const rollback = new RollbackManager({ store: versionStore });

const version = await rollback.deployVersion(
  'agent-1',
  'New optimized instructions',
  'optimization',
  'opt-run-123'
);

const result = await rollback.rollbackToPrevious('agent-1');
if (result.success) {
  console.log('Rolled back to version:', result.previousVersion?.version);
}

const history = await rollback.getVersionHistory('agent-1', 10);

Auto-Optimizer

Automated optimization pipeline with A/B testing and rollback:

import { AutoOptimizer } from '@cogitator-ai/core';

const optimizer = new AutoOptimizer({
  enabled: true,
  triggerAfterRuns: 100,
  minRunsForOptimization: 20,
  requireABTest: true,
  maxOptimizationsPerDay: 3,
  agentOptimizer,
  abTesting,
  monitor,
  rollbackManager,
  onOptimizationComplete: (run) => {
    console.log('Optimization completed:', run.status);
  },
  onRollback: (agentId, reason) => {
    console.log('Rollback triggered:', reason);
  },
});

await optimizer.recordExecution(trace);

Time Travel Debugging

Checkpoint, replay, fork, and compare agent executions:

import { TimeTravel, InMemoryCheckpointStore } from '@cogitator-ai/core';

const timeTravel = new TimeTravel(cogitator);

const result = await cogitator.run(agent, { input: 'Original task...' });
const checkpoints = await timeTravel.checkpointAll(result, 'original');

const replayResult = await timeTravel.replayLive(agent, checkpoints[2].id);
console.log('Replayed from step 2:', replayResult.result.output);

const forkResult = await timeTravel.fork(agent, checkpoints[2].id, {
  newInput: 'Modified task...',
});
console.log('Forked result:', forkResult.result.output);

const diff = await timeTravel.compareWithOriginal(forkResult);
console.log(timeTravel.formatDiff(diff));

Forking Variants

const forkWithContext = await timeTravel.forkWithContext(
  agent,
  checkpointId,
  'Additional context: the user is an expert'
);

const forkWithMock = await timeTravel.forkWithMockedTool(agent, checkpointId, 'api_call', {
  status: 'success',
  data: 'mocked data',
});

const forkWithMocks = await timeTravel.forkWithMockedTools(agent, checkpointId, {
  api_call: { status: 'success' },
  database_query: { rows: [] },
});

const forkWithNewInput = await timeTravel.forkWithNewInput(
  agent,
  checkpointId,
  'Completely different task...'
);

const variants = await timeTravel.forkMultiple(agent, checkpointId, [
  { newInput: 'Variant A' },
  { newInput: 'Variant B' },
  { additionalContext: 'Be more concise' },
]);

Replay Modes

const deterministicReplay = await timeTravel.replayDeterministic(agent, checkpointId);

const liveReplay = await timeTravel.replayLive(agent, checkpointId, {
  maxSteps: 5,
});

Causal Reasoning

Full causal reasoning framework implementing Pearl's Ladder of Causation:

  • Level 1 (Association): Observational queries P(Y|X)
  • Level 2 (Intervention): do-calculus P(Y|do(X))
  • Level 3 (Counterfactual): "What if" queries P(Y_x|X', Y')

Building Causal Graphs

import { CausalGraphBuilder, CausalInferenceEngine } from '@cogitator-ai/core';

const graph = CausalGraphBuilder.create('medical-study')
  .treatment('X', 'Drug Treatment')
  .outcome('Y', 'Recovery')
  .confounder('Z', 'Age')
  .from('Z')
  .causes('X')
  .from('Z')
  .causes('Y')
  .from('X')
  .causes('Y', { strength: 0.8 })
  .build();

const engine = new CausalInferenceEngine(graph);

Effect Identification

const identifiable = engine.isIdentifiable('X', 'Y');
if (identifiable.identifiable) {
  console.log('Effect is identifiable via:', identifiable.method);
  console.log('Adjustment set:', identifiable.adjustmentSet);
}

Interventional Queries

const effect = engine.computeInterventionalEffect({
  target: 'Y',
  interventions: { X: 1 },
  observed: { Z: 0.5 },
});

console.log('Expected effect:', effect.effect);
console.log('Confidence:', effect.confidence);

Counterfactual Reasoning

import { evaluateCounterfactual } from '@cogitator-ai/core';

const result = evaluateCounterfactual(graph, {
  target: 'Y',
  intervention: { X: 1 },
  factual: { X: 0, Y: 0.2 },
  question: 'What would Y be if X was 1?',
});

console.log('Factual value:', result.factualValue);
console.log('Counterfactual value:', result.counterfactualValue);

D-Separation Analysis

import { dSeparation, findBackdoorAdjustment } from '@cogitator-ai/core';

const separated = dSeparation(graph, 'X', 'Y', ['Z']);
console.log('D-separated:', separated.separated);

const backdoor = findBackdoorAdjustment(graph, 'X', 'Y');
if (backdoor?.isValid) {
  console.log('Backdoor adjustment set:', backdoor.variables);
}

Causal Discovery from Traces

import { CausalExtractor, CausalHypothesisGenerator } from '@cogitator-ai/core';

const extractor = new CausalExtractor({ llmBackend: backend });

const relations = await extractor.extractFromToolResult(
  { name: 'database_query', arguments: { table: 'users' } },
  { rows: 100, cached: true },
  { agentId: 'agent-1' }
);

const generator = new CausalHypothesisGenerator({ llmBackend: backend });
const hypotheses = await generator.generateFromFailure(trace, { agentId: 'agent-1' });

Error Handling & Resilience

Retry with Backoff

import { withRetry, retryable } from '@cogitator-ai/core';

const result = await withRetry(() => unreliableApiCall(), {
  maxRetries: 5,
  baseDelay: 1000,
  maxDelay: 30000,
  backoff: 'exponential',
  jitter: 0.1,
  onRetry: (error, attempt, delay) => {
    console.log(`Retry ${attempt} after ${delay}ms: ${error.message}`);
  },
});

const retryableFetch = retryable(fetch, { maxRetries: 3 });
const response = await retryableFetch('https://api.example.com');
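The schedule implied by the options above (exponential backoff capped at `maxDelay`; jitter omitted here for determinism) works out as follows, assuming a zero-based retry index, which is one common convention:

```typescript
// Exponential backoff: baseDelay * 2^attempt, capped at maxDelay
function backoffDelay(attempt: number, baseDelay: number, maxDelay: number): number {
  return Math.min(baseDelay * 2 ** attempt, maxDelay);
}

// attempt 0 → 1000ms, attempt 3 → 8000ms, attempt 6 → capped at 30000ms
```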

Circuit Breaker

import { CircuitBreaker, CircuitBreakerRegistry } from '@cogitator-ai/core';

const breaker = new CircuitBreaker({
  threshold: 5,
  resetTimeout: 30000,
  successThreshold: 2,
});

if (breaker.canExecute()) {
  try {
    const result = await riskyOperation();
    breaker.recordSuccess();
  } catch (error) {
    breaker.recordFailure();
    throw error;
  }
}

breaker.onStateChange((state) => {
  console.log('Circuit state:', state);
});

Prompt Injection Detection

Protect your agents from jailbreak attempts, prompt injections, and other adversarial inputs:

import { Cogitator, PromptInjectionDetector } from '@cogitator-ai/core';

// Standalone usage
const detector = new PromptInjectionDetector({
  detectInjection: true, // "Ignore previous instructions..."
  detectJailbreak: true, // DAN, developer mode attacks
  detectRoleplay: true, // Malicious roleplay scenarios
  detectEncoding: true, // Base64, hex encoded attacks
  detectContextManipulation: true, // [SYSTEM], <|im_start|> injections
  classifier: 'local', // 'local' (fast) or 'llm' (accurate)
  action: 'block', // 'block' | 'warn' | 'log'
  threshold: 0.7,
});

const result = await detector.analyze('Ignore all previous instructions and...');
// { safe: false, threats: [...], action: 'blocked', analysisTime: 2 }

// Integrated with Cogitator runtime
const cog = new Cogitator({
  security: {
    promptInjection: {
      detectInjection: true,
      detectJailbreak: true,
      action: 'block',
      threshold: 0.7,
    },
  },
});

// Throws PromptInjectionError if attack detected
await cog.run(agent, { input: userInput });

Detection Types

| Type | Examples |
| ---------------------- | ----------------------------------------------------- |
| direct_injection | "Ignore previous instructions", "Your new prompt is" |
| jailbreak | DAN prompts, "developer mode enabled", "unrestricted" |
| roleplay | "Pretend you are evil AI", "From now on you are" |
| encoding | Base64 payloads, hex escape sequences, unicode tricks |
| context_manipulation | [SYSTEM]:, <\|im_start\|>, markdown role markers |

Custom Patterns & Allowlist

const detector = new PromptInjectionDetector({
  action: 'block',
  patterns: [/secret\s+backdoor/i], // Custom regex patterns
  allowlist: ['ignore the previous search'], // Legitimate phrases
});

// Dynamic updates
detector.addPattern(/company\s+specific\s+attack/i);
detector.addToAllowlist('ignore previous results');

LLM-Based Classification

For higher accuracy with complex attacks:

const detector = new PromptInjectionDetector({
  classifier: 'llm',
  llmBackend: openaiBackend,
  llmModel: 'gpt-4o-mini',
  action: 'block',
});

Statistics & Callbacks

const detector = new PromptInjectionDetector({
  action: 'block',
  onThreat: (result, input) => {
    console.log('Attack detected:', result.threats);
    logToSecurity(input, result);
  },
});

await detector.analyze('...');
const stats = detector.getStats();
// { analyzed: 100, blocked: 5, warned: 0, allowRate: 0.95 }

Tool Caching

Cache tool results to avoid redundant API calls with exact or semantic matching:

Exact Match Caching

import { tool, withCache } from '@cogitator-ai/core';
import { z } from 'zod';

const webSearch = tool({
  name: 'web_search',
  description: 'Search the web',
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    return await searchApi(query);
  },
});

const cachedSearch = withCache(webSearch, {
  strategy: 'exact',
  ttl: '1h',
  maxSize: 1000,
  storage: 'memory',
});

await cachedSearch.execute({ query: 'weather in Paris' }, ctx);
await cachedSearch.execute({ query: 'weather in Paris' }, ctx); // cache hit

console.log(cachedSearch.cache.stats());
// { hits: 1, misses: 1, size: 1, evictions: 0, hitRate: 0.5 }
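Exact-match caching can be thought of as keying on the serialized parameters, so only identical params hit the cache. This is a sketch; the real key derivation may differ:

```typescript
// Hypothetical key scheme: tool name plus JSON-serialized params
const cacheKey = (toolName: string, params: unknown): string =>
  `${toolName}:${JSON.stringify(params)}`;

// cacheKey('web_search', { query: 'weather in Paris' })
// → 'web_search:{"query":"weather in Paris"}'
```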

Semantic Caching

Similar queries hit the cache based on embedding similarity:

import { withCache } from '@cogitator-ai/core';
import type { EmbeddingService } from '@cogitator-ai/types';

const embeddingService: EmbeddingService = {
  // Return the embedding vector itself, not the full API response
  embed: async (text) => {
    const res = await openai.embeddings.create({
      model: 'text-embedding-3-small',
      input: text,
    });
    return res.data[0].embedding;
  },
  embedBatch: async (texts) => {
    /* ... */
  },
  dimensions: 1536,
  model: 'text-embedding-3-small',
};

const cachedSearch = withCache(webSearch, {
  strategy: 'semantic',
  similarity: 0.95,         // 95% similarity threshold
  ttl: '1h',
  maxSize: 1000,
  storage: 'memory',
  embeddingService,
});

await cachedSearch.execute({ query: 'weather in Paris' }, ctx);
await cachedSearch.execute({ query: 'Paris weather forecast' }, ctx); // semantic hit
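The `similarity: 0.95` threshold above is compared against embedding similarity; cosine similarity is the usual choice. A sketch of that comparison:

```typescript
// Cosine similarity between two embedding vectors: dot product over
// the product of the vector norms, in [-1, 1] for real embeddings.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A cached entry is reused when the similarity to the new query's embedding meets the configured threshold.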

Redis Storage

For production with persistence:

import { withCache, RedisToolCacheStorage } from '@cogitator-ai/core';

const cachedTool = withCache(webSearch, {
  strategy: 'semantic',
  similarity: 0.95,
  ttl: '1h',
  maxSize: 1000,
  storage: 'redis',
  redisClient: redisClient, // ioredis compatible client
  keyPrefix: 'myapp:cache',
  embeddingService,
});

Cache Management

const cached = withCache(tool, config);

// Get statistics
const stats = cached.cache.stats();

// Invalidate specific entry
await cached.cache.invalidate({ query: 'specific query' });

// Clear all entries
await cached.cache.clear();

// Pre-warm cache
await cached.cache.warmup([
  { params: { query: 'common query 1' }, result: { data: '...' } },
  { params: { query: 'common query 2' }, result: { data: '...' } },
]);

Cache Callbacks

const cached = withCache(tool, {
  strategy: 'exact',
  ttl: '1h',
  maxSize: 100,
  storage: 'memory',
  onHit: (key, params) => console.log('Cache hit:', key),
  onMiss: (key, params) => console.log('Cache miss:', key),
  onEvict: (key) => console.log('Evicted:', key),
});

Fallback Patterns

import {
  withFallback,
  withGracefulDegradation,
  createLLMFallbackExecutor,
} from '@cogitator-ai/core';

const result = await withFallback(
  () => primaryCall(),
  () => fallbackCall()
);

const degraded = await withGracefulDegradation(
  () => fullFeatureCall(),
  [() => reducedFeatureCall(), () => minimalCall(), () => cachedResult()]
);

const llmExecutor = createLLMFallbackExecutor([
  { provider: 'openai', model: 'gpt-4o' },
  { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929' },
  { provider: 'ollama', model: 'llama3.3:70b' },
]);
const response = await llmExecutor.chat(request);

Logging

import { Logger, getLogger, setLogger, createLogger } from '@cogitator-ai/core';

const logger = createLogger({
  level: 'debug',
  prefix: '[MyApp]',
  timestamps: true,
});

setLogger(logger);

getLogger().info('Agent started', { agentId: agent.id });
getLogger().debug('Tool call', { tool: 'calculator', args: { expression: '2+2' } });
getLogger().warn('Rate limited', { retryAfter: 60 });
getLogger().error('Failed', { error: 'Connection timeout' });

Type Reference

Core Types

import type {
  Agent,
  AgentConfig,
  Tool,
  ToolConfig,
  ToolContext,
  Message,
  MessageRole,
  ToolCall,
  ToolResult,
  CogitatorConfig,
  RunOptions,
  RunResult,
  Span,
} from '@cogitator-ai/core';

LLM Types

import type {
  LLMBackend,
  LLMProvider,
  LLMConfig,
  ChatRequest,
  ChatResponse,
  ChatStreamChunk,
} from '@cogitator-ai/core';

Reasoning Types

import type {
  ToTConfig,
  ToTResult,
  ToTStats,
  ThoughtTree,
  ThoughtNode,
  ThoughtBranch,
  BranchScore,
  ExplorationStrategy,
} from '@cogitator-ai/core';

Learning Types

import type {
  ExecutionTrace,
  ExecutionStep,
  TraceStore,
  Demo,
  MetricFn,
  MetricResult,
  OptimizerConfig,
  OptimizationResult,
} from '@cogitator-ai/core';

Time Travel Types

import type {
  ExecutionCheckpoint,
  ReplayOptions,
  ReplayResult,
  ForkOptions,
  ForkResult,
  TraceDiff,
  TimeTravelConfig,
} from '@cogitator-ai/core';

Causal Types

import type {
  CausalNode,
  CausalEdge,
  CausalGraph,
  CausalRelationType,
  InterventionQuery,
  CounterfactualQuery,
  CausalHypothesis,
  CausalEvidence,
  StructuralEquation,
} from '@cogitator-ai/core';

Error Types

import { CogitatorError, ErrorCode, isRetryableError, getRetryDelay } from '@cogitator-ai/core';

try {
  await riskyOperation();
} catch (error) {
  if (error instanceof CogitatorError) {
    console.log('Code:', error.code);
    console.log('Retryable:', isRetryableError(error));
    console.log('Retry delay:', getRetryDelay(error, 1000));
  }
}

Examples

Research Agent with Memory

import { Cogitator, Agent, tool } from '@cogitator-ai/core';
import { z } from 'zod';

const webSearch = tool({
  name: 'web_search',
  description: 'Search the web',
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    return { results: await searchApi(query) };
  },
});

const cog = new Cogitator({
  memory: { adapter: 'redis', redis: { url: 'redis://localhost:6379' } },
  reflection: { enabled: true },
});

const researcher = new Agent({
  name: 'researcher',
  instructions: 'Research topics thoroughly using web search',
  model: 'openai/gpt-4o',
  tools: [webSearch],
});

const result = await cog.run(researcher, {
  input: 'Research the latest AI developments',
  threadId: 'research-session-1',
});

Streaming Response

const result = await cog.run(agent, {
  input: 'Write a story about...',
  stream: true,
  onToken: (token) => process.stdout.write(token),
});

Full Observability

const result = await cog.run(agent, {
  input: 'Analyze this data...',
  onRunStart: ({ runId }) => console.log(`Run ${runId} started`),
  onToolCall: (call) => console.log(`Calling ${call.name}`),
  onToolResult: (result) => console.log(`Result: ${JSON.stringify(result.result)}`),
  onSpan: (span) => {
    console.log(`[${span.name}] ${span.duration}ms`);
  },
  onRunComplete: (result) => {
    console.log(`Cost: $${result.usage.cost.toFixed(4)}`);
    console.log(`Tokens: ${result.usage.totalTokens}`);
  },
});

License

MIT