
@lapage/ai-agent v1.0.2
@lapage/ai-agent

A standalone, TypeScript-first AI Agent library built on LangChain. Wire up OpenAI or Anthropic models, register tools (including live MCP servers), add persistent memory, and query vector knowledge bases — all from a single AIAgent class.

Features

  • Multi-provider LLM support — OpenAI (GPT-4o, etc.) and Anthropic (Claude 3/3.5) via optional peer dependencies
  • MCP tool calling — connect to any Model Context Protocol server and auto-register all its tools
  • Custom tools — register functions with Zod or JSON Schema 7 input schemas
  • Conversation memory — in-memory buffer window, summary memory, or PostgreSQL-backed history
  • Vector knowledge bases — automatic RAG tools powered by pgvector + configurable embeddings
  • TypeScript-first — full type definitions, strict mode, exported interfaces for every option

Installation

npm install @lapage/ai-agent

Install the LLM provider you need (at least one required):

npm install @langchain/openai      # for OpenAI / GPT models
npm install @langchain/anthropic   # for Anthropic / Claude models

Quick Start

import { AIAgent } from '@lapage/ai-agent';

const agent = new AIAgent({
  model: {
    provider: 'openai',
    model: 'gpt-4o-mini',
  },
  systemMessage: 'You are a helpful assistant.',
});

const { output } = await agent.invoke('What is the capital of France?');
console.log(output); // Paris

Models

Configure the model via the model option:

// OpenAI
model: {
  provider: 'openai',
  model: 'gpt-4o',          // any OpenAI chat model
  temperature: 0.7,          // default: 0.7
  maxTokens: 1024,           // optional
  apiKey: 'sk-...',          // or set OPENAI_API_KEY env var
}

// Anthropic
model: {
  provider: 'anthropic',
  model: 'claude-3-5-sonnet-20241022',
  temperature: 0.5,
  apiKey: 'sk-ant-...',      // or set ANTHROPIC_API_KEY env var
}

| Field         | Type                      | Default | Description          |
| ------------- | ------------------------- | ------- | -------------------- |
| provider      | 'openai' \| 'anthropic'   | —       | LLM provider         |
| model         | string                    | —       | Model name           |
| temperature   | number                    | 0.7     | Sampling temperature |
| maxTokens     | number                    | —       | Max output tokens    |
| apiKey        | string                    | env var | Override API key     |


Tools

Custom tools — Zod schema

import { z } from 'zod';

agent.addTool({
  name: 'calculate',
  description: 'Evaluate a mathematical expression',
  schema: z.object({
    expression: z.string().describe('e.g. "12 * 4 + 7"'),
  }),
  // Note: eval is used here for brevity; never run it on untrusted input.
  func: ({ expression }) => String(eval(expression)),
});

Custom tools — JSON Schema

agent.addTool({
  name: 'get_user',
  description: 'Look up a user by ID',
  schema: {
    type: 'object',
    properties: {
      userId: { type: 'string', description: 'The user UUID' },
    },
    required: ['userId'],
  },
  func: async ({ userId }) => {
    const user = await db.users.findById(userId);
    return JSON.stringify(user);
  },
});

Add several tools at once with addTools([ ... ]).
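For example, two simple tools could be registered in one call. This is a sketch assuming an existing agent instance; the tool names and bodies are illustrative, not part of the library:

```typescript
import { z } from 'zod';

agent.addTools([
  {
    name: 'uppercase',
    description: 'Convert text to upper case',
    schema: z.object({ text: z.string() }),
    func: ({ text }) => text.toUpperCase(),
  },
  {
    name: 'word_count',
    description: 'Count the words in a piece of text',
    schema: z.object({ text: z.string() }),
    func: ({ text }) => String(text.trim().split(/\s+/).length),
  },
]);
```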


MCP Tool Calling

The agent can connect to any MCP server and automatically discover and register all the tools it exposes. No manual tool registration is required.

Single server

const agent = new AIAgent({
  model: { provider: 'openai', model: 'gpt-4o' },
  mcpServers: [
    {
      url: 'https://my-mcp-server.example.com/mcp',
    },
  ],
});

// Tools are fetched lazily on first use, or eagerly:
const tools = await agent.getToolsAsync();
console.log(tools.map(t => t.name));

const { output } = await agent.invoke('What can you do?');

Authentication

mcpServers: [
  {
    url: 'https://secure-mcp.example.com/mcp',
    headers: {
      Authorization: `Bearer ${process.env.MCP_API_TOKEN}`,
    },
  },
],

Arbitrary HTTP headers can be passed (API keys, custom auth schemes, etc.).

Tool filtering

By default all tools from the server are registered. Use toolMode to limit them:

// Only expose specific tools
{
  url: '...',
  toolMode: 'selected',
  includeTools: ['search_documents', 'get_invoice'],
}

// Expose everything except dangerous operations
{
  url: '...',
  toolMode: 'except',
  excludeTools: ['delete_record', 'drop_table'],
}

Legacy SSE transport

{
  url: 'https://legacy-mcp.example.com/sse',
  transport: 'sse',   // default is 'httpStreamable'
}

Multiple servers

mcpServers: [
  {
    url: 'https://crm-mcp.example.com/mcp',
    name: 'crm-client',
    headers: { Authorization: `Bearer ${process.env.CRM_TOKEN}` },
  },
  {
    url: 'https://payments-mcp.example.com/mcp',
    name: 'payments-client',
    toolMode: 'selected',
    includeTools: ['get_invoice', 'list_transactions'],
  },
],

All tools from every server are merged into a single pool available to the agent.

MCPServerConfig reference

| Field          | Type                              | Default                 | Description                              |
| -------------- | --------------------------------- | ----------------------- | ---------------------------------------- |
| url            | string                            | —                       | MCP server endpoint URL                  |
| transport      | 'httpStreamable' \| 'sse'         | 'httpStreamable'        | Wire protocol                            |
| headers        | Record<string, string>            | —                       | HTTP headers (auth, etc.)                |
| name           | string                            | 'ai-agent-mcp-client'   | Client identifier                        |
| timeout        | number                            | 60000                   | Tool call timeout (ms)                   |
| toolMode       | 'all' \| 'selected' \| 'except'   | 'all'                   | Tool filtering strategy                  |
| includeTools   | string[]                          | —                       | Tools to expose (toolMode: 'selected')   |
| excludeTools   | string[]                          | —                       | Tools to hide (toolMode: 'except')       |


Memory

Pass a memory option to retain conversation history across turns.

Buffer window memory

Keeps the last N exchanges in memory:

const agent = new AIAgent({
  model: { provider: 'openai', model: 'gpt-4o-mini' },
  memory: {
    type: 'buffer-window',
    contextWindowLength: 10,   // number of message pairs, default 10
    sessionId: 'user-123',     // optional; auto-generated if omitted
  },
});

await agent.invoke('My name is Alice.');
const reply = await agent.invoke('What is my name?'); // "Alice"

Conversation summary memory

The model summarises older messages, keeping context compact for long conversations:

memory: {
  type: 'summary',
  sessionId: 'user-456',
}

Summarisation uses the agent's configured model, so an extra LLM call is made periodically to compress older messages.

PostgreSQL-backed memory

Persist conversation history across process restarts:

memory: {
  type: 'postgres',
  sessionId: 'user-789',
  contextWindowLength: 20,
  postgresConfig: {
    host: 'localhost',
    port: 5432,
    database: 'mydb',
    user: 'pguser',
    password: 'secret',
    tableName: 'chat_messages',  // default: 'chat_messages'
  },
}

MemoryConfig reference

| Field                 | Type                                         | Default           | Description                       |
| --------------------- | -------------------------------------------- | ----------------- | --------------------------------- |
| type                  | 'buffer-window' \| 'summary' \| 'postgres'   | 'buffer-window'   | Memory strategy                   |
| sessionId             | string                                       | auto-generated    | Isolates history per user/session |
| contextWindowLength   | number                                       | 10                | Messages to keep in window        |
| postgresConfig        | PostgresConfig                               | —                 | Required for 'postgres' type      |

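Memory can also be reconfigured after construction with setupMemory (see the instance methods in the API reference). A sketch, assuming an existing agent and a local PostgreSQL instance:

```typescript
// Switch an existing agent from in-memory history to
// PostgreSQL-backed memory at runtime (illustrative values).
await agent.setupMemory({
  type: 'postgres',
  sessionId: 'user-123',
  postgresConfig: {
    host: 'localhost',
    port: 5432,
    database: 'mydb',
    user: 'pguser',
    password: 'secret',
  },
});
```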

Knowledge Bases

Attach one or more vector knowledge bases. The agent automatically gets a search tool per knowledge base and uses it to answer questions via RAG.

Requires PostgreSQL with the pgvector extension.

import { AIAgent } from '@lapage/ai-agent';

const agent = new AIAgent({
  model: { provider: 'openai', model: 'gpt-4o' },
  knowledgeBases: [
    {
      name: 'company_docs',
      description:
        'Company policies, HR procedures, and internal documentation. ' +
        'Use this when the user asks about company-related topics.',
      pgConfig: {
        host: 'localhost',
        port: 5432,
        database: 'vectordb',
        user: 'pguser',
        password: 'secret',
        tableName: 'company_docs',
      },
      embeddings: {
        provider: 'openai',
        model: 'text-embedding-3-small',
      },
      topK: 5,
      includeMetadata: true,
    },
  ],
});

const { output } = await agent.invoke('What is the parental leave policy?');

KnowledgeBaseConfig reference

| Field             | Type                  | Default | Description                                |
| ----------------- | --------------------- | ------- | ------------------------------------------ |
| name              | string                | —       | Unique name; becomes part of the tool name |
| description       | string                | —       | Helps the agent decide when to use this KB |
| pgConfig          | PGVectorConfig        | —       | PostgreSQL + pgvector connection           |
| embeddings        | EmbeddingsConfig      | —       | Embeddings model for search                |
| topK              | number                | 4       | Results to retrieve per query              |
| includeMetadata   | boolean               | true    | Include document metadata in results       |
| metadataFilter    | Record<string, any>   | —       | Optional filter applied to every search    |

EmbeddingsConfig reference

| Field      | Type                                    | Description                              |
| ---------- | --------------------------------------- | ---------------------------------------- |
| provider   | 'openai' \| 'cohere' \| 'huggingface'   | Embeddings provider                      |
| model      | string                                  | Model name                               |
| apiKey     | string                                  | Override API key (falls back to env var) |
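For instance, a knowledge base could be searched with Cohere embeddings instead of OpenAI. A config sketch; the model name is illustrative:

```typescript
embeddings: {
  provider: 'cohere',
  model: 'embed-english-v3.0',   // requires COHERE_API_KEY
}
```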


API Reference

new AIAgent(options)

| Option                    | Type                    | Default                         | Description                                 |
| ------------------------- | ----------------------- | ------------------------------- | ------------------------------------------- |
| model                     | ModelConfig             | —                               | Required. LLM to use                        |
| systemMessage             | string                  | 'You are a helpful assistant'   | System prompt                               |
| maxIterations             | number                  | 10                              | Max tool-calling loops before stopping      |
| returnIntermediateSteps   | boolean                 | true                            | Include step-by-step tool calls in response |
| memory                    | MemoryConfig            | —                               | Conversation memory                         |
| tools                     | ToolOptions[]           | —                               | Custom tools to register upfront            |
| mcpServers                | MCPServerConfig[]       | —                               | MCP servers to auto-register tools from     |
| knowledgeBases            | KnowledgeBaseConfig[]   | —                               | Vector knowledge bases                      |

Instance methods

| Method                        | Returns                    | Description                                    |
| ----------------------------- | -------------------------- | ---------------------------------------------- |
| invoke(input)                 | Promise<AIAgentResponse>   | Run the agent on a user message                |
| stream(input, onToken?)       | Promise<AIAgentResponse>   | Streaming-aware invoke (returns same response) |
| addTool(options)              | void                       | Register a custom tool                         |
| addTools(options[])           | void                       | Register multiple tools at once                |
| addLangChainTool(tool)        | void                       | Register a pre-built LangChain tool            |
| getTools()                    | Tool[]                     | Return currently registered tools (sync)       |
| getToolsAsync()               | Promise<Tool[]>            | Return tools after all async init completes    |
| removeTool(name)              | boolean                    | Remove a tool by name                          |
| clearTools()                  | void                       | Remove all tools                               |
| setupMemory(config)           | Promise<void>              | (Re-)configure memory after construction       |
| clearMemory()                 | Promise<void>              | Wipe the current session's history             |
| getConversationHistory()      | Promise<BaseMessage[]>     | Return raw message history                     |
| addToHistory(input, output)   | Promise<void>              | Manually append a turn to history              |
| setSystemMessage(msg)         | void                       | Update the system prompt                       |
| setMaxIterations(n)           | void                       | Update the iteration cap                       |

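stream mirrors invoke but also emits tokens through an optional callback as they are generated. A sketch, assuming an existing agent instance (the prompt is illustrative):

```typescript
const response = await agent.stream('Summarise our refund policy.', (token) => {
  process.stdout.write(token);   // print tokens as they arrive
});
console.log('\nFinal output:', response.output);
```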
AIAgentResponse

interface AIAgentResponse {
  output: string;                    // Final answer
  intermediateSteps?: AgentStep[];   // Tool calls (when returnIntermediateSteps: true)
  error?: string;                    // Set when invocation fails
}
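Since error is set instead of thrown when an invocation fails, callers may want to normalise the two cases. A hypothetical helper (not part of the library) that converts the error field into an exception:

```typescript
// Local copy of the response shape from the README.
interface AIAgentResponse {
  output: string;                    // Final answer
  intermediateSteps?: unknown[];     // Tool calls, when enabled
  error?: string;                    // Set when invocation fails
}

// Return the final answer, or throw if the invocation failed.
function unwrap(res: AIAgentResponse): string {
  if (res.error) {
    throw new Error(`Agent invocation failed: ${res.error}`);
  }
  return res.output;
}
```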

Environment Variables

| Variable                   | Provider                   |
| -------------------------- | -------------------------- |
| OPENAI_API_KEY             | OpenAI models & embeddings |
| ANTHROPIC_API_KEY          | Anthropic models           |
| COHERE_API_KEY             | Cohere embeddings          |
| HUGGINGFACEHUB_API_TOKEN   | HuggingFace embeddings     |


Development

# Install dependencies
npm install

# Build
npm run build

# Watch mode
npm run dev

# Run tests
npm test

# Run demo scripts (requires .env file with API keys)
npm run demo:openai
npm run demo:anthropic
npm run demo:postgres

Running examples

Copy .env.example to .env and fill in your keys, then:

# OpenAI examples (basic, memory, tools, MCP)
npm run demo:openai

# Anthropic examples
npm run demo:anthropic

# PostgreSQL memory / knowledge base demo
npm run demo:postgres

License

MIT © Huy Lan