mcp-agent-kit

The easiest way to create MCP servers, AI agents, and chatbots with any LLM


mcp-agent-kit is a TypeScript package that simplifies the creation of:

  • 🔌 MCP Servers (Model Context Protocol)
  • 🤖 AI Agents with multiple LLM providers
  • 🧠 Intelligent Routers for multi-LLM orchestration
  • 💬 Chatbots with conversation memory
  • 🌐 API Helpers with retry and timeout

Features

  • Zero Config: Works out of the box with smart defaults
  • Multi-Provider: OpenAI, Anthropic, Gemini, Ollama support
  • Type-Safe: Full TypeScript support with autocomplete
  • Production Ready: Built-in retry, timeout, and error handling
  • Developer Friendly: One-line setup for complex features
  • Extensible: Easy to add custom providers and middleware

Installation

npm install mcp-agent-kit

Quick Start

Create an AI Agent (1 line!)

import { createAgent } from "mcp-agent-kit";

const agent = createAgent({ provider: "openai" });
const response = await agent.chat("Hello!");
console.log(response.content);

Create an MCP Server (1 function!)

import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-server",
  tools: [
    {
      name: "get_weather",
      description: "Get weather for a location",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
      },
      handler: async ({ location }) => {
        return `Weather in ${location}: Sunny, 72°F`;
      },
    },
  ],
});

await server.start();

Create a Chatbot with Memory

import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, my name is John");
await bot.chat("What is my name?"); // Remembers context!

Documentation

AI Agents

Create intelligent agents that work with multiple LLM providers.

Basic Usage

import { createAgent } from "mcp-agent-kit";

const agent = createAgent({
  provider: "openai",
  model: "gpt-4-turbo-preview",
  temperature: 0.7,
  maxTokens: 2000,
});

const response = await agent.chat("Explain TypeScript");
console.log(response.content);

Supported Providers

| Provider  | Models               | API Key Required |
| --------- | -------------------- | ---------------- |
| OpenAI    | GPT-4, GPT-3.5       | ✅ Yes           |
| Anthropic | Claude 3.5, Claude 3 | ✅ Yes           |
| Gemini    | Gemini 2.0+          | ✅ Yes           |
| Ollama    | Local models         | ❌ No            |
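
Switching providers is just a matter of changing the provider option. A quick sketch (the Ollama model name here is illustrative, not a guaranteed default):

import { createAgent } from "mcp-agent-kit";

// Hosted providers read their API keys from the environment
const openai = createAgent({ provider: "openai" });
const claude = createAgent({ provider: "anthropic" });

// Ollama talks to a local install (OLLAMA_HOST), so no key is needed
const local = createAgent({ provider: "ollama", model: "llama3" });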

With Tools (Function Calling)

const agent = createAgent({
  provider: "openai",
  tools: [
    {
      name: "calculate",
      description: "Perform calculations",
      parameters: {
        type: "object",
        properties: {
          operation: { type: "string", enum: ["add", "subtract"] },
          a: { type: "number" },
          b: { type: "number" },
        },
        required: ["operation", "a", "b"],
      },
      handler: async ({ operation, a, b }) => {
        return operation === "add" ? a + b : a - b;
      },
    },
  ],
});

const response = await agent.chat("What is 15 + 27?");

With System Prompt

const agent = createAgent({
  provider: "anthropic",
  system: "You are an expert Python developer. Always provide code examples.",
});

Smart Tool Calling

Smart Tool Calling adds reliability and performance to tool execution with automatic retry, timeout, and caching.

Basic Configuration

const agent = createAgent({
  provider: "openai",
  toolConfig: {
    forceToolUse: true,      // Force model to use tools
    maxRetries: 3,           // Retry up to 3 times on failure
    toolTimeout: 30000,      // 30 second timeout
    onToolNotCalled: "retry", // Action when tool not called
  },
  tools: [...],
});

With Caching

const agent = createAgent({
  provider: "openai",
  toolConfig: {
    cacheResults: {
      enabled: true,
      ttl: 300000,    // Cache for 5 minutes
      maxSize: 100,   // Store up to 100 results
    },
  },
  tools: [...],
});

Direct Tool Execution

// Execute a tool directly with retry and caching
const result = await agent.executeTool("get_weather", {
  location: "San Francisco, CA",
});

Configuration Options

| Option                 | Type    | Default | Description                                                    |
| ---------------------- | ------- | ------- | -------------------------------------------------------------- |
| forceToolUse           | boolean | false   | Force the model to use tools when available                    |
| maxRetries             | number  | 3       | Maximum retry attempts on tool failure                         |
| onToolNotCalled        | string  | "retry" | Action when tool not called: "retry", "error", "warn", "allow" |
| toolTimeout            | number  | 30000   | Timeout for tool execution (ms)                                |
| cacheResults.enabled   | boolean | true    | Enable result caching                                          |
| cacheResults.ttl       | number  | 300000  | Cache time-to-live (ms)                                        |
| cacheResults.maxSize   | number  | 100     | Maximum cached results                                         |
| debug                  | boolean | false   | Enable debug logging                                           |

Complete Example

const agent = createAgent({
  provider: "openai",
  model: "gpt-4-turbo-preview",
  toolConfig: {
    forceToolUse: true,
    maxRetries: 3,
    onToolNotCalled: "retry",
    toolTimeout: 30000,
    cacheResults: {
      enabled: true,
      ttl: 300000,
      maxSize: 100,
    },
    debug: true,
  },
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
        required: ["location"],
      },
      handler: async ({ location }) => {
        // Your weather API logic
        return { location, temp: 72, condition: "Sunny" };
      },
    },
  ],
});

// Use in chat - tools are automatically called
const response = await agent.chat("What's the weather in NYC?");

// Or execute directly with retry and caching
const result = await agent.executeTool("get_weather", {
  location: "New York, NY",
});

MCP Servers

Create Model Context Protocol servers to expose tools and resources.

Basic MCP Server

import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-mcp-server",
  port: 7777,
  logLevel: "info",
});

await server.start(); // Starts on stdio by default

With Tools

const server = createMCPServer({
  name: "weather-server",
  tools: [
    {
      name: "get_weather",
      description: "Get current weather",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
          units: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
      handler: async ({ location, units = "celsius" }) => {
        // Your weather API logic here
        return { location, temp: 22, units, condition: "Sunny" };
      },
    },
  ],
});

With Resources

const server = createMCPServer({
  name: "data-server",
  resources: [
    {
      uri: "config://app-settings",
      name: "Application Settings",
      description: "Current app configuration",
      mimeType: "application/json",
      handler: async () => {
        return JSON.stringify({ version: "1.0.0", env: "production" });
      },
    },
  ],
});

WebSocket Transport

const server = createMCPServer({
  name: "ws-server",
  port: 8080,
});

await server.start("websocket"); // Use WebSocket instead of stdio

LLM Router

Route requests to different LLMs based on intelligent rules.

Basic Router

import { createLLMRouter } from "mcp-agent-kit";

const router = createLLMRouter({
  rules: [
    {
      when: (input) => input.length < 200,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
    {
      when: (input) => input.includes("code"),
      use: { provider: "anthropic", model: "claude-3-5-sonnet-20241022" },
    },
    {
      default: true,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
  ],
});

const response = await router.route("Write a function to sort an array");

With Fallback and Retry

const router = createLLMRouter({
  rules: [...],
  fallback: {
    provider: 'openai',
    model: 'gpt-4-turbo-preview'
  },
  retryAttempts: 3,
  logLevel: 'debug'
});

Router Statistics

const stats = router.getStats();
console.log(stats);
// { totalRules: 3, totalAgents: 2, hasFallback: true }

const agents = router.listAgents();
console.log(agents);
// ['openai:gpt-4-turbo-preview', 'anthropic:claude-3-5-sonnet-20241022']

Chatbots

Create conversational AI with automatic memory management.

Basic Chatbot

import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, I am learning TypeScript");
await bot.chat("Can you help me with interfaces?");
await bot.chat("Thanks!");

With Router

const bot = createChatbot({
  router: createLLMRouter({ rules: [...] }),
  maxHistory: 20
});

Memory Management

// Get conversation history
const history = bot.getHistory();

// Get statistics
const stats = bot.getStats();
console.log(stats);
// {
//   messageCount: 6,
//   userMessages: 3,
//   assistantMessages: 3,
//   oldestMessage: Date,
//   newestMessage: Date
// }

// Reset conversation
bot.reset();

// Update system prompt
bot.setSystemPrompt("You are now a Python expert");

API Requests

Simplified HTTP requests with automatic retry and timeout.

Basic Request

import { api } from "mcp-agent-kit";

const response = await api.get("https://api.example.com/data");
console.log(response.data);

POST Request

const response = await api.post(
  "https://api.example.com/users",
  { name: "John", email: "[email protected]" },
  {
    name: "create-user",
    headers: { "Content-Type": "application/json" },
  }
);

With Retry and Timeout

const response = await api.request({
  name: "important-request",
  url: "https://api.example.com/data",
  method: "GET",
  timeout: 10000, // 10 seconds
  retries: 5, // 5 attempts
  query: { page: 1, limit: 10 },
});

All HTTP Methods

await api.get(url, config);
await api.post(url, body, config);
await api.put(url, body, config);
await api.patch(url, body, config);
await api.delete(url, config);

Configuration

Environment Variables

All configuration is optional. Set these environment variables or pass them in code:

# MCP Server
MCP_SERVER_NAME=my-server
MCP_PORT=7777

# Logging
LOG_LEVEL=info  # debug | info | warn | error

# LLM API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OLLAMA_HOST=http://localhost:11434
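
The same values can also be supplied in code; for example, the apiKey option on createAgent overrides the environment lookup. A minimal sketch (MY_OPENAI_KEY is a placeholder for wherever you store the secret):

import { createAgent } from "mcp-agent-kit";

// Pass the key explicitly instead of relying on OPENAI_API_KEY
const agent = createAgent({
  provider: "openai",
  apiKey: process.env.MY_OPENAI_KEY,
});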

Using .env File

# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
LOG_LEVEL=debug

The package automatically loads .env files using dotenv.


Examples

Check out the /examples directory for complete working examples:

  • basic-agent.ts - Simple agent usage
  • smart-tool-calling.ts - Smart tool calling with retry and caching
  • mcp-server.ts - MCP server with tools and resources
  • mcp-server-websocket.ts - MCP server with WebSocket
  • llm-router.ts - Intelligent routing between LLMs
  • chatbot-basic.ts - Chatbot with conversation memory
  • chatbot-with-router.ts - Chatbot using router
  • api-requests.ts - HTTP requests with retry

Running Examples

# Install dependencies
npm install

# Run an example
npx ts-node examples/basic-agent.ts

API Reference

Agent API

createAgent(config: AgentConfig)

Creates a new AI agent instance.

Parameters:

  • provider (required): LLM provider - "openai", "anthropic", "gemini", or "ollama"
  • model (optional): Model name (defaults to provider's default)
  • temperature (optional): Sampling temperature 0-2 (default: 0.7)
  • maxTokens (optional): Maximum tokens in response (default: 2000)
  • apiKey (optional): API key (reads from env if not provided)
  • tools (optional): Array of tool definitions
  • system (optional): System prompt
  • toolConfig (optional): Smart tool calling configuration

Returns: Agent instance

Methods:

  • chat(message: string): Promise<AgentResponse> - Send a message and get response
  • executeTool(name: string, params: any): Promise<any> - Execute a tool directly

AgentResponse

Response object from agent.chat():

{
  content: string;           // Response text
  toolCalls?: Array<{        // Tools that were called
    name: string;
    arguments: any;
  }>;
  usage?: {                  // Token usage
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}
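
A short sketch of consuming this object, assuming an agent created as above. The optional fields are present only when tools were called or the provider reports usage, so guard before reading them:

const response = await agent.chat("What is 15 + 27?");

console.log(response.content); // The response text is always present

if (response.toolCalls) {
  for (const call of response.toolCalls) {
    console.log(`Tool called: ${call.name}`, call.arguments);
  }
}

if (response.usage) {
  console.log(`Tokens used: ${response.usage.totalTokens}`);
}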

MCP Server API

createMCPServer(config: MCPServerConfig)

Creates a new MCP server instance.

Parameters:

  • name (optional): Server name (default: from env or "mcp-server")
  • port (optional): Port number (default: 7777)
  • logLevel (optional): Log level - "debug", "info", "warn", "error"
  • tools (optional): Array of tool definitions
  • resources (optional): Array of resource definitions

Returns: MCP Server instance

Methods:

  • start(transport?: "stdio" | "websocket"): Promise<void> - Start the server

Router API

createLLMRouter(config: LLMRouterConfig)

Creates a new LLM router instance.

Parameters:

  • rules (required): Array of routing rules
  • fallback (optional): Fallback provider configuration
  • retryAttempts (optional): Number of retry attempts (default: 3)
  • logLevel (optional): Log level

Returns: Router instance

Methods:

  • route(input: string): Promise<AgentResponse> - Route input to appropriate LLM
  • getStats(): object - Get router statistics
  • listAgents(): string[] - List all configured agents

Chatbot API

createChatbot(config: ChatbotConfig)

Creates a new chatbot instance with conversation memory.

Parameters:

  • agent or router (required): Agent or router instance
  • system (optional): System prompt
  • maxHistory (optional): Maximum messages to keep (default: 10)

Returns: Chatbot instance

Methods:

  • chat(message: string): Promise<AgentResponse> - Send message with context
  • getHistory(): ChatMessage[] - Get conversation history
  • getStats(): object - Get conversation statistics
  • reset(): void - Clear conversation history
  • setSystemPrompt(prompt: string): void - Update system prompt
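
Putting these methods together, a small interactive loop might look like the following sketch. The readline usage is standard Node, not part of the package:

import { createInterface } from "node:readline/promises";
import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

async function main() {
  const rl = createInterface({ input: process.stdin, output: process.stdout });
  // Simple REPL: type "reset" to clear the conversation, Ctrl+C to exit
  while (true) {
    const line = await rl.question("> ");
    if (line === "reset") {
      bot.reset();
      continue;
    }
    const response = await bot.chat(line);
    console.log(response.content);
  }
}

main();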

API Request Helpers

api.request(config: APIRequestConfig)

Makes an HTTP request with retry and timeout.

Parameters:

  • name (optional): Request name for logging
  • url (required): Request URL
  • method (optional): HTTP method (default: "GET")
  • headers (optional): Request headers
  • query (optional): Query parameters
  • body (optional): Request body
  • timeout (optional): Timeout in ms (default: 30000)
  • retries (optional): Retry attempts (default: 3)

Returns: Promise<APIResponse>

Convenience Methods:

  • api.get(url, config?) - GET request
  • api.post(url, body, config?) - POST request
  • api.put(url, body, config?) - PUT request
  • api.patch(url, body, config?) - PATCH request
  • api.delete(url, config?) - DELETE request
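
Requests can still fail once retries are exhausted, so it is worth wrapping calls in a try/catch. A sketch, assuming the per-request config accepts the same timeout and retries options as api.request and that the helper throws after its final attempt:

import { api } from "mcp-agent-kit";

try {
  const response = await api.get("https://api.example.com/data", {
    timeout: 5000, // fail fast
    retries: 2,    // on top of the initial attempt
  });
  console.log(response.data);
} catch (error) {
  // Reached once all retry attempts are exhausted (assumed behavior)
  console.error("Request failed:", error);
}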

Advanced Usage

Custom Provider

// Coming soon: Plugin system for custom providers

Middleware

// Coming soon: Middleware support for request/response processing

Streaming Responses

// Coming soon: Streaming support for real-time responses

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT © Dominique Kossi


Made by developers, for developers