
@agentpaid/mcp-use

v0.1.9

Published

A utility library for integrating Model Context Protocol (MCP) with LangChain, Zod, and related tools. Provides helpers for schema conversion, event streaming, and SDK usage.

Readme

🌐 MCP Client is the open-source way to connect any LLM to any MCP server in TypeScript/Node.js, letting you build custom agents with tool access without closed-source dependencies.

💡 It lets developers easily connect any LLM via LangChain.js to tools like web browsing, file operations, 3D modeling, and more.


✨ Key Features

| Feature | Description |
| ------- | ----------- |
| 🔄 Ease of use | Create an MCP-capable agent in just a few lines of TypeScript. |
| 🤖 LLM Flexibility | Works with any LangChain.js-supported LLM that supports tool calling. |
| 🌐 HTTP Support | Direct SSE/HTTP connection to MCP servers. |
| ⚙️ Dynamic Server Selection | Agents select the right MCP server from a pool on the fly. |
| 🧩 Multi-Server Support | Use multiple MCP servers in one agent. |
| 🛡️ Tool Restrictions | Restrict unsafe tools such as filesystem or network access. |
| 🔧 Custom Agents | Build your own agents with the LangChain.js adapter or implement new adapters. |


🚀 Quick Start

Requirements

  • Node.js 22.0.0 or higher
  • npm, yarn, or pnpm (the commands below use npm)

Installation

# Install from npm
npm install mcp-use
# LangChain.js and your LLM provider (e.g., OpenAI)
npm install langchain @langchain/openai dotenv

Create a .env:

OPENAI_API_KEY=your_api_key

Basic Usage

import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'
import 'dotenv/config'

async function main() {
  // 1. Configure MCP servers
  const config = {
    mcpServers: {
      playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
    }
  }
  const client = MCPClient.fromDict(config)

  // 2. Create LLM
  const llm = new ChatOpenAI({ modelName: 'gpt-4o' })

  // 3. Instantiate agent
  const agent = new MCPAgent({ llm, client, maxSteps: 20 })

  // 4. Run query
  const result = await agent.run('Find the best restaurant in Tokyo using Google Search')
  console.log('Result:', result)
}

main().catch(console.error)

🔧 API Methods

MCPAgent Methods

The MCPAgent class provides several methods for executing queries with different output formats:

run(query: string, maxSteps?: number): Promise<string>

Executes a query and returns the final result as a string.

const result = await agent.run('What tools are available?')
console.log(result)

stream(query: string, maxSteps?: number): AsyncGenerator<AgentStep, string, void>

Yields intermediate steps during execution, providing visibility into the agent's reasoning process.

const stream = agent.stream('Search for restaurants in Tokyo')
for await (const step of stream) {
  console.log(`Tool: ${step.action.tool}, Input: ${JSON.stringify(step.action.toolInput)}`)
  console.log(`Result: ${step.observation}`)
}

streamEvents(query: string, maxSteps?: number): AsyncGenerator<StreamEvent, void, void>

Yields fine-grained LangChain StreamEvent objects, enabling token-by-token streaming and detailed event tracking.

const eventStream = agent.streamEvents('What is the weather today?')
for await (const event of eventStream) {
  // Handle different event types
  switch (event.event) {
    case 'on_chat_model_stream':
      // Token-by-token streaming from the LLM
      if (event.data?.chunk?.content) {
        process.stdout.write(event.data.chunk.content)
      }
      break
    case 'on_tool_start':
      console.log(`\nTool started: ${event.name}`)
      break
    case 'on_tool_end':
      console.log(`Tool completed: ${event.name}`)
      break
  }
}

Key Differences

  • run(): Best for simple queries where you only need the final result
  • stream(): Best for debugging and understanding the agent's tool usage
  • streamEvents(): Best for real-time UI updates with token-level streaming
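The `AsyncGenerator<AgentStep, string, void>` signature above means `stream()` yields intermediate steps and then *returns* the final answer string. A `for await...of` loop discards that return value, so manual iteration is needed to capture both. A minimal sketch of the pattern, using a stand-in generator rather than a real agent:

```typescript
// Stand-in for agent.stream(): yields step descriptions, returns the final answer.
async function* fakeStream(): AsyncGenerator<string, string, void> {
  yield 'step 1: called tool A'
  yield 'step 2: called tool B'
  return 'final answer'
}

// Manual iteration exposes the generator's return value,
// which a for-await-of loop would silently drop.
async function consume(gen: AsyncGenerator<string, string, void>): Promise<string> {
  let next = await gen.next()
  while (!next.done) {
    console.log(next.value) // intermediate step
    next = await gen.next()
  }
  return next.value // the final result string
}
```

The same pattern applies when you want `stream()`'s step-by-step visibility but still need the final result that `run()` would have given you.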

🔄 AI SDK Integration

The library provides built-in utilities for integrating with Vercel AI SDK, making it easy to build streaming UIs with React hooks like useCompletion and useChat.

Installation

npm install ai @langchain/anthropic

Basic Usage

import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDK } from 'mcp-use'

async function createApiHandler() {
  const config = {
    mcpServers: {
      everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
    }
  }

  const client = new MCPClient(config)
  const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
  const agent = new MCPAgent({ llm, client, maxSteps: 5 })

  return async (request: { prompt: string }) => {
    const streamEvents = agent.streamEvents(request.prompt)
    const aiSDKStream = streamEventsToAISDK(streamEvents)
    const readableStream = createReadableStreamFromGenerator(aiSDKStream)

    return LangChainAdapter.toDataStreamResponse(readableStream)
  }
}

Enhanced Usage with Tool Visibility

import { streamEventsToAISDKWithTools } from 'mcp-use'

async function createEnhancedApiHandler() {
  const config = {
    mcpServers: {
      everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
    }
  }

  const client = new MCPClient(config)
  const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
  const agent = new MCPAgent({ llm, client, maxSteps: 8 })

  return async (request: { prompt: string }) => {
    const streamEvents = agent.streamEvents(request.prompt)
    // Enhanced stream includes tool usage notifications
    const enhancedStream = streamEventsToAISDKWithTools(streamEvents)
    const readableStream = createReadableStreamFromGenerator(enhancedStream)

    return LangChainAdapter.toDataStreamResponse(readableStream)
  }
}

Next.js API Route Example

// pages/api/chat.ts or app/api/chat/route.ts
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDK } from 'mcp-use'

export async function POST(req: Request) {
  const { prompt } = await req.json()

  const config = {
    mcpServers: {
      everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
    }
  }

  const client = new MCPClient(config)
  const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
  const agent = new MCPAgent({ llm, client, maxSteps: 10 })

  try {
    const streamEvents = agent.streamEvents(prompt)
    const aiSDKStream = streamEventsToAISDK(streamEvents)
    const readableStream = createReadableStreamFromGenerator(aiSDKStream)

    return LangChainAdapter.toDataStreamResponse(readableStream)
  }
  finally {
    await client.closeAllSessions()
  }
}

Frontend Integration

// components/Chat.tsx
import { useCompletion } from 'ai/react'

export function Chat() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/chat',
  })

  return (
    <div>
      <div>{completion}</div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask me anything..."
        />
      </form>
    </div>
  )
}

Available AI SDK Utilities

  • streamEventsToAISDK(): Converts streamEvents to basic text stream
  • streamEventsToAISDKWithTools(): Enhanced stream with tool usage notifications
  • createReadableStreamFromGenerator(): Converts async generator to ReadableStream
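Conceptually, the generator-to-stream conversion wraps an async generator in a Web `ReadableStream` underlying source. The sketch below is illustrative only (it is not the library's implementation, and the function name `toReadableStream` is made up here), but it shows the mechanism such a utility relies on:

```typescript
// Illustrative sketch: adapt an async generator into a Web ReadableStream.
// Each pull() pulls one value from the generator; cancellation closes the generator.
function toReadableStream<T>(gen: AsyncGenerator<T>): ReadableStream<T> {
  return new ReadableStream<T>({
    async pull(controller) {
      const { value, done } = await gen.next()
      if (done) controller.close()
      else controller.enqueue(value)
    },
    async cancel() {
      // Run the generator's cleanup (finally blocks) if the consumer stops early.
      await gen.return(undefined)
    }
  })
}
```

Because the stream pulls lazily, backpressure from the consumer (e.g., a slow HTTP response) naturally throttles how fast the generator is advanced.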

📂 Configuration File

You can store servers in a JSON file:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}

Load it:

import { MCPClient } from 'mcp-use'

const client = MCPClient.fromConfigFile('./mcp-config.json')

📚 Examples

We provide a comprehensive set of examples demonstrating various use cases. All examples are located in the examples/ directory with a dedicated README.

Running Examples

# Install dependencies
npm install

# Run any example
npm run example:airbnb      # Search accommodations with Airbnb
npm run example:browser     # Browser automation with Playwright
npm run example:chat        # Interactive chat with memory
npm run example:stream      # Demonstrate streaming methods (stream & streamEvents)
npm run example:stream_events # Comprehensive streamEvents() examples
npm run example:ai_sdk      # AI SDK integration with streaming
npm run example:filesystem  # File system operations
npm run example:http        # HTTP server connection
npm run example:everything  # Test MCP functionalities
npm run example:multi       # Multiple servers in one session

Example Highlights

  • Browser Automation: Control browsers to navigate websites and extract information
  • File Operations: Read, write, and manipulate files through MCP
  • Multi-Server: Combine multiple MCP servers (Airbnb + Browser) in a single task
  • Sandboxed Execution: Run MCP servers in isolated E2B containers
  • OAuth Flows: Authenticate with services like Linear using OAuth2
  • Streaming Methods: Demonstrate both step-by-step and token-level streaming
  • AI SDK Integration: Build streaming UIs with Vercel AI SDK and React hooks

See the examples README for detailed documentation and prerequisites.


🔄 Multi-Server Example

const config = {
  mcpServers: {
    airbnb: { command: 'npx', args: ['@openbnb/mcp-server-airbnb'] },
    playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
  }
}
const client = MCPClient.fromDict(config)
const agent = new MCPAgent({ llm, client, useServerManager: true })
await agent.run('Search Airbnb in Barcelona, then Google restaurants nearby')

🔒 Tool Access Control

const agent = new MCPAgent({
  llm,
  client,
  disallowedTools: ['file_system', 'network']
})

👥 Contributors

📜 License

MIT © Zane