
mcp-test-mcp v0.3.0

An MCP server for testing MCP servers you are developing with AI assistants

Downloads: 139

Readme

mcp-test-mcp


An MCP server that helps AI assistants test other MCP servers. It provides tools to connect to target MCP servers, discover their capabilities, execute tools, read resources, and test prompts—all through proper MCP protocol communication.

Features

  • Connection Management: Connect to any MCP server (STDIO or HTTP transport), auto-detect protocols, track connection state
  • Tool Testing: List all tools with complete input schemas, call tools with arbitrary arguments, get detailed execution results
  • Resource Testing: List all resources with metadata, read text and binary content
  • Prompt Testing: List all prompts with argument schemas, get rendered prompts with custom arguments
  • LLM Integration: Execute prompts end-to-end with actual LLM inference, supports template variables and JSON extraction

Installation

Prerequisites: Node.js 16+ and Python 3.11+

Choose your AI coding tool:

Claude Desktop

Config file location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%/Claude/claude_desktop_config.json

Configuration:

{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-test-mcp"]
    }
  }
}

Or use Claude Code CLI:

claude mcp add mcp-test-mcp -- npx -y mcp-test-mcp

Cursor

Config file location:

  • Global: ~/.cursor/mcp.json
  • Project: .cursor/mcp.json

Or access via: File → Preferences → Cursor Settings → MCP

Configuration:

{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-test-mcp"]
    }
  }
}

Windsurf

Config file location: ~/.codeium/windsurf/mcp_config.json

Or access via: Windsurf Settings → Cascade → Plugins

Configuration:

{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-test-mcp"]
    }
  }
}

VS Code

Requires VS Code 1.99+ with the chat.agent.enabled setting turned on.

Config file location:

  • Workspace: .vscode/mcp.json
  • Global: Run MCP: Open User Configuration from Command Palette

Configuration:

{
  "servers": {
    "mcpTestMcp": {
      "command": "npx",
      "args": ["-y", "mcp-test-mcp"]
    }
  }
}

Note: VS Code uses servers instead of mcpServers and recommends camelCase naming.
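Because VS Code expects a servers key with camelCase names while the other tools use mcpServers with kebab-case names, converting an existing config is mechanical. A minimal sketch of such a conversion (the helper names here are illustrative, not part of any tool):

```python
import json
import re

def to_camel_case(name: str) -> str:
    """Convert a kebab-case or snake_case server name to camelCase."""
    parts = re.split(r"[-_]", name)
    return parts[0] + "".join(p.title() for p in parts[1:])

def convert_for_vscode(config: dict) -> dict:
    """Rewrap an mcpServers-style config as VS Code's servers format."""
    return {
        "servers": {
            to_camel_case(name): entry
            for name, entry in config["mcpServers"].items()
        }
    }

standard = {
    "mcpServers": {
        "mcp-test-mcp": {"command": "npx", "args": ["-y", "mcp-test-mcp"]}
    }
}
# mcp-test-mcp becomes mcpTestMcp under the servers key
print(json.dumps(convert_for_vscode(standard), indent=2))
```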

Codex

Config file location: ~/.codex/config.toml

Add via CLI:

codex mcp add mcp-test-mcp -- npx -y mcp-test-mcp

Or add manually to config.toml:

[mcp_servers.mcp-test-mcp]
command = "npx"
args = ["-y", "mcp-test-mcp"]

To use the execute_prompt_with_llm tool, add environment variables to your configuration:

JSON format (Claude, Cursor, Windsurf, VS Code):

{
  "mcpServers": {
    "mcp-test-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-test-mcp"],
      "env": {
        "LLM_URL": "https://your-llm-endpoint.com/v1",
        "LLM_MODEL_NAME": "your-model-name",
        "LLM_API_KEY": "your-api-key"
      }
    }
  }
}

TOML format (Codex):

[mcp_servers.mcp-test-mcp]
command = "npx"
args = ["-y", "mcp-test-mcp"]

[mcp_servers.mcp-test-mcp.env]
LLM_URL = "https://your-llm-endpoint.com/v1"
LLM_MODEL_NAME = "your-model-name"
LLM_API_KEY = "your-api-key"

Alternatively, install the Python package directly with pip:

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install from PyPI
pip install mcp-test-mcp

# Or install from source
git clone https://github.com/example/mcp-test-mcp
cd mcp-test-mcp
pip install -e ".[dev]"

Command Line Options

The server supports multiple transports for different deployment scenarios:

# Default: stdio transport (for Claude Desktop/Code)
mcp-test-mcp

# Explicit stdio
mcp-test-mcp --transport stdio

# HTTP transport for web deployments
mcp-test-mcp --transport streamable-http
mcp-test-mcp --transport streamable-http --host 0.0.0.0 --port 8080

# Legacy SSE transport (for backward compatibility)
mcp-test-mcp --transport sse --port 9000

Options:

| Flag | Short | Description | Default |
|------|-------|-------------|---------|
| --transport | -t | Transport type: stdio, streamable-http, sse | stdio |
| --host | -H | Host to bind (HTTP transports only) | 127.0.0.1 |
| --port | -p | Port to bind (HTTP transports only) | 8000 |

Using with npx:

# HTTP server via npx
npx -y mcp-test-mcp --transport streamable-http --port 8080

Quick Start

Once configured, test MCP servers through natural conversation:

  • Connect: "Connect to my MCP server at /path/to/server"
  • Connect with auth: "Connect to https://api.example.com/mcp with headers Authorization: Bearer my-token"
  • Discover: "What tools does it have?"
  • Test: "Call the echo tool with message 'Hello'"
  • Status: "What's the connection status?"
  • Disconnect: "Disconnect from the server"
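Each of these conversational requests is translated into MCP protocol messages. As a sketch, "Call the echo tool with message 'Hello'" corresponds to a JSON-RPC 2.0 tools/call request like the one built by hand below (illustrative; in practice an MCP client library constructs and transports this):

```python
import json

def make_tool_call(call_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request as defined by the MCP spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# "Call the echo tool with message 'Hello'" becomes roughly:
request = make_tool_call(1, "echo", {"message": "Hello"})
print(request)
```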

Available Tools

Connection Management

  • connect_to_server: Connect to a target MCP server (stdio or HTTP). Supports optional headers parameter for authenticated HTTP connections (e.g., {"Authorization": "Bearer token"}). Headers are ignored for stdio transport.
  • disconnect: Close active connection
  • get_connection_status: Check connection state and statistics

Tool Testing

  • list_tools: Get all tools with complete schemas
  • call_tool: Execute a tool with arguments

Resource Testing

  • list_resources: Get all resources with metadata
  • read_resource: Read resource content by URI

Prompt Testing

  • list_prompts: Get all prompts with argument schemas
  • get_prompt: Get rendered prompt with arguments
  • execute_prompt_with_llm: Execute prompts with actual LLM inference

Utility

  • health_check: Verify server is running
  • ping: Test connectivity (returns "pong")
  • echo: Echo a message back
  • add: Add two numbers
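The utility tools are deliberately trivial, which makes them good smoke tests for a connection. Their behavior is equivalent to these plain functions (a sketch of the expected results, not the server's actual implementation):

```python
def ping() -> str:
    """Connectivity check; the ping tool returns the literal string 'pong'."""
    return "pong"

def echo(message: str) -> str:
    """Return the given message unchanged."""
    return message

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

print(ping())      # pong
print(echo("hi"))  # hi
print(add(2, 3))   # 5
```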

Environment Variables

Transport Configuration

These environment variables configure the server transport. CLI arguments take precedence.

| Variable | Description | Default |
|----------|-------------|---------|
| MCP_TEST_TRANSPORT | Transport type: stdio, streamable-http, sse | stdio |
| MCP_TEST_HOST | Host to bind (HTTP transports only) | 127.0.0.1 |
| MCP_TEST_PORT | Port to bind (HTTP transports only) | 8000 |

Priority: CLI argument > environment variable > default
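The precedence rule can be summarized in a few lines. This is a sketch of the resolution logic, not the server's actual code:

```python
import os

DEFAULTS = {"transport": "stdio", "host": "127.0.0.1", "port": "8000"}
ENV_KEYS = {
    "transport": "MCP_TEST_TRANSPORT",
    "host": "MCP_TEST_HOST",
    "port": "MCP_TEST_PORT",
}

def resolve(option, cli_value=None):
    """Apply the precedence rule: CLI argument > environment variable > default."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(ENV_KEYS[option])
    if env_value is not None:
        return env_value
    return DEFAULTS[option]

# A CLI flag always wins, regardless of the environment
print(resolve("port", "9000"))  # 9000
# Without a CLI flag, the environment variable (if set) or the default applies
print(resolve("port"))
```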

Core

  • MCP_TEST_LOG_LEVEL: Logging level (DEBUG, INFO, WARNING, ERROR). Default: INFO
  • MCP_TEST_CONNECT_TIMEOUT: Connection timeout in seconds. Default: 30.0

LLM Integration (for execute_prompt_with_llm)

  • LLM_URL: LLM API endpoint URL
  • LLM_MODEL_NAME: Model name
  • LLM_API_KEY: API key
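These three variables suggest an OpenAI-compatible endpoint. As a hedged sketch, a chat request assembled from them could look like the following; the /chat/completions path and the message shape are assumptions, since the server's actual request format is not documented here:

```python
import json
import os

def build_chat_request(prompt: str):
    """Assemble an OpenAI-style chat request from the LLM_* variables.

    Illustrative only: the real execute_prompt_with_llm tool may use a
    different endpoint path or payload shape.
    """
    url = os.environ["LLM_URL"].rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": "Bearer " + os.environ["LLM_API_KEY"],
        "Content-Type": "application/json",
    }
    body = {
        "model": os.environ["LLM_MODEL_NAME"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body

# Placeholder values matching the configuration example above
os.environ["LLM_URL"] = "https://your-llm-endpoint.com/v1"
os.environ["LLM_MODEL_NAME"] = "your-model-name"
os.environ["LLM_API_KEY"] = "your-api-key"
url, headers, body = build_chat_request("ping")
print(url)
print(json.dumps(body))
```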

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run with coverage
pytest --cov=mcp_test_mcp --cov-report=html

# Format and lint
black src/ tests/
ruff check src/ tests/
mypy src/

Container Deployment

For deploying mcp-test-mcp in containers (e.g., OpenShift, Kubernetes):

FROM registry.redhat.io/ubi9/python-311:latest

WORKDIR /app

# Install mcp-test-mcp
RUN pip install --no-cache-dir mcp-test-mcp

# Expose HTTP port
EXPOSE 8000

# Run with streamable-http transport
CMD ["mcp-test-mcp", "--transport", "streamable-http", "--host", "0.0.0.0", "--port", "8000"]

Or use environment variables:

# kubernetes deployment snippet
env:
  - name: MCP_TEST_TRANSPORT
    value: "streamable-http"
  - name: MCP_TEST_HOST
    value: "0.0.0.0"
  - name: MCP_TEST_PORT
    value: "8000"

License

MIT License - see LICENSE for details.
