mirror-mcp

A Model Context Protocol (MCP) server that provides a reflect tool, enabling LLMs to engage in self-reflection and introspection through recursive questioning and MCP sampling.

Overview

mirror-mcp allows AI models to "look at themselves" by providing a reflection mechanism. When an LLM uses the reflect tool, it can pose questions to itself and receive answers through the Model Context Protocol's sampling capabilities. This creates a powerful feedback loop for self-analysis, reasoning validation, and iterative problem-solving.

Features

  • 🪞 Self-Reflection Tool: Enables LLMs to ask themselves questions and receive computed responses
  • 🔄 MCP Sampling Integration: Uses the Model Context Protocol's sampling mechanism for responses
  • 📦 npm Installable: Easy installation and deployment
  • Lightweight: Minimal dependencies and fast startup
  • 🔧 Configurable: Customizable reflection parameters and sampling options

Installation

Via npm

npm install -g mirror-mcp

Via npx (no installation required)

npx mirror-mcp

From Source

git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run build
npm start

VS Code Setup

To use mirror-mcp with GitHub Copilot in VS Code:

  1. First install mirror-mcp globally:

    npm install -g mirror-mcp
  2. Add to your VS Code settings (.vscode/settings.json or user settings):

    {
      "github.copilot.chat.modelContextProtocol.servers": {
        "mirror": {
          "command": "mirror-mcp"
        }
      }
    }
  3. Restart VS Code and start using the reflect tool in Copilot Chat!

VS Code Insiders Setup

To use mirror-mcp with GitHub Copilot in VS Code Insiders:

  1. First install mirror-mcp globally:

    npm install -g mirror-mcp
  2. Add to your VS Code Insiders settings (.vscode/settings.json or user settings):

    {
      "github.copilot.chat.modelContextProtocol.servers": {
        "mirror": {
          "command": "mirror-mcp"
        }
      }
    }
  3. Restart VS Code Insiders and start using the reflect tool in Copilot Chat!

Usage

Using with VS Code Copilot

Once you've configured mirror-mcp with VS Code (see installation), you can use the reflect tool directly in Copilot Chat:

@workspace /reflect "What are the potential weaknesses in my reasoning about this React component?"
@workspace /reflect "How confident am I in my approach to handling this async operation?"

Basic Configuration

Add the server to your MCP client configuration:

{
  "mcpServers": {
    "mirror": {
      "command": "mirror-mcp",
      "args": []
    }
  }
}

Using the Reflect Tool

Once configured, the LLM can use the reflect tool for basic self-reflection:

reflect: "What are the potential weaknesses in my reasoning about quantum computing?"

For more directed reflection, custom prompts can be used:

reflect: {
  "question": "How can I improve my problem-solving approach?",
  "system_prompt": "You are a strategic thinking mentor focused on systematic improvement",
  "user_prompt": "Provide 3 specific actionable recommendations with examples"
}

The tool will:

  1. Accept the self-directed question and optional custom prompts
  2. Use MCP sampling to generate a response (with system/user prompts if provided)
  3. Return the tailored reflection back to the requesting model
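
For illustration, a minimal sketch of this flow using the TypeScript MCP SDK is shown below. It is not the mirror-mcp source: the tool name and parameters follow this README, but the SDK entry points (McpServer, tool, createMessage) and the prompt assembly are assumptions about how such a handler is typically wired.

// Illustrative sketch only; not the actual mirror-mcp implementation.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "mirror", version: "0.0.8" });

server.tool(
  "reflect",
  {
    question: z.string(),
    context: z.string().optional(),
    system_prompt: z.string().optional(),
    user_prompt: z.string().optional(),
    max_tokens: z.number().optional(),
    temperature: z.number().optional(),
  },
  async (args) => {
    // 1. Combine the self-directed question with any custom prompts and context.
    const prompt = [
      args.user_prompt ?? "Reflect on strengths, weaknesses, assumptions, and alternative perspectives.",
      args.question,
      args.context ?? "",
    ].join("\n\n");

    // 2. Ask the connected client to generate a response via MCP sampling.
    const result = await server.server.createMessage({
      systemPrompt: args.system_prompt,
      messages: [{ role: "user", content: { type: "text", text: prompt } }],
      maxTokens: args.max_tokens ?? 500,
      temperature: args.temperature ?? 0.8,
    });

    // 3. Return the tailored reflection to the requesting model.
    const text = result.content.type === "text" ? result.content.text : "";
    return { content: [{ type: "text", text }] };
  },
);

await server.connect(new StdioServerTransport());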

Advanced Configuration

{
  "mcpServers": {
    "mirror": {
      "command": "mirror-mcp",
      "args": [
        "--max-tokens", "1000",
        "--temperature", "0.7",
        "--reflection-depth", "3"
      ]
    }
  }
}

API Reference

Tools

reflect

Enables the LLM to ask itself a question and receive a response through MCP sampling. Custom system and user prompts let the LLM steer the perspective and format of the response it receives.

Self-Direction with Custom Prompts:

  • System Prompt: Define the role or perspective for the reflection (e.g., "expert coach", "critical thinker", "creative problem solver")
  • User Prompt: Specify the format, structure, or focus of the reflection response
  • Default Behavior: When no custom prompts are provided, uses built-in reflection guidance focused on strengths, weaknesses, assumptions, and alternative perspectives

Parameters:

  • question (string, required): The question the LLM wants to ask itself
  • context (string, optional): Additional context for the reflection
  • system_prompt (string, optional): Custom system prompt to direct the reflection approach
  • user_prompt (string, optional): Custom user prompt to replace the default reflection instructions
  • max_tokens (number, optional): Maximum tokens for the response (default: 500)
  • temperature (number, optional): Sampling temperature (default: 0.8)

Example:

{
  "name": "reflect",
  "arguments": {
    "question": "How confident am I in my previous analysis of the data?",
    "context": "Previous analysis showed a 23% increase in user engagement",
    "max_tokens": 300,
    "temperature": 0.6
  }
}

Example with custom prompts:

{
  "name": "reflect",
  "arguments": {
    "question": "What are the potential weaknesses in my reasoning?",
    "system_prompt": "You are an expert critical thinking coach helping to identify logical fallacies and reasoning gaps.",
    "user_prompt": "Analyze my reasoning step-by-step and provide specific examples of potential weaknesses or blind spots.",
    "context": "Working on a complex machine learning model evaluation",
    "max_tokens": 400,
    "temperature": 0.7
  }
}

Response:

{
  "reflection": "Upon reflection, my confidence in the 23% engagement increase analysis is moderate to high. The data sources appear reliable, and the methodology follows standard practices. However, I should consider potential confounding variables such as seasonal effects or concurrent marketing campaigns that might influence the results.",
  "metadata": {
    "tokens_used": 67,
    "reflection_time_ms": 1240
  }
}

Architecture & Rationale

Design Philosophy

mirror-mcp is built on the principle that self-reflection is crucial for robust AI reasoning. By enabling models to question their own outputs and reasoning processes, we create opportunities for:

  • Error Detection: Models can identify potential flaws in their logic
  • Confidence Calibration: Self-assessment helps gauge certainty levels
  • Iterative Improvement: Reflective questioning can lead to better solutions
  • Metacognitive Awareness: The model builds an understanding of its own reasoning process

Technical Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   LLM Client    │───▶│   mirror-mcp    │───▶│  MCP Sampling   │
│                 │    │                 │    │  Infrastructure │
│ Calls reflect() │    │ Processes       │    │                 │
│                 │◀───│ reflection      │◀───│ Returns response│
└─────────────────┘    └─────────────────┘    └─────────────────┘

Key Components

  1. Reflection Engine: Processes incoming self-directed questions
  2. Sampling Interface: Bridges reflection requests to MCP's sampling capabilities
  3. Context Manager: Maintains conversation context for coherent reflections
  4. Response Formatter: Structures reflection responses for optimal consumption
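
A conceptual sketch of how these components might be expressed as TypeScript interfaces follows. It is purely illustrative; the names come from the list above, not from the package's actual module layout.

// Conceptual component boundaries; not the package's actual source structure.
interface ReflectionRequest {
  prompt: string;
  systemPrompt?: string;
  maxTokens: number;
  temperature: number;
}

interface ReflectionEngine {
  // Turns a self-directed question plus optional prompts/context into a request.
  prepare(question: string, options?: { systemPrompt?: string; userPrompt?: string; context?: string }): ReflectionRequest;
}

interface SamplingInterface {
  // Forwards the prepared request to the MCP client's sampling endpoint.
  sample(request: ReflectionRequest): Promise<string>;
}

interface ContextManager {
  // Tracks prior exchanges so successive reflections stay coherent.
  remember(question: string, reflection: string): void;
}

interface ResponseFormatter {
  // Wraps the raw sampled text with metadata such as token usage and timing.
  format(reflection: string): { reflection: string; metadata: { tokens_used: number; reflection_time_ms: number } };
}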

Why MCP?

The Model Context Protocol provides a standardized way for AI models to connect with external resources and tools. By implementing mirror-mcp as an MCP server, we ensure:

  • Interoperability: Works with any MCP-compatible client
  • Standardization: Follows established protocols for tool integration
  • Scalability: Can be deployed alongside other MCP servers
  • Future-Proofing: Benefits from ongoing MCP ecosystem development

Sampling Strategy

The reflection mechanism leverages MCP's sampling capabilities to generate thoughtful responses. The sampling process:

  1. Takes the self-directed question as a prompt
  2. Applies configurable sampling parameters (temperature, max tokens)
  3. Generates a response using the underlying model
  4. Returns the reflection with appropriate metadata

This approach ensures that reflections are generated using the same model capabilities as the original reasoning, creating authentic self-assessment.
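
Concretely, each reflection maps to a single sampling/createMessage request from the server to its client. The sketch below shows roughly what that request carries for the custom-prompt example above; the field names follow the MCP sampling specification, while the values are illustrative.

// Approximate shape of the sampling request sent to the client (illustrative values).
const samplingRequest = {
  method: "sampling/createMessage",
  params: {
    systemPrompt: "You are an expert critical thinking coach helping to identify logical fallacies and reasoning gaps.",
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "What are the potential weaknesses in my reasoning?\n\nContext: Working on a complex machine learning model evaluation",
        },
      },
    ],
    maxTokens: 400,     // from max_tokens (default 500)
    temperature: 0.7,   // from temperature (default 0.8)
  },
};
// The client samples this against the same underlying model and returns an
// assistant message, which the server wraps into the reflection response with metadata.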

Development

Prerequisites

  • Node.js 18 or higher
  • npm or yarn
  • TypeScript (for development)

Development Setup

git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run dev

Testing

npm test

Building

npm run build

Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Areas for Contribution

  • Enhanced reflection strategies
  • Additional sampling parameters
  • Performance optimizations
  • Documentation improvements
  • Test coverage expansion

Related Projects

  • Model Context Protocol: The foundational protocol specification
  • MCP Ecosystem: Various other MCP servers and tools

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • The Model Context Protocol team for creating the foundational specification
  • The broader AI research community working on metacognition and self-reflection
  • Contributors and early adopters who help shape this tool

"The unexamined life is not worth living" - Socrates

Enable your AI models to examine their own reasoning with mirror-mcp.