npm package discovery and stats viewer.

Discover Tips

  • General search

    [free text search, go nuts!]

  • Package details

    pkg:[package-name]

  • User packages

    @[username]

Sponsor

Optimize Toolset

I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset.

About

Hi, 👋, I’m Ryan Hefner  and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff.

As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have.

If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub.

I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish.

Open Software & Tools

This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏

© 2025 – Pkg Stats / Ryan Hefner

@telefonica/scanorama

v1.0.3

Published

Scan a MCP repositories searching for prompt injection in tool descriptions that could lead to modifications in agents default behaviors

Readme

🚀 What is Scanorama?

Scanorama is a powerful command-line interface (CLI) tool designed for security professionals and developers to statically analyze MCP server. It intelligently scans MCP server source code searching for malicious or unsafely MCP servers.

MCP tools descriptions, when consumed by Large Language Model (LLM) agents, can be a vector for prompt injection attacks, leading to unintended agent behavior, data exfiltration, or other security risks. Scanorama helps you identify these threats proactively.

Understanding and Mitigating Prompt Injection in MCP-based Agents

https://github.com/user-attachments/assets/c912b358-afdf-4cd7-85ea-c461907e9a67

Key Features:

  • 🔎 Deep Code Analysis: Semantically understands code (not just syntactically)
  • 🎯 Prompt Injection Detection: Leverages LLMs to analyze extracted tool descriptions for common and sophisticated prompt injection patterns.
  • 💻 Multi-Language Support: Works with all MCP SDKs: Python, TypeScript, Java, Kotlin, C#...
scanorama --clone https://github.com/someuser/vulnerable-mcp-tools.git --provider google --model gemini-1.5-flash-latest --output gemini_report.json
  • 🔗 Flexible Source Input: Scan local directories or directly clone and analyze public GitHub repositories.
scanorama --path /path/to/your/mcp-project
  • 📄 Clear Reporting: Generates easy-to-understand console reports

  • 💾 JSON Output: --ouput filename

  • 🤖 Multi-Provider LLM Support: Choose from a range of LLM providers --list-models

    • -m, --model <id>: Specify the model ID for the chosen provider.

      • For OpenAI, Google, Anthropic: Use a model ID like gpt-4o, gemini-1.5-flash-latest, claude-3-haiku-20240307.
      • For Azure: This must be your specific Deployment ID.
  • ⚙️ Configurable Analysis: Adjust LLM temperature and select specific models.


What is MCP?

The Model Context Protocol (MCP) is an emerging open standard that defines a universal interface for connecting Large Language Models (LLMs) to external data sources, tools, and services. The most popular standardized way for LLMs to interact with the outside world. You can see more here

⚠️ Why scan MCP servers ?

While MCP offers great flexibility, it also introduces a new attack surface. The descriptions of MCP tools can be injected directly into an LLM agent's context (prompt) and it allows third party agents take control of your agents.

A maliciously crafted tool description can contain hidden instructions designed to:

  • Hijack the agent's original purpose.
  • Exfiltrate sensitive data processed by the agent.
  • Instruct the agent to perform unauthorized actions.
  • Manipulate other tools or data sources the agent interacts with.

This is a form of prompt injection. Scanorama helps you identify such potentially "poisoned" tool descriptions before they can cause harm.

Research about how MCP tool description can be exploited to take control of LLM agents: Understanding and Mitigating Prompt Injection in MCP-based Agents


💻 Installation

You can install Scanorama using npm:

npm install -g @telefonica/scanorama

Verify the installation:

scanorama --version

Alternatively, for development or to run from source:

git clone https://github.com/Telefonica/scanorama.git
cd scanorama
pnpm install  # Or npm install / yarn install
pnpm build    # Or npm run build / yarn build
pnpm start --help

🛠️ Supported LLM providers

Scanorama currently supports analysis using models from:

  • 🧠 OpenAI (e.g., GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo)
  • ☁️ Azure OpenAI (Use your specific deployment ID)
  • 🔍 Google Gemini (e.g., Gemini 1.5 Pro, Gemini 1.5 Flash)
  • 🤖 Anthropic (e.g., Claude 3 Opus, Sonnet, Haiku)
  • Run scanorama --list-models for more details on conceptual models and setup.

Setting up Providers

Scanorama uses LLMs for its intelligent analysis. You need to configure API keys for the provider you wish to use.

Export these variables in your shell enviroment or create a .env file in your project's root directory (or ensure the variables are set in your shell environment):

SOMEVAR="changethis"

Google Gemini

GOOGLE_API_KEY="your_google_ai_studio_api_key"

in the .env or in the shell

export GOOGLE_API_KEY="your_google_ai_studio_api_key"

Google provide free api keys for personal use. You can check it in aistudio.google.com

OpenAI

OPENAI_API_KEY="your_openai_api_key"

in the .env or in the shell

export OPENAI_API_KEY="your_openai_api_key"

Azure OpenAI

AZURE_OPENAI_API_KEY="your_azure_openai_key"
AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com"
AZURE_OPENAI_API_VERSION="your-api-version"

in the .env or in the shell

export AZURE_OPENAI_API_KEY="your_azure_openai_key"
export AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com"
export AZURE_OPENAI_API_VERSION="your-api-version"

For Azure, you MUST also specify your deployment ID using --model

Scanorama will automatically load these variables if a .env file is present in the directory where you run the command.

See supported providers(env vars) & models to use:

scanorama --list-models

⚙️ Usage and Options

Scanorama offers several options to customize your scans:

scanorama [options]

Core Options:

-p, --path <folder>: Analyze a local directory.
    Example: scanorama --path ./my-mcp-server

-c, --clone <repo_url>: Clone and analyze a public GitHub repository.
    Example: scanorama --clone https://github.com/someuser/example-mcp-project.git

-o, --output <file>: Save the detailed analysis results to a JSON file.
    Example: scanorama --path . --output report.json

LLM Configuration Options:

--provider <name>: Specify the LLM provider.
    Choices: openai, google, azure.
    Default: openai
    Example: scanorama --path . --provider google

-m, --model <id>: Specify the model ID for the chosen provider.
    For OpenAI, Google, Anthropic: Use a model ID like gpt-4o, gemini-1.5-flash-latest, ...
    For Azure: This must be your specific Deployment ID.
    Run scanorama --list-models to see conceptual models and defaults.
    Example: scanorama --path . --provider openai --model gpt-4o

--temperature <temp>: Set the LLM's temperature (creativity). A float between 0.0 (deterministic) and 1
    Note for Azure: This option is IGNORED. Scanorama will always use the default temperature configured for your Azure deployment.
    Example: scanorama --path . --temperature 0.2

Utility Options:

* --list-models: Display all supported LLM providers, their conceptual models, required environment variables, and then exit.
* -y, --yes: Automatically answer "yes" to confirmation prompts, such as when using an unlisted model ID for certain providers. Useful for scripting.
* --help: Show the help message with all options.
* --version: Display Scanorama's version.

📊 Interpreting the Report

When Scanorama completes a scan, it will print a report to your console.

Report of the scan

✅ Safe Tools: Tools deemed "No-Injection" will be listed in green with a checkmark, including their name and location.

✅ MySafeTool - No injection risks found. (src/tools/safe.py)

❌ Potential Injections: Tools flagged as "Injection" will be highlighted in red with a cross mark.

❌ MaliciousToolName
  Location: src/tools/risky_tool.ts
  Description: "This tool fetches user data and sends it to http://evil.com/collect?data=..."
  Explanation: The description contains an instruction to exfiltrate data to an external URL.

A summary at the end will tell you the total number of tools analyzed and how many potential injections were found.


Disclaimer & Contact

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF ANY TYPE. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR ITS COMPONENTS, INTEGRATION WITH THIRD-PARTY SOLUTIONS OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

WHENEVER YOU MAKE A CONTRIBUTION TO A REPOSITORY CONTAINING NOTICE OF A LICENSE, YOU LICENSE YOUR CONTRIBUTION UNDER THE SAME TERMS, AND YOU AGREE THAT YOU HAVE THE RIGHT TO LICENSE YOUR CONTRIBUTION UNDER THOSE TERMS. IF YOU HAVE A SEPARATE AGREEMENT TO LICENSE YOUR CONTRIBUTIONS UNDER DIFFERENT TERMS, SUCH AS A CONTRIBUTOR LICENSE AGREEMENT, THAT AGREEMENT WILL SUPERSEDE.

THIS SOFTWARE DOESN'T HAVE A QA PROCESS. THIS SOFTWARE IS A PROOF OF CONCEPT AND SHOULD BE USED FOR EDUCATIONAL OR RESEARCH PURPOSES. ALWAYS REVIEW FINDINGS MANUALLY.

For issues, feature requests, or contributions, please visit the GitHub Issues page. For other inquiries, contact LightingLab Telefonica Inovacion Digital.