
mcp-server-serper

v1.0.0 · 230 downloads

Serper MCP Server supporting search and webpage scraping

Serper Search and Scrape MCP Server

A TypeScript-based MCP server that provides web search and webpage scraping capabilities using the Serper API. This server integrates with Claude Desktop to enable powerful web search and content extraction features.

Enhanced MCP server for advanced Google search with comprehensive operator support, crafted by Pavel Sukhachev (pashvc).


👋 Open for Opportunities!
I'm actively seeking new opportunities and collaborations. If you have interesting projects or job opportunities, feel free to reach out!
📧 Contact: [email protected]


Features

Tools

  • google_search - Perform advanced Google searches with full operator support (see the example call after this list)
    • Rich search results including organic results, knowledge graph, "people also ask", and related searches
    • Supports region (default: 'us') and language (default: 'en') targeting
    • Advanced search operators directly in the query:
      • " ": Exact phrase matching
      • site:: Limit to specific domain (e.g., site:linkedin.com/in/ for people)
      • -: Exclude terms (e.g., -spam -ads)
      • OR or |: Alternative terms
      • filetype: or ext:: Specific file types
      • intitle: / allintitle:: Words in page title
      • inurl: / allinurl:: Words in URL
      • before: / after:: Date filtering (YYYY-MM-DD format)
      • *: Wildcard for any word
      • #..#: Number ranges
      • related:: Find similar sites
      • AROUND(X): Proximity search
  • scrape - Extract content from web pages
    • Get plain text and optional markdown content
    • Includes JSON-LD and head metadata
    • Preserves document structure
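
Both tools are invoked over MCP like any other server tools. The sketch below is a minimal TypeScript client that spawns the published server over stdio and calls google_search and scrape. It assumes the standard @modelcontextprotocol/sdk client API and Serper-style argument names (q, gl, hl, url); those names are assumptions, so check the tool list the server reports for the authoritative input schema.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the published server over stdio (same command the Claude Desktop config below uses).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "serper-search-scrape-mcp-server"],
  env: { SERPER_API_KEY: process.env.SERPER_API_KEY ?? "" },
});

const client = new Client({ name: "example-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Argument names (q, gl, hl, url) follow the Serper API and are assumptions;
// call client.listTools() to see the server's actual input schemas.
const search = await client.callTool({
  name: "google_search",
  arguments: { q: 'site:arxiv.org "transformer architecture"', gl: "us", hl: "en" },
});
console.log(search.content);

const page = await client.callTool({
  name: "scrape",
  arguments: { url: "https://example.com" },
});
console.log(page.content);

await client.close();

The query strings in the Example Queries section below are plain Google queries; they would go into the search argument unchanged.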

Example Queries

LinkedIn Searches

# Find people
site:linkedin.com/in/ "data scientist" Python "San Francisco"
site:linkedin.com/in/ "software engineer" (Google OR Meta) -recruiter

# Find companies  
site:linkedin.com/company/ fintech "series B" 
site:linkedin.com/company/ "artificial intelligence" healthcare

Technical Searches

# Stack Overflow solutions
site:stackoverflow.com "TypeError" pandas dataframe

# GitHub repositories
site:github.com "machine learning" README.md

# Documentation
site:docs.python.org asyncio tutorial

Research & Academic

# Research papers
filetype:pdf "deep learning" medical after:2023-01-01
site:arxiv.org "transformer architecture" "computer vision"

# Academic content
site:edu "climate change" research "peer reviewed"

Advanced Combinations

# Exclude noise
python tutorial -youtube -video -course -udemy

# Multiple sites
(site:github.com OR site:gitlab.com) "react hooks" example

# Date and type filters
"machine learning" news after:2024-01-01 -opinion -sponsored

Requirements

  • Node.js >= 18
  • Serper API key (set as SERPER_API_KEY environment variable)

Development

Install dependencies:

npm install

Build the server:

npm run build

For development with auto-rebuild:

npm run watch

Run tests:

npm test                  # Run all tests
npm run test:watch       # Run tests in watch mode
npm run test:coverage    # Run tests with coverage
npm run test:integration # Run integration tests

Environment Variables

Create a .env file in the root directory:

SERPER_API_KEY=your_api_key_here
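
In development, the .env file is presumably loaded with dotenv (an assumption; the package may wire this up differently), and the key must be present before the server starts. A hypothetical sketch of that startup check:

// Hypothetical sketch of how the key is read at startup;
// the package's actual implementation may differ.
import "dotenv/config"; // assumes the dotenv package is used to load .env

const apiKey = process.env.SERPER_API_KEY;
if (!apiKey) {
  throw new Error("SERPER_API_KEY environment variable is required");
}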

Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:

npm run inspector

The Inspector will provide a URL to access debugging tools in your browser.

Installation

Installing via Smithery

To install Serper Search and Scrape for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @pashvc/mcp-server-serper --client claude

Claude Desktop

Add the server config at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "serper-search": {
      "command": "npx",
      "args": ["-y", "serper-search-scrape-mcp-server"],
      "env": {
        "SERPER_API_KEY": "your_api_key_here"
      }
    }
  }
}

Cline

  1. Open the Cline extension settings
  2. Open "MCP Servers" tab
  3. Click on "Configure MCP Servers"
  4. Add the server config:
{
  "mcpServers": {
    "github.com/pashvc/mcp-server-serper": {
      "command": "npx",
      "args": ["-y", "serper-search-scrape-mcp-server"],
      "env": {
        "SERPER_API_KEY": "your_api_key_here"
      },
      "disabled": false,
      "autoApprove": ["google_search", "scrape"]
    }
  }
}

Additional Cline configuration options:

  • disabled: Set to false to enable the server
  • autoApprove: List of tools that don't require explicit approval for each use

Cursor

  1. Open the Cursor settings
  2. Open "Features" settings
  3. In the "MCP Servers" section, click on "Add new MCP Server"
  4. Choose a name and select "command" as the "Type"
  5. In the "Command" field, enter the following:
env SERPER_API_KEY=your_api_key_here npx -y serper-search-scrape-mcp-server

Docker

You can also run the server using Docker. First, build the image:

docker build -t mcp-server-serper .

Then run the container with your Serper API key:

docker run -e SERPER_API_KEY=your_api_key_here mcp-server-serper

Alternatively, if you have your environment variables in a .env file:

docker run --env-file .env mcp-server-serper

For development, you might want to mount your source code as a volume:

docker run -v $(pwd):/app --env-file .env mcp-server-serper

Note: Make sure to replace your_api_key_here with your actual Serper API key.