@mimiqai/mcp

v0.3.0

MCP server for Mimiq AI — test pages, copy, and product decisions on simulated audiences that tell you what's actually wrong before you ship.

Mimiq MCP Server

Give your AI coding agent honest user feedback. Test pages, copy, and product decisions on simulated audiences that tell you what's actually wrong — before you ship.

Setup (30 seconds)

Claude Code

claude mcp add --transport http mimiq https://mcp.mimiqai.com/mcp

Or add to .mcp.json in your project root:

{
  "mcpServers": {
    "mimiq": {
      "type": "http",
      "url": "https://mcp.mimiqai.com/mcp"
    }
  }
}

Codex

Add to ~/.codex/config.toml (or .codex/config.toml in your project root):

[mcp_servers.mimiq]
url = "https://mcp.mimiqai.com/mcp"

With an API key, set the env var and reference it:

[mcp_servers.mimiq]
url = "https://mcp.mimiqai.com/mcp"
bearer_token_env_var = "MIMIQ_API_KEY"

Then set the env var: export MIMIQ_API_KEY=mq_sk_your_key_here

Cursor

Add to MCP settings (Settings > MCP):

{
  "mimiq": {
    "transport": "streamable_http",
    "url": "https://mcp.mimiqai.com/mcp"
  }
}

VS Code (GitHub Copilot)

Open Command Palette > "MCP: Add Server" or add to .vscode/mcp.json:

{
  "servers": {
    "mimiq": {
      "type": "http",
      "url": "https://mcp.mimiqai.com/mcp"
    }
  }
}

Windsurf

Add to ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "mimiq": {
      "serverUrl": "https://mcp.mimiqai.com/mcp"
    }
  }
}

Claude Desktop

Add to your Claude Desktop config:

{
  "mcpServers": {
    "mimiq": {
      "transport": "streamable_http",
      "url": "https://mcp.mimiqai.com/mcp"
    }
  }
}

No API key needed — you get 100 free personas to start.

Use it

Your agent now has access to Mimiq tools. It uses them proactively — when it builds or changes a page, it tests automatically. No need to ask.

It also works with localhost. The agent starts a temporary cloudflared tunnel, runs the simulation, and tears it down. You build, it tests, in the same flow.

Or ask directly:

"Test my landing page on startup founders"

"A/B test these two headlines on e-commerce shoppers"

"Ask 20 SaaS founders which pricing model they prefer"

Tools

| Tool | What it does |
|------|--------------|
| mimiq.test_page | Test a web page — finds UX issues, confusing copy, conversion blockers. Pass goal to specify what the page should achieve (e.g. "get visitors to sign up") |
| mimiq.test_flow | Deep interactive simulation of multi-step flows (signup, checkout). Pass goal to define success (e.g. "complete signup and reach dashboard") |
| mimiq.test_copy | Test copy or A/B compare two variants head-to-head |
| mimiq.test_text | Test any text content (positioning, descriptions, error messages) |
| mimiq.test_component | Evaluate UI components from HTML snippets |
| mimiq.ask_audience | Survey a synthetic audience on product decisions |

What you get back

Raw per-persona results. Each simulated user has a name, demographics, and independently decides what to do. You get:

  • action: what the persona did — the raw action string (e.g. clicked_cta, browsed_and_left, skimmed)
  • action_class: classification into converted, engaged, or bounced
  • monologue: what they were thinking — this is where the real insights are
  • objections: specific concerns that stopped them
  • what_would_help: what would change their mind
  • trust_score: 1–10 rating of how much the persona trusts the page/content
  • journey_steps: scroll-by-scroll breakdown of what the persona saw and thought at each section (visual_journey mode only, used by test_page and test_flow)
  • aggregate counts: how many converted, engaged, or bounced

There are no pre-computed verdicts or scores. Your AI agent analyzes the raw feedback and decides what it means.
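
As a sketch of how an agent (or a post-processing script) might summarize these raw results: the field names follow the list above, but the sample personas below are made up for illustration.

```python
from collections import Counter

# Hypothetical raw per-persona results, in the shape described above.
results = [
    {"action": "clicked_cta", "action_class": "converted", "trust_score": 8},
    {"action": "skimmed", "action_class": "engaged", "trust_score": 6},
    {"action": "browsed_and_left", "action_class": "bounced", "trust_score": 3},
]

# Tally action_class values, mirroring the aggregate counts field.
counts = Counter(r["action_class"] for r in results)

# Average trust across personas (1-10 scale).
avg_trust = sum(r["trust_score"] for r in results) / len(results)

print(dict(counts))
print(round(avg_trust, 1))
```

The point is that the aggregation logic lives on your side: the server hands back raw per-persona records, and your agent decides how to weigh them.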

When you need more

After 100 free personas, sign up at mimiqai.com to get an API key (Settings > API Keys), then add it to your config:

Claude Code:

claude mcp remove mimiq
claude mcp add --transport http \
  --header="Authorization: Bearer mq_sk_your_key_here" \
  mimiq https://mcp.mimiqai.com/mcp

Codex:

[mcp_servers.mimiq]
url = "https://mcp.mimiqai.com/mcp"
bearer_token_env_var = "MIMIQ_API_KEY"

Then: export MIMIQ_API_KEY=mq_sk_your_key_here

Cursor / VS Code / Claude Desktop / Windsurf — add a headers field:

{
  "headers": {
    "Authorization": "Bearer mq_sk_your_key_here"
  }
}
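
Combined with the Cursor-style entry from the setup section, a full authenticated entry might look like this (the key value is a placeholder, not a real key):

```json
{
  "mimiq": {
    "transport": "streamable_http",
    "url": "https://mcp.mimiqai.com/mcp",
    "headers": {
      "Authorization": "Bearer mq_sk_your_key_here"
    }
  }
}
```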

Self-hosting

If you're running the Mimiq backend yourself:

cd mcp-hosted
MIMIQ_API_URL=http://127.0.0.1:8000/api node src/server.js

Point your agent at http://127.0.0.1:8787/mcp. In local dev mode, no API key is required.

Docker

cd mcp-hosted
docker build -t mimiq-mcp .
docker run -p 8787:8787 -e MIMIQ_API_URL=http://host.docker.internal:8000/api -e HOST=0.0.0.0 mimiq-mcp
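
If you prefer Compose, a docker-compose.yml equivalent to the run command above might look like this. This is a sketch based on the flags shown, not an official file shipped with the package:

```yaml
services:
  mimiq-mcp:
    build: .   # run from inside mcp-hosted/
    ports:
      - "8787:8787"
    environment:
      MIMIQ_API_URL: http://host.docker.internal:8000/api
      HOST: 0.0.0.0
```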

Environment variables

| Variable | Default | Description |
|----------|---------|-------------|
| MIMIQ_API_URL | http://127.0.0.1:8000/api | Mimiq backend URL |
| MIMIQ_API_KEY | — | Backend API key (if backend requires shared auth) |
| PORT | 8787 | Server port |
| HOST | 127.0.0.1 | Bind host (0.0.0.0 for Docker) |

Testing localhost pages

Mimiq works with localhost out of the box. When your agent calls test_page or test_flow on a local dev server, it automatically:

  1. Starts a temporary cloudflared tunnel (free, no account needed)
  2. Uses the public tunnel URL for the simulation
  3. Tears down the tunnel when done

This is the primary workflow — test while you build, not after you deploy.

Requirement: cloudflared must be installed on the machine running the AI agent.

# macOS
brew install cloudflared

# Linux
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -o /usr/local/bin/cloudflared && chmod +x /usr/local/bin/cloudflared

The agent handles the rest. No bridge daemon, no extra config, no manual tunnel setup.

Troubleshooting

| Error | Cause | Fix |
|-------|-------|-----|
| INSUFFICIENT_CREDITS | You've used all your free personas (100) or your purchased credits are exhausted. | Buy more at mimiqai.com/app/usage or add an API key to your config. |
| URL_UNREACHABLE | The simulation backend couldn't fetch the URL. It must be publicly accessible — internal IPs, VPNs, and auth-gated pages won't work. | For localhost, make sure cloudflared is installed so the agent can create a tunnel automatically. For deployed pages, verify the URL loads in a normal browser. |
| SIM_TIMEOUT | The simulation exceeded the time limit. Default is 120 seconds for test_page, 300 seconds for test_flow. | Reduce persona count, or pass a higher timeoutSeconds value in the tool call. Complex multi-step flows take longer. |
| RATE_LIMITED | Too many concurrent requests from the same IP or API key. | Wait a few seconds and retry. If you're running batch tests, add a short delay between calls. |

License

MIT