
prompt-architect-mcp

v2.1.11

Context-aware prompt enhancement for any LLM chat interface.

Remembers your project across conversations. Every prompt you send gets automatically enriched with the full history of what you have been building — no setup, no manual tracking. Works with OpenAI and Google Gemini.


How it works

Every time you call enhance_prompt_with_context, the server:

  1. Saves your prompt to a local SQLite database
  2. Reads everything you have discussed before in this project session
  3. Builds a context block from your history and pinned facts
  4. Enhances your prompt with that full context via OpenAI or Gemini
  5. Saves the result — so the next call is even richer

You do one thing. The server handles everything else silently.
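The five steps above can be sketched in plain Node. This is a simplification, not the server's actual implementation: an in-memory array stands in for the SQLite database, and simple string concatenation stands in for the OpenAI/Gemini call.

```javascript
// Minimal sketch of the enhance-with-context loop.
// The real server persists to SQLite and calls an LLM; here the
// storage and "enhancement" are placeholders so the control flow is the focus.
const history = []; // stands in for rows in sessions.db
const pinned = [];  // stands in for pinned facts

function enhancePromptWithContext(prompt) {
  // 1. Save the raw prompt
  history.push(prompt);

  // 2-3. Build a context block from prior turns and pinned facts
  const context = [
    pinned.length ? "Pinned facts:\n" + pinned.map(f => "  - " + f).join("\n") : "",
    history.length > 1 ? "History:\n" + history.slice(0, -1).join("\n") : "",
  ].filter(Boolean).join("\n");

  // 4. Enhance the prompt with that context (placeholder for the LLM call)
  const enhanced = context ? `${context}\n\n${prompt}` : prompt;

  // 5. Save the result so the next call is richer
  history[history.length - 1] = enhanced;
  return enhanced;
}

pinned.push("Using Node.js 20");
const first = enhancePromptWithContext("create a schema");
const second = enhancePromptWithContext("write the route");
```

The first call sees only the pinned fact; the second call's context already contains the first prompt, which is the "context grows automatically" behaviour described above.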


Requirements

  • Node.js 18 or higher
  • An OpenAI API key or a Google Gemini API key (or both)

Installation — Claude Desktop

Open your Claude Desktop config file:

  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Mac: ~/Library/Application Support/Claude/claude_desktop_config.json

Using OpenAI only

{
  "mcpServers": {
    "prompt-architect": {
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-your-openai-key"
      }
    }
  }
}

Using Gemini only

{
  "mcpServers": {
    "prompt-architect": {
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key"
      }
    }
  }
}

Using both (OpenAI takes priority)

{
  "mcpServers": {
    "prompt-architect": {
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-your-openai-key",
        "GEMINI_API_KEY": "your-gemini-key"
      }
    }
  }
}

Fully quit and reopen Claude Desktop after saving.


Installation — VS Code (GitHub Copilot Agent Mode)

Create or open:

  • Global (all projects): %APPDATA%\Code\User\mcp.json (Windows) or ~/.config/Code/User/mcp.json (Mac/Linux)
  • Workspace (this project only): .vscode/mcp.json in your project root

{
  "servers": {
    "prompt-architect": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-your-openai-key"
      }
    }
  }
}

Then open Copilot Chat (Ctrl+Alt+I), switch to Agent Mode, and your tools will appear.


Installation — Cursor

Create or open:

  • Global: ~/.cursor/mcp.json
  • Project: .cursor/mcp.json in your project root

{
  "mcpServers": {
    "prompt-architect": {
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-your-openai-key"
      }
    }
  }
}

Or go to Cursor Settings → Tools & MCP → New MCP Server and paste the config.


Installation — Windsurf

Go to Windsurf Settings → MCP Servers → Add Server and paste:

{
  "mcpServers": {
    "prompt-architect": {
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-your-openai-key"
      }
    }
  }
}

Installation — Firebase Studio (Project IDX / Antigravity)

Create .idx/mcp.json in your project root:

{
  "mcpServers": {
    "prompt-architect": {
      "command": "npx",
      "args": ["-y", "prompt-architect-mcp@latest"],
      "env": {
        "OPENAI_API_KEY": "sk-your-openai-key"
      }
    }
  }
}

Or use the Command Palette: Shift+Ctrl+P → Firebase Studio: Add MCP Server.

To keep your API key out of version control, put it in a .env file in your project root and omit the env block from mcp.json.
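For example, a minimal .env file would contain just the key variable (the variable name matches the env blocks above; replace the placeholder with your real key):

```shell
OPENAI_API_KEY=sk-your-openai-key
```

Make sure .env is listed in your .gitignore so the key never reaches version control.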


Tools

enhance_prompt_with_context — the main tool

Call this every time you want to enhance a prompt. Context grows automatically.

| Field        | Required | Description                                                                 |
| ------------ | -------- | --------------------------------------------------------------------------- |
| prompt       | Yes      | The raw prompt you want to enhance                                          |
| session_id   | No       | Your project name, e.g. my-novel. Auto-detected from git root if omitted.   |

First call — no context yet:

Session: my-api-project
No context yet — this is your first turn. Context will grow automatically from here.

Category: technical
Enhanced Prompt:
...

Second call onwards — context applied:

Session: my-api-project

Context applied:
[Project context]
History: User is building a Node.js REST API with Express and MongoDB...
Current task: Adding JWT authentication middleware
Pinned facts:
  - Using Node.js 20, MongoDB Atlas, RS256 algorithm

Category: technical
Enhanced Prompt:
...already knows your stack, your history, your decisions...

pin_fact — lock in a project decision

Use this when you want something remembered permanently across all future prompts in this project.

| Field        | Required | Description                                                |
| ------------ | -------- | ---------------------------------------------------------- |
| fact         | Yes      | The fact to remember, e.g. Using React 18 and TypeScript   |
| session_id   | No       | Must match the name used in enhance_prompt_with_context    |


list_sessions — see all your projects

No fields required. Returns all active project sessions with their last active date.


enhance_prompt — quick one-off (no context)

For when you just want a single prompt enhanced without any session tracking.

| Field   | Required | Description               |
| ------- | -------- | ------------------------- |
| input   | Yes      | The raw prompt to enhance |


Testing the tools — step by step

Step 1 — Pin your project stack

Tool: pin_fact
fact: Building a task management app with Node.js 20, Express 4, MongoDB Atlas, JWT HS256
session_id: task-app

Expected: Fact pinned to session "task-app"


Step 2 — First enhancement (no history yet, but pinned fact appears)

Tool: enhance_prompt_with_context
prompt: create a mongoose schema for a task with priority levels
session_id: task-app

Expected: context block shows your pinned stack. Enhanced prompt mentions Node.js, MongoDB, Mongoose.


Step 3 — Second enhancement (context builds automatically)

Tool: enhance_prompt_with_context
prompt: write the Express route to create a new task
session_id: task-app

Expected: context block now shows history from Step 2. Enhanced prompt knows you already have a Mongoose schema.


Step 4 — Third enhancement (richer context)

Tool: enhance_prompt_with_context
prompt: add JWT middleware to protect the task routes
session_id: task-app

Expected: enhanced prompt references your schema and your routes from Steps 2 and 3.


Step 5 — Verify sessions

Tool: list_sessions

Expected:

Your project sessions:

• task-app   (last active: today)

Session ID guide

The session_id is just a name for your project. Rules:

  • Use the same name every time for the same project
  • Can contain letters, numbers, spaces, hyphens
  • Spaces and special characters are converted to hyphens automatically
  • If you are in a git repo and omit session_id, it is auto-detected from the repo name
  • If not in a git repo and session_id is omitted, the tool will ask you to provide one

Good examples: my-novel, work-api-2025, recipe-app, thesis-project
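The normalisation rules above (spaces and special characters become hyphens) could be implemented roughly as follows. The server's exact rules are not documented here, so this is one plausible sketch; the lowercasing step in particular is an assumption:

```javascript
// Plausible sketch of session_id normalisation: lowercase, collapse runs of
// spaces/special characters into a single hyphen, trim stray hyphens.
function normalizeSessionId(name) {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // spaces & special chars -> hyphens
    .replace(/^-+|-+$/g, "");    // drop leading/trailing hyphens
}
```

So a name like "My Novel (2025)" would resolve to the same session as "my-novel-2025", which is why reusing the exact same name per project matters.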


Data location

All session data is stored locally on your machine at:

  • Windows: C:\Users\YourName\.prompt-architect\sessions.db
  • Mac/Linux: ~/.prompt-architect/sessions.db

No data is sent to any server other than your chosen LLM provider (OpenAI or Google) for prompt enhancement and summarisation.


License

MIT