
@devora_no/prompt-assistant-mcp

v0.2.1

Published

Production-ready MCP server for enhancing coding prompts with multiple AI providers, featuring TEST_MODE, health checks, and enhanced error handling

Readme

Devora Prompt Assistant (MCP Server)


🚀 Production-Ready MCP Server - Transform raw coding prompts into structured, enhanced prompts using multiple AI providers with enterprise-grade security, monitoring, and reliability.

Overview

The Devora Prompt Assistant is a production-ready Model Context Protocol (MCP) server that transforms your raw coding prompts into structured, enhanced prompts optimized for AI assistants. Built with enterprise-grade security, comprehensive monitoring, and high reliability following all 14 MCP Server Best Practices.

What Makes This Special?

  • 🎯 Production-Grade: Implements all 14 MCP Server Best Practices for enterprise use
  • 🔒 Security First: Defense-in-depth security with rate limiting, circuit breakers, and input sanitization
  • 📊 Full Observability: Comprehensive metrics, tracing, and structured logging
  • ⚡ High Performance: >100 req/s (stdio), >500 req/s (HTTP) with intelligent caching
  • 🛡️ Resilient: Circuit breaker protection, graceful degradation, and 99.9% uptime
  • 🔧 Multi-Provider: Support for 5 AI providers with automatic failover

Key Features

🧠 Intelligent Prompt Enhancement

  • Use Case Auto-Detection: Automatically detects debugging, refactoring, feature creation, architecture decisions, tech comparison, and content design
  • Framework Detection: Detects your tech stack (React, Vue, Angular, Node.js, Python, PHP, etc.) and adjusts suggestions
  • Smart Question Generation: Generates clarifying questions when prompts are vague or incomplete
  • Structured Templates: Enforces consistent markdown sections with use-case-specific scaffolds

🔍 Intelligent Context Management

  • Git Integration: Automatically detects git repos and uses git diff for changed files
  • Smart Filtering: Honors .gitignore patterns and excludes common build directories
  • Multiple Strategies: changed, paths, and related collection strategies
  • Intelligent Caching: LRU cache with 10-minute TTL for fast repeat requests
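
To picture how LRU ordering combines with a TTL, here is a minimal sketch in TypeScript (illustrative only; names and eviction details are assumptions, not the server's actual implementation):

```typescript
// Minimal LRU cache with per-entry TTL (illustrative sketch).
type Entry<V> = { value: V; expires: number };

class LruTtlCache<K, V> {
  private map = new Map<K, Entry<V>>();
  constructor(private maxSize: number, private ttlMs: number) {}

  get(key: K): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.map.delete(key); // expired: drop it
      return undefined;
    }
    // Re-insert to mark as most recently used (Map preserves insertion order).
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, { value, expires: Date.now() + this.ttlMs });
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry (first key in insertion order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```

A 10-minute TTL as described above would correspond to `new LruTtlCache(size, 10 * 60 * 1000)`.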

🔒 Enterprise Security

  • Defense in Depth: 6-layer security model with network isolation, authentication, authorization, validation, sanitization, and rate limiting
  • Circuit Breaker: Prevents cascade failures with automatic recovery
  • Input/Output Sanitization: Protects against injection attacks and data leaks
  • Secret Redaction: API keys and tokens automatically redacted from logs
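
The circuit-breaker idea can be sketched in a few lines (thresholds and method names here are illustrative, not the server's actual API): after a run of failures the circuit opens and requests are rejected until a reset window elapses, at which point a probe request is allowed through.

```typescript
// Illustrative circuit-breaker sketch: reject calls while "open",
// allow a probe after the reset window, close again on success.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(private threshold = 5, private resetMs = 30000) {}

  canRequest(now = Date.now()): boolean {
    if (this.failures < this.threshold) return true; // closed
    if (now - this.openedAt >= this.resetMs) return true; // half-open probe
    return false; // open
  }

  recordSuccess(): void {
    this.failures = 0; // close the circuit again
  }

  recordFailure(now = Date.now()): void {
    this.failures += 1;
    if (this.failures === this.threshold) this.openedAt = now;
  }
}
```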

📊 Production Monitoring

  • Comprehensive Metrics: Track throughput, latency, error rate, cache hit rate, and memory usage
  • Distributed Tracing: Full request lifecycle tracking with trace ID propagation
  • Structured Logging: JSON logs with rotation, separate error/audit files
  • Health Checks: /health, /ready, and /metrics endpoints

⚡ High Performance

  • Connection Pooling: Reuse HTTP connections for LLM providers
  • Intelligent Caching: 15-minute TTL with size-based eviction
  • Memory Guards: Automatic cache clearing at 90% memory usage
  • Batch Operations: Optimized file reads and context collection

Quick Start

🚀 One-Click Cursor Installation

The one-click install button on the project page automatically adds this MCP server to Cursor.

⚠️ Important: At least one AI provider API key is required. The server will auto-detect which providers are available.

🧪 Testing: Set TEST_MODE=true to run without API keys for testing purposes.

📋 Manual Installation

Add this to your Cursor MCP settings (~/.cursor/mcp.json):

{
  "mcpServers": {
    "devora-prompt-assistant": {
      "command": "npx",
      "args": ["-y", "@devora_no/prompt-assistant-mcp"],
      "env": {
        "TRANSPORT": "stdio",
        "OPENAI_API_KEY": "your-openai-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-key-here"
      }
    }
  }
}

Set your API keys (at least one required):

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
# ... or any other provider

Restart Cursor and you're ready to go!

Installation

Prerequisites

  • Node.js: 20+ (recommended: latest LTS)
  • Package Manager: pnpm (recommended), npm, or yarn
  • AI Provider: At least one API key from supported providers

Installation Methods

Option 1: NPM (Recommended)

# One-time use
npx @devora_no/prompt-assistant-mcp

# Global installation
npm install -g @devora_no/prompt-assistant-mcp
devora-prompt-assistant

# Alias
npx dpa

Option 2: Development Setup

# Clone repository
git clone https://github.com/Devora-AS/devora-prompt-assistant-mcp.git
cd devora-prompt-assistant-mcp

# Install dependencies
pnpm install

# Copy environment template
cp .env.example .env

# Edit with your API keys
nano .env

Option 3: Docker

# Run with Docker
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=your_key_here \
  -e AUTH_BEARER_TOKENS=your_token_here \
  ghcr.io/devora-as/devora-prompt-assistant-mcp

Inspector (stdio) Quick Start

🔍 Testing with MCP Inspector

For development and debugging, use MCP Inspector with stdio transport:

  1. Build the project:

    pnpm install && pnpm build
  2. Choose your configuration:

    • Published package: Load examples/inspector-stdio.json
    • Local development: Load examples/inspector-stdio-local.json
  3. Test the tools:

    • Verify collect_context and enhance_prompt are listed
    • Run test scenarios from docs/inspector-playbook.md

🐛 Debug Mode

Enable detailed logging by setting CONTEXT_DEBUG=1 in your environment:

{
  "env": {
    "CONTEXT_DEBUG": "1",
    "LOG_LEVEL": "debug"
  }
}

This provides comprehensive trace information for debugging file collection, git integration, and performance.

Usage

🎯 Core Workflow

  1. Collect Context (optional but recommended):

    {
      "strategy": "changed",
      "maxKB": 32,
      "maxFiles": 20,
      "extensions": ["ts", "tsx", "js", "jsx"]
    }
  2. Enhance Prompt:

    {
      "task": "Refactor this React component to use TypeScript",
      "context": "[context from collect_context]",
      "audience": "cursor",
      "style": "detailed"
    }
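
Under the MCP protocol, each step above is an ordinary tools/call request; a sketch of the JSON-RPC message behind step 2 (field values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "enhance_prompt",
    "arguments": {
      "task": "Refactor this React component to use TypeScript",
      "audience": "cursor",
      "style": "detailed"
    }
  }
}
```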

🛠️ Available Tools

enhance_prompt - Prompt Enhancement

Transforms raw coding prompts into structured, enhanced prompts with use-case detection and smart question generation.

Parameters:

  • task (string, required): The coding task to enhance
  • context (string, optional): Additional context from workspace
  • audience (string, optional): Target audience (cursor, claude, copilot, general)
  • style (string, optional): Response style (concise, detailed)
  • constraints (array, optional): Specific constraints
  • provider (string, optional): AI provider to use
  • temperature (number, optional): Generation temperature (0-2)
  • maxTokens (number, optional): Maximum tokens to generate

collect_context - Workspace Context Collection

Intelligently collects relevant files and context from the workspace using git awareness and smart filtering.

Parameters:

  • strategy (string, optional): Collection strategy (changed, paths, related)
  • maxKB (number, optional): Maximum total size in KB
  • maxFiles (number, optional): Maximum number of files
  • include (array, optional): Glob patterns to include
  • exclude (array, optional): Glob patterns to exclude
  • useGit (boolean, optional): Enable git integration
  • extensions (array, optional): File extensions to include
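
For instance, to collect only TypeScript sources under src/ while skipping test files, the arguments might look like this (all values illustrative):

```json
{
  "strategy": "paths",
  "include": ["src/**/*.ts"],
  "exclude": ["**/*.test.ts"],
  "maxKB": 64,
  "maxFiles": 50,
  "useGit": false
}
```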

🔧 LLM Enhancement Modes

  • review (default): Minor improvements, structure validation
  • refine: Comprehensive content enhancement
  • off: Deterministic scaffold only, no LLM calls

🌐 Provider Support

| Provider | Default Model | Temperature | Max Tokens | Notes |
|----------|--------------|-------------|------------|-------|
| Anthropic | claude-3-5-sonnet-latest | ✓ | maxTokens | - |
| OpenAI | o3-mini | ✓ | maxTokens | Chat Completions |
| Azure OpenAI | gpt-4o-mini | ✓ | maxTokens | Deployment required |
| Gemini | gemini-2.0-flash | ✓ | maxOutputTokens | Different param name |
| Perplexity | sonar | ✓ | maxTokens | - |

Documentation

📚 Complete Documentation

Full documentation, including use-case examples, lives in the repository's docs/ directory (for example, docs/inspector-playbook.md and docs/troubleshooting.md).

Security & Privacy

🔒 Security Features

  • Defense in Depth: 6-layer security model
  • Rate Limiting: Token bucket algorithm per-client
  • Circuit Breaker: Prevents cascade failures
  • Input/Output Sanitization: Protection against injection attacks
  • Secret Redaction: API keys automatically redacted from logs
  • Bearer Token Authentication: Secure HTTP transport
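
A token bucket admits a request while tokens remain and refills at a steady rate, which is what lets it absorb short bursts while capping sustained throughput. A minimal sketch (capacity and refill rate are illustrative, not the server's configured values):

```typescript
// Token-bucket rate limiter sketch: one bucket per client in practice.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;
  constructor(private capacity: number, private refillPerSec: number, now = 0) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  tryRemove(now: number): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request admitted
    }
    return false; // rate limited
  }
}
```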

🛡️ Privacy Protection

  • Local First: All processing happens locally in stdio mode
  • No Data Storage: No code or prompts stored or transmitted
  • Secret Redaction: Sensitive data automatically redacted
  • Context Collection: Optional workspace scanning with user control
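
Secret redaction typically amounts to scrubbing key-like patterns from text before it reaches a log sink. A hedged sketch (patterns and the function name are illustrative, not the server's exact redaction rules):

```typescript
// Illustrative log-redaction sketch: scrub key-like tokens before logging.
function redactSecrets(text: string): string {
  return text
    .replace(/sk-[A-Za-z0-9_-]{16,}/g, "[REDACTED]")          // OpenAI-style keys
    .replace(/(api[_-]?key\s*[=:]\s*)\S+/gi, "$1[REDACTED]"); // key=value pairs
}
```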

Performance

📊 Performance Metrics

  • Throughput: >100 req/s (stdio), >500 req/s (HTTP)
  • Latency P95: <100ms (deterministic), <2s (with LLM)
  • Error Rate: <0.1% under normal load
  • Memory: <512MB per instance with auto-clearing
  • Cache Hit Rate: >70% for repeated queries
  • Uptime: 99.9% with circuit breaker protection

Optimization Features

  • Connection Pooling: Reuse HTTP connections
  • Intelligent Caching: 15-minute TTL with LRU eviction
  • Memory Guards: Automatic cache clearing at 90% usage
  • Batch Operations: Optimized file reads and context collection

Development

🏗️ Project Structure

src/
├── core/            # Core utilities (security, monitoring, caching)
│   ├── security/    # Rate limiting, sanitization, circuit breaker
│   ├── metrics.ts   # Performance monitoring
│   ├── tracing.ts   # Distributed tracing
│   └── fileLogger.ts # Structured logging
├── config/          # Environment and configuration management
├── providers/       # AI provider adapters
├── server/          # MCP server and transports
├── auth/            # Authentication middleware
└── index.ts         # CLI entry point

🛠️ Available Scripts

# Development
pnpm dev:stdio       # Run with stdio transport
pnpm dev:http        # Run with HTTP transport

# Building
pnpm build           # Build TypeScript to dist/
pnpm prepare         # Build and set executable bit

# Testing
pnpm test            # Run unit tests
pnpm test:watch      # Run tests in watch mode
pnpm test:coverage   # Run with coverage

# Code Quality
pnpm lint            # Run ESLint
pnpm lint:fix        # Fix ESLint issues
pnpm format          # Format with Prettier

🧪 Testing

  • Unit Tests: >80% coverage
  • Integration Tests: All tool workflows
  • Chaos Tests: Resilience under failure conditions
  • Performance Tests: KPI benchmarking

🔧 Troubleshooting

Quick Fixes

Server won't start?

  • Set at least one API key or use TEST_MODE=true
  • Check your configuration in ~/.cursor/mcp.json

Git errors in collect_context?

  • Use strategy: "paths" instead of "changed"
  • Or initialize a git repository

Connection issues?

  • Verify the server is running
  • Check for port conflicts
  • Ensure proper MCP configuration

Test Mode

Run without API keys for testing:

# Test mode (no API keys needed)
TEST_MODE=true npx @devora_no/prompt-assistant-mcp

# Or in MCP config
{
  "env": {
    "TEST_MODE": "true"
  }
}

Health Check

Check server status and configuration:

npx @modelcontextprotocol/inspector --cli npx -y @devora_no/prompt-assistant-mcp --method tools/call --tool-name health_check

Debug Mode

Enable detailed logging:

CONTEXT_DEBUG=1 LOG_LEVEL=debug npx @devora_no/prompt-assistant-mcp

Common Issues

| Problem | Solution |
|---------|----------|
| "No providers configured" | Set an API key or TEST_MODE=true |
| "No git history detected" | Use strategy: "paths" |
| "Connection closed" | Restart the server, check logs |
| Slow responses | Check API limits, enable caching |

📖 Full troubleshooting guide: docs/troubleshooting.md

FAQ

General Questions

Q: What is MCP? A: The Model Context Protocol (MCP) is a standard for connecting AI assistants to data sources and tools. This server implements the MCP specification.

Q: Which AI providers are supported? A: Anthropic Claude, OpenAI, Azure OpenAI, Google Gemini, and Perplexity. At least one API key is required.

Q: Is this production-ready? A: Yes! This implements all 14 MCP Server Best Practices with enterprise-grade security, monitoring, and reliability.

Installation and Setup

Q: How do I install this in Cursor? A: Use the one-click installation button above, or manually add the configuration to your ~/.cursor/mcp.json file.

Q: Do I need all provider API keys? A: No, you only need at least one. The server will auto-detect which providers are available.

Q: What's the difference between stdio and HTTP transport? A: stdio is for local development (recommended); HTTP is for remote access (experimental in v0.2.1).

API and Integration

Q: How do I use the tools? A: The tools are automatically available in Cursor. Use collect_context to gather workspace files, then enhance_prompt to improve your prompts.

Q: Can I use this with other MCP clients? A: Yes, this implements the standard MCP protocol and works with any MCP-compatible client.

Q: What are the input limits? A: Total input: 64KB, Task: 32KB, Context: 16KB. These limits ensure optimal performance.
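
Those limits can be checked client-side before calling the tool. A rough sketch, using the limit values from the answer above and counting UTF-8 bytes via Node's Buffer:

```typescript
// Pre-flight size check against the documented input limits (sketch).
const LIMITS_KB = { total: 64, task: 32, context: 16 };

function withinLimits(task: string, context = ""): boolean {
  const kb = (s: string) => Buffer.byteLength(s, "utf8") / 1024;
  return (
    kb(task) <= LIMITS_KB.task &&
    kb(context) <= LIMITS_KB.context &&
    kb(task) + kb(context) <= LIMITS_KB.total
  );
}
```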

Security

Q: Is my code safe? A: Yes, in stdio mode all processing happens locally. No code or prompts are stored or transmitted to third parties.

Q: Are API keys secure? A: Yes, API keys are never logged and are automatically redacted from error messages.

Q: What about rate limiting? A: The server implements token bucket rate limiting per-client to prevent abuse.

Troubleshooting

Q: "No tools, prompts, or resources" error? A: Check your MCP configuration, ensure API keys are set, and restart Cursor.

Q: "No providers configured" error? A: Set at least one provider API key in your environment variables.

Q: How do I enable debug logging? A: Set LOG_LEVEL=debug in your environment variables.

Contributing

We welcome contributions! Please see our Contributing Guide for details.

How to Contribute

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Run linting and tests
  6. Submit a pull request

Reporting Issues

Found a bug or have a feature request? Please open an issue on the GitHub repository.

License

MIT License - see LICENSE file for details.


Last Updated: January 15, 2025
Version: 0.2.1
Status: Production Ready
Security Status: ✅ Secured & Monitored
Maintained by: Devora



Developed by Devora ☔️

Brave • Innovative • Responsible • Creative • Different