
@dici1435/spec-review-mcp (v0.1.7)

LLM-powered spec compliance review MCP server for dici-spec projects. Reviews code against spec documents and constraints, returning structured findings with category, severity, file location, and description.

Installation

npm install -g @dici1435/spec-review-mcp
# or run directly
npx @dici1435/spec-review-mcp

The server can also be added to a project via dici-spec add spec-review, which prompts for your LLM API key.

Tools

| Tool | Description |
|---|---|
| review_code | Review code files against spec documents for compliance. Accepts spec paths, code paths, and optional constraints. Returns structured findings. |
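For illustration, a review_code tool call might carry arguments shaped like the following. The parameter names here are assumptions inferred from the description above, not the server's actual input schema:

```json
{
  "name": "review_code",
  "arguments": {
    "spec_paths": ["agents/specs/auth.md"],
    "code_paths": ["src/auth/login.ts"],
    "constraints": ["agents/constraints/security.md"]
  }
}
```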

Finding Structure

Each finding includes:

| Field | Values |
|---|---|
| category | spec-compliance, missing-implementation, deviation, architecture, correctness |
| severity | blocker, major, minor, nit |
| file | Path to the file with the finding |
| line | Line number (when applicable) |
| description | Detailed explanation of the finding |
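As a sketch, a finding with the fields above could be modeled like this. The Finding type and the sample values are illustrative; the package's actual type definitions may differ:

```typescript
// Illustrative model of a review finding; field names follow the table above.
type Category =
  | "spec-compliance"
  | "missing-implementation"
  | "deviation"
  | "architecture"
  | "correctness";

type Severity = "blocker" | "major" | "minor" | "nit";

interface Finding {
  category: Category;
  severity: Severity;
  file: string;
  line?: number; // present only when applicable
  description: string;
}

const example: Finding = {
  category: "deviation",
  severity: "major",
  file: "src/auth/login.ts",
  line: 42,
  description: "Login flow skips the rate-limit check required by the auth spec.",
};
```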

Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| LLM_API_KEY | Yes | — | API key for the LLM provider (OpenAI or Anthropic) |
| LLM_BASE_URL | No | https://api.openai.com/v1 | Base URL for the LLM API. Set to https://api.anthropic.com/v1 to use Anthropic. |
| LLM_MODEL | No | gpt-4o (OpenAI) / claude-3-5-sonnet-20241022 (Anthropic) | Model to use for review |
| PROJECT_ROOT | No | process.cwd() | Root directory of the project |
| SPECS_DIR | No | agents/specs | Directory containing spec files (relative to PROJECT_ROOT) |
| CONSTRAINTS_DIR | No | agents/constraints | Directory containing constraint files (relative to PROJECT_ROOT) |

The provider is auto-detected from LLM_BASE_URL — no separate provider flag needed. If the URL contains anthropic.com, the Anthropic Messages API is used. Everything else uses the OpenAI Chat Completions API.
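The detection rule described above can be sketched as a one-liner. detectProvider is a hypothetical helper name for illustration, not the package's actual export:

```typescript
// Hypothetical sketch of the provider auto-detection described above.
type Provider = "anthropic" | "openai";

function detectProvider(baseUrl: string): Provider {
  // A URL containing anthropic.com selects the Anthropic Messages API;
  // everything else falls through to the OpenAI Chat Completions API.
  return baseUrl.includes("anthropic.com") ? "anthropic" : "openai";
}
```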

Provider Configuration

OpenAI (default)

"dici-spec-review": {
  "command": "npx",
  "args": ["@dici1435/spec-review-mcp"],
  "env": {
    "LLM_API_KEY": "sk-...",
    "LLM_MODEL": "gpt-4o"
  }
}

Anthropic

"dici-spec-review": {
  "command": "npx",
  "args": ["@dici1435/spec-review-mcp"],
  "env": {
    "LLM_API_KEY": "sk-ant-...",
    "LLM_BASE_URL": "https://api.anthropic.com/v1",
    "LLM_MODEL": "claude-sonnet-4-5"
  }
}

The provider is detected automatically from LLM_BASE_URL. When using Anthropic, the server uses the /v1/messages endpoint with x-api-key authentication and passes the system prompt as Anthropic's top-level system parameter.

OpenAI-compatible providers (OpenRouter, Azure, etc.)

Any provider with an OpenAI-compatible /chat/completions endpoint works by setting LLM_BASE_URL:

"LLM_BASE_URL": "https://openrouter.ai/api/v1",
"LLM_MODEL": "anthropic/claude-sonnet-4-5"

How It Works

  1. Loads spec documents from SPECS_DIR and constraints from CONSTRAINTS_DIR
  2. Reads the specified code files
  3. Assembles a review prompt with system instructions, spec content, code content, and constraint context
  4. Sends the prompt to the configured LLM (auto-detecting OpenAI vs. Anthropic from LLM_BASE_URL)
  5. Parses the response into structured findings
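The pipeline above can be sketched roughly as follows. All names are illustrative rather than the package's actual internals, and file loading and the LLM call itself are elided:

```typescript
// Illustrative sketch of the review pipeline; names are hypothetical.
interface ReviewContext {
  specs: string[];        // contents of files loaded from SPECS_DIR
  constraints: string[];  // contents of files loaded from CONSTRAINTS_DIR
  code: string[];         // contents of the code files under review
}

// Step 3: assemble a single review prompt from the loaded pieces.
function buildReviewPrompt(ctx: ReviewContext): string {
  return [
    "## Spec documents",
    ...ctx.specs,
    "## Constraints",
    ...ctx.constraints,
    "## Code under review",
    ...ctx.code,
  ].join("\n\n");
}

const prompt = buildReviewPrompt({
  specs: ["Auth spec: all logins must be rate limited."],
  constraints: ["No plaintext credential logging."],
  code: ["function login() { /* ... */ }"],
});
```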

Transport

Stdio (stdin/stdout). Designed to be launched by AI agent IDEs as a child process.

License

MIT