model-agency

v2.0.1

MCP server providing an agency of AI models for expert consultation and idiomatic code generation

MCP server that provides unified access to multiple AI models (OpenAI, Google, Anthropic, xAI) with structured output support, async operations, and idiomatic code pattern enforcement.

Features

  • Multi-provider AI integration (OpenAI, Google, Anthropic, xAI)
  • Automatic structured output detection with JSON response support
  • Async operations with request tracking and caching
  • Idiomatic pattern enforcement to prevent common anti-patterns
  • Smart retry logic with exponential backoff
  • Real-time model health checks and capability detection

Installation

git clone https://github.com/yourusername/model-agency
cd model-agency
bun install
bun run build

Configuration

Set API keys as environment variables:

export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."     # or GEMINI_API_KEY
export ANTHROPIC_API_KEY="..."
export XAI_API_KEY="..."         # or GROK_API_KEY
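
Each provider is optional (only one key is required, see Troubleshooting), so it can help to check which providers the server will actually see. A TypeScript sketch; the key names and fallbacks come from the list above, the helper itself is illustrative and not part of model-agency:

```typescript
// Illustrative check: which providers have a key visible to the process.
// Key names and fallbacks (GEMINI_API_KEY, GROK_API_KEY) are from the README.
const providerKeys: Record<string, string[]> = {
  openai: ["OPENAI_API_KEY"],
  google: ["GOOGLE_API_KEY", "GEMINI_API_KEY"],
  anthropic: ["ANTHROPIC_API_KEY"],
  xai: ["XAI_API_KEY", "GROK_API_KEY"],
};

function configuredProviders(env: Record<string, string | undefined>): string[] {
  return Object.keys(providerKeys).filter((provider) =>
    providerKeys[provider].some((name) => Boolean(env[name]))
  );
}

// e.g. configuredProviders(process.env)
```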

Usage

Start the server:

bun start

Development mode:

bun run dev

Model-Specific Limitations

o3 Models

  • The temperature parameter is NOT supported; including it causes API errors
  • Use reasoning_effort instead to control output quality
  • Supported reasoning levels: minimal, low, medium, high

GPT-5 Models

  • Supports verbosity parameter for output detail control
  • Temperature works as expected

Other Models

  • Standard parameters (temperature, top_p, etc.) work normally
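
The rules above can be sketched as a small client-side guard. This is illustrative only, not model-agency's actual implementation; the parameter names mirror the README:

```typescript
// Illustrative guard for the rules above, not model-agency's actual code:
// o3-family models reject `temperature`, so drop it and fall back to
// `reasoning_effort`; every other model keeps its parameters unchanged.
interface ChatParams {
  model: string;
  temperature?: number;
  reasoning_effort?: "minimal" | "low" | "medium" | "high";
}

function adaptParams(params: ChatParams): ChatParams {
  if (!params.model.startsWith("o3")) return params;
  const { temperature, ...rest } = params; // temperature would cause an API error
  return { reasoning_effort: "medium", ...rest }; // default effort if none was given
}
```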

Available Tools

models

List available models with capabilities and performance characteristics.

{ "detailed": false }  // Basic listing
{ "detailed": true }   // Include configuration status

advice

Get AI assistance with automatic capability detection.

{
  "model": "openai:gpt-4o",
  "prompt": "How do I optimize this React component?",
  "reasoningEffort": "medium",  // o3/GPT-5 only
  "verbosity": "low"            // GPT-5 only
}

Features:

  • Intelligent routing: OpenAI models use async, others use sync
  • Automatic structured output when supported
  • Fallback to text mode for incompatible models
  • Response includes confidence scores

OpenAI models get additional features:

  • Multi-turn conversation support (use conversation_id)
  • Request caching and deduplication
  • Non-blocking operations (poll with request_id)
  • Automatic context iteration (max 3 rounds)

// Example with OpenAI (async features enabled)
{
  "model": "openai:o3",
  "prompt": "Analyze this architecture...",
  "conversation_id": "uuid",     // For multi-turn
  "max_completion_tokens": 2000,
  "wait_timeout_ms": 120000
}

// Example with Google (standard sync)
{
  "model": "google:gemini-2.5-flash",
  "prompt": "Quick code review..."
}
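
The non-blocking flow (poll with request_id) could look like the following client-side sketch. Only request_id and the 120000 ms timeout come from the examples above; the poll callback and its "pending"/"done" statuses are assumptions, not model-agency's API:

```typescript
// Hypothetical polling loop for the non-blocking flow above. `request_id`
// and the 120000 ms default mirror the README; the `poll` callback and its
// "pending"/"done" statuses are assumed for the sketch.
type PollResult = { status: "pending" } | { status: "done"; text: string };

async function waitForAdvice(
  poll: (requestId: string) => Promise<PollResult>,
  requestId: string,
  waitTimeoutMs = 120_000,
  intervalMs = 1_000,
): Promise<string> {
  const deadline = Date.now() + waitTimeoutMs;
  while (Date.now() < deadline) {
    const result = await poll(requestId);
    if (result.status === "done") return result.text;
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait before re-polling
  }
  throw new Error(`advice request ${requestId} timed out after ${waitTimeoutMs} ms`);
}
```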

idiom

Get ecosystem-aware implementation approaches.

{
  "task": "Implement global state management in React",
  "context": {
    "dependencies": "{ \"react\": \"^18.2.0\" }",
    "language": "typescript",
    "constraints": ["no new dependencies"]
  }
}

Returns:

  • Recommended approach with rationale
  • Packages to use/avoid
  • Code examples
  • Anti-patterns to avoid

Architecture

src/
├── server.ts           # Main MCP server
├── providers.ts        # Provider configuration
├── model-registry.ts   # Model factory registry
├── handlers/
│   ├── advice.ts       # Sync advice handler
│   ├── advice-async.ts # Async with caching
│   ├── idiom.ts        # Pattern enforcement
│   └── models.ts       # Model listing
├── utils/
│   ├── errors.ts       # Error handling
│   └── optimization.ts # Performance utils
└── clients/
    └── openai-async.ts # OpenAI async client

Available Models

OpenAI

  • Reasoning: o3, o3-mini, o3-pro, o4-mini (60-120s)
  • Fast: gpt-4o, gpt-4o-mini (5-15s)
  • GPT-5 Series: gpt-5, gpt-5-mini, gpt-5-nano (with reasoning)
  • GPT-4.1 Series: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano

Google

  • Gemini 2.5: gemini-2.5-pro, gemini-2.5-flash (with thinking mode)
  • Gemini 2.0: gemini-2.0-flash, gemini-2.0-flash-lite
  • Gemini 1.x: gemini-1.5-pro, gemini-1.5-flash, gemini-1.0-pro

Anthropic

  • Claude 4 (Opus): claude-opus-4-1-20250805, claude-opus-4-20250514 (hybrid reasoning)
  • Claude 3.7: claude-3-7-sonnet-20250219
  • Claude 3.5: claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
  • Claude 3: claude-3-opus-20240229, claude-3-haiku-20240307

xAI

  • Grok 4: grok-4 (with reasoning)
  • Grok 3: grok-3, grok-3-mini
  • Grok 2: grok-2
  • Legacy: grok-beta

Claude Desktop Integration

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "model-agency": {
      "command": "bun",
      "args": ["run", "/path/to/model-agency/dist/run.js"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "GOOGLE_API_KEY": "...",
        "ANTHROPIC_API_KEY": "...",
        "XAI_API_KEY": "..."
      }
    }
  }
}

Testing

bun test           # Run all tests
bun test:watch     # Watch mode
bun check          # Type checking

Troubleshooting

No models available: Check that at least one API key is set.

Model not found: Use full format provider:model-name (e.g., openai:gpt-4o).
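
A strict parser for that provider:model-name format, as a sketch (the helper is illustrative and not part of the server):

```typescript
// Illustrative parser for the `provider:model-name` format the tools expect;
// rejects bare model names like "gpt-4o" so the mistake surfaces early.
function parseModelId(id: string): { provider: string; model: string } {
  const sep = id.indexOf(":");
  if (sep <= 0 || sep === id.length - 1) {
    throw new Error(`Expected "provider:model-name", got "${id}"`);
  }
  return { provider: id.slice(0, sep), model: id.slice(sep + 1) };
}
```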

Rate limits: Error -32003 indicates rate limiting. Try another provider or wait.

API key issues: Error -32002 indicates auth problems. Verify key is valid.
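
A client-side sketch combining the two codes above: fail fast on -32002 (retrying an invalid key cannot help), retry -32003 with exponential backoff. The error codes come from this README; the `code` property on the thrown error is an assumption for the sketch:

```typescript
// Sketch: fail fast on -32002 (auth), retry -32003 (rate limit) with
// exponential backoff. The error's `code` field shape is an assumption.
async function withBackoff<T>(call: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err: any) {
      if (err?.code === -32002) throw err; // invalid key: retrying cannot help
      if (err?.code !== -32003 || attempt >= maxRetries) throw err;
      await new Promise((r) => setTimeout(r, 100 * 2 ** attempt)); // 100, 200, 400 ms…
    }
  }
}
```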

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Commit changes
  4. Push to branch
  5. Open a Pull Request

License

MIT - see LICENSE file.

Acknowledgments