
openrouter-deep-research-hadrian v1.0.15

Simplified Deep Research MCP for Qwen Desktop (no database)


OpenRouter Agents MCP Server


[UPDATE – 2025-08-26] Server modes (set the MODE env var):

  • AGENT: one simple tool (agent) that routes research / follow_up / retrieve / query
  • MANUAL: individual tools for each action
  • ALL (default): both AGENT and MANUAL, plus always-on ops tools
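
For example, to launch in agent-only mode over stdio (a minimal sketch; the npx invocation is covered under Install / Run below):

MODE=AGENT OPENROUTER_API_KEY=your_key npx @terminals-tech/openrouter-agents --stdio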

Diagram (simple)

[Always-On Ops]  ping • get_server_status • job_status • cancel_job

AGENT MODE
client → agent → (research | follow_up | retrieve | query)

MANUAL MODE
client → (submit_research | conduct_research | retrieve | query | research_follow_up | get_report_content | list_research_history)

Killer features

  • Plan → parallelize → synthesize workflow with bounded parallelism
  • Dynamic model catalog; supports the Anthropic Sonnet‑4 and OpenAI GPT‑5 model families
  • Built‑in semantic KB (PGlite + pgvector) with backup, export/import, health, and reindex tools
  • Lightweight web helpers: quick search and page fetch for context
  • Robust streaming (SSE), per‑connection auth, clean logs

Install / Run

  • Install (project dependency)
npm install @terminals-tech/openrouter-agents
  • Global install (optional)
npm install -g @terminals-tech/openrouter-agents
  • Run with npx (no install)
npx @terminals-tech/openrouter-agents --stdio
# or daemon
SERVER_API_KEY=devkey npx @terminals-tech/openrouter-agents

What’s new (v1.5.0)

  • Version parity across npm, GitHub Releases, and GitHub Packages
  • Dual publish workflow enabled

Changelog →

Quick start

  1. Prereqs
  • Node 18+ (20 LTS recommended), npm, Git, OpenRouter API key
  2. Install
npm install
  3. Configure (.env)
OPENROUTER_API_KEY=your_openrouter_key
SERVER_API_KEY=your_http_transport_key
SERVER_PORT=3002

# Modes (pick one; default ALL)
# AGENT  = agent-only + always-on ops (ping/status/jobs)
# MANUAL = individual tools + always-on ops
# ALL    = agent + individual tools + always-on ops
MODE=ALL

# Orchestration
ENSEMBLE_SIZE=2
PARALLELISM=4

# Models (override as needed); defaults favor current, cost-effective models
PLANNING_MODEL=openai/gpt-5-chat
PLANNING_CANDIDATES=openai/gpt-5-chat,google/gemini-2.5-pro,anthropic/claude-sonnet-4
HIGH_COST_MODELS=x-ai/grok-4,openai/gpt-5-chat,google/gemini-2.5-pro,anthropic/claude-sonnet-4,morph/morph-v3-large
LOW_COST_MODELS=deepseek/deepseek-chat-v3.1,z-ai/glm-4.5v,qwen/qwen3-coder,openai/gpt-5-mini,google/gemini-2.5-flash
VERY_LOW_COST_MODELS=openai/gpt-5-nano,deepseek/deepseek-chat-v3.1

# Storage
PGLITE_DATA_DIR=./researchAgentDB
PGLITE_RELAXED_DURABILITY=true
REPORT_OUTPUT_PATH=./research_outputs/

# Indexer
INDEXER_ENABLED=true
INDEXER_AUTO_INDEX_REPORTS=true
INDEXER_AUTO_INDEX_FETCHED=true

# MCP features
MCP_ENABLE_PROMPTS=true
MCP_ENABLE_RESOURCES=true

# Prompt strategy
PROMPTS_COMPACT=true
PROMPTS_REQUIRE_URLS=true
PROMPTS_CONFIDENCE=true
  4. Run
  • STDIO (for Cursor/VS Code MCP):
node src/server/mcpServer.js --stdio
  • HTTP/SSE (local daemon):
SERVER_API_KEY=$SERVER_API_KEY node src/server/mcpServer.js

Windows PowerShell examples

  • STDIO
$env:OPENROUTER_API_KEY='your_key'
$env:INDEXER_ENABLED='true'
node src/server/mcpServer.js --stdio
  • HTTP/SSE
$env:OPENROUTER_API_KEY='your_key'
$env:SERVER_API_KEY='devkey'
$env:SERVER_PORT='3002'
node src/server/mcpServer.js

One-liner demo scripts

Dev (HTTP/SSE):

SERVER_API_KEY=devkey INDEXER_ENABLED=true node src/server/mcpServer.js

STDIO (Cursor/VS Code):

OPENROUTER_API_KEY=your_key INDEXER_ENABLED=true node src/server/mcpServer.js --stdio

MCP client JSON configuration (no manual start required)

You can register this server directly in MCP clients that support JSON server manifests.

Minimal examples:

  1. STDIO transport (recommended for IDEs)
{
  "servers": {
    "openrouter-agents": {
      "command": "npx",
      "args": ["@terminals-tech/openrouter-agents", "--stdio"],
      "env": {
        "OPENROUTER_API_KEY": "${OPENROUTER_API_KEY}",
        "SERVER_API_KEY": "${SERVER_API_KEY}",
        "PGLITE_DATA_DIR": "./researchAgentDB",
        "INDEXER_ENABLED": "true"
      }
    }
  }
}
  2. HTTP/SSE transport (daemon mode)
{
  "servers": {
    "openrouter-agents": {
      "url": "http://127.0.0.1:3002",
      "sse": "/sse",
      "messages": "/messages",
      "headers": {
        "Authorization": "Bearer ${SERVER_API_KEY}"
      }
    }
  }
}

With the package installed globally (or via npx), MCP clients can spawn the server automatically. See your client’s docs for where to place this JSON (e.g., ~/.config/client/mcp.json).

Tools (high‑value)

  • Always‑on (all modes): ping, get_server_status, job_status, get_job_status, cancel_job
  • AGENT: agent (single entrypoint for research / follow_up / retrieve / query)
  • MANUAL/ALL toolset: submit_research (async), conduct_research (sync/stream), research_follow_up, search (hybrid), retrieve (index/sql), query (SELECT), get_report_content, list_research_history
  • Jobs: get_job_status, cancel_job
  • Retrieval: search (hybrid BM25+vector with optional LLM rerank), retrieve (index/sql wrapper)
  • SQL: query (SELECT‑only, optional explain)
  • Knowledge base: get_past_research, list_research_history, get_report_content
  • DB ops: backup_db (tar.gz), export_reports, import_reports, db_health, reindex_vectors
  • Models: list_models
  • Web: search_web, fetch_url
  • Indexer: index_texts, index_url, search_index, index_status
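
A minimal sketch of invoking one of these tools over MCP (standard JSON-RPC tools/call; the argument names follow the Command Map below, and the query string is illustrative):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "submit_research",
    "arguments": { "q": "state of MCP adoption", "cost": "low", "fmt": "report", "src": true }
  }
}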

Tool usage patterns (for LLMs)

Use the tool_patterns resource to view JSON recipes that describe effective tool chaining, e.g.:

  • Search → Fetch → Research
  • Async research: submit, stream via SSE /jobs/:id/events, then get report content
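
A rough sketch of that async pattern over the HTTP/SSE transport (assumes the daemon from Quick start on port 3002; the job id comes back from submit_research, and the exact response field name may differ in your build):

# 1) submit_research returns a job id (via your MCP client)
# 2) stream progress events for that job
curl -N -H "Authorization: Bearer $SERVER_API_KEY" http://127.0.0.1:3002/jobs/<job_id>/events
# 3) when the run completes, call get_report_content with the returned reportId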

Notes

  • Data lives locally under PGLITE_DATA_DIR (default ./researchAgentDB). Backups are tarballs in ./backups.
  • Use list_models to discover current provider capabilities and ids.

Architecture at a glance

See docs/diagram-architecture.mmd (Mermaid). Render to SVG with Mermaid CLI if installed:

npx @mermaid-js/mermaid-cli -i docs/diagram-architecture.mmd -o docs/diagram-architecture.svg

Or use the script:

npm run gen:diagram

Architecture Diagram (branded)

If the image doesn’t render in your viewer, open docs/diagram-architecture-branded.svg directly.

Answer crystallization view

[Answer Crystallization diagram]

How it differs from typical “agent chains”:

  • Not just hardcoded handoffs; the plan is computed, then parallel agents search, then a synthesis step reasons over consensus, contradictions, and gaps.
  • The system indexes what it reads during research, so subsequent queries get faster/smarter.
  • Guardrails shape attention: explicit URL citations, [Unverified] labelling, and confidence scoring.

Minimal‑token prompt strategy

  • Compact mode strips preambles to essential constraints; everything else is inferred.
  • Enforced rules: explicit URL citations, no guessing IDs/URLs, confidence labels.
  • Short tool specs: use concise param names and rely on server defaults.

Common user journeys

  • “Give me an executive briefing on MCP status as of July 2025.”

    • Server plans sub‑queries, fetches authoritative sources, synthesizes with citations.
    • Indexed outputs make related follow‑ups faster.
  • “Find vision‑capable models and route images gracefully.”

    • The /models catalog is discovered and filtered, a router template is generated, and requests fall back to text models when needed.
  • “Compare orchestration patterns for bounded parallelism.”

    • Pulls OTel/Airflow/Temporal docs, produces a MECE synthesis and code pointers.

Cursor IDE usage

  • Add this server in Cursor's MCP settings pointing to node src/server/mcpServer.js --stdio (see the sketch below).
  • Use the new prompts (planning_prompt, synthesis_prompt) directly in Cursor to scaffold tasks.
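
A minimal sketch of that registration (Cursor currently reads an mcpServers map from ~/.cursor/mcp.json or a project-level .cursor/mcp.json; the key names here are Cursor's convention, not this package's, and the relative path assumes the server is launched from the repo root — otherwise use the npx form shown earlier):

{
  "mcpServers": {
    "openrouter-agents": {
      "command": "node",
      "args": ["src/server/mcpServer.js", "--stdio"],
      "env": {
        "OPENROUTER_API_KEY": "your_openrouter_key",
        "INDEXER_ENABLED": "true"
      }
    }
  }
}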

FAQ (quick glance)

  • How does it avoid hallucinations?
    • Strict citation rules, [Unverified] labels, retrieval of past work, on‑the‑fly indexing.
  • Can I disable features?
    • Yes, via the env flags listed above (see the example after this list).
  • Does it support streaming?
    • Yes, SSE for HTTP; stdio for MCP.
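
For example, to turn off the indexer and the optional MCP prompt/resource surfaces, set the flags from the Configure step to false (assuming, as the names suggest, that false disables them):

INDEXER_ENABLED=false
MCP_ENABLE_PROMPTS=false
MCP_ENABLE_RESOURCES=false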

Command Map (quick reference)

  • Start (stdio): npm run stdio
  • Start (HTTP/SSE): npm start
  • Run via npx (scoped): npx @terminals-tech/openrouter-agents --stdio
  • Generate examples: npm run gen:examples
  • List models: MCP list_models { refresh:false }
  • Submit research (async): submit_research { q:"<query>", cost:"low", aud:"intermediate", fmt:"report", src:true }
  • Track job: get_job_status { job_id:"..." }, cancel: cancel_job { job_id:"..." }
  • Unified search: search { q:"<query>", k:10, scope:"both" }
  • SQL (read‑only): query { sql:"SELECT ... WHERE id = $1", params:[1], explain:true }
  • Get past research: get_past_research { query:"<query>", limit:5 }
  • Index URL (if enabled): index_url { url:"https://..." }
  • Micro UI (ghost): visit http://localhost:3002/ui to stream job events (SSE).

Package publishing

  • Name: @terminals-tech/openrouter-agents
  • Version: 1.3.2
  • Bin: openrouter-agents
  • Author: Tej Desai [email protected]
  • Homepage: https://terminals.tech

Install and run without cloning:

npx @terminals-tech/openrouter-agents --stdio
# or daemon
SERVER_API_KEY=your_key npx @terminals-tech/openrouter-agents

Publish (scoped)

npm login
npm version 1.3.2 -m "chore(release): %s"
git push --follow-tags
npm publish --access public --provenance

Validation – MSeeP (Multi‑Source Evidence & Evaluation Protocol)

  • Citations enforced: explicit URLs, confidence tags; unknowns marked [Unverified].
  • Cross‑model triangulation: plan fans out to multiple models; synthesis scores consensus vs contradictions.
  • KB grounding: local hybrid index (BM25+vector) retrieves past work for cross‑checking.
  • Human feedback: rate_research_report { rating, comment } stored to DB; drives follow‑ups.
  • Reproducibility: export_reports + backup_db capture artifacts for audit.

Quality feedback loop

  • Run examples: npm run gen:examples
  • Review: list_research_history, get_report_content {reportId}
  • Rate: rate_research_report { reportId, rating:1..5, comment }
  • Improve retrieval: reindex_vectors, index_status, search_index { query }

Architecture diagram (branded)

  • See docs/diagram-architecture-branded.svg (logo links to https://terminals.tech).

Stargazers

[Star History Chart]