@thejusdutt/kiro-research-mcp
v1.0.0
Claude Research-style MCP server implementing Anthropic's multi-agent research methodology with source quality assessment and iterative searching
Kiro Research MCP Server
A Claude Research-style MCP server implementing Anthropic's multi-agent research methodology, which outperformed a single-agent baseline by 90.2% on Anthropic's internal evaluation.
Features
- 🔍 Iterative Search - Start broad, then narrow (Claude Research methodology)
- 📊 Source Quality Scoring - Prioritizes primary sources over SEO content farms
- 📝 Citation Tracking - Automatic numbered citations [1], [2], etc.
- 📈 Scaling Rules - Built-in guidance for simple, comparison, and complex research
- 📋 Quality-Tiered Reports - Sources grouped by authority level
Quick Start
Installation
Use directly with npx (no installation required):

```bash
npx @thejusdutt/kiro-research-mcp
```

Or install globally:

```bash
npm install -g @thejusdutt/kiro-research-mcp
```

Configuration
Kiro IDE
Add to .kiro/settings/mcp.json:
```json
{
  "mcpServers": {
    "research": {
      "command": "npx",
      "args": ["-y", "@thejusdutt/kiro-research-mcp"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-api-key"
      },
      "autoApprove": ["web_search", "add_source", "get_citations", "research_session", "generate_report"]
    }
  }
}
```

Claude Desktop
Add to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "research": {
      "command": "npx",
      "args": ["-y", "@thejusdutt/kiro-research-mcp"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-api-key"
      }
    }
  }
}
```

Get a Tavily API Key
- Go to tavily.com
- Sign up for free (1,000 searches/month free)
- Copy your API key
Claude Research Methodology
This server implements the research methodology described in Anthropic's engineering post, "How we built our multi-agent research system" (see References).
Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                   LEAD RESEARCHER (You)                     │
│     Plans strategy → Coordinates searches → Synthesizes     │
└─────────────────────────────────────────────────────────────┘
                           │
     ┌─────────────────────┼─────────────────────┐
     ▼                     ▼                     ▼
┌───────────────┐  ┌───────────────┐  ┌───────────────┐
│  SUBAGENT 1   │  │  SUBAGENT 2   │  │  SUBAGENT 3   │
│  (Aspect A)   │  │  (Aspect B)   │  │  (Aspect C)   │
└───────────────┘  └───────────────┘  └───────────────┘
     │                     │                     │
     └─────────────────────┼─────────────────────┘
                           ▼
                  ┌───────────────┐
                  │ CITATION AGENT│
                  │ Validates all │
                  │ claims/sources│
                  └───────────────┘
```

Scaling Rules
| Task Type | Searches | Sources | Example |
|-----------|----------|---------|---------|
| Simple fact | 3-10 | 3-5 | "What is WebAssembly?" |
| Comparison | 10-15/aspect | 4-6/aspect | "Compare React vs Vue" |
| Complex research | 25+ | 10-15 | "State of AI in healthcare 2025" |
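The scaling rules can be read as a simple lookup from task type to targets. A minimal sketch (the names here are illustrative, not the server's actual internals):

```typescript
type TaskType = "simple" | "comparison" | "complex";

interface ScalingTarget {
  searches: string; // recommended number of searches
  sources: string;  // recommended number of sources to cite
}

// Hypothetical lookup mirroring the scaling-rules table above.
const SCALING_RULES: Record<TaskType, ScalingTarget> = {
  simple:     { searches: "3-10",         sources: "3-5" },
  comparison: { searches: "10-15/aspect", sources: "4-6/aspect" },
  complex:    { searches: "25+",          sources: "10-15" },
};

function scalingGuidance(task: TaskType): ScalingTarget {
  return SCALING_RULES[task];
}

console.log(scalingGuidance("comparison").searches); // "10-15/aspect"
```

This is the shape of guidance `research_session(action: "create")` returns when a new session starts.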
Source Quality Scoring
Anthropic found that agents often chose SEO content farms over authoritative sources. This server implements quality heuristics:
| Score | Tier | Examples |
|-------|------|----------|
| 10 | Primary | Official docs, research papers, company blogs |
| 9 | Authoritative | .gov, .edu, major institutions |
| 7 | Quality | Quality journalism, expert analysis |
| 5 | General | General web content |
| 3 | Low | SEO farms, social media (deprioritized) |
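A heuristic of this kind boils down to pattern-matching on the source's hostname. The sketch below is an illustration of the idea only; the domain patterns are assumptions, not the package's real rules:

```typescript
// Illustrative quality heuristic: map a URL to a score tier.
// Domain patterns here are examples, not the server's actual scorer.
function scoreSource(url: string): number {
  const host = new URL(url).hostname;
  if (/\.(gov|edu)$/.test(host)) return 9;                 // authoritative institutions
  if (/(^docs\.|^developer\.|arxiv\.org)/.test(host)) return 10; // primary sources
  if (/(reuters|arstechnica)\.com$/.test(host)) return 7;  // quality journalism
  if (/(pinterest|facebook)\.com$/.test(host)) return 3;   // deprioritized
  return 5;                                                // general web content
}

console.log(scoreSource("https://example.edu/paper")); // 9
```

Scores like these let the lead researcher sort results so an official doc outranks an SEO farm even when the farm ranks higher in raw search results.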
Search Strategy
- Phase 1 - Broad Exploration: Start with 2-4 word queries
- Phase 2 - Parallel Exploration: Search different aspects simultaneously
- Phase 3 - Gap Filling: Target identified knowledge gaps
- Phase 4 - Verification: Cross-reference key claims
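Concretely, the four phases might produce a query plan like the following for one topic; every query string here is illustrative:

```typescript
// Illustrative phased query plan for researching "WebAssembly in 2025".
const plan = {
  broad: ["WebAssembly 2025"],                           // Phase 1: 2-4 word queries
  aspects: [                                             // Phase 2: parallel aspects
    "WebAssembly server-side performance",
    "WebAssembly browser support 2025",
  ],
  gaps: ["WebAssembly WASI adoption"],                   // Phase 3: fill identified gaps
  verify: ["WebAssembly performance benchmarks source"], // Phase 4: cross-check claims
};

const allQueries = [...plan.broad, ...plan.aspects, ...plan.gaps, ...plan.verify];
console.log(allQueries.length); // 5
```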
Tools
web_search
Search the web with quality scoring.
Parameters:
- query: Search query (start broad, then refine)
- maxResults: Number of results (default: 10)
- sessionId: Session ID to track progress

add_source
Track a fetched URL as a citation source.
Parameters:
- sessionId: Research session ID
- url: URL of the source
- title: Title of the article
- author: (optional) Author name
- publishedDate: (optional) Publication date
- siteName: (optional) Site name

research_session
Manage research sessions with scaling guidance.
Actions:
- create: Start new session (provides scaling targets)
- update: Add findings and gaps
- status: Get progress metrics
- complete: Mark session done

get_citations
Get formatted citations.
Parameters:
- sessionId: Research session ID
- format: Citation format (markdown, inline, apa, mla, or chicago)

generate_report
Generate a quality-tiered research report.
Parameters:
- sessionId: Research session ID
- title: (optional) Report title
- includeSources: Include sources section (default: true)
- includeGaps: Include gaps section (default: true)

Example Workflow
User: Research the current state of WebAssembly in 2025
1. Create session:
research_session(action: "create", query: "WebAssembly 2025 state")
→ Returns scaling guidance: 10-15 searches, 5-8 sources
2. Broad search:
web_search(query: "WebAssembly 2025", sessionId: "...")
→ Returns quality-scored results
3. Aspect searches:
web_search(query: "WebAssembly server-side performance")
web_search(query: "WebAssembly browser support 2025")
web_search(query: "companies using WebAssembly production")
4. Fetch & track sources:
mcp_fetch_fetch(url: "https://...")
add_source(sessionId: "...", url: "...", title: "...")
5. Record findings:
research_session(action: "update", findings: [...], gaps: [...])
6. Generate report:
generate_report(sessionId: "...")
→ Quality-tiered report with citations

Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| TAVILY_API_KEY | Yes | Tavily API key for web search |
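Under the hood, each tool invocation in the example workflow travels over MCP as a JSON-RPC `tools/call` request. A minimal sketch of the request shape (the `id` and argument values are illustrative):

```typescript
// Build an MCP tools/call JSON-RPC request; the envelope shape follows
// the Model Context Protocol spec, the argument values are made up.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "web_search", {
  query: "WebAssembly 2025",
  maxResults: 10,
  sessionId: "session-123",
});
console.log(req.method); // "tools/call"
```

Kiro IDE and Claude Desktop construct these requests for you; the sketch only shows what crosses the stdio transport.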
Development
```bash
# Clone the repo
git clone https://github.com/thejusdutt/kiro-research-mcp.git
cd kiro-research-mcp

# Install dependencies
npm install

# Build
npm run build

# Run locally
node build/index.js
```

References
- How we built our multi-agent research system - Anthropic Engineering
- Model Context Protocol - MCP Documentation
- Tavily API - Search API
License
MIT © Thejus Dutt
