@blackdrome/open-deepresearch
v0.1.0
# UPAI Open DeepResearch Engine
Standalone open-source DeepResearch engine with:
- adaptive multi-round retrieval
- ranking + domain diversification
- claim clustering + contradiction checks
- citation-grounded synthesis with critique pass
- pluggable adapters for search + LLM providers
- built-in providers for OpenRouter and NVIDIA NIM
## Install

```bash
npm install @upai/open-deepresearch
```

## Quick Start
```typescript
import {
  HttpJsonSearchAdapter,
  createOpenDeepResearchEngine,
} from "@upai/open-deepresearch";

const searchAdapter = new HttpJsonSearchAdapter({
  endpoint: process.env.SEARCH_ENDPOINT!,
  apiKey: process.env.SEARCH_API_KEY,
  apiKeyHeader: "Authorization",
  staticPayload: { provider: "free", mode: "web", num: 10 },
});

const engine = createOpenDeepResearchEngine({
  searchAdapter,
  defaultProvider: "openrouter",
  openRouter: {
    apiKey: process.env.OPENROUTER_API_KEY!,
    model: process.env.OPENROUTER_MODEL || "openai/gpt-4o-mini",
  },
  nim: {
    apiKey: process.env.NIM_API_KEY!,
    model: process.env.NIM_MODEL || "meta/llama-3.1-70b-instruct",
  },
  fallbackComplete: async (prompt) => {
    return `Fallback handler received prompt length=${prompt.length}`;
  },
});

const run = await engine.run("Best RAG architecture for support chat in 2026", {
  depth: "deep",
  providerHint: "openrouter",
  onProgress: (p) => console.log(`[${p.stage}] ${p.message}`),
});

console.log(run.finalAnswer);
console.log(run.sourcesForMessage.slice(0, 5));
```

## Required API Keys
At minimum, configure one LLM provider and one search endpoint.
### OpenRouter

- `OPENROUTER_API_KEY`
- `OPENROUTER_MODEL` (example: `openai/gpt-4o-mini`)
### NVIDIA NIM

- `NIM_API_KEY`
- `NIM_MODEL` (example: `meta/llama-3.1-70b-instruct`)
### Search

Use any endpoint that returns JSON with either:

- `results: [{ title, snippet, url }]`, or
- `items: [{ title, snippet, url|link }]`

Then configure:

- `SEARCH_ENDPOINT`
- `SEARCH_API_KEY` (optional, depending on your backend)
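To illustrate the two accepted shapes, here is a minimal sketch of how a response could be normalized into a single hit list. This is not the package's internal code; the type and function names (`SearchHit`, `normalizeSearchResponse`) are hypothetical.

```typescript
// Unified shape the engine can consume.
interface SearchHit {
  title: string;
  snippet: string;
  url: string;
}

// Raw backend response: either `results` (with `url`) or `items` (with `url` or `link`).
type RawResponse = {
  results?: Array<{ title: string; snippet: string; url: string }>;
  items?: Array<{ title: string; snippet: string; url?: string; link?: string }>;
};

function normalizeSearchResponse(raw: RawResponse): SearchHit[] {
  const rows = raw.results ?? raw.items ?? [];
  const hits: SearchHit[] = [];
  for (const r of rows) {
    // Accept `url` first, fall back to `link`; drop rows with neither.
    const url =
      (r as { url?: string }).url ?? (r as { link?: string }).link ?? "";
    if (url) hits.push({ title: r.title, snippet: r.snippet, url });
  }
  return hits;
}
```

Any search proxy that emits one of these two shapes should work with `HttpJsonSearchAdapter` unchanged.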
## Engine Pipeline
- **Plan:** Creates a multi-angle query set (facts, recency, benchmarks, counterpoints, docs).
- **Retrieve:** Executes searches, honoring any domain/forum constraints.
- **Rank:** Scores results by relevance, authority, recency, and constraint matching.
- **Diversify:** Caps per-domain results to avoid over-concentration.
- **Verify:** Builds a claim graph and flags likely contradictions.
- **Synthesize:** Produces a direct answer with citations.
- **Critique:** Enforces the truth contract and repairs the response when possible.
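As a concrete example of the Diversify step, here is a sketch of a per-domain cap. The `maxPerDomain` knob is an assumption for illustration, not a documented engine option.

```typescript
interface RankedHit {
  url: string;
  score: number;
}

// Keep the highest-scoring hits while allowing at most `maxPerDomain`
// results from any single hostname.
function diversifyByDomain(hits: RankedHit[], maxPerDomain = 2): RankedHit[] {
  const perDomain = new Map<string, number>();
  const kept: RankedHit[] = [];
  for (const hit of [...hits].sort((a, b) => b.score - a.score)) {
    const domain = new URL(hit.url).hostname;
    const count = perDomain.get(domain) ?? 0;
    if (count < maxPerDomain) {
      perDomain.set(domain, count + 1);
      kept.push(hit);
    }
  }
  return kept;
}
```

Capping after ranking (rather than during retrieval) means the best few results from a dominant domain still survive while the long tail of that domain is dropped.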
## Domain / Forum Constraints
The engine auto-detects constraints from the user's query language, for example:

- `site:reddit.com best vector db`
- `only from stack overflow`
- `discussion on hacker news`

You can also pass explicit constraints in `run(..., { constraints })`.
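The detection idea can be sketched as follows. This is illustrative only: the engine's actual constraint shape and detection rules are not documented here, and `detectConstraints` is a hypothetical name.

```typescript
interface DomainConstraint {
  domains: string[];
}

// Pull `site:` operators and a few well-known forum phrases out of the query.
function detectConstraints(query: string): DomainConstraint {
  const domains: string[] = [];
  for (const m of query.matchAll(/site:([\w.-]+)/g)) {
    domains.push(m[1]);
  }
  if (/stack overflow/i.test(query)) domains.push("stackoverflow.com");
  if (/hacker news/i.test(query)) domains.push("news.ycombinator.com");
  return { domains };
}
```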
## Progress Events

`run` supports progress hooks for UI updates and logging:
```typescript
onProgress: (progress) => {
  // progress.stage: planning | retrieving | ranking | verifying | synthesizing | critiquing
},
```

## Exports
- `OpenDeepResearchEngine`
- `createOpenDeepResearchEngine`
- `OpenRouterAdapter`
- `NimAdapter`
- `HttpJsonSearchAdapter`
- `FunctionLlmAdapter`
- all core types and constraint utilities
## Notes
- Node 20+ recommended.
- The library uses `fetch` and standard Web APIs.
- For strict security, keep provider keys server-side.
## License
Apache-2.0
