@bluebossa63/mcp-stock-analyzer-ts-stdio v1.0.7
MCP server (Node 20, STDIO-only) with Yahoo Finance, OpenAI-compatible sentiment, evaluation, and n8n Command Line integration.
# mcp-stock-analyzer-ts (STDIO)

An MCP server (Node 20+) that exposes stock-analysis utilities as Model Context Protocol tools.
Designed to work smoothly with n8n’s MCP Client (Command Line) node and OpenAI-compatible LLMs.
## ✨ What’s inside (current state)

Pure AI-driven evaluation (no local math):
- `defineSentiment` → calls your OpenAI-compatible endpoint to classify sentiment from headlines/text.
- `evaluateWithAI` → passes intraday time series + sentiment to the model with an engineered prompt; returns per-timeframe decisions (and an optional `overall` if requested).

Convenience fetchers:
- `fetchChart` → Yahoo Finance OHLCV (range/interval).
- `fetchMultiIntraday` → pulls 1m, 10m (fallback 15m), 60m series.
- `fetchNewsTitles` → Yahoo Finance RSS headlines.

AI-only orchestration:
- `getSentimentFromNews` → fetch headlines + run `defineSentiment`.
- `evaluateScoreWithAI` → run the model on your provided `{intraday, sentiment}` (no fetching).
- `pipelineEvaluateAI` → one-shot: fetch intraday + news → sentiment → model evaluation.

Diagnostics:
- `debug-env`, `debug-echo-openai`

No local calculations remain (e.g., SMA). All decisioning is model-driven when you call `evaluateWithAI`/`evaluateScoreWithAI` or the pipeline tool.
## 🧰 Tool reference
| Tool | Purpose | Input (JSON) | Output (JSON, in result.content[0].text) |
|---|---|---|---|
| fetchChart | Single Yahoo series | { "symbol":"AAPL", "range":"1mo", "interval":"1d" } | { symbol, currency?, points: [{t,c,o?,h?,l?,v?}] } |
| fetchMultiIntraday | 1m / 10m(→15m) / 60m | { "symbol":"NVDA" } | { symbol, intraday: { "1m": PriceSeries, "10m": PriceSeries?, "60m": PriceSeries } } |
| fetchNewsTitles | Headlines (RSS) | { "symbol":"NVDA", "max": 10 } | { symbol, titles: string[] } |
| defineSentiment | LLM sentiment | { "articles": ["...","..."] } | { sentiment: "positive" \| "neutral" \| "negative", confidence: number, reasoning?: string } |
| getSentimentFromNews | Headlines → sentiment | { "symbol":"NVDA", "max": 10 } | { symbol, titles, sentiment } |
| evaluateWithAI | Model eval (compact series) | { "symbol":"NVDA", "intraday": { "1m":{points:[{t,c}]}, ... }, "sentiment": {...}, "aggregate": false } | { perTimeframe: { "<tf>": { decision, reasoning } }, overall? } |
| evaluateScoreWithAI | Same as above (explicit name) | same as evaluateWithAI | same |
| pipelineEvaluateAI | One-shot pipeline | { "symbol":"NVDA", "maxNews":10, "aggregate":false, "perTimeframeMaxPoints":200 } | same as evaluateWithAI |
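Every tool returns its JSON payload as a string inside `result.content[0].text`, so consumers must unwrap it before use. A minimal sketch of that unwrapping (the same pattern the n8n wiring example below uses); `parseMcpResult` is an illustrative name, not part of the package:

```javascript
// MCP tool results arrive as a JSON string inside result.content[0].text.
// Unwrap one result envelope into a plain object; return null when absent.
function parseMcpResult(envelope) {
  const raw = envelope?.result?.content?.[0]?.text;
  return raw ? JSON.parse(raw) : null;
}

// Example envelope shaped like a fetchChart response:
const envelope = {
  result: {
    content: [
      { text: JSON.stringify({ symbol: "AAPL", points: [{ t: 1710000000000, c: 100 }] }) },
    ],
  },
};
const data = parseMcpResult(envelope); // { symbol: "AAPL", points: [...] }
```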
### Yahoo ranges/intervals (validated combos)

- 1d → 1m
- 5d → 15m
- 1mo → 60m
- 3mo+ → 1d / 1wk / 1mo

(Use `15m` instead of `10m` for 5-day intraday data.)
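The combos above can be expressed as a small lookup table, useful when building `fetchChart` inputs. A sketch only — `isValidCombo` is an illustrative name, and the server’s own validation may differ:

```javascript
// Map each Yahoo range to the intervals this README lists as valid.
const VALID_INTERVALS = {
  "1d": ["1m"],
  "5d": ["15m"],
  "1mo": ["60m"],
};
// Ranges of 3mo and longer all share the same daily-or-coarser intervals.
const LONG_RANGE_INTERVALS = ["1d", "1wk", "1mo"];

function isValidCombo(range, interval) {
  const allowed = VALID_INTERVALS[range] ?? LONG_RANGE_INTERVALS;
  return allowed.includes(interval);
}
```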
## 🚀 Quick start

### 1) Install & build

```sh
npm ci
npm run build
```

### 2) Run as STDIO (for n8n MCP Client)

```sh
npx -y @bluebossa63/mcp-stock-analyzer-ts-stdio
```

### 3) Environment (single variable friendly)
The n8n MCP node may only accept one env var. This server supports blob hydration:
- Preferred: set one of the following in the node’s Environment Variables field:

  Comma/newline blob:

  ```sh
  MCP_ENV=OPENAI_BASE_URL=https://api.openai.com,OPENAI_API_KEY=sk-...,OPENAI_MODEL=gpt-4o-mini
  ```

  JSON blob:

  ```sh
  MCP_ENV_JSON={"OPENAI_BASE_URL":"https://api.openai.com","OPENAI_API_KEY":"sk-...","OPENAI_MODEL":"gpt-4o-mini"}
  ```

  Base64 blob:

  ```sh
  printf 'OPENAI_BASE_URL=https://api.openai.com\nOPENAI_API_KEY=sk-...\nOPENAI_MODEL=gpt-4o-mini\n' | base64
  # paste result as:
  MCP_ENV_B64=PD9...
  ```

- Wrapper command alternative (bypasses env parsing):
  - Command: `sh`
  - Arguments: `-lc 'OPENAI_BASE_URL=https://api.openai.com OPENAI_API_KEY=sk-... OPENAI_MODEL=gpt-4o-mini npx -y @bluebossa63/mcp-stock-analyzer-ts-stdio'`
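For illustration, a comma/newline `MCP_ENV` blob can be hydrated into key/value pairs roughly as follows. `hydrateEnvBlob` is a hypothetical name; the server’s actual logic in `ai.ts` may differ in detail:

```javascript
// Split a blob like "A=1,B=2" (or newline-separated) on commas/newlines and
// copy each KEY=VALUE pair into the target env object. Only the first "="
// separates key from value, so values such as URLs survive intact.
function hydrateEnvBlob(blob, env = {}) {
  for (const pair of blob.split(/[,\n]/)) {
    const eq = pair.indexOf("=");
    if (eq > 0) env[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return env;
}
```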
### 4) Required env keys

- `OPENAI_BASE_URL` (default `https://api.openai.com`)
- `OPENAI_API_KEY` (required)
- `OPENAI_MODEL` (default `gpt-4o-mini`)
Optional (for webhooks):
- `N8N_WEBHOOK_URL`
- `N8N_AUTH_HEADER` (e.g., `x-api-key: abc123`)
## 🔌 n8n wiring patterns

### A) News → Sentiment → Intraday → Evaluate (modular)

1. MCP: fetchNewsTitles — `{ "symbol":"NVDA", "max": 10 }`
2. Function: map titles → `{ "articles": [...] }`
3. MCP: defineSentiment (articles from step 2)
4. MCP: fetchMultiIntraday — `{ "symbol":"NVDA" }`
5. Function: build `toolParameters` for evaluateScoreWithAI:

   ```js
   function parseMcp(item) {
     const raw = item?.json?.result?.content?.[0]?.text;
     return raw ? JSON.parse(raw) : null;
   }
   const intradayObj = parseMcp(itemsFromNode('MCP: fetchMultiIntraday')[0]).intraday;
   const sentimentObj = parseMcp(itemsFromNode('MCP: defineSentiment')[0]);
   const symbol = parseMcp(itemsFromNode('MCP: fetchMultiIntraday')[0]).symbol || "NVDA";
   return [{ json: { toolParameters: JSON.stringify({
     symbol,
     intraday: intradayObj,
     sentiment: sentimentObj,
     aggregate: false,
     perTimeframeMaxPoints: 200
   }) } }];
   ```

6. MCP: evaluateScoreWithAI with `toolParameters = {{$json.toolParameters}}`
### B) One-shot

1. MCP: pipelineEvaluateAI — `{ "symbol":"NVDA", "maxNews": 10, "aggregate": false, "perTimeframeMaxPoints": 200 }`
### C) Post to n8n webhook (optional)

If you add the `postToN8N`/`evaluateAndPost` tools (see code snippets), you can push results to your own webhook.
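A hedged sketch of what such a webhook push could look like, honouring the `N8N_WEBHOOK_URL`/`N8N_AUTH_HEADER` convention from the environment section. `postToWebhook` and `buildHeaders` are illustrative names, not the package’s actual exports:

```javascript
// Turn an "Header-Name: value" string (the N8N_AUTH_HEADER convention)
// into a fetch headers object, alongside the JSON content type.
function buildHeaders(authHeader) {
  const headers = { "Content-Type": "application/json" };
  if (authHeader) {
    const idx = authHeader.indexOf(":");
    if (idx > 0) headers[authHeader.slice(0, idx).trim()] = authHeader.slice(idx + 1).trim();
  }
  return headers;
}

// POST an evaluation result to the configured n8n webhook (Node 18+ global fetch).
async function postToWebhook(result) {
  const res = await fetch(process.env.N8N_WEBHOOK_URL, {
    method: "POST",
    headers: buildHeaders(process.env.N8N_AUTH_HEADER),
    body: JSON.stringify(result),
  });
  if (!res.ok) throw new Error(`webhook responded ${res.status}`);
}
```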
## 🧪 Local smoke tests

Sentiment only:

```sh
OPENAI_BASE_URL=https://api.openai.com OPENAI_API_KEY=sk-... OPENAI_MODEL=gpt-4o-mini \
node --input-type=module -e "import('./dist/ai.js').then(async m => { const r = await m.defineSentimentFromTexts(['Strong datacenter demand','Analyst warns of volatility']); console.log(r) })"
```

Evaluate with AI (provide your own small series):

```sh
node --input-type=module -e "import('./dist/ai.js').then(async m => {
  const res = await m.evaluateWithAI({
    symbol:'NVDA',
    intraday: { '60m': { points: [ {t: 1710000000000, c: 100}, {t: 1710003600000, c: 102}, {t: 1710007200000, c: 101} ] } },
    sentiment: { sentiment:'positive', confidence:0.75, reasoning:'…' },
    aggregate:false, perTimeframeMaxPoints:120
  });
  console.log(JSON.stringify(res,null,2));
})"
```

## 🛡️ Robustness & Troubleshooting
- Env hydration: runs at the top of `ai.ts`, so parsing works even if your MCP node collapses variables into one.
- Yahoo compat: `fetchMultiIntraday` transparently falls back from `5d/10m` to `5d/15m`. Use `fetchChart` for custom pairs.
- Network hiccups: if you see `fetch failed`, consider adding `NODE_OPTIONS=--dns-result-order=ipv4first` to your env blob.
- Corporate proxy/CA: `HTTPS_PROXY`, `HTTP_PROXY`, `NO_PROXY`, `NODE_EXTRA_CA_CERTS`.
- 401: check the key with `debug-echo-openai` (shows base URL + key prefix) and ensure Authorization is set (the code forces it).
## 🔒 Notes

- Do not log secrets; `debug-echo-openai` masks the key.
- Be mindful of token size; `evaluateWithAI` compacts each timeframe (default: last 120 points). Tune via `perTimeframeMaxPoints`.
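The compaction step can be pictured as keeping only the tail of each series before it is sent to the model. A sketch under that assumption; `compactIntraday` is an illustrative name, not the server’s actual function:

```javascript
// Keep only the most recent maxPoints samples of each timeframe's series,
// mirroring the default-120-point trimming described above.
function compactIntraday(intraday, maxPoints = 120) {
  const out = {};
  for (const [tf, series] of Object.entries(intraday)) {
    out[tf] = { ...series, points: series.points.slice(-maxPoints) };
  }
  return out;
}
```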
## License

MIT © 2025
