sa-agent v1.0.2
Stock investment analyzer using CodeBuddy Agent SDK
Stock Analyzer (sa)
A stock investment value analyzer using CodeBuddy Agent SDK. The agent autonomously searches public financial websites and generates comprehensive analysis reports.
Features
- Multi-dimensional Analysis: Fundamental, technical, and valuation analysis
- Dual Output Format: Markdown reports for reading, JSON for programmatic use
- Intelligent Data Fetching: Agent-driven search across public financial websites
- Flexible Configuration: TOML-based configuration with sensible defaults
- Hook System: Extensible error and tool-use hooks for custom handling
Installation
npm install sa
Quick Start
import { analyzeStock, registerHook } from 'sa';
// Optional: Register error hook for custom error handling
registerHook('onError', (error) => {
console.error('Analysis error:', error);
});
// Analyze a stock
const result = await analyzeStock({
stockCode: '00700.HK',
stockName: '腾讯控股',
});
// Access the results
console.log(result.markdownPath); // Report file path
console.log(result.jsonPath); // JSON file path
console.log(result.success); // Whether analysis succeeded
Configuration
Create a sa.toml file in your project root:
[general]
output_dir = "./reports"
default_model = "glm-4.7"
max_turns = 15
[output]
format = "both" # markdown, json, both
naming = "code" # code, name, code_date, name_date
date_subdir = false
[analysis]
fundamental = true
technical = true
valuation = true
[analysis.weights]
fundamental = 0.4
technical = 0.3
valuation = 0.3
[logging]
level = "info"
error_log = "./logs/errors.log"
Configuration Options
| Section | Option | Type | Default | Description |
|---------|--------|------|---------|-------------|
| general | output_dir | string | "./reports" | Directory for output files |
| general | default_model | string | "glm-4.7" | AI model to use |
| general | max_turns | number | 15 | Maximum conversation turns |
| output | format | string | "both" | Output format: markdown, json, both |
| output | naming | string | "code" | File naming: code, name, code_date, name_date |
| output | date_subdir | boolean | false | Create date subdirectory |
| analysis | fundamental | boolean | true | Enable fundamental analysis |
| analysis | technical | boolean | true | Enable technical analysis |
| analysis | valuation | boolean | true | Enable valuation analysis |
| logging | level | string | "info" | Log level: debug, info, warn, error |
| logging | error_log | string | "./logs/errors.log" | Error log file path |
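The weights in [analysis.weights] control how the three sub-scores combine into the overall 0-100 investment score. A minimal sketch of such a weighted average, assuming each dimension yields a 0-100 sub-score (combineScores is illustrative, not part of the sa API):

```typescript
// Illustrative helper: combine per-dimension scores (0-100) using the
// weights from [analysis.weights]. Not part of the sa API.
interface Weights {
  fundamental: number;
  technical: number;
  valuation: number;
}

function combineScores(scores: Weights, weights: Weights): number {
  const total =
    scores.fundamental * weights.fundamental +
    scores.technical * weights.technical +
    scores.valuation * weights.valuation;
  // Weights are expected to sum to 1.0, so `total` stays in 0-100.
  return Math.round(total);
}

// With the default weights (0.4 / 0.3 / 0.3):
combineScores(
  { fundamental: 80, technical: 60, valuation: 70 },
  { fundamental: 0.4, technical: 0.3, valuation: 0.3 }
); // 80*0.4 + 60*0.3 + 70*0.3 = 71
```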
Examples
Basic Usage
Analyze a single stock with minimal configuration:
import { analyzeStock } from 'sa';
const result = await analyzeStock({
stockCode: '00700.HK',
stockName: '腾讯控股',
});
console.log(`Analysis completed: ${result.success}`);
console.log(`Report saved to: ${result.markdownPath}`);
Multiple Stock Analysis
Analyze multiple stocks in sequence:
import { analyzeMultipleStocks } from 'sa';
const stocks = [
{ code: '00700.HK', name: '腾讯控股' },
{ code: '09988.HK', name: '阿里巴巴' },
{ code: '03690.HK', name: '美团' },
];
const results = await analyzeMultipleStocks(stocks, {
model: 'glm-4.7',
outputDir: './reports',
});
results.forEach((r) => {
console.log(`${r.stockName}: ${r.success ? 'OK' : 'FAILED'}`);
});
Using Configuration File
import { analyzeStock, loadConfig } from 'sa';
// Load and inspect configuration
const config = await loadConfig('./sa.toml');
console.log(`Using model: ${config.general.default_model}`);
// Analysis will use config file settings
const result = await analyzeStock({
stockCode: 'AAPL.US',
stockName: 'Apple Inc.',
configPath: './sa.toml',
});
Error Handling with Hooks
import { analyzeStock, registerHook } from 'sa';
// Register error hook for logging
registerHook('onError', (error) => {
console.error(`[${error.timestamp}] ${error.toolName}: ${error.error}`);
});
// Register tool use hook for monitoring
registerHook('onToolUse', (toolName, input, result) => {
console.log(`Tool called: ${toolName}`);
});
const result = await analyzeStock({
stockCode: 'TSLA.US',
stockName: 'Tesla Inc.',
});
Stock Code Normalization
The library automatically normalizes various stock code formats:
import { normalizeStockCode } from 'sa';
// Various input formats are supported:
normalizeStockCode('00700.HK'); // '00700.HK' (already standard)
normalizeStockCode('HK00700'); // '00700.HK' (prefix format)
normalizeStockCode('00700hk'); // '00700.HK' (suffix format)
normalizeStockCode('00700'); // '00700.HK' (5-digit starting with 0)
normalizeStockCode('300001'); // '300001.SZ' (ChiNext)
normalizeStockCode('600000'); // '600000.SH' (Shanghai Main)
normalizeStockCode('AAPL'); // 'AAPL.US' (US stock)
Custom Output Options
Override default settings per analysis:
import { analyzeStock } from 'sa';
const result = await analyzeStock({
stockCode: '000001.SZ',
stockName: '平安银行',
model: 'glm-4.7', // Override model
maxTurns: 20, // Allow more turns
outputDir: './custom-reports', // Custom output directory
});
Complete Example Script
import {
analyzeStock,
analyzeMultipleStocks,
loadConfig,
registerHook,
clearHooks,
} from 'sa';
async function main() {
// Setup error handling
registerHook('onError', (error) => {
console.error(`Error: ${error.error}`);
});
try {
// Load configuration
const config = await loadConfig();
console.log('Configuration loaded');
// Analyze a single stock
const result = await analyzeStock({
stockCode: '00700.HK',
stockName: '腾讯控股',
});
if (result.success) {
console.log(`Report: ${result.markdownPath}`);
console.log(`JSON: ${result.jsonPath}`);
}
} finally {
// Cleanup
clearHooks('onError');
}
}
main().catch(console.error);
API Reference
analyzeStock(options)
Analyze a single stock's investment value.
const result = await analyzeStock({
stockCode: '00700.HK', // Required: stock code
stockName: '腾讯控股', // Required: stock name
model: 'glm-4.7', // Optional: override default model
maxTurns: 15, // Optional: override max turns
configPath: './sa.toml', // Optional: custom config path
outputDir: './reports', // Optional: override output directory
});
analyzeMultipleStocks(stocks, options)
Analyze multiple stocks in sequence.
const results = await analyzeMultipleStocks(
[
{ code: '00700.HK', name: '腾讯控股' },
{ code: '09988.HK', name: '阿里巴巴' },
],
{ model: 'glm-4.7' }
);
loadConfig(configPath?)
Load configuration from a TOML file.
const config = await loadConfig('./sa.toml');
normalizeStockCode(input)
Normalize stock code to standard format.
const code = normalizeStockCode('00700'); // '00700.HK'
Hook System
Register custom hooks for error handling and tool usage monitoring:
import { registerHook, unregisterHook, clearHooks } from 'sa';
// Error hook
const errorHook = (error) => {
console.error(`[${error.timestamp}] ${error.toolName}: ${error.error}`);
};
registerHook('onError', errorHook);
// Tool use hook
const toolHook = (toolName, input, result) => {
console.log(`Tool used: ${toolName}`);
};
registerHook('onToolUse', toolHook);
// Remove a specific hook
unregisterHook('onError', errorHook);
// Clear all hooks of a type
clearHooks('onError');
Output Format
Markdown Report
The analyzer generates a comprehensive markdown report including:
- Basic company information (name, ticker, industry, market cap)
- Fundamental analysis (P/E, P/B, ROE, margins, growth rates)
- Technical analysis (trend, moving averages, RSI, MACD, support/resistance)
- Valuation analysis (PEG, industry comparison, DCF)
- Overall investment score (0-100)
- Risk factors and opportunities
- Data sources and disclaimers
JSON Structure
The analysis generates a JSON file with the following structure:
interface AnalysisSummary {
score: number; // Investment score (0-100)
keywords: string[]; // Key investment keywords
opinion: string; // One-sentence investment opinion
action: string; // One-sentence action recommendation
operate: 'BUY' | 'HOLD' | 'SELL' | 'AVOID'; // Recommended operation
}
Analysis Result
interface AnalysisResult {
stockCode: string; // Normalized stock code
stockName: string; // Stock name
outDir: string; // Output directory path
markdownPath: string; // Report file path
jsonPath: string; // JSON file path
success: boolean; // Whether analysis succeeded
error?: string; // Error message (if success is false)
}
Error Handling
When analysis fails, the success field is false and error contains details:
const result = await analyzeStock({
stockCode: '000001.SZ',
stockName: '平安银行',
});
if (!result.success) {
console.error('Analysis failed:', result.error);
// Possible errors:
// - "Markdown report file not generated: ./reports/000001.SZ.md"
// - "JSON summary file not generated: ./reports/000001.SZ.json"
}
Common failure reasons:
- Output file not generated: AI Agent failed to create the report files
- Network issues: Failed to fetch financial data
- Model errors: API rate limits or model unavailability
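Since most of these failures are transient (network hiccups, rate limits), a simple retry wrapper can help. A hedged sketch, with the retry count and backoff delay as arbitrary choices (withRetry is not part of the sa API; pass a callback wrapping analyzeStock from 'sa' as analyze):

```typescript
interface RetryableResult {
  success: boolean;
  error?: string;
}

// Illustrative retry wrapper for transient failures. Not part of the
// sa API; the retry count and backoff delay are arbitrary choices.
async function withRetry<T extends RetryableResult>(
  analyze: () => Promise<T>,
  retries = 2,
  delayMs = 5000
): Promise<T> {
  let result = await analyze();
  for (let attempt = 0; !result.success && attempt < retries; attempt++) {
    // Fixed delay between attempts before retrying.
    await new Promise((r) => setTimeout(r, delayMs));
    result = await analyze();
  }
  return result;
}
```

Usage would look like `await withRetry(() => analyzeStock({ stockCode: '000001.SZ', stockName: '平安银行' }))`.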
File Naming
Output files can be named using different strategies:
| Mode | Example | Description |
|------|---------|-------------|
| code | 0700-hk.md | Stock code only |
| name | tencent.md | Stock name (lowercase) |
| code_date | 0700-hk_20260314.md | Code with date |
| name_date | tencent_20260314.md | Name with date |
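The naming modes above can be read as a small mapping from mode to filename. An illustrative sketch; the library's exact slug rules are internal (note the table's code example drops a leading zero), so this version simply lowercases the code and replaces the dot with a hyphen:

```typescript
type NamingMode = 'code' | 'name' | 'code_date' | 'name_date';

// Illustrative sketch of the naming modes above; not the library's
// actual implementation.
function reportFileName(
  mode: NamingMode,
  stockCode: string,
  stockName: string,
  date: Date = new Date()
): string {
  const codeSlug = stockCode.toLowerCase().replace('.', '-');
  const nameSlug = stockName.toLowerCase();
  // Date stamp in YYYYMMDD form, matching the table's examples.
  const ymd =
    `${date.getFullYear()}` +
    `${String(date.getMonth() + 1).padStart(2, '0')}` +
    `${String(date.getDate()).padStart(2, '0')}`;
  switch (mode) {
    case 'code':
      return `${codeSlug}.md`;
    case 'name':
      return `${nameSlug}.md`;
    case 'code_date':
      return `${codeSlug}_${ymd}.md`;
    case 'name_date':
      return `${nameSlug}_${ymd}.md`;
  }
}
```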
Requirements
- Node.js >= 20.0.0
- TypeScript >= 5.7.0
Cluster Mode (Distributed Processing)
The Stock Analyzer supports distributed processing with a master-worker architecture using HTTP polling. Workers poll the master for tasks, execute analysis, and report results back. This enables parallel processing across multiple worker nodes.
Architecture
┌─────────────────────────────────────────────────────────────┐
│ Master Node │
│ ┌─────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ TaskLoader │─►│ StateManager │◄─│ TaskDistributor │ │
│ │ (CSV load) │ │ (In-memory) │ │ (HTTP API) │ │
│ └─────────────┘ └──────┬───────┘ └────────┬─────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────────┐ │
│ │ HTTP API (Port 3000) │ │
│ │ /api/v1/task/poll │ │
│ │ /api/v1/task/complete │ │
│ │ /api/v1/heartbeat │ │
│ │ /api/v1/shutdown │ │
│ └─────────────────────────────┘ │
│ │ │
│ Output Dir: ./results │
│ ├── {stockCode}.md │
│ ├── {stockCode}.json │
│ └── {stockCode}-state.json │
└──────────────────────────────┬──────────────────────────────┘
│ HTTP Polling (2s interval)
▼
┌──────────────────────────────────────────────────────────────┐
│ Worker Node(s) │
│ ┌─────────┐ ┌───────────┐ ┌───────────────────────┐ │
│ │ Poller │───►│ Executor │───►│ HttpReporter │ │
│ │ (poll) │ │ (analyze) │ │ (send results) │ │
│ └─────────┘ └───────────┘ └───────────────────────┘ │
│ │
│ Output Dir: ./worker-tmp (temporary, for intermediate files)│
└──────────────────────────────────────────────────────────────┘
Features
- Horizontal Scaling: Multiple workers process tasks in parallel
- Dynamic Scaling: Adjust worker count via HTTP API
- Task Recovery: Failed tasks are automatically recovered
- Heartbeat Monitoring: Master tracks worker health
- Graceful Shutdown: Workers notify master before exiting
- File Transfer: Workers send analysis results (Markdown + JSON) to Master via HTTP
Configuration
Master and Worker use separate configuration files for independent deployment.
See master.toml.example and worker.toml.example for complete configuration options.
master.toml:
[general]
# Master saves received files here:
# - {stockCode}.md : Markdown report
# - {stockCode}.json : Analysis summary JSON
# - {stockCode}-state.json: Task execution state
output_dir = "./results"
default_model = "glm-4.7"
[logging]
level = "info"
error_log = "./logs/master-errors.log"
[cluster]
enabled = true
role = "master"
[cluster.polling]
interval_ms = 2000
timeout_ms = 30000
[master]
api_port = 3000
task_batch_size = 100
queue_threshold = 500
launch_type = "subprocess"
initial_workers = 0
shuffle_tasks = false
[master.subprocess]
command = "node"
args = ["dist/cli/worker.js"]
worker.toml:
[general]
# Temporary directory for intermediate files during analysis
# Worker reads generated files and sends content to Master
output_dir = "./worker-tmp"
default_model = "glm-4.7"
[logging]
level = "info"
error_log = "./logs/worker-errors.log"
[cluster]
enabled = true
role = "worker"
[cluster.polling]
interval_ms = 2000
timeout_ms = 30000
[cluster.master_endpoint]
host = "localhost"
port = 3000
protocol = "http"
[worker]
max_tasks = 20
heartbeat_interval_ms = 30000
heartbeat_timeout_ms = 120000
# Workspace configuration (inline in worker section)
workspace_dir = "/workspace/run"
# workspace_template_dir = "/workspace/templates/default"
workspace_cleanup = "failure"
setting_sources = ["user", "project"]
Worker Workspace Configuration
Each worker task operates in an isolated workspace directory, providing clean separation between tasks.
Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| workspace_dir | string | /workspace/run | Base directory for task workspaces |
| workspace_template_dir | string | (none) | Template directory to copy when initializing workspace |
| workspace_cleanup | string | no | Cleanup strategy: yes, no, or failure |
| setting_sources | string[] | ["user", "project"] | Session setting sources priority |
Workspace Directory Structure
Each task creates a unique workspace using its task ID:
{workspace_dir}/
├── task-abc123/ # Workspace for task-abc123
│ ├── reports/ # Generated analysis reports
│ ├── logs/ # Task-specific logs
│ └── ...
├── task-def456/ # Workspace for task-def456
│ └── ...
└── task-ghi789/ # Workspace for task-ghi789
    └── ...
Cleanup Strategies
| Strategy | Behavior |
|----------|----------|
| yes | Always clean up workspace after task completes (success or failure) |
| no | Never clean up workspace, useful for debugging |
| failure | Only clean up on success, preserve workspace for failed tasks |
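The table above can be read as a small decision function. An illustrative sketch (shouldCleanup is not part of the library):

```typescript
type CleanupStrategy = 'yes' | 'no' | 'failure';

// Illustrative decision helper for the cleanup strategies above.
// Note the naming: "failure" means the workspace is *kept* on failure
// (cleaned up only when the task succeeded).
function shouldCleanup(strategy: CleanupStrategy, taskSucceeded: boolean): boolean {
  switch (strategy) {
    case 'yes':
      return true;           // always remove the workspace
    case 'no':
      return false;          // always keep it (useful for debugging)
    case 'failure':
      return taskSucceeded;  // preserve only failed workspaces
  }
}
```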
Workspace Templates
You can specify a template directory that will be copied to the workspace when initializing:
[worker]
workspace_dir = "/workspace/run"
workspace_template_dir = "/workspace/templates/default"
workspace_cleanup = "failure"
setting_sources = ["user", "project"]
This is useful for:
- Pre-configured settings files
- Required dependencies or tools
- Standard directory structure
- Configuration templates
Running Master Node
Start the master node:
npm run master -- --config ./master.toml --csv ./tasks.csv
The master provides an HTTP API on port 3000:
| Endpoint | Method | Description |
|----------|--------|-------------|
| /status | GET | Get system status |
| /stats | GET | Get detailed statistics |
| /workers | GET | List all workers |
| /worker/start | POST | Start a new worker |
| /worker/stop/:id | POST | Stop a specific worker |
| /worker/scale | POST | Scale workers (body: {"count": 5}) |
| /task/pause | POST | Pause task distribution |
| /task/resume | POST | Resume task distribution |
| /task/load | POST | Load tasks from CSV (body: {"path": "./tasks.csv"}) |
| /recovery/timeout | POST | Recover timed-out tasks |
| /health | GET | Health check |
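The monitoring endpoints can also be called programmatically with the built-in fetch (Node.js >= 20, per Requirements). A hedged sketch; masterUrl, getStatus, and scaleWorkers are illustrative names, and the response shapes are not documented here, so results are left untyped:

```typescript
// Illustrative client for the master's HTTP API listed above.
interface MasterEndpoint {
  host: string;
  port: number;
  protocol: 'http' | 'https';
}

// Build a full URL from the [cluster.master_endpoint]-style fields.
function masterUrl(ep: MasterEndpoint, path: string): string {
  return `${ep.protocol}://${ep.host}:${ep.port}${path}`;
}

async function getStatus(ep: MasterEndpoint): Promise<unknown> {
  const res = await fetch(masterUrl(ep, '/status'));
  if (!res.ok) throw new Error(`Master returned ${res.status}`);
  return res.json();
}

// Mirrors: curl -X POST .../worker/scale -d '{"count": 3}'
async function scaleWorkers(ep: MasterEndpoint, count: number): Promise<void> {
  await fetch(masterUrl(ep, '/worker/scale'), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ count }),
  });
}
```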
Running Worker Node
Start a worker node:
npm run worker -- --config ./worker.toml
Workers will:
- Poll Master via HTTP API for new tasks
- Execute stock analysis using the Agent SDK
- Read generated files and send content to Master via HTTP
- Send periodic heartbeats to Master
- Gracefully shut down after processing max_tasks tasks
Task CSV Format
Create a CSV file with stock tasks:
stock_code,stock_name,model
00700.HK,腾讯控股,glm-4.7
09988.HK,阿里巴巴,glm-4.7
03690.HK,美团,glm-4.7
Example Workflow
# Terminal 1: Start Master
npm run master -- --config ./master.toml --csv ./stocks.csv
# Terminal 2: Start Worker(s)
npm run worker -- --config ./worker.toml
# Terminal 3: Monitor via API
curl http://localhost:3000/status
curl http://localhost:3000/stats
# Scale workers
curl -X POST http://localhost:3000/worker/scale -H "Content-Type: application/json" -d '{"count": 3}'
Worker Lifecycle
- Startup: Worker starts polling Master for tasks
- Processing: Receives task, executes analysis, sends results to Master
- Heartbeat: Sends heartbeat every 30 seconds via HTTP
- Shutdown: After max_tasks tasks, sends a shutdown notification and exits
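The lifecycle above can be sketched as a poll loop. The endpoint paths come from the architecture diagram; the task payload shape and the injectable fetchFn parameter are assumptions for illustration, not the actual worker implementation:

```typescript
// Illustrative worker poll loop; the real worker is started via
// `npm run worker`. Task shape is an assumption.
interface Task {
  taskId: string;
  stockCode: string;
  stockName: string;
}

async function workerLoop(
  masterBase: string,                // e.g. "http://localhost:3000"
  maxTasks: number,
  fetchFn: typeof fetch = fetch,     // injectable for testing
  intervalMs = 2000                  // matches [cluster.polling] interval_ms
): Promise<void> {
  let processed = 0;
  while (processed < maxTasks) {
    const res = await fetchFn(`${masterBase}/api/v1/task/poll`, { method: 'POST' });
    const task: Task | null = res.ok ? await res.json() : null;
    if (!task) {
      // No task available: wait one polling interval, then re-poll.
      await new Promise((r) => setTimeout(r, intervalMs));
      continue;
    }
    // ... run the analysis here, then report the result back:
    await fetchFn(`${masterBase}/api/v1/task/complete`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ taskId: task.taskId, success: true }),
    });
    processed++;
  }
  // Notify the master before exiting (graceful shutdown).
  await fetchFn(`${masterBase}/api/v1/shutdown`, { method: 'POST' });
}
```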
Task Recovery
When a worker crashes or times out, its assigned tasks are automatically recovered:
- Master monitors worker heartbeats
- If no heartbeat arrives within heartbeat_timeout_ms, the worker is marked as timed out
- The worker's assigned tasks are reset to pending status
- Other workers can pick up the recovered tasks
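The recovery steps can be sketched as a periodic timeout check. The worker and task state shapes here are assumptions for illustration; the master's actual state model is internal:

```typescript
// Illustrative sketch of the heartbeat-timeout recovery above.
interface WorkerState {
  id: string;
  lastHeartbeat: number;   // epoch ms of the last heartbeat received
  assignedTasks: string[];
}

type TaskStatus = Map<string, 'pending' | 'assigned' | 'done'>;

function recoverTimedOut(
  workers: WorkerState[],
  tasks: TaskStatus,
  timeoutMs: number,       // heartbeat_timeout_ms
  now: number = Date.now()
): string[] {
  const recovered: string[] = [];
  for (const w of workers) {
    if (now - w.lastHeartbeat > timeoutMs) {
      for (const taskId of w.assignedTasks) {
        tasks.set(taskId, 'pending'); // reset so another worker can take it
        recovered.push(taskId);
      }
      w.assignedTasks = [];
    }
  }
  return recovered;
}
```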
License
MIT
