Sparrow MCP - Security Code Analysis Server
A Model Context Protocol (MCP) server that automatically analyzes security vulnerabilities in your code and generates secure code alternatives. This server integrates with Cursor IDE to provide real-time security analysis and secure code generation capabilities.
✨ Features
- File Security Analysis: Automatically analyze security vulnerabilities in Java code
- AI-Powered Briefing: Get easy-to-understand explanations of analysis results
- Secure Code Generation: Automatically generate secure code that fixes discovered vulnerabilities
- Diff Generation: Visualize differences between original and secure code
- Real-time Status Monitoring: Track analysis progress in real-time
- Multi-file Analysis: Analyze multiple files, folders, or ZIP archives at once
- Progress Tracking: Monitor analysis progress with detailed notifications
🛠️ Technology Stack
- TypeScript: Main development language
- MCP SDK: Model Context Protocol implementation
- Ollama: LLM integration (gemma2:27b model)
- Sparrow OnDemand API: SAST security analysis
- Winston: Logging
📦 Installation
Prerequisites
- Node.js 18+ and npm
- Sparrow OnDemand API key
- (Optional) Ollama for LLM features
Install from npm
npm install -g @sparrowai/sparrow-mcp
Or install locally in your project:
npm install @sparrowai/sparrow-mcp
Verify Installation
After installation, verify that the package is installed correctly by checking if the server file exists:
For local installation:
# Check if the package is installed
ls node_modules/@sparrowai/sparrow-mcp/dist/server.js
# Or verify the package version
npm list @sparrowai/sparrow-mcp
For global installation:
# Check if the package is installed globally
npm list -g @sparrowai/sparrow-mcp
# Find the exact global installation path
npm root -g
# Verify the server file exists (path may vary by system)
# Windows: %APPDATA%\npm\node_modules\@sparrowai\sparrow-mcp\dist\server.js
# (typically: C:\Users\<username>\AppData\Roaming\npm\node_modules\@sparrowai\sparrow-mcp\dist\server.js)
# macOS/Linux: /usr/local/lib/node_modules/@sparrowai/sparrow-mcp/dist/server.js
# (or ~/.npm-global/lib/node_modules/@sparrowai/sparrow-mcp/dist/server.js if using custom prefix)
Note: This package is an MCP server, not a CLI tool. It doesn't provide a direct command-line executable. The server is meant to be run by Cursor IDE through the MCP configuration (see the Cursor IDE Configuration section below).
⚙️ Configuration
Sparrow OnDemand API Setup
- Sign up for the Sparrow OnDemand service
- Obtain your API key from the dashboard
- Set the `SPARROW_API_KEY` environment variable in your MCP configuration (see below)
🔧 Cursor IDE Configuration
Configure the MCP server in Cursor IDE. The server requires environment variables to be set in the MCP configuration:
Location: ~/.cursor/mcp.json (macOS/Linux) or %APPDATA%\Cursor\mcp.json (Windows)
For Local Installation
If you installed the package locally in your project:
{
"mcpServers": {
"sparrow-mcp": {
"command": "node",
"args": ["./node_modules/@sparrowai/sparrow-mcp/dist/server.js"],
"env": {
"SPARROW_API_KEY": "your-ondemand-token",
"SPARROW_API_URL": "https://ondemand.sparrowcloud.ai",
"OLLAMA_BASE_URL": "http://localhost:11434",
"OLLAMA_MODEL": "gpt-oss:20b",
"NODE_ENV": "development"
}
}
}
}
For Global Installation
If you installed the package globally with npm install -g, use the absolute path. First, find your global installation path:
npm root -g
Then use the full path in your configuration:
Windows example:
{
"mcpServers": {
"sparrow-mcp": {
"command": "node",
"args": ["C:\\Users\\<username>\\AppData\\Roaming\\npm\\node_modules\\@sparrowai\\sparrow-mcp\\dist\\server.js"],
"env": {
"SPARROW_API_KEY": "your-ondemand-token",
"SPARROW_API_URL": "https://ondemand.sparrowcloud.ai",
"OLLAMA_BASE_URL": "http://localhost:11434",
"OLLAMA_MODEL": "gpt-oss:20b",
"NODE_ENV": "development"
}
}
}
}
macOS/Linux example:
{
"mcpServers": {
"sparrow-mcp": {
"command": "node",
"args": ["/usr/local/lib/node_modules/@sparrowai/sparrow-mcp/dist/server.js"],
"env": {
"SPARROW_API_KEY": "your-ondemand-token",
"SPARROW_API_URL": "https://ondemand.sparrowcloud.ai",
"OLLAMA_BASE_URL": "http://localhost:11434",
"OLLAMA_MODEL": "gpt-oss:20b",
"NODE_ENV": "development"
}
}
}
}
Note:
- If you're using a local development build, replace `./node_modules/@sparrowai/sparrow-mcp/dist/server.js` with `dist/server.js` and add `"cwd"` pointing to your project root (see the example below).
- Replace `OLLAMA_BASE_URL` with your actual Ollama server URL (e.g., `http://192.168.30.169:11434` for remote servers).
- Replace `"your-ondemand-token"` with your actual Sparrow OnDemand API key.
- For global installation, replace `<username>` in the Windows path with your actual username, or use `npm root -g` to get the exact path.
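For example, a local development build entry might look like the following. This is a minimal sketch: the `cwd` path is a placeholder for your project root, and it assumes Cursor honors a `cwd` field on the server entry as described in the note above.
{
  "mcpServers": {
    "sparrow-mcp": {
      "command": "node",
      "args": ["dist/server.js"],
      "cwd": "/path/to/your/sparrow-mcp-checkout",
      "env": {
        "SPARROW_API_KEY": "your-ondemand-token",
        "SPARROW_API_URL": "https://ondemand.sparrowcloud.ai",
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "gpt-oss:20b",
        "NODE_ENV": "development"
      }
    }
  }
}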
Restart Cursor
After updating the MCP configuration:
- Save the `mcp.json` file
- Restart Cursor IDE completely
- The MCP server should connect automatically
Verify Connection
To verify that the MCP server is connected:
- Open Cursor IDE
- Check the MCP server status in the status bar or settings
- Try using one of the MCP tools (see Usage section below)
🚀 Usage
Available MCP Tools
The server provides the following MCP tools:
1. analyze_file_security
Analyze a single file for security vulnerabilities.
Parameters:
- `fileContent` (string): The content of the file to analyze
- `fileName` (string): The name of the file
Returns:
- `analysisId`: Unique identifier for the analysis
- `status`: Current analysis status
Example Usage in Cursor:
Analyze this file for security vulnerabilities:
[file content]
2. get_analysis_status
Check the progress of an ongoing analysis.
Parameters:
- `analysisId` (string): The analysis ID returned from `analyze_file_security`
Returns:
- `status`: Current status (pending, processing, completed, failed)
- `progress`: Progress percentage (0-100)
- `message`: Status message
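Under the hood, Cursor issues these as standard MCP `tools/call` requests. As an illustration only (the `id` numbers, file content, and `analysisId` below are placeholders, not real output), the two tools above roughly correspond to requests like:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_file_security",
    "arguments": {
      "fileContent": "public class Demo { /* ... */ }",
      "fileName": "Demo.java"
    }
  }
}
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_analysis_status",
    "arguments": { "analysisId": "example-analysis-id" }
  }
}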
3. get_analysis_results
Get detailed results from a completed analysis.
Parameters:
- `analysisId` (string): The analysis ID
- `fileContent` (string): The original file content
Returns:
- `vulnerabilities`: List of discovered vulnerabilities
- `analysisBrief`: AI-generated analysis briefing
- `secureCode`: Generated secure code
- `secureCodeBrief`: Explanation of the secure code
- `diff`: Unified diff showing changes
4. analyze_files_security
Analyze multiple files at once. Files are automatically zipped and analyzed.
Parameters:
- `filePaths` (array of strings): List of file paths to analyze (absolute or relative to the working directory)
- `zipFileName` (optional string): Name for the generated ZIP file
Returns:
- `analysisId`: Unique identifier for the analysis
- `status`: Current analysis status
- `filePathCount`: Number of files being analyzed
5. analyze_zip_security
Analyze a ZIP file for security vulnerabilities.
Parameters:
- `zipFilePath` (string): Path to the ZIP file to analyze
- `zipFileName` (optional string): Name for the ZIP file
Returns:
- `analysisId`: Unique identifier for the analysis
- `status`: Current analysis status
6. analyze_folder_security
Analyze an entire folder for security vulnerabilities. The folder is automatically zipped before analysis.
Parameters:
- `folderPath` (string): Path to the folder to analyze
- `zipFileName` (optional string): Name for the generated ZIP file
Returns:
- `analysisId`: Unique identifier for the analysis
- `status`: Current analysis status
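For reference, the argument shapes for the three bundling tools above (files, ZIP, and folder) look like this. The paths and archive names are placeholders, not required values:
{
  "name": "analyze_files_security",
  "arguments": {
    "filePaths": ["src/main/java/App.java", "src/main/java/UserDao.java"],
    "zipFileName": "sources.zip"
  }
}
{
  "name": "analyze_zip_security",
  "arguments": { "zipFilePath": "./build/sources.zip" }
}
{
  "name": "analyze_folder_security",
  "arguments": {
    "folderPath": "./src/main/java",
    "zipFileName": "project.zip"
  }
}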
7. track_analysis_progress
Track the progress of an analysis with real-time notifications.
Parameters:
- `analysisId` (string): The analysis ID
- `intervalMs` (optional number): Status check interval in milliseconds (default: 3000)
Returns:
- Progress notifications sent via MCP progress notifications
- Final status when analysis completes
Analysis Stages:
- `INIT`: Initialization
- `READY`: Ready to start
- `PRE_PROCESS`: Pre-processing
- `ANALYSIS`: Analysis in progress (progress percentage available)
- `POST_PROCESS`: Post-processing
- `COMPLETE`: Analysis complete
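As a rough illustration (again with placeholder values), a progress-tracking call that checks status every 5 seconds would pass arguments like:
{
  "name": "track_analysis_progress",
  "arguments": {
    "analysisId": "example-analysis-id",
    "intervalMs": 5000
  }
}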
Typical Workflow
- Write Code: Write your Java code in Cursor
- Request Analysis: Use the `analyze_file_security` tool to request analysis
- Track Progress: Use `track_analysis_progress` to monitor the analysis
- Get Results: Use `get_analysis_results` to retrieve detailed results
- Review: Review the vulnerabilities, secure code, and diff
- Apply Changes: Apply the suggested secure code changes
🐛 Troubleshooting
Common Issues
1. MCP Server Not Connecting
Symptoms: The MCP server doesn't appear in Cursor or shows as disconnected.
Solutions:
- Verify the `mcp.json` file is in the correct location
- Check that all environment variables are set correctly
- Ensure Node.js is installed and accessible in PATH
- Restart Cursor IDE completely
- Check Cursor's MCP server logs for error messages
2. Ollama Connection Failed
Symptoms: LLM features are not working, errors about Ollama connection.
Solutions:
- Verify Ollama is installed and running: `ollama serve`
- Check that `OLLAMA_BASE_URL` is correct (default: `http://localhost:11434`)
- Verify the model is downloaded: `ollama list`
- Check firewall settings if using a remote Ollama instance
3. Sparrow API Errors
Symptoms: Analysis requests fail with API errors.
Solutions:
- Verify `SPARROW_API_KEY` is correct and not expired
- Check that `SPARROW_API_URL` is correct
- Verify your API key has sufficient permissions
- Check network connectivity to Sparrow API
4. Analysis Timeout
Symptoms: Analysis takes too long or times out.
Solutions:
- Large files may take longer to analyze
- Check network connectivity
- Verify Sparrow API service status
- Consider analyzing smaller files or folders separately
Logging
The server logs to both console and log files:
- Console: Real-time logs during development
- Log Files:
  - `logs/combined.log`: All logs
  - `logs/error.log`: Error logs only
To enable more detailed logging, add `"LOG_LEVEL": "debug"` to the `"env"` section in your MCP configuration.
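For example, the `env` block from the configurations above would then look like this (all values other than `LOG_LEVEL` are the same placeholders used earlier):
"env": {
  "SPARROW_API_KEY": "your-ondemand-token",
  "SPARROW_API_URL": "https://ondemand.sparrowcloud.ai",
  "OLLAMA_BASE_URL": "http://localhost:11434",
  "OLLAMA_MODEL": "gpt-oss:20b",
  "NODE_ENV": "development",
  "LOG_LEVEL": "debug"
}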
📝 License
ISC
