vallm-analyzer v1.1.0
# VulnerabilityAnalyzer (vallm)
Use LLMs such as OpenAI, Anthropic Claude, and Google Gemini to quickly diagnose vulnerabilities within your project's code.
## Default usage [English]
You can install vallm with the command below:

```shell
npm i -g vallm-analyzer
```

vallm is a CLI tool that uses an LLM to scan your project for vulnerabilities.

```shell
$ vallm .
```

The command above analyzes all code files in the project at the current location for potential risks. When analyzing a file, it also considers the files it depends on.
Before using it, register an API key for the provider you want to use. The CLI accepts the provider name followed by the key; omitting the provider assumes OpenAI.

```shell
$ vallm --ai openai YOUR_OPENAI_KEY
$ vallm --ai anthropic YOUR_ANTHROPIC_KEY
$ vallm --ai gemini YOUR_GEMINI_KEY
```

Note: Some files, such as images and binary files, are not currently supported. Please check before running a scan.
## Options

The following options are available:
- `-h, --help`: Provide help and explanations.

  ```shell
  $ vallm -h
  ```

- `-V, --version`: Check the version of vallm.

  ```shell
  $ vallm -V
  ```

- `-a, --ai`: Register an API key. Provide one argument for OpenAI, or two arguments (provider key) for other providers.

  ```shell
  $ vallm -a sk-openai...
  $ vallm -a anthropic sk-anthropic...
  ```

- `-s, --search`: Register your Google Search API key and CX.

  ```shell
  $ vallm -s your_api_key your_cx
  ```

- `-p, --provider`: Select the active provider (`openai`, `anthropic`, or `gemini`). The model automatically switches to a sensible default if the current choice is incompatible.

- `-m, --model`: Set the LLM model to use. The default is `gpt-4o`. You can also set the provider inline with `provider:model`.

  ```shell
  $ vallm -m gpt-4o
  $ vallm -m anthropic:claude-3-5-sonnet-20241022
  ```

- `-r, --reasoning-effort`: When using a reasoning model, set its reasoning effort. Must be one of `low`, `medium`, or `high`. The default is `medium`.

  ```shell
  $ vallm -r low
  ```

- `-l, --limit`: Maximum number of files to diagnose when diagnosing vulnerabilities. The default is `64`.

  ```shell
  $ vallm -l 128
  ```

- `-c, --check`: Whether to save inline annotated snippets for vulnerable code. Accepts `y` or `n`; the default is `n`. When enabled, the annotated snippets are written to `vallm-reports/<relative-path>.notes.txt` instead of modifying your source files.

  ```shell
  $ vallm -c y
  ```

- `-k, --skip-cve-search`: Whether to skip CVE scanning. Accepts `y` or `n`; the default is `y`.

  ```shell
  $ vallm -k y
  ```

- `-i, --info`: Retrieve the values currently set in vallm.

  ```shell
  $ vallm --info
  ```
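As an illustration of the annotation layout used by `-c y`, the one-line helper below maps a source file to its report file. The helper name is hypothetical; only the `vallm-reports/<relative-path>.notes.txt` layout comes from the option description.

```javascript
// Hypothetical helper illustrating the documented report layout
// (vallm-reports/<relative-path>.notes.txt); not vallm's own code.
function reportPathFor(relativePath) {
  return `vallm-reports/${relativePath}.notes.txt`;
}

// e.g. reportPathFor("src/eval.php") → "vallm-reports/src/eval.php.notes.txt"
```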
## CVE Intelligence Sources

vallm gathers vulnerability intelligence from multiple sources:

- Google Custom Search (requires both an API key and a CX). The tool asks an LLM to craft precise queries, fetches the most relevant results, and summarises them.
- npm audit advisories. When a `package.json` is present, vallm automatically runs `npm audit --json` to capture advisories. This runs alongside Google search when credentials are available, and acts as a fallback when they are not.
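To make the `npm audit --json` source concrete, the sketch below summarises the `vulnerabilities` map that npm v7+ emits into one-line findings. This is our own minimal reading of the audit report format, not vallm's actual parsing code.

```javascript
// Minimal sketch (not vallm's implementation): turn an npm v7+
// `npm audit --json` report into advisory one-liners.
function summariseAudit(report) {
  const findings = [];
  for (const [name, vuln] of Object.entries(report.vulnerabilities ?? {})) {
    for (const via of vuln.via ?? []) {
      // `via` entries are objects for direct advisories and plain
      // strings for transitive causes; keep only the objects.
      if (typeof via === "object") {
        findings.push(`${name} (${vuln.severity}): ${via.title}`);
      }
    }
  }
  return findings;
}
```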
## Example Usage

### When search is turned off

vallm analyzes the project and reports any issues it finds, as shown below.
```text
PS C:\Users\user\prob> vallm .
🔍 Scanning Project: C:\Users\shkh0\prob
✔ ✅ Found 17 target files.
✔ ✅ Found 0 package files.
⠋ Analyzing project files...
🔍 Attempting OpenAI call (1/3)...
⠇ Analyzing (1/17): eval.php...
✅ eval.php analysis complete.
🔍 Attempting OpenAI call (1/3)...
⠙ Analyzing (2/17): phpinfo.php...
✅ phpinfo.php analysis complete.
🔍 Attempting OpenAI call (1/3)...
...
⠋ Analyzing (17/17): index.php...
✅ index.php analysis complete.
✔ ✅ All files analyzed.
🚀 Scanning Completed!

🔹 Summary of Findings:
📂 C:\Users\user\prob\eval.php
🔎 Findings:
- Lines 3-5: The use of JavaScript eval() with untrusted data (code) without any sanitization or validation may lead to Remote Code Execution (RCE).
...
```

If annotations are enabled, the model writes only the relevant snippets and comments to `vallm-reports/eval.php.notes.txt`, keeping your source file untouched.

### When search is turned on
vallm analyzes the versions of external modules and libraries in the detected package files and searches Google for known vulnerabilities. At the end, it outputs a brief report based on the vulnerabilities found.
```text
PS C:\Users\user\prob> vallm .
...
⠋ Analyzing CVEs...
Searching for [email protected] vulnerability
...
🔹 Summary of Findings:
...
🛑 CVE Report:
...
```
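The per-dependency search step suggests queries are built from the packages listed in `package.json`. A minimal sketch of that idea (our assumption; the function and the exact query wording are not taken from vallm's source):

```javascript
// Hypothetical sketch: build "name@version vulnerability" search
// queries from a parsed package.json object. Not vallm's actual code.
function buildCveQueries(pkg) {
  const deps = { ...(pkg.dependencies ?? {}), ...(pkg.devDependencies ?? {}) };
  return Object.entries(deps).map(
    // Strip a leading ^ or ~ so the query names a concrete version.
    ([name, range]) => `${name}@${range.replace(/^[\^~]/, "")} vulnerability`
  );
}
```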
## Local Build & Test Run

Requires `tsc`.

```shell
$ npm run build
$ node bin/index.js
```

For Korean documentation, see readmeKr.md.
