cognium-ai (v1.5.0)
AI-powered static analysis CLI with LLM-enhanced vulnerability detection. Built on circle-ir and circle-ir-ai.
Installation
npm install -g cognium-ai

Commands
cognium-ai scan <path>        # Scan for security vulnerabilities (LLM-enhanced)
cognium-ai dead-code <path>   # Detect dead/unreachable code
cognium-ai secrets <path>     # Scan for secrets and credentials
cognium-ai health <path>      # Calculate codebase health score
cognium-ai skill <path>       # Analyze AI skill bundle security
cognium-ai init               # Create configuration file

Scan Options
cognium-ai scan src/                           # LLM-enhanced scan (default)
cognium-ai scan src/ --no-llm                  # Static-only (no LLM)
cognium-ai scan src/ --llm-discovery           # LLM discovery mode (deeper)
cognium-ai scan src/ -f json -o results.json   # JSON output to file
cognium-ai scan src/ -f sarif -o results.sarif # SARIF output
cognium-ai scan src/ --severity high           # High+ severity only
cognium-ai scan src/ --exclude-tests           # Skip test files
cognium-ai scan src/ --threads 20              # Custom parallelism

LLM Configuration
Configure via CLI flags or environment variables (flags take precedence):
# CLI flags (override env vars)
cognium-ai scan src/ \
  --llm-base-url https://api.openai.com/v1 \
  --llm-api-key sk-... \
  --llm-model gpt-4o
# Environment variables (used as defaults)
export LLM_API_KEY=your-api-key
export LLM_BASE_URL=http://localhost:4000/v1
export LLM_ENRICHMENT_MODEL=cognium/gpt-oss-120b

| Flag | Description | Default |
|------|-------------|---------|
| --llm-base-url <url> | LLM API base URL (OpenAI-compatible) | http://localhost:4000/v1 |
| --llm-api-key <key> | LLM API key | LLM_API_KEY env var |
| --llm-model <model> | LLM model name | cognium/gpt-oss-120b |
| --no-llm | Disable LLM, static analysis only | off |
| --llm-discovery | Enable deeper LLM discovery mode | off |
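The precedence described above (CLI flag first, then environment variable, then built-in default) can be sketched as follows. `resolve_llm_setting` is a hypothetical helper for illustration only, not part of cognium-ai's API:

```python
import os

DEFAULT_BASE_URL = "http://localhost:4000/v1"  # the documented built-in default

def resolve_llm_setting(flag_value, env_name, default):
    """CLI flag wins; otherwise the env var; otherwise the built-in default."""
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_name, default)

os.environ["LLM_BASE_URL"] = "https://api.openai.com/v1"
# A flag overrides the exported env var for this run...
print(resolve_llm_setting("http://localhost:11434/v1", "LLM_BASE_URL", DEFAULT_BASE_URL))
# ...and without a flag, the env var overrides the built-in default.
print(resolve_llm_setting(None, "LLM_BASE_URL", DEFAULT_BASE_URL))
```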
Provider Examples
| Provider | --llm-base-url | --llm-model |
|----------|-------------------|---------------|
| Cognium (free) | http://localhost:4000/v1 | cognium/gpt-oss-120b |
| OpenAI | https://api.openai.com/v1 | gpt-4o |
| GitHub Models (free) | https://models.github.ai/inference | openai/gpt-5 |
| Azure OpenAI | https://YOUR.openai.azure.com/... | gpt-4o |
| Ollama (local) | http://localhost:11434/v1 | llama3 |
| Together AI | https://api.together.xyz/v1 | meta-llama/Llama-3-70b |
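With any of these providers, `-f sarif` writes a report in the standard SARIF 2.1.0 format, so it can be post-processed with ordinary tooling. A minimal sketch that tallies findings by severity level (the inline document is a stand-in for a real `results.sarif`; `count_by_level` is our illustrative helper, not a cognium-ai API):

```python
import json
from collections import Counter

# Minimal SARIF 2.1.0 document; a real one comes from:
#   cognium-ai scan src/ -f sarif -o results.sarif
sarif = json.loads("""
{
  "version": "2.1.0",
  "runs": [
    { "results": [
      { "ruleId": "sql-injection", "level": "error" },
      { "ruleId": "hardcoded-secret", "level": "error" },
      { "ruleId": "dead-code", "level": "note" }
    ] }
  ]
}
""")

def count_by_level(doc):
    """Tally SARIF results by level; the SARIF spec's default level is 'warning'."""
    return Counter(
        result.get("level", "warning")
        for run in doc.get("runs", [])
        for result in run.get("results", [])
    )

print(count_by_level(sarif))  # Counter({'error': 2, 'note': 1})
```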
CI/CD with GitHub Actions
Run LLM-enhanced SAST in CI using GitHub Models free tier -- no API keys to configure:
name: Security Scan
on: [pull_request]
permissions:
  contents: read
  models: read
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "22"
      - run: npm install -g cognium-ai
      - name: LLM-enhanced SAST scan
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          cognium-ai scan ./src \
            --llm-base-url https://models.github.ai/inference \
            --llm-api-key "$GITHUB_TOKEN" \
            --llm-model openai/gpt-5 \
            -f sarif -o results.sarif

Free tier limits: openai/gpt-5 = 50 req/day, openai/gpt-4o-mini = 150 req/day. Uses the built-in GITHUB_TOKEN with models: read permission.
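If you also want findings surfaced in the repository's Security tab, the workflow above can optionally be extended with GitHub's SARIF upload action. This step is our addition, not part of cognium-ai, and it requires granting the extra `security-events: write` permission in the `permissions:` block:

```yaml
      # Requires: security-events: write under `permissions:`
      - name: Upload SARIF to code scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```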
Supported Languages
| Language | Extensions | Frameworks |
|----------|------------|------------|
| Java | .java | Spring, JAX-RS, Servlet API |
| JavaScript | .js, .mjs | Express, Fastify, Node.js |
| TypeScript | .ts, .tsx | Express, Fastify, Node.js |
| Python | .py | Flask, Django, FastAPI |
| Rust | .rs | Actix-web, Rocket, Axum |
| Bash | .sh, .bash | Shell scripts |
Benchmark Results
| Benchmark | Score |
|-----------|-------|
| OWASP Benchmark (Java, 1415 tests) | 100% |
| Juliet Test Suite (156 tests) | 100% |
| SecuriBench Micro | 97.7% TPR, 6.7% FPR |
| CWE-Bench-Java (120 CVEs) | 42.5% static, 81.7% +LLM Discovery |
| NodeJS Synthetic (25 tests) | 100% TPR |
| CWE-Bench-Rust (30 tests) | 77.8% TPR, 0% FPR |
| Bash Synthetic (31 tests) | 68.2% TPR, 0% FPR |
CWE-Bench-Java reference: CodeQL 22.5%, IRIS+GPT-4 45.8%.
Related Packages
- circle-ir -- Core SAST library (open source, MIT)
- circle-ir-ai -- LLM enrichment layer and programmatic API
License
MIT
