circle-ir-ai
v2.5.6
LLM-enhanced SAST analysis built on circle-ir
LLM-enhanced static analysis built on circle-ir. Provides CLI tools, AI-powered vulnerability verification, and comprehensive security analysis.
Features
- Multi-Language Support: Java, JavaScript/TypeScript, Python, Rust, HTML
- LLM Discovery: Discover vulnerabilities beyond static patterns via LLM
- Cross-File Analysis: Track taint across file boundaries
- AI Skills Analysis: Security analysis for AI agent skills (MCP servers)
- Security Scanning: OWASP Top 10 mapping with trend tracking
- Dead Code Detection: Find unreachable code via call graph analysis
- Secret Scanning: Detect secrets in code and Git history
- Health Scoring: Calculate overall codebase health
Installation
npm install circle-ir-ai circle-ir
CLI
For CLI usage, install cognium-ai:
npm install -g cognium-ai
cognium-ai scan ./src # Security scan (OWASP Top 10)
cognium-ai scan ./src --language java # Filter by language
cognium-ai trust ./src # Trust score (27 passes)
cognium-ai quality ./src # Quality score (5 passes)
cognium-ai dead-code ./src # Dead code detection
cognium-ai secrets ./src # Secret scanning
cognium-ai health ./src # Codebase health score
Options: --format json|summary|sarif, -o <file>, --llm, -q
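With --format json -o sast-results.json, the report can be post-processed in a few lines. A minimal sketch, assuming the report exposes a findings array whose entries carry a severity field (the exact schema is not documented in this README; countBySeverity is a hypothetical helper, not part of the circle-ir-ai API):

```javascript
// Count findings per severity in a scan report (hypothetical schema).
function countBySeverity(findings) {
  const counts = {};
  for (const f of findings) {
    counts[f.severity] = (counts[f.severity] ?? 0) + 1;
  }
  return counts;
}

// Inline sample standing in for JSON.parse(fs.readFileSync('sast-results.json', 'utf8')):
const report = {
  findings: [
    { rule: 'sql-injection', severity: 'high' },
    { rule: 'xss', severity: 'medium' },
    { rule: 'xss', severity: 'medium' },
  ],
};
console.log(countBySeverity(report.findings)); // { high: 1, medium: 2 }
```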
LLM CLI Flags
cognium-ai scan ./src --llm # Enable LLM enrichment
cognium-ai scan ./src --llm --llm-model gpt-4o # Override model
cognium-ai scan ./src --llm --llm-base-url <url> # Override API base URL
| Flag | Description | Default |
|------|-------------|---------|
| --llm | Enable LLM enrichment | off |
| --llm-base-url <url> | LLM API base URL (OpenAI-compatible) | LLM_BASE_URL env var |
| --llm-api-key <key> | LLM API key | LLM_API_KEY env var |
| --llm-model <model> | LLM model name | LLM_ENRICHMENT_MODEL env var |
| --language <lang> | Filter by language (java, typescript, python, etc.) | all |
CLI flags override environment variables. Any OpenAI-compatible API works (OpenAI, Azure, Ollama, Together, GitHub Models, etc.).
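The precedence rule can be sketched as a tiny resolver. This is a hypothetical helper illustrating the documented behavior (flag wins, then environment variable), not part of the circle-ir-ai API:

```javascript
// Resolve LLM settings with the documented precedence: CLI flag > env var.
function resolveLlmConfig(flags, env) {
  return {
    baseUrl: flags.llmBaseUrl ?? env.LLM_BASE_URL,
    apiKey: flags.llmApiKey ?? env.LLM_API_KEY,
    model: flags.llmModel ?? env.LLM_ENRICHMENT_MODEL,
  };
}

const cfg = resolveLlmConfig(
  { llmModel: 'gpt-4o' }, // as if --llm-model gpt-4o was passed
  { LLM_BASE_URL: 'http://localhost:4000/v1', LLM_ENRICHMENT_MODEL: 'cognium/gpt-oss-120b' }
);
console.log(cfg.model);   // flag overrides the env var
console.log(cfg.baseUrl); // env var is used when no flag is given
```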
Programmatic API
import { initAnalyzer } from 'circle-ir';
import { analyzeFile } from 'circle-ir-ai';
await initAnalyzer();
const result = await analyzeFile('MyClass.java', code, {
language: 'java',
enableEnrichment: true,
enableVerification: true,
});
console.log('Findings:', result.findings);
Multi-file analysis
import { analyzeProjectTwoPhase } from 'circle-ir-ai';
const result = await analyzeProjectTwoPhase(files, 'java', {
enableEnrichment: true,
parallelPhase1: true,
maxConcurrency: 10,
});
console.log('Cross-file taint flows:', result.crossFileFlows);
Security scanning, dead code, secrets, health
import { scanDirectory, detectDeadCode, scanForSecrets, calculateHealthScore } from 'circle-ir-ai';
const scan = await scanDirectory('/path/to/project');
const deadCode = await detectDeadCode({ target: '/path/to/project' });
const secrets = await scanForSecrets('/path/to/project', { scanHistory: true });
const health = await calculateHealthScore('/path/to/project');
AI skill analysis
import { analyzeSkillBundle } from 'circle-ir-ai';
const result = await analyzeSkillBundle('./my-skill', {
enableCrossArtifact: true,
enableVerification: true,
});
console.log(`Trust Score: ${result.score}`);
console.log(`Findings: ${result.findings.length}`);
LLM Configuration
Configure via environment variables or CLI flags (flags take precedence):
# Environment variables
export LLM_API_KEY=your-api-key
export LLM_BASE_URL=http://localhost:4000/v1
export LLM_ENRICHMENT_MODEL=cognium/gpt-oss-120b # default (FREE)
# Or use CLI flags directly
cognium-ai scan ./src --llm \
--llm-base-url https://api.openai.com/v1 \
--llm-api-key sk-... \
  --llm-model gpt-4o
Provider Examples
| Provider | --llm-base-url | --llm-model |
|----------|-------------------|---------------|
| Cognium (free) | http://localhost:4000/v1 | cognium/gpt-oss-120b |
| OpenAI | https://api.openai.com/v1 | gpt-4o |
| GitHub Models (free) | https://models.github.ai/inference | openai/gpt-5 |
| Azure OpenAI | https://YOUR.openai.azure.com/... | gpt-4o |
| Ollama (local) | http://localhost:11434/v1 | llama3 |
| Together AI | https://api.together.xyz/v1 | meta-llama/Llama-3-70b |
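As a fully local, no-cost example, the Ollama row above maps onto the documented flags and environment variables like this (a sketch; it assumes an Ollama server on its default port with llama3 pulled, and uses a placeholder API key, which Ollama's OpenAI-compatible endpoint does not validate):

```shell
# Point cognium-ai at a local Ollama server (values from the provider table above)
export LLM_BASE_URL=http://localhost:11434/v1
export LLM_API_KEY=ollama   # placeholder; not checked by Ollama
cognium-ai scan ./src --llm --llm-model llama3
```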
CI/CD with GitHub Actions
Run LLM-enhanced SAST in CI using GitHub Models free tier -- no API keys needed:
name: SAST Scan
on: [pull_request]
permissions:
contents: read
models: read
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "22"
- name: Install cognium-ai
run: |
npm install cognium-ai@latest
ln -s node_modules/circle-ir/dist/wasm wasm
- name: LLM-enhanced SAST scan
env:
LLM_BASE_URL: https://models.github.ai/inference
LLM_API_KEY: ${{ github.token }}
LLM_ENRICHMENT_MODEL: openai/gpt-4o-mini
run: |
npx cognium-ai scan ./src --llm --format json -o sast-results.json
GitHub Models free tier limits: openai/gpt-5 = 50 req/day, openai/gpt-4o-mini = 150 req/day. Auth uses the built-in GITHUB_TOKEN with models: read permission.
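To make the workflow fail a pull request on serious findings, a small gate script can follow the scan step. A sketch, again assuming a findings array with a severity field in the JSON report (shouldFailBuild is a hypothetical helper; adjust to the actual schema emitted by --format json):

```javascript
// Fail CI when the report contains findings at a blocking severity (hypothetical schema).
function shouldFailBuild(report, blocking = ['critical', 'high']) {
  return report.findings.some((f) => blocking.includes(f.severity));
}

// In CI you would load the real report:
//   const report = JSON.parse(require('node:fs').readFileSync('sast-results.json', 'utf8'));
//   if (shouldFailBuild(report)) process.exit(1);
const sample = { findings: [{ severity: 'high' }, { severity: 'low' }] };
console.log(shouldFailBuild(sample)); // true
```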
Benchmark Results
| Benchmark | Score |
|-----------|-------|
| OWASP Benchmark (Java, 1415 tests) | 100% |
| Juliet Test Suite (243 tests) | 100% |
| SecuriBench Micro | 97.7% TPR, 6.7% FPR |
| CWE-Bench-Java (120 CVEs) | 50.0% static, 81.7% +LLM Discovery |
| NodeJS Synthetic (25 tests) | 93.8% TPR |
| CWE-Bench-Rust (50 tests) | 86.5% TPR, 7.7% FPR |
CWE-Bench-Java reference: CodeQL 22.5%, IRIS+GPT-4 45.8%.
Top 100 Secure Repos (LLM Discovery on 21 production repos, 967K LOC)
| Model | LLM Findings | Notes |
|-------|--------------|-------|
| claude-opus-latest | 29 | Zero JSON failures, most reliable |
| grok-code-latest | 29 | Best Rust detection |
| gpt-oss-120b (FREE) | ~27 | 90% detection at zero cost |
Top findings: Jenkins (9), HuggingFace (4), Tauri (6-7), TiKV (3), Deno (3), Cargo (2-5).
Supported Languages
| Language | Frameworks |
|----------|------------|
| Java | Spring, JAX-RS, Servlet API |
| JavaScript/TypeScript | Express, Fastify, Node.js |
| Python | Flask, Django, FastAPI |
| Rust | Actix-web, Rocket, Axum |
| HTML | XSS detection in templates |
Related Packages
- circle-ir: Core SAST library (open source, MIT)
License
PolyForm Noncommercial License 1.0.0 -- free for noncommercial use. Commercial use requires a separate license. See LICENSE for details.
