
cognium-ai

v1.5.0

AI-powered static analysis CLI with LLM-enhanced vulnerability detection. Built on circle-ir and circle-ir-ai.

Installation

npm install -g cognium-ai

Commands

cognium-ai scan <path>         # Scan for security vulnerabilities (LLM-enhanced)
cognium-ai dead-code <path>    # Detect dead/unreachable code
cognium-ai secrets <path>      # Scan for secrets and credentials
cognium-ai health <path>       # Calculate codebase health score
cognium-ai skill <path>        # Analyze AI skill bundle security
cognium-ai init                # Create configuration file

Scan Options

cognium-ai scan src/                              # LLM-enhanced scan (default)
cognium-ai scan src/ --no-llm                     # Static-only (no LLM)
cognium-ai scan src/ --llm-discovery              # LLM discovery mode (deeper)
cognium-ai scan src/ -f json -o results.json      # JSON output to file
cognium-ai scan src/ -f sarif -o results.sarif    # SARIF output
cognium-ai scan src/ --severity high              # High+ severity only
cognium-ai scan src/ --exclude-tests              # Skip test files
cognium-ai scan src/ --threads 20                 # Custom parallelism

LLM Configuration

Configure via CLI flags or environment variables (flags take precedence):

# CLI flags (override env vars)
cognium-ai scan src/ \
  --llm-base-url https://api.openai.com/v1 \
  --llm-api-key sk-... \
  --llm-model gpt-4o

# Environment variables (used as defaults)
export LLM_API_KEY=your-api-key
export LLM_BASE_URL=http://localhost:4000/v1
export LLM_ENRICHMENT_MODEL=cognium/gpt-oss-120b

| Flag | Description | Default |
|------|-------------|---------|
| --llm-base-url <url> | LLM API base URL (OpenAI-compatible) | http://localhost:4000/v1 |
| --llm-api-key <key> | LLM API key | LLM_API_KEY env var |
| --llm-model <model> | LLM model name | cognium/gpt-oss-120b |
| --no-llm | Disable LLM, static analysis only | off |
| --llm-discovery | Enable deeper LLM discovery mode | off |
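The "flags take precedence" rule can be sketched in plain shell (variable names here are illustrative, not part of the tool; only `LLM_ENRICHMENT_MODEL` comes from the docs above):

```shell
# Env var provides the default; a flag value, when present, wins.
export LLM_ENRICHMENT_MODEL="cognium/gpt-oss-120b"   # environment default
flag_model="gpt-4o"                                  # value passed via --llm-model
model="${flag_model:-$LLM_ENRICHMENT_MODEL}"         # flag takes precedence when set
echo "$model"
```

With the flag unset (`flag_model=""`), the same expansion falls back to the environment default.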

Provider Examples

| Provider | --llm-base-url | --llm-model |
|----------|----------------|-------------|
| Cognium (free) | http://localhost:4000/v1 | cognium/gpt-oss-120b |
| OpenAI | https://api.openai.com/v1 | gpt-4o |
| GitHub Models (free) | https://models.github.ai/inference | openai/gpt-5 |
| Azure OpenAI | https://YOUR.openai.azure.com/... | gpt-4o |
| Ollama (local) | http://localhost:11434/v1 | llama3 |
| Together AI | https://api.together.xyz/v1 | meta-llama/Llama-3-70b |

CI/CD with GitHub Actions

Run LLM-enhanced SAST in CI using GitHub Models free tier -- no API keys to configure:

name: Security Scan
on: [pull_request]

permissions:
  contents: read
  models: read

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "22"

      - run: npm install -g cognium-ai

      - name: LLM-enhanced SAST scan
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          cognium-ai scan ./src \
            --llm-base-url https://models.github.ai/inference \
            --llm-api-key "$GITHUB_TOKEN" \
            --llm-model openai/gpt-5 \
            -f sarif -o results.sarif

Free tier limits: openai/gpt-5 = 50 req/day, openai/gpt-4o-mini = 150 req/day. Uses the built-in GITHUB_TOKEN with models: read permission.
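If you want findings to show up in the repository's Security tab, a SARIF upload step could follow the scan. This sketch uses GitHub's stock upload-sarif action and additionally requires the `security-events: write` permission on the job:

```yaml
      - name: Upload SARIF to code scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```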

Supported Languages

| Language | Extensions | Frameworks |
|----------|------------|------------|
| Java | .java | Spring, JAX-RS, Servlet API |
| JavaScript | .js, .mjs | Express, Fastify, Node.js |
| TypeScript | .ts, .tsx | Express, Fastify, Node.js |
| Python | .py | Flask, Django, FastAPI |
| Rust | .rs | Actix-web, Rocket, Axum |
| Bash | .sh, .bash | Shell scripts |

Benchmark Results

| Benchmark | Score |
|-----------|-------|
| OWASP Benchmark (Java, 1415 tests) | 100% |
| Juliet Test Suite (156 tests) | 100% |
| SecuriBench Micro | 97.7% TPR, 6.7% FPR |
| CWE-Bench-Java (120 CVEs) | 42.5% static, 81.7% with LLM Discovery |
| NodeJS Synthetic (25 tests) | 100% TPR |
| CWE-Bench-Rust (30 tests) | 77.8% TPR, 0% FPR |
| Bash Synthetic (31 tests) | 68.2% TPR, 0% FPR |

CWE-Bench-Java reference: CodeQL 22.5%, IRIS+GPT-4 45.8%.

Related Packages

  • circle-ir -- Core SAST library (open source, MIT)
  • circle-ir-ai -- LLM enrichment layer and programmatic API

License

MIT