botguard-cli v0.1.2
BotGuard CLI
Security scanning for AI agents from your terminal. Tests for prompt injection, jailbreaks, data extraction, and 70+ attack scenarios aligned to the OWASP LLM Top 10.
Install
npm install -g botguard-cli
# or run directly
npx botguard-cli scan
Quick Start
# Run a security scan
botguard scan --api-key YOUR_KEY --endpoint https://my-agent.com/chat --description "My chatbot"
# Scan with a system prompt file
botguard scan --api-key YOUR_KEY --system-prompt ./prompts/system.md --fail-threshold 80
# Check remaining credits
botguard credits --api-key YOUR_KEY
# Generate a config file
botguard init
Config File
Create a .botguard.yml to avoid passing flags every time:
apiKey: ${BOTGUARD_API_KEY}
scan:
  endpoint: https://my-agent.com/chat
  description: "Customer support chatbot"
  categories: [jailbreak, prompt_injection, data_extraction]
  failThreshold: 80
  format: table
Then just run:
botguard scan
Options
| Flag | Description | Default |
|---|---|---|
| --api-key | API key (or BOTGUARD_API_KEY env var) | — |
| --endpoint | Agent API endpoint URL | — |
| --description | Agent description | — |
| --system-prompt | System prompt (text or file path) | — |
| --mode | sync or async | sync |
| --categories | Comma-separated attack categories | all |
| --attack-count | Number of attacks | — |
| --fail-threshold | Exit 1 if score below this (0-100) | 0 |
| --format | table, json, or summary | table |
| --config | Path to .botguard.yml | auto-detect |
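As the table notes, the --api-key flag can be dropped when BOTGUARD_API_KEY is set in the environment. A minimal sketch (the key value is a placeholder, not a real key):

```shell
# Supply the API key via the environment instead of --api-key.
# "bg_example_key" is a placeholder value for illustration only.
export BOTGUARD_API_KEY="bg_example_key"

# A scan then needs no --api-key flag, e.g.:
#   botguard scan --endpoint https://my-agent.com/chat --description "My chatbot"

# Confirm the variable is set and non-empty before running in CI:
echo "key set: ${BOTGUARD_API_KEY:+yes}"
```

This is the usual pattern for CI secrets: inject the key as a masked environment variable rather than echoing it into a command line, where it could leak into logs.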
CI/CD Usage
The CLI returns exit code 1 when the score is below --fail-threshold, making it CI-friendly:
botguard scan --fail-threshold 80 --format summary
For GitHub Actions, use botguardai/security-scan, which wraps this CLI with PR comments and check status.
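The exit-code contract can be wired into any CI system. A minimal sketch of the gating logic; run_scan here is a stand-in that simulates the CLI's exit code (a real run needs an API key and endpoint):

```shell
# CI gating sketch. run_scan stands in for:
#   botguard scan --fail-threshold 80 --format summary
# which exits 1 when the security score falls below the threshold.
run_scan() { return "$1"; }

if run_scan 0; then          # 0 = score met the threshold
  echo "security gate passed"
else
  echo "security gate failed"
  exit 1
fi
```

Because most CI runners fail a step on any non-zero exit status, no extra configuration is needed: the scan command itself is the gate.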
Get Your API Key
- Sign up at botguard.dev
- Go to Account > API Keys
- Create a CI/CD key
Related
- BotGuard — Automated red-teaming & real-time firewall for AI agents
- GitHub Action — CI/CD security scanning
- Attack Library — 229+ open-source LLM attack templates
License
MIT
