@bpsecops/ai-guard
v0.2.6
AI Guard
AI Guard stops sensitive data from being accidentally sent to AI tools. It works in three places — your browser, your terminal, and your file system.
Browser extension — intercepts messages before you send them on ChatGPT, Claude, Gemini, and more. If sensitive data is detected, it blocks submission and shows you exactly what it found.
CLI tool — wraps your AI command-line tools (claude, chatgpt, gemini, etc.) and scans your prompt before it reaches the model. If something sensitive is found, it warns you and blocks the command.
File protection — prevents AI tools from reading files that contain secrets. For all tools (aider, cursor, claude, etc.), AI Guard scans any files you pass as arguments before the tool launches. For Claude Code specifically, it also intercepts file reads mid-session, blocks dangerous bash commands (env, printenv, git log -p, direct reads of ~/.aws/credentials, ~/.ssh/id_rsa, and more), and scans the output of every bash command before it reaches the model — catching secrets that would otherwise slip through via git diff, docker inspect, kubectl get secret, and similar commands.
What it catches
- Credentials — API keys, tokens, passwords, private keys (AWS, GitHub, Stripe, Slack, and more)
- PII — Social Security Numbers, email addresses, phone numbers, passport numbers
- Financial — Credit card numbers, bank account and routing numbers
- Health — Medical record numbers, diagnoses, medication names
- Code secrets — Hardcoded passwords, .env files, Django SECRET_KEY
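Detection for these categories is pattern-based. As a rough illustration, here is a minimal shell sketch of the idea — the AWS access key regex below is a widely known public convention, not AI Guard's actual rule set:

```shell
# Illustrative only: a simple grep for the public AWS access key ID format
# (AKIA followed by 16 uppercase letters or digits). AI Guard's real
# detectors cover many more patterns and contexts.
printf 'aws_access_key_id=AKIAIOSFODNN7EXAMPLE\n' \
  | grep -qE 'AKIA[0-9A-Z]{16}' && echo "credential detected"
```

Real detection also considers context (key names, file types, entropy), which a single regex cannot capture.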
Browser Extension
The extension watches what you type on AI websites. When you hit Send, it scans your message first. If something sensitive is found, it blocks the submission and shows you exactly what it caught — you can then edit your message or choose to send anyway.
Supported sites: ChatGPT, Claude, Gemini, Copilot, Perplexity, Brave Leo
Warning card — blocked submission with details on what was found:

Popup — toggle protection on/off or pause monitoring:

Dashboard — track detections, blocked submissions, and overrides:

Settings — configure detection categories and actions per category:

Custom keywords — add your own terms to watch for (plain text or regex):

Install
Download the latest release (no Node.js required)
- Go to the Releases page
- Download the zip for your browser:
ai-guard-chrome-vX.X.X.zip — Chrome, Edge, or Brave
ai-guard-firefox-vX.X.X.zip — Firefox
- Unzip the file
Chrome / Edge
- Go to chrome://extensions
- Enable Developer mode (top right)
- Click Load unpacked → select the unzipped folder
Brave
- Go to brave://extensions
- Enable Developer mode (top right)
- Click Load unpacked → select the unzipped folder
Firefox
- Go to about:debugging → This Firefox
- Click Load Temporary Add-on
- Select manifest.json inside the unzipped folder
Build from source (requires Node.js 18+)
git clone https://github.com/bpSecOps/ai-guard.git
cd ai-guard
npm install
npm run build
Then load the .output/chrome-mv3 folder (Chrome/Brave) or .output/firefox-mv2/manifest.json (Firefox) as above.
CLI
The CLI tool scans prompts before they reach your AI tool. Add a shell wrapper once and it works automatically in the background — you never have to think about it.
Install
npm install -g @bpsecops/ai-guard
Requires Node.js 18+
Set up shell wrappers
Run the one-time setup command:
ai-guard setup
This installs shell wrappers for claude, chatgpt, gemini, copilot, cursor, and aider into your ~/.zshrc and ~/.bashrc. Then reload your shell:
source ~/.zshrc   # or: source ~/.bashrc
How it works
From this point on, just use your AI tools normally. AI Guard runs silently in the background.
claude "explain this function"
# ✅ Clean — claude launches normally
claude "my Stripe key is sk_live_abc123..."
# 🚫 Blocked — AI Guard warns you before claude launches
If something is detected, you'll see a warning card showing exactly what was found. Fix your message and try again.
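Under the hood, each wrapper is a small shell function that scans the prompt and only launches the real tool on a clean result. A minimal sketch of the pattern — the actual wrappers are generated by ai-guard setup, and scan_prompt here is a stand-in for the real scanner:

```shell
# Hypothetical sketch of a generated wrapper. 'scan_prompt' stands in for the
# real AI Guard scanner; here it only flags a Stripe-style live key.
scan_prompt() { ! printf '%s' "$1" | grep -qE 'sk_live_[A-Za-z0-9]+'; }

claude() {
  if scan_prompt "$*"; then
    command claude "$@"          # clean: launch the real tool
  else
    echo "🚫 Blocked by AI Guard" >&2
    return 1
  fi
}
```

Because the wrapper shadows the command name, it runs transparently every time you invoke the tool.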
File Protection
AI Guard protects you from accidentally feeding secrets into AI tools through three mechanisms — one that fires when you pass files at launch, one that intercepts file reads mid-session, and one that scans bash command output before it reaches the model.
Launch-time file scanning (all tools)
The shell wrappers installed by ai-guard setup automatically scan any files you pass as arguments before the AI tool launches. This works for every supported tool:
aider .env secrets.py
# 🚫 Blocked — AI Guard found credentials in .env before aider launched
cursor --read config/database.yml
# 🚫 Blocked — AI Guard found a password before cursor launched
claude --file deployment-notes.txt
# ✅ Clean — claude launches normally
If a file contains critical secrets, the command is blocked outright. Warnings let the command through but tell you what was found.
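The launch-time check amounts to scanning each file argument before the tool starts. A self-contained sketch of that step — the secret patterns here are illustrative, not AI Guard's actual rules:

```shell
# Hypothetical sketch of launch-time file scanning. Patterns are illustrative.
scan_files() {
  for f in "$@"; do
    [ -f "$f" ] || continue                        # skip flags and non-files
    if grep -qE '(AKIA[0-9A-Z]{16}|sk_live_[A-Za-z0-9]+)' "$f"; then
      echo "🚫 credentials found in $f" >&2
      return 1                                     # non-zero: abort the launch
    fi
  done
}
```

A wrapper would call scan_files with the tool's arguments and refuse to exec the tool when it returns non-zero.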
Mid-session file protection (Claude Code)
Claude Code has a hook system that lets AI Guard intercept file reads that happen during a conversation — not just at launch. When Claude tries to read a .env file, private key, or any file containing credentials mid-session, AI Guard blocks the read, tells you exactly what it found, and asks if you want to proceed.
# Inside a Claude Code session:
# You: "read my .env file"
# 🚫 AI Guard blocked this read — .env contains: Generic credential in key=value,
# AWS Access Key. Do you want to allow it?
Bash command protection (Claude Code)
AI Guard also intercepts bash commands that could expose secrets — both before they run and after. This catches the cases that file scanning misses entirely.
Blocked before execution:
| Command | Why |
|---|---|
| env / printenv | Dumps all environment variables including API keys |
| cat ~/.aws/credentials | AWS credentials file |
| cat ~/.ssh/id_rsa | Private SSH key |
| cat ~/.netrc / ~/.npmrc / ~/.pypirc | Auth tokens |
| cat ~/.docker/config.json | Docker registry credentials |
| git log -p / git log --patch | Git history may contain previously-committed secrets |
# Claude tries to run: env
# 🚫 AI Guard blocked this command. Running 'env' dumps all environment
# variables, which likely include API keys and tokens.
# Claude tries to run: git log -p
# 🚫 AI Guard blocked 'git log -p'. Showing full git diffs may expose
# secrets that were previously committed and later removed.
Scanned after execution:
Every bash command output is scanned before Claude sees it. If secrets appear in the output — from git diff, docker inspect, kubectl get secret, or anything else — Claude is warned not to repeat the values verbatim.
# Claude runs: docker inspect my-container
# ⚠️ AI Guard WARNING: This command output contains high-risk sensitive
# data (Generic credential in key=value). Do NOT reproduce these values.
If AI Guard blocks a command you actually need, tell Claude to proceed and it will be allowed through once.
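The pre-execution check is essentially a deny-list match on the command line. A rough sketch — the entries mirror the table above, but the real matching is more thorough:

```shell
# Hypothetical sketch of a pre-execution deny-list check.
# Prints the reason and returns 0 if blocked; returns 1 if allowed.
blocked_reason() {
  case "$1" in
    env|printenv)                     echo "dumps all environment variables" ;;
    "cat "*".aws/credentials")        echo "AWS credentials file" ;;
    "cat "*".ssh/id_rsa")             echo "private SSH key" ;;
    "git log -p"*|"git log --patch"*) echo "git history may contain secrets" ;;
    *) return 1 ;;
  esac
}

blocked_reason "env"   # → dumps all environment variables
```

The post-execution scan is the complement: instead of matching the command, it runs the detection engine over the command's output before the model sees it.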
Setup
All three protections are installed by a single command:
ai-guard setup
This installs the shell wrappers for all tools and registers all three Claude Code hooks automatically.
MCP Server
AI Guard includes an MCP (Model Context Protocol) server that adds file protection to any MCP-compatible AI tool — Cursor, Zed, Continue, and others.
The MCP server exposes two tools:
read_file — scans the file before returning its contents. Critical findings are blocked outright; high/medium findings return the content with a warning prepended.
read_file_force — reads without scanning. Use this only when the user has explicitly confirmed they want to proceed.
Install
Build the MCP server:
npm install -g github:bpSecOps/ai-guard
ai-guard build:mcp   # or: node mcp/build.mjs from the repo
This produces dist/ai-guard-mcp.mjs.
Claude Code
claude mcp add ai-guard node /path/to/ai-guard/dist/ai-guard-mcp.mjs --scope user
Note: Claude Code users also get the PreToolUse hook installed by ai-guard setup, which intercepts native file reads at the OS level and cannot be bypassed by the model. The MCP server adds a second layer on top.
Cursor
Add to your Cursor MCP config (~/.cursor/mcp.json or the project-level .cursor/mcp.json):
{
"mcpServers": {
"ai-guard": {
"command": "node",
"args": ["/path/to/ai-guard/dist/ai-guard-mcp.mjs"]
}
}
}
Zed
Add to your Zed settings.json under "context_servers":
{
"context_servers": {
"ai-guard": {
"command": {
"path": "node",
"args": ["/path/to/ai-guard/dist/ai-guard-mcp.mjs"]
}
}
}
}
Continue
Add to your Continue config.json under "mcpServers":
{
"mcpServers": [
{
"name": "ai-guard",
"command": "node",
"args": ["/path/to/ai-guard/dist/ai-guard-mcp.mjs"]
}
]
}
How it works
When a supported tool calls read_file, AI Guard scans the file using the same detection engine as the browser extension and CLI. If something critical is found, the read is blocked and the tool is instructed to tell you what was found and ask whether to proceed. If you confirm, the tool can call read_file_force with the same path to allow the read.
# AI asks to read .env
read_file("/home/user/project/.env")
🔴 AI Guard blocked this read.
The file `.env` contains critical sensitive data:
[!!!] AWS Access Key ID — AK********LE
[!!!] .env file content — AW********DE
Tell the user what was found and ask if they want to proceed.
If they confirm, use `read_file_force` with the same path to allow the read.
Development
npm install
npm test # Run test suite
npm run dev # Chrome dev server with hot reload
npm run build # Build for Chrome
npm run build:firefox # Build for Firefox
npm run build:cli # Build CLI binary
npm run build:mcp # Build MCP server
npm run build:all # Build everything