
@bpsecops/ai-guard

v0.2.6 · 243 downloads

AI Guard

AI Guard stops sensitive data from being accidentally sent to AI tools. It works in three places — your browser, your terminal, and your file system.

Browser extension — intercepts messages before you send them on ChatGPT, Claude, Gemini, and more. If sensitive data is detected, it blocks submission and shows you exactly what it found.

CLI tool — wraps your AI command-line tools (claude, chatgpt, gemini, etc.) and scans your prompt before it reaches the model. If something sensitive is found, it warns you and blocks the command.

File protection — prevents AI tools from reading files that contain secrets. For all tools (aider, cursor, claude, etc.), AI Guard scans any files you pass as arguments before the tool launches. For Claude Code specifically, it also intercepts file reads mid-session, blocks dangerous bash commands (env, printenv, git log -p, direct reads of ~/.aws/credentials, ~/.ssh/id_rsa, and more), and scans the output of every bash command before it reaches the model — catching secrets that would otherwise slip through via git diff, docker inspect, kubectl get secret, and similar commands.


What it catches

  • Credentials — API keys, tokens, passwords, private keys (AWS, GitHub, Stripe, Slack, and more)
  • PII — Social Security Numbers, email addresses, phone numbers, passport numbers
  • Financial — Credit card numbers, bank account and routing numbers
  • Health — Medical record numbers, diagnoses, medication names
  • Code secrets — Hardcoded passwords, .env files, Django SECRET_KEY
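Many of these detectors come down to pattern matching on well-known credential shapes. As a rough illustration (a sketch, not AI Guard's actual engine), an AWS access key ID is "AKIA" followed by 16 uppercase alphanumerics, so a single regex is enough to flag one in a prompt or file:

```shell
# Sketch only: AI Guard's real detection engine is not shown here.
# The sample uses AWS's documented example key, which is not a real credential.
prompt='aws_access_key_id = AKIAIOSFODNN7EXAMPLE'

if printf '%s\n' "$prompt" | grep -Eq 'AKIA[0-9A-Z]{16}'; then
  echo 'blocked: AWS Access Key ID detected'
else
  echo 'clean'
fi
```

Real detectors layer many such patterns plus contextual checks (key=value shapes, entropy, file types), but the blocking decision reduces to the same match-then-block flow.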

Browser Extension

The extension watches what you type on AI websites. When you hit Send, it scans your message first. If something sensitive is found, it blocks the submission and shows you exactly what it caught — you can then edit your message or choose to send anyway.

Supported sites: ChatGPT, Claude, Gemini, Copilot, Perplexity, Brave Leo

Warning card — blocked submission with details on what was found:

AI Guard warning card

Popup — toggle protection on/off or pause monitoring:

AI Guard popup

Dashboard — track detections, blocked submissions, and overrides:

AI Guard dashboard

Settings — configure detection categories and actions per category:

AI Guard settings

Custom keywords — add your own terms to watch for (plain text or regex):

AI Guard custom keywords

Install

Download the latest release (no Node.js required)

  1. Go to the Releases page
  2. Download the zip for your browser:
    • ai-guard-chrome-vX.X.X.zip — Chrome, Edge, or Brave
    • ai-guard-firefox-vX.X.X.zip — Firefox
  3. Unzip the file

Chrome / Edge

  1. Go to chrome://extensions
  2. Enable Developer mode (top right)
  3. Click Load unpacked → select the unzipped folder

Brave

  1. Go to brave://extensions
  2. Enable Developer mode (top right)
  3. Click Load unpacked → select the unzipped folder

Firefox

  1. Go to about:debugging and select This Firefox
  2. Click Load Temporary Add-on
  3. Select manifest.json inside the unzipped folder

Build from source (requires Node.js 18+)

git clone https://github.com/bpSecOps/ai-guard.git
cd ai-guard
npm install
npm run build

Then load the .output/chrome-mv3 folder (Chrome/Brave) or .output/firefox-mv2/manifest.json (Firefox) as above.


CLI

The CLI tool scans prompts before they reach your AI tool. Add a shell wrapper once and it works automatically in the background — you never have to think about it.

Install

npm install -g @bpsecops/ai-guard

Requires Node.js 18+

Set up shell wrappers

Run the one-time setup command:

ai-guard setup

This installs shell wrappers for claude, chatgpt, gemini, copilot, cursor, and aider into your ~/.zshrc and ~/.bashrc. Then reload your shell:

source ~/.zshrc   # or source ~/.bashrc
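Conceptually, each wrapper is a small shell function that shadows the tool's name, scans the arguments first, and only delegates to the real binary if the scan passes. The sketch below illustrates the shape; the actual function written by `ai-guard setup` differs, and `scan_args` here is a crude stand-in for AI Guard's scanner, not its real interface:

```shell
# Hypothetical sketch of a wrapper. scan_args stands in for AI Guard's
# detection engine: exit 0 = clean, non-zero = blocked.
scan_args() {
  case "$*" in
    *sk_live_*|*AKIA*) return 1 ;;   # stand-in patterns for real detectors
    *) return 0 ;;
  esac
}

claude() {
  if scan_args "$@"; then
    command claude "$@"              # clean: run the real binary
  else
    echo "ai-guard: blocked, sensitive data detected" >&2
    return 1
  fi
}
```

Because the function shadows the command name, the protection is transparent: typing `claude "..."` in a wrapped shell always goes through the scan first.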

How it works

From this point on, just use your AI tools normally. AI Guard runs silently in the background.

claude "explain this function"
# ✅ Clean — claude launches normally

claude "my Stripe key is sk_live_abc123..."
# 🚫 Blocked — AI Guard warns you before claude launches

If something is detected you'll see a warning card showing exactly what was found. Fix your message and try again.


File Protection

AI Guard protects you from accidentally feeding secrets into AI tools through three mechanisms — one that fires when you pass files at launch, one that intercepts file reads mid-session, and one that scans bash command output before it reaches the model.

Launch-time file scanning (all tools)

The shell wrappers installed by ai-guard setup automatically scan any files you pass as arguments before the AI tool launches. This works for every supported tool:

aider .env secrets.py
# 🚫 Blocked — AI Guard found credentials in .env before aider launched

cursor --read config/database.yml
# 🚫 Blocked — AI Guard found a password before cursor launched

claude --file deployment-notes.txt
# ✅ Clean — claude launches normally

If a file contains critical secrets it is blocked outright. Warnings let the command through but tell you what was found.

Mid-session file protection (Claude Code)

Claude Code has a hook system that lets AI Guard intercept file reads that happen during a conversation — not just at launch. When Claude tries to read a .env file, private key, or any file containing credentials mid-session, AI Guard blocks the read, tells you exactly what it found, and asks if you want to proceed.

# Inside a Claude Code session:
# You: "read my .env file"
# 🚫 AI Guard blocked this read — .env contains: Generic credential in key=value,
#    AWS Access Key. Do you want to allow it?

Bash command protection (Claude Code)

AI Guard also intercepts bash commands that could expose secrets — both before they run and after. This catches the cases that file scanning misses entirely.

Blocked before execution:

| Command | Why |
|---|---|
| env / printenv | Dumps all environment variables including API keys |
| cat ~/.aws/credentials | AWS credentials file |
| cat ~/.ssh/id_rsa | Private SSH key |
| cat ~/.netrc / ~/.npmrc / ~/.pypirc | Auth tokens |
| cat ~/.docker/config.json | Docker registry credentials |
| git log -p / git log --patch | Git history may contain previously-committed secrets |

# Claude tries to run: env
# 🚫 AI Guard blocked this command. Running 'env' dumps all environment
#    variables, which likely include API keys and tokens.

# Claude tries to run: git log -p
# 🚫 AI Guard blocked 'git log -p'. Showing full git diffs may expose
#    secrets that were previously committed and later removed.

Scanned after execution:

Every bash command output is scanned before Claude sees it. If secrets appear in the output — from git diff, docker inspect, kubectl get secret, or anything else — Claude is warned not to repeat the values verbatim.

# Claude runs: docker inspect my-container
# ⚠️  AI Guard WARNING: This command output contains high-risk sensitive
#    data (Generic credential in key=value). Do NOT reproduce these values.
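Conceptually, the post-execution scan is just a filter applied to the command's captured output before the model sees it. A rough stand-in (not AI Guard's actual code; `scan_output` and its two patterns are made up for illustration):

```shell
# Stand-in for AI Guard's output scanner; the real engine applies far
# more detectors than these two illustrative patterns.
scan_output() {
  grep -Eq 'AKIA[0-9A-Z]{16}|sk_live_[0-9A-Za-z]+'
}

# Stand-in for the captured output of e.g. `docker inspect my-container`.
out='STRIPE_SECRET=sk_live_abc123'

if printf '%s\n' "$out" | scan_output; then
  echo 'WARNING: command output contains high-risk sensitive data'
fi
```

The key property is that the scan sits between the command and the model, so secrets surfaced by otherwise-legitimate commands are caught at the last possible moment.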

If AI Guard blocks a command you actually need, tell Claude to proceed and it will be allowed through once.

Setup

All three protections are installed by a single command:

ai-guard setup

This installs the shell wrappers for all tools and registers all three Claude Code hooks automatically.
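Claude Code hooks are registered in its settings file. The entry written by `ai-guard setup` might look roughly like the sketch below; the matchers and `ai-guard hook ...` subcommand names are assumptions for illustration, not the actual commands:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read|Bash",
        "hooks": [{ "type": "command", "command": "ai-guard hook pre" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "ai-guard hook post" }]
      }
    ]
  }
}
```

PreToolUse entries run before Claude's Read and Bash tools execute (enabling blocking), while PostToolUse entries run afterward (enabling output scanning).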


MCP Server

AI Guard includes an MCP (Model Context Protocol) server that adds file protection to any MCP-compatible AI tool — Cursor, Zed, Continue, and others.

The MCP server exposes two tools:

  • read_file — scans the file before returning its contents. Critical findings are blocked outright; high/medium findings return the content with a warning prepended.
  • read_file_force — reads without scanning. Use this only when the user has explicitly confirmed they want to proceed.

Install

Build the MCP server:

npm install -g github:bpSecOps/ai-guard
ai-guard build:mcp   # or: node mcp/build.mjs from the repo

This produces dist/ai-guard-mcp.mjs.

Claude Code

claude mcp add ai-guard node /path/to/ai-guard/dist/ai-guard-mcp.mjs --scope user

Note: Claude Code users also get the PreToolUse hook installed by ai-guard setup, which intercepts native file reads at the OS level and cannot be bypassed by the model. The MCP server adds a second layer on top.

Cursor

Add to your Cursor MCP config (~/.cursor/mcp.json or the project-level .cursor/mcp.json):

{
  "mcpServers": {
    "ai-guard": {
      "command": "node",
      "args": ["/path/to/ai-guard/dist/ai-guard-mcp.mjs"]
    }
  }
}

Zed

Add to your Zed settings.json under "context_servers":

{
  "context_servers": {
    "ai-guard": {
      "command": {
        "path": "node",
        "args": ["/path/to/ai-guard/dist/ai-guard-mcp.mjs"]
      }
    }
  }
}

Continue

Add to your Continue config.json under "mcpServers":

{
  "mcpServers": [
    {
      "name": "ai-guard",
      "command": "node",
      "args": ["/path/to/ai-guard/dist/ai-guard-mcp.mjs"]
    }
  ]
}

How it works

When a supported tool calls read_file, AI Guard scans the file using the same detection engine as the browser extension and CLI. If something critical is found, the read is blocked and the tool is instructed to tell you what was found and ask whether to proceed. If you confirm, the tool can call read_file_force with the same path to allow the read.

# AI asks to read .env
read_file("/home/user/project/.env")

🔴 AI Guard blocked this read.

The file `.env` contains critical sensitive data:

  [!!!] AWS Access Key ID — AK********LE
  [!!!] .env file content — AW********DE

Tell the user what was found and ask if they want to proceed.
If they confirm, use `read_file_force` with the same path to allow the read.

Development

npm install
npm test               # Run test suite
npm run dev            # Chrome dev server with hot reload
npm run build          # Build for Chrome
npm run build:firefox  # Build for Firefox
npm run build:cli      # Build CLI binary
npm run build:mcp      # Build MCP server
npm run build:all      # Build everything