
@capsulesecurity/clawguard · v0.1.5 · 119 downloads

ClawGuard

A security guard plugin for OpenClaw that monitors and validates tool calls before execution, using an LLM-as-a-Judge approach to detect risky operations.

Features

  • Tool Call Logging - Logs the full JSON of every tool call before execution
  • LLM as a Judge - Uses a secondary LLM to evaluate tool calls for security risks
  • Configurable Blocking - Automatically blocks operations the judge rates high/critical risk
  • Custom Judge Prompts - Override the default security-evaluation prompt

Installation

openclaw plugins install @capsulesecurity/clawguard

Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| enabled | boolean | true | Enable or disable the plugin |
| logToolCalls | boolean | true | Log full tool call JSON to logger |
| securityCheckEnabled | boolean | true | Enable LLM as a Judge for security evaluation |
| securityPrompt | string | (built-in) | Custom prompt for the judge LLM |
| blockOnRisk | boolean | true | Block tool calls judged as high/critical risk |
| timeoutMs | number | 15000 | Timeout for judge evaluation in milliseconds |
| maxContextWords | number | 2000 | Maximum words of session context to include |
| gatewayHost | string | 127.0.0.1 | Gateway host for LLM calls |
| gatewayPort | number | 18789 | Gateway port for LLM calls |

Example Configuration

{
  "plugins": {
    "capsule-claw-guard": {
      "enabled": true,
      "logToolCalls": true,
      "securityCheckEnabled": true,
      "blockOnRisk": true,
      "timeoutMs": 20000
    }
  }
}

Security Risks Evaluated

The judge LLM evaluates tool calls for:

  • Command injection (shell commands with untrusted input)
  • Path traversal attacks (accessing files outside allowed directories)
  • Sensitive data exposure (reading credentials, secrets, private keys)
  • Destructive operations (deleting important files, dropping databases)
  • Network attacks (unauthorized external requests, data exfiltration)
  • Privilege escalation attempts
  • Malicious file operations (writing to system directories)
  • SQL injection patterns
  • Code execution with untrusted input
  • Rogue agent behavior (attempts to bypass safety controls, deceptive actions, unauthorized autonomous operations)
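As an illustration, a verdict flagging one of the risks above might look like the following. The field names follow the verdict schema shown in the securityPrompt example; the specific values here are hypothetical:

```json
{
  "isRisk": true,
  "riskLevel": "high",
  "riskType": "command_injection",
  "reason": "The shell command interpolates untrusted user input without quoting or validation."
}
```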

Custom Judge Prompt

You can provide a custom prompt for the judge LLM using the securityPrompt configuration option. Use {TOOL_CALL_JSON} as a placeholder for the tool call data:

{
  "plugins": {
    "capsule-claw-guard": {
      "securityPrompt": "You are a security judge. Evaluate this tool call:\n{TOOL_CALL_JSON}\n\nReturn your verdict as JSON: {\"isRisk\": boolean, \"riskLevel\": \"none\"|\"low\"|\"medium\"|\"high\"|\"critical\", \"riskType\": string, \"reason\": string}"
    }
  }
}

Requirements

The plugin makes HTTP calls to the OpenClaw Gateway's /v1/chat/completions endpoint for LLM evaluation. This requires:

  1. Gateway running: The OpenClaw gateway must be running and accessible
  2. Enable chat completions endpoint: Set gateway.http.endpoints.chatCompletions.enabled to true in your config:
    openclaw config set gateway.http.endpoints.chatCompletions.enabled true
  3. Authentication (optional): If your gateway requires authentication, set one of:
    • OPENCLAW_GATEWAY_TOKEN environment variable
    • OPENCLAW_GATEWAY_PASSWORD environment variable
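As a rough sketch, the gateway request could be assembled from these pieces as follows. The host, port, endpoint path, and environment variable names come from this README; the `Bearer` header format, the token-over-password precedence, and the function name are assumptions for illustration:

```typescript
interface GatewayConfig {
  gatewayHost: string;
  gatewayPort: number;
}

// Hypothetical helper: build the URL and headers for a judge request.
function buildGatewayRequest(
  cfg: GatewayConfig,
  env: Record<string, string | undefined>
): { url: string; headers: Record<string, string> } {
  const url = `http://${cfg.gatewayHost}:${cfg.gatewayPort}/v1/chat/completions`;
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  // Optional auth: token takes precedence over password (assumed ordering).
  const secret = env.OPENCLAW_GATEWAY_TOKEN ?? env.OPENCLAW_GATEWAY_PASSWORD;
  if (secret) headers["Authorization"] = `Bearer ${secret}`;
  return { url, headers };
}
```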

How It Works

  1. The plugin hooks into before_tool_call events
  2. Logs the full tool call JSON (if logging enabled)
  3. Loads the session context from session files (limited by maxContextWords)
  4. Sends both the tool call and session context to the judge LLM for security evaluation
  5. The judge returns a verdict with risk level and reasoning
  6. If judged as high/critical risk and blocking is enabled, the tool call is blocked
  7. All verdicts are logged for audit purposes
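The flow above can be sketched in a few lines. The `before_tool_call` hook name and the high/critical blocking rule come from this README, and the `Verdict` shape follows the securityPrompt schema; the function names and judge wiring are illustrative, not the plugin's actual internals:

```typescript
type RiskLevel = "none" | "low" | "medium" | "high" | "critical";

interface Verdict {
  isRisk: boolean;
  riskLevel: RiskLevel;
  riskType: string;
  reason: string;
}

// A judge takes the serialized tool call plus session context, returns a verdict.
type Judge = (toolCallJson: string, context: string) => Verdict;

function shouldBlock(verdict: Verdict, blockOnRisk: boolean): boolean {
  // Only high/critical verdicts are blocked, and only when blocking is enabled.
  return blockOnRisk &&
    (verdict.riskLevel === "high" || verdict.riskLevel === "critical");
}

// Sketch of the before_tool_call handling (steps 2, 4-6 above).
function guardToolCall(
  toolCall: unknown,
  context: string,
  judge: Judge,
  blockOnRisk = true
): { blocked: boolean; verdict: Verdict } {
  const toolCallJson = JSON.stringify(toolCall); // serialize/log the call
  const verdict = judge(toolCallJson, context);  // ask the judge LLM
  return { blocked: shouldBlock(verdict, blockOnRisk), verdict };
}
```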

Session Context

The plugin loads conversation history from the session files to provide context for the judge LLM. This allows the judge to make more informed decisions by understanding the conversation flow that led to the tool call.

  • Session files are located at ~/.openclaw/agents/{agentId}/sessions/*.jsonl
  • The context is limited by word count (default: 2000 words) to manage token usage
  • Most recent messages are prioritized when truncating
  • Only user and assistant messages are included (system messages are filtered out)
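The truncation rules above can be sketched as follows: filter out system messages, then walk newest-to-oldest until the word budget is spent, so the most recent messages survive. The word budget and role filtering come from this README; the function and field names are assumptions:

```typescript
interface SessionMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Hypothetical sketch: build a word-limited context string from session messages.
function buildContext(messages: SessionMessage[], maxContextWords = 2000): string {
  const kept: string[] = [];
  let words = 0;
  // Iterate newest-to-oldest so recent messages are prioritized when truncating.
  for (let i = messages.length - 1; i >= 0; i--) {
    const m = messages[i];
    if (m.role === "system") continue; // system messages are filtered out
    const count = m.content.split(/\s+/).filter(Boolean).length;
    if (words + count > maxContextWords) break;
    words += count;
    kept.unshift(`${m.role}: ${m.content}`); // restore chronological order
  }
  return kept.join("\n");
}
```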

License

MIT