aegisprompt-proxy
v1.0.2
Privacy proxy for OpenClaw - scans and redacts sensitive data between AI agents and LLM providers
# AegisPrompt OpenClaw Proxy
Privacy proxy that sits between OpenClaw and your LLM provider. Scans every prompt for sensitive data (PII, credentials, API keys, medical records) and redacts or blocks it before it leaves your machine.
## Quick Start

```shell
# From the project root
cd packages/openclaw-proxy
npm install
npm start
```

The proxy starts on `http://127.0.0.1:18790` by default.
## Connect to OpenClaw

In your OpenClaw config (`~/.openclaw/config.yaml` or via the CLI), point your LLM provider through the proxy:
### For Anthropic (Claude)

```yaml
# Instead of pointing directly to api.anthropic.com:
providers:
  anthropic:
    baseUrl: http://127.0.0.1:18790/v1/messages
```

### For OpenAI

```yaml
providers:
  openai:
    baseUrl: http://127.0.0.1:18790/v1
```

That's it: OpenClaw sends requests to the proxy, the proxy scans them and forwards clean requests to the real API.
## Scanning Modes

| Mode     | Behavior                                                |
|----------|---------------------------------------------------------|
| `audit`  | Logs detections but passes everything through           |
| `warn`   | Logs + flags detections, passes through                 |
| `redact` | Replaces sensitive data with `[REDACTED]` (default)     |
| `block`  | Rejects requests containing critical sensitive data     |
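The four modes differ only in what the proxy does once the scanner reports detections. A minimal sketch of that dispatch step (the `applyMode` function and its return shapes are hypothetical, not the proxy's actual source):

```javascript
// Illustrative mode dispatch: given a scanned request body and a list of
// detections, decide whether to forward, clean, or reject. The function
// name and object shapes are assumptions, not the proxy's real API.
function applyMode(mode, body, detections) {
  if (detections.length === 0) return { action: "forward", body };

  switch (mode) {
    case "audit": // log only, pass the body through untouched
      return { action: "forward", body, logged: detections };
    case "warn": // log and flag, still pass through
      return { action: "forward", body, logged: detections, flagged: true };
    case "redact": { // replace each detected span with [REDACTED]
      let clean = body;
      for (const d of detections) clean = clean.replaceAll(d.match, "[REDACTED]");
      return { action: "forward", body: clean, logged: detections };
    }
    case "block": // refuse to forward requests with sensitive data
      return { action: "reject", status: 403, logged: detections };
    default:
      throw new Error(`unknown mode: ${mode}`);
  }
}

// Example: one SSN detection in a prompt, running in redact mode
const detections = [{ type: "ssn", match: "123-45-6789", severity: "critical" }];
const result = applyMode("redact", "my ssn is 123-45-6789", detections);
// result.body === "my ssn is [REDACTED]"
```

Note that only `block` stops the request from reaching the upstream provider; the other three modes always forward.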
Set the mode via:

```shell
# CLI flag
npm start -- --mode block

# Environment variable
AEGIS_MODE=redact npm start

# Config file
# Edit aegisprompt.config.json (run `npm start init` to generate)
```

## What It Detects

46 detection patterns across 6 categories:
- PII — Email, phone, SSN, addresses, passport, driver's license, UK NINO
- Credentials — AWS, GitHub, Stripe, OpenAI, Anthropic, database URLs, private keys, JWTs, env secrets, passwords, Bearer tokens, SendGrid, Twilio, Azure, GCP, OpenClaw gateway tokens
- Financial — Credit cards (Luhn validated), bank accounts, routing numbers, IBAN, SWIFT/BIC, EIN/TIN
- HIPAA — Medical record numbers, NPI, DEA (with checksum), health plan IDs
- Crypto — Bitcoin, Ethereum addresses, seed/mnemonic phrases
- System — Sensitive file paths (SSH keys, .env files, OpenClaw credentials)
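As an illustration of the pattern-plus-checksum approach (the Luhn validation noted for credit cards above), here is a hedged sketch; the regex and function names are illustrative and not among the proxy's actual 46 patterns:

```javascript
// Illustrative credit-card detector: a loose regex finds candidate digit
// runs, then the Luhn checksum filters out false positives.
function luhnValid(digits) {
  let sum = 0;
  let double = false;
  // Walk right to left, doubling every second digit (Luhn algorithm)
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) { d *= 2; if (d > 9) d -= 9; }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

function detectCards(text) {
  // 13-16 digits, optionally separated by spaces or dashes
  const candidates = text.match(/\b(?:\d[ -]?){13,16}\b/g) ?? [];
  return candidates.filter((c) => luhnValid(c.replace(/[ -]/g, "")));
}

detectCards("pay 4111 1111 1111 1111 or 1234 5678 9012 3456");
// keeps only the first number: it passes the Luhn check, the second does not
```

Checksum validation is what keeps a 16-digit order number or tracking ID from being redacted as a card number.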
## Configuration

Generate a config file:

```shell
node src/index.js init
```

This creates `aegisprompt.config.json`:
```json
{
  "proxy": {
    "host": "127.0.0.1",
    "port": 18790,
    "routes": {
      "/v1/messages": "https://api.anthropic.com",
      "/v1/": "https://api.openai.com"
    }
  },
  "scanning": {
    "mode": "redact",
    "enabledTiers": ["core", "tier1", "tier2"],
    "minSeverity": "medium",
    "scanResponses": true
  },
  "audit": {
    "enabled": true,
    "filePath": "./aegisprompt-audit.log"
  }
}
```

## Industry Templates
For healthcare / HIPAA:

```json
{
  "scanning": {
    "enabledCategories": ["pii", "hipaa", "credential"],
    "mode": "block",
    "blockSeverities": ["critical", "high"]
  }
}
```

For financial services:

```json
{
  "scanning": {
    "enabledCategories": ["pii", "financial", "credential"],
    "mode": "redact"
  }
}
```

## Audit Log
When enabled, every detection is written to `aegisprompt-audit.log` in JSON Lines format:

```json
{"event":"detection","timestamp":"2026-02-16T14:30:00.000Z","method":"POST","url":"/v1/messages","mode":"redact","detections":[{"type":"ssn","label":"Social Security Numbers","severity":"critical","count":1}]}
```

View stats:

```shell
node src/index.js stats
```

## CLI Options
```
--port <number>     Proxy port (default: 18790)
--host <string>     Bind address (default: 127.0.0.1)
--mode <mode>       Scanning mode: audit|warn|redact|block
--upstream <url>    Default upstream LLM URL
--verbose           Enable debug logging
--audit-off         Disable audit logging
--no-responses      Don't scan LLM responses
```

## Environment Variables
```shell
AEGIS_PORT=18790
AEGIS_HOST=127.0.0.1
AEGIS_MODE=redact
AEGIS_UPSTREAM=https://api.openai.com
AEGIS_LOG_LEVEL=info
AEGIS_SCAN_RESPONSES=true
```
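With three sources for the same settings (config file, environment variables, CLI flags), the question is which one wins. A plausible sketch, assuming the common precedence of built-in defaults < config file < environment variables (the proxy's real resolution order is not documented here, and `resolveConfig` is hypothetical):

```javascript
// Hypothetical settings resolution: environment variables override the
// config file, which overrides built-in defaults.
const DEFAULTS = { port: 18790, host: "127.0.0.1", mode: "redact", scanResponses: true };

function resolveConfig(fileConfig = {}, env = process.env) {
  return {
    port: env.AEGIS_PORT ? Number(env.AEGIS_PORT) : fileConfig.port ?? DEFAULTS.port,
    host: env.AEGIS_HOST ?? fileConfig.host ?? DEFAULTS.host,
    mode: env.AEGIS_MODE ?? fileConfig.mode ?? DEFAULTS.mode,
    scanResponses: env.AEGIS_SCAN_RESPONSES
      ? env.AEGIS_SCAN_RESPONSES === "true" // env vars arrive as strings
      : fileConfig.scanResponses ?? DEFAULTS.scanResponses,
  };
}

// Example: AEGIS_MODE overrides the file's mode; the file's port still wins
resolveConfig({ mode: "warn", port: 9000 }, { AEGIS_MODE: "block" });
// → { port: 9000, host: "127.0.0.1", mode: "block", scanResponses: true }
```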