@futurespeak-ai/privacy-shield
# Privacy Shield
Strip PII from LLM prompts. Restore it in responses. Zero cloud dependency.
## The Problem
Every API call to an LLM can leak sensitive data. API keys, email addresses, credit card numbers, Social Security numbers, file paths with your username in them. Once those tokens hit an external server, you have lost control of them. Most developers do not realize how much PII flows through their prompts until it is too late.
Privacy Shield sits between your application and the LLM. It detects PII using regex pattern matching, replaces each match with a deterministic placeholder, and stores the mapping locally. When the response comes back, it rehydrates the placeholders with the original values. The LLM never sees your secrets. Your output is fully restored.
## How It Works
- Detection -- Regex patterns scan text across the PII categories listed below
- Replacement -- Each match gets a deterministic placeholder using FNV-1a hashing scoped to your session nonce
- Round-trip -- Rehydration restores placeholders to original values with perfect fidelity
Placeholders look like `<<PII:SECRET:a3f8b2c1>>`. They are deterministic within a session, so the same value always produces the same placeholder. Different sessions produce different placeholders.
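As a sketch of how such placeholders can be derived (the exact hash input ordering and digest formatting used by the library are assumptions here):

```javascript
// 32-bit FNV-1a hash -- small, fast, and deterministic.
function fnv1a(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash >>> 0;
}

// Hypothetical placeholder builder: hashing the session nonce together with
// the value keeps placeholders stable within a session but unlinkable across sessions.
function placeholder(category, value, nonce) {
  const digest = fnv1a(nonce + ':' + value).toString(16).padStart(8, '0');
  return `<<PII:${category}:${digest}>>`;
}
```

Within one nonce, the same value always maps to the same placeholder; a fresh nonce yields a completely different mapping.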
## PII Categories
| Category | What it catches | Example |
|---|---|---|
| SECRET | Anthropic keys, OpenAI keys, AWS keys, GitHub PATs, Slack tokens, Google API keys, JWTs, generic secrets | sk-ant-api03-..., AKIA..., ghp_... |
| CREDIT_CARD | Visa, Mastercard, Amex, Discover | 4111111111111111 |
| SSN | US Social Security numbers | 123-45-6789 |
| EMAIL | Email addresses | [email protected] |
| PHONE | US phone numbers (various formats) | (512) 555-0199 |
| IP | Public IP addresses (preserves private/loopback) | 203.0.113.42 (scrubbed), 192.168.1.1 (kept) |
| PATH | File paths containing your OS username | C:\Users\you\secret.txt |
Private IPs (192.168.x, 10.x, 172.16-31.x) and loopback (127.0.0.1) are intentionally preserved. They are not externally identifying.
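The public-vs-private decision can be sketched like this (an illustrative check, not the package's actual code):

```javascript
// Returns true for IPv4 addresses that are preserved (private or loopback),
// false for public addresses, which get scrubbed.
function isPreservedIp(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some(p => !Number.isInteger(p) || p < 0 || p > 255)) return false;
  const [a, b] = parts;
  if (a === 10) return true;                        // 10.0.0.0/8
  if (a === 127) return true;                       // loopback
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
  return false;
}
```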
## Install

```bash
npm install @asimov-federation/privacy-shield
```

## Quick Start

```js
import { createShield } from '@asimov-federation/privacy-shield';

const shield = createShield();

const clean = shield.scrub('My key is sk-ant-api03-secret123456789012345 and email is [email protected]');
// Send `clean` to your LLM -- no secrets leave your machine

const restored = shield.rehydrate(llmResponse);
// Original values are back
```

## API Reference
### `createShield(config?)`
Creates an isolated Privacy Shield instance.
Config options:
| Option | Type | Default | Description |
|---|---|---|---|
| nonce | string | Random hex | Session nonce for deterministic hashing |
| patterns | object | {} | Additional regex patterns keyed by category |
| username | string | process.env.USERNAME or USER | OS username for path scrubbing |
Returns an object with four methods:
### `shield.scrub(text)`

Scans text for PII, replaces matches with placeholders, and returns the scrubbed text.

```js
const scrubbed = shield.scrub('Contact [email protected] about invoice 4111111111111111');
// 'Contact <<PII:EMAIL:f3a1b2c4>> about invoice <<PII:CREDIT_CARD:d9e8f7a6>>'
```

### `shield.rehydrate(text)`
Restores PII placeholders to their original values.
```js
const restored = shield.rehydrate(scrubbed);
// 'Contact [email protected] about invoice 4111111111111111'
```

### `shield.stats()`
Returns scrubbing statistics for the current session.
```js
shield.stats();
// { categories: { EMAIL: 1, CREDIT_CARD: 1 }, total: 2 }
```

### `shield.reset()`
Clears all stored mappings, generates a new session nonce, and zeros the stats. Previously scrubbed placeholders will no longer rehydrate.
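Conceptually, scrub, rehydrate, and reset revolve around a local placeholder-to-value map. A simplified model of that store (not the library's internals):

```javascript
// Minimal mapping store: placeholders are remembered during scrubbing,
// looked up during rehydration, and discarded on reset.
function createStore() {
  let map = new Map(); // placeholder -> original value

  return {
    remember(ph, original) { map.set(ph, original); },
    rehydrate(text) {
      let out = text;
      for (const [ph, original] of map) out = out.split(ph).join(original);
      return out;
    },
    reset() { map = new Map(); } // old placeholders can no longer be restored
  };
}
```

After `reset()`, a previously issued placeholder simply passes through unchanged, which is why responses scrubbed before the reset cannot be rehydrated afterward.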
## Exported Utilities

```js
import { PII_PATTERNS, scrubPii, rehydratePii, fnv1a, escapeRegex } from '@asimov-federation/privacy-shield';
```

These are the lower-level building blocks for integrating with custom storage or building your own shield implementation.
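For instance, `escapeRegex` is the kind of helper needed when a literal value (such as an OS username for PATH scrubbing) must be embedded in a dynamically built pattern. A typical implementation, shown here as a sketch (the exported one may differ):

```javascript
// Escape regex metacharacters so a literal string matches itself and nothing else.
function escapeRegex(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Without escaping, the '.' in 'jane.doe' would match any character.
const literal = new RegExp(escapeRegex('jane.doe'));
literal.test('jane.doe'); // true
literal.test('janeXdoe'); // false
```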
## Integration with LLM Libraries

### Wrapping fetch calls
```js
import { createShield } from '@asimov-federation/privacy-shield';

const shield = createShield();

async function safeLlmCall(prompt) {
  const scrubbed = shield.scrub(prompt);
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01'
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      messages: [{ role: 'user', content: scrubbed }]
    })
  });
  const data = await response.json();
  const rawAnswer = data.content[0].text;
  return shield.rehydrate(rawAnswer);
}
```

### Per-request isolation
If you need separate PII mappings per request (for example, in a multi-tenant server), create a shield per request:
```js
app.post('/chat', async (req, res) => {
  const shield = createShield();
  const clean = shield.scrub(req.body.prompt);
  const answer = await callLlm(clean);
  res.json({ response: shield.rehydrate(answer) });
});
```

## Custom Patterns
Extend detection by attaching extra regex patterns to an existing category:
```js
const shield = createShield({
  patterns: {
    SECRET: [/\bEMP-\d{6}\b/g],        // Employee IDs treated as secrets
    EMAIL: [/\b[A-Z0-9]+@internal\b/g] // Internal shorthand addresses
  }
});
```

## Running Tests

```bash
node test.js
```

25 tests cover all categories, round-trip rehydration, stats tracking, session determinism, reset behavior, custom patterns, and a real-world mixed-PII scenario.
## Credits
Built by FutureSpeak.AI with Claude Opus 4.6 as part of the Asimov Federation.
Part of the Privacy Shield from Asimov's cLaws: data sovereignty means your secrets never leave your machine unmasked.
## Related
- Agent Friday -- AI OS with voice portal, Claude Code backbone, and privacy-first architecture
- Asimov's cLaw Spec -- Constraint framework for AI agents: safety, sovereignty, transparency
## License
MIT
