stripllm v0.1.0
LLM sanitization SDK — DOMPurify, but for LLM context windows.
StripLLM
DOMPurify, but for LLM context windows.
StripLLM is an open-source TypeScript/JavaScript SDK that sanitizes LLM inputs and outputs in your existing pipeline — no infrastructure changes, no external API calls, sub-10ms latency.
npm install stripllm

Quickstart
import { StripLLM } from 'stripllm';
const strip = new StripLLM();
// 1. Block prompt injection before it reaches the LLM
const safeInput = strip.clean(userMessage);
// 2. Redact PII — get a mapping back for rehydration
const [redacted, mapping] = strip.redact('Email me at alice@example.com');
// redacted → "Email me at [EMAIL_1]"
// mapping → { "[EMAIL_1]": "alice@example.com" }
// After LLM responds, restore originals
const response = strip.rehydrate(llmOutput, mapping);
// 3. Validate LLM output — enforce schema, catch leaks & hallucinations
const validated = strip.enforce(llmResponse, 'json');
// 4. Full conversation risk audit
const report = strip.audit(conversation);
console.log(report.riskScore); // → 0.12

API Reference
new StripLLM(threshold = 0.3)
Initialize the sanitizer. threshold is the risk score (0–1) at or above which clean() throws; lower values are stricter.
strip.clean(text: string): string
Detects prompt injection, jailbreak attempts, and unicode tricks. Throws InjectionDetectedError if risk score >= threshold.
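For intuition, the score-and-threshold behavior can be sketched in plain TypeScript. This is an illustrative toy, not StripLLM's detection logic; toyScan, toyClean, ToyInjectionError, and the pattern list are all hypothetical names invented for this sketch:

```typescript
// Toy pattern list — real detectors use far richer signals than these.
const INJECTION_PATTERNS: [string, RegExp][] = [
  ["override-instructions", /ignore (all |any )?(previous|prior) instructions/i],
  ["role-hijack", /you are now (a|an|the) /i],
  ["zero-width-unicode", /[\u200b\u200c\u200d\u2060]/],
];

class ToyInjectionError extends Error {
  constructor(public riskScore: number) {
    super(`injection risk ${riskScore}`);
  }
}

function toyScan(text: string): {
  detected: boolean;
  riskScore: number;
  matchedPatterns: string[];
} {
  const matchedPatterns = INJECTION_PATTERNS
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
  // Naive aggregation: fraction of patterns that matched.
  const riskScore = matchedPatterns.length / INJECTION_PATTERNS.length;
  return { detected: matchedPatterns.length > 0, riskScore, matchedPatterns };
}

function toyClean(text: string, threshold = 0.3): string {
  const { riskScore } = toyScan(text);
  if (riskScore >= threshold) throw new ToyInjectionError(riskScore);
  return text;
}
```

Catching the error with the SDK's throwing API: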
try {
  const safe = strip.clean(userInput);
} catch (e) {
  if (e instanceof InjectionDetectedError) {
    console.log(`Blocked. Risk score: ${e.riskScore}`);
  }
}

Non-throwing variant:
const result = strip.scan(text);
result.detected // boolean
result.riskScore // number 0–1
result.matchedPatterns // string[]

strip.redact(text: string, entities?: EntityType[]): [string, Record<string, string>]
Replaces PII with typed placeholders. Returns [redactedText, mapping].
Supported entity types: EMAIL, PHONE, SSN, CREDIT_CARD, IP_ADDRESS, PASSPORT, DRIVERS_LICENSE, DATE_OF_BIRTH, URL
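To see how typed placeholders and the returned mapping interact, here is a minimal self-contained sketch covering only EMAIL. It is an illustration of the placeholder scheme, not StripLLM's redaction engine; toyRedact and toyRehydrate are hypothetical names:

```typescript
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;

function toyRedact(text: string): [string, Record<string, string>] {
  const mapping: Record<string, string> = {};
  let n = 0;
  // Replace each match with a numbered placeholder, remembering the original.
  const redacted = text.replace(EMAIL_RE, (match) => {
    const placeholder = `[EMAIL_${++n}]`;
    mapping[placeholder] = match;
    return placeholder;
  });
  return [redacted, mapping];
}

function toyRehydrate(text: string, mapping: Record<string, string>): string {
  // Reverse substitution: put the originals back wherever placeholders appear.
  let out = text;
  for (const [placeholder, original] of Object.entries(mapping)) {
    out = out.split(placeholder).join(original);
  }
  return out;
}
```

Selecting specific entities with the SDK: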
const [redacted, mapping] = strip.redact(text, ['EMAIL', 'SSN']);
// Restore originals in LLM output
const final = strip.rehydrate(llmOutput, mapping);

strip.enforce(text: string, schema?: SchemaSpec, raiseOnError = true): string
Validates LLM output for safety and structural correctness.
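Conceptually, key-and-type enforcement works like the following minimal sketch (a toy validator, not StripLLM's enforce(); toyEnforce is a hypothetical name):

```typescript
function toyEnforce(text: string, schema?: Record<string, string>): string {
  const parsed = JSON.parse(text); // throws on malformed JSON
  if (schema) {
    // Check that every required key exists with the expected typeof.
    for (const [key, type] of Object.entries(schema)) {
      if (typeof parsed[key] !== type) {
        throw new Error(`key "${key}" is not a ${type}`);
      }
    }
  }
  return text;
}
```

With the SDK: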
// Require valid JSON
const validated = strip.enforce(response, 'json');
// Require specific keys and types
const validated = strip.enforce(response, { status: 'string', count: 'number' });
// Check for leaks only
const validated = strip.enforce(response);

Non-throwing variant:
const result = strip.validate(text, 'json');
result.valid // boolean
result.errors // string[]
result.warnings // string[]
result.output // string

strip.audit(conversation: ConversationTurn[]): AuditReport
Full security audit of a multi-turn conversation.
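One way such an audit could aggregate per-turn findings into a conversation-level score is sketched below. This is an illustration only, not StripLLM's audit() implementation; toyAudit, the CHECKS list, and the scoring rule are all assumptions made for the sketch:

```typescript
type Turn = { role: string; content: string };
type Finding = { turn: number; kind: string };

// Toy per-turn checks: a PII pattern and an injection pattern.
const CHECKS: [string, RegExp][] = [
  ["pii:email", /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/],
  ["injection:override", /ignore (previous|prior) instructions/i],
];

function toyAudit(conversation: Turn[]): { riskScore: number; findings: Finding[] } {
  const findings: Finding[] = [];
  conversation.forEach((turn, i) => {
    for (const [kind, re] of CHECKS) {
      if (re.test(turn.content)) findings.push({ turn: i, kind });
    }
  });
  // Simple aggregation: fraction of turns that triggered at least one finding.
  const flagged = new Set(findings.map((f) => f.turn)).size;
  const riskScore = conversation.length ? flagged / conversation.length : 0;
  return { riskScore, findings };
}
```

With the SDK: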
const report = strip.audit([
  { role: 'user', content: 'My email is alice@example.com. Help me.' },
{ role: 'assistant', content: 'Sure, I can help with that.' },
]);
report.riskScore // number 0–1
report.findings // Finding[]
report.recommendations // string[]

Why StripLLM vs Alternatives?
|  | StripLLM | Lakera Guard | Rebuff | DIY Regex |
|---|---|---|---|---|
| Local (no API calls) | ✅ | ❌ | ❌ | ✅ |
| Latency | <10ms | ~100ms | ~200ms | <1ms |
| PII rehydration | ✅ | ❌ | ❌ | ❌ |
| Output validation | ✅ | ❌ | ❌ | ❌ |
| Conversation audit | ✅ | ❌ | ❌ | ❌ |
| TypeScript types | ✅ | ❌ | ❌ | ✅ |
| Open source | ✅ MIT | ❌ | ❌ | ✅ |
| Zero dependencies | ✅ | ❌ | ❌ | ✅ |
License
MIT — see LICENSE
Python SDK
Also available for Python: pip install stripllm
Enterprise?
Need centralized LLM security across your organization? Check out Context Firewall — an API gateway that applies StripLLM-style protection to every LLM call in your stack, with a real-time dashboard, SOC 2 audit trail, and RBAC.
