# @barric/openai

v0.1.2
barric firewall wrapper for the OpenAI SDK. A drop-in replacement that scans every `chat.completions.create` call for prompt injection, PII leaks, and other threats.
## Install

```sh
npm install @barric/core @barric/openai openai
```

## Quick Start
```ts
import OpenAI from 'openai'
import { createFirewall } from '@barric/core'
import { barricOpenAI } from '@barric/openai'

const firewall = createFirewall({
  rules: ['prompt-injection', 'pii-redaction'],
  injection: { threshold: 0.7, action: 'block' },
  pii: { mode: 'redact', types: ['email', 'phone', 'ssn'] },
})

const client = barricOpenAI(new OpenAI(), firewall)

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: userInput }],
})

console.log(response.choices[0].message.content)
console.log(response._barric.events) // firewall scan results
```

## How It Works
`barricOpenAI` returns a drop-in replacement for the OpenAI client:

- Extracts the last `user` message from `params.messages`
- Runs it through inbound scanners (injection detection, PII redaction, rate limiting, encoding detection, input limits)
- If the input passes, the sanitized message replaces the original and the real OpenAI API call executes
- Outbound scanners run on the response (output limits, system prompt leak detection)
- The response is returned with a `_barric` property containing firewall metadata

System and assistant messages pass through untouched.
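The inbound/outbound scan flow above can be sketched roughly as follows. This is a simplified illustration, not the package's actual implementation: the function names (`scanInbound`, `scanOutbound`) and the naive regex-based checks are assumptions made for the example.

```ts
// Hypothetical sketch of the two-pass scan pipeline; names and heuristics
// are illustrative only, not the real @barric/core scanners.

type ScanResult = { blocked: boolean; sanitized: string; scanners: string[] }

// Inbound pass: redact email-like PII and flag an obvious injection phrase.
function scanInbound(text: string): ScanResult {
  const scanners: string[] = []
  const sanitized = text.replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, '[EMAIL]')
  if (sanitized !== text) scanners.push('pii-redaction')
  const blocked = /ignore (all )?previous instructions/i.test(text)
  if (blocked) scanners.push('prompt-injection')
  return { blocked, sanitized, scanners }
}

// Outbound pass: truncate responses that exceed a simple output limit.
function scanOutbound(text: string, maxChars = 4000): ScanResult {
  const tooLong = text.length > maxChars
  return {
    blocked: false,
    sanitized: tooLong ? text.slice(0, maxChars) : text,
    scanners: tooLong ? ['output-limit'] : [],
  }
}

const inbound = scanInbound('Contact me at alice@example.com')
console.log(inbound.sanitized) // email replaced with [EMAIL]
console.log(inbound.scanners)
```

A real firewall would use model-based injection scoring against the configured `threshold` rather than pattern matching, but the shape is the same: scan, sanitize or block, then forward.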
## Handling Blocked Requests
When the firewall blocks a request, an error is thrown before the API call:
```ts
try {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: maliciousInput }],
  })
} catch (err) {
  if (err._barric?.blocked) {
    console.log('Blocked by:', err._barric.events[0].scanners)
  } else {
    throw err // Regular OpenAI error
  }
}
```

## Per-request Context
```ts
const client = barricOpenAI(new OpenAI(), firewall, {
  context: { userId: 'user-123', ip: '192.168.1.1' },
})
```

## License

MIT
