evalguardai-openai
v1.0.1
Drop-in OpenAI SDK wrapper with EvalGuard guardrails, logging & cost tracking
@evalguard/openai
Drop-in OpenAI SDK wrapper that adds real-time guardrails, trace logging, and cost tracking via EvalGuard.
Installation
npm install @evalguard/openai openai
Quick Start
import OpenAI from "openai";
import { wrapOpenAI } from "@evalguard/openai";
const openai = wrapOpenAI(new OpenAI(), {
apiKey: "eg_...",
projectId: "proj_...",
});
// Use exactly like the normal OpenAI SDK — guardrails are automatic
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello, how are you?" }],
});
console.log(response.choices[0].message.content);
Streaming
Streaming works transparently. The wrapper intercepts chunks to log the assembled response without affecting stream behavior.
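Conceptually, the interception works like a tee over the chunk stream: each chunk is yielded to the caller unchanged while its delta content is accumulated for logging once the stream ends. The helper below is an illustrative sketch of that pattern, not the package's actual internals; the chunk shape mirrors the OpenAI SDK's streaming chunks.

```typescript
// Illustrative sketch only — teeAndAssemble is a hypothetical helper,
// not part of @evalguard/openai's public API.
type Chunk = { choices: { delta: { content?: string } }[] };

async function* teeAndAssemble(
  stream: AsyncIterable<Chunk>,
  onComplete: (full: string) => void
): AsyncGenerator<Chunk> {
  let assembled = "";
  for await (const chunk of stream) {
    assembled += chunk.choices[0]?.delta?.content ?? "";
    yield chunk; // pass each chunk through to the caller unchanged
  }
  onComplete(assembled); // log the fully assembled response at stream end
}

// Demo with a fake stream standing in for the OpenAI SDK:
async function demo() {
  async function* fake(): AsyncGenerator<Chunk> {
    for (const piece of ["Hel", "lo", "!"]) {
      yield { choices: [{ delta: { content: piece } }] };
    }
  }
  for await (const _chunk of teeAndAssemble(fake(), (full) => console.log(full))) {
    // the consumer sees every chunk exactly as emitted
  }
}
demo(); // prints "Hello!"
```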
const stream = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Write a poem about AI safety" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
Configuration
const openai = wrapOpenAI(new OpenAI(), {
// Required: your EvalGuard API key
apiKey: "eg_...",
// Optional: EvalGuard API base URL (default: https://evalguard.ai/api/v1)
baseUrl: "https://your-evalguard-instance.com/api/v1",
// Optional: block requests that fail guardrails (default: true)
blockOnViolation: true,
// Optional: log all requests to EvalGuard (default: true)
enableLogging: true,
// Optional: project ID for organizing traces
projectId: "proj_...",
// Optional: custom metadata attached to every trace
metadata: { environment: "production", service: "chatbot" },
// Optional: callback when a guardrail violation is detected
onViolation: (result) => {
console.warn("Guardrail violation:", result.violations);
},
});
What It Does
| Phase | Action |
|-------|--------|
| Pre-request | Sends the prompt to EvalGuard's firewall for prompt injection detection, PII scanning, and toxicity checks |
| LLM call | Passes through to the real OpenAI API unchanged |
| Post-response | Logs model, tokens, latency, cost, and guardrail results as a trace to EvalGuard |
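As a rough illustration of the post-response phase, a trace combines the usage counts returned by the API with latency and a computed cost. The field names and the per-million-token prices below are assumptions for the sake of the example, not the package's real trace schema or current OpenAI pricing.

```typescript
// Hypothetical trace shape — illustrative only.
interface Trace {
  model: string;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  costUsd: number;
}

function buildTrace(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number },
  latencyMs: number,
  priceInPerMTok: number,  // assumed USD per 1M input tokens
  priceOutPerMTok: number  // assumed USD per 1M output tokens
): Trace {
  const costUsd =
    (usage.prompt_tokens / 1_000_000) * priceInPerMTok +
    (usage.completion_tokens / 1_000_000) * priceOutPerMTok;
  return {
    model,
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    latencyMs,
    costUsd,
  };
}

const trace = buildTrace(
  "gpt-4o",
  { prompt_tokens: 12, completion_tokens: 48 },
  850,
  2.5,
  10
);
console.log(trace.costUsd);
```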
Fail-Open Design
If EvalGuard is unreachable (network error, timeout, 5xx), the wrapper passes requests through to OpenAI directly. Your application never breaks because of EvalGuard downtime.
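The fail-open behavior can be sketched as a try/catch around the guardrail check: a failed check blocks (or warns), but a failure to *reach* the check service is swallowed and the request proceeds. Everything below is illustrative — `checkGuardrails` and `guardedCall` are hypothetical names, not the package's internals.

```typescript
// Hypothetical sketch of fail-open guardrails — not the real implementation.
type GuardResult = { passed: boolean; violations: string[] };

async function checkGuardrails(prompt: string): Promise<GuardResult> {
  // Simulate EvalGuard being unreachable (network error, timeout, 5xx).
  throw new Error("ECONNREFUSED");
}

async function guardedCall(
  prompt: string,
  callLLM: (p: string) => Promise<string>
): Promise<string> {
  let result: GuardResult | null = null;
  try {
    result = await checkGuardrails(prompt);
  } catch {
    // Fail open: the guardrail service is down, so skip the check entirely.
    result = null;
  }
  if (result && !result.passed) {
    throw new Error("Blocked: " + result.violations.join(", "));
  }
  return callLLM(prompt); // the underlying request always goes through
}

// The call succeeds even though the guardrail service is unreachable.
guardedCall("hello", async (p) => "echo: " + p).then((out) => console.log(out));
// prints "echo: hello"
```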
Error Handling
When blockOnViolation is true (default) and a guardrail check fails:
import { EvalGuardViolationError } from "@evalguard/openai";
try {
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "malicious prompt..." }],
});
} catch (error) {
if (error instanceof EvalGuardViolationError) {
console.log("Blocked:", error.violations);
// [{ type: "prompt_injection", severity: "critical", message: "..." }]
}
}
Set blockOnViolation: false to log violations without blocking:
const openai = wrapOpenAI(new OpenAI(), {
apiKey: "eg_...",
blockOnViolation: false,
onViolation: (result) => {
// Log but don't block
analytics.track("guardrail_violation", result);
},
});
License
MIT
