@evalguard/anthropic
Drop-in Anthropic SDK wrapper that adds real-time guardrails, trace logging, and cost tracking via EvalGuard.
Installation
npm install @evalguard/anthropic @anthropic-ai/sdk
Quick Start
import Anthropic from "@anthropic-ai/sdk";
import { wrapAnthropic } from "@evalguard/anthropic";
const anthropic = wrapAnthropic(new Anthropic(), {
apiKey: "eg_...",
projectId: "proj_...",
});
// Use exactly like the normal Anthropic SDK — guardrails are automatic
const message = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, how are you?" }],
});
console.log(message.content[0].text);
Streaming
Streaming works transparently with both create({ stream: true }) and the stream() helper method.
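Conceptually, a transparent streaming wrapper runs its guardrail check once on the prompt before the stream opens, then forwards every event untouched. The sketch below illustrates that shape only; `checkPrompt` and the event type are simplified stand-ins, not the package's real internals:

```typescript
// Illustrative sketch of a streaming passthrough: the guardrail check
// happens once up front, then events flow through unmodified.
type StreamEvent = { type: string; text?: string };

async function* guardedStream(
  checkPrompt: (prompt: string) => Promise<void>, // throws on violation (assumed)
  upstream: AsyncIterable<StreamEvent>,
  prompt: string,
): AsyncGenerator<StreamEvent> {
  await checkPrompt(prompt); // pre-request guardrail check
  for await (const event of upstream) {
    yield event; // passthrough: events are not buffered or altered
  }
}
```

Because the check happens before the first event is yielded, a blocked prompt fails fast and no partial output reaches the caller.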
Using create with stream: true
const stream = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a poem about AI safety" }],
stream: true,
});
for await (const event of stream) {
if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
process.stdout.write(event.delta.text);
}
}
Using the stream() helper
const stream = await anthropic.messages.stream({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "Explain quantum computing" }],
});
for await (const event of stream) {
if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
process.stdout.write(event.delta.text);
}
}
Configuration
const anthropic = wrapAnthropic(new Anthropic(), {
// Required: your EvalGuard API key
apiKey: "eg_...",
// Optional: EvalGuard API base URL (default: https://evalguard.ai/api/v1)
baseUrl: "https://your-evalguard-instance.com/api/v1",
// Optional: block requests that fail guardrails (default: true)
blockOnViolation: true,
// Optional: log all requests to EvalGuard (default: true)
enableLogging: true,
// Optional: project ID for organizing traces
projectId: "proj_...",
// Optional: custom metadata attached to every trace
metadata: { environment: "production", service: "chatbot" },
// Optional: callback when a guardrail violation is detected
onViolation: (result) => {
console.warn("Guardrail violation:", result.violations);
},
});
What It Does
| Phase | Action |
|-------|--------|
| Pre-request | Sends the prompt (including the system prompt) to EvalGuard's firewall for prompt injection detection, PII scanning, and toxicity checks |
| LLM call | Passes through to the real Anthropic API unchanged |
| Post-response | Logs model, token counts, latency, cost, and guardrail results as a trace to EvalGuard |
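The three phases above can be sketched as a single wrapper around the model call. This is a simplified illustration under stated assumptions: `checkPrompt`, `logTrace`, and the result shapes are hypothetical stand-ins for EvalGuard API calls, not the package's actual implementation:

```typescript
// Simplified lifecycle sketch: pre-request check, passthrough call,
// post-response trace logging. All names here are illustrative.
type GuardResult = { passed: boolean; violations: string[] };

async function guardedCreate<T>(
  checkPrompt: (prompt: string) => Promise<GuardResult>,      // hypothetical EvalGuard call
  callModel: () => Promise<T>,                                // the real Anthropic call
  logTrace: (entry: { passed: boolean; latencyMs: number }) => Promise<void>,
  prompt: string,
  blockOnViolation = true,
): Promise<T> {
  const result = await checkPrompt(prompt);                   // pre-request
  if (!result.passed && blockOnViolation) {
    throw new Error(`Blocked: ${result.violations.join(", ")}`);
  }
  const start = Date.now();
  const response = await callModel();                         // LLM call, unchanged
  await logTrace({ passed: result.passed, latencyMs: Date.now() - start }); // post-response
  return response;
}
```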
Fail-Open Design
If EvalGuard is unreachable (network error, timeout, 5xx), the wrapper passes requests through to Anthropic directly. Your application never breaks because of EvalGuard downtime.
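A fail-open check of this kind reduces to one try/catch around the guardrail call: on any transport failure, the request is treated as allowed. A minimal sketch (the function name and `degraded` flag are illustrative, not part of the package's API):

```typescript
// Fail-open sketch: if the guardrail service is unreachable, allow the
// request rather than failing the caller.
async function checkWithFailOpen(
  check: () => Promise<{ passed: boolean }>, // hypothetical EvalGuard check
): Promise<{ passed: boolean; degraded: boolean }> {
  try {
    const result = await check();
    return { ...result, degraded: false };
  } catch {
    // Network error, timeout, or 5xx: pass through to Anthropic directly.
    return { passed: true, degraded: true };
  }
}
```

The trade-off is deliberate: a guardrail outage costs coverage for its duration, never availability of the application itself.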
Error Handling
When blockOnViolation is true (default) and a guardrail check fails:
import { EvalGuardViolationError } from "@evalguard/anthropic";
try {
const message = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "malicious prompt..." }],
});
} catch (error) {
if (error instanceof EvalGuardViolationError) {
console.log("Blocked:", error.violations);
// [{ type: "prompt_injection", severity: "critical", message: "..." }]
}
}
Set blockOnViolation: false to log violations without blocking:
const anthropic = wrapAnthropic(new Anthropic(), {
apiKey: "eg_...",
blockOnViolation: false,
onViolation: (result) => {
analytics.track("guardrail_violation", result);
},
});
License
MIT
