Deadpipe Node.js SDK
LLM observability that answers one question: "Is this prompt still behaving safely?"
Supports: OpenAI, Anthropic, Google AI (Gemini), Mistral, Cohere
Installation
npm install deadpipe
# or
yarn add deadpipe
# or
pnpm add deadpipe
Quick Start
Universal Wrapper (Recommended)
The wrap() function auto-detects your provider and wraps the client appropriately:
import { wrap } from 'deadpipe';
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
// Wrap once with optional app context
const openai = wrap(new OpenAI(), { app: 'my_app' });
const anthropic = wrap(new Anthropic(), { app: 'my_app' });
// Pass promptId per call - identifies which prompt/agent this is
const response = await openai.chat.completions.create({
promptId: 'checkout_agent', // Required for tracking
model: 'gpt-4',
messages: [{ role: 'user', content: 'Process refund for order 1938' }]
});
// Use different promptIds for different prompts
await anthropic.messages.create({
promptId: 'support_agent',
model: 'claude-3-opus',
messages: [{ role: 'user', content: 'Help me with my order' }]
});
Provider-Specific Wrappers
For explicit control, use provider-specific wrappers:
import { wrapOpenAI, wrapAnthropic, wrapGoogleAI, wrapMistral, wrapCohere } from 'deadpipe';
const openai = wrapOpenAI(new OpenAI(), { app: 'my_app' });
const anthropic = wrapAnthropic(new Anthropic(), { app: 'my_app' });
Manual Tracking
For streaming, custom logic, or unsupported clients:
import { track } from 'deadpipe';
import OpenAI from 'openai';
const client = new OpenAI();
const params = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'Process refund for order 1938' }]
};
const response = await track('checkout_agent', async (t) => {
const response = await client.chat.completions.create(params);
t.record(response, undefined, params); // Pass params to capture input
return response;
});
Provider Examples
OpenAI
import { wrap } from 'deadpipe';
import OpenAI from 'openai';
const client = wrap(new OpenAI(), { app: 'my_app' });
const response = await client.chat.completions.create({
promptId: 'openai_agent',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }]
});
Anthropic
import { wrap } from 'deadpipe';
import Anthropic from '@anthropic-ai/sdk';
const client = wrap(new Anthropic(), { app: 'my_app' });
const response = await client.messages.create({
promptId: 'claude_agent',
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [{ role: 'user', content: 'Hello, Claude!' }]
});
Google AI (Gemini)
import { wrap } from 'deadpipe';
import { GoogleGenerativeAI } from '@google/generative-ai';
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const model = wrap(genAI, { app: 'my_app' }).getGenerativeModel({ model: 'gemini-1.5-pro' });
const result = await model.generateContent('Hello, Gemini!', { promptId: 'gemini_agent' });
Mistral
import { wrap } from 'deadpipe';
import { Mistral } from '@mistralai/mistralai';
const client = wrap(new Mistral({ apiKey: process.env.MISTRAL_API_KEY }), { app: 'my_app' });
const response = await client.chat.complete({
promptId: 'mistral_agent',
model: 'mistral-large-latest',
messages: [{ role: 'user', content: 'Hello, Mistral!' }]
});
Cohere
import { wrap } from 'deadpipe';
import { CohereClient } from 'cohere-ai';
const client = wrap(new CohereClient({ token: process.env.COHERE_API_KEY }), { app: 'my_app' });
const response = await client.chat({
promptId: 'cohere_agent',
model: 'command-r-plus',
message: 'Hello, Cohere!'
});
What Gets Tracked
Every prompt execution captures:
| Category | Metrics |
|----------|---------|
| Identity | prompt_id, model, provider, app_id, environment, version |
| Timing | request_start, first_token_time, total_latency |
| Volume | input_tokens, output_tokens, estimated_cost_usd |
| Reliability | http_status, timeout, retry_count, error_message |
| Output Integrity | output_length, empty_output, truncated, json_parse_success, schema_validation_pass |
| Behavioral Fingerprint | output_hash, refusal_flag, tool_calls_count |
| Safety Proxies | enum_out_of_range, numeric_out_of_bounds |
| Change Context | prompt_hash, tool_schema_hash, system_prompt_hash |
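For orientation, here is a rough TypeScript sketch of a telemetry record built from the field names in the table above. The grouping and types are assumptions, not the SDK's actual definition, and the exact shape the SDK emits (or that getTelemetry() returns) may differ:
// Illustrative sketch only - field names come from the table above;
// types and optionality are assumptions, not the SDK's real schema.
interface PromptTelemetrySketch {
// Identity
prompt_id: string;
model: string;
provider: string;
app_id: string;
environment: string;
version: string;
// Timing
request_start: number; // epoch ms (assumed)
first_token_time?: number;
total_latency: number;
// Volume
input_tokens: number;
output_tokens: number;
estimated_cost_usd: number;
// Reliability
http_status?: number;
timeout: boolean;
retry_count: number;
error_message?: string;
// Output Integrity
output_length: number;
empty_output: boolean;
truncated: boolean;
json_parse_success?: boolean;
schema_validation_pass?: boolean;
// Behavioral Fingerprint
output_hash: string;
refusal_flag: boolean;
tool_calls_count: number;
// Safety Proxies
enum_out_of_range: boolean;
numeric_out_of_bounds: boolean;
// Change Context
prompt_hash: string;
tool_schema_hash: string;
system_prompt_hash: string;
}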
Advanced Usage
Schema Validation with Universal Wrapper (Zod)
Each prompt can have its own schema when using the universal wrapper:
import { wrap } from 'deadpipe';
import { z } from 'zod';
import OpenAI from 'openai';
const OrderResponse = z.object({
product_id: z.string(),
confidence: z.number().min(0).max(1),
category: z.enum(['electronics', 'clothing', 'food']),
});
const RefundResponse = z.object({
refund_id: z.string(),
amount: z.number(),
status: z.enum(['pending', 'approved', 'rejected']),
});
const zodValidator = (schema: z.ZodSchema) => ({
validate: (data: unknown) => {
const result = schema.safeParse(data);
return {
success: result.success,
data: result.success ? result.data : undefined,
errors: result.success ? undefined : result.error.errors.map(e => e.message)
};
}
});
// Wrap once
const client = wrap(new OpenAI(), { app: 'my_ecommerce' });
// Different schemas for different prompts
const order = await client.chat.completions.create({
promptId: 'recommender',
schema: zodValidator(OrderResponse), // Auto-validates, tracks pass rates
model: 'gpt-4',
messages: [{ role: 'user', content: 'Recommend a product' }],
response_format: { type: 'json_object' }
});
const refund = await client.chat.completions.create({
promptId: 'refund_agent',
schema: zodValidator(RefundResponse), // Different schema for this prompt
model: 'gpt-4',
messages: [{ role: 'user', content: 'Process refund for order 123' }],
response_format: { type: 'json_object' }
});
Schema Validation with track() (Zod)
import { track } from 'deadpipe';
import { z } from 'zod';
import OpenAI from 'openai';
const RefundResponse = z.object({
order_id: z.string(),
amount: z.number(),
status: z.string(),
});
const client = new OpenAI();
const result = await track('checkout_agent', async (t) => {
const response = await client.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Process refund for order 1938' }],
response_format: { type: 'json_object' }
});
return t.record(response);
}, {
schema: {
validate: (data) => {
const result = RefundResponse.safeParse(data);
return {
success: result.success,
data: result.success ? result.data : undefined,
errors: result.success ? undefined : result.error.errors.map(e => e.message)
};
}
}
});
Track Streaming Responses
const params = {
model: 'gpt-4',
messages: [{ role: 'user', content: 'Tell me a story' }],
stream: true,
};
const response = await track('streaming_agent', async (t) => {
const stream = await client.chat.completions.create(params);
let fullContent = '';
for await (const chunk of stream) {
const delta = chunk.choices[0]?.delta?.content;
if (delta) {
if (!fullContent) t.markFirstToken(); // Call once, on the first token
fullContent += delta;
}
}
t.record({
model: 'gpt-4',
choices: [{ message: { content: fullContent } }],
usage: { prompt_tokens: 10, completion_tokens: 100, total_tokens: 110 } // Placeholder counts; substitute real usage if your stream reports it
}, undefined, params);
return fullContent;
});
Track Retries
const response = await track('checkout_agent', async (t) => {
for (let attempt = 0; attempt < 3; attempt++) {
try {
const response = await client.chat.completions.create({...});
t.record(response);
return response;
} catch (error) {
t.markRetry();
if (attempt === 2) throw error;
}
}
});
Environment-Based Configuration
// Uses these environment variables:
// DEADPIPE_API_KEY - Your API key
// DEADPIPE_APP_ID - Application identifier
// DEADPIPE_ENVIRONMENT - e.g., 'production', 'staging'
// DEADPIPE_VERSION or GIT_COMMIT - Version/commit hash
import { wrap } from 'deadpipe';
// API key auto-loaded from DEADPIPE_API_KEY
const client = wrap(new OpenAI(), { app: 'my_app' });
// Then pass promptId per call
await client.chat.completions.create({ promptId: 'checkout', model: 'gpt-4', messages: [...] });
Full Options
const client = wrap(new OpenAI(), {
// App identifier (groups all prompts for your app)
app: 'my-app',
// Authentication
apiKey: 'dp_...',
baseUrl: 'https://www.deadpipe.com/api/v1',
timeout: 10000,
// Identity
environment: 'production',
version: '1.2.3',
});
// Then pass promptId per call (with optional schema per call)
await client.chat.completions.create({
promptId: 'checkout_agent',
schema: zodValidator(RefundSchema), // Optional: per-call schema (zodValidator as defined under "Schema Validation" above)
enumFields: { status: ['pending', 'approved', 'rejected'] }, // Optional: per-call enums
numericBounds: { amount: { min: 0, max: 10000 } }, // Optional: per-call bounds
model: 'gpt-4',
messages: [...]
});
Framework Examples
Next.js API Routes
import { wrap } from 'deadpipe';
import OpenAI from 'openai';
const client = wrap(new OpenAI(), { app: 'my_nextjs_app' });
export async function POST(request: Request) {
const { prompt, promptId } = await request.json();
const response = await client.chat.completions.create({
promptId: promptId || 'api_handler',
model: 'gpt-4',
messages: [{ role: 'user', content: prompt }]
});
return Response.json({ result: response.choices[0].message.content });
}
Express.js
import express from 'express';
import { wrap } from 'deadpipe';
import OpenAI from 'openai';
const app = express();
const client = wrap(new OpenAI(), { app: 'my_express_app' });
app.post('/generate', async (req, res) => {
const response = await client.chat.completions.create({
promptId: req.body.promptId || 'express_endpoint',
model: 'gpt-4',
messages: req.body.messages
});
res.json(response);
});
API Reference
wrap(client, options?)
Universal wrapper that auto-detects provider.
client: Any supported LLM client
options.app: Optional app identifier (can also use the DEADPIPE_APP_ID env var)
Returns: Wrapped client with identical API. Each call must include promptId.
Provider-Specific Wrappers
wrapOpenAI(client, options?) - OpenAI client
wrapAnthropic(client, options?) - Anthropic client
wrapGoogleAI(client, options?) - Google AI client
wrapMistral(client, options?) - Mistral client
wrapCohere(client, options?) - Cohere client
All wrappers accept optional { app } and require promptId per call.
track(promptId, fn, options?)
Track a prompt execution with full telemetry.
promptId: Unique identifier for this prompt
fn: Async function that receives a PromptTracker
options: Configuration options
Returns: Promise<T> (result of fn)
PromptTracker
The tracker object passed to your function:
record(response, parsedOutput?, input?) - Record the LLM response
markFirstToken() - Mark when the first token is received (streaming)
markRetry() - Mark a retry attempt
recordError(error) - Record an error
getTelemetry() - Get the telemetry object
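A minimal sketch showing recordError() and getTelemetry() alongside record(). The error-handling pattern here (record the failure, then rethrow) is one reasonable choice rather than a requirement, and the console.log is only for local inspection; it assumes an OpenAI client as in the earlier examples:
import { track } from 'deadpipe';
import OpenAI from 'openai';
const client = new OpenAI();
await track('checkout_agent', async (t) => {
try {
const response = await client.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Process refund for order 1938' }]
});
t.record(response);
console.log(t.getTelemetry()); // Inspect what will be reported
return response;
} catch (error) {
t.recordError(error); // Capture the failure, then rethrow
throw error;
}
});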
Utility Functions
estimateCost(model, inputTokens, outputTokens) - Estimate USD cost
detectRefusal(text) - Detect if a response is a refusal
detectProvider(response) - Detect provider from a response
detectClientProvider(client) - Detect provider from a client
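A quick sketch of the utilities, assuming they are exported from the package root like wrap() and track(); the exact return types are not documented in this README:
import { estimateCost, detectRefusal } from 'deadpipe';
const cost = estimateCost('gpt-4o', 1200, 350); // Estimated USD for 1,200 input / 350 output tokens
const refused = detectRefusal("I'm sorry, but I can't help with that.");
console.log({ cost, refused });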
Supported Models & Pricing
| Provider | Models |
|----------|--------|
| OpenAI | gpt-4, gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, o1-pro |
| Anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3.5-sonnet, claude-sonnet-4, claude-opus-4 |
| Google AI | gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash, gemini-2.0-pro |
| Mistral | mistral-large, mistral-medium, mistral-small, mistral-nemo, codestral, pixtral |
| Cohere | command-r-plus, command-r, command, command-light |
Zero Dependencies
This SDK has zero runtime dependencies. It uses the native fetch API (Node 18+).
TypeScript
Full TypeScript support with type definitions included.
License
Deadpipe SDK License - see LICENSE file.
