# @ctrlpnl/node

v0.2.3
Official Node.js SDK for Ctrlpnl - AI Pipeline Protection.
Protect your AI applications with PII redaction, secret removal, prompt injection blocking, and content filtering.
## Installation

```bash
npm install @ctrlpnl/node
# or
pnpm add @ctrlpnl/node
# or
yarn add @ctrlpnl/node
```

## Quick Start
```ts
import Ctrlpnl from "@ctrlpnl/node";

const ctrlpnl = new Ctrlpnl({
  apiToken: process.env.CTRLPNL_API_TOKEN,
});

// Before calling your AI
const input = await ctrlpnl.transform(userPrompt);
if (input.blocked) {
  return "Sorry, I can't help with that.";
}

// Call your AI with the safe prompt
const aiResponse = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: input.prompt }],
});

// After getting the response
const output = await ctrlpnl.complete(
  input.traceId,
  aiResponse.choices[0].message.content
);

return output.response;
```

## Features
- PII Redaction: Automatically detect and redact emails, phone numbers, SSNs, and credit card numbers
- Secret Detection: Remove API keys, tokens, and passwords before they reach your AI
- Prompt Injection Blocking: Detect and block prompt injection attempts
- Content Filtering: Custom rules to block or modify unwanted content
- Policy Inheritance: Global policies that apply across all your pipelines
- Full Audit Trail: Every transformation is traced for compliance
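To make the redaction idea concrete, here is a toy sketch — not the SDK's implementation (redaction actually runs in your hosted pipeline) — of the kind of substitution the PII step performs:

```javascript
// Toy sketch only: approximates what a "Redact PII" step does for emails.
// The real pipeline step is configurable and also covers phone numbers,
// SSNs, and credit card numbers.
function redactEmails(text) {
  return text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[REDACTED]");
}

console.log(redactEmails("Reach me at jane.doe@example.com"));
// → "Reach me at [REDACTED]"
```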
## API Reference

### `new Ctrlpnl(config)`

Create a new Ctrlpnl client.

```ts
const ctrlpnl = new Ctrlpnl({
  // Required: Your API token from https://ctrlpnl.ai/settings/api-keys
  apiToken: "cp_live_...",

  // Optional: Request timeout in ms (default: 30000)
  timeout: 30000,

  // Optional: Default policy ID for all requests
  defaultPolicyId: "my-policy",

  // Optional: Enable debug logging
  debug: false,
});
```

### `ctrlpnl.transform(prompt, options?)`
Transform a prompt through your input pipeline before sending to AI.
```ts
const result = await ctrlpnl.transform("My email is [email protected]", {
  // Optional: Override the default policy
  policyId: "strict-policy",

  // Optional: Context for policy evaluation
  context: {
    userId: "user_123",
    sessionId: "session_456",
    environment: "production",
    metadata: { role: "admin" },
  },
});

console.log(result.prompt); // "My email is [REDACTED]"
console.log(result.blocked); // false
console.log(result.traceId); // "abc-123-..."
console.log(result.appliedSteps); // [{ stepName: "Redact PII", applied: true, ... }]
```

### `ctrlpnl.complete(traceId, response, options?)`
Complete a trace by processing the AI response through your output pipeline.
```ts
const output = await ctrlpnl.complete(input.traceId, aiResponse, {
  // Optional: AI metadata for observability
  aiMetadata: {
    model: "gpt-4",
    provider: "openai",
    tokenCount: 150,
    latencyMs: 1200,
  },
});

console.log(output.response); // Safe response to return to user
console.log(output.blocked); // false
console.log(output.appliedSteps); // Output pipeline steps that ran
```

## Utilities
### `wrap()`
Wrap an AI call with full pipeline protection in one function.
```ts
import Ctrlpnl, { wrap } from "@ctrlpnl/node";

const ctrlpnl = new Ctrlpnl();

const result = await wrap(
  ctrlpnl,
  userPrompt,
  async (safePrompt) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: safePrompt }],
    });
    return response.choices[0].message.content;
  },
  {
    context: { userId: "user_123" },
    onBlocked: (info) => {
      console.log(`Blocked at ${info.phase}: ${info.reason}`);
      return "Sorry, I can't help with that.";
    },
  }
);

console.log(result.value); // The safe AI response (or onBlocked return value)
console.log(result.blocked); // Whether it was blocked
console.log(result.blockedAt); // "input" | "output" | undefined
```

### `prepareStream()`
Prepare for streaming AI responses.
```ts
import Ctrlpnl, { prepareStream } from "@ctrlpnl/node";

const ctrlpnl = new Ctrlpnl();

const { safePrompt, traceId, complete } = await prepareStream(
  ctrlpnl,
  userPrompt
);

// Start streaming
const stream = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: safePrompt }],
  stream: true,
});

// Collect and stream to user
let fullResponse = "";
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  fullResponse += content;
  process.stdout.write(content);
}

// Complete the trace
const output = await complete(fullResponse, {
  model: "gpt-4",
  provider: "openai",
});

// Check if output was blocked
if (output.blocked) {
  console.log("Response was filtered:", output.blockReason);
}
```

### `batchTransform()`
Transform multiple prompts in parallel.
```ts
import Ctrlpnl, { batchTransform } from "@ctrlpnl/node";

const ctrlpnl = new Ctrlpnl();

const { results, errors } = await batchTransform(
  ctrlpnl,
  ["prompt 1", "prompt 2", "prompt 3"],
  { context: { userId: "batch_user" } },
  5 // concurrency
);

console.log(`Transformed ${results.length} prompts`);
console.log(`Failed: ${errors.length}`);
```

## Error Handling
```ts
import Ctrlpnl, {
  BlockedError,
  AuthenticationError,
  RateLimitError,
} from "@ctrlpnl/node";

try {
  const result = await ctrlpnl.transform(prompt);
} catch (error) {
  if (error instanceof BlockedError) {
    console.log("Request blocked:", error.message);
    console.log("Trace ID:", error.traceId);
    console.log("Blocking step:", error.blockingStep);
  } else if (error instanceof AuthenticationError) {
    console.log("Invalid API token");
  } else if (error instanceof RateLimitError) {
    console.log("Rate limited, retry after:", error.retryAfter);
  }
}
```

## Environment Variables
The SDK reads these environment variables:
- `CTRLPNL_API_TOKEN` - Your API token (can also be passed to the constructor)
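The precedence implied above — an explicit `apiToken` in the constructor wins over the environment variable — can be sketched as follows; `resolveToken` is a hypothetical helper for illustration, not an SDK export:

```javascript
// Hypothetical helper illustrating token precedence: explicit config
// first, then CTRLPNL_API_TOKEN from the environment, else null.
function resolveToken(explicitToken) {
  return explicitToken ?? process.env.CTRLPNL_API_TOKEN ?? null;
}
```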
## TypeScript

Full TypeScript support with exported types:

```ts
import type {
  CtrlpnlConfig,
  TransformResult,
  CompleteResult,
  AppliedStepInfo,
  PipelineContext,
} from "@ctrlpnl/node";
```

## License

MIT
