# promptworx-sdk
Fire-and-forget LLM security telemetry SDK. Log every AI interaction to Sentinel for real-time prompt injection detection, PII monitoring, and data exfiltration alerting.
## Install
```sh
npm install promptworx-sdk
# or
pnpm add promptworx-sdk
# or
yarn add promptworx-sdk
```

## Quick Start
```ts
import { Telemetry } from "promptworx-sdk";

const sentinel = new Telemetry({
  apiKey: process.env.SENTINEL_API_KEY!,
  projectId: process.env.SENTINEL_PROJECT_ID!,
  endpoint: "https://promptworx.exception.digital/api",
});

// After every LLM call
sentinel.logEvent({
  userMessage: userInput,
  modelResponse: llmOutput,
  userIdentifier: req.user?.id,
  modelName: "gpt-4o",
  latencyMs: Date.now() - start,
});

// Flush remaining events on shutdown
process.on("SIGTERM", async () => {
  await sentinel.flush();
  sentinel.destroy();
});
```

## Configuration
```ts
const sentinel = new Telemetry({
  apiKey: "sk_live_...",   // Required — from Sentinel dashboard → Settings
  projectId: "uuid",       // Required — from Sentinel dashboard → Settings
  endpoint: "https://...", // Default: http://localhost:3000/api
  flushIntervalMs: 5000,   // Default: 5000 ms — how often to batch-send
  maxBatchSize: 50,        // Default: 50 — send early when the buffer is full
});
```

## Logging Events
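In practice the timing and token fields are usually captured by a small wrapper around the model call rather than assembled by hand at every call site. Below is a minimal sketch; the `loggedChat` helper, the `callModel` signature, and its return shape are hypothetical illustrations, not part of the SDK:

```typescript
// Hypothetical wrapper: times the model call and forwards the result to
// sentinel.logEvent with latencyMs and tokenCounts filled in.
type ChatResult = { text: string; promptTokens: number; completionTokens: number };

async function loggedChat(
  sentinel: { logEvent: (e: Record<string, unknown>) => void },
  userMessage: string,
  callModel: (msg: string) => Promise<ChatResult>, // your provider's client goes here
): Promise<string> {
  const start = Date.now();
  const result = await callModel(userMessage);
  sentinel.logEvent({
    userMessage,
    modelResponse: result.text,
    latencyMs: Date.now() - start,
    tokenCounts: {
      prompt: result.promptTokens,
      completion: result.completionTokens,
      total: result.promptTokens + result.completionTokens,
    },
  });
  return result.text;
}
```

The wrapper keeps instrumentation in one place, so individual handlers only ever call `loggedChat`.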
```ts
sentinel.logEvent({
  // Required
  userMessage: "What is the weather in Paris?",

  // Optional — all fields improve detection quality
  userIdentifier: "user-123", // groups incidents per user
  modelName: "gpt-4o-mini",
  modelResponse: "The weather is...",
  latencyMs: 342,
  tokenCounts: {
    prompt: 120,
    completion: 80,
    total: 200,
  },

  // Pass retrieved RAG chunks for context-aware detection
  retrievedContext: [
    { content: "...", source: "docs/policy.md" },
  ],

  // Attach your own pre-computed verdicts (merged with server-side results)
  verdicts: {
    injection_flag: false,
    pii_flag: false,
  },

  correlationId: req.headers["x-request-id"], // links to APM traces
  timestamp: new Date().toISOString(),        // default: server ingestion time
});
```

## How It Works
- Events are buffered in memory and sent in batches automatically.
- The SDK never throws into your application — all errors are swallowed internally.
- On the server, Sentinel runs built-in detection rules (prompt injection, PII, data exfiltration) and any custom rules you configure, then groups flagged events into incidents.
- If you configure webhooks, Sentinel fires a signed HTTP POST to your endpoint whenever rules trigger.
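Signed webhooks are typically verified with a constant-time HMAC comparison over the raw request body. The sketch below assumes a hex-encoded HMAC-SHA256 signature; the actual header name and signing scheme are not specified here, so confirm them in your Sentinel dashboard before relying on this:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch: check a webhook body against a shared secret.
// Assumption: the signature is a hex-encoded HMAC-SHA256 of the raw body.
function verifyWebhookSignature(
  rawBody: string,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(received, expected);
}
```

Always verify against the raw body bytes, before any JSON parsing, since re-serialized JSON may not match what was signed.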
## Manual Flush
```ts
// Force-send all buffered events now (e.g. before a cold-start function returns)
await sentinel.flush();
```

## Cleanup
```ts
// Stop the background timer and flush remaining events
sentinel.destroy();
```

## TypeScript
The SDK ships with full TypeScript definitions. All event fields are typed:
```ts
import type { LogEventInput, TelemetryConfig, Verdict, TokenCounts } from "promptworx-sdk";
```

## License
MIT
