@bstockwelldev/prompt-guardrails-core
v1.1.0
Stateless server-side prompt guardrails: policy, input/output validation, structured invocation, and privacy-safe telemetry
Scope
In scope: Versioned prompt policy, input validation, structured prompt builders, output validation, JSON repair, telemetry contracts, refusal typing.
Out of scope: React, Next.js routes, provider SDKs, domain logic.
Installation
npm install @bstockwelldev/prompt-guardrails-core
For Vercel/CI: use the npm package (^1.1.0). For local development before the first publish, consumers may use file:../prompt-guardrails-core; after publishing, switch to ^1.1.0.
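The two dependency forms above go in a consumer's package.json; the file: path below is illustrative and depends on where the package is checked out locally.

```json
{
  "dependencies": {
    "@bstockwelldev/prompt-guardrails-core": "file:../prompt-guardrails-core"
  }
}
```

After publishing, replace the file: specifier with "^1.1.0" so CI and Vercel resolve the package from npm.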
Public API
- createPromptRuntime(config) – Create a guarded prompt runtime
- withPromptGuardrails(config, handler) – Wrap route handlers / server actions
- buildStructuredPrompt({ system, userData, context }) – Instruction/data separation
- validatePromptInput(input, policy) – Input guardrails
- validatePromptOutput(output, schema, policy) – Output validation
- repairJsonOutput(raw, schemaName) – JSON repair orchestration
- createTelemetryEvent(event) – Privacy-safe telemetry
- PromptGuardrailsError – Machine-readable refusal reasons
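The instruction/data separation performed by buildStructuredPrompt can be illustrated with a minimal self-contained sketch. The message shape and delimiter below are assumptions for illustration, not the package's actual return type:

```typescript
// Illustrative sketch only; the real buildStructuredPrompt may differ.
type ChatMessage = { role: 'system' | 'user'; content: string };

interface StructuredPromptArgs {
  system: string;     // trusted instructions, authored by the host app
  userData: string;   // untrusted input, never merged into instructions
  context?: string;   // optional trusted context (e.g. retrieved docs)
}

function buildStructuredPromptSketch(args: StructuredPromptArgs): ChatMessage[] {
  const messages: ChatMessage[] = [{ role: 'system', content: args.system }];
  if (args.context) {
    messages.push({ role: 'system', content: `Context:\n${args.context}` });
  }
  // Untrusted data is delimited and kept in its own user message so that
  // instructions embedded in the input cannot override the system prompt.
  messages.push({
    role: 'user',
    content: `<user_data>\n${args.userData}\n</user_data>`,
  });
  return messages;
}
```

The key design point is that user data flows through as data, never concatenated into the instruction text.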
Usage
import {
  createPromptRuntime,
  type PromptPolicy,
} from '@bstockwelldev/prompt-guardrails-core';

const getPolicy: (id: string, version: string) => PromptPolicy | null = (id, version) => {
  if (id === 'support_chatbot' && version === '1.3.0') {
    return {
      id,
      version,
      key: `${id}@${version}`,
      system: 'You are a courteous support agent.',
      constraints: { maxTokens: 800, allowUrls: false },
    };
  }
  return null;
};

const runtime = createPromptRuntime({
  getPolicy,
  invokeGateway: async ({ messages, policy }) => {
    // The host app performs the actual LLM call here
    return await yourGateway.generateText(messages);
  },
});

const result = await runtime.invoke('support_chatbot@1.3.0', userInput, {
  requestId: 'req-123',
  tenantId: 'tenant-1',
});

Refusal Reasons
- injection_detected – Prompt injection pattern detected
- url_blocked – URLs not allowed by policy
- policy_disabled – Prompt disabled or unknown
- schema_invalid – Output failed schema validation
- output_leakage_detected – Output contains disallowed content
- input_too_long – Input exceeds max length
- control_characters_blocked – Disallowed control characters in input
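A consumer typically branches on the machine-readable reason when a refusal is thrown. The sketch below assumes PromptGuardrailsError exposes a reason field matching the codes above; the class is redefined locally so the example is self-contained:

```typescript
// Illustrative sketch: assumes a `reason` field on the error class.
type RefusalReason =
  | 'injection_detected'
  | 'url_blocked'
  | 'policy_disabled'
  | 'schema_invalid'
  | 'output_leakage_detected'
  | 'input_too_long'
  | 'control_characters_blocked';

class PromptGuardrailsErrorSketch extends Error {
  constructor(public reason: RefusalReason) {
    super(`Prompt refused: ${reason}`);
  }
}

// Map refusal reasons to user-facing copy without leaking detection detail.
function refusalMessage(reason: RefusalReason): string {
  switch (reason) {
    case 'input_too_long':
      return 'Your message is too long. Please shorten it and try again.';
    case 'url_blocked':
      return 'Links are not allowed in this conversation.';
    default:
      return 'Your request could not be processed.';
  }
}

function handleInvokeError(err: unknown): string {
  if (err instanceof PromptGuardrailsErrorSketch) {
    return refusalMessage(err.reason);
  }
  throw err; // not a guardrails refusal: rethrow
}
```

Keeping the user-facing message generic for security-sensitive reasons (like injection_detected) avoids teaching attackers which patterns were detected.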
Repository
- Source: https://github.com/bstockwelldev/prompt-guardrails-core
- npm: https://www.npmjs.com/package/@bstockwelldev/prompt-guardrails-core
