langpatrol
v0.1.7
LangPatrol
Developer SDK for pre-inference prompt validation and linting — think of it as ESLint or Prettier, but for prompts sent to large language models.
Installation
npm install langpatrol

Quick Start
import { analyzePrompt } from 'langpatrol';
const report = await analyzePrompt({
prompt: 'Continue the list.',
messages: [{ role: 'user', content: 'Continue the list.' }],
model: 'gpt-5'
});
if (report.issues.length) {
console.log('Issues found:', report.issues);
}

API
analyzePrompt(input: AnalyzeInput): Promise<Report>
Analyzes a prompt or message history and returns a report with issues and suggestions.
Input:
- `prompt?: string` - Single prompt string
- `messages?: Msg[]` - Chat message history
- `schema?: JSONSchema7` - Optional JSON schema
- `model?: string` - Model name for token estimation
- `templateDialect?: 'handlebars' | 'jinja' | 'mustache' | 'ejs'` - Template dialect
- `attachments?: Attachment[]` - File attachments metadata
- `options?:` - Additional options:
  - `maxCostUSD?: number`
  - `maxInputTokens?: number`
  - `referenceHeads?: string[]`
  - `apiKey?: string` - API key for cloud API
  - `apiBaseUrl?: string` - Base URL for cloud API (default: `'http://localhost:3000'`)
  - `check_context?: { domains: string[] }` - Domain context checking (cloud-only, requires `apiKey` and an AI Analytics subscription); `domains` is a list of domain keywords/topics to validate the prompt against
Output:
- `issues: Issue[]` - Detected issues
- `suggestions: Suggestion[]` - Suggested fixes
- `cost?: { estInputTokens: number; estUSD?: number }` - Cost estimates
- `meta?: { latencyMs: number; modelHint?: string }` - Metadata
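One common pattern is to gate the model call on the report. The sketch below uses local interfaces mirroring the report fields described above (the SDK's own exported types may differ in detail), and the `shouldBlock` helper and its threshold are illustrations, not part of the SDK:

```typescript
// Minimal local mirror of the Report shape described above.
interface Issue { code: string; message?: string }
interface Report {
  issues: Issue[];
  suggestions: unknown[];
  cost?: { estInputTokens: number; estUSD?: number };
  meta?: { latencyMs: number; modelHint?: string };
}

// Decide whether a report should block the model call: any detected issue,
// or an estimated cost above a caller-supplied budget.
export function shouldBlock(report: Report, maxUSD = 0.05): boolean {
  if (report.issues.length > 0) return true;
  const est = report.cost?.estUSD;
  return est !== undefined && est > maxUSD;
}

// Usage (with analyzePrompt from the Quick Start):
//   const report = await analyzePrompt({ prompt, model: 'gpt-5' });
//   if (shouldBlock(report)) { /* surface report.suggestions to the caller */ }
```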
optimizePrompt(input: OptimizeInput): Promise<OptimizeResponse>
Optimizes (compresses) a user prompt to help reduce token usage. This is a cloud-only feature and requires an API key.
Input:
- `prompt: string` - The prompt text to optimize
- `model?: string` - Optional target model name
- `options?:` - Cloud options:
  - `apiKey: string` - Required: cloud API key
  - `apiBaseUrl?: string` - Optional: base URL for cloud API (default: `'http://localhost:3000'`)
Output:
- `optimized_prompt: string` - Optimized prompt text
- `ratio: string` - Compression ratio (e.g., "33.00%")
- `origin_tokens: number` - Original token count
- `optimized_tokens: number` - Optimized token count
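Because `ratio` is returned as a percentage string, a small helper (an illustration, not part of the SDK) can convert it into a number for programmatic thresholds:

```typescript
// Parse a percentage string like "33.00%" into a fraction (0.33).
// Returns undefined for malformed input rather than throwing.
export function parseRatio(ratio: string): number | undefined {
  const m = /^(\d+(?:\.\d+)?)%$/.exec(ratio.trim());
  return m ? Number(m[1]) / 100 : undefined;
}
```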
Example:
import { optimizePrompt } from 'langpatrol';
const optimized = await optimizePrompt({
prompt: 'Write a detailed project proposal for building a new mobile app...',
model: 'gpt-4',
options: {
apiKey: process.env.LANGPATROL_API_KEY!,
apiBaseUrl: 'https://api.langpatrol.com' // optional override
}
});
console.log('Optimized prompt:', optimized.optimized_prompt);
console.log('Ratio:', optimized.ratio);
console.log('Tokens:', optimized.origin_tokens, '->', optimized.optimized_tokens);

Issue Codes
- `MISSING_PLACEHOLDER` - Unresolved template variables
- `MISSING_REFERENCE` - Deictic references without context
- `CONFLICTING_INSTRUCTION` - Contradictory directives
- `SCHEMA_RISK` - JSON schema mismatches
- `INVALID_SCHEMA` - Invalid JSON Schema structure
- `TOKEN_OVERAGE` - Token limits exceeded
- `OUT_OF_CONTEXT` - Prompt doesn't match the specified domain activity (cloud-only, requires the `check_context` option)
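A hedged sketch of routing on these codes — the `Issue` interface here is a local assumption mirroring the report fields above, and the policy itself is an illustration, not part of the SDK:

```typescript
interface Issue { code: string; message?: string }

// Map each issue code to a coarse action; unknown codes default to 'warn'.
export function actionFor(issue: Issue): 'block' | 'fix' | 'warn' {
  switch (issue.code) {
    case 'TOKEN_OVERAGE':
    case 'OUT_OF_CONTEXT':
      return 'block';  // don't send the prompt as-is
    case 'MISSING_PLACEHOLDER':
    case 'INVALID_SCHEMA':
      return 'fix';    // fixable before sending
    default:
      return 'warn';   // log and proceed
  }
}
```

A stricter deployment might treat `CONFLICTING_INSTRUCTION` or `SCHEMA_RISK` as blocking as well; the mapping is a policy choice, not something the SDK prescribes.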
Examples
Vercel AI SDK
import { analyzePrompt } from 'langpatrol';
export async function guardedCall(messages, model) {
const report = await analyzePrompt({ messages, model });
if (report.issues.some(i => i.code === 'TOKEN_OVERAGE')) {
// Summarize or trim before proceeding, e.g. keep only the most recent turns:
messages = messages.slice(-10);
}
// Then call your model with the (possibly trimmed) messages
}

Domain Context Checking (Cloud-only)
Validate that prompts match your domain activity using the `check_context` option. This feature requires an API key and an AI Analytics subscription.
import { analyzePrompt } from 'langpatrol';
const report = await analyzePrompt({
prompt: 'Generate a marketing email for our SaaS product',
model: 'gpt-4',
options: {
apiKey: 'your-api-key',
check_context: {
domains: ['saas', 'marketing', 'email', 'software'] // Domain keywords/topics
}
}
});
if (report.issues.find(i => i.code === 'OUT_OF_CONTEXT')) {
console.warn('Prompt is out of context for your domain');
// Handle out-of-context prompt
}

Note: The `check_context` option:
- Requires an `apiKey` to be provided
- Automatically routes to the `/api/v1/ai-analytics` endpoint
- Returns a high-severity `OUT_OF_CONTEXT` error when the prompt doesn't match the specified domains
- Requires an AI Analytics subscription on the cloud API
License
MIT License
