# limits-js

`@limits/js` v0.0.11
The official TypeScript SDK for Limits — the policy engine for AI agents and applications. Enforce business rules, validate LLM outputs, and run safety guardrails with a single API call.
## Table of Contents

- Installation
- Quick Start
- Core Concepts
- Conditions Mode — `check()`
- Instructions Mode — `evaluate()`
- Guardrails Mode — `guard()`
- Escalations (Human-in-the-Loop)
- Result Shape
- Error Handling
- TypeScript Types
- Configuration
- Requirements
- Development
- License
## Installation

```bash
npm install @limits/js
# or
yarn add @limits/js
# or
pnpm add @limits/js
```

## Quick Start
```ts
import { Limits } from '@limits/js';

// 1. Initialize the client
const limits = new Limits({
  apiKey: process.env.LIMITS_API_KEY!, // must start with sk_
});

// 2. Evaluate a policy
const decision = await limits.check('transaction-amount-limit', {
  amount: 5000,
  currency: 'USD',
  customer_id: 'cus_abc123',
});

// 3. Act on the result
if (decision.isAllowed) {
  // Proceed with the transaction
  processPayment();
} else if (decision.isEscalated) {
  // Requires human review — notify your team
  notifyReviewTeam(decision.data.reason);
} else if (decision.isBlocked) {
  // Blocked by policy — reject the request
  throw new Error(decision.data.reason);
}
```

## Core Concepts
### Policy Key vs Tag
Every evaluation method accepts a single identifier string. It can be either:
| Type | Format | Example | Behavior |
|------|--------|---------|----------|
| Policy Key | Plain string | 'transaction-amount-limit' | Evaluates one specific policy |
| Tag | Prefixed with # | '#payments' | Evaluates all policies with that tag |
When using a tag, multiple policies are evaluated and the strictest result wins:
```
BLOCK > ESCALATE > ALLOW
```

If any policy blocks, the decision is BLOCK. Otherwise, if any escalates, the decision is ESCALATE. Otherwise, ALLOW.
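This strictest-wins merge can be expressed as a simple severity ordering. A minimal sketch of the rule (an illustration, not the SDK's internal code — `PolicyAction` matches the type the SDK exports):

```typescript
type PolicyAction = 'ALLOW' | 'BLOCK' | 'ESCALATE';

// Higher severity number wins when merging per-policy results.
const severity: Record<PolicyAction, number> = { ALLOW: 0, ESCALATE: 1, BLOCK: 2 };

// Combine the actions from every policy under a tag into one decision.
function strictestAction(actions: PolicyAction[]): PolicyAction {
  return actions.reduce(
    (worst, a) => (severity[a] > severity[worst] ? a : worst),
    'ALLOW' as PolicyAction
  );
}

// strictestAction(['ALLOW', 'ESCALATE', 'ALLOW']) → 'ESCALATE'
// strictestAction(['ESCALATE', 'BLOCK', 'ALLOW']) → 'BLOCK'
```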
### Three Evaluation Modes
Limits supports three distinct evaluation modes, each designed for a specific type of policy:
| Mode | Method | Use Case | Input |
|------|--------|----------|-------|
| Conditions | check() | Business rules on structured data | Object (key-value pairs) |
| Instructions | evaluate() | LLM output validation against instructions | Prompt string + response |
| Guardrails | guard() | Safety checks on text content | String (text to scan) |
## Conditions Mode — `check()`
Use check() to evaluate business rules against structured data. This is the most common mode for enforcing conditions like transaction limits, access control, and input validation.
```ts
check(policyKeyOrTag: string, input: Record<string, unknown>): Promise<PolicyEvaluationResult>
```

### Single Policy
Evaluate a single policy by its key:
```ts
// Enforce a maximum transaction amount
const decision = await limits.check('transaction-amount-limit', {
  amount: 15000,
  currency: 'USD',
  customer_id: 'cus_abc123',
});

if (decision.isBlocked) {
  console.error('Transaction blocked:', decision.data.reason);
  // => "Transaction blocked: Amount exceeds the $10,000 single-transaction limit"
}
```

### Multiple Policies with a Tag
Evaluate all policies tagged with a given label. Useful when you want a single call to enforce multiple rules at once:
```ts
// Evaluate ALL policies tagged #payments (e.g. amount limit, velocity check, fraud score)
const decision = await limits.check('#payments', {
  amount: 500,
  currency: 'USD',
  customer_id: 'cus_abc123',
  merchant_category: 'electronics',
  ip_country: 'US',
});

if (decision.isBlocked) {
  console.error('Payment rejected:', decision.data.reason);
} else if (decision.isEscalated) {
  console.warn('Payment needs review:', decision.data.reason);
} else {
  console.log('Payment approved');
  chargeCustomer();
}
```

### Real-World Example: E-Commerce Checkout
```ts
async function processCheckout(order: Order) {
  const decision = await limits.check('#checkout', {
    order_total: order.total,
    currency: order.currency,
    customer_id: order.customerId,
    items_count: order.items.length,
    shipping_country: order.shippingAddress.country,
  });

  switch (decision.data.action) {
    case 'ALLOW':
      return await submitOrder(order);
    case 'ESCALATE':
      await flagForReview(order, decision.data.reason);
      return { status: 'pending_review', message: decision.data.reason };
    case 'BLOCK':
      return { status: 'rejected', message: decision.data.reason };
  }
}
```

## Instructions Mode — `evaluate()`
Use evaluate() to validate an LLM response against a set of instructions (your policy). This is designed for AI applications where you need to ensure the model's output complies with your business rules before returning it to the user.
```ts
evaluate(policyKeyOrTag: string, prompt: string, response: string | object): Promise<PolicyEvaluationResult>
```

### Single Policy
```ts
const userPrompt = 'Can I get a refund for my order from 3 months ago?';
const llmResponse = 'Sure! I have processed a full refund of $299.99 to your card.';

const decision = await limits.evaluate('refund-policy', userPrompt, llmResponse);

if (decision.isBlocked) {
  // The LLM response violates refund policy — don't send it to the user
  console.error('Response violates policy:', decision.data.reason);
  // => "Response violates policy: Refunds are only available within 30 days of purchase"
}
```

### Multiple Policies with a Tag
```ts
// Evaluate ALL instruction policies tagged #customer-service
const decision = await limits.evaluate(
  '#customer-service',
  userPrompt,
  llmResponse
);

if (decision.isBlocked) {
  // Replace the LLM response with a safe fallback
  return 'I apologize, but I need to check with a team member before proceeding. One moment please.';
}
```

### Real-World Example: LLM Middleware
Use evaluate() as middleware between your LLM and the end user to enforce compliance:
```ts
import { Limits } from '@limits/js';

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

// Assumes an initialized OpenAI client (`openai`) is in scope
async function generateResponse(userMessage: string): Promise<string> {
  // 1. Get the LLM response
  const llmResponse = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userMessage }],
  });
  const responseText = llmResponse.choices[0].message.content ?? '';

  // 2. Validate against instruction policies before returning
  const decision = await limits.evaluate('#compliance', userMessage, responseText);

  if (decision.isAllowed) {
    return responseText;
  }

  if (decision.isEscalated) {
    // Log for human review, but still return a safe message
    await logEscalation(userMessage, responseText, decision.data.reason);
    return 'This request has been flagged for review. A team member will follow up shortly.';
  }

  // Blocked — return a fallback
  return "I'm unable to help with that request. Please contact support for assistance.";
}
```

### Express Middleware Example
```ts
import express from 'express';
import { Limits } from '@limits/js';

const app = express();
app.use(express.json()); // needed so req.body is populated

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

// Middleware that validates LLM responses before sending to client
function limitsMiddleware(policyKeyOrTag: string) {
  return async (req: express.Request, res: express.Response, next: express.NextFunction) => {
    const { prompt, response } = req.body;

    const decision = await limits.evaluate(policyKeyOrTag, prompt, response);

    if (decision.isBlocked) {
      return res.status(403).json({
        error: 'Response blocked by policy',
        reason: decision.data.reason,
      });
    }

    if (decision.isEscalated) {
      // Attach escalation info and continue
      req.body._escalated = true;
      req.body._escalationReason = decision.data.reason;
    }

    next();
  };
}

app.post('/api/chat', limitsMiddleware('#customer-service'), async (req, res) => {
  res.json({ message: req.body.response });
});
```

## Guardrails Mode — `guard()`
Use guard() to run safety guardrails on text content. This mode is designed for scanning model outputs (or user inputs) for issues like PII leakage, harmful content, off-topic responses, and prompt injection.
```ts
guard(policyKeyOrTag: string, input: string): Promise<PolicyEvaluationResult>
```

### Single Policy
```ts
const modelOutput = 'Your account number is 4532-1234-5678-9012 and your SSN is 123-45-6789.';

const decision = await limits.guard('pii-detection', modelOutput);

if (decision.isBlocked) {
  console.error('PII detected:', decision.data.reason);
  // => "PII detected: Response contains credit card number and SSN"
}
```

### Multiple Policies with a Tag
Run all safety guardrails at once by using a tag:
```ts
// Evaluate ALL guardrail policies tagged #safety
// (e.g. PII detection, toxicity check, prompt injection detection)
const decision = await limits.guard('#safety', modelOutput);

if (decision.isBlocked) {
  return "I'm sorry, I can't provide that information.";
} else if (decision.isEscalated) {
  await flagForSafetyReview(modelOutput, decision.data.reason);
  return 'This response is under review.';
} else {
  return modelOutput;
}
```

### Applying All Enabled Guardrails
Use a broad tag to apply every enabled guardrail policy in your organization:
```ts
// Apply all policies under the #guardrails tag
const decision = await limits.guard('#guardrails', modelOutput);

if (!decision.isAllowed) {
  // Any guardrail flagged the content
  console.warn('Guardrail triggered:', decision.data.reason);
}
```

### Real-World Example: Guardrails Middleware
Use guard() as middleware to automatically scan every response before it reaches your users:
```ts
import { Limits } from '@limits/js';

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

async function safeRespond(userMessage: string): Promise<string> {
  // 1. Guard the user input (prompt injection, jailbreak attempts)
  const inputCheck = await limits.guard('#input-safety', userMessage);
  if (inputCheck.isBlocked) {
    return 'Your message was flagged by our safety system. Please rephrase your request.';
  }

  // 2. Generate the LLM response
  const llmResponse = await generateLLMResponse(userMessage);

  // 3. Guard the output (PII, harmful content, off-topic)
  const outputCheck = await limits.guard('#output-safety', llmResponse);
  if (outputCheck.isBlocked) {
    return "I'm unable to provide that response. Let me try a different approach.";
  }

  return llmResponse;
}
```

### Next.js API Route Example
```ts
import { NextRequest, NextResponse } from 'next/server';
import { Limits } from '@limits/js';

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

export async function POST(req: NextRequest) {
  const { message } = await req.json();

  // Generate response from your AI
  const aiResponse = await getAIResponse(message);

  // Run guardrails before returning
  const decision = await limits.guard('#safety', aiResponse);

  if (decision.isBlocked) {
    return NextResponse.json(
      { error: 'Response blocked by safety policy', reason: decision.data.reason },
      { status: 451 }
    );
  }

  return NextResponse.json({ response: aiResponse });
}
```

## Escalations (Human-in-the-Loop)
When a policy returns ESCALATE, the request is flagged for human review. Limits tracks these as escalations — pending items that your team can approve or decline through the SDK or the Limits Dashboard.
This is useful for high-value transactions, edge cases, or any scenario where you want a human in the loop before proceeding.
### How Escalations Work
```
Policy evaluates request → Result is ESCALATE → Escalation record created
        ↓
Your app returns "pending review" to the user
        ↓
Reviewer sees the escalation in Dashboard or via SDK
        ↓
Reviewer approves or declines → Your app acts on the decision
```

### List All Escalations
Retrieve all escalations across your organization:
```ts
const escalations = await limits.listEscalations();

for (const escalation of escalations) {
  console.log(`[${escalation.status}] ${escalation.id}`);
  console.log(`  Policy: ${escalation.policy_id}`);
  console.log(`  Reason: ${escalation.response?.message}`);
  console.log(`  Request:`, escalation.request);
  console.log(`  Created: ${escalation.created_at}`);
}
```

### List Escalations by Policy
Filter escalations for a specific policy:
```ts
const policyId = '550e8400-e29b-41d4-a716-446655440000';
const escalations = await limits.listEscalationsByPolicyId(policyId);

const pending = escalations.filter(e => e.status === 'PENDING');
console.log(`${pending.length} escalations awaiting review`);
```

### Approve an Escalation
```ts
const approved = await limits.approveEscalation('550e8400-e29b-41d4-a716-446655440000');
console.log('Escalation approved:', approved.status); // => "ALLOWED"
```

### Decline an Escalation
```ts
const declined = await limits.declineEscalation('550e8400-e29b-41d4-a716-446655440000');
console.log('Escalation declined:', declined.status); // => "DECLINED"
```

### Real-World Example: Escalation Review Dashboard
```ts
import { Limits } from '@limits/js';

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

// Fetch and display pending escalations for a review dashboard
async function getReviewQueue() {
  const escalations = await limits.listEscalations();

  return escalations
    .filter(e => e.status === 'PENDING')
    .map(e => ({
      id: e.id,
      policyId: e.policy_id,
      reason: e.response?.message ?? 'No reason provided',
      request: e.request,
      createdAt: e.created_at,
    }));
}

// Reviewer takes action
async function handleReviewDecision(escalationId: string, action: 'approve' | 'decline') {
  if (action === 'approve') {
    const result = await limits.approveEscalation(escalationId);
    console.log(`Approved: ${result.id} → ${result.status}`);
  } else {
    const result = await limits.declineEscalation(escalationId);
    console.log(`Declined: ${result.id} → ${result.status}`);
  }
}
```

### Escalation Object Shape
```ts
interface Escalation {
  id: string;                 // Unique escalation ID (UUID)
  policy_id: string;          // Policy that triggered the escalation
  organization_id: string;    // Your organization ID
  status: 'PENDING' | 'ALLOWED' | 'DECLINED';
  request?: any;              // The original input you sent (varies by use case)
  response?: {                // Policy result at time of escalation
    action: 'ALLOW' | 'BLOCK' | 'ESCALATE';
    message: string;
    success: boolean;
    violated: boolean;
  };
  action_by?: string | null;  // User ID who took action (null if pending)
  action_by_user?: {          // User details (when resolved)
    id: string;
    first_name: string;
    last_name: string;
    email: string;
  };
  created_at: string;         // ISO 8601 timestamp
  updated_at: string;         // ISO 8601 timestamp
}
```

## Result Shape
Every evaluation method (`check`, `evaluate`, `guard`) returns a `PolicyEvaluationResult`:
```ts
interface PolicyEvaluationResult {
  data: {
    action: 'ALLOW' | 'BLOCK' | 'ESCALATE'; // The policy decision
    reason: string;                         // Human-readable explanation
  };
  isAllowed: boolean;   // true when action === 'ALLOW'
  isBlocked: boolean;   // true when action === 'BLOCK'
  isEscalated: boolean; // true when action === 'ESCALATE'
  errors?: string[];    // Validation errors (if any)
}
```

The boolean helpers (`isAllowed`, `isBlocked`, `isEscalated`) make it easy to branch on the result without comparing strings:
```ts
const decision = await limits.check('my-policy', { amount: 100 });

// Use the boolean helpers
if (decision.isAllowed) { /* proceed */ }
if (decision.isBlocked) { /* reject */ }
if (decision.isEscalated) { /* flag for review */ }

// Or use the action string directly
switch (decision.data.action) {
  case 'ALLOW': break;
  case 'BLOCK': break;
  case 'ESCALATE': break;
}

// Access the reason
console.log(decision.data.reason);
// => "Transaction amount is within the allowed limit"

// Check for validation errors
if (decision.errors) {
  console.warn('Validation issues:', decision.errors);
  // => ["Missing required parameter: 'currency' in request"]
}
```

## Error Handling
The SDK throws typed errors that you can catch with instanceof:
| Error Class | When | Key Properties |
|---|---|---|
| InvalidAPIKeyError | API key missing, empty, or doesn't start with sk_ | message |
| InvalidPolicyKeyError | Policy key or tag is empty or invalid | message |
| InvalidInputError | Input is missing, empty, or wrong type | message |
| APIRequestError | API returned 4xx or 5xx | statusCode, response |
| NetworkError | Request failed (DNS, connection refused, etc.) | originalError |
| TimeoutError | Request timed out | message |
All errors extend LimitsSDKError, which extends Error.
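`NetworkError` and `TimeoutError` are typically transient, so wrapping calls in a retry with exponential backoff is a common pattern. A minimal, generic sketch (this helper and its parameters are illustrative, not part of the SDK):

```typescript
// Retry a promise-returning function, doubling the delay between attempts.
// Only errors accepted by `isRetryable` trigger a retry; others rethrow immediately.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !isRetryable(err)) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

With the SDK this might look like `withRetry(() => limits.check('my-policy', input), (err) => err instanceof NetworkError || err instanceof TimeoutError)`.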
### Example
```ts
import {
  Limits,
  InvalidAPIKeyError,
  InvalidPolicyKeyError,
  InvalidInputError,
  APIRequestError,
  NetworkError,
  TimeoutError,
} from '@limits/js';

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY! });

try {
  const decision = await limits.check('transaction-limit', {
    amount: 5000,
    currency: 'USD',
  });
  // handle decision...
} catch (err) {
  if (err instanceof InvalidAPIKeyError) {
    // Bad API key — check your environment variables
    console.error('Invalid API key:', err.message);
  } else if (err instanceof InvalidPolicyKeyError) {
    // Bad policy key — check your policy configuration
    console.error('Invalid policy key:', err.message);
  } else if (err instanceof InvalidInputError) {
    // Bad input — check the data you're sending
    console.error('Invalid input:', err.message);
  } else if (err instanceof APIRequestError) {
    // API error — inspect the status code
    console.error(`API error (${err.statusCode}):`, err.message);
    if (err.statusCode === 401) console.error('Check your API key');
    if (err.statusCode === 404) console.error('Policy not found');
    if (err.statusCode === 429) console.error('Rate limited — slow down');
  } else if (err instanceof NetworkError) {
    // Network issue — retry or alert
    console.error('Network error:', err.message);
  } else if (err instanceof TimeoutError) {
    // Timeout — retry with backoff
    console.error('Request timed out:', err.message);
  } else {
    throw err; // Unknown error
  }
}
```

## TypeScript Types
All types are exported from @limits/js and fully documented:
```ts
import type {
  PolicyEvaluationResult, // Result from check(), evaluate(), guard()
  PolicyAction,           // 'ALLOW' | 'BLOCK' | 'ESCALATE'
  SDKConfig,              // Constructor config ({ apiKey, debug? })
  Escalation,             // Escalation record
  EscalationResponse,     // Policy result stored on an escalation
  ActionByUser,           // User info on resolved escalations
  EvaluatePolicyInput,    // Internal input shape
  APIErrorResponse,       // API error body
} from '@limits/js';

import { EscalationStatus } from '@limits/js';
// EscalationStatus.PENDING | EscalationStatus.ALLOWED | EscalationStatus.DECLINED
```

## Configuration
### Constructor Options
```ts
const limits = new Limits({
  apiKey: 'sk_live_...', // Required — your Limits API key (must start with sk_)
  debug: false,          // Optional — enable debug logging (default: false)
});
```

### Debug Mode
Enable debug logging to see request/response details in the console:
```ts
const limits = new Limits({
  apiKey: process.env.LIMITS_API_KEY!,
  debug: true,
});
// Logs: SDK initialized, Making request, Received response, Check result, etc.
```

### Authentication
All requests are authenticated with Authorization: Bearer <apiKey>. Get your API key from the Limits Dashboard under Dashboard > API Keys.
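If you ever need to hit the API without the SDK, the same bearer scheme applies. A sketch of the header construction using native `fetch` (the endpoint URL and request body shape below are illustrative assumptions, not the documented wire format):

```typescript
// Headers carried on every authenticated request.
function authHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}

// Hypothetical raw call — the URL and body shape are assumptions for illustration.
async function rawEvaluate(apiKey: string, policyKey: string, input: object) {
  const res = await fetch('https://api.limits.example/v1/evaluate', {
    method: 'POST',
    headers: authHeaders(apiKey),
    body: JSON.stringify({ policy: policyKey, input }),
  });
  if (!res.ok) throw new Error(`API error ${res.status}`);
  return res.json();
}
```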
## Requirements
- Node.js 18+ (uses native `fetch` — no external HTTP dependencies)
- TypeScript 5.x (optional but recommended — full type definitions included)
The SDK ships both ESM and CommonJS builds with complete type declarations.
## Development
```bash
# Install dependencies
npm install

# Build the SDK (CJS + ESM + type declarations)
npm run build

# Run tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage
npm run test:coverage

# Lint
npm run lint

# Format code
npm run format
```

## License
MIT

GitHub · Issues · Limits Dashboard
