shield-aeris v0.1.0
Shield of Aeris - AI Security SDK for TypeScript/JavaScript
Shield of Aeris TypeScript SDK
Protect your LLM applications from prompt injection, data leakage, and other AI security threats.
Installation
npm install @shield-aeris/sdk
# or
yarn add @shield-aeris/sdk
# or
pnpm add @shield-aeris/sdk

Quick Start
import { Shield } from '@shield-aeris/sdk';
// Initialize the client
const shield = new Shield({ apiKey: 'sk_test_your_api_key' });
// Scan a prompt
const result = await shield.scanPrompt('Hello, how can you help me today?');
console.log(`Safe: ${result.safe}`);
console.log(`Risk Score: ${result.riskScore}`);
// Handle unsafe prompts
if (!result.safe) {
console.log(`Threats detected:`, result.threats);
}

OpenAI Integration
Automatically protect all your OpenAI API calls:
import OpenAI from 'openai';
import { Shield, UnsafePromptError } from '@shield-aeris/sdk';
const shield = new Shield({ apiKey: 'sk_test_your_api_key' });
// Wrap your OpenAI client
const client = shield.wrapOpenAI(new OpenAI());
// All calls are now automatically protected
try {
const response = await client.chat.completions.create({
model: 'gpt-4',
messages: [
{ role: 'user', content: 'What is the weather like?' }
]
});
console.log(response.choices[0].message.content);
} catch (error) {
if (error instanceof UnsafePromptError) {
console.log('Blocked:', error.scanResult.threats);
}
}

Manual Scanning
For more control, scan prompts and outputs manually:
import { Shield, UnsafePromptError } from '@shield-aeris/sdk';
const shield = new Shield({ apiKey: 'sk_test_your_api_key' });
async function handleUserMessage(userInput: string) {
// Scan before sending to LLM
const promptResult = await shield.scanPrompt(userInput);
if (!promptResult.safe) {
return { error: 'Your message was blocked for security reasons' };
}
// Call your LLM
const llmResponse = await callYourLLM(userInput);
// Scan the output before returning to user
const outputResult = await shield.scanOutput(llmResponse);
if (!outputResult.safe) {
return { error: 'Response contained sensitive information' };
}
return { response: llmResponse };
}

Express/Hono Middleware
import express from 'express';
import { Shield } from '@shield-aeris/sdk';
const app = express();
const shield = new Shield({ apiKey: 'sk_test_your_api_key' });
// Add middleware to scan all incoming messages
app.use(express.json());
app.use('/api/chat', shield.middleware({ fieldName: 'message' }));
app.post('/api/chat', (req, res) => {
// Request is already scanned - safe to process
res.json({ response: 'Hello!' });
});

Configuration
const shield = new Shield({
apiKey: 'sk_test_your_api_key',
baseUrl: 'https://api.shieldofaeris.com', // Custom endpoint
timeout: 10000, // Request timeout in milliseconds
});

Threat Types
The scanner detects the following threat types:
prompt_injection - Direct prompt manipulation attempts
indirect_injection - Injection via external content
jailbreak - Safety bypass attempts (DAN, etc.)
pii_detected - Personally identifiable information
data_leakage - Sensitive data exposure
toxic_content - Harmful or inappropriate content
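These strings can drive application-level policy. As a rough sketch (the ThreatType and Action types and the actionFor helper below are illustrative, not SDK exports), each threat type might map to a handling strategy:

```typescript
// Illustrative policy helper: maps the threat type strings documented
// above to an action your application takes. Not part of the SDK.
type ThreatType =
  | 'prompt_injection'
  | 'indirect_injection'
  | 'jailbreak'
  | 'pii_detected'
  | 'data_leakage'
  | 'toxic_content';

type Action = 'block' | 'redact' | 'flag';

function actionFor(threat: ThreatType): Action {
  switch (threat) {
    case 'prompt_injection':
    case 'indirect_injection':
    case 'jailbreak':
      return 'block'; // never forward these to the LLM
    case 'pii_detected':
    case 'data_leakage':
      return 'redact'; // strip sensitive spans, then proceed
    case 'toxic_content':
      return 'flag'; // log for human review
  }
}

console.log(actionFor('jailbreak')); // 'block'
console.log(actionFor('pii_detected')); // 'redact'
```

Whatever actions you choose, the scan result's riskScore can further gate borderline cases, for example flagging rather than blocking below a threshold.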
Error Handling
The SDK exports typed error classes for blocked prompts, blocked outputs, and general API failures:
import {
Shield,
ShieldError,
UnsafePromptError,
UnsafeOutputError
} from '@shield-aeris/sdk';
try {
const result = await shield.scanPrompt(userInput);
} catch (error) {
if (error instanceof UnsafePromptError) {
// Handle blocked prompts
console.log('Risk score:', error.scanResult.riskScore);
console.log('Threats:', error.scanResult.threats);
} else if (error instanceof ShieldError) {
// Handle API errors
console.log('API error:', error.message);
}
}

License
MIT License - see LICENSE for details.
