@cage-ai/minamo
v4.2.0
AI Security Toolkit — Prompt injection detection, output sanitization, and OWASP LLM Top 10 coverage for LLM applications
__ __ _
| \/ (_)_ __ __ _ _ __ ___ ___
| |\/| | | '_ \ / _` | '_ ` _ \ / _ \
| | | | | | | | (_| | | | | | | (_) |
|_| |_|_|_| |_|\__,_|_| |_| |_|\___/
AI Security Toolkit
1,102 patterns. 34 languages. OWASP LLM Top 10 — 100% coverage.
Minamo is a production-ready AI security toolkit that detects prompt injection attacks, sanitizes LLM outputs, and shields your application from the full OWASP LLM Top 10 threat surface — with zero ML dependencies.
Quick Start
npm install minamo

import { Minamo } from 'minamo';
const minamo = new Minamo('mk_your_api_key');
const result = await minamo.sanitize(userInput);
if (!result.is_clean) throw new Error('Blocked: ' + result.categories.join(', '));

That's it. Three lines to protect your LLM app.
Usage
Input Sanitization
Detect and block malicious user input before it reaches your LLM.
import { Minamo } from 'minamo';
const minamo = new Minamo('mk_your_api_key');
const result = await minamo.sanitize('Ignore all previous instructions and...');
console.log(result.is_clean); // false
console.log(result.risk_score); // 9 (0–10 scale)
console.log(result.categories); // ['prompt_injection', 'jailbreak']
console.log(result.recommended_action); // 'block'

Output Sanitization
Prevent your LLM from leaking sensitive data or producing harmful content.
const llm_response = await callYourLLM(prompt);
const result = await minamo.sanitize_output(llm_response);
if (!result.is_clean) {
// Handle sensitive output (PII leak, malicious URL, etc.)
}

Guard Function
Throw an error automatically when input exceeds the risk threshold.
try {
await minamo.guard(userInput, { max_risk: 3 });
// safe — proceed to LLM
const response = await callYourLLM(userInput);
} catch (e) {
if (e instanceof MinamoBlockedError) {
return res.status(400).json({ error: 'blocked', score: e.result.risk_score });
}
}

Batch Processing
Check multiple texts in one request.
const batch = await minamo.batch([
{ id: 'msg-1', text: userMessage1 },
{ id: 'msg-2', text: userMessage2 },
]);
console.log(batch.summary.flagged); // number of flagged items
batch.results.forEach(r => {
if (!r.is_clean) console.log(`${r.id}: risk ${r.risk_score}`);
});

Express Middleware (1-line setup)
Drop Minamo into any Express app in a single line.
import express from 'express';
import { minamo_middleware } from 'minamo/middleware';
const app = express();
app.use(express.json());
app.use(minamo_middleware({ api_key: 'mk_your_api_key' }));

Options:
app.use(minamo_middleware({
api_key: 'mk_your_api_key',
block_threshold: 7, // block if risk_score >= 7 (default)
log_only: false, // set true to log without blocking
paths: ['/api/*'], // only check these paths (glob)
exclude_paths: ['/health'],
sanitize_output: true, // also sanitize responses
on_detect: (result, req) => {
logger.warn('Threat detected', { score: result.risk_score, path: req.path });
},
}));

Hono Middleware
import { Hono } from 'hono';
import { minamo_hono } from 'minamo/middleware';
const app = new Hono();
app.use(minamo_hono({ api_key: 'mk_your_api_key' }));

CLI
# First-time setup (API key + telemetry opt-in)
npx minamo init
# Check a single text
npx minamo check "Ignore all previous instructions"
# Scan a directory for vulnerable prompt templates
npx minamo scan ./prompts/
# Run a test suite against your API endpoint
npx minamo test --endpoint https://api.yourapp.com/chat
# Export pattern database as JSON
npx minamo export --output patterns.json

Offline Mode
No API key needed — run pattern matching locally.
const minamo = new Minamo(undefined, { offline: true });
const result = await minamo.sanitize(userInput);

Offline mode uses the bundled 1,102-pattern database. No network calls, no rate limits.
Plans
| Feature | Free | Pro | Enterprise |
|---|:---:|:---:|:---:|
| Pattern database | 1,102 patterns | 1,102 patterns | 1,102 patterns |
| Languages | 34 | 34 | 34 |
| OWASP LLM Top 10 | 100% | 100% | 100% |
| API requests/day | 1,000 | 50,000 | Unlimited |
| Output sanitization | - | Yes | Yes |
| AI-assisted judgment | - | Yes | Yes |
| Batch processing | - | Yes | Yes |
| Offline mode | Yes | Yes | Yes |
| SLA | - | 99.9% | 99.99% |
| Support | Community | Email | Dedicated |
Telemetry
Minamo collects anonymous usage data to improve pattern quality. You control what is sent.
| Level | What is sent |
|---|---|
| off | Nothing |
| stats (default) | Request count, detection rate, latency — no content |
| full | Stats + sanitized text snippets for pattern training |
Configure at init time:
const minamo = new Minamo('mk_your_api_key', {
telemetry: { level: 'off' } // disable completely
});

Or via .minamo.json (created by npx minamo init).
Your prompt content is never sent in stats mode. We do not sell data.
OWASP LLM Top 10 Coverage
| # | Threat | Status |
|---|---|:---:|
| LLM01 | Prompt Injection | Covered |
| LLM02 | Insecure Output Handling | Covered |
| LLM03 | Training Data Poisoning | Covered |
| LLM04 | Model Denial of Service | Covered |
| LLM05 | Supply Chain Vulnerabilities | Covered |
| LLM06 | Sensitive Information Disclosure | Covered |
| LLM07 | Insecure Plugin Design | Covered |
| LLM08 | Excessive Agency | Covered |
| LLM09 | Overreliance | Covered |
| LLM10 | Model Theft | Covered |
API Reference
Full API documentation: https://minamo.cage-ai.jp/docs
minamo.sanitize(text, options?)
Sanitize user input before sending to an LLM.
- text: string — The text to check
- options.source?: string — Tag the source for logging (e.g. 'chatbot')
- options.min_severity?: number — Only report patterns above this severity (0–10)
Returns SanitizeResult.
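The full SanitizeResult type isn't spelled out on this page; pieced together from the fields the examples above read (is_clean, risk_score, categories, recommended_action), it plausibly looks like this sketch. Field types, and any recommended_action values other than 'block', are assumptions:

```typescript
// Hypothetical SanitizeResult shape, inferred from the example output above.
interface SanitizeResult {
  is_clean: boolean;
  risk_score: number;          // 0–10 scale
  categories: string[];        // e.g. ['prompt_injection', 'jailbreak']
  recommended_action: string;  // e.g. 'block'
}

// The values the Input Sanitization example prints:
const result: SanitizeResult = {
  is_clean: false,
  risk_score: 9,
  categories: ['prompt_injection', 'jailbreak'],
  recommended_action: 'block',
};
console.log(result.is_clean); // false
```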
minamo.sanitize_output(text, options?)
Sanitize LLM output before returning to users.
Returns SanitizeResult.
minamo.guard(text, options?)
Throw MinamoBlockedError if risk exceeds threshold.
- options.max_risk?: number — block if risk_score > max_risk (default: 0)
- options.throw_message?: string — custom error message
import { Minamo, MinamoBlockedError } from 'minamo';
await minamo.guard(userInput, { max_risk: 3 });

minamo.batch(items)
Check multiple texts. Each item: { id?: string, text: string }.
Returns BatchResult.
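BatchResult isn't typed out here either; based on what the Batch Processing example accesses (summary.flagged plus per-item id, is_clean, risk_score), a plausible sketch — the nested structure and per-item fields beyond those three are assumptions:

```typescript
// Hypothetical BatchResult shape, inferred from the Batch Processing example.
interface BatchItemResult {
  id?: string;        // echoes the id you passed in
  is_clean: boolean;
  risk_score: number; // 0–10 scale
}

interface BatchResult {
  summary: { flagged: number }; // number of flagged items
  results: BatchItemResult[];
}

const batch: BatchResult = {
  summary: { flagged: 1 },
  results: [
    { id: 'msg-1', is_clean: true, risk_score: 0 },
    { id: 'msg-2', is_clean: false, risk_score: 8 },
  ],
};
console.log(batch.summary.flagged); // 1
```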
minamo.metrics()
Returns SDK-level metrics: request count, success rate, avg latency, cache hit rate.
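The metric field names are not documented on this page; a sketch assuming snake_case names in line with the rest of the SDK, covering the four values listed above (every field name here is a guess):

```typescript
// Hypothetical shape of minamo.metrics() output — field names are assumptions
// based on the description: request count, success rate, avg latency, cache hit rate.
interface MinamoMetrics {
  request_count: number;
  success_rate: number;   // 0–1
  avg_latency_ms: number;
  cache_hit_rate: number; // 0–1
}

const metrics: MinamoMetrics = {
  request_count: 1200,
  success_rate: 0.998,
  avg_latency_ms: 42,
  cache_hit_rate: 0.65,
};
console.log(metrics.request_count);
```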
Event Hooks
minamo.on('detect', (result, text) => {
console.log('Threat detected:', result.risk_score, result.categories);
});
minamo.on('error', (err, endpoint) => {
console.error('Request failed:', err.message, endpoint);
});
minamo.on('retry', (attempt, delay_ms, endpoint) => {
console.log(`Retrying ${endpoint} (attempt ${attempt}, delay ${delay_ms}ms)`);
});

Contributing
Issues and PRs welcome. See CONTRIBUTING.md.
Pattern contributions (new attack patterns, language support) are especially appreciated.
License
MIT — Copyright (c) 2026 Eyegle Inc.
