@ekipnico/image-mod
AI-powered NSFW and content moderation for images.
Installation
```bash
npm install @ekipnico/image-mod
```

Quick Start

```ts
import { createImageModMesh, ImageModerator } from '@ekipnico/image-mod';
const mesh = createImageModMesh();
const moderator = mesh.resolve(ImageModerator);
const result = await moderator.check(imageBuffer);
// { safe: true, flagged: [], scores: { adult: 0.1, violence: 0.0, ... } }
```

Model Configuration
Models can be specified in two ways:
1. String ID (Built-in Providers)
Use string IDs for OpenAI, Anthropic, and Google models:

```ts
// Via environment variable (recommended for defaults)
process.env.AI_MESH_DEFAULT_MODEL = 'gpt-4o';
// Or per-request
const result = await moderator.check(imageBuffer, { model: 'gemini-flash' });
```

Built-in models: gpt-4o, gpt-4o-mini, gemini-flash, gemini-2.0-flash, claude-sonnet, etc.
2. LanguageModel (Any Provider)
Pass any Vercel AI SDK model directly for providers such as Groq, Mistral, or DeepSeek:

```ts
import { createGroq } from '@ai-sdk/groq';
const groq = createGroq({ apiKey: process.env.GROQ_API_KEY });
const result = await moderator.check(imageBuffer, {
model: groq('llama-3.3-70b-versatile'),
});
```

```ts
import { createMistral } from '@ai-sdk/mistral';
const mistral = createMistral({ apiKey: process.env.MISTRAL_API_KEY });
const result = await moderator.check(imageBuffer, {
model: mistral('pixtral-large-latest'),
});
```

Environment Variables
| Variable | Description |
|----------|-------------|
| AI_MESH_DEFAULT_MODEL | Default model when not specified (default: gpt-4o) |
| OPENAI_API_KEY | Required for OpenAI models |
| ANTHROPIC_API_KEY | Required for Anthropic models |
| GOOGLE_API_KEY | Required for Google/Gemini models |
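As a minimal sketch of how these fit together (the model ID and placeholder key below are examples only, not recommendations), you can set the default model and a provider key before creating the mesh:

```ts
import { createImageModMesh, ImageModerator } from '@ekipnico/image-mod';

// Illustrative only: in a real deployment, export these in your shell or
// .env file instead of setting them in code.
process.env.AI_MESH_DEFAULT_MODEL = 'gemini-flash';
process.env.GOOGLE_API_KEY = '<your-google-api-key>';

const mesh = createImageModMesh();
const moderator = mesh.resolve(ImageModerator);

// No model option needed; AI_MESH_DEFAULT_MODEL is used.
const result = await moderator.check(imageBuffer);
```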
API
moderator.check(input, config?)
Checks an image for moderation issues.
Parameters:
- `input` - `ImageInput` (Buffer, URL, file path, or ImageInput object)
- `config.model` - `ModelSpec` (string ID or LanguageModel)
- `config.adult` - Threshold for adult content (0-1, default: 1)
- `config.violence` - Threshold for violence (0-1, default: 1)
- `config.racy` - Threshold for racy content (0-1, default: 1)
- `config.medical` - Threshold for medical content (0-1, default: 0.5)
- `config.spoof` - Threshold for manipulated content (0-1, default: 0.5)
Threshold behavior:
- `0` = Skip this category
- `0.01`-`1` = Flag if the score is >= the threshold (so lower thresholds are stricter)
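For example, the following sketch (threshold values chosen purely for illustration) flags adult or racy content at moderate confidence and skips the medical category entirely:

```ts
// Flag adult/racy content at moderate confidence; skip the medical category.
// Threshold values are illustrative, not recommended defaults.
const result = await moderator.check(imageBuffer, {
  adult: 0.5,
  racy: 0.7,
  medical: 0,
});

if (!result.safe) {
  console.warn('Image flagged for:', result.flagged.join(', '));
}
```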
Returns:

```ts
{
safe: boolean; // true if no categories flagged
flagged: string[]; // Categories that exceeded thresholds
scores: ModerationScores // Raw scores (0-1) for each category
}
```

Integrating into Existing Mesh

```ts
import { Mesh } from 'mesh-ioc';
import { registerImageModServices, ImageModerator } from '@ekipnico/image-mod';
const mesh = new Mesh('MyApp');
// ... register your services ...
registerImageModServices(mesh);
const moderator = mesh.resolve(ImageModerator);
```
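Once registered, the resolved moderator behaves exactly as in the Quick Start. As a hypothetical follow-up (the `handleUpload` function and `uploadBuffer` argument below are illustrative, not part of the package), you might gate user uploads like this:

```ts
// Hypothetical upload gate: reject the upload if any category is flagged.
async function handleUpload(uploadBuffer: Buffer): Promise<void> {
  const verdict = await moderator.check(uploadBuffer);
  if (!verdict.safe) {
    throw new Error(`Upload rejected (${verdict.flagged.join(', ')})`);
  }
}
```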