@torqlab/check-forbidden-content v1.0.4
Content moderation utility for non-trusted texts.
@torqlab/check-forbidden-content
A utility library for detecting forbidden content patterns in text. Used to validate prompts and descriptions before AI image generation to ensure compliance with content policies.
Installation
npm install @torqlab/check-forbidden-content
Or with Bun:
bun add @torqlab/check-forbidden-content
Usage
import checkForbiddenContent from '@torqlab/check-forbidden-content';
const hasForbiddenContent = checkForbiddenContent('A photo of a person running');
console.log(hasForbiddenContent); // true
const isForbidden = checkForbiddenContent('An athlete running through a scenic landscape');
console.log(isForbidden); // false
Forbidden Content
This library detects the following categories of forbidden content:
Real Persons or Identifiable Individuals
- Keywords:
person, people, individual, human, man, woman, child, kid, baby, face, portrait, photo, picture, image
Political or Ideological Symbols
- Keywords:
political, politics, government, president, election, vote, democracy, republican, democrat, flag, banner, symbol, emblem, crest
Violence or Combat
- Keywords:
violence, violent, fight, war, battle, weapon, gun, knife, sword, attack, kill, death, blood, combat, military, soldier, army, navy
Sexual Content
- Keywords:
sexual, sex, nude, naked, explicit, adult, porn
Typography and Text Instructions
- Keywords:
text, word, letter, alphabet, caption, typography, font
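The package's source is not shown here, so the exact matching rules (case handling, word boundaries, substrings) are not documented. A minimal sketch of one plausible implementation, assuming case-insensitive whole-word matching against the keyword lists above (abbreviated here for brevity), might look like:

```javascript
// Hypothetical sketch, not the package's actual source.
// Assumes case-insensitive, whole-word matching; the real keyword
// lists are documented above and abbreviated here.
const FORBIDDEN_KEYWORDS = [
  'person', 'people', 'photo', 'political', 'flag',
  'violence', 'weapon', 'sexual', 'nude', 'text', 'caption',
];

function checkForbiddenContent(text) {
  // Split the input into lowercase word tokens.
  const words = text.toLowerCase().match(/[a-z]+/g) || [];
  // Flag the text if any token is a known forbidden keyword.
  return words.some((word) => FORBIDDEN_KEYWORDS.includes(word));
}
```

Whole-word matching avoids false positives from substrings (e.g. "personal" would not match "person" here); whether the published package behaves this way is an assumption.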
API
checkForbiddenContent(text: string): boolean
Checks if the provided text contains any forbidden content patterns.
Parameters:
text (string): The text to check for forbidden content.
Returns:
- (boolean): true if forbidden content is detected, false otherwise.
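Since the README positions the check as a pre-generation gate, a typical pattern is to validate a prompt and reject it before calling any image-generation API. A hedged sketch (with checkForbiddenContent stubbed by a tiny keyword regex so the example is self-contained; the real import would come from this package):

```javascript
// Stub standing in for: import checkForbiddenContent from '@torqlab/check-forbidden-content';
// The keyword subset here is illustrative only.
const checkForbiddenContent = (text) =>
  /\b(person|photo|weapon|nude|caption)\b/i.test(text);

// Reject unsafe prompts before any image-generation call is made.
function validatePrompt(prompt) {
  if (checkForbiddenContent(prompt)) {
    throw new Error('Prompt contains forbidden content');
  }
  return prompt;
}
```

Throwing (rather than returning a boolean) lets calling code treat a policy violation like any other validation failure; returning a result object is an equally valid design.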
License
MIT
