
text-moderate

v1.0.7

Published

A comprehensive JavaScript library for content moderation, including profanity filtering, sentiment analysis, and toxicity detection. Leveraging advanced algorithms and external APIs, TextModerate provides developers with tools to create safer and more positive online interactions.

Downloads

28

Readme

TextModerate

TextModerate is a JavaScript library for text analysis. It integrates profanity/bad-word filtering, sentiment analysis, and toxicity detection. By leveraging badwords lists, the AFINN-165 wordlist, the Emoji Sentiment Ranking, and the Perspective API, TextModerate offers a robust toolkit for enhancing content moderation and fostering healthier online interactions.

Features

  • Profanity Filtering: Detects and filters out profane/bad words from texts, with options for customization.
  • Sentiment Analysis: Assesses the emotional tone of text using the AFINN-165 wordlist and Emoji Sentiment Ranking.
  • Toxicity Detection: Evaluates text for potential toxicity via the Perspective API (developed by Google), aiming to support positive communication environments.

Supported Languages

Profanity filtering and sentiment analysis support English and French, and can be extended to additional languages (see Registering a New Language for Sentiment Analysis below). Toxicity detection works for any language supported by the Perspective API.

Installation

npm install text-moderate --save

Usage and Example Outputs

Profanity Filtering

Censor or identify profanity within text inputs automatically, using the badwords-list from Google's WDYL project.

const TextModerate = require('text-moderate');
const textModerate = new TextModerate();

console.log(textModerate.isProfane("Don't be an ash0le"));
// Output: true

console.log(textModerate.clean("Don't be an ash0le"));
// Output: "Don't be an ******"

Placeholder Overrides for Filtering

var customTextModerate = new TextModerate({ placeHolder: 'x'});

customTextModerate.clean("Don't be an ash0le"); // Don't be an xxxxxx

Adding Words to the Blacklist

textModerate.addWords('some', 'bad', 'word');

textModerate.clean("some bad word!") // **** *** ****!

Removing Words from the Blacklist

textModerate.removeWords('hells', 'sadist');

textModerate.clean("some hells word!"); // **** hells ****! ('hells' is no longer censored; 'some' and 'word' were added above)

These functions help maintain respectful communication by recognizing profane words and replacing them with placeholders.

Sentiment Detection

Sentiment Analysis

Evaluate textual sentiment, identifying whether the content is positive, neutral, or negative.

const result = textModerate.analyzeSentiment('Cats are amazing.');
console.dir(result);

Example output:

{
  "score": 3,
  "comparative": 1,
  "calculation": [{"amazing": 3}],
  "tokens": ["cats", "are", "amazing"],
  "words": ["amazing"],
  "positive": ["amazing"],
  "negative": []
}

The output demonstrates a positive sentiment score, reflecting the text's overall positive tone.

Here, the "comparative" score can be treated as the main metric: values near zero are neutral, values greater than 0.5 are positive, and values less than -0.5 are negative.
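
Using the textModerate instance from the examples above, a small helper (not part of the library) that applies these thresholds might look like this:

// Hypothetical helper: map the comparative score to a coarse label
// using the thresholds described above.
function classifySentiment(text) {
  const { comparative } = textModerate.analyzeSentiment(text);
  if (comparative > 0.5) return 'positive';
  if (comparative < -0.5) return 'negative';
  return 'neutral';
}

console.log(classifySentiment('Cats are amazing.'));          // 'positive'
console.log(classifySentiment('The table is made of wood.')); // 'neutral' (no AFINN-scored words)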

Registering a New Language for Sentiment Analysis

var frLanguage = {
  labels: { 'stupide': -2 }
};
textModerate.registerLanguage('fr', frLanguage);

var result = textModerate.analyzeSentiment('Le chat est stupide.', { language: 'fr' });
console.dir(result);    // Score: -2, Comparative: -0.5

Toxicity Detection

Toxicity Analysis

Analyze text for toxicity with the Perspective API (developed and maintained by Google) to help maintain constructive discourse.

const API_KEY = 'your_api_key_here'; // Replace with your Perspective API key from Google API Services
textModerate.analyzeToxicity("Your text to analyze", API_KEY)
  .then(result => console.log(JSON.stringify(result)))
  .catch(err => console.error(err));

The Perspective API is currently free, with a rate limit of 60 requests per minute (as of December 2023). Link: https://support.perspectiveapi.com/s/docs-get-started?language=en_US
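
Given that limit, one simple way to stay under it when scoring many texts is to space out the calls, roughly one per second. A rough sketch (the 1100 ms delay is just an illustrative value):

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Analyze a batch of texts sequentially, pausing between requests to stay
// within the 60-requests-per-minute quota mentioned above.
async function analyzeToxicityBatch(texts) {
  const results = [];
  for (const text of texts) {
    results.push(await textModerate.analyzeToxicity(text, API_KEY));
    await delay(1100);
  }
  return results;
}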

Sample output from analyzeToxicity:

{
  "attributeScores": {
    "TOXICITY": {
      "summaryScore": {
        "value": 0.021196328,
        "type": "PROBABILITY"
      }
    }
  },
  "languages": ["en"],
  "detectedLanguages": ["en"]
}

This provides a toxicity score indicating how likely the text is to be perceived as toxic, which helps moderate content effectively. Following the thresholds used in this paper, 0.5 can be treated as the soft toxicity threshold and 0.7 as the hard threshold: https://aclanthology.org/2021.findings-emnlp.210.pdf
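
As an illustration (the labels below are not part of the library; the thresholds follow the paper above), the returned score could be bucketed like this:

// Hypothetical helper: bucket a Perspective toxicity score using the
// soft (0.5) and hard (0.7) thresholds referenced above.
function toxicityLabel(result) {
  const score = result.attributeScores.TOXICITY.summaryScore.value;
  if (score >= 0.7) return 'hard-toxic';
  if (score >= 0.5) return 'soft-toxic';
  return 'non-toxic';
}

textModerate.analyzeToxicity('Your text to analyze', API_KEY)
  .then((result) => console.log(toxicityLabel(result))) // e.g. 'non-toxic'
  .catch((err) => console.error(err));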

API

constructor

TextModerate constructor. Combines functionalities of word filtering and sentiment analysis.

Parameters

  • options Object TextModerate instance options. (optional, default {})
    • options.emptyList boolean Instantiate filter with no blacklist. (optional, default false)
    • options.list array Instantiate filter with custom list. (optional, default [])
    • options.placeHolder string Character used to replace profane words. (optional, default '*')
    • options.regex string Regular expression used to sanitize words before comparing them to blacklist. (optional, default /[^a-zA-Z0-9|\$|\@]|\^/g)
    • options.replaceRegex string Regular expression used to replace profane words with placeHolder. (optional, default /\w/g)
    • options.splitRegex string Regular expression used to split a string into words. (optional, default /\b/)
    • options.sentimentOptions Object Options for sentiment analysis. (optional, default {})
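
As an illustration, here is how a few of these options might be combined. This is a sketch; it assumes, as in similar profanity filters, that a custom list adds extra words on top of the default blacklist rather than replacing it:

const TextModerate = require('text-moderate');

// Illustrative options: custom placeholder character plus extra blacklist words.
// Assumes options.list extends the default blacklist (verify against the source).
const moderator = new TextModerate({
  placeHolder: '#',
  list: ['darn'],
});

console.log(moderator.clean('darn cats')); // "#### cats" (assuming 'darn' is censored)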

isProfane

Determine if a string contains profane language.

Parameters

  • string string String to evaluate for profanity.

replaceWord

Replace a word with placeHolder characters.

Parameters

  • string string String to replace.

clean

Evaluate a string for profanity and return an edited version.

Parameters

  • string string Sentence to filter.

addWords

Add word(s) to blacklist filter / remove words from whitelist filter.

Parameters

  • word ...string Word(s) to add to the blacklist.

removeWords

Add words to whitelist filter.

Parameters

  • word ...string Word(s) to add to the whitelist.

registerLanguage

Registers the specified language.

Parameters

  • languageCode String Two-letter code for the language to register.
  • language Object The language module to register.

analyzeSentiment

Performs sentiment analysis on the provided input 'phrase'.

Parameters

  • phrase String Input phrase.
  • opts Object Options. (optional, default {})
  • callback function Optional callback.

Returns Object

analyzeToxicity

Analyzes the toxicity of a given text using the Perspective API.

Parameters

  • text string Text to analyze.
  • apiKey string API key for the Perspective API.

Returns Promise A promise that resolves with the analysis result.
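
Since analyzeToxicity returns a promise, it can also be consumed with async/await. A brief sketch, reusing the API_KEY placeholder and result shape from the examples above:

// Hypothetical wrapper: extract the TOXICITY summary score, or null on failure.
async function getToxicityScore(text) {
  try {
    const result = await textModerate.analyzeToxicity(text, API_KEY);
    return result.attributeScores.TOXICITY.summaryScore.value;
  } catch (err) {
    console.error(err);
    return null;
  }
}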

tokenize

Remove special characters and return an array of tokens (words).

Returns array Array of tokens

addLanguage

Registers the specified language

Parameters

  • languageCode String Two-letter code for the language to register.
  • language Object The language module to register.

getLanguage

Retrieves a language object from the cache, or tries to load it from the set of supported languages

Parameters

  • languageCode String Two-letter code for the language to fetch.

getLabels

Returns AFINN-165 weighted labels for the specified language

Parameters

  • languageCode String Two-letter language code.

Returns Object

applyScoringStrategy

Applies a scoring strategy for the current token

Parameters

  • languageCode String Two-letter language code.
  • tokens Array Tokens of the phrase being analyzed.
  • cursor int Index of the current token being analyzed.
  • tokenScore int The score of the current token being analyzed.
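
For reference, the underlying sentiment engine this API mirrors lets a registered language module ship its own scoring strategy. The sketch below assumes TextModerate accepts the same language-module shape (an object with labels and an optional scoringStrategy.apply(tokens, cursor, tokenScore)); this is an assumption, so verify it against the source before relying on it:

// Hypothetical language module with a negation-aware scoring strategy.
// The scoringStrategy field is an assumption based on the sentiment library;
// it is not documented above.
const frWithNegation = {
  labels: { 'stupide': -2 },
  scoringStrategy: {
    apply: function (tokens, cursor, tokenScore) {
      // Flip the token's score when the previous token is a negation word.
      if (cursor > 0 && tokens[cursor - 1] === 'pas') {
        return -tokenScore;
      }
      return tokenScore;
    },
  },
};

textModerate.registerLanguage('fr', frWithNegation);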

How Sentiment Analysis Works

AFINN

AFINN is a list of words rated for valence with an integer between minus five (negative) and plus five (positive). Sentiment analysis is performed by cross-checking the string tokens (words, emojis) against the AFINN list and summing their respective scores. The comparative score is simply the sum of the token scores divided by the number of tokens. So, for example, let's take the following:

I love cats, but I am allergic to them.

That string results in the following:

{
    score: 1,
    comparative: 0.1111111111111111,
    calculation: [ { allergic: -2 }, { love: 3 } ],
    tokens: [
        'i',
        'love',
        'cats',
        'but',
        'i',
        'am',
        'allergic',
        'to',
        'them'
    ],
    words: [
        'allergic',
        'love'
    ],
    positive: [
        'love'
    ],
    negative: [
        'allergic'
    ]
}
  • Returned Objects
    • Score: Score calculated by adding the sentiment values of recognized words.
    • Comparative: Comparative score of the input string.
    • Calculation: An array of words that have a negative or positive valence with their respective AFINN score.
    • Tokens: All the tokens (words or emojis) found in the input string.
    • Words: List of words from the input string that were found in the AFINN list.
    • Positive: List of positive words from the input string that were found in the AFINN list.
    • Negative: List of negative words from the input string that were found in the AFINN list.

In this case, love has a value of 3, allergic has a value of -2, and the remaining tokens are neutral with a value of 0. Because the string has 9 tokens the resulting comparative score looks like: (3 + -2) / 9 = 0.111111111

This approach leaves you with a mid-point of 0 and the upper and lower bounds are constrained to positive and negative 5 respectively (the same as each token! 😸). For example, let's imagine an incredibly "positive" string with 200 tokens and where each token has an AFINN score of 5. Our resulting comparative score would look like this:

(max positive score * number of tokens) / number of tokens
(5 * 200) / 200 = 5
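
The same arithmetic, written out as a standalone sketch (the tiny AFINN subset here is only for illustration and is not the full wordlist):

// Minimal illustration of the scoring arithmetic above (not the library's code).
const afinnSubset = { love: 3, allergic: -2 };

const tokens = ['i', 'love', 'cats', 'but', 'i', 'am', 'allergic', 'to', 'them'];
const score = tokens.reduce((sum, token) => sum + (afinnSubset[token] || 0), 0);
const comparative = score / tokens.length;

console.log(score);       // 1
console.log(comparative); // 0.1111111111111111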

Tokenization

Tokenization works by splitting the input string into lines, removing special characters, and finally splitting on spaces. This produces the list of words (tokens) in the string.
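
A rough standalone equivalent of that process (a sketch for illustration, not the library's actual tokenize implementation):

// Naive tokenization sketch: lowercase, strip punctuation, split on whitespace.
function naiveTokenize(input) {
  return input
    .toLowerCase()
    .replace(/[.,!?;:"()]/g, '')
    .split(/\s+/)
    .filter(Boolean);
}

console.log(naiveTokenize('I love cats, but I am allergic to them.'));
// ['i', 'love', 'cats', 'but', 'i', 'am', 'allergic', 'to', 'them']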


Future Improvements

The development and enhancement of the "text-moderate" library will continue to focus on making the tool more versatile and effective for developers and content managers. Planned future improvements include:

  1. Support for More Languages: Expanding the library to support additional languages for profanity filtering and sentiment analysis, making it more accessible and useful for a global audience.

  2. More Robust Sentiment Analysis: Enhancing the sentiment analysis feature to provide deeper insight into the emotional tone of texts, possibly by incorporating machine learning techniques for greater accuracy.

  3. Toxicity Category Attributes Alongside the Score: Introducing a detailed breakdown of toxicity attributes (e.g., insult, threat, obscenity) alongside the overall toxicity score to give users a more nuanced understanding of content analysis results.

By focusing on these areas, "text-moderate" aims to remain at the forefront of content moderation technology, providing developers with the tools they need to maintain positive and safe online environments.

Credits

  1. Perspective API
  2. Sentiment
  3. Badwords

License

The MIT License (MIT)

Copyright (c) 2013 Michael Price

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.