
toxblock v1.0.3
ToxBlock 🛡️


A professional TypeScript module that uses Google's Gemini AI to detect profanity, toxic content, and inappropriate language in all languages. Built with enterprise-grade quality, comprehensive testing, and full TypeScript support.

✨ Features

  • 🌍 Multilingual Support - Detects profanity in all languages
  • 🤖 Powered by Gemini AI - Leverages Google's advanced AI for accurate detection
  • 📝 Full TypeScript Support - Complete type definitions and IntelliSense
  • 🧪 Comprehensive Testing - 100% test coverage with Jest
  • 📚 Extensive Documentation - JSDoc comments and TypeDoc generation
  • 🔒 Enterprise Ready - Professional error handling and logging
  • ⚡ High Performance - Optimized for speed and efficiency
  • 🛡️ Type Safe - Strict TypeScript configuration

📦 Installation

# npm
npm install toxblock

# yarn
yarn add toxblock

# pnpm
pnpm add toxblock

🚀 Quick Start

import { ToxBlock } from 'toxblock';

// Initialize with your Gemini API key
const toxBlock = new ToxBlock({
  apiKey: 'your-gemini-api-key'
});

// Check a single text
const result = await toxBlock.checkText('Hello, how are you?');
console.log(result.isProfane); // false
console.log(result.confidence); // 0.95

// Check multiple texts
const results = await toxBlock.checkTexts([
  'Hello world',
  'Some text to check'
]);

🔧 Configuration

import { ToxBlock, ToxBlockConfig } from 'toxblock';

const config: ToxBlockConfig = {
  apiKey: 'your-gemini-api-key',
  model: 'gemini-pro', // Optional: default is 'gemini-pro'
  timeout: 10000, // Optional: default is 10000ms
  customPrompt: 'Your custom prompt template' // Optional
};

const toxBlock = new ToxBlock(config);

📖 API Reference

ToxBlock

Main class for profanity detection.

Constructor

new ToxBlock(config: ToxBlockConfig)

Methods

checkText(text: string): Promise<ToxBlockResult>

Analyzes a single text for profanity.

Parameters:

  • text (string): The text to analyze

Returns: Promise resolving to ToxBlockResult

Example:

const result = await toxBlock.checkText('Sample text');
if (result.isProfane) {
  console.log('Profanity detected!');
}

checkTexts(texts: string[]): Promise<ToxBlockResult[]>

Analyzes multiple texts in batch.

Parameters:

  • texts (string[]): Array of texts to analyze

Returns: Promise resolving to array of ToxBlockResult

getConfig(): { model: string; timeout: number }

Returns current configuration.

Types

ToxBlockConfig

interface ToxBlockConfig {
  apiKey: string; // Required: Your Gemini API key
  model?: string; // Optional: Model name (default: 'gemini-pro')
  timeout?: number; // Optional: Timeout in ms (default: 10000)
  customPrompt?: string; // Optional: Custom prompt template
}
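As a sketch of how the documented defaults could be applied, the helper below fills in `gemini-pro` and `10000` for omitted fields. This is illustrative only and not part of the toxblock API; the package resolves its own defaults internally.

```typescript
// Interface shape copied from the API reference above.
interface ToxBlockConfig {
  apiKey: string;
  model?: string;
  timeout?: number;
  customPrompt?: string;
}

// Illustrative helper: apply the documented defaults to a partial config.
function withDefaults(config: ToxBlockConfig) {
  return {
    ...config,
    model: config.model ?? 'gemini-pro',
    timeout: config.timeout ?? 10000,
  };
}
```

Explicitly provided values always win over the defaults.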

ToxBlockResult

interface ToxBlockResult {
  isProfane: boolean; // Whether text contains profanity
  confidence: number; // Confidence score (0-1)
  language?: string; // Detected language
  details?: string; // Additional details
}
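Since `confidence` is a 0-1 score, one common pattern is to act only on high-confidence flags. The helper and the 0.8 threshold below are an arbitrary illustration, not something the package prescribes:

```typescript
// Shape taken from the ToxBlockResult interface above.
interface ToxBlockResult {
  isProfane: boolean;
  confidence: number;
  language?: string;
  details?: string;
}

// Example policy: block only when the text is flagged AND the model is
// confident. The 0.8 threshold is illustrative; tune it per use case.
function shouldBlock(result: ToxBlockResult, threshold = 0.8): boolean {
  return result.isProfane && result.confidence >= threshold;
}
```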

ToxBlockError

class ToxBlockError extends Error {
  code: string; // Error code
  originalError?: Error; // Original error if any
}

🌍 Multilingual Examples

// English
const result1 = await toxBlock.checkText('Hello world');

// Spanish
const result2 = await toxBlock.checkText('Hola mundo');

// French
const result3 = await toxBlock.checkText('Bonjour le monde');

// Japanese
const result4 = await toxBlock.checkText('こんにちは世界');

// Arabic
const result5 = await toxBlock.checkText('مرحبا بالعالم');

// All will return appropriate ToxBlockResult objects

🛠️ Advanced Usage

Custom Prompt Template

const customPrompt = `
Analyze this text for inappropriate content: "{TEXT}"
Return JSON with isProfane (boolean) and confidence (0-1).
`;

const toxBlock = new ToxBlock({
  apiKey: 'your-api-key',
  customPrompt
});
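The template above uses `{TEXT}` as a placeholder for the input. How toxblock interpolates the prompt is an internal detail, but the substitution presumably looks something like this sketch:

```typescript
// Illustrative sketch only: substitute the input text into a "{TEXT}"
// placeholder template, replacing every occurrence.
function fillTemplate(template: string, text: string): string {
  return template.split('{TEXT}').join(text);
}
```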

Error Handling

try {
  const result = await toxBlock.checkText('Sample text');
  console.log(result);
} catch (error) {
  if (error instanceof ToxBlockError) {
    console.error(`ToxBlock Error [${error.code}]: ${error.message}`);
    if (error.originalError) {
      console.error('Original error:', error.originalError);
    }
  }
}

Batch Processing

const texts = [
  'First message',
  'Second message',
  'Third message'
];

const results = await toxBlock.checkTexts(texts);
results.forEach((result, index) => {
  console.log(`Text ${index + 1}: ${result.isProfane ? 'FLAGGED' : 'CLEAN'}`);
});
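Building on the loop above, batch results can be paired back with their input texts. `partitionResults` is a hypothetical helper written for this sketch, not part of the toxblock API; it relies only on `checkTexts` returning results in input order:

```typescript
// Minimal result shape needed for this helper (subset of ToxBlockResult).
interface ToxBlockResult {
  isProfane: boolean;
  confidence: number;
}

// Hypothetical helper: split the input texts into flagged and clean
// buckets by pairing each result with the text at the same index.
function partitionResults(texts: string[], results: ToxBlockResult[]) {
  const flagged: string[] = [];
  const clean: string[] = [];
  results.forEach((result, i) => {
    (result.isProfane ? flagged : clean).push(texts[i]);
  });
  return { flagged, clean };
}
```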

🧪 Testing

# Run all tests
npm test

# Run tests with coverage
npm run test:coverage

# Run tests in watch mode
npm run test:watch

# Run integration tests (requires GEMINI_API_KEY)
GEMINI_API_KEY=your-key npm test

🔨 Development

# Clone the repository
git clone https://github.com/sw3do/toxblock.git
cd toxblock

# Install dependencies
npm install

# Run in development mode
npm run dev

# Build the project
npm run build

# Run linting
npm run lint

# Fix linting issues
npm run lint:fix

# Format code
npm run format

# Generate documentation
npm run docs

📋 Requirements

  • Node.js >= 16.0.0
  • Google Gemini API key
  • TypeScript >= 5.0.0 (for development)

🔑 Getting a Gemini API Key

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Create a new API key
  4. Copy the key and use it in your configuration
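Once you have a key, avoid hard-coding it in source. The `GEMINI_API_KEY` variable name matches the testing section above; the helper itself is an illustrative sketch, not part of the package:

```typescript
// Illustrative: read the Gemini API key from an environment map rather
// than embedding it in source code. Pass process.env in real usage.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.GEMINI_API_KEY;
  if (!key) {
    throw new Error('GEMINI_API_KEY is not set');
  }
  return key;
}
```

You would then call `new ToxBlock({ apiKey: requireApiKey(process.env) })` so a missing key fails fast at startup.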

📄 License

MIT License - see the LICENSE file for details.

🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

📞 Support

🙏 Acknowledgments

  • Google Gemini AI for providing the underlying AI capabilities
  • The TypeScript community for excellent tooling
  • All contributors who help improve this project

Made with ❤️ by sw3do