
terlik.js

v2.5.0


Ultra-fast, zero-dependency profanity detection engine. Ships with Turkish, English, Spanish & German — extensible to any language. Lazy compilation, deep agglutination support, ReDoS-safe regex patterns



Multi-language profanity detection and filtering engine, designed Turkish-first and extensible to any language. Not a naive blacklist — a multi-layered normalization and pattern engine that catches what simple string matching misses.

Ships with Turkish (flagship, full coverage), English, Spanish, and German built-in. Add any language with a folder and two files, or extend at runtime via extendDictionary.

In short: a Turkish-first profanity detection and filtering engine, extensible to any language. It catches creative profanity attempts through leet speak, character repetition, separator characters, and Turkish suffix-system support. Zero dependencies, TypeScript, ~14 KB gzipped.

Features

  • Extensible to any language — ships with TR/EN/ES/DE, add more via language packs or extendDictionary
  • Catches leet speak, separators, char repetition, mixed case, zero-width chars
  • Turkish suffix engine (83 suffixes, ~3,000+ detectable forms from 25 roots)
  • Three detection modes: strict, balanced, loose (with fuzzy matching)
  • Zero dependencies, ~14 KB gzipped
  • ESM + CJS — works in Node.js, Bun, Deno, browsers, Cloudflare Workers, Edge runtimes
  • Lazy compilation: ~1.5ms construction, <1ms per check after warmup
  • ReDoS-safe regex patterns with timeout safety net
  • Full TypeScript support with exported types

Why terlik.js?

Turkish profanity evasion is creative. Users write s2k, $1kt1r, s.i.k.t.i.r, SİKTİR, siiiiiktir, i8ne, or*spu, pu$ttt, 6öt — and expect to get away with it. Turkish is agglutinative — a single root like sik spawns dozens of forms: siktiler, sikerim, siktirler, sikimsonik. Manually listing every variant doesn't scale.

terlik.js catches all of these with a suffix engine that automatically recognizes Turkish grammatical suffixes on profane roots. Here's what a single call handles:

import { Terlik } from "terlik.js";
const terlik = new Terlik();

terlik.clean("s2mle yüzle$ g0t_v3r3n o r o s p u pezev3nk i8ne pu$ttt or*spu");
// "***** yüzle$ ********* *********** ******** **** ****** ******"
// 7 matches, 0 false positives, <2ms

Install

npm install terlik.js
# or
pnpm add terlik.js
# or
yarn add terlik.js

Quick Start

import { Terlik } from "terlik.js";

// Turkish (default)
const tr = new Terlik();
tr.containsProfanity("siktir git");  // true
tr.clean("siktir git burdan");       // "****** git burdan"

// English
const en = new Terlik({ language: "en" });
en.containsProfanity("what the fuck"); // true
en.containsProfanity("siktir git");    // false (Turkish not loaded)

// Spanish & German
const es = new Terlik({ language: "es" });
const de = new Terlik({ language: "de" });
es.containsProfanity("hijo de puta");  // true
de.containsProfanity("scheiße");       // true

What It Catches

| Evasion technique | Example | Detected as |
|---|---|---|
| Plain text | siktir | sik |
| Turkish İ/I | SİKTİR | sik |
| Leet speak | $1kt1r, @pt@l | sik, aptal |
| Visual leet (TR) | 8ok, 6öt, i8ne, s2k | bok, göt, ibne, sik |
| Turkish number words | s2mle (s+iki+mle) | sik (sikimle) |
| Separators | s.i.k.t.i.r, s_i_k | sik |
| Spaces | o r o s p u | orospu |
| Char repetition | siiiiiktir, pu$ttt | sik, puşt |
| Mixed punctuation | or*spu, g0t_v3r3n | orospu, göt |
| Combined | $1kt1r g0t_v3r3n | both caught |
| Suffix forms | siktiler, orospuluk, gotune | sik, orospu, göt |
| Suffix + evasion | s.i.k.t.i.r.l.e.r, $1kt1rler | sik |
| Suffix chaining | siktirler (sik+tir+ler) | sik |
| Deep agglutination | siktiğimin, sikermisiniz, siktirmişcesine | sik |
| Zero-width chars | s\u200Bi\u200Bk\u200Bt\u200Bi\u200Br (ZWSP/ZWNJ/ZWJ) | sik |
| Phonetic (EN) | phuck, phucking | fuck |
| Extended leet (EN) | 8itch, s#it, ni66er | bitch, shit, nigger |
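
A quick spot-check of a few rows, as a minimal sketch (the boolean results follow from the "Detected as" column above):

import { Terlik } from "terlik.js";

const tr = new Terlik();
tr.containsProfanity("$1kt1r");        // true (leet speak)
tr.containsProfanity("o r o s p u");   // true (space-separated)
tr.containsProfanity("siktirler");     // true (suffix chaining)

const en = new Terlik({ language: "en" });
en.containsProfanity("phuck");         // true (phonetic evasion)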

What It Doesn't Catch (on purpose)

Whitelist prevents false positives on legitimate words:

terlik.containsProfanity("Amsterdam");    // false
terlik.containsProfanity("sikke");        // false (Ottoman coin)
terlik.containsProfanity("ambulans");     // false
terlik.containsProfanity("siklet");       // false (boxing weight class)
terlik.containsProfanity("memur");        // false
terlik.containsProfanity("malzeme");      // false
terlik.containsProfanity("ama");          // false (conjunction)
terlik.containsProfanity("amir");         // false
terlik.containsProfanity("dolmen");       // false

How It Works

Six-stage normalization pipeline (language-aware), then pattern matching:

input
  → lowercase (locale-aware: "tr", "en", "es", "de")
  → char folding (language-specific: İ→i, ñ→n, ß→ss, ä→a, ...)
  → number expansion (optional, e.g. Turkish: s2k → sikik)
  → leet speak decode (0→o, 1→i, @→a, $→s, ...)
  → punctuation removal (between letters: s.i.k → sik)
  → repeat collapse (siiiiik → sik)
  → pattern matching (dynamic regex with language-specific char classes)
  → whitelist filtering
  → result

Each language has its own char map, leet map, char classes, and optional number expansions. The engine is language-agnostic — only the data is language-specific. This means any language can be added without modifying the core engine.

For suffixable roots, the engine appends an optional suffix group (up to 2 chained suffixes). Turkish has 83 suffixes (including question particles and adverbial forms), English has 9, Spanish has 13, German has 8.
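
As a small illustration of the pipeline and the suffix engine, using the standalone normalize export documented under API (outputs follow from the stages and examples above):

import { Terlik, normalize } from "terlik.js";

normalize("S.İ.K.T.İ.R");    // "siktir" (locale-aware lowercase + punctuation removal)
normalize("$1kt1r");         // "siktir" (leet decode: $ → s, 1 → i)
normalize("siiiiiktir");     // "siktir" (repeat collapse)

const terlik = new Terlik();
terlik.containsProfanity("siktirler");   // true (root sik + chained suffixes tir + ler)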

Language Packs

Community contributions to existing language packs (new words, variants, whitelist entries) and entirely new language packs are welcome! See CONTRIBUTING.md for step-by-step instructions.

Each language lives in its own folder under src/lang/:

src/lang/
  tr/
    config.ts           ← charMap, leetMap, charClasses, locale
    dictionary.json     ← entries, suffixes, whitelist
  en/
    config.ts
    dictionary.json
  ...

Dictionary format (community-friendly JSON, no TypeScript needed):

{
  "version": 1,
  "suffixes": ["ing", "ed", "er", "s"],
  "entries": [
    { "root": "fuck", "variants": ["fucking", "fucker"], "severity": "high", "category": "sexual", "suffixable": true }
  ],
  "whitelist": ["assassin", "class", "grass"]
}

Categories: sexual, insult, slur, general. Severity: high, medium, low.

Adding a New Language

  1. Create src/lang/xx/ folder
  2. Add dictionary.json (entries, suffixes, whitelist)
  3. Add config.ts (locale, charMap, leetMap, charClasses); a rough sketch follows this list
  4. Register in src/lang/index.ts (one import line)
  5. Write tests, build, done
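
The exact config.ts shape is defined by the package's types; as a very rough sketch of step 3 for a hypothetical language pack xx (the field names come from the folder layout above and mirror the createNormalizer options under API, but the export name and the charClasses shape are assumptions):

// src/lang/xx/config.ts (hypothetical sketch; see an existing pack such as src/lang/tr/config.ts for the real shape)
export const config = {
  locale: "xx",                               // passed to locale-aware lowercasing
  charMap: { "é": "e", "ç": "c" },            // language-specific character folding
  leetMap: { "0": "o", "1": "i", "3": "e" },  // leet-speak decode additions
  charClasses: {},                            // language-specific char classes (shape assumed)
};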

Dictionary Strategy

terlik.js ships with a deliberately narrow dictionary — the goal is to minimize false positives while catching real-world evasion patterns. The dictionary is not a massive word list; it's a curated set of roots + variants that the pattern engine expands through normalization, leet decoding, separator tolerance, and suffix chaining.

Coverage

| Language | Status | Roots | Explicit Variants | Suffixes | Whitelist | Effective Forms |
|---|---|---|---|---|---|---|
| Turkish | Flagship | 39 | 115 | 83 | 67 | ~3,000+ |
| English | Full | 56 | 185 | 9 | 96 | ~2,000+ |
| Spanish | Community | 29 | 101 | 13 | 21 | ~500+ |
| German | Community | 28 | 67 | 8 | 6 | ~300+ |

"Effective forms" = roots × normalization variants × suffix combinations × evasion patterns. A root like sik with 83 possible suffixes, leet decoding, separator tolerance, and repeat collapse produces thousands of detectable surface forms.

Add your language! The engine is language-agnostic. See Adding a New Language or use extendDictionary for runtime extension.

What IS Covered

  • Core profanity roots per language (high-severity sexual, insults, slurs)
  • Grammatical inflections via suffix engine (Turkish agglutination, English -ing/-ed, etc.)
  • Evasion patterns: leet speak, separators, repetition, mixed case, number words (TR)
  • Compound forms: orospucocugu, motherfucker, hijoputa, hurensohn

What is NOT Covered (by design)

  • Slang / regional variants that change rapidly — better handled with customList
  • Context-dependent words that are profane only in certain contexts
  • New coinages — use addWords() at runtime

Why Narrow?

A large dictionary maximizes recall but tanks precision. In production chat systems, false positives are worse than false negatives — blocking "class" or "grass" because the dictionary is too broad erodes user trust. terlik.js defaults to high precision and lets you widen coverage per your needs:

The sık/sik paradox: Turkish sık (frequent/tight) normalizes to sik because ı→i char folding is required to catch evasions like s1kt1r. Making sik suffix-aware would flag sıkıntı (trouble), sıkma (squeeze), sıkı (tight) — extremely common words. Instead, deep agglutination forms like siktiğimin and sikermisiniz are added as explicit variants. This is a deliberate precision-over-recall tradeoff.

// Add domain-specific words
terlik.addWords(["customSlang", "anotherWord"]);

// Or at construction time
const terlik = new Terlik({
  customList: ["customSlang", "anotherWord"],
  whitelist: ["legitimateWord"],
});

// Remove a built-in word if it causes false positives in your domain
terlik.removeWords(["damn"]);

Performance

Lazy Compilation

terlik.js uses lazy compilation: new Terlik() is near-instant (~1.5ms). Regex patterns are compiled on the first detect() call, not at construction time. This eliminates startup cost when creating multiple instances.

| Phase | Cost | When |
|---|---|---|
| new Terlik() | ~1.5ms | Construction (lookup tables only) |
| First detect() | ~200-700ms | Lazy regex compilation + V8 JIT warmup |
| Subsequent calls | <1ms | Patterns cached, JIT optimized |

Where do you want to pay the compilation cost?

// Option A: Background warmup (recommended for servers)
// Construction is instant. Patterns compile in the next event loop tick.
// If a request arrives before warmup finishes, it compiles synchronously.
const terlik = new Terlik({ backgroundWarmup: true });

app.post("/chat", (req, res) => {
  const cleaned = terlik.clean(req.body.message); // <1ms (warmup already done)
});

// Option B: Explicit warmup at startup
const terlik = new Terlik();
terlik.containsProfanity("warmup"); // Forces compilation here

app.post("/chat", (req, res) => {
  const cleaned = terlik.clean(req.body.message); // <1ms
});

// Option C: Lazy (pay on first request)
const terlik = new Terlik(); // ~1.5ms

app.post("/chat", (req, res) => {
  const cleaned = terlik.clean(req.body.message); // First call: ~500ms, then <1ms
});

// Option D: Multi-language warmup
const cache = Terlik.warmup(["tr", "en", "es", "de"]);

app.post("/chat", (req, res) => {
  const lang = req.body.language;
  const cleaned = cache.get(lang)!.clean(req.body.message); // <1ms
});

Important: Never create new Terlik() per request. A single cached instance handles requests in microseconds.

Serverless (Lambda, Vercel, Cloudflare Workers): Do NOT use backgroundWarmup. The setTimeout callback may never fire because serverless runtimes freeze the process between invocations. Use explicit warmup instead: const t = new Terlik(); t.containsProfanity("warmup"); at module scope.
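
A minimal sketch of that module-scope warmup pattern for serverless (the handler shape here is illustrative only):

import { Terlik } from "terlik.js";

// Module scope: runs once per cold start, so patterns are compiled before the first request
const terlik = new Terlik();
terlik.containsProfanity("warmup");

export function handler(event: { message: string }) {
  return { cleaned: terlik.clean(event.message) }; // <1ms after the cold-start warmup
}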

Throughput

Benchmark results (Apple Silicon, single core, msgs/sec):

| Scenario | msgs/sec |
|---|---|
| Clean messages (no matches) | ~193,000 |
| Mixed messages (balanced mode) | ~151,000 |
| Suffixed dirty messages | ~142,000 |
| Strict mode | ~390,000 |
| Loose mode (with fuzzy) | ~8,400 |

Note: Loose/fuzzy mode is ~18x slower than balanced mode due to O(n*m) similarity computation. Use it only when typo tolerance is critical, not as a default.

vs Alternatives (English corpus)

Head-to-head comparison on a 290-sample English corpus covering plain text, variants, leet speak, separator evasion, char repetition, combined evasion, false-positive traps, and edge cases. All libraries tested with default settings.

| Library | F1 | Precision | Recall | FPR | check() ops/sec | clean() ops/sec |
|---|---|---|---|---|---|---|
| terlik.js | 100.0% | 100.0% | 100.0% | 0.0% | 67,623 | 71,321 |
| obscenity | 81.7% | 97.4% | 70.4% | 2.3% | 71,914 | 49,978 |
| bad-words | 66.1% | 100.0% | 49.4% | 0.0% | 2,831 | 557 |
| allprofanity | 59.7% | 100.0% | 42.6% | 0.0% | 45,450 | 45,162 |

terlik.js achieves perfect detection — 100% precision, 100% recall, zero false positives — with competitive throughput (~68K check ops/sec, fastest clean() at 71K ops/sec). It catches 100% of separator and repetition evasions that other libraries miss entirely. See full methodology, per-category breakdown, and limitations.

Throughput note: The multi-pass detection pipeline (NFKD, Cyrillic confusable mapping, CamelCase decompounding) costs ~17% vs a naive single-pass approach — this is what enables 100% recall vs obscenity's 70%. Optional toggles (disableLeetDecode, disableCompound) can recover ~5-8% for controlled inputs. Safety layers (NFKD, diacritics, Cyrillic) are always active. See full toggle guide.

Transparency: This benchmark is maintained by the terlik.js team. Dataset, adapters, and runner are open source. Reproduce with pnpm bench:compare. We document every false positive and miss — see the full report.

Accuracy

Measured on a labeled corpus of 388 samples across 4 languages (profane + clean + whitelist + edge cases):

| Language | Mode | Precision | Recall | F1 | FPR | FNR |
|---|---|---|---|---|---|---|
| TR | strict | 100.0% | 88.6% | 93.9% | 0.0% | 11.4% |
| TR | balanced | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| TR | loose | 99.1% | 100.0% | 99.5% | 1.6% | 0.0% |
| EN | strict | 100.0% | 95.5% | 97.7% | 0.0% | 4.5% |
| EN | balanced | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| EN | loose | 98.5% | 100.0% | 99.2% | 2.0% | 0.0% |
| ES | strict | 100.0% | 96.7% | 98.3% | 0.0% | 3.3% |
| ES | balanced | 100.0% | 96.7% | 98.3% | 0.0% | 3.3% |
| ES | loose | 100.0% | 96.7% | 98.3% | 0.0% | 3.3% |
| DE | strict | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| DE | balanced | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| DE | loose | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |

Mode characteristics:

  • Strict — highest precision (0% FP), trades recall for safety. Misses some suffixed forms and evasion patterns.
  • Balanced — best overall F1. Catches evasion patterns while keeping FPR near zero. Recommended for production.
  • Loose — adds fuzzy matching. Slightly higher FPR due to similarity matches on borderline words.

Reproduce: pnpm bench:accuracy — outputs per-category breakdown, failure list, and JSON results.

Options

const terlik = new Terlik({
  language: "tr",                // built-in: "tr" | "en" | "es" | "de" (default: "tr")
  mode: "balanced",              // "strict" | "balanced" | "loose"
  maskStyle: "stars",            // "stars" | "partial" | "replace"
  replaceMask: "[***]",          // mask text for "replace" style
  customList: ["customword"],    // additional words to detect
  whitelist: ["safeword"],       // additional words to whitelist
  enableFuzzy: false,            // enable fuzzy matching
  fuzzyThreshold: 0.8,           // similarity threshold (0-1). 0.8 ≈ 1 typo per 5 chars
  fuzzyAlgorithm: "levenshtein", // "levenshtein" | "dice"
  maxLength: 10000,              // truncate input beyond this
  backgroundWarmup: false,       // compile patterns in background via setTimeout
  extendDictionary: undefined,   // DictionaryData object to merge with built-in dictionary
});

Detection Modes

| Mode | What it does | Best for |
|---|---|---|
| strict | Normalize + exact match only | Minimum false positives |
| balanced | Normalize + pattern matching with separator/leet tolerance | General use (default) |
| loose | Pattern + fuzzy matching (Levenshtein or Dice) | Maximum coverage, typo tolerance |

API

terlik.containsProfanity(text, options?): boolean

Quick boolean check. Runs full detection internally and returns true if any match exists.

terlik.getMatches(text, options?): MatchResult[]

Returns all matches with details:

interface MatchResult {
  word: string;       // matched text from original input
  root: string;       // dictionary root word
  index: number;      // position in original text
  severity: "high" | "medium" | "low";
  method: "exact" | "pattern" | "fuzzy";
}
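
For example (illustrative output; the exact severity and method depend on which dictionary entry matches):

const matches = terlik.getMatches("s1ktir git");
// [{
//   word: "s1ktir",      // matched text as it appears in the input
//   root: "sik",         // dictionary root
//   index: 0,
//   severity: "high",    // assumed for this root
//   method: "pattern",   // leet evasion is caught by pattern matching, not exact match
// }]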

terlik.clean(text, options?): string

Returns text with profanity masked. Three styles:

terlik.clean("siktir git");                                    // "****** git"
terlik.clean("siktir git", { maskStyle: "partial" });          // "s****r git"
terlik.clean("siktir git", { maskStyle: "replace" });          // "[***] git"

terlik.addWords(words) / removeWords(words)

Runtime dictionary modification. Recompiles patterns automatically.

terlik.addWords(["customword"]);
terlik.containsProfanity("customword"); // true

terlik.removeWords(["salak"]);
terlik.containsProfanity("salak"); // false

Terlik.warmup(languages, options?): Map<string, Terlik>

Static method. Creates and JIT-warms instances for multiple languages at once.

const cache = Terlik.warmup(["tr", "en", "es", "de"]);
cache.get("en")!.containsProfanity("fuck"); // true — no cold start

extendDictionary Option

Merge an external dictionary with the built-in one. Useful for teams managing custom word lists without modifying the core package:

const terlik = new Terlik({
  extendDictionary: {
    version: 1,
    suffixes: ["ci", "cu"],
    entries: [
      { root: "customword", variants: ["cust0mword"], severity: "high", category: "general", suffixable: true },
    ],
    whitelist: ["safeterm"],
  },
});

terlik.containsProfanity("customword");    // true
terlik.containsProfanity("customwordci");  // true (suffix match)
terlik.containsProfanity("safeterm");      // false (whitelisted)
terlik.containsProfanity("siktir");        // true (built-in still works)

The extension dictionary must follow the same schema as built-in dictionaries. Duplicate roots are skipped; suffixes and whitelist entries are merged. Pattern cache is disabled for extended instances.

terlik.language: string

Read-only property. Returns the language code of the instance.
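
A one-liner for illustration:

const en = new Terlik({ language: "en" });
console.log(en.language); // "en"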

getSupportedLanguages(): string[]

Returns all available language codes.

import { getSupportedLanguages } from "terlik.js";
getSupportedLanguages(); // ["tr", "en", "es", "de"]

normalize(text): string

Standalone export. Uses Turkish locale by default.

import { normalize, createNormalizer } from "terlik.js";

normalize("S.İ.K.T.İ.R"); // "siktir" (Turkish default)

// Custom normalizer for any language
const deNormalize = createNormalizer({
  locale: "de",
  charMap: { ä: "a", ö: "o", ü: "u", ß: "ss" },
  leetMap: { "0": "o", "3": "e" },
});
deNormalize("Scheiße"); // "scheisse"

Testing

972 tests covering all built-in languages, 39 Turkish root words, 56 English roots, suffix detection, lazy compilation, multi-language isolation, normalization, fuzzy matching, cleaning, integration, ReDoS hardening, attack surface coverage, external dictionary merging, and edge cases:

pnpm test          # run once
pnpm test:watch    # watch mode

Live Test Server

An interactive browser-based test environment is included. Chat interface on the left, real-time process log on the right — see exactly what terlik.js does at each step (normalization, pattern matching, match details, timing).

pnpm dev:live      # http://localhost:2026

See tools/README.md for details.

Integration Guide

See Integration Guide for Express, Fastify, Next.js, Nuxt, Socket.io, and multi-language server examples.

Development

pnpm install          # install dependencies
pnpm test             # run tests
pnpm test:coverage    # run tests with coverage report
pnpm typecheck        # TypeScript type checking
pnpm build            # build ESM + CJS output
pnpm bench            # run performance benchmarks
pnpm bench:compare    # run comparison benchmark vs alternatives
pnpm dev:live         # start interactive test server

Pre-commit hooks (via Husky) automatically run type checking on staged .ts files.

See CONTRIBUTING.md for contribution guidelines.

Changelog

See CHANGELOG.md for the full version history.

License

MIT