Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Profanity detection and content moderation library with leetspeak, Unicode homoglyph, and ML-powered detection. Use when filtering user-generated content, moderating comments, checking text for profanity, censoring messages, or building content moderation into applications. Supports 24 languages.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Profanity detection library that catches evasion attempts like leetspeak (f4ck, sh1t), Unicode tricks (Cyrillic lookalikes), and obfuscated text.
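Leetspeak evasion works by swapping letters for visually similar digits and symbols, so detection boils down to normalizing those substitutions back to letters before matching. As a rough illustration of the technique (the function names and the tiny substitution map here are hypothetical, not glin-profanity's API or wordlist):

```python
# Illustrative sketch of leetspeak normalization; the library's real
# matching is more sophisticated (fuzzy matching, per-language lists).
LEET_MAP = {"1": "i", "3": "e", "0": "o", "@": "a", "$": "s", "5": "s", "7": "t"}

def normalize_leetspeak(text: str) -> str:
    """Map common digit/symbol substitutions back to letters."""
    return "".join(LEET_MAP.get(ch, ch) for ch in text.lower())

def contains_word(text: str, blocklist: set[str]) -> bool:
    """Check the normalized text against a blocklist."""
    normalized = normalize_leetspeak(text)
    return any(word in normalized for word in blocklist)

print(contains_word("that is sh1t", {"shit"}))  # True
```

A real implementation also has to handle substitutions that are ambiguous (the same symbol standing in for different letters), which is why the library exposes a `leetspeakLevel` knob rather than a single fixed map.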
```shell
# JavaScript/TypeScript
npm install glin-profanity

# Python
pip install glin-profanity
```
```typescript
import { checkProfanity, Filter } from 'glin-profanity';

// Simple check
const result = checkProfanity("Your text here", {
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});
result.containsProfanity  // boolean
result.profaneWords       // array of detected words
result.processedText      // censored version

// With Filter instance
const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true,
  normalizeUnicode: true
});
filter.isProfane("text")       // boolean
filter.checkProfanity("text")  // full result object
```
```python
from glin_profanity import Filter

filter = Filter({
    "languages": ["english"],
    "replace_with": "***",
    "detect_leetspeak": True
})
filter.is_profane("text")       # True/False
filter.check_profanity("text")  # Full result dict
```
```tsx
import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText } = useProfanityChecker({ detectLeetspeak: true });
  return <input onChange={(e) => checkText(e.target.value)} />;
}
```
| Feature | Description |
| --- | --- |
| Leetspeak detection | f4ck, sh1t, @$$ patterns |
| Unicode normalization | Cyrillic fսck → fuck |
| 24 languages | Including Arabic, Chinese, Russian, Hindi |
| Context whitelists | Medical, gaming, technical domains |
| ML integration | Optional TensorFlow.js toxicity detection |
| Result caching | LRU cache for performance |
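Unicode normalization in this context means folding lookalike characters (homoglyphs) into their ASCII counterparts before matching. A minimal sketch of the idea, assuming a tiny hand-rolled Cyrillic lookalike map on top of the standard NFKC normalization (real libraries ship much larger confusables tables; nothing here is glin-profanity's actual code):

```python
import unicodedata

# Illustrative lookalike map: a handful of Cyrillic letters that render
# almost identically to Latin ones. Real confusables tables are far larger.
HOMOGLYPHS = {"а": "a", "е": "e", "о": "o", "с": "c", "р": "p", "х": "x", "у": "y"}

def normalize_unicode(text: str) -> str:
    # NFKC folds compatibility forms (e.g. fullwidth letters) to ASCII;
    # the map then handles lookalikes NFKC leaves untouched.
    text = unicodedata.normalize("NFKC", text)
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

print(normalize_unicode("fuсk"))  # 'fuck' — the 'с' here was Cyrillic U+0441
```

After this pass, ordinary blocklist matching catches text that would otherwise slip through byte-for-byte comparison.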
```typescript
const filter = new Filter({
  languages: ['english', 'spanish'],  // Languages to check
  detectLeetspeak: true,              // Catch f4ck, sh1t
  leetspeakLevel: 'moderate',         // basic | moderate | aggressive
  normalizeUnicode: true,             // Catch Unicode tricks
  replaceWith: '*',                   // Replacement character
  preserveFirstLetter: false,         // f*** vs ****
  customWords: ['badword'],           // Add custom words
  ignoreWords: ['hell'],              // Whitelist words
  cacheSize: 1000                     // LRU cache entries
});
```
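The `cacheSize` option exists because profanity checks are pure functions of their input, so repeated texts (common in chat traffic) can be served from an LRU cache. The same idea in a few lines of Python, with a trivial stand-in for the real check (this sketch is not the library's implementation):

```python
from functools import lru_cache

@lru_cache(maxsize=1000)  # mirrors the cacheSize idea from the config above
def cached_check(text: str) -> bool:
    # Stand-in for the full (expensive) profanity pipeline.
    return "bad" in text.lower()

cached_check("Bad word")
cached_check("Bad word")  # identical input: served from the cache
print(cached_check.cache_info().hits)  # 1
```

An LRU policy works well here because chat traffic is bursty: the same messages and copy-pasted spam recur within a short window, then age out.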
```typescript
import { analyzeContext } from 'glin-profanity';

const result = analyzeContext("The patient has a breast tumor", {
  domain: 'medical',        // medical | gaming | technical | educational
  contextWindow: 3,         // Words around match to consider
  confidenceThreshold: 0.7  // Minimum confidence to flag
});
```
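Context analysis suppresses false positives by looking at the words surrounding a match: if they signal a legitimate domain use (a medical note, a gaming chat), the match is not flagged. A minimal sketch of the window-based idea, with hypothetical cue words and function names (not glin-profanity's API or heuristics):

```python
# Hypothetical cue words per domain; a real system would learn or curate these.
DOMAIN_CUES = {"medical": {"patient", "tumor", "doctor", "diagnosis"}}

def should_flag(words: list[str], index: int, domain: str, window: int = 3) -> bool:
    """Flag the word at `index` unless nearby words signal a legitimate domain use."""
    lo, hi = max(0, index - window), index + window + 1
    context = {w.lower().strip(".,") for w in words[lo:index] + words[index + 1:hi]}
    return not (context & DOMAIN_CUES.get(domain, set()))

words = "The patient has a breast tumor".split()
print(should_flag(words, 4, "medical"))  # False — "patient" and "tumor" are in the window
```

The `contextWindow` option in the snippet above plays the same role as `window` here: how many words on each side of the match count as context.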
```typescript
import { batchCheck } from 'glin-profanity';

const results = batchCheck(
  ["Comment 1", "Comment 2", "Comment 3"],
  { returnOnlyFlagged: true }
);
```
```typescript
import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

await loadToxicityModel({ threshold: 0.9 });
const result = await checkToxicity("You're the worst");
// { toxic: true, categories: { toxicity: 0.92, insult: 0.87 } }
```
```typescript
const filter = new Filter({
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

bot.on('message', (msg) => {
  if (filter.isProfane(msg.text)) {
    deleteMessage(msg);
    warnUser(msg.author);
  }
});
```
```typescript
const result = filter.checkProfanity(userContent);
if (result.containsProfanity) {
  return {
    valid: false,
    issues: result.profaneWords,
    suggestion: result.processedText  // Censored version
  };
}
```
- Docs: https://www.typeweaver.com/docs/glin-profanity
- Demo: https://www.glincker.com/tools/glin-profanity
- GitHub: https://github.com/GLINCKER/glin-profanity
- npm: https://www.npmjs.com/package/glin-profanity
- PyPI: https://pypi.org/project/glin-profanity/