Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
MCP server providing profanity detection tools for AI assistants. Use when reviewing batches of user content, auditing comments for moderation reports, analyzing text for profanity before publishing, or when AI needs content moderation capabilities during workflows.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
MCP (Model Context Protocol) server that provides profanity detection as tools for AI assistants like Claude Desktop, Cursor, and Windsurf. Best for: AI-assisted content review workflows, batch moderation, audit reports, and content validation before publishing.
Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

    {
      "mcpServers": {
        "glin-profanity": {
          "command": "npx",
          "args": ["-y", "glin-profanity-mcp"]
        }
      }
    }
Add to `.cursor/mcp.json`:

    {
      "mcpServers": {
        "glin-profanity": {
          "command": "npx",
          "args": ["-y", "glin-profanity-mcp"]
        }
      }
    }
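Both clients register the server with the same entry shape, so a short script can splice it into whichever config file is in use. A minimal sketch in TypeScript, using Node's `fs` module and the Cursor config path shown above; the `addMcpServer` helper is illustrative, not part of the package:

```typescript
import * as fs from "fs";

type McpEntry = { command: string; args: string[] };
type McpConfig = { mcpServers?: Record<string, McpEntry> };

// Merge a server entry into an existing MCP config without clobbering
// other registered servers. (Illustrative helper, not part of the package.)
function addMcpServer(config: McpConfig, name: string, entry: McpEntry): McpConfig {
  return { ...config, mcpServers: { ...(config.mcpServers ?? {}), [name]: entry } };
}

// Example: update Cursor's project-level config (path from the docs above).
const configPath = ".cursor/mcp.json";
const current: McpConfig = fs.existsSync(configPath)
  ? JSON.parse(fs.readFileSync(configPath, "utf8"))
  : {};
const updated = addMcpServer(current, "glin-profanity", {
  command: "npx",
  args: ["-y", "glin-profanity-mcp"],
});
fs.mkdirSync(".cursor", { recursive: true });
fs.writeFileSync(configPath, JSON.stringify(updated, null, 2));
```

Reading the existing file first keeps any other `mcpServers` entries intact instead of overwriting them.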
| Tool | Description |
| --- | --- |
| `check_profanity` | Check text for profanity with detailed results |
| `censor_text` | Censor profanity with configurable replacement |
| `batch_check` | Check multiple texts at once (up to 100) |
| `validate_content` | Get safety score (0-100) with action recommendation |

| Tool | Description |
| --- | --- |
| `analyze_context` | Context-aware analysis (medical, gaming, etc.) |
| `detect_obfuscation` | Detect leetspeak and Unicode tricks |
| `explain_match` | Explain why text was flagged |
| `compare_strictness` | Compare detection across strictness levels |

| Tool | Description |
| --- | --- |
| `suggest_alternatives` | Suggest clean replacements |
| `analyze_corpus` | Analyze up to 500 texts for stats |
| `create_regex_pattern` | Generate regex for custom detection |
| `get_supported_languages` | List all 24 supported languages |

| Tool | Description |
| --- | --- |
| `track_user_message` | Track messages for repeat offenders |
| `get_user_profile` | Get user's moderation history |
| `get_high_risk_users` | List users with high violation rates |
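To make the tables concrete, here is a toy stand-in for the kind of check/censor behavior these tools expose. The word list, scoring formula, and result shape below are illustrative assumptions for this sketch, not glin-profanity's actual detection logic or schema:

```typescript
// Placeholder word list for the sketch; the real lists live in glin-profanity.
const BLOCKLIST = ["darn", "heck"];

type CheckResult = { flagged: boolean; matches: string[]; safetyScore: number };

// Roughly what a check_profanity/validate_content-style result could contain.
function checkProfanity(text: string): CheckResult {
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  const matches = words.filter((w) => BLOCKLIST.includes(w));
  // Naive score: start at 100, subtract per match (illustrative only).
  const safetyScore = Math.max(0, 100 - matches.length * 25);
  return { flagged: matches.length > 0, matches, safetyScore };
}

// Censor while preserving the first letter of each flagged word.
function censorText(text: string): string {
  return text.replace(/[A-Za-z']+/g, (w) =>
    BLOCKLIST.includes(w.toLowerCase()) ? w[0] + "*".repeat(w.length - 1) : w
  );
}
```

The real server adds strictness levels, obfuscation handling, and 24 languages on top of this basic check/censor shape.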
- "Check these 50 user comments and tell me which ones need moderation"
- "Validate this blog post before publishing - use high strictness"
- "Analyze this medical article with medical domain context"
- "Batch check all messages in this array and return only flagged ones"
- "Generate a moderation audit report for these comments"
- "Explain why 'f4ck' was detected as profanity"
- "Compare strictness levels for this gaming chat message"
- "Suggest professional alternatives for this flagged text"
- "Censor the profanity but preserve first letters"
Use the MCP server when:
- AI assists with content review workflows
- Batch-checking user submissions
- Generating moderation reports
- Validating content before publishing
- Human-in-the-loop moderation

Use the core library instead when:
- Filtering automatically in real time (hooks/middleware)
- Every message needs checking without AI involvement
- Performance is critical (< 1ms response)
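The core-library path amounts to an in-process check on every message, which is why it suits the sub-millisecond cases. A minimal sketch of that hook pattern, using a placeholder word list and hypothetical helpers rather than glin-profanity's real API:

```typescript
// Placeholder list for the sketch; not glin-profanity's actual word lists.
const BLOCKED = ["darn", "heck"];

// Precompile once so the per-message cost is a single regex test.
const BLOCKED_RE = new RegExp(`\\b(?:${BLOCKED.join("|")})\\b`, "i");

function isClean(message: string): boolean {
  return !BLOCKED_RE.test(message);
}

// Hook-style usage: decide before the message reaches storage or AI review.
function onIncomingMessage(message: string): { accepted: boolean } {
  return { accepted: isClean(message) };
}
```

No network hop and no model in the loop, which is the trade-off the comparison above is pointing at.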
- npm: https://www.npmjs.com/package/glin-profanity-mcp
- GitHub: https://github.com/GLINCKER/glin-profanity/tree/release/packages/mcp
- Core library: https://www.npmjs.com/package/glin-profanity