{
  "schemaVersion": "1.0",
  "item": {
    "slug": "content-moderation",
    "name": "Content Moderation",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/code-with-brian/content-moderation",
    "canonicalUrl": "https://clawhub.ai/code-with-brian/content-moderation",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/content-moderation",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=content-moderation",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/content-moderation"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/content-moderation",
    "agentPageUrl": "https://openagent3.xyz/skills/content-moderation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/content-moderation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/content-moderation/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Content Moderation",
        "body": "Moderate user-generated content using Vettly's AI-powered content moderation API. This skill uses the @vettly/mcp MCP server to check text, images, and video against configurable moderation policies with auditable decisions."
      },
      {
        "title": "Setup",
        "body": "Add the @vettly/mcp MCP server to your configuration:\n\n{\n  \"mcpServers\": {\n    \"vettly\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"@vettly/mcp\"],\n      \"env\": {\n        \"VETTLY_API_KEY\": \"your-api-key\"\n      }\n    }\n  }\n}\n\nGet an API key at vettly.dev."
      },
      {
        "title": "moderate_content",
        "body": "Check text, image, or video content against a Vettly moderation policy. Returns a safety assessment with category scores, the action taken, provider used, latency, and cost.\n\nParameters:\n\ncontent (required) - The content to moderate (text string, or URL for images/video)\npolicyId (required) - The policy ID to use for moderation\ncontentType (optional, default: text) - Type of content: text, image, or video"
      },
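      {
        "title": "Example: moderate_content arguments",
        "body": "A minimal sketch of the arguments for a moderate_content call, using only the parameters documented above; the policy ID is a placeholder, so look up a real one with list_policies first:\n\n{\n  \"content\": \"I hate this product, it's the worst thing I've ever used\",\n  \"policyId\": \"your-policy-id\",\n  \"contentType\": \"text\"\n}\n\nThe response is a safety assessment with category scores, the action taken, provider used, latency, and cost."
      },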
      {
        "title": "validate_policy",
        "body": "Validate a Vettly policy YAML without saving it. Returns validation results with any syntax or configuration errors. Use this to test policy changes before deploying them.\n\nParameters:\n\nyamlContent (required) - The YAML policy content to validate"
      },
      {
        "title": "list_policies",
        "body": "List all moderation policies available in your Vettly account. Takes no parameters. Use this to discover available policy IDs before moderating content."
      },
      {
        "title": "get_usage_stats",
        "body": "Get usage statistics for your Vettly account including request counts, costs, and moderation outcomes.\n\nParameters:\n\ndays (optional, default: 30) - Number of days to include in statistics (1-365)"
      },
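      {
        "title": "Example: get_usage_stats arguments",
        "body": "A minimal sketch of the arguments for a get_usage_stats call when you want a shorter window than the 30-day default; 7 is only an illustrative value within the documented 1-365 range:\n\n{\n  \"days\": 7\n}\n\nThe response covers request counts, costs, and moderation outcomes for that window."
      },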
      {
        "title": "get_recent_decisions",
        "body": "Get recent moderation decisions with optional filtering by outcome, content type, or policy.\n\nParameters:\n\nlimit (optional, default: 10) - Number of decisions to return (1-50)\nflagged (optional) - Filter to only flagged content (true) or safe content (false)\npolicyId (optional) - Filter by specific policy ID\ncontentType (optional) - Filter by content type: text, image, or video"
      },
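      {
        "title": "Example: get_recent_decisions arguments",
        "body": "A minimal sketch of a filtered get_recent_decisions call combining the documented filters; the policy ID is a placeholder you would replace with a real ID from list_policies:\n\n{\n  \"limit\": 25,\n  \"flagged\": true,\n  \"contentType\": \"image\",\n  \"policyId\": \"your-policy-id\"\n}\n\nOmit any filter you do not need; only limit has a documented default (10)."
      },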
      {
        "title": "When to Use",
        "body": "Moderate user-generated content (comments, posts, uploads) before publishing\nTest and validate moderation policy YAML configs during development\nAudit recent moderation decisions to review flagged content\nMonitor moderation costs and usage across your account\nCompare moderation results across different policies"
      },
      {
        "title": "Moderate a user comment",
        "body": "Moderate this user comment for my community forum policy:\n\"I hate this product, it's the worst thing I've ever used and the developers should be ashamed\"\n\nCall list_policies to find available policies, then moderate_content with the appropriate policy ID and return the safety assessment."
      },
      {
        "title": "Validate a policy before deploying",
        "body": "Validate this moderation policy YAML:\n\ncategories:\n  - name: toxicity\n    threshold: 0.8\n    action: flag\n  - name: spam\n    threshold: 0.6\n    action: block\n\nCall validate_policy and report any syntax or configuration errors."
      },
      {
        "title": "Review recent flagged content",
        "body": "Show me all flagged content from the last week\n\nCall get_recent_decisions with flagged: true to retrieve recent moderation decisions that were flagged."
      },
      {
        "title": "Tips",
        "body": "Always call list_policies first if you don't know which policy ID to use\nUse validate_policy to test policy changes before deploying to production\nUse get_usage_stats to monitor costs and catch unexpected spikes\nFilter get_recent_decisions by contentType or policyId to narrow results\nFor image and video moderation, pass the content URL rather than raw data"
      }
    ],
    "body": "Content Moderation\n\nModerate user-generated content using Vettly's AI-powered content moderation API. This skill uses the @vettly/mcp MCP server to check text, images, and video against configurable moderation policies with auditable decisions.\n\nSetup\n\nAdd the @vettly/mcp MCP server to your configuration:\n\n{\n  \"mcpServers\": {\n    \"vettly\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"@vettly/mcp\"],\n      \"env\": {\n        \"VETTLY_API_KEY\": \"your-api-key\"\n      }\n    }\n  }\n}\n\n\nGet an API key at vettly.dev.\n\nAvailable Tools\nmoderate_content\n\nCheck text, image, or video content against a Vettly moderation policy. Returns a safety assessment with category scores, the action taken, provider used, latency, and cost.\n\nParameters:\n\ncontent (required) - The content to moderate (text string, or URL for images/video)\npolicyId (required) - The policy ID to use for moderation\ncontentType (optional, default: text) - Type of content: text, image, or video\nvalidate_policy\n\nValidate a Vettly policy YAML without saving it. Returns validation results with any syntax or configuration errors. Use this to test policy changes before deploying them.\n\nParameters:\n\nyamlContent (required) - The YAML policy content to validate\nlist_policies\n\nList all moderation policies available in your Vettly account. Takes no parameters. Use this to discover available policy IDs before moderating content.\n\nget_usage_stats\n\nGet usage statistics for your Vettly account including request counts, costs, and moderation outcomes.\n\nParameters:\n\ndays (optional, default: 30) - Number of days to include in statistics (1-365)\nget_recent_decisions\n\nGet recent moderation decisions with optional filtering by outcome, content type, or policy.\n\nParameters:\n\nlimit (optional, default: 10) - Number of decisions to return (1-50)\nflagged (optional) - Filter to only flagged content (true) or safe content (false)\npolicyId (optional) - Filter by specific policy ID\ncontentType (optional) - Filter by content type: text, image, or video\nWhen to Use\nModerate user-generated content (comments, posts, uploads) before publishing\nTest and validate moderation policy YAML configs during development\nAudit recent moderation decisions to review flagged content\nMonitor moderation costs and usage across your account\nCompare moderation results across different policies\nExamples\nModerate a user comment\nModerate this user comment for my community forum policy:\n\"I hate this product, it's the worst thing I've ever used and the developers should be ashamed\"\n\n\nCall list_policies to find available policies, then moderate_content with the appropriate policy ID and return the safety assessment.\n\nValidate a policy before deploying\nValidate this moderation policy YAML:\n\ncategories:\n  - name: toxicity\n    threshold: 0.8\n    action: flag\n  - name: spam\n    threshold: 0.6\n    action: block\n\n\nCall validate_policy and report any syntax or configuration errors.\n\nReview recent flagged content\nShow me all flagged content from the last week\n\n\nCall get_recent_decisions with flagged: true to retrieve recent moderation decisions that were flagged.\n\nTips\nAlways call list_policies first if you don't know which policy ID to use\nUse validate_policy to test policy changes before deploying to production\nUse get_usage_stats to monitor costs and catch unexpected spikes\nFilter get_recent_decisions by contentType or policyId to narrow results\nFor image and video 
moderation, pass the content URL rather than raw data"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/code-with-brian/content-moderation",
    "publisherUrl": "https://clawhub.ai/code-with-brian/content-moderation",
    "owner": "code-with-brian",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/content-moderation",
    "downloadUrl": "https://openagent3.xyz/downloads/content-moderation",
    "agentUrl": "https://openagent3.xyz/skills/content-moderation/agent",
    "manifestUrl": "https://openagent3.xyz/skills/content-moderation/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/content-moderation/agent.md"
  }
}