{
  "schemaVersion": "1.0",
  "item": {
    "slug": "sentiment-radar",
    "name": "Sentiment Radar",
    "source": "tencent",
    "type": "skill",
    "category": "Data Analysis",
    "sourceUrl": "https://clawhub.ai/Danielwangyy/sentiment-radar",
    "canonicalUrl": "https://clawhub.ai/Danielwangyy/sentiment-radar",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/sentiment-radar",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=sentiment-radar",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "scripts/analyze.py",
      "scripts/xhs_crawler.py",
      "scripts/dy_scrape.py",
      "references/report-template.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=sentiment-radar",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=sentiment-radar",
        "contentDisposition": "attachment; filename=\"sentiment-radar-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/sentiment-radar"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/sentiment-radar",
    "agentPageUrl": "https://openagent3.xyz/skills/sentiment-radar/agent",
    "manifestUrl": "https://openagent3.xyz/skills/sentiment-radar/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/sentiment-radar/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Sentiment Radar",
        "body": "Multi-platform social media sentiment collection and analysis."
      },
      {
        "title": "Supported Platforms",
        "body": "| Platform | Method | Auth Required |\n| --- | --- | --- |\n| 小红书 (XHS) | MediaCrawler (CDP browser) | QR code login |\n| Twitter | Xpoz MCP (xpoz.getTwitterPostsByKeywords) | OAuth token |\n| Reddit | Xpoz MCP (xpoz.getRedditPostsByKeywords) | OAuth token |"
      },
      {
        "title": "MediaCrawler (for 小红书)",
        "body": "If not installed:\n\ngit clone https://github.com/NanmiCoder/MediaCrawler ~/.openclaw/workspace/skills/media-crawler\ncd ~/.openclaw/workspace/skills/media-crawler\nuv sync\nplaywright install chromium\n\nConfig: config/base_config.py — set ENABLE_CDP_MODE = True, SAVE_DATA_OPTION = \"json\""
      },
      {
        "title": "Xpoz MCP (for Twitter/Reddit)",
        "body": "Requires mcporter with Xpoz OAuth configured. Token at ~/.mcporter/xpoz/tokens.json."
      },
      {
        "title": "Step 1: Define targets",
        "body": "Identify products/brands and search keywords. Example:\n\nProducts: Plaud录音笔, 钉钉闪记, 飞书录音豆\nKeywords (XHS): Plaud录音笔,钉钉闪记,飞书妙记,AI录音笔评测,录音豆\nKeywords (Twitter): Plaud NotePin, DingTalk recorder, Lark voice"
      },
      {
        "title": "Step 2: Collect data",
        "body": "XHS collection\n\nRun MediaCrawler with keywords. Use CDP mode (user's Chrome browser) for anti-detection.\nThe crawler needs QR code scan for login — run in background with exec(background=true).\n\ncd skills/media-crawler\n# Update keywords in config/base_config.py, then:\n.venv/bin/python main.py --platform xhs --lt qrcode\n\nEnvironment fixes for macOS:\n\nexport MPLBACKEND=Agg\nexport PATH=\"/usr/sbin:$PATH\"\n\nData output: data/xhs/json/search_contents_YYYY-MM-DD.json and search_comments_YYYY-MM-DD.json\n\nTwitter/Reddit collection\n\nUse Xpoz MCP tools directly:\n\nxpoz.getTwitterPostsByKeywords — returns posts with engagement metrics\nxpoz.getRedditPostsByKeywords — returns posts with comments"
      },
      {
        "title": "Step 3: Analyze",
        "body": "Run the analysis script on collected data:\n\npython3 scripts/analyze.py \\\n  --data ./data \\\n  --products '{\"Plaud\": [\"plaud\",\"notepin\"], \"钉钉\": [\"钉钉\",\"dingtalk\",\"闪记\"]}' \\\n  --output report.md\n\nThe script performs:\n\nKeyword distribution analysis (notes per keyword, total likes/collects)\nProduct mention frequency in comments\nSentiment classification (positive/negative/concern/neutral)\nTop notes ranking by engagement\nPrice/subscription complaint extraction\nProduct comparison comment extraction"
      },
      {
        "title": "Step 4: Report",
        "body": "The analysis outputs:\n\nJSON results to stdout (for programmatic use)\nMarkdown report to --output path\n\nCombine XHS + Twitter data into a comprehensive report. See references/report-template.md for structure."
      },
      {
        "title": "Key Analysis Dimensions",
        "body": "Sentiment split — positive vs negative vs concern ratio\nProduct mentions — which products get discussed most\nPricing complaints — subscription fatigue, value perception\nComparison comments — head-to-head user opinions\nUser pain points — feature requests, complaints, unmet needs\nEngagement metrics — likes, collects, shares as popularity signals"
      },
      {
        "title": "Notes",
        "body": "XHS data uses Chinese number format (e.g., \"1.1万\") — parse_count() in analyze.py handles this\nMediaCrawler has 2s sleep between requests to avoid rate limiting\nEach keyword returns ~20 notes per page (configurable in MediaCrawler config)\nComments are fetched per note automatically\nFor recurring monitoring, schedule via cron and compare against previous reports"
      }
    ],
    "body": "Sentiment Radar\n\nMulti-platform social media sentiment collection and analysis.\n\nSupported Platforms\n| Platform | Method | Auth Required |\n| --- | --- | --- |\n| 小红书 (XHS) | MediaCrawler (CDP browser) | QR code login |\n| Twitter | Xpoz MCP (xpoz.getTwitterPostsByKeywords) | OAuth token |\n| Reddit | Xpoz MCP (xpoz.getRedditPostsByKeywords) | OAuth token |\n\nPrerequisites\nMediaCrawler (for 小红书)\n\nIf not installed:\n\ngit clone https://github.com/NanmiCoder/MediaCrawler ~/.openclaw/workspace/skills/media-crawler\ncd ~/.openclaw/workspace/skills/media-crawler\nuv sync\nplaywright install chromium\n\nConfig: config/base_config.py — set ENABLE_CDP_MODE = True, SAVE_DATA_OPTION = \"json\"\n\nXpoz MCP (for Twitter/Reddit)\n\nRequires mcporter with Xpoz OAuth configured. Token at ~/.mcporter/xpoz/tokens.json.\n\nWorkflow\nStep 1: Define targets\n\nIdentify products/brands and search keywords. Example:\n\nProducts: Plaud录音笔, 钉钉闪记, 飞书录音豆\nKeywords (XHS): Plaud录音笔,钉钉闪记,飞书妙记,AI录音笔评测,录音豆\nKeywords (Twitter): Plaud NotePin, DingTalk recorder, Lark voice\n\nStep 2: Collect data\nXHS collection\n\nRun MediaCrawler with keywords. Use CDP mode (user's Chrome browser) for anti-detection.\nThe crawler needs QR code scan for login — run in background with exec(background=true).\n\ncd skills/media-crawler\n# Update keywords in config/base_config.py, then:\n.venv/bin/python main.py --platform xhs --lt qrcode\n\nEnvironment fixes for macOS:\n\nexport MPLBACKEND=Agg\nexport PATH=\"/usr/sbin:$PATH\"\n\nData output: data/xhs/json/search_contents_YYYY-MM-DD.json and search_comments_YYYY-MM-DD.json\n\nTwitter/Reddit collection\n\nUse Xpoz MCP tools directly:\n\nxpoz.getTwitterPostsByKeywords — returns posts with engagement metrics\nxpoz.getRedditPostsByKeywords — returns posts with comments\n\nStep 3: Analyze\n\nRun the analysis script on collected data:\n\npython3 scripts/analyze.py \\\n  --data ./data \\\n  --products '{\"Plaud\": [\"plaud\",\"notepin\"], \"钉钉\": [\"钉钉\",\"dingtalk\",\"闪记\"]}' \\\n  --output report.md\n\nThe script performs:\n\nKeyword distribution analysis (notes per keyword, total likes/collects)\nProduct mention frequency in comments\nSentiment classification (positive/negative/concern/neutral)\nTop notes ranking by engagement\nPrice/subscription complaint extraction\nProduct comparison comment extraction\n\nStep 4: Report\n\nThe analysis outputs:\n\nJSON results to stdout (for programmatic use)\nMarkdown report to --output path\n\nCombine XHS + Twitter data into a comprehensive report. See references/report-template.md for structure.\n\nKey Analysis Dimensions\nSentiment split — positive vs negative vs concern ratio\nProduct mentions — which products get discussed most\nPricing complaints — subscription fatigue, value perception\nComparison comments — head-to-head user opinions\nUser pain points — feature requests, complaints, unmet needs\nEngagement metrics — likes, collects, shares as popularity signals\n\nNotes\nXHS data uses Chinese number format (e.g., \"1.1万\") — parse_count() in analyze.py handles this\nMediaCrawler has 2s sleep between requests to avoid rate limiting\nEach keyword returns ~20 notes per page (configurable in MediaCrawler config)\nComments are fetched per note automatically\nFor recurring monitoring, schedule via cron and compare against previous reports"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Danielwangyy/sentiment-radar",
    "publisherUrl": "https://clawhub.ai/Danielwangyy/sentiment-radar",
    "owner": "Danielwangyy",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/sentiment-radar",
    "downloadUrl": "https://openagent3.xyz/downloads/sentiment-radar",
    "agentUrl": "https://openagent3.xyz/skills/sentiment-radar/agent",
    "manifestUrl": "https://openagent3.xyz/skills/sentiment-radar/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/sentiment-radar/agent.md"
  }
}