Tencent SkillHub · Data Analysis

Sentiment Radar

Multi-platform sentiment monitoring and analysis for products/brands/topics. Collect public opinions from Chinese platforms (小红书/XHS via MediaCrawler) and English platforms (Twitter/Reddit via Xpoz MCP).

skill · openclawclawhub · Free
0 downloads · 0 stars · 0 installs · 0 score · High Signal



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md, scripts/analyze.py, scripts/xhs_crawler.py, scripts/dy_scrape.py, references/report-template.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of working through the installation manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (10 sections)

Sentiment Radar

Multi-platform social media sentiment collection and analysis.

Supported Platforms

| Platform | Method | Auth required |
| --- | --- | --- |
| 小红书 (XHS) | MediaCrawler (CDP browser) | QR code login |
| Twitter | Xpoz MCP (xpoz.getTwitterPostsByKeywords) | OAuth token |
| Reddit | Xpoz MCP (xpoz.getRedditPostsByKeywords) | OAuth token |

MediaCrawler (for 小红书)

If not installed:

  git clone https://github.com/NanmiCoder/MediaCrawler ~/.openclaw/workspace/skills/media-crawler
  cd ~/.openclaw/workspace/skills/media-crawler
  uv sync
  playwright install chromium

Config: in config/base_config.py, set ENABLE_CDP_MODE = True and SAVE_DATA_OPTION = "json".

Xpoz MCP (for Twitter/Reddit)

Requires mcporter with Xpoz OAuth configured. The token is stored at ~/.mcporter/xpoz/tokens.json.
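Before collecting, it can help to sanity-check that the token file is in place. A minimal sketch, assuming only the documented path; the file's internal shape is treated as opaque:

```python
import json
from pathlib import Path


def xpoz_token_ready(token_path: Path) -> bool:
    """Return True if the Xpoz OAuth token file exists and holds non-empty JSON.

    The path ~/.mcporter/xpoz/tokens.json comes from the skill docs;
    the contents of the file are not inspected beyond being valid JSON.
    """
    if not token_path.exists():
        return False
    try:
        return bool(json.loads(token_path.read_text(encoding="utf-8")))
    except (json.JSONDecodeError, OSError):
        return False


if __name__ == "__main__":
    default = Path.home() / ".mcporter" / "xpoz" / "tokens.json"
    print("Xpoz token ready:", xpoz_token_ready(default))
```

If this returns False, run the mcporter Xpoz OAuth setup before attempting Twitter/Reddit collection.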

Step 1: Define targets

Identify products/brands and search keywords. Example:

  Products: Plaud录音笔, 钉钉闪记, 飞书录音豆
  Keywords (XHS): Plaud录音笔, 钉钉闪记, 飞书妙记, AI录音笔评测, 录音豆
  Keywords (Twitter): Plaud NotePin, DingTalk recorder, Lark voice
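The product/keyword mapping defined here is the same shape analyze.py later consumes via --products. A hypothetical sketch of building and serializing it (the alias lists below are illustrative, not the skill's canonical lists):

```python
import json

# Product -> alias keywords, matching the JSON passed to analyze.py --products.
# These aliases are examples only; extend them per campaign.
products = {
    "Plaud": ["plaud", "notepin"],
    "钉钉": ["钉钉", "dingtalk", "闪记"],
    "飞书": ["飞书", "lark", "妙记"],
}

# ensure_ascii=False keeps Chinese keywords readable in the CLI argument.
products_arg = json.dumps(products, ensure_ascii=False)
print(products_arg)
```

The printed string can be pasted directly after --products in the Step 3 command.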

Step 2: Collect data

XHS collection

Run MediaCrawler with the keywords. Use CDP mode (the user's Chrome browser) for anti-detection. The crawler needs a QR code scan for login, so run it in the background with exec(background=true).

  cd skills/media-crawler
  # Update keywords in config/base_config.py, then:
  .venv/bin/python main.py --platform xhs --lt qrcode

Environment fixes for macOS:

  export MPLBACKEND=Agg
  export PATH="/usr/sbin:$PATH"

Data output: data/xhs/json/search_contents_YYYY-MM-DD.json and search_comments_YYYY-MM-DD.json

Twitter/Reddit collection

Use Xpoz MCP tools directly:
  • xpoz.getTwitterPostsByKeywords — returns posts with engagement metrics
  • xpoz.getRedditPostsByKeywords — returns posts with comments
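Once MediaCrawler finishes, its JSON output can be loaded for downstream analysis. A minimal sketch using only the documented output paths; the record fields inside the files are not assumed:

```python
import json
from pathlib import Path


def load_xhs_data(data_dir: str, date: str) -> tuple[list, list]:
    """Load MediaCrawler's XHS notes and comments for one crawl date.

    Paths follow the documented layout:
      data/xhs/json/search_contents_YYYY-MM-DD.json
      data/xhs/json/search_comments_YYYY-MM-DD.json
    """
    base = Path(data_dir) / "xhs" / "json"
    notes = json.loads(
        (base / f"search_contents_{date}.json").read_text(encoding="utf-8")
    )
    comments = json.loads(
        (base / f"search_comments_{date}.json").read_text(encoding="utf-8")
    )
    return notes, comments
```

analyze.py reads the same directory via --data; this helper is only useful if you want to inspect or pre-filter the raw records yourself.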

Step 3: Analyze

Run the analysis script on collected data:

  python3 scripts/analyze.py \
    --data ./data \
    --products '{"Plaud": ["plaud","notepin"], "钉钉": ["钉钉","dingtalk","闪记"]}' \
    --output report.md

The script performs:
  • Keyword distribution analysis (notes per keyword, total likes/collects)
  • Product mention frequency in comments
  • Sentiment classification (positive/negative/concern/neutral)
  • Top notes ranking by engagement
  • Price/subscription complaint extraction
  • Product comparison comment extraction
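The real classification logic ships in scripts/analyze.py. As an illustration of the positive/negative/concern/neutral split, here is a keyword-lexicon sketch; the word lists and the precedence order are assumptions, not the packaged implementation:

```python
# Illustrative lexicons only — analyze.py's actual lists live in the package.
POSITIVE = {"好用", "推荐", "love", "great"}
NEGATIVE = {"难用", "退货", "bad", "terrible"}
CONCERN = {"订阅", "价格", "太贵", "subscription", "price"}


def classify(text: str) -> str:
    """Bucket a comment into negative / concern / positive / neutral.

    Negative outranks concern, which outranks positive, so a comment
    that both praises and complains gets flagged rather than hidden.
    """
    t = text.lower()
    if any(w in t for w in NEGATIVE):
        return "negative"
    if any(w in t for w in CONCERN):
        return "concern"
    if any(w in t for w in POSITIVE):
        return "positive"
    return "neutral"
```

A lexicon approach like this is cheap and auditable; swap in an LLM or a trained classifier if the keyword lists miss too much sarcasm or slang.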

Step 4: Report

The analysis outputs:
  • JSON results to stdout (for programmatic use)
  • A Markdown report to the --output path

Combine XHS + Twitter data into a comprehensive report. See references/report-template.md for structure.
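Combining per-platform results can be as simple as concatenating sections. A sketch, with the caveat that the section layout here is an assumption — references/report-template.md defines the real structure:

```python
def render_report(sections: dict[str, str], title: str = "Sentiment Radar Report") -> str:
    """Join per-platform markdown bodies into one combined report.

    `sections` maps a platform name (e.g. "XHS", "Twitter") to its
    already-rendered markdown body.
    """
    parts = [f"# {title}", ""]
    for platform, body in sections.items():
        parts += [f"## {platform}", "", body, ""]
    return "\n".join(parts)
```

For recurring monitoring, writing each dated report to its own file makes week-over-week comparison straightforward.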

Key Analysis Dimensions

  • Sentiment split — positive vs negative vs concern ratio
  • Product mentions — which products get discussed most
  • Pricing complaints — subscription fatigue, value perception
  • Comparison comments — head-to-head user opinions
  • User pain points — feature requests, complaints, unmet needs
  • Engagement metrics — likes, collects, shares as popularity signals

Notes

  • XHS data uses Chinese number format (e.g., "1.1万"); parse_count() in analyze.py handles this
  • MediaCrawler sleeps 2 s between requests to avoid rate limiting
  • Each keyword returns ~20 notes per page (configurable in the MediaCrawler config)
  • Comments are fetched per note automatically
  • For recurring monitoring, schedule via cron and compare against previous reports
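The "1.1万" handling mentioned above can be sketched as follows. This is a guess at parse_count()'s behavior based on its description, not the shipped implementation:

```python
# Chinese count suffixes: 万 = 10,000 and 亿 = 100,000,000.
UNITS = {"万": 10_000, "亿": 100_000_000}


def parse_count(raw) -> int:
    """Parse engagement counts like "1.1万" or plain "250" into integers.

    Mirrors the behavior described for analyze.py's parse_count();
    unparseable values (e.g. placeholder text) fall back to 0.
    """
    s = str(raw).strip()
    if s and s[-1] in UNITS:
        try:
            return int(float(s[:-1]) * UNITS[s[-1]])
        except ValueError:
            return 0
    try:
        return int(float(s))
    except ValueError:
        return 0
```

Falling back to 0 rather than raising keeps a single malformed field from aborting a whole crawl's analysis.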

Category context

Data access, storage, extraction, analysis, reporting, and insight generation.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 scripts · 2 docs
  • SKILL.md Primary doc
  • references/report-template.md Docs
  • scripts/analyze.py Scripts
  • scripts/dy_scrape.py Scripts
  • scripts/xhs_crawler.py Scripts