โ† All skills
Tencent SkillHub · Developer Tools

Data Scraper

Web page data collection and structured text extraction

⬇ 0 downloads · ★ 0 stars · Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: GUIDE.md, SKILL.md, run.sh

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than working through the steps by hand.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 1.0.0

Documentation

Primary doc: SKILL.md (14 sections)

data-scraper

Web Data Scraper — Extract structured data from web pages using curl + parsing. Lightweight, no browser required. Supports HTML-to-text, table extraction, price monitoring, and batch scraping.

When to Use

  • Extract text content from web pages (articles, blogs, docs)
  • Scrape product prices, reviews, or listings
  • Monitor pages for changes (price drops, new content)
  • Batch-collect data from multiple URLs
  • Convert HTML tables to structured formats (JSON/CSV)

Quick Start

# Extract readable text from URL
data-scraper fetch "https://example.com/article"

# Extract specific elements
data-scraper extract "https://example.com" --selector "h2, .price"

# Monitor for changes
data-scraper watch "https://example.com/product" --interval 3600

Text Mode (default)

Fetches the page and extracts readable content, stripping HTML tags, scripts, and styles. Similar to reader mode.

data-scraper fetch URL
# Output: clean markdown text
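
The tool itself isn't shown here, but the stripping idea can be sketched with plain sed on a local snippet (no network and no `data-scraper` binary assumed; a real implementation would use an actual HTML parser):

```shell
# Hypothetical sketch of text mode: drop <script>/<style> elements,
# then strip all remaining tags. Far less robust than a real parser.
html='<html><head><style>p{color:red}</style></head><body><h1>Title</h1><p>Hello <b>world</b>.</p><script>var x=1;</script></body></html>'
text=$(printf '%s\n' "$html" |
  sed -E 's#<script[^>]*>[^<]*</script>##g; s#<style[^>]*>[^<]*</style>##g' |
  sed -E 's#<[^>]+>##g')
printf '%s\n' "$text"
# prints: TitleHello world.
```

Note the sketch loses block boundaries (heading and paragraph run together); reader-mode output would insert line breaks per block element.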

Selector Mode

Target specific CSS selectors for precise extraction.

data-scraper extract URL --selector ".product-title, .price, .rating"
# Output: matched elements as structured data
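
For simple tag or class selectors, the matching step can be approximated with grep on a local snippet. This is only an illustration: the documented `--selector` flag implies a real CSS engine, which grep cannot replace for nested or attribute selectors.

```shell
# Hypothetical selector sketch: match <h2> and <div class="price"> elements.
html='<div class="price">$9.99</div><h2>Specs</h2><div class="rating">4.5</div>'
matches=$(printf '%s\n' "$html" |
  grep -oE '<(h2|div class="price")[^>]*>[^<]*</(h2|div)>')
printf '%s\n' "$matches"
```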

Table Mode

Extract HTML tables into structured formats.

data-scraper table URL --index 0
# Output: JSON array of row objects (header → value mapping)
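
The flattening step can be sketched in a few sed passes; this toy version emits CSV rather than the documented JSON, and assumes GNU sed (for `\n` in the replacement text):

```shell
# Hypothetical table flattening: split rows on </tr>, turn each tag into a
# comma, then tidy the commas. A real tool would parse the DOM instead.
html='<table><tr><th>Item</th><th>Price</th></tr><tr><td>Tea</td><td>3</td></tr></table>'
csv=$(printf '%s\n' "$html" |
  sed 's#</tr>#\n#g' |
  sed -E 's#<[^>]+>#,#g' |
  sed -E 's#^,+##; s#,+$##; s#,+#,#g; /^$/d')
printf '%s\n' "$csv"
# prints:
# Item,Price
# Tea,3
```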

Link Mode

Extract all links from a page with optional filtering.

data-scraper links URL --filter "*.pdf"
# Output: filtered list of absolute URLs
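
A minimal sketch of the same idea, using grep and sed on a local snippet. Relative URLs are left as-is here; the documented output absolutizes them, which this sketch skips:

```shell
# Hypothetical link extraction: pull href values, then filter by extension.
html='<a href="/a.pdf">A</a> <a href="/b.html">B</a> <a href="/c.pdf">C</a>'
links=$(printf '%s\n' "$html" |
  grep -oE 'href="[^"]+"' |
  sed -E 's#href="([^"]+)"#\1#' |
  grep '\.pdf$')
printf '%s\n' "$links"
# prints:
# /a.pdf
# /c.pdf
```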

Batch Scraping

# Scrape multiple URLs
data-scraper batch urls.txt --output results/

# With rate limiting
data-scraper batch urls.txt --delay 2000 --output results/

urls.txt format:
https://site1.com/page
https://site2.com/page
https://site3.com/page
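
The batch loop itself is simple to sketch: read the URL file line by line, fetch each target, and pause between requests. The curl line is commented out so the sketch runs offline; filenames and delay are illustrative, not the tool's actual layout.

```shell
# Hypothetical batch loop: one output file per URL, with a delay between fetches.
mkdir -p results
printf '%s\n' 'https://site1.com/page' 'https://site2.com/page' > urls.txt
i=0
while IFS= read -r url; do
  i=$((i + 1))
  # curl -sL "$url" > "results/$i.html"    # real fetch (assumes curl)
  printf '%s\n' "$url" > "results/$i.txt"  # offline placeholder
  sleep 0                                  # e.g. `sleep 2` for ~2s rate limiting
done < urls.txt
```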

Change Monitoring

# Watch for changes, alert on diff
data-scraper watch URL --selector ".price" --interval 3600

# Compare with previous snapshot
data-scraper diff URL

Stores snapshots in data-scraper/snapshots/ with timestamps. Alerts via notification-hub when changes detected.
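
One common way to implement snapshot diffing (an assumption about this tool, not its documented internals) is to hash the watched content and compare against the last stored hash:

```shell
# Hypothetical snapshot diff: any hash mismatch counts as a change.
# Content is faked so the sketch runs offline; paths are illustrative.
mkdir -p snapshots
content='price: $9.99'                         # stand-in for fetched ".price" text
new_hash=$(printf '%s' "$content" | sha256sum | cut -d' ' -f1)
old_hash=$(cat snapshots/last 2>/dev/null || echo none)
if [ "$new_hash" != "$old_hash" ]; then
  echo "change detected"
  printf '%s\n' "$new_hash" > snapshots/last   # update the stored snapshot
fi
```

Hashing detects that something changed but not what; diffing the stored text itself would also show the old and new values.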

Output Formats

Format    Flag           Use Case
Text      --format text  Reading, summarization
JSON      --format json  Data processing
CSV       --format csv   Spreadsheets
Markdown  --format md    Documentation

Headers & Auth

# Custom headers
data-scraper fetch URL --header "Authorization: Bearer TOKEN"

# Cookie-based auth
data-scraper fetch URL --cookie "session=abc123"

# User-Agent override
data-scraper fetch URL --ua "Mozilla/5.0..."

Rate Limiting & Ethics

  • Default: 1 request per second per domain
  • Respects robots.txt when --polite flag is set
  • Configurable delay between requests
  • Stops on 429 (Too Many Requests) and backs off
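
The backoff behavior described above is typically a doubling schedule; this sketch just computes the delays (no HTTP involved), assuming exponential backoff starting at one second:

```shell
# Sketch of a doubling backoff schedule for up to 3 retries.
delay=1
schedule=""
for attempt in 1 2 3; do
  schedule="$schedule $delay"  # record this retry's wait
  delay=$((delay * 2))         # double for the next attempt
done
echo "retry delays (s):$schedule"
# prints: retry delays (s): 1 2 4
```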

Error Handling

Error      Behavior
404        Log and skip
403/401    Warn about auth requirement
429        Exponential backoff (max 3 retries)
Timeout    Retry once with longer timeout
SSL error  Warn, option to proceed with --insecure

Integration

  • web-claude: Use as fallback when web_fetch isn't enough
  • competitor-watch: Feed scraped data into competitor analysis
  • seo-audit: Scrape competitor pages for SEO comparison
  • performance-tracker: Collect social metrics from public profiles

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 docs · 1 script
  • SKILL.md Primary doc
  • GUIDE.md Docs
  • run.sh Scripts