Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Web page data collection and structured text extraction
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Web Data Scraper: Extract structured data from web pages using curl + parsing. Lightweight, no browser required. Supports HTML-to-text, table extraction, price monitoring, and batch scraping.
- Extract text content from web pages (articles, blogs, docs)
- Scrape product prices, reviews, or listings
- Monitor pages for changes (price drops, new content)
- Batch-collect data from multiple URLs
- Convert HTML tables to structured formats (JSON/CSV)
```bash
# Extract readable text from URL
data-scraper fetch "https://example.com/article"

# Extract specific elements
data-scraper extract "https://example.com" --selector "h2, .price"

# Monitor for changes
data-scraper watch "https://example.com/product" --interval 3600
```
Fetches a page and extracts readable content, stripping HTML tags, scripts, and styles. Similar to reader mode.

```bash
data-scraper fetch URL
# Output: clean markdown text
```
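Since the tool is built on curl, the core of fetch can be approximated in a couple of lines of shell. This is a deliberately crude sketch (line-oriented tag stripping, no readability heuristics, hypothetical URL), just to show the shape of the pipeline:

```bash
# Crude reader-mode approximation: download the page, then strip tags.
# A real implementation parses the DOM and drops script/style subtrees.
curl -sL "https://example.com/article" \
  | sed -e 's/<[^>]*>/ /g' \
  | tr -s '[:space:]' ' '
```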
Target specific CSS selectors for precise extraction.

```bash
data-scraper extract URL --selector ".product-title, .price, .rating"
# Output: matched elements as structured data
```
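For comparison, the same idea can be reproduced with curl piped into a standalone CSS-selector tool such as pup (a third-party utility, not part of this skill); this assumes pup is installed and the selector exists on the page:

```bash
# Select elements by CSS class with pup; `text{}` prints text content only.
curl -sL "https://example.com" | pup '.price text{}'
```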
Extract HTML tables into structured formats.

```bash
data-scraper table URL --index 0
# Output: JSON array of row objects (header → value mapping)
```
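To make the output contract concrete, here is a hypothetical run against an illustrative two-column pricing table; the URL and values are made up:

```bash
data-scraper table "https://example.com/pricing" --index 0
# Example output shape (illustrative values):
# [
#   {"Plan": "Basic", "Price": "$10/mo"},
#   {"Plan": "Pro",   "Price": "$25/mo"}
# ]
```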
Extract all links from a page, with optional filtering.

```bash
data-scraper links URL --filter "*.pdf"
# Output: filtered list of absolute URLs
```
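Under the hood this is close to a curl-and-grep pipeline. A rough shell equivalent, which unlike the real command does not resolve relative URLs to absolute ones:

```bash
# Pull href attribute values, then keep only links ending in .pdf.
curl -sL "https://example.com" \
  | grep -oE 'href="[^"]+"' \
  | sed -e 's/^href="//' -e 's/"$//' \
  | grep -i '\.pdf$'
```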
```bash
# Scrape multiple URLs
data-scraper batch urls.txt --output results/

# With rate limiting
data-scraper batch urls.txt --delay 2000 --output results/
```

urls.txt format:

```
https://site1.com/page
https://site2.com/page
https://site3.com/page
```
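A batch run with a delay reduces to a simple loop over urls.txt. This sketch assumes one URL per line and approximates the --delay 2000 example with sleep 2; the output naming is illustrative:

```bash
# One curl request per line of urls.txt, two seconds apart.
mkdir -p results
while IFS= read -r url; do
  out="results/$(printf '%s' "$url" | tr -c 'A-Za-z0-9' '_').html"
  curl -sL "$url" -o "$out"
  sleep 2
done < urls.txt
```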
```bash
# Watch for changes, alert on diff
data-scraper watch URL --selector ".price" --interval 3600

# Compare with previous snapshot
data-scraper diff URL
```

Stores snapshots in data-scraper/snapshots/ with timestamps. Alerts via notification-hub when changes are detected.
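The snapshot-and-diff cycle can be sketched the same way. The documented data-scraper/snapshots/ path is used below, but the file naming and URL are assumptions for illustration:

```bash
# Save a timestamped snapshot, then compare it with the previous one.
dir="data-scraper/snapshots"
mkdir -p "$dir"
snap="$dir/$(date +%Y%m%d-%H%M%S).html"
curl -sL "https://example.com/product" -o "$snap"
prev=$(ls "$dir"/*.html 2>/dev/null | tail -n 2 | head -n 1)
[ -n "$prev" ] && [ "$prev" != "$snap" ] && diff -q "$prev" "$snap"
```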
| Format | Flag | Use Case |
|---|---|---|
| Text | --format text | Reading, summarization |
| JSON | --format json | Data processing |
| CSV | --format csv | Spreadsheets |
| Markdown | --format md | Documentation |
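Assuming the format flag combines with any command as the table suggests, switching output is just a matter of repeating the same call (the URLs here are hypothetical):

```bash
# Same data, three different output formats.
data-scraper table "https://example.com/pricing" --index 0 --format json
data-scraper table "https://example.com/pricing" --index 0 --format csv
data-scraper fetch "https://example.com/article" --format md
```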
```bash
# Custom headers
data-scraper fetch URL --header "Authorization: Bearer TOKEN"

# Cookie-based auth
data-scraper fetch URL --cookie "session=abc123"

# User-Agent override
data-scraper fetch URL --ua "Mozilla/5.0..."
```
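Because the tool wraps curl, each of these options maps onto a standard curl flag; the equivalents below use only stock curl:

```bash
curl -sL "https://example.com" -H "Authorization: Bearer TOKEN"  # custom header
curl -sL "https://example.com" -b "session=abc123"               # cookie
curl -sL "https://example.com" -A "Mozilla/5.0..."               # User-Agent
```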
- Default: 1 request per second per domain
- Respects robots.txt when the --polite flag is set (see the sketch below)
- Configurable delay between requests
- Stops on 429 (Too Many Requests) and backs off
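A very simplified version of that robots.txt check, for illustration only: real parsers handle per-agent rules and path patterns, while this only greps for a blanket disallow on a hypothetical site.

```bash
# Skip scraping if robots.txt disallows everything at the root.
if curl -sL "https://example.com/robots.txt" | grep -q '^Disallow: /$'; then
  echo "robots.txt disallows crawling; skipping." >&2
  exit 1
fi
```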
| Error | Behavior |
|---|---|
| 404 | Log and skip |
| 403/401 | Warn about auth requirement |
| 429 | Exponential backoff (max 3 retries) |
| Timeout | Retry once with a longer timeout |
| SSL error | Warn, with option to proceed via --insecure |
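The 429 row is easy to picture as a loop. Here is a hand-rolled sketch of exponential backoff with a three-retry cap, using only stock curl against a placeholder URL:

```bash
# Retry on 429 up to 3 times, doubling the wait between attempts.
url="https://example.com"
delay=1
for attempt in 1 2 3; do
  code=$(curl -sL -o /dev/null -w '%{http_code}' "$url")
  [ "$code" != "429" ] && break
  echo "Got 429; retry $attempt in ${delay}s" >&2
  sleep "$delay"
  delay=$((delay * 2))
done
```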
- web-claude: Use as a fallback when web_fetch isn't enough
- competitor-watch: Feed scraped data into competitor analysis
- seo-audit: Scrape competitor pages for SEO comparison
- performance-tracker: Collect social metrics from public profiles