Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build client-ready web scrapers with clean data output. Use when creating scrapers for clients, extracting data from websites, or delivering scraping projects.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Turn scraping briefs into deliverable scraping projects. Generates the scraper, runs it, cleans the data, and packages everything for the client.
```
/web-scraper-as-a-service "Scrape all products from example-store.com - need name, price, description, images. CSV output."
/web-scraper-as-a-service https://example.com --fields "title,price,rating,url" --format csv
/web-scraper-as-a-service brief.txt
```
Before writing any code:

1. Fetch the target URL to understand the page structure.
2. Identify:
   - Is the site server-rendered (static HTML) or client-rendered (JavaScript/SPA)?
   - What anti-scraping measures are visible? (Cloudflare, CAPTCHAs, rate limits)
   - Pagination pattern (URL params, infinite scroll, load-more button)
   - Data structure (product cards, table rows, list items)
   - Total estimated volume (number of pages/items)
3. Choose the right tool:
   - Static HTML → Python + requests + BeautifulSoup
   - JavaScript-rendered → Python + playwright
   - API available → direct API calls (check network tab patterns)
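To decide between the first two paths, it can help to probe whether the target data is already present in the raw HTML. A minimal sketch, assuming requests and beautifulsoup4 are installed; the URL and the div.product-card selector are placeholders for the real target:

```python
import requests
from bs4 import BeautifulSoup

def is_server_rendered(url: str, probe_selector: str) -> bool:
    """True if the probe selector matches in the raw HTML, i.e. the data
    arrives without JavaScript and requests + BeautifulSoup will do."""
    resp = requests.get(url, timeout=15,
                        headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return len(soup.select(probe_selector)) > 0

# Placeholder target; swap in the real URL and a selector for the data you need
if is_server_rendered("https://example.com/products", "div.product-card"):
    print("Static HTML → requests + BeautifulSoup")
else:
    print("Likely JS-rendered → playwright")
```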
Generate a complete Python script in the scraper/ directory:

```
scraper/
  scrape.py          # Main scraper script
  requirements.txt   # Dependencies
  config.json        # Target URLs, fields, settings
  README.md          # Setup and usage instructions for client
```

scrape.py must include:

```python
# Required features in every scraper:

# 1. Configuration
import json
with open('config.json') as f:
    config = json.load(f)

# 2. Rate limiting (ALWAYS - be respectful)
import time
DELAY_BETWEEN_REQUESTS = 2  # seconds, adjustable in config

# 3. Retry logic
MAX_RETRIES = 3
RETRY_DELAY = 5

# 4. User-Agent rotation
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36...",
    # ... at least 5 user agents
]

# 5. Progress tracking
print(f"Scraping page {current}/{total} - {items_collected} items collected")

# 6. Error handling
#    - Log errors but don't crash on individual page failures
#    - Save progress incrementally (don't lose data on crash)
#    - Write errors to error_log.txt

# 7. Output
#    - Save data incrementally (append to file, don't hold in memory)
#    - Support CSV and JSON output
#    - Clean and normalize data before saving

# 8. Resume capability
#    - Track last successfully scraped page/URL
#    - Can resume from where it left off if interrupted
```
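The comment spec above leaves the implementation open. One hedged sketch of how features 3, 7, and 8 could fit together; fetch_with_retry, append_rows, and state.json are illustrative names, not part of the package:

```python
import csv
import json
import os
import time

import requests

MAX_RETRIES = 3
RETRY_DELAY = 5            # seconds between retry attempts
STATE_FILE = "state.json"  # hypothetical resume checkpoint

def fetch_with_retry(url: str) -> str:
    """Feature 3: retry transient failures instead of crashing."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            resp = requests.get(url, timeout=15)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException:
            if attempt == MAX_RETRIES:
                raise
            time.sleep(RETRY_DELAY)

def append_rows(path, rows, fieldnames):
    """Feature 7: append each page's rows so a crash loses at most one page."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if is_new:
            writer.writeheader()
        writer.writerows(rows)

def load_checkpoint() -> int:
    """Feature 8: resume from the last successfully scraped page."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f).get("last_page", 0)
    return 0

def save_checkpoint(page: int) -> None:
    with open(STATE_FILE, "w") as f:
        json.dump({"last_page": page}, f)
```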
After scraping, clean the data:

1. Remove duplicates (by unique identifier or composite key).
2. Normalize text (strip extra whitespace, fix encoding issues, consistent capitalization).
3. Validate data (no empty required fields, prices are numbers, URLs are valid).
4. Standardize formats (dates to ISO 8601, currency to numbers, consistent units).
5. Generate a data quality report:

```
Data Quality Report
───────────────────
Total records: 2,487
Duplicates removed: 13
Empty fields filled: 0
Fields with issues: price (3 records had non-numeric values → cleaned)
Completeness: 99.5%
```
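A stdlib-only sketch of what that cleaning pass might look like; the composite key (name, url) and the price coercion are assumptions about the data shape, not the package's actual code:

```python
import re

def clean_records(records: list[dict]) -> tuple[list[dict], dict]:
    """Dedupe, normalize, and validate; returns clean rows plus stats
    for the data quality report."""
    seen, cleaned, price_fixes = set(), [], 0
    for rec in records:
        # Normalize text: collapse runs of whitespace in string fields
        rec = {k: re.sub(r"\s+", " ", v).strip() if isinstance(v, str) else v
               for k, v in rec.items()}
        # Remove duplicates by an assumed composite key
        key = (rec.get("name"), rec.get("url"))
        if key in seen:
            continue
        seen.add(key)
        # Validate price: strip currency symbols, coerce to a number
        raw = str(rec.get("price", ""))
        match = re.search(r"[\d][\d.,]*", raw)
        if match and match.group() != raw:
            price_fixes += 1
        rec["price"] = float(match.group().replace(",", "")) if match else None
        cleaned.append(rec)
    stats = {
        "total_records": len(cleaned),
        "duplicates_removed": len(records) - len(cleaned),
        "price_values_cleaned": price_fixes,
    }
    return cleaned, stats
```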
Generate a complete deliverable:

```
delivery/
  data.csv                   # Clean data in requested format
  data.json                  # JSON alternative
  data-quality-report.md     # Quality metrics
  scraper-documentation.md   # How the scraper works
  README.md                  # Quick start guide
```

scraper-documentation.md includes:
- What was scraped and from where
- How many records were collected
- Data fields and their descriptions
- How to re-run the scraper
- Known limitations
- Date of scraping
Present:
- Summary: X records scraped from Y pages, Z% data quality
- Sample data: first 5 rows of the output
- File locations: where the deliverables are saved
- Client handoff notes: what to tell the client about the data
Based on the target type, use the appropriate field template (a sketch of turning a template into config.json follows the list below):
- E-commerce products: name, price, original_price, discount, description, images, category, sku, rating, review_count, availability, url
- Real estate listings: address, price, bedrooms, bathrooms, sqft, lot_size, listing_type, agent, description, images, url
- Job listings: title, company, location, salary, job_type, description, requirements, posted_date, url
- Local businesses: business_name, address, phone, website, category, rating, review_count, hours, description
- Articles and blog posts: title, author, date, content, tags, url, image
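For example, the e-commerce template might seed config.json like this; every selector below is a placeholder that would come out of the Step 1 site analysis, not part of the package:

```python
import json

# Hypothetical config derived from the e-commerce template; the selectors
# are placeholders for whatever the target site actually uses.
config = {
    "start_url": "https://example-store.com/products",
    "output_format": "csv",
    "delay_between_requests": 2,
    "fields": {
        "name": "h2.product-title",
        "price": "span.price",
        "description": "div.product-description",
        "images": "img.product-image",
        "url": "a.product-link",
    },
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```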
- Always respect robots.txt: check before scraping (see the sketch below)
- Rate limit: minimum 2-second delay between requests
- Identify yourself: use a realistic but honest User-Agent
- Don't scrape personal data (emails, phone numbers) unless explicitly authorized by the client AND the data is publicly displayed
- Cache responses: don't re-scrape pages unnecessarily
- Check ToS: note if the site's terms prohibit scraping and inform the client
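The robots.txt check can be done with the standard library alone. A minimal sketch; SCRAPER_UA is a placeholder for whatever User-Agent the scraper actually sends:

```python
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

SCRAPER_UA = "Mozilla/5.0 (compatible; ClientScraper/1.0)"  # placeholder UA

def allowed_by_robots(base_url: str, path: str) -> bool:
    """True if the site's robots.txt permits fetching base_url + path."""
    parser = RobotFileParser()
    parser.set_url(urljoin(base_url, "/robots.txt"))
    parser.read()
    return parser.can_fetch(SCRAPER_UA, urljoin(base_url, path))

# Placeholder target; check before every new crawl, not once per project
print(allowed_by_robots("https://example.com", "/products?page=1"))
```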