Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Legal web scraping with robots.txt compliance, rate limiting, and GDPR/CCPA-aware data handling.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Before writing any scraping code:
- robots.txt → Fetch {domain}/robots.txt and check whether the target path is disallowed. If it is, stop.
- Terms of Service → Check /terms, /tos, /legal. An explicit scraping prohibition means you need permission.
- Data type → Public factual data (prices, listings) is safer. Personal data triggers GDPR/CCPA.
- Authentication → Data behind a login is off-limits without authorization. Never scrape protected content.
- API available? → If the site offers an API, use it. Always. Scraping when an API exists often violates ToS.
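The robots.txt step above can be sketched with the standard library's `urllib.robotparser`. The `ROBOTS_TXT` content, the `MyBot` user agent, and the example URLs here are illustrative assumptions; in practice you would fetch the file from `{domain}/robots.txt` first.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration; normally this is
# fetched from {domain}/robots.txt before any scraping request.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

def path_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots.txt permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

print(path_allowed(ROBOTS_TXT, "MyBot", "https://example.com/listings"))   # True
print(path_allowed(ROBOTS_TXT, "MyBot", "https://example.com/private/x"))  # False
```

If the check returns False, stop: the path is disallowed and scraping it likely breaches the site's stated terms.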
- Public data, no login → Generally legal (hiQ v. LinkedIn, 2022)
- Bypassing barriers → CFAA violation risk (Van Buren v. US, 2021)
- Ignoring robots.txt → Gray area, often breaches ToS (Meta v. Bright Data, 2024)
- Personal data without consent → GDPR/CCPA violation
- Republishing copyrighted content → Copyright infringement
- Rate limit: Minimum 2-3 seconds between requests. Faster = server strain = legal exposure.
- User-Agent: Real browser string plus a contact email: Mozilla/5.0 ... (contact: you@email.com)
- Respect 429: Exponential backoff. Ignoring 429s shows intent to harm.
- Session reuse: Keep connections open to reduce server load.
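A minimal sketch of the rate-limit and backoff rules above, assuming a 2-second floor between requests; the `PriceBot` name in the User-Agent string is a hypothetical example, not part of the package.

```python
import random

MIN_DELAY = 2.0  # seconds between requests, per the guidance above

def backoff_delay(attempt: int, base: float = MIN_DELAY, cap: float = 60.0) -> float:
    """Exponential backoff for 429 responses: base * 2^attempt, capped,
    plus up to 1s of jitter so retries don't synchronize."""
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0, 1)

def polite_headers(contact_email: str) -> dict:
    """User-Agent that identifies the client and gives a contact address."""
    return {
        "User-Agent": f"Mozilla/5.0 (compatible; PriceBot/1.0; contact: {contact_email})"
    }
```

On each 429, sleep for `backoff_delay(attempt)` before retrying; on success, still wait at least `MIN_DELAY` before the next request.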
- Strip PII immediately → Don't collect names, emails, or phone numbers unless legally justified.
- No fingerprinting → Don't combine data to identify individuals indirectly.
- Minimize storage → Cache only what you need; delete what you don't.
- Audit trail → Log what, when, and where. Evidence of good faith if challenged.

For code patterns and a robots.txt parser, see code.md
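As an illustration of the "strip PII immediately" rule, here is a simplified redaction pass. The regex patterns are rough assumptions for demonstration only; real PII detection needs considerably more care than two patterns.

```python
import re

# Simplified, assumed patterns; not exhaustive PII detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def strip_pii(text: str) -> str:
    """Redact emails and phone-like numbers before any data is stored."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(strip_pii("Contact jane@shop.example or +1 (555) 123-4567 for pricing."))
# Contact [EMAIL] or [PHONE] for pricing.
```

Running redaction at collection time, before writing to disk, keeps un-minimized personal data out of your cache entirely.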
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.