Tencent SkillHub · Developer Tools

Scrapling - Stealth Web Scraper

Web scraping using Scrapling — a Python framework with anti-bot bypass (Cloudflare Turnstile, fingerprint spoofing), adaptive element tracking, stealth headl...




Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.
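The download-extract-review steps above can be scripted. A minimal stdlib-only sketch (the archive and destination paths are placeholders, not paths the package defines):

```python
import zipfile
from pathlib import Path

def extract_skill(archive: str, dest: str) -> Path:
    """Extract a downloaded skill ZIP and return the path to SKILL.md.

    Raises FileNotFoundError if the archive contains no SKILL.md, so the
    primary doc gets reviewed before anything else touches the package.
    """
    dest_path = Path(dest)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_path)
    matches = sorted(dest_path.rglob("SKILL.md"))
    if not matches:
        raise FileNotFoundError("SKILL.md not found in extracted package")
    return matches[0]
```

Returning the SKILL.md path (rather than the extraction root) keeps the "review SKILL.md first" step front and center.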

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md, references/patterns.md, scripts/scrape.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.
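The last validation check can be automated. A sketch that verifies the three files listed under package contents are present (the file list is taken from this page; the directory argument is a placeholder):

```python
from pathlib import Path

# Expected assets per the "What's included" list on this page.
EXPECTED = ["SKILL.md", "references/patterns.md", "scripts/scrape.py"]

def missing_assets(package_dir: str) -> list[str]:
    """Return the expected package files that are absent from package_dir."""
    root = Path(package_dir)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]
```

An empty return value means the package passes the contents check; anything else lists exactly what to look for before installing.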

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.0.3

Documentation

Primary doc: SKILL.md (7 sections)

Scrapling Skill

Source: https://github.com/D4Vinci/Scrapling (open source, MIT-like license)
PyPI: scrapling — install before first use (see below)

⚠️ Only scrape sites you have permission to access. Respect robots.txt and Terms of Service. Do not use stealth modes to bypass paywalls or access restricted content without authorization.

Installation (one-time, confirm with user before running)

```
pip install "scrapling[all]"
patchright install chromium   # required for stealth/dynamic modes
```

`scrapling[all]` installs patchright (a stealth fork of Playwright, bundled as a PyPI package — not a typo), curl_cffi, MCP server deps, and an IPython shell. `patchright install chromium` downloads Chromium (~100 MB) via patchright's own installer (the same mechanism as `playwright install chromium`). Confirm with the user before running — this installs ~200 MB of dependencies and browser binaries.
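Because installation is a confirm-first step, it helps to probe whether scrapling is already present before proposing the ~200 MB install. A stdlib-only sketch:

```python
import importlib.util

def needs_install(package: str = "scrapling") -> bool:
    """True if the top-level package cannot be imported,
    i.e. `pip install` is still needed."""
    return importlib.util.find_spec(package) is None
```

If this returns False, skip straight to checking that the Chromium binary is present rather than re-running pip.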

Script

scripts/scrape.py — CLI wrapper for all three fetcher modes.

```
# Basic fetch (text output)
python3 ~/skills/scrapling/scripts/scrape.py <url> -q

# CSS selector extraction
python3 ~/skills/scrapling/scripts/scrape.py <url> --selector ".class" -q

# Stealth mode (Cloudflare bypass) — only on sites you're authorized to access
python3 ~/skills/scrapling/scripts/scrape.py <url> --mode stealth -q

# JSON output
python3 ~/skills/scrapling/scripts/scrape.py <url> --selector "h2" --json -q
```
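The flags above imply a small argparse interface. A hypothetical sketch of how such a wrapper could be laid out — the real scripts/scrape.py ships in the package and may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Argument surface matching the CLI examples shown for scrape.py."""
    parser = argparse.ArgumentParser(description="Scrapling CLI wrapper (sketch)")
    parser.add_argument("url", help="page to fetch")
    parser.add_argument("--selector", help="CSS selector to extract")
    parser.add_argument("--mode", choices=["http", "stealth", "dynamic"],
                        default="http", help="fetcher mode")
    parser.add_argument("--json", action="store_true", help="emit JSON output")
    parser.add_argument("-q", "--quiet", action="store_true",
                        help="suppress progress output")
    return parser
```

Keeping the parser separate from the fetch logic lets the interface be inspected (and tested) without scrapling or a browser installed.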

Fetcher Modes

  • http (default) — Fast HTTP with browser TLS fingerprint spoofing. Most sites.
  • stealth — Headless Chrome with anti-detect. For Cloudflare/anti-bot.
  • dynamic — Full Playwright browser. For heavy JS SPAs.

When to Use Each Mode

  • web_fetch returns 403/429/Cloudflare challenge → use --mode stealth
  • Page content requires JS execution → use --mode dynamic
  • Regular site, just need text/data → use --mode http (default)

Python Inline Usage

For custom logic beyond the CLI, write inline Python. See references/patterns.md for:

  • Adaptive scraping (auto_save / adaptive — saves element fingerprints locally)
  • Session/cookie handling
  • Async usage
  • XPath, find_similar, attribute extraction

Notes

  • MCP server (scrapling mcp): starts a local network service for AI-native scraping. Only start if explicitly needed and trusted — it exposes a local HTTP server.
  • auto_save=True: persists element fingerprints to disk for adaptive re-scraping. Creates local state in the working directory.
  • Stealth/dynamic modes use Chromium headless — no xvfb-run needed.
  • For large-scale crawls, use the Spider API (see Scrapling docs).

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.


Package contents

Included in package
2 docs · 1 script
  • SKILL.md Primary doc
  • references/patterns.md Docs
  • scripts/scrape.py Scripts