โ† All skills
Tencent SkillHub · Productivity

Zoomin Docs Portal Scraper Tool

Scrape documentation content from Zoomin Software portals using Playwright browser automation to handle dynamic content loading. Use when standard web fetchi...

0 Downloads
0 Stars
0 Installs
0 Score
High Signal


Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md, scripts/analyze_docs_batch.py, scripts/run_scraper.sh, scripts/scrape_zoomin.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.2

Documentation

Primary doc: SKILL.md (4 sections)

Zoomin Scraper Skill

This skill provides a mechanism to robustly scrape content from documentation portals powered by Zoomin Software. It leverages Playwright to launch a headless Chromium browser, execute JavaScript, wait for dynamic content to load, and then extract the rendered text from the main article body.

Usage

To use this skill, provide a text file containing one URL per line. The skill processes each URL and saves the extracted content to a specified output directory.
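As a rough illustration of the expected input format, a helper like the following (not part of the package; the name `load_urls` is hypothetical) would read such a file, ignoring blank lines:

```python
# Illustrative helper (not part of this package): read a urls_file in the
# format the skill expects -- one URL per line, blank lines ignored.
def load_urls(path):
    urls = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                urls.append(line)
    return urls
```
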

Prerequisites (Manual Setup)

This skill relies on Playwright. Before using it for the first time on a new system, you must manually install Playwright and its Chromium browser binaries by running the following commands in your terminal:

  pip install playwright
  playwright install chromium

These commands should be executed within the virtual environment you intend to use for this skill.
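If you want to confirm the prerequisite is met before running anything, a small preflight check like this can help (a sketch, not part of the package; the function names are illustrative):

```python
# Illustrative preflight check (not part of this package): verify that the
# Playwright Python package is importable in the current environment.
import importlib.util

def playwright_installed():
    """Return True if the playwright package can be imported."""
    return importlib.util.find_spec("playwright") is not None

def setup_hint():
    """Return a short status or the install commands the docs prescribe."""
    if playwright_installed():
        return "playwright package found"
    return "run: pip install playwright && playwright install chromium"
```
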

Running the Scraper

To run the scraper, invoke the run_scraper.sh script located in this skill's scripts/ directory. This wrapper script activates your specified Python virtual environment before executing the main Python Playwright script.

Parameters for run_scraper.sh:
  • urls_file: Path to a text file containing the URLs to scrape, one URL per line.
  • output_directory (optional): Directory where the scraped content will be saved. Defaults to scraped_docs_output.
  • venv_path: Absolute path to your Python virtual environment (e.g., /home/justin/scraper/.env).

Example: assuming your list of URLs is in path/to/urls.txt, you want to save the output to my_scraped_docs/, and your virtual environment is at path/to/my_venv:

  zoomin-scraper urls_file="path/to/urls.txt" output_directory="my_scraped_docs" venv_path="path/to/my_venv"

The script launches a headless Chromium browser, navigates to each URL, waits for the main content to load (specifically targeting <article id="zDocsContent">), and then saves the extracted text. It sets a browser-like user agent and adds a small delay between requests to be polite to the server.
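The actual implementation lives in scripts/scrape_zoomin.py; the sketch below shows what the description implies the core Playwright loop looks like. The `slugify` helper, the user-agent string, and the one-second delay are assumptions for illustration, not the package's own code:

```python
# Sketch of the scraping loop described above (assumptions: the slugify
# naming scheme, the user-agent value, and the delay are illustrative).
import re
import time
from pathlib import Path

def slugify(url):
    # Derive a safe output filename from the URL; the real script may
    # name its output files differently.
    name = re.sub(r"[^a-zA-Z0-9]+", "-", url.split("://", 1)[-1]).strip("-")
    return name + ".txt"

def scrape(urls, outdir="scraped_docs_output"):
    # Deferred import so this module loads even before Playwright is installed.
    from playwright.sync_api import sync_playwright

    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(
            user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"
        )
        page = context.new_page()
        for url in urls:
            page.goto(url)
            # Wait for the rendered article body Zoomin portals use.
            page.wait_for_selector("article#zDocsContent")
            text = page.inner_text("article#zDocsContent")
            (out / slugify(url)).write_text(text, encoding="utf-8")
            time.sleep(1)  # small delay to be polite to the server
        browser.close()
```
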

Category context

Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 Scripts, 1 Doc
  • SKILL.md Primary doc
  • scripts/analyze_docs_batch.py Scripts
  • scripts/run_scraper.sh Scripts
  • scripts/scrape_zoomin.py Scripts