Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Search the web and read page contents without API keys. Use when you need to search via DuckDuckGo/Brave/Google (multi-page), extract readable text from URLs...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Four scripts, zero API keys. All output is JSON by default.

Dependencies: requests, beautifulsoup4, playwright (with Chromium). Optional: pdfplumber or PyPDF2 for PDF text extraction.

Install: pip install requests beautifulsoup4 playwright && playwright install chromium
python3 scripts/google_search.py "query" --pages N --engine ENGINE

--engine → duckduckgo (default), brave, or google
Returns [{title, url, snippet}, ...]
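The documented result shape ([{title, url, snippet}, ...]) can be post-processed directly. A minimal sketch that deduplicates hits by URL, using an illustrative sample payload in place of a live search (the sample titles and URLs are assumptions, not real output):

```python
import json

# Sample results in the documented shape [{title, url, snippet}, ...];
# a real run would produce this on stdout via:
#   python3 scripts/google_search.py "query" --pages 2
sample = json.loads("""[
  {"title": "OpenClaw docs", "url": "https://example.com/a", "snippet": "Intro"},
  {"title": "OpenClaw docs (mirror)", "url": "https://example.com/a", "snippet": "Intro"},
  {"title": "Install guide", "url": "https://example.com/b", "snippet": "Setup"}
]""")

def dedupe_by_url(results):
    """Keep the first hit for each URL, preserving order."""
    seen = set()
    unique = []
    for hit in results:
        if hit["url"] not in seen:
            seen.add(hit["url"])
            unique.append(hit)
    return unique

unique = dedupe_by_url(sample)
print([hit["url"] for hit in unique])
```

Multi-page searches often return the same URL more than once, so a small dedupe pass like this keeps downstream reading cheap.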
python3 scripts/read_page.py "https://url" [--max-chars N] [--visible] [--format json|markdown|text] [--no-dismiss]

--format → json (default), markdown, or text
Auto-dismisses cookie consent banners (skip with --no-dismiss)
python3 scripts/browser_session.py open "https://url"         # Open + extract
python3 scripts/browser_session.py navigate "https://other"   # Go to new URL
python3 scripts/browser_session.py extract [--format FMT]     # Re-read page
python3 scripts/browser_session.py screenshot [path] [--full] # Save screenshot
python3 scripts/browser_session.py click "Submit"             # Click by text/selector
python3 scripts/browser_session.py search "keyword"           # Search text in page
python3 scripts/browser_session.py tab new "https://url"      # Open new tab
python3 scripts/browser_session.py tab list                   # List all tabs
python3 scripts/browser_session.py tab switch 1               # Switch to tab index
python3 scripts/browser_session.py tab close [index]          # Close tab
python3 scripts/browser_session.py dismiss-cookies            # Manually dismiss cookies
python3 scripts/browser_session.py close                      # Close browser

Cookie consent auto-dismissed on open/navigate
Multiple tabs supported: open, switch, close independently
Search returns matching lines with line numbers
Extract supports json/markdown/text output
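When driving the session from a Python script rather than a shell, the subcommands above can be assembled into argv lists and handed to subprocess. The `session_cmd` helper below is a hypothetical convenience wrapper, not part of the package:

```python
def session_cmd(*args):
    """Build the argv list for one browser_session.py subcommand."""
    return ["python3", "scripts/browser_session.py", *args]

# A typical flow: open a page, search its text, capture a screenshot, close.
flow = [
    session_cmd("open", "https://example.com"),
    session_cmd("search", "pricing"),
    session_cmd("screenshot", "page.png", "--full"),
    session_cmd("close"),
]

# Each entry can be executed with:
#   subprocess.run(cmd, capture_output=True, text=True)
# and the JSON on stdout parsed with json.loads.
for cmd in flow:
    print(" ".join(cmd))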
python3 scripts/download_file.py "https://example.com/doc.pdf" [--output DIR] [--filename NAME]

Auto-detects filename from URL/headers
PDFs: extracts text if pdfplumber/PyPDF2 installed
Returns {status, path, filename, size_bytes, content_type, extracted_text}
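The return payload shown above is easy to sanity-check before use. A minimal sketch, assuming a parsed result dict in the documented shape; the sample values and the `"ok"` status string are illustrative assumptions, not confirmed output of the script:

```python
# Illustrative payload in the documented shape
# {status, path, filename, size_bytes, content_type, extracted_text};
# a real one would be the script's stdout parsed with json.loads.
payload = {
    "status": "ok",  # assumed success value
    "path": "/tmp/doc.pdf",
    "filename": "doc.pdf",
    "size_bytes": 48213,
    "content_type": "application/pdf",
    "extracted_text": "Page 1 ...",
}

def summarize_download(result):
    """Return a one-line summary, or raise if the download failed."""
    if result.get("status") != "ok":
        raise RuntimeError(f"download failed: {result}")
    text_note = "with text" if result.get("extracted_text") else "no text"
    return f'{result["filename"]} ({result["size_bytes"]} bytes, {text_note})'

print(summarize_download(payload))
```

Checking `extracted_text` before reading it matters because PDF text extraction only happens when pdfplumber or PyPDF2 is installed.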
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.