Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Web scraping and crawling with Firecrawl API. Fetch webpage content as markdown, take screenshots, extract structured data, search the web, and crawl documentation sites. Use when the user needs to scrape a URL, get current web info, capture a screenshot, extract specific data from pages, or crawl docs for a framework/library.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Scrape, search, and crawl the web using Firecrawl.
Get your API key from firecrawl.dev/app/api-keys, then set the environment variable and install the SDK:

```
export FIRECRAWL_API_KEY=fc-your-key-here
pip3 install firecrawl
```
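Before spending credits, it can help to confirm the key is actually set. A minimal sketch, assuming only what the setup above states (the variable name `FIRECRAWL_API_KEY` and the `fc-` key prefix come from the example; the prefix check is a guess from that example, so it only warns):

```python
import os

def check_api_key(env=os.environ):
    """Return the Firecrawl API key from the environment, or raise if missing."""
    key = env.get("FIRECRAWL_API_KEY")
    if not key:
        raise RuntimeError(
            "FIRECRAWL_API_KEY is not set; get a key at firecrawl.dev/app/api-keys"
        )
    if not key.startswith("fc-"):
        # Keys in the docs look like fc-...; warn rather than fail (assumption).
        print("warning: key does not start with 'fc-'")
    return key
```

Run it once before a long crawl so a missing key fails fast instead of mid-job.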
All commands use the bundled fc.py script in this skill's directory.
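The `fc.py` source isn't reproduced here, but its command surface can be inferred from the usage lines below. A hypothetical sketch of that interface using `argparse` — subcommand names and flags are taken from the examples in this document; everything else (defaults, help text) is an assumption, and no Firecrawl calls are made:

```python
import argparse

def build_parser():
    """Parser mirroring the fc.py subcommands used in this document."""
    p = argparse.ArgumentParser(prog="fc.py")
    sub = p.add_subparsers(dest="command", required=True)

    md = sub.add_parser("markdown", help="fetch a URL as clean markdown")
    md.add_argument("url")
    md.add_argument("--main-only", action="store_true")

    shot = sub.add_parser("screenshot", help="full-page screenshot")
    shot.add_argument("url")
    shot.add_argument("-o", "--output", default="screenshot.png")

    ext = sub.add_parser("extract", help="structured extraction via JSON schema")
    ext.add_argument("url")
    ext.add_argument("--schema", required=True)
    ext.add_argument("--prompt")

    search = sub.add_parser("search", help="web search with content")
    search.add_argument("query")
    search.add_argument("--limit", type=int, default=5)

    crawl = sub.add_parser("crawl", help="crawl a documentation site")
    crawl.add_argument("url")
    crawl.add_argument("--limit", type=int, default=30)
    crawl.add_argument("--output")

    mp = sub.add_parser("map", help="discover URLs on a site")
    mp.add_argument("url")
    mp.add_argument("--limit", type=int, default=100)
    mp.add_argument("--search")

    return p
```

This is only a model of the CLI for orientation; the bundled script is authoritative.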
Fetch any URL and convert to clean markdown. Handles JavaScript-rendered content.

```
python3 fc.py markdown "https://example.com"
python3 fc.py markdown "https://example.com" --main-only  # skip nav/footer
```
Capture a full-page screenshot of any URL.

```
python3 fc.py screenshot "https://example.com" -o screenshot.png
```
Pull specific fields from a page using a JSON schema.

Schema example (schema.json):

```json
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "price": { "type": "number" },
    "features": { "type": "array", "items": { "type": "string" } }
  }
}
```

```
python3 fc.py extract "https://example.com/product" --schema schema.json
python3 fc.py extract "https://example.com/product" --schema schema.json --prompt "Extract the main product details"
```
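Extract results can be sanity-checked locally before burning more credits. A minimal sketch using only the standard library — the schema is the one from the example above, but the sample data and the `matches` helper are illustrative, not part of the skill:

```python
import json

# The schema from the example above, loaded as it would be from schema.json.
schema = json.loads("""
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "price": { "type": "number" },
    "features": { "type": "array", "items": { "type": "string" } }
  }
}
""")

# Shallow JSON-schema type mapping; enough for this flat example schema.
TYPES = {"string": str, "number": (int, float), "array": list}

def matches(data, schema):
    """Check that each present property has the type the schema declares."""
    for name, spec in schema["properties"].items():
        if name in data and not isinstance(data[name], TYPES[spec["type"]]):
            return False
    return True
```

For anything beyond flat schemas, a real validator such as the `jsonschema` package is the better tool.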
Search the web and get content from results (may require a paid tier).

```
python3 fc.py search "Python 3.13 new features" --limit 5
```
Crawl an entire documentation site. Great for learning new frameworks.

```
python3 fc.py crawl "https://docs.example.com" --limit 30
python3 fc.py crawl "https://docs.example.com" --limit 50 --output ./docs
```

Note: each page costs 1 credit, so set reasonable limits.
Discover all URLs on a website before deciding what to scrape.

```
python3 fc.py map "https://example.com" --limit 100
python3 fc.py map "https://example.com" --search "api"
```
- "Scrape https://blog.example.com/post and summarize it"
- "Take a screenshot of stripe.com"
- "Extract the name, price, and features from this product page"
- "Crawl the Astro docs so you can help me build a site"
- "Map all the URLs on docs.stripe.com"
Free tier includes 500 credits. 1 credit = 1 page/screenshot/search query.
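Those numbers make budgeting simple arithmetic; a quick sketch using only the figures stated above (500 free credits, 1 credit per page, screenshot, or search):

```python
FREE_TIER_CREDITS = 500
CREDITS_PER_PAGE = 1  # same rate for screenshots and search queries

def crawl_cost(page_limit):
    """Most credits a crawl with --limit page_limit can consume."""
    return page_limit * CREDITS_PER_PAGE

def pages_affordable(credits_left=FREE_TIER_CREDITS):
    """How many pages the remaining credits cover."""
    return credits_left // CREDITS_PER_PAGE
```

So a `--limit 50` crawl uses at most a tenth of the free tier.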