Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Convert any webpage into structured JSON data using AI. Scrape websites, extract data into custom JSON schemas, and call saved APIs programmatically. Useful for web scraping, data extraction, content monitoring, lead generation, price tracking, and building data pipelines.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
PulpMiner converts any webpage into structured JSON using AI. You provide a URL and optionally a JSON template, and PulpMiner scrapes the page, runs it through an LLM, and returns clean structured data.
All API calls require the `apikey` header:

```
apikey: <PULPMINER_API_KEY>
```

Get your API key from https://pulpminer.com/api (click "Regenerate Key" if you don't have one).
PulpMiner works in two phases:

1. Create a saved API → configure a URL, scraper, LLM, and optional JSON template via the PulpMiner dashboard at https://pulpminer.com/api
2. Call the saved API → use the external endpoint with your API key to fetch structured JSON
```sh
curl -X GET "https://api.pulpminer.com/external/<apiId>" \
  -H "apikey: <PULPMINER_API_KEY>"
```

Returns JSON extracted from the configured webpage.
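The same GET call can be sketched in Python with only the standard library. The helper names below (`build_request`, `fetch_saved_api`) are hypothetical, and the apiId and key are placeholders you supply:

```python
import json
import urllib.request

API_BASE = "https://api.pulpminer.com/external"  # endpoint shape from the curl example above

def build_request(api_id: str, api_key: str) -> urllib.request.Request:
    """Build the GET request for a saved API, with the required apikey header."""
    return urllib.request.Request(f"{API_BASE}/{api_id}", headers={"apikey": api_key})

def fetch_saved_api(api_id: str, api_key: str) -> dict:
    """Execute the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(api_id, api_key)) as resp:
        return json.load(resp)
```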
For APIs saved with template URLs like `https://example.com/search?q={{query}}&page={{page}}`:

```sh
curl -X POST "https://api.pulpminer.com/external/<apiId>" \
  -H "apikey: <PULPMINER_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"query": "javascript frameworks", "page": "1"}'
```

The {{variable}} placeholders in the saved URL get replaced with the values you provide.
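To make the substitution concrete, here is an illustrative sketch of how the {{variable}} placeholders map onto the JSON body you POST. This is not PulpMiner code; the real substitution happens server-side:

```python
import re

def fill_template(template: str, values: dict) -> str:
    """Replace each {{name}} placeholder with values["name"].

    Illustrative only: mirrors what PulpMiner does server-side with
    the JSON body you POST to a dynamic API.
    """
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)
```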
Successful responses return:

```json
{ "data": { ... }, "errors": null }
```

Error responses return:

```json
{ "data": null, "errors": "Error message describing what went wrong" }
```
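A minimal client-side helper for unwrapping this envelope (the name `unwrap` is hypothetical; it assumes the response has already been decoded into a dict):

```python
def unwrap(envelope: dict):
    """Unwrap PulpMiner's {data, errors} response envelope.

    Returns the extracted JSON payload, or raises if the API
    reported an error.
    """
    if envelope.get("errors"):
        raise RuntimeError(f"PulpMiner error: {envelope['errors']}")
    return envelope["data"]
```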
- API responses are cached for 24 hours by default
- If the cache is older than 15 minutes, PulpMiner serves the cached version while refreshing in the background
- Caching can be disabled per-API in the dashboard settings
When creating a saved API at https://pulpminer.com/api, you can configure:

| Option | Description |
| --- | --- |
| URL | The webpage to scrape |
| JSON Template | Optional JSON structure for the LLM to follow (e.g., `{"name": "", "price": ""}`) |
| Render JS | Enable for SPAs and JS-heavy pages (uses a headless browser) |
| CSS Selector | Extract only a specific part of the page (e.g., `.product-list`, `#main-content`) |
| Extra Instructions | Additional guidance for the AI (e.g., "Only extract items with prices above $50") |
| Dynamic URL | Enable template variables in the URL with `{{variable}}` syntax |
| Cache | Toggle response caching on/off |
For async scraping in Zapier workflows:

```sh
# Static API
curl -X POST "https://api.pulpminer.com/external/zapier/get/<apiId>" \
  -H "apikey: <PULPMINER_API_KEY>" \
  -d '{"callbackURL": "https://hooks.zapier.com/..."}'

# Dynamic API
curl -X POST "https://api.pulpminer.com/external/zapier/post/<apiId>" \
  -H "apikey: <PULPMINER_API_KEY>" \
  -d '{"callbackURL": "https://hooks.zapier.com/...", "query": "value"}'
```

Returns 201 immediately. Sends scraped data to the callback URL when complete.
Verify authentication:

```sh
curl -X GET "https://api.pulpminer.com/external/n8n/auth" \
  -H "apikey: <PULPMINER_API_KEY>"
```

Then use the standard /external/<apiId> endpoints for data fetching.
- Each API call costs 0.25–0.4 credits depending on the endpoint
- JavaScript rendering adds an extra 0.1 credits
- New users get 5 free credits
- Purchase more at https://pulpminer.com/credits
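A back-of-the-envelope cost estimate based on the figures above. The default per-call rate here is an assumption within the stated 0.25–0.4 range; check your endpoint's actual rate:

```python
def estimate_credits(calls: int, cost_per_call: float = 0.4, render_js: bool = False) -> float:
    """Estimate total credit usage for a batch of calls.

    cost_per_call varies 0.25-0.4 by endpoint (0.4 assumed as a
    worst case); JS rendering adds 0.1 credits per call.
    """
    per_call = cost_per_call + (0.1 if render_js else 0.0)
    return round(calls * per_call, 2)
```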
- Use CSS selectors to narrow down the scraped content and improve accuracy
- Provide a JSON template for consistent, predictable output structures
- Enable JS rendering only when needed; static pages scrape faster and cost fewer credits
- Use extra instructions to guide the AI (e.g., "Return dates in ISO 8601 format")
- For monitoring use cases, keep caching enabled to reduce credit usage
- Use the playground first to verify a URL is scrapable before saving an API config
- Dynamic APIs are ideal for search pages, paginated content, and parameterized URLs
- Website: https://pulpminer.com
- API Dashboard: https://pulpminer.com/api