Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Automate web browsers, scrape pages, search the web, and run AI prompts on live websites via the HARPA AI Grid REST API.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install:

> I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade:

> I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
HARPA Grid lets you orchestrate real web browsers remotely. You can scrape pages, search the web, run built-in or custom AI commands, and send AI prompts with full page context — all through a single REST endpoint.
The user must have:
- The HARPA AI Chrome Extension installed from https://harpa.ai
- At least one active Node (a browser with HARPA running, configured in the extension's AUTOMATE tab)
- A HARPA API key, obtained from the extension's AUTOMATE tab and provided as the `HARPA_API_KEY` environment variable

If the user hasn't set up HARPA yet, direct them to: https://harpa.ai/grid/browser-automation-node-setup
- Endpoint: `POST https://api.harpa.ai/api/v1/grid`
- Auth: `Authorization: Bearer $HARPA_API_KEY`
- Content-Type: `application/json`
- Full reference: https://harpa.ai/grid/grid-rest-api-reference
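Instead of raw curl calls, the endpoint and headers can be wrapped once in a small client helper. A minimal Python sketch, assuming the standard library's `urllib`; the `build_grid_request` and `grid_request` names are my own, not part of the HARPA docs:

```python
import json
import os
import urllib.request

GRID_URL = "https://api.harpa.ai/api/v1/grid"

def build_grid_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated POST request for the Grid endpoint (no network I/O)."""
    return urllib.request.Request(
        GRID_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def grid_request(payload: dict) -> dict:
    """Send a payload to the Grid API and decode the JSON response."""
    req = build_grid_request(payload, os.environ["HARPA_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Splitting request construction from sending keeps the auth and serialization logic testable without a live node.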
Extract full page content (as markdown) or specific elements via CSS/XPath/text selectors.

Full page scrape:

```shell
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "scrape",
    "url": "https://example.com",
    "timeout": 15000
  }'
```

Targeted element scrape (grab):

```shell
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "scrape",
    "url": "https://example.com/products",
    "grab": [
      { "selector": ".product-title", "selectorType": "css", "at": "all", "take": "innerText", "label": "titles" },
      { "selector": ".product-price", "selectorType": "css", "at": "all", "take": "innerText", "label": "prices" }
    ],
    "timeout": 15000
  }'
```

Grab fields:

| Field | Required | Default | Values |
|---|---|---|---|
| selector | yes | - | CSS (`.class`, `#id`), XPath (`//h2`), or text content |
| selectorType | no | auto | `auto`, `css`, `xpath`, `text` |
| at | no | first | `all`, `first`, `last`, or a number |
| take | no | innerText | `innerText`, `textContent`, `innerHTML`, `outerHTML`, `href`, `value`, `id`, `className`, `attributes`, `styles`, `[attrName]`, `(styleName)` |
| label | no | data | Custom label for extracted data |
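When assembling grab arrays programmatically, the defaults from the field table can live in one helper so each selector entry stays short. A sketch under that assumption; the `grab_field` and `scrape_payload` names are illustrative, not from the HARPA docs:

```python
def grab_field(selector, selector_type="auto", at="first", take="innerText", label="data"):
    """One entry for the scrape action's "grab" array; defaults mirror the field table."""
    return {
        "selector": selector,
        "selectorType": selector_type,
        "at": at,
        "take": take,
        "label": label,
    }

def scrape_payload(url, grab=None, timeout=15000):
    """Payload for a full-page scrape, or a targeted grab when selectors are given."""
    payload = {"action": "scrape", "url": url, "timeout": timeout}
    if grab:
        payload["grab"] = grab
    return payload
```

For example, the product-listing request above becomes `scrape_payload("https://example.com/products", grab=[grab_field(".product-title", at="all", label="titles"), grab_field(".product-price", at="all", label="prices")])`.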
Perform a web search. Supports operators like `site:` and `intitle:`.

```shell
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "serp",
    "query": "OpenClaw AI agent framework",
    "timeout": 15000
  }'
```
Execute one of 100+ built-in HARPA commands or a custom automation on a target page.

```shell
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "command",
    "url": "https://example.com/article",
    "name": "Extract data",
    "inputs": "List all headings with their word counts",
    "connection": "HARPA AI",
    "resultParam": "message",
    "timeout": 30000
  }'
```

- `name`: command name (e.g. `"Summary"`, `"Extract data"`, or any custom command)
- `inputs`: pre-filled user inputs for multi-step commands
- `resultParam`: HARPA parameter to return as the result (default: `"message"`)
- `connection`: AI model to use (e.g. `"HARPA AI"`, `"gpt-4o"`, `"claude-3.5-sonnet"`)
Send a custom AI prompt with page context. Use `{{page}}` to inject the page content.

```shell
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "prompt",
    "url": "https://example.com",
    "prompt": "Analyze the current page and extract all contact information. Webpage: {{page}}",
    "connection": "CHAT AUTO",
    "timeout": 30000
  }'
```
| Parameter | Required | Default | Description |
|---|---|---|---|
| action | yes | - | `scrape`, `serp`, `command`, or `prompt` |
| url | no | - | Target page URL (ignored by `serp`) |
| node | no | - | Node ID (`"r2d2"`), multiple (`"r2d2 c3po"`), first N (`"5"`), or all (`"*"`) |
| timeout | no | 300000 | Max wait time in ms (max 5 minutes) |
| resultsWebhook | no | - | URL to POST results to asynchronously (retained 30 days) |
| connection | no | - | AI model for `command`/`prompt` actions |
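Catching an invalid action or an out-of-range timeout locally is cheaper than a failed round trip. A minimal client-side check derived from the table above; the `validate_payload` name is illustrative, and this only covers the two constraints the table states:

```python
VALID_ACTIONS = {"scrape", "serp", "command", "prompt"}
MAX_TIMEOUT_MS = 300_000  # documented cap: 5 minutes

def validate_payload(payload: dict) -> dict:
    """Check a Grid payload against the common-parameter table before sending."""
    if payload.get("action") not in VALID_ACTIONS:
        raise ValueError(f"action must be one of {sorted(VALID_ACTIONS)}")
    # timeout is optional; the server default is the documented maximum.
    timeout = payload.get("timeout", MAX_TIMEOUT_MS)
    if not 0 < timeout <= MAX_TIMEOUT_MS:
        raise ValueError("timeout must be between 1 and 300000 ms")
    return payload
```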
- Omit `node` to use the default node
- `"node": "mynode"`: target a specific node by ID
- `"node": "node1 node2"`: target multiple nodes
- `"node": "3"`: use the first 3 available nodes
- `"node": "*"`: broadcast to all nodes
Set `resultsWebhook` to receive results asynchronously. The action stays alive for up to 30 days, which is useful when target nodes are temporarily offline.

```json
{
  "action": "scrape",
  "url": "https://example.com",
  "resultsWebhook": "https://your-server.com/webhook",
  "timeout": 15000
}
```
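The receiving end of `resultsWebhook` is any HTTP server that accepts a JSON POST. A minimal sketch using Python's standard `http.server`; the port, handler name, and `serve` helper are my own choices, and the body is treated as opaque JSON since the docs here don't specify its shape:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_webhook_body(raw: bytes) -> dict:
    """Decode the JSON body HARPA POSTs to the resultsWebhook URL."""
    return json.loads(raw.decode("utf-8"))

class GridWebhookHandler(BaseHTTPRequestHandler):
    """Accept asynchronous Grid results; storage is left as a placeholder."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        result = parse_webhook_body(self.rfile.read(length))
        print("grid result:", result)  # replace with real handling
        self.send_response(200)
        self.end_headers()

def serve(port=8080):
    """Run the webhook listener; call this from your own entry point."""
    HTTPServer(("", port), GridWebhookHandler).serve_forever()
```

Note the URL you pass as `resultsWebhook` must be reachable from the public internet, so a local listener like this typically sits behind a tunnel or a deployed host.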
- Scraping behind-login pages works because HARPA runs inside a real browser session with the user's cookies and auth state.
- Use the `grab` array with multiple selectors to extract structured data in a single request.
- For long-running AI commands, increase `timeout` (max 300000 ms / 5 min) or use `resultsWebhook`.
- The `{{page}}` variable in prompts injects the full page content; use it to give the AI context about the current page.
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.