Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
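Since the install method is a manual import, the downloaded archive has to be unpacked by hand into the skills folder. A minimal sketch, assuming the package arrives as a zip; the helper name, archive name, and destination path shown here are hypothetical, not part of the package:

```python
import zipfile
from pathlib import Path

def extract_skill(archive: Path, dest: Path) -> list[str]:
    """Unpack a downloaded skill archive into the skills folder and list its files."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Hypothetical paths -- adjust to where the package was downloaded
# and where OpenClaw expects skills to live:
# files = extract_skill(Path("lead-hunter.zip"), Path("skills/lead-hunter"))
```

After extraction, check that SKILL.md (the primary doc) is present at the top level of the destination folder.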
Extract up to 15 B2B leads with emails, phone numbers, websites, and ratings from Google Maps using your Apify API Token.
This item's download entry currently bounces back to a listing or homepage instead of returning a package file. Because no direct package file is available, use the source page and any available docs to guide a manual install.
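One way to confirm that a download link is bouncing to a page rather than serving a package is to inspect the response's Content-Type header. A minimal sketch; the helper name and heuristic are assumptions, not part of the item:

```python
def looks_like_package(status_ok: bool, content_type: str) -> bool:
    """Heuristic: a direct package download should succeed and should NOT
    come back as an HTML page (which indicates a listing/homepage bounce)."""
    return status_ok and "text/html" not in content_type.lower()

# With requests, the arguments would typically come from:
#   resp = requests.head(url, allow_redirects=True, timeout=10)
#   looks_like_package(resp.ok, resp.headers.get("Content-Type", ""))
```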
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
```python
import time

import requests

def hunt_leads(inputs):
    # The original snippet uses `return`, so the flow is wrapped in a function.
    APIFY_TOKEN = inputs['apify_token']
    SEARCH_TERM = f"{inputs['keyword']} in {inputs['location']}"
    print(f"🚀 Starting Lead Hunt for: {SEARCH_TERM}...")
    print("⏳ Connecting to Apify Cloud Engine...")

    # Start the Google Maps scraper actor run.
    url_start = f"https://api.apify.com/v2/acts/compass~crawler-google-places/runs?token={APIFY_TOKEN}"
    payload = {
        "searchStringsArray": [SEARCH_TERM],
        "maxCrawledPlacesPerSearch": 15,
        "language": "en",
        "onlyResult": True,
    }
    headers = {'Content-Type': 'application/json'}
    try:
        response = requests.post(url_start, headers=headers, json=payload)
        response.raise_for_status()
        run_data = response.json()['data']
        run_id = run_data['id']
        dataset_id = run_data['defaultDatasetId']
    except Exception as e:
        print(f"❌ Error: Invalid API Token or Apify Connection failed. Did you sign up via the link? Error: {e}")
        raise
    print(f"✅ Scraper Started! (Run ID: {run_id})")
    print("⏳ This usually takes 30-60 seconds. Extracting fresh data...")

    # Poll until the run finishes. Run status lives at /v2/actor-runs/{run_id}.
    while True:
        status_url = f"https://api.apify.com/v2/actor-runs/{run_id}?token={APIFY_TOKEN}"
        status_res = requests.get(status_url)
        status_data = status_res.json()['data']
        status = status_data['status']
        if status == "SUCCEEDED":
            break
        elif status in ["FAILED", "ABORTED", "TIMED-OUT"]:
            print("❌ The scrape failed. Please try again.")
            return
        time.sleep(5)  # Wait 5 seconds before checking again

    # Fetch the scraped places from the run's default dataset.
    dataset_url = f"https://api.apify.com/v2/datasets/{dataset_id}/items?token={APIFY_TOKEN}"
    data_res = requests.get(dataset_url)
    items = data_res.json()

    results = []
    for item in items:
        lead = {
            "Business Name": item.get('title', 'N/A'),
            "Phone": item.get('phone', 'N/A'),
            "Website": item.get('website', 'N/A'),
            "Address": item.get('address', 'N/A'),
            "Rating": item.get('totalScore', 'N/A'),
        }
        results.append(lead)
    print(f"🎉 SUCCESS! Found {len(results)} leads for {inputs['keyword']}.")

    print("\nHere are your leads:\n")
    print("| Business Name | Phone | Website | Rating |")
    print("| --- | --- | --- | --- |")
    for r in results:
        print(f"| {r['Business Name']} | {r['Phone']} | {r['Website']} | {r['Rating']} |")

    print("\n💡 Want more than 15 leads? Upgrade your Apify plan here to support this tool: https://www.apify.com?fpr=dx06p")
```
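The script reads its parameters from an `inputs` mapping. The keys below are taken from the script itself; the example values are hypothetical placeholders, not real credentials:

```python
# Keys come from the script; values are placeholders only.
inputs = {
    "apify_token": "apify_api_XXXXXXXXXXXX",  # your Apify API token
    "keyword": "dentists",
    "location": "Austin, TX",
}

# The script builds its Google Maps query from keyword + location:
search_term = f"{inputs['keyword']} in {inputs['location']}"
```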