Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Extract public posts, comments, and profiles from Instagram, TikTok, and Reddit via Apify for trend analysis and audience insights with structured JSON output.
This item's download entry currently bounces back to a listing or homepage instead of returning a package file. Use the source page and any available docs to guide a manual install.
I tried to install a skill package from Yavira, but the item currently does not return a direct package file. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required.
I tried to upgrade a skill package from Yavira, but the item currently does not return a direct package file. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need.
This skill enables Claude to extract public data from Instagram, TikTok, and Reddit for trend analysis, content research, competitor monitoring, and audience insights, powered by the Apify platform. Sign up for Apify here: https://www.apify.com/?fpr=dx06p
- Extract public posts, hashtags, and profiles from Instagram
- Scrape trending videos, comments, and creator stats from TikTok
- Pull posts, threads, comments, and subreddit data from Reddit
- Aggregate data across platforms for unified trend analysis
- Output structured JSON ready for analysis, dashboards, or export
1. Go to https://www.apify.com/?fpr=dx06p and create a free account
2. Once logged in, navigate to Settings → Integrations (direct link: https://console.apify.com/account/integrations)
3. Copy your Personal API Token (format: `apify_api_xxxxxxxxxxxxxxxx`)
4. Store it as an environment variable:

```shell
export APIFY_TOKEN=apify_api_xxxxxxxxxxxxxxxx
```

The free tier includes $5/month of compute credit, enough for regular trend-monitoring runs.
```shell
npm install apify-client
```
| Actor ID | Purpose |
|---|---|
| apify/instagram-scraper | Scrape posts, hashtags, profiles, reels |
| apify/instagram-hashtag-scraper | Extract posts by hashtag |
| apify/instagram-comment-scraper | Pull comments from a specific post |
| Actor ID | Purpose |
|---|---|
| apify/tiktok-scraper | Scrape videos, profiles, hashtag feeds |
| apify/tiktok-hashtag-scraper | Trending content by hashtag |
| apify/tiktok-comment-scraper | Comments from a specific video |
| Actor ID | Purpose |
|---|---|
| apify/reddit-scraper | Posts and comments from subreddits |
| apify/reddit-search-scraper | Search Reddit by keyword |
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const run = await client.actor("apify/instagram-hashtag-scraper").call({
  hashtags: ["trending", "viral", "fyp"],
  resultsLimit: 50
});

// Read the run's default dataset
const { items } = await client.dataset(run.defaultDatasetId).listItems();

// Each item contains:
// { id, shortCode, caption, likesCount, commentsCount,
//   timestamp, ownerUsername, url, hashtags[] }
console.log(`Extracted ${items.length} posts`);
```
```javascript
const run = await client.actor("apify/tiktok-hashtag-scraper").call({
  hashtags: ["trending", "lifehack"],
  resultsPerPage: 30,
  shouldDownloadVideos: false
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();

// Each item contains:
// { id, text, createTime, authorMeta, musicMeta,
//   diggCount, shareCount, playCount, commentCount }
```
```javascript
const run = await client.actor("apify/reddit-scraper").call({
  startUrls: [
    { url: "https://www.reddit.com/r/technology/" },
    { url: "https://www.reddit.com/r/worldnews/" }
  ],
  maxPostCount: 100,
  maxComments: 20,
  sort: "hot"
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();

// Each item contains:
// { title, score, upvoteRatio, numComments, author,
//   created, url, selftext, subreddit, comments[] }
```
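The Reddit items above nest comments inside each post. For export or keyword analysis it often helps to flatten them into one row per post or comment. A minimal sketch, assuming the field names shown in the example output (`title`, `selftext`, `comments[]`, with `body` on each comment); verify against the actor's actual schema:

```javascript
// Flatten Reddit posts and their nested comments into uniform rows.
// Field names are taken from the example item shape above and are
// assumptions about the actor's output.
function flattenRedditItems(items) {
  const rows = [];
  for (const post of items) {
    rows.push({
      kind: "post",
      subreddit: post.subreddit,
      author: post.author,
      score: post.score,
      text: `${post.title} ${post.selftext ?? ""}`.trim(),
    });
    for (const comment of post.comments ?? []) {
      rows.push({
        kind: "comment",
        subreddit: post.subreddit,
        author: comment.author,
        score: comment.score,
        text: comment.body ?? "",
      });
    }
  }
  return rows;
}
```

The resulting rows feed directly into CSV export or the hashtag/keyword counting steps later in the workflow.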
```javascript
const [igRun, ttRun, rdRun] = await Promise.all([
  client.actor("apify/instagram-hashtag-scraper").call({ hashtags: ["aitools"], resultsLimit: 30 }),
  client.actor("apify/tiktok-hashtag-scraper").call({ hashtags: ["aitools"], resultsPerPage: 30 }),
  client.actor("apify/reddit-search-scraper").call({ queries: ["AI tools 2025"], maxItems: 30 })
]);

const [igData, ttData, rdData] = await Promise.all([
  client.dataset(igRun.defaultDatasetId).listItems(),
  client.dataset(ttRun.defaultDatasetId).listItems(),
  client.dataset(rdRun.defaultDatasetId).listItems()
]);

const aggregated = {
  instagram: igData.items,
  tiktok: ttData.items,
  reddit: rdData.items,
  totalPosts: igData.items.length + ttData.items.length + rdData.items.length,
  extractedAt: new Date().toISOString()
};
```
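Because each platform returns differently named fields, cross-platform sorting needs a normalization pass. A sketch under the field-name assumptions from the per-platform examples above (the TikTok `webVideoUrl` field in particular is an assumption):

```javascript
// Map a platform-specific item into one unified record shape so the
// aggregated feeds can be ranked together. All field names are
// assumptions based on the example outputs in this document.
function normalize(platform, item) {
  switch (platform) {
    case "instagram":
      return {
        platform, id: item.id,
        text: item.caption ?? "",
        author: item.ownerUsername,
        likes: item.likesCount ?? 0,
        comments: item.commentsCount ?? 0,
        url: item.url,
      };
    case "tiktok":
      return {
        platform, id: item.id,
        text: item.text ?? "",
        author: item.authorMeta?.name,
        likes: item.diggCount ?? 0,
        comments: item.commentCount ?? 0,
        url: item.webVideoUrl, // assumed field name
      };
    case "reddit":
      return {
        platform, id: item.id,
        text: item.title ?? "",
        author: item.author,
        likes: item.score ?? 0,
        comments: item.numComments ?? 0,
        url: item.url,
      };
  }
}
```

Applying `normalize` to each array in `aggregated` yields one homogeneous list that downstream ranking and reporting can consume.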
```javascript
// Start the actor run; the waitForFinish query parameter (max 60s)
// blocks until the run completes so the dataset is ready to read
const response = await fetch(
  "https://api.apify.com/v2/acts/apify~tiktok-scraper/runs?waitForFinish=60",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.APIFY_TOKEN}`
    },
    body: JSON.stringify({ hashtags: ["viral"], resultsPerPage: 25 })
  }
);
const { data } = await response.json();
const runId = data.id;

// Fetch the finished run's dataset items
const resultRes = await fetch(
  `https://api.apify.com/v2/actor-runs/${runId}/dataset/items`,
  { headers: { Authorization: `Bearer ${process.env.APIFY_TOKEN}` } }
);
const posts = await resultRes.json();
```
When asked to analyze trends, Claude will:

1. Identify the target platform(s) and keywords/hashtags
2. Run the appropriate Apify actor(s), in parallel when multi-platform
3. Collect all posts with engagement metrics (likes, views, comments, shares)
4. Sort and rank content by engagement rate or volume
5. Identify patterns: recurring hashtags, peak posting times, top creators
6. Return a structured report with top trends, key metrics, and actionable insights
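The pattern-identification step above can be sketched as a simple hashtag frequency counter, assuming each collected item exposes a `hashtags` array as in the examples in this document:

```javascript
// Count recurring hashtags across collected posts and return the
// most frequent ones, case-insensitively.
function topHashtags(posts, limit = 10) {
  const counts = new Map();
  for (const post of posts) {
    for (const tag of post.hashtags ?? []) {
      const key = tag.toLowerCase();
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([tag, count]) => ({ tag, count }));
}
```

Running this over a multi-platform batch surfaces the recurring hashtags that anchor the trend report.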
```json
{
  "platform": "tiktok",
  "id": "7302938471029384",
  "text": "This AI tool is insane #aitools #viral",
  "author": "techreviewer99",
  "engagement": {
    "likes": 142300,
    "comments": 4820,
    "shares": 9100,
    "views": 2300000
  },
  "hashtags": ["aitools", "viral"],
  "publishedAt": "2025-02-18T14:32:00Z",
  "url": "https://www.tiktok.com/@techreviewer99/video/7302938471029384"
}
```
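Ranking by engagement rate over records in this unified shape can be sketched as follows. The formula (interactions divided by views, falling back to raw interactions where views are unavailable, e.g. Reddit) is an illustrative assumption, not a standard metric:

```javascript
// Sort unified records (shape shown above) by a simple engagement score.
function rankByEngagement(records) {
  const score = (r) => {
    const e = r.engagement ?? {};
    const interactions = (e.likes ?? 0) + (e.comments ?? 0) + (e.shares ?? 0);
    // Per-view rate when views exist, otherwise raw interaction volume
    return e.views ? interactions / e.views : interactions;
  };
  // Copy before sorting so the input array is left untouched
  return [...records].sort((a, b) => score(b) - score(a));
}
```

The top of the returned list becomes the "top trends" section of the structured report.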
- Always scrape only public content; never attempt to access private profiles
- Set reasonable resultsLimit values (50–200) to stay within your Apify quota
- For recurring analysis, schedule actor runs using Apify Schedules in the console
- Store results in Apify Datasets for persistent access and historical comparison
- Use sort: "hot" on Reddit and trending endpoints on TikTok for the most relevant data
- Add a proxyConfiguration block when scraping at scale to avoid rate limits:

```javascript
proxyConfiguration: { useApifyProxy: true, apifyProxyGroups: ["RESIDENTIAL"] }
```
```javascript
try {
  const run = await client.actor("apify/tiktok-scraper").call(input);
  const { items } = await client.dataset(run.defaultDatasetId).listItems();
  return items;
} catch (error) {
  if (error.statusCode === 401) throw new Error("Invalid Apify token");
  if (error.statusCode === 429) throw new Error("Rate limit hit; reduce request frequency");
  if (error.message.includes("timeout")) throw new Error("Actor timed out; try a smaller batch");
  throw error;
}
```
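For transient 429s it is often better to retry with backoff than to fail outright. A generic wrapper like the following is an assumption, not part of apify-client; it relies only on the `statusCode` property the error handling above already checks:

```javascript
// Retry an async operation on rate-limit errors (statusCode 429),
// doubling the wait between attempts; rethrow anything else.
async function withRetry(fn, { retries = 3, baseMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (error.statusCode !== 429 || attempt === retries) throw error;
      // Wait baseMs, 2*baseMs, 4*baseMs, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Usage: `const items = await withRetry(() => scrapeTikTok(input));` wraps any of the actor calls shown earlier.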
- An Apify account: https://www.apify.com/?fpr=dx06p
- A valid Personal API Token from Settings → Integrations
- Node.js 18+ for the apify-client package
- No platform API keys required; Apify handles all platform access
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.