Requirements
- Target platform: OpenClaw
- Install method: manual import
- Extraction: extract the archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Manage and analyze Meta (Facebook/Instagram) Ads campaigns. Use this skill when the user asks about ad performance, campaign metrics, ad spend, ROAS, CPA, CT...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Three MCP tools are available. Always call `list_ad_accounts` first.

- `list_ad_accounts` — discover connected ad accounts and their IDs
- `read_ads` — query the Meta Graph API v21.0 via sandboxed JavaScript (GET only)
- `write_ads` — same as `read_ads`, plus `metaPost` and `metaDelete` for mutations
| Global | Available in | Description |
|---|---|---|
| `metaFetch(endpoint, params?)` | `read_ads`, `write_ads` | GET request. Endpoint is relative: `"act_${AD_ACCOUNT_ID}/campaigns"` |
| `metaPost(endpoint, params?)` | `write_ads` only | POST request for creates/updates |
| `metaDelete(endpoint)` | `write_ads` only | DELETE request |
| `AD_ACCOUNT_ID` | both | The account ID passed in the tool call |
| `PERSIST` | both | Data from a previous call via `context_id`, or `null` |

Code must return `{ out, persist? }`. Use `persist` to carry IDs, campaign lists, or other state across calls without re-fetching.
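A minimal sketch of a `read_ads` code body that honors the `{ out, persist? }` contract. The sandbox globals (`metaFetch`, `AD_ACCOUNT_ID`, `PERSIST`) are stubbed here so the shape can be checked outside the runtime; the stubbed `metaFetch` and its sample campaigns are assumptions, not real API behavior.

```javascript
// Stand-ins for the sandbox globals described above (assumed values).
const AD_ACCOUNT_ID = "123456789";   // supplied by the tool call
const PERSIST = null;                // no prior context_id in this example

// Hypothetical stub: the real metaFetch issues a GET against the Graph API.
async function metaFetch(endpoint, params) {
  return {
    data: [
      { id: "c1", name: "Prospecting" },
      { id: "c2", name: "Retargeting" },
    ],
  };
}

async function run() {
  // Reuse campaign IDs from a previous call when PERSIST carries them.
  const campaigns =
    PERSIST?.campaigns ??
    (await metaFetch(`act_${AD_ACCOUNT_ID}/campaigns`, { fields: "id,name" })).data;

  // Small summary in `out`; raw list carried forward in `persist`.
  return {
    out: { campaignCount: campaigns.length },
    persist: { campaigns },
  };
}
```

The point of the split: `out` is what the model reads back (keep it tiny), while `persist` is state the next call can pick up via `context_id` without re-fetching.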
Never execute `write_ads` without explicit user confirmation. When recommending a change:

1. Show exactly what will change (campaign name, current value, new value)
2. Wait for the user to approve
3. Only then call `write_ads`
Tool output consumes context tokens. Keep it tight:

- **Always specify `fields`** — the API returns everything by default, which wastes tokens
- **Aggregate in code** — compute totals, averages, and rankings inside the sandbox. Return the summary, not raw rows.
- **Cap lists** — return the top 5-10 items. The user will ask for more if needed.
- **Format numbers** — round to 2 decimals, format currency as `"$1,234.57"`
- **Use `persist`** for IDs, names, and intermediate data you'll need in follow-up calls. Don't return them in `out` unless the user asked.
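The aggregate-cap-format guidance above can be sketched as a pair of helpers. These are hypothetical names, not part of the sandbox; the input rows assume the stringified numeric fields the Insights API returns.

```javascript
// Format as "$1,234.57", per the currency guidance above.
function usd(n) {
  return "$" + n.toLocaleString("en-US", {
    minimumFractionDigits: 2,
    maximumFractionDigits: 2,
  });
}

// Aggregate raw insight rows into a compact summary: totals plus a capped,
// ranked top-N list. Only this summary goes back in `out`, not the rows.
function summarize(rows, topN = 5) {
  const spend = rows.reduce((s, r) => s + Number(r.spend), 0);
  const clicks = rows.reduce((s, r) => s + Number(r.clicks), 0);
  const topBySpend = [...rows]
    .sort((a, b) => Number(b.spend) - Number(a.spend))
    .slice(0, topN)  // cap the list
    .map((r) => ({ name: r.campaign_name, spend: usd(Number(r.spend)) }));
  return { totalSpend: usd(spend), totalClicks: clicks, topBySpend };
}
```

Returning `summarize(rows)` instead of `rows` keeps the tool output to a handful of tokens regardless of how many rows the API paged back.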
- Fire all independent `metaFetch()` calls before processing any results — this enables parallel execution in the runtime
- Use `persist` / `context_id` to avoid redundant fetches across tool calls
- All values in `out` and `persist` must be JSON-serializable
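The fire-before-processing rule looks like this in practice: start every independent request, then await them together. `metaFetch` is stubbed here (an assumption); in the sandbox it is the provided GET helper.

```javascript
// Hypothetical stub standing in for the sandbox's metaFetch global.
async function metaFetch(endpoint, params) {
  return { data: [{ endpoint, params }] };
}

async function run(AD_ACCOUNT_ID) {
  // Start all three independent requests first (no await yet)...
  const campaignsP = metaFetch(`act_${AD_ACCOUNT_ID}/campaigns`, { fields: "id,name,status" });
  const weekP = metaFetch(`act_${AD_ACCOUNT_ID}/insights`, { date_preset: "last_7d" });
  const monthP = metaFetch(`act_${AD_ACCOUNT_ID}/insights`, { date_preset: "last_30d" });

  // ...then await them together instead of one at a time.
  const [campaigns, week, month] = await Promise.all([campaignsP, weekP, monthP]);
  return { out: { fetched: 3 }, persist: { campaigns: campaigns.data } };
}
```

Awaiting each call inline (`await metaFetch(...)` three times in a row) would serialize the requests; holding the promises and using `Promise.all` lets the runtime run them concurrently.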
| Endpoint | Description |
|---|---|
| `act_{id}/campaigns` | List campaigns |
| `act_{id}/adsets` | List ad sets |
| `act_{id}/ads` | List ads |
| `act_{id}/insights` | Account-level insights |
| `{campaign_id}/insights` | Campaign insights |
| `{adset_id}/insights` | Ad set insights |
| `{ad_id}/insights` | Ad insights |
- Campaign: id, name, status, effective_status, objective, bid_strategy, daily_budget, lifetime_budget, budget_remaining, start_time, stop_time
- AdSet: id, name, status, effective_status, campaign_id, optimization_goal, billing_event, bid_amount, daily_budget, lifetime_budget, targeting, promoted_object
- Ad: id, name, status, effective_status, adset_id, campaign_id, creative, quality_ranking, engagement_rate_ranking, conversion_rate_ranking
- Insights (metrics): spend, impressions, reach, clicks, ctr, cpc, cpm, frequency, unique_clicks, unique_ctr, actions, action_values, cost_per_action_type, cost_per_conversion, purchase_roas, website_purchase_roas, quality_ranking, engagement_rate_ranking, conversion_rate_ranking
| Param | Values |
|---|---|
| `date_preset` | `today`, `yesterday`, `last_3d`, `last_7d`, `last_14d`, `last_28d`, `last_30d`, `last_90d`, `this_month`, `last_month`, `this_quarter`, `this_year`, `maximum` |
| `time_range` | `JSON.stringify({ since: "2024-01-01", until: "2024-01-31" })` |
| `level` | `account`, `campaign`, `adset`, `ad` |
| `breakdowns` | `age`, `gender`, `country`, `region`, `device_platform`, `publisher_platform`, `platform_position` |
| `time_increment` | `1` (daily), `7` (weekly), `monthly`, `all_days` |
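A hypothetical param builder matching the table above, mainly to show that `time_range` must be a JSON-stringified `{ since, until }` object while everything else is a plain string. The function name and option shape are illustrative, not part of the skill.

```javascript
// Build the params object for an insights call. `since`/`until` take
// precedence over a preset; otherwise fall back to a date_preset.
function insightsParams({ fields, preset, since, until, level, breakdowns, increment }) {
  const params = { fields: fields.join(",") };
  if (since && until) {
    // time_range is passed as a JSON string, not a nested object.
    params.time_range = JSON.stringify({ since, until });
  } else {
    params.date_preset = preset ?? "last_7d";
  }
  if (level) params.level = level;
  if (breakdowns) params.breakdowns = breakdowns.join(",");
  if (increment) params.time_increment = increment; // 1 = daily, 7 = weekly
  return params;
}
```

Usage: `metaFetch(`act_${AD_ACCOUNT_ID}/insights`, insightsParams({ fields: ["spend", "ctr"], since: "2024-01-01", until: "2024-01-31", level: "campaign" }))`.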
- Campaign.Status: ACTIVE, PAUSED, ARCHIVED, DELETED
- Campaign.Objective: OUTCOME_AWARENESS, OUTCOME_ENGAGEMENT, OUTCOME_LEADS, OUTCOME_SALES, OUTCOME_TRAFFIC, OUTCOME_APP_PROMOTION, CONVERSIONS, LINK_CLICKS, REACH, BRAND_AWARENESS, VIDEO_VIEWS, LEAD_GENERATION, MESSAGES, POST_ENGAGEMENT
- Campaign.BidStrategy: LOWEST_COST_WITHOUT_CAP, COST_CAP, LOWEST_COST_WITH_BID_CAP, LOWEST_COST_WITH_MIN_ROAS
- AdSet.OptimizationGoal: CONVERSIONS, LINK_CLICKS, IMPRESSIONS, REACH, LANDING_PAGE_VIEWS, OFFSITE_CONVERSIONS, LEAD_GENERATION, THRUPLAY, VALUE
When the user asks "how are my ads doing", "ad performance", "what's my ROAS", or similar:

1. Fetch account insights for last_7d: spend, impressions, clicks, ctr, cpc, actions, purchase_roas
2. Fetch campaign-level insights for the same period to find top and bottom performers
3. Fetch the same metrics for last_30d to establish trends

Lead with the headline: total spend and ROAS (or the metric that matters most). Then break down by campaign in a table. Flag anything trending down week-over-week.
- List all ACTIVE campaigns: name, objective, bid_strategy, daily_budget, budget_remaining
- Pull campaign-level insights for last_30d: spend, ctr, cpc, cost_per_action_type, purchase_roas
- Identify: campaigns burning budget with poor ROAS, underspending campaigns (high budget_remaining), campaigns with no conversions, and winners worth scaling
- For campaigns with multiple ad sets, check targeting overlap
1. Fetch insights with breakdowns (age, gender, country, or device_platform)
2. Compute cost-per-result and ROAS by segment
3. Flag high-spend / low-return segments
4. Recommend exclusions or bid adjustments
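A sketch of steps 2-3 on breakdown rows. It assumes "result" means the `purchase` action type (adjust for other objectives), and the `scoreSegments` name and its thresholds are illustrative, not part of the skill.

```javascript
// Score breakdown rows: compute cost-per-result per segment and flag
// segments that spend heavily but convert poorly.
function scoreSegments(rows, { minSpend = 100, maxCpr = 50 } = {}) {
  return rows.map((r) => {
    // Insights returns actions as [{ action_type, value }] with string values.
    const purchases = Number(
      (r.actions ?? []).find((a) => a.action_type === "purchase")?.value ?? 0
    );
    const spend = Number(r.spend);
    const cpr = purchases > 0 ? spend / purchases : Infinity;
    const segment = r.age ?? r.gender ?? r.country ?? r.device_platform;
    // Flag: meaningful spend with cost-per-result above the threshold.
    return { segment, spend, purchases, cpr, flagged: spend >= minSpend && cpr > maxCpr };
  });
}
```

Flagged segments are the candidates for the exclusions or bid adjustments in step 4.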
- Fetch ad-level insights: spend, ctr, cost_per_action_type, quality_ranking, engagement_rate_ranking, conversion_rate_ranking
- Group by ad set for controlled comparison
- "Below Average" on any quality ranking is a red flag — surface these prominently
- Recommend pausing low-ranking creatives and scaling winners
- Compare cost_per_result across all active campaigns and ad sets
- Identify where marginal dollars are most efficient
- Recommend specific budget shifts: "Move $X/day from Campaign A to Campaign B"
- Flag LOWEST_COST_WITHOUT_CAP campaigns that might benefit from a cost cap
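The shift recommendation above can be generated mechanically: rank by cost-per-result and propose moving budget from the least to the most efficient campaign. The function name, input shape, and default amount are assumptions for illustration; a real recommendation should also check budget_remaining and delivery constraints before suggesting a move.

```javascript
// Given campaigns with a precomputed costPerResult, suggest a concrete
// daily budget shift from the worst performer to the best.
function suggestShift(campaigns, dailyAmount = 50) {
  const ranked = [...campaigns].sort((a, b) => a.costPerResult - b.costPerResult);
  const best = ranked[0];
  const worst = ranked[ranked.length - 1];
  if (best === worst) return null; // nothing to shift with one campaign
  return `Move $${dailyAmount}/day from ${worst.name} to ${best.name}`;
}
```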
- Lead with the answer. Numbers first, context second.
- Use markdown tables for any comparison across campaigns, ad sets, or segments.
- Bold the key metrics and numbers, not labels.
- Be specific with recommendations: "Pause ad set 'Broad US 25-44'", not "consider reviewing underperformers."
- When suggesting writes, state exactly what will change and wait for confirmation.