Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Paid acquisition strategy, budget allocation, and avoiding common advertising mistakes across platforms
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Fresh install: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."

Upgrade: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
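The extraction step can be sketched as below. The archive name `skill-package.tar.gz` is a hypothetical stand-in for whatever file you actually downloaded, and the first few commands build a tiny stand-in archive so the extraction commands run end to end.

```shell
# Hypothetical archive name; substitute the file you actually downloaded.
# Build a stand-in archive so the extraction below is runnable as a demo.
mkdir -p pkg
printf '# Demo skill\n' > pkg/SKILL.md
tar -czf skill-package.tar.gz pkg

# The part that matters: extract, then read SKILL.md before installing.
mkdir -p extracted
tar -xzf skill-package.tar.gz -C extracted
cat extracted/pkg/SKILL.md
```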
Budget allocation
- Starting with daily budgets too low to exit the learning phase: platforms need 50+ conversions per week per ad set to optimize properly
- Spreading budget across too many campaigns early: concentrate spend to gather statistically significant data faster
- Killing ads before statistical significance: wait for a minimum of 100 clicks or 1,000 impressions before judging creative performance
- No contingency for scaling: reserve 20-30% of budget for doubling down on winners mid-month
- Treating ad spend as an expense, not an investment: track payback period, not just immediate ROAS
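The payback-period point above can be made concrete with two small helpers; the function names and dollar figures are illustrative, not from the source.

```python
def roas(revenue: float, spend: float) -> float:
    """Immediate return on ad spend: revenue divided by spend."""
    return revenue / spend

def payback_months(cac: float, monthly_margin: float) -> float:
    """Months of contribution margin needed to recover acquisition cost."""
    return cac / monthly_margin

# Illustrative: $120 to acquire a customer whose first order is $60 looks
# like a losing 0.5x ROAS, but at $40/month margin it pays back in 3 months.
print(roas(60, 120))            # 0.5
print(payback_months(120, 40))  # 3.0
```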
Metrics and measurement
- Optimizing for CTR instead of conversion: high CTR with low conversion means curiosity clicks that waste budget
- Trusting platform-reported conversions: attribution windows vary (7-day click, 1-day view), so always cross-reference with actual revenue
- Ignoring frequency: above 3-4 exposures per week, performance degrades and the audience burns out
- CPA tunnel vision: a $50 CPA is better than a $30 CPA if LTV is 3x higher for the $50 cohort
- Vanity reach metrics: 1M impressions mean nothing if zero target customers saw the ad
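The CPA-versus-LTV point reduces to a single ratio; a minimal sketch with illustrative numbers (the helper name is mine, not from the source):

```python
def ltv_cpa_ratio(ltv: float, cpa: float) -> float:
    """Dollars of lifetime value bought per dollar of acquisition cost."""
    return ltv / cpa

# Cohort A: cheap clicks, lower-value customers.
# Cohort B: pricier clicks, 3x the lifetime value.
cohort_a = ltv_cpa_ratio(ltv=90, cpa=30)   # 3.0x
cohort_b = ltv_cpa_ratio(ltv=270, cpa=50)  # 5.4x -- the $50 CPA cohort wins
print(cohort_a, cohort_b)
```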
Creative testing
- One variable per test: changing image AND copy simultaneously teaches nothing about what works
- Winning ads fatigue in 2-4 weeks: have the next creative batch ready before performance drops
- Static images often outperform video on cost-per-conversion: test both, don't assume video is better
- Headlines matter more than body copy: 80% of viewers read only the headline
- User-generated-content style outperforms polished brand creative in most direct-response contexts
Targeting and audiences
- Broad targeting often wins at scale: platform algorithms find converters better than manual interest stacking
- Lookalike audiences need a minimum of 1,000 source users: smaller seeds create unstable lookalikes
- Retargeting pools need 7-14 day recency caps: beyond that, intent has faded
- Exclude converters from prospecting campaigns: paying to show ads to existing customers wastes budget
- Test 1% vs 3% vs 5% lookalikes: tighter isn't always better; it depends on market size
Platform-specific notes
- Meta: the learning phase resets with significant edits, so avoid editing during the first 50 conversions
- Google: search intent beats display reach for direct response; display is for awareness, search is for capture
- TikTok: the first 3 seconds determine everything; the hook must be instant, with no slow brand intros
- LinkedIn: CPMs are 5-10x higher, so it is only viable for high-LTV B2B where one customer justifies a $200+ CPA
- YouTube: skippable ads teach you which hooks work; if viewers don't skip, your hook is strong
Scaling mistakes
- Increasing budget more than 20-30% per day destabilizes campaigns: gradual scaling preserves algorithm learning
- Duplicating winning ad sets fragments the audience and causes self-competition
- Scaling spend without scaling creative: the same ads shown to a larger audience fatigue faster
- Ignoring incrementality: some conversions would have happened organically, so true ROAS is lower than reported
- Geographic expansion without localization: the same ad in a new market often fails
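The 20-30% per-day guidance still compounds quickly, which is worth seeing in numbers; this sketch (helper name and figures are mine) caps the daily step at 30% per the rule of thumb:

```python
def scale_budget(start: float, daily_increase: float, days: int) -> float:
    """Project a daily budget that grows by a fixed percentage each day.

    Caps the step at 30%/day, per the rule of thumb that larger jumps
    destabilize the platform's optimization.
    """
    step = min(daily_increase, 0.30)
    return start * (1 + step) ** days

# At 20%/day, a $100 daily budget roughly doubles in only 4 days.
print(round(scale_budget(100, 0.20, 4), 2))
```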
Landing pages
- Ads are only half the equation: a 2x better landing page beats 2x more ad spend
- Message match: the ad's promise must appear above the fold on the landing page; a disconnect kills conversion
- Page load times over 3 seconds lose 50%+ of paid clicks: optimize speed before scaling spend
- One landing page per audience segment: generic pages convert worse than specific ones
- Track micro-conversions (scroll depth, time on page) when the sample size is too small for macro-conversions
Attribution
- Last-click attribution undervalues awareness campaigns: multi-touch attribution or holdout tests reveal true impact
- iOS 14.5+ broke tracking for ~40% of users: model conversions, don't rely on pixel data alone
- Offline conversions (calls, in-store) need manual import or an integration; otherwise CPA looks inflated
- View-through conversions are real but overvalued by platforms: weight click-through higher
- 7-day attribution windows miss longer B2B sales cycles: extend the windows or use CRM-based attribution
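The window-length point reduces to a date comparison: a conversion that lands after the window simply never gets credited to the click. A minimal sketch with a hypothetical helper name:

```python
from datetime import datetime, timedelta

def within_window(click_time: datetime, conv_time: datetime, days: int = 7) -> bool:
    """True if the conversion falls inside the click's attribution window."""
    return timedelta(0) <= conv_time - click_time <= timedelta(days=days)

click = datetime(2024, 1, 1)
# A B2B deal closing 20 days after the click is invisible to a 7-day
# window but credited under a 30-day one.
print(within_window(click, click + timedelta(days=20)))            # False
print(within_window(click, click + timedelta(days=20), days=30))   # True
```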
Testing discipline
- Always run one control ad: without a baseline, you don't know if the new creative is better or the platform just performed differently
- Minimum 2 weeks per test: weekday/weekend patterns affect results
- Document every test with hypothesis, result, and learning: institutional memory prevents repeat mistakes
- Test audiences before creatives: the wrong audience can't be saved by good creative
- Negative results are valuable: knowing what doesn't work prevents future waste
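One common way to judge whether a challenger creative actually beats the control (rather than the platform simply having a good week) is a two-proportion z-test. This stdlib-only sketch is a generic statistical tool, not a method prescribed by the source; the function name and conversion counts are illustrative.

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Challenger: 50 conversions / 1,000 clicks; control: 30 / 1,000.
z, p = two_proportion_z(50, 1000, 30, 1000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at p < 0.05
```

Note that with only a handful of conversions per arm, `p` will rarely clear 0.05, which is the quantitative version of "minimum 100 clicks or 1,000 impressions before judging".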