Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Context-aware recommendations. Learns preferences, researches options, anticipates expectations.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Context → Preferences → Research → Match → Recommend

Every recommendation requires: knowing the user + knowing the options. Check sources.md for where to find user context. Check categories.md for domain-specific factors.
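The five-step pipeline above can be sketched in code. This is a minimal sketch under stated assumptions: the in-memory `context_db` and `catalog` shapes are illustrative placeholders for what the skill actually gathers via sources.md and categories.md, and the ranking heuristic is deliberately simplified.

```python
def recommend(request: str, context_db: dict, catalog: list[dict]) -> list[str]:
    """Pipeline sketch: context -> preferences -> research -> match -> recommend.

    All data shapes here are hypothetical; the real skill reads sources.md
    and categories.md rather than in-memory dicts.
    """
    # 1. Context: pull stored signals relevant to this user/request.
    signals = context_db.get("signals", [])
    # 2. Preferences: distill signals into what the user has liked.
    liked = {s["item"] for s in signals if s["liked"]}
    # 3. Research: shortlist only candidates that are actually accessible.
    viable = [c for c in catalog if c.get("available", True)]
    # 4. Match: rank by alignment with past likes (stable sort keeps order for ties).
    ranked = sorted(viable, key=lambda c: c["name"] in liked, reverse=True)
    # 5. Recommend: present at most three options.
    return [c["name"] for c in ranked[:3]]
```

In practice each numbered step would be its own tool call or lookup; the point is only that matching happens after both the user context and the option set are known.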
Before recommending, search user context. See sources.md for full source list. Minimum output: 3-5 relevant user signals before proceeding. If insufficient, ask targeted questions.
From gathered context, extract:

| Dimension | Question |
| --- | --- |
| Values | What matters most? (Quality, price, speed, novelty, safety) |
| Constraints | Hard limits? (Budget, time, dietary, ethical) |
| History | What worked? What disappointed? |
| Mood | Adventurous or safe? Exploring or comfort? |

Output: 3-5 bullet preference profile for this request.
Now, and only now, research candidates:
- Breadth first: don't anchor on the first good option
- Source quality: prioritize reviews, ratings, expert opinions
- Recency: check if information is current
- Availability: confirm options are actually accessible

Output: Shortlist of 3-7 viable candidates with key attributes.
Score each candidate against the preference profile:

Candidate → Values alignment + Constraint fit + History match + Mood fit

Disqualify anything that violates hard constraints. Rank by total alignment, not just one dimension.
Present 1-3 recommendations:

RECOMMENDATION: [Option]
WHY: Matches [preference], avoids [constraint]
TRADEOFF: Less [X] than [Alternative]
CONFIDENCE: [Level], based on [data quality]
After each recommendation:
- Track outcome: accepted? Modified? Rejected?
- Update preferences: acceptance = reinforcement, rejection = adjustment
- Note exceptions: "Normally X, but for Y context preferred Z"
- Store learnings in memory for future recommendations.
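A minimal sketch of the reinforcement/adjustment bookkeeping, assuming a simple weight-per-option scheme plus an outcome log for spotting exceptions later. The weight deltas are invented for illustration; the skill only says acceptance reinforces and rejection adjusts:

```python
def update_preferences(profile: dict, option: str, outcome: str) -> dict:
    """Update stored preferences after a recommendation outcome.

    `outcome` is "accepted", "modified", or "rejected". The delta values
    are an illustrative assumption, not prescribed by the skill.
    """
    weights = profile.setdefault("weights", {})
    delta = {"accepted": 1.0, "modified": 0.25, "rejected": -1.0}[outcome]
    # Acceptance reinforces the option; rejection adjusts it downward.
    weights[option] = weights.get(option, 0.0) + delta
    # Keep a raw outcome log so context-dependent exceptions
    # ("normally X, but for Y preferred Z") can be detected later.
    profile.setdefault("log", []).append((option, outcome))
    return profile
```

Persisting `weights` and `log` to the agent's memory is what makes the next recommendation better than the last.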
- Projecting: your taste ≠ their taste
- Recency bias: last choice isn't always preference
- Ignoring context: Tuesday lunch ≠ anniversary dinner
- Over-filtering: too many constraints = nothing fits
- Stale data: preferences evolve, verify periodically

Recommendations are predictions. More context = better predictions.