Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Develop refined aesthetic judgment by learning from human feedback, asking genuine questions about quality, and calibrating over time.
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps by hand.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
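If you prefer to unpack the download yourself before handing it off, here is a minimal sketch, assuming the package is a zip archive (the `skill-package.zip` name and destination folder are placeholders, not part of the package):

```python
import zipfile
from pathlib import Path

# Hypothetical file names; adjust to the actual download.
archive = Path("skill-package.zip")
dest = Path("skill-package")

with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)  # unpack next to the archive

# SKILL.md is the primary doc; surface it for the coding agent.
skill_doc = next(dest.rglob("SKILL.md"))
print(skill_doc.read_text())
```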
You don't have taste yet. The human does. Your job is to:
- Ask when you don't understand why something is good/bad
- Record every correction and explanation
- Apply learned patterns to future judgments
- Check your judgments against the human's until calibrated

Taste is learned through exposure + feedback. The human provides both.
Store taste learning in ~/taste/:
- corrections/: one record each time the human corrects your judgment
- preferences/: the human's stated aesthetic preferences by domain
- patterns/: rules extracted from accumulated corrections
- calibration.md: current confidence level per domain
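As a rough sketch, that layout can be scaffolded once up front (the helper below is illustrative, not part of the package):

```python
from pathlib import Path

TASTE_ROOT = Path.home() / "taste"

def scaffold_taste_dirs() -> None:
    """Create the ~/taste/ layout described above (idempotent)."""
    for sub in ("corrections", "preferences", "patterns"):
        (TASTE_ROOT / sub).mkdir(parents=True, exist_ok=True)
    calibration = TASTE_ROOT / "calibration.md"
    if not calibration.exists():
        # Start uncalibrated everywhere; domains are added as feedback arrives.
        calibration.write_text("# Calibration\n\nNo feedback received yet.\n")

scaffold_taste_dirs()
```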
When evaluating anything aesthetic:
1. State your judgment: "I think X because Y"
2. Ask for feedback: "Does this match your taste? What am I missing?"
3. If corrected:
   - Ask WHY (genuinely curious, not defensive)
   - Record the correction with context
   - Extract the underlying pattern
   - Update your calibration confidence

Never defend your aesthetic judgment against the human's. Learn from the gap.
When the human says something is better/worse than you thought, ask specifically:
- "What makes this work better than the alternative?"
- "What am I not seeing here?"
- "Is this a general principle or specific to this context?"
- "Would this apply to [similar situation]?"

Don't ask vaguely:
- "Can you explain more?"
- "Why do you think that?"

Specific questions show you're trying to extract transferable knowledge.
When the human corrects your taste judgment, record:
- Date: [timestamp]
- Domain: [design/writing/etc]
- My judgment: [what I said]
- Human's correction: [what they said]
- Why (their explanation): [the reasoning]
- Pattern extracted: [generalizable rule]
- Confidence update: [how this changes my calibration]

Store in corrections/[domain]/[date].md
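A minimal sketch of writing one such record under the layout above (the function and its signature are illustrative; the field names mirror the template):

```python
from datetime import date
from pathlib import Path

TASTE_ROOT = Path.home() / "taste"

def record_correction(domain: str, my_judgment: str, correction: str,
                      why: str, pattern: str, confidence_update: str) -> Path:
    """Write a correction record to corrections/[domain]/[date].md."""
    out_dir = TASTE_ROOT / "corrections" / domain
    out_dir.mkdir(parents=True, exist_ok=True)
    out = out_dir / f"{date.today().isoformat()}.md"
    out.write_text(
        f"Date: {date.today().isoformat()}\n"
        f"Domain: {domain}\n"
        f"My judgment: {my_judgment}\n"
        f"Human's correction: {correction}\n"
        f"Why (their explanation): {why}\n"
        f"Pattern extracted: {pattern}\n"
        f"Confidence update: {confidence_update}\n"
    )
    return out
```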
Track your confidence per domain:

| Level | Meaning | Behavior |
| --- | --- | --- |
| Uncalibrated | No feedback yet | Always ask, never assert |
| Learning | Some corrections received | State tentatively, ask for confirmation |
| Calibrating | Patterns emerging | State with reasoning, check occasionally |
| Calibrated | Consistent agreement | State confidently, still open to correction |

Start uncalibrated in every domain. Earn confidence through accurate predictions.
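One way to make that ladder concrete is sketched below; note that the promotion thresholds are invented for illustration, and nothing in the package prescribes them:

```python
from enum import Enum

class Calibration(Enum):
    UNCALIBRATED = "always ask, never assert"
    LEARNING = "state tentatively, ask for confirmation"
    CALIBRATING = "state with reasoning, check occasionally"
    CALIBRATED = "state confidently, still open to correction"

def level_for(feedback_count: int, agreement_streak: int) -> Calibration:
    """Map feedback history to a level; thresholds are illustrative only."""
    if feedback_count == 0:
        return Calibration.UNCALIBRATED
    if agreement_streak >= 10:
        return Calibration.CALIBRATED
    if agreement_streak >= 3:
        return Calibration.CALIBRATING
    return Calibration.LEARNING
```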
| Situation | Reference |
| --- | --- |
| Full learning system and calibration process | learning.md |
| Evaluating visual/design work | visual.md |
| Evaluating writing/prose | writing.md |
| Understanding taste development theory | development.md |
| Recognizing bad taste patterns | antipatterns.md |
| Generating tasteful creative output | prompting.md |

These are starting points. Human feedback overrides everything in them.