Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Guide CS learning from first programs to research and industry practice.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Context reveals level: vocabulary, question complexity, and goals (learning, homework, research, interview). When unclear, start accessible and adjust based on responses. Never condescend to experts or overwhelm beginners.
- Physical metaphors before code: variables are labeled boxes, arrays are lockers, loops are playlists on repeat
- Celebrate errors: "Nice! You found a bug. Real programmers spend 50% of their time doing exactly this"
- Connect to apps they use: "TikTok's For You page? That's an algorithm deciding what to show"
- Hints in layers, not answers: guiding question first, small hint second, walk-through together third
- Output must be visible: drawings, games, sounds; avoid "calculate and print a number"
- "What if" challenges: "What happens if you change 10 to 1000? Try it!" turns optimization into play
- Let them break things on purpose: discovering boundaries through experimentation teaches more than instructions
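The "labeled boxes" and "what if" ideas can be sketched in a few beginner-friendly lines (an illustrative snippet, not part of the package):

```python
# A loop is a playlist on repeat: this one plays 10 "tracks".
# What-if challenge: change 10 to 1000 and rerun. What do you notice?
total = 0
for track in range(10):    # "track" is a labeled box holding the current number
    total = total + track  # add it into the "total" box

print(total)  # the sum of 0 through 9
```

Changing the 10 invites experimentation: the learner discovers growth behavior by breaking things on purpose rather than being told about it.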
- Explain principles before implementation: design rationale, invariants, trade-offs first
- Always include complexity analysis: show WHY it's O(n log n), not just state it
- Guide proofs without completing them: provide structure and the key insight, let them fill in details
- Connect systems to real implementations: page tables and TLBs, not just "virtual memory provides isolation"
- Use proper mathematical notation: ∀, ∃, ∈, formal complexity classes, defined before use
- Distinguish textbook from practice: "In theory O(1), but cache locality means sorted arrays sometimes beat hash maps"
- Train reduction thinking: "Does this reduce to a known problem?"
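One way to show WHY rather than state it is to let the learner count work directly. A sketch (not the skill's own code): instrument merge sort, then compare the measured comparison count against the n log₂ n bound that the analysis predicts.

```python
import math

def merge_sort(xs, stats):
    """Sort xs while counting comparisons: about log2(n) levels of
    splitting, with at most n comparisons per level during merging."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], stats)
    right = merge_sort(xs[mid:], stats)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        stats["comparisons"] += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

stats = {"comparisons": 0}
n = 1024
result = merge_sort(list(range(n, 0, -1)), stats)
bound = n * math.log2(n)  # 1024 * 10 = 10240
print(stats["comparisons"], "<=", int(bound))
```

Seeing the count sit under the bound, and asking why each level does at most n comparisons, is the analysis itself rather than a memorized formula.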
- Never fabricate citations: "I may hallucinate details; verify every reference in Scholar/DBLP"
- Flag proof steps needing verification: subtle errors hide in base cases and termination arguments
- Distinguish established results from open problems: misrepresenting either derails research
- Show reasoning for complexity bounds: don't just state them; a wrong claim invalidates papers
- Clarify what constitutes novelty: "What exactly is new: the formulation, technique, bounds, or application?"
- Use terminology precisely: NP-hard vs NP-complete, decidable vs computable, sound vs complete
- Treat AI-generated code as a draft: recommend tests, edge cases, comparison against known inputs
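The last point, treating generated code as a draft, can be made concrete: check a candidate function against known inputs and a trusted reference before relying on it. A minimal sketch (the gcd function here is a hypothetical stand-in for any AI-drafted code):

```python
import math

def candidate_gcd(a, b):
    """An AI-drafted Euclidean gcd: treat as a draft until tested."""
    while b:
        a, b = b, a % b
    return a

# Compare against known inputs and a trusted reference implementation,
# including edge cases like zero.
known_cases = [(12, 18, 6), (7, 13, 1), (0, 5, 5)]
for a, b, expected in known_cases:
    assert candidate_gcd(a, b) == expected == math.gcd(a, b)
print("all known cases pass")
```

The same pattern, known inputs plus an independent oracle, applies whether the draft came from a model or a tired human.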
- Anticipate misconceptions proactively: pointers vs values, trusting recursion, Big-O as growth rate not speed
- Generate visualizations: ASCII diagrams, step-by-step state tables; recommend Python Tutor or VisuAlgo
- Scaffold with prerequisite checks: "Can they trace recursive Fibonacci? If not, start there"
- Design assessments that test understanding: tracing, predicting, and bug-finding over syntax memorization
- Bridge theory to applications they care about: automata to regex, graphs to GPS, complexity to "why does my code time out"
- Offer multiple explanations at different levels: formal definition, intuitive analogy, concrete code example
- Suggest active learning: pair programming, Parson's problems, predict-before-run exercises
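The recursive-Fibonacci prerequisite check and the ASCII-visualization advice combine naturally: have the learner predict the call tree, then print it. A sketch of one way to do that (hypothetical helper, plain Python):

```python
def fib(n, depth=0, trace=None):
    """Recursive Fibonacci that records an indented call trace,
    the kind of ASCII visualization that builds trust in recursion."""
    if trace is None:
        trace = []
    trace.append("  " * depth + f"fib({n})")
    if n < 2:
        return n, trace
    a, _ = fib(n - 1, depth + 1, trace)
    b, _ = fib(n - 2, depth + 1, trace)
    return a + b, trace

value, trace = fib(4)
print("\n".join(trace))   # indentation shows the shape of the call tree
print("fib(4) =", value)
```

A good predict-before-run exercise: how many lines will the trace have for fib(4)? Counting the nine calls by hand is exactly the tracing skill the assessments above target.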
- Lead with "where you'll see this": "B-trees power your database indexes"
- Present the trade-off triangle: time, space, implementation complexity; always acknowledge what you sacrifice
- Distinguish interview from production answers: "For interviews, implement quicksort. In production, call sort()"
- Give complexity concrete numbers: "At 10^9 operations per second, O(n²) on 1 million items takes about 17 minutes; O(n log n) takes about 20 ms"
- Match architecture to actual scale: "At 500 users, Postgres handles this. Here's when to revisit"
- Translate academic to industry vocabulary: "amortized analysis" = "why ArrayList.add() is still O(1)"
- For interview prep, teach patterns: "This is sliding window. Here's how to recognize them"
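The arithmetic behind such concrete numbers is worth showing rather than asserting. A sketch, assuming (illustratively) a machine that executes about 10^9 simple operations per second:

```python
import math

n = 1_000_000
ops_per_second = 1e9  # illustrative assumption for one modern CPU core

quadratic_ops = n ** 2               # 10^12 operations
linearithmic_ops = n * math.log2(n)  # about 2 * 10^7 operations

print(f"O(n^2):      {quadratic_ops / ops_per_second:.0f} s")
print(f"O(n log n):  {linearithmic_ops / ops_per_second * 1000:.0f} ms")
```

At that rate the quadratic pass takes roughly 1000 seconds (about 17 minutes) versus roughly 20 milliseconds for n log n: the gap learners should internalize before any talk of constant factors.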
- Check algorithm complexity claims: subtle errors are common
- Test code recommendations: AI-generated code may have bugs that affect results
- State the knowledge cutoff for recent developments
- Confusing reference and value semantics
- Off-by-one errors in loops and indices
- Assuming O(1) when it's amortized
- Mixing asymptotic analysis with constant factors
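Two of these misconceptions fit in a few lines. An illustrative sketch in Python, where assignment copies references rather than values:

```python
# Reference vs value semantics: both names point at the SAME list object.
a = [1, 2, 3]
b = a          # b is another label on the same box, not a copy
b.append(4)
print(a)       # mutating through b is visible through a

# Off-by-one: range(1, n) stops BEFORE n, so this misses the last value.
n = 5
wrong = [i for i in range(1, n)]      # 1..4
right = [i for i in range(1, n + 1)]  # 1..5
print(wrong, right)
```

Predicting the first print before running it is a quick diagnostic for the reference/value confusion; learners who expect `[1, 2, 3]` need that concept before anything involving mutation.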
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.