Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
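For the manual-import flow above, extraction can be sketched in a few lines. This is an illustrative helper, not part of the package: the function name and the archive/destination names are placeholders for whatever you downloaded.

```python
import zipfile
from pathlib import Path

def install_from_archive(archive: str, dest: str) -> str:
    """Extract the downloaded skill package and return SKILL.md's text.

    `archive` and `dest` are placeholders for the file you downloaded and
    the folder you want to extract into; they are not paths shipped with
    the package.
    """
    dest_path = Path(dest)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_path)
    # SKILL.md is the primary doc per the requirements above.
    return (dest_path / "SKILL.md").read_text()
```

After extraction, hand the returned SKILL.md contents (plus README.md) to your coding agent as the install brief.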
Deep interview analysis using dynamic expert routing. Automatically selects top domain thinkers based on role type to distinguish genuine capability from performance, weighting Battle Scars over Methodology Recitation. Applicable to any professional position, including product management, engineering, design, operations, sales, and data science.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Core Mission: Transform interview transcripts into deep insights.

Core Logic: Don't listen to what candidates "say" (Methodology Recitation); observe what they have "done" (Battle Scars) and "how they think" (First Principles).
Based on role type and evaluation dimensions, automatically select the best combination of minds for that domain.

Three-Step Expert Selection:
1. Identify the core competency domain: Product / Engineering / Operations / Design / Sales / Data Science / ...
2. Match top domain thinkers: recognized methodology masters or practitioners in the field.
3. Combine hiring experts: Geoff Smart (fact-checking) + Lou Adler (competency validation).
| Role Type | Domain Expert (Methodology) | Hiring Expert (Validation) | Rationale |
| --- | --- | --- | --- |
| Product Manager | Marty Cagan / Julie Zhuo | Geoff Smart | Product Sense + Fact Check |
| Software Engineer | Linus Torvalds / John Carmack | Lou Adler | Engineering Judgment + Results Validation |
| Growth Hacker | Sean Ellis / Brian Balfour | Geoff Smart | Growth Methodology + Metrics Verification |
| UX Designer | Don Norman / Jony Ive | Lou Adler | UX Principles + Portfolio Validation |
| Data Scientist | Andrew Ng / DJ Patil | Geoff Smart | Technical Depth + Project Verification |
| Operations | Sheryl Sandberg / Reid Hoffman | Lou Adler | Scale Operations + Results Focus |
| Sales/BD | Aaron Ross / Jill Konrath | Geoff Smart | Sales Methodology + Performance Verification |

> [!IMPORTANT]
> Flexibility Principle: The table above is for reference only. Flexibly select the most appropriate expert combination based on the specific role and candidate background.
> Encourage Innovation: If you believe a non-mainstream expert is better suited to evaluate this candidate, make that choice and explain your rationale.
> Core Question: "Who can best identify imposters in this role? Whose framework best validates core competencies?"
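The routing table above can be sketched as a simple lookup with a fallback. This is purely illustrative: the function name and data structure are assumptions, not the skill's actual implementation, and per the Flexibility Principle a caller may override the result.

```python
# Illustrative sketch of dynamic expert routing (hypothetical helper,
# not shipped with the skill package).
EXPERT_ROUTES = {
    "product manager": (["Marty Cagan", "Julie Zhuo"], "Geoff Smart"),
    "software engineer": (["Linus Torvalds", "John Carmack"], "Lou Adler"),
    "growth hacker": (["Sean Ellis", "Brian Balfour"], "Geoff Smart"),
    "ux designer": (["Don Norman", "Jony Ive"], "Lou Adler"),
    "data scientist": (["Andrew Ng", "DJ Patil"], "Geoff Smart"),
    "operations": (["Sheryl Sandberg", "Reid Hoffman"], "Lou Adler"),
    "sales/bd": (["Aaron Ross", "Jill Konrath"], "Geoff Smart"),
}

def select_experts(role: str):
    """Return (domain_experts, hiring_expert) for a role.

    Unlisted roles fall back to a generic pair; the table is a reference,
    not a rule, so callers may substitute a better-suited expert.
    """
    return EXPERT_ROUTES.get(role.lower(), (["<domain thinker>"], "Geoff Smart"))
```

For example, `select_experts("Product Manager")` yields the Cagan/Zhuo + Geoff Smart combination from the table.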
- Timeline Reconstruction: Connect experiences scattered across multiple interview rounds, checking for logical gaps.
- Consistency Verification: Compare different versions of the same story told to different interviewers (e.g., reasons for leaving, project failures).
- Red Flag Annotation: Mark all vague titles (e.g., SPM), exaggerated data, and attribution fallacies ("it was all the market's/technology's fault").
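As a hedged illustration, the Consistency Verification step above could be modeled as comparing per-round versions of the same story and flagging topics that diverge. All names here are hypothetical, not part of the skill itself.

```python
from dataclasses import dataclass

@dataclass
class StoryVersion:
    round_name: str  # e.g. "Round 1 (HR screen)"
    topic: str       # e.g. "reason for leaving"
    summary: str     # what the candidate said in that round

def find_inconsistencies(versions: list[StoryVersion]) -> list[str]:
    """Flag topics where the candidate's account differs between rounds."""
    by_topic: dict[str, list[StoryVersion]] = {}
    for v in versions:
        by_topic.setdefault(v.topic, []).append(v)
    flags = []
    for topic, vs in by_topic.items():
        if len({v.summary for v in vs}) > 1:  # more than one distinct account
            rounds = ", ".join(v.round_name for v in vs)
            flags.append(f"'{topic}' differs across {rounds}")
    return flags
```

A real analysis would compare meaning rather than exact strings, but the shape of the check is the same: group by topic, then flag divergence.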
Tactic: Select 1-2 core cases (e.g., a startup project, the most challenging project) for microscopic analysis.

Truth Extraction:
- Methodology Check: Is the candidate reciting SOPs (MECE, SWOT) or applying first principles?
- Solution Bias Check: Did they jump straight to "add features," or first conduct "value validation"?
- Technical Boundary Check: When facing technical challenges, did they "deflect blame" or "anticipate" the risks?
Subject: Evaluate interviewer (you/colleagues) performance.

Dimensions:
- Depth: Did they probe at critical moments, or let them pass?
- Bias: Did they draw conclusions too early or ask leading questions?
- Bar: Did they maintain A Player standards?
Generate Markdown cards using the following standard templates, saved to people/{candidate_name}/analysis/. Be sure to read the template content before filling in analysis results.

- Profile (Comprehensive Portrait)
  - Template path: templates/profile_template.md
  - Purpose: Fact checking, red flag scanning, core competency assessment.
- Insight (Deep Analysis)
  - Template path: templates/insight_template.md
  - Purpose: Deep dive into specific domains (e.g., AI Capability, Product Strategy).
- Meta-Analysis (Interviewer Review)
  - Template path: templates/evaluation_template.md
  - Purpose: Evaluate interviewer performance and organizational recommendations.
- Structure Note (Hub Document)
  - Template path: templates/structure_note_template.md
  - Purpose: Serves as the hub connecting all analysis cards above, closing the decision loop.
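The output layout above can be sketched as follows. The function name and card filenames are illustrative assumptions; only the template paths and the people/{candidate_name}/analysis/ layout come from the skill docs.

```python
from pathlib import Path

# Card name -> template file, per the skill's documented layout.
TEMPLATES = {
    "profile": "templates/profile_template.md",
    "insight": "templates/insight_template.md",
    "evaluation": "templates/evaluation_template.md",
    "structure_note": "templates/structure_note_template.md",
}

def prepare_analysis_dir(candidate: str, root: str = ".") -> Path:
    """Create people/{candidate}/analysis/ and seed it with one card per template.

    A real run would read each template first and fill in analysis results;
    this sketch only reproduces the directory layout.
    """
    out = Path(root) / "people" / candidate / "analysis"
    out.mkdir(parents=True, exist_ok=True)
    for card, template in TEMPLATES.items():
        src = Path(root) / template
        body = src.read_text() if src.exists() else f"<!-- fill from {template} -->\n"
        (out / f"{card}.md").write_text(body)
    return out
```

The structure note card is then the hub that links the other three into a single decision document.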
- "Analyze Li Yashuang's three interview rounds, focusing on AI capabilities."
- "Review this interview to see where we interviewers did well and where we missed opportunities."
- "Use Marty Cagan's perspective to analyze this candidate's product thinking."