Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Organize, document, and maintain critical organizational knowledge with audits, taxonomy, templates, and risk management to preserve expertise and improve fi...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Turn tribal knowledge into searchable, maintained organizational intelligence. Stop losing expertise when people leave.
Score each dimension 1-5 (1 = nonexistent, 5 = excellent):

| Dimension | Score | Evidence |
|---|---|---|
| Documentation coverage | | % of processes documented |
| Findability | | Can new hire find answers in <5 min? |
| Freshness | | % of docs updated in last 6 months |
| Contribution culture | | % of team actively contributing |
| Onboarding effectiveness | | Time to productivity for new hires |
| Knowledge retention | | Impact when someone leaves |
| Cross-team sharing | | Teams accessing other teams' knowledge |

Total Score: ___/35

Interpretation:
- 28-35: Mature → optimize and maintain
- 21-27: Developing → fill gaps systematically
- 14-20: Basic → needs foundational work
- 7-13: Critical → knowledge is at risk
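A minimal sketch of how the assessment above could be tallied programmatically. The dimension names and score bands mirror the table; the function name `interpret` and the all-threes example are illustrative assumptions, not part of the skill.

```python
# Sum the seven 1-5 dimension scores and map the total to a maturity band.
DIMENSIONS = [
    "Documentation coverage", "Findability", "Freshness",
    "Contribution culture", "Onboarding effectiveness",
    "Knowledge retention", "Cross-team sharing",
]

BANDS = [  # (minimum total, interpretation) — checked highest first
    (28, "Mature: optimize and maintain"),
    (21, "Developing: fill gaps systematically"),
    (14, "Basic: needs foundational work"),
    (7,  "Critical: knowledge is at risk"),
]

def interpret(scores: dict[str, int]) -> tuple[int, str]:
    """Return (total out of 35, interpretation band)."""
    total = sum(scores[d] for d in DIMENSIONS)
    for floor, label in BANDS:
        if total >= floor:
            return total, label
    return total, "Below assessment range"

example = {d: 3 for d in DIMENSIONS}  # a team scoring 3 everywhere
print(interpret(example))  # (21, 'Developing: fill gaps systematically')
```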
```yaml
knowledge_risk:
  single_points_of_failure:
    - person: "[Name]"
      unique_knowledge: "[What only they know]"
      risk_if_leaves: "high|medium|low"
      extraction_priority: 1
      extraction_method: "interview|shadowing|recording|pair-work"
  undocumented_processes:
    - process: "[Name]"
      frequency: "daily|weekly|monthly|quarterly"
      complexity: "high|medium|low"
      current_owner: "[Name]"
      documentation_priority: 1
  tribal_knowledge:
    - topic: "[What people 'just know']"
      holders: ["[Name1]", "[Name2]"]
      impact_area: "[What breaks without it]"
      capture_method: "interview|workshop|write-up"
```
For each single-point-of-failure person:

1. Context: "I'm documenting [X] so the team isn't dependent on any one person. This protects you too: fewer interruptions."
2. Process walk: "Walk me through [X] from start to finish. I'll record/note."
3. Decision points: "Where do you make judgment calls? What factors do you consider?"
4. Edge cases: "What are the weird situations that come up? How do you handle them?"
5. Tools & access: "What tools, credentials, or access do you need?"
6. History: "Why is it done this way? What was tried before?"
7. Gotchas: "What are the things that trip people up?"

Output format: Write up as a runbook (see Phase 3 templates).
```yaml
knowledge_taxonomy:
  # Level 1: Knowledge Types
  types:
    how_to:
      description: "Step-by-step procedures and guides"
      examples: ["Deploy to production", "Process a refund", "Set up dev environment"]
      template: "runbook"
    reference:
      description: "Facts, specs, configurations to look up"
      examples: ["API endpoints", "Config values", "Vendor contacts", "Pricing tables"]
      template: "reference_doc"
    explanation:
      description: "Why things work the way they do"
      examples: ["Architecture decisions", "Policy rationale", "Historical context"]
      template: "explainer"
    decision:
      description: "How to make specific judgment calls"
      examples: ["Escalation criteria", "Approval thresholds", "Priority frameworks"]
      template: "decision_tree"
    troubleshooting:
      description: "Diagnosis and fix for known problems"
      examples: ["Error codes", "Common failures", "Debug procedures"]
      template: "troubleshooting_guide"
  # Level 2: Domains (customize per org)
  domains:
    - engineering
    - product
    - sales
    - operations
    - finance
    - hr_people
    - customer_success
    - security
    - legal_compliance
  # Level 3: Topics (within each domain)
  # Example for engineering:
  engineering_topics:
    - architecture
    - deployment
    - monitoring
    - incident_response
    - development_workflow
    - testing
    - security
    - infrastructure
```
- Maximum 3 levels deep → if deeper, reorganize
- One canonical location per topic → link, don't duplicate
- Every page has an owner → no orphan docs
- Every page has a freshness date → reviewed within 6 months or flagged
- Cross-references over duplication → "See [X]" beats copy-paste
- Search-first design → assume people search, not browse
`[DOMAIN]-[TYPE]-[TOPIC]-[SPECIFICS]`

Examples:
- `eng-howto-deploy-production`
- `eng-ref-api-endpoints-v3`
- `sales-decision-pricing-enterprise`
- `ops-troubleshoot-billing-failed-charges`
- `product-explain-auth-architecture`
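A small sketch of validating IDs against the `[DOMAIN]-[TYPE]-[TOPIC]-[SPECIFICS]` convention above. The domain and type abbreviations are taken from the examples; the function name `valid_doc_id` is an illustrative assumption.

```python
import re

# Abbreviations inferred from the example IDs above — adjust per org.
DOMAINS = {"eng", "product", "sales", "ops", "finance", "hr", "cs", "security", "legal"}
TYPES = {"howto", "ref", "explain", "decision", "troubleshoot"}

def valid_doc_id(doc_id: str) -> bool:
    """Check domain, type, and that at least a topic segment follows."""
    parts = doc_id.lower().split("-")
    if len(parts) < 3:  # need at least DOMAIN-TYPE-TOPIC
        return False
    domain, doc_type = parts[0], parts[1]
    if domain not in DOMAINS or doc_type not in TYPES:
        return False
    # remaining segments: lowercase alphanumerics only
    return all(re.fullmatch(r"[a-z0-9]+", p) for p in parts[2:])

print(valid_doc_id("eng-howto-deploy-production"))  # True
print(valid_doc_id("deploy-notes"))                 # False
```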
```yaml
knowledge_base:
  homepage:
    - quick_links:       # Top 10 most-accessed pages
    - recently_updated:  # Last 10 changes
    - needs_review:      # Stale docs flagged
  by_audience:
    new_hire: "[Onboarding path → essential reading list]"
    engineer: "[Dev setup → architecture → deployment → debugging]"
    manager: "[Policies → processes → templates → reports]"
    customer_facing: "[Product knowledge → troubleshooting → escalation]"
  by_domain: "[Taxonomy Level 2 domains]"
  by_type: "[How-to | Reference | Explanations | Decisions | Troubleshooting]"
```
```markdown
# [Subject] Reference

**Owner:** [Name]
**Last verified:** [YYYY-MM-DD]
**Scope:** [What this covers and doesn't cover]

## Overview
[1-2 sentence summary of what this reference contains]

## [Main content organized as tables, lists, or structured data]
| Item | Value | Notes |
|------|-------|-------|
|      |       |       |

## Quick Lookup
[Most frequently needed items at the top]

## Change Log
| Date | Change | By |
|------|--------|----|
|      |        |    |
```
```markdown
# Troubleshooting: [System/Process Name]

**Owner:** [Name]
**Last verified:** [YYYY-MM-DD]

## Quick Diagnostic
[Flowchart as text]
Is [X] happening?
→ YES: Go to Problem A
→ NO: Is [Y] happening?
  → YES: Go to Problem B
  → NO: Go to Problem C

## Problem A: [Symptom Description]

**Likely causes (in order of probability):**
1. [Most common cause]
2. [Second most common]
3. [Rare but possible]

**Fix for Cause 1:**
[Step-by-step resolution]

**Fix for Cause 2:**
[Step-by-step resolution]

**Escalation:** If none of the above work → [who to contact, what info to provide]

## Problem B: [Next symptom]
[Same structure]
```
The 4C Test (every document must pass all four):
- **Clear** → Would a new hire understand this? No jargon without definitions.
- **Correct** → Has this been verified by doing/testing? Not from memory.
- **Current** → Does this reflect how things work TODAY? Not 6 months ago.
- **Concise** → Can anything be cut without losing meaning? Cut it.

Formatting rules:
- Headers: action-oriented ("Deploy to Production", not "Production Deployment")
- Steps: numbered, one action per step, imperative mood
- Warnings: callout boxes, before the step (not after)
- Code/commands: exact, copy-pasteable, tested
- Screenshots: only if truly needed (they go stale fast)
- Links: to canonical sources, never paste full URLs inline
```yaml
contribution_workflow:
  create:
    trigger: "New knowledge identified (incident learnings, process change, new tool)"
    steps:
      - choose_template: "Match content type to template"
      - draft: "Write using template structure"
      - self_review: "Run 4C Test checklist"
      - peer_review: "SME validates accuracy"
      - publish: "Add to knowledge base in correct location"
      - announce: "Notify relevant teams/channels"
  update:
    trigger: "Existing doc is wrong, incomplete, or stale"
    steps:
      - flag: "Mark as needs-update with reason"
      - update: "Make changes, update 'Last verified' date"
      - review: "If significant change, get peer review"
      - publish: "Update in place"
      - notify: "If behavioral change, announce"
  retire:
    trigger: "Doc no longer relevant (deprecated system, changed process)"
    steps:
      - mark: "Status: Deprecated, add redirect to replacement"
      - archive: "Move to archive after 30 days"
      - redirect: "Ensure all links point to replacement"
```
Making it easy (remove friction):
- Templates pre-filled with structure
- "Quick capture" channel → dump raw notes, someone structures later
- Post-incident: "What would have helped?" → becomes a doc
- Post-onboarding: new hire documents what was confusing
- Meeting notes → action items include "document [X]"

Making it visible (social proof):
- Monthly "top contributors" shoutout
- "Docs champion" rotating role → each sprint, one person owns doc health
- Include documentation in performance criteria
- Knowledge sharing in team meetings (5-min "TIL" segment)

Making it expected (cultural norms):
- "If you answered a question twice, write it down"
- PR template includes "Documentation updated? Y/N"
- Incident postmortem includes "Docs to create/update"
- Onboarding feedback includes "What couldn't you find?"
Every document should be findable by:
- Title → descriptive, includes key terms
- Tags → domain, type, audience, technology
- Synonyms → include alternate terms people might search
- Problem description → "When [X] happens" phrasing

Tag schema:

```yaml
document_tags:
  domain: "[engineering|product|sales|ops|finance|hr|cs|security|legal]"
  type: "[howto|reference|explanation|decision|troubleshooting]"
  audience: "[all|engineering|management|customer-facing|new-hire]"
  technology: "[list relevant tools/systems]"
  status: "[current|needs-review|deprecated]"
  difficulty: "[beginner|intermediate|advanced]"
```
- Contextual links → related docs linked at bottom of every page
- FAQ collections → per-domain "frequently asked" with links to full docs
- Onboarding paths → curated reading lists by role
- Slack/chat bot → "Ask the KB" searches and returns relevant docs
- Weekly digest → "New & updated docs this week" email/message
- Error-page links → application errors link to troubleshooting docs
Prioritize search results by:
- Freshness → recently updated > stale
- Verification → peer-reviewed > unreviewed
- Usage → frequently accessed > rarely accessed
- Completeness → fully structured > quick notes
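One way the four ranking signals above could combine into a single score. The weights, the `Doc` fields, and the decay/saturation constants are illustrative assumptions, not prescribed by the skill.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    title: str
    last_verified: date   # freshness signal
    peer_reviewed: bool   # verification signal
    views_90d: int        # usage signal
    uses_template: bool   # completeness signal

def rank_score(doc: Doc, today: date) -> float:
    """Higher is better: fresh, reviewed, used, fully structured docs first."""
    age_days = (today - doc.last_verified).days
    freshness = max(0.0, 1.0 - age_days / 365)  # decays to 0 over a year
    usage = min(1.0, doc.views_90d / 100)       # saturates at 100 views
    return (0.4 * freshness
            + 0.25 * (1.0 if doc.peer_reviewed else 0.0)
            + 0.2 * usage
            + 0.15 * (1.0 if doc.uses_template else 0.0))

docs = [
    Doc("stale-runbook", date(2023, 1, 1), False, 5, False),
    Doc("fresh-runbook", date(2024, 6, 1), True, 80, True),
]
ranked = sorted(docs, key=lambda d: rank_score(d, date(2024, 7, 1)), reverse=True)
print([d.title for d in ranked])  # ['fresh-runbook', 'stale-runbook']
```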
After every incident:
- Immediate (within 24h): raw timeline and resolution steps
- Postmortem (within 5 days): root cause, contributing factors, action items
- Knowledge extraction (within 10 days):
  - New troubleshooting guide? → Create from postmortem
  - New runbook needed? → Create from resolution steps
  - Existing doc wrong? → Update with correct information
  - Architecture decision needed? → Write ADR
  - Monitoring gap? → Document what to monitor
Meeting types that MUST produce knowledge artifacts:
- Architecture review → ADR
- Process change → updated runbook
- Strategy decision → decision record
- Customer feedback pattern → product knowledge update
- Retrospective → process improvement doc
First 30 days → new hire documents:
- What was confusing during onboarding
- Questions that weren't answered by existing docs
- Things that were wrong in existing docs
- Suggestions for improvement

Template for new hire feedback:

```yaml
onboarding_feedback:
  week: "[1|2|3|4]"
  couldnt_find:
    - topic: "[What they looked for]"
      where_looked: "[Where they searched]"
      how_resolved: "[Asked someone? Found eventually? Still unclear?]"
  wrong_or_outdated:
    - doc: "[Which document]"
      issue: "[What's wrong]"
  suggestions:
    - "[Free text improvements]"
```
When someone is leaving:
1. Identify unique knowledge → what do they know that no one else does?
2. Schedule extraction sessions → 1-2 hours per major topic area
3. Record if possible → video walkthroughs of complex processes
4. Pair them → have the successor shadow for the final 2 weeks
5. Review their authored docs → are they complete? Assign new owners
6. Document tribal knowledge → "why" questions only they can answer
```yaml
freshness_policy:
  review_frequency:
    critical_operations: "quarterly"      # Deployment, incident response, security
    standard_processes: "semi-annually"   # Regular workflows
    reference_docs: "annually"            # Specs, contacts, architecture
    explanations: "annually"              # Background, history, rationale
  review_process:
    - owner_notified: "2 weeks before due date"
    - review_actions:
        - verify: "Is this still accurate? Test/confirm."
        - update: "Fix any outdated information"
        - stamp: "Update 'Last verified' date"
        - skip: "If can't review, reassign or flag"
    - escalation: "Unreviewed after 30 days → manager notified"
    - stale_threshold: "2x review period without update → flagged as stale"
```
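A minimal sketch of applying the freshness policy above: given a document's category and last-verified date, classify it as current, needs-review, or stale. The day counts approximate the policy's intervals; the function name `freshness_status` is an illustrative assumption.

```python
from datetime import date

# Review intervals in days, approximating the policy above.
REVIEW_DAYS = {
    "critical_operations": 91,   # quarterly
    "standard_processes": 182,   # semi-annually
    "reference_docs": 365,       # annually
    "explanations": 365,         # annually
}

def freshness_status(category: str, last_verified: date, today: date) -> str:
    interval = REVIEW_DAYS[category]
    age = (today - last_verified).days
    if age > 2 * interval:   # past 2x the review period → stale
        return "stale"
    if age > interval:       # review overdue
        return "needs-review"
    return "current"

today = date(2024, 7, 1)
print(freshness_status("critical_operations", date(2024, 5, 1), today))  # current
print(freshness_status("reference_docs", date(2022, 1, 1), today))       # stale
```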
```yaml
kb_health:
  date: "[YYYY-MM-DD]"
  coverage:
    total_documents: 0
    by_type:
      howto: 0
      reference: 0
      explanation: 0
      decision: 0
      troubleshooting: 0
    by_domain: {}
    gaps_identified: []
  freshness:
    current: 0        # Reviewed within policy
    needs_review: 0   # Due for review
    stale: 0          # Past review deadline
    deprecated: 0
    freshness_rate: "0%"   # current / (current + needs_review + stale)
  quality:
    peer_reviewed: "0%"
    using_templates: "0%"
    has_owner: "0%"
    has_tags: "0%"
  usage:
    searches_per_week: 0
    failed_searches: 0      # Searches with no results
    top_10_pages: []
    pages_never_accessed: 0
  contribution:
    docs_created_this_month: 0
    docs_updated_this_month: 0
    unique_contributors: 0
    contribution_rate: "0%"  # contributors / total team size
```
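The two derived rates in the dashboard above can be sketched directly from their inline formulas. The counts below are example values, and the helper names are illustrative assumptions.

```python
def freshness_rate(current: int, needs_review: int, stale: int) -> str:
    """current / (current + needs_review + stale), per the dashboard comment."""
    tracked = current + needs_review + stale
    return f"{100 * current / tracked:.0f}%" if tracked else "n/a"

def contribution_rate(unique_contributors: int, team_size: int) -> str:
    """contributors / total team size, per the dashboard comment."""
    return f"{100 * unique_contributors / team_size:.0f}%" if team_size else "n/a"

print(freshness_rate(current=120, needs_review=20, stale=10))   # 80%
print(contribution_rate(unique_contributors=9, team_size=20))   # 45%
```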
Agenda (60 min):
1. Dashboard review (10 min) → health metrics trend
2. Gap analysis (15 min) → what's missing? What questions keep being asked?
3. Stale doc triage (15 min) → update, deprecate, or reassign owners
4. Failed searches review (10 min) → what are people searching for and not finding?
5. Process improvements (10 min) → what's working, what isn't?
```yaml
automation_triggers:
  incident_resolved:
    action: "Create task: 'Write troubleshooting guide for [incident title]'"
    assignee: "Incident commander"
    due: "+10 days"
  new_hire_started:
    action: "Generate personalized onboarding reading list from KB by role"
  doc_stale:
    action: "Notify owner, CC manager if unreviewed after 14 days"
  repeated_question:
    threshold: "Same question asked 3+ times in support/Slack"
    action: "Create task: 'Document answer to [question]'"
  process_changed:
    trigger: "PR merged that changes workflow/process"
    action: "Check if related docs need updating, create task if yes"
  failed_search:
    threshold: "Same search term fails 5+ times/week"
    action: "Flag as gap, create task to write missing doc"
```
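A sketch of the `repeated_question` trigger above using exact-match counting after normalization. A real implementation would need fuzzy or semantic matching to catch rephrasings; the function names here are illustrative assumptions.

```python
from collections import Counter

THRESHOLD = 3  # "Same question asked 3+ times", per the trigger config

def normalize(question: str) -> str:
    """Lowercase, trim punctuation, collapse whitespace for exact matching."""
    return " ".join(question.lower().strip("?! ").split())

def questions_to_document(log: list[str]) -> list[str]:
    """Return normalized questions that crossed the threshold."""
    counts = Counter(normalize(q) for q in log)
    return [q for q, n in counts.items() if n >= THRESHOLD]

log = [
    "How do I rotate the API keys?",
    "how do I rotate the API keys",
    "Where are the deploy runbooks?",
    "How do I rotate the api keys?!",
]
print(questions_to_document(log))  # ['how do i rotate the api keys']
```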
```yaml
kb_chatbot:
  flow:
    1_receive_question: "User asks in designated channel"
    2_search: "Semantic search across KB"
    3_respond:
      found_match: "Return relevant doc link + summary"
      partial_match: "Return closest docs + 'Did you mean...?'"
      no_match: "Log as gap, route to human expert, create doc task"
    4_feedback: "Was this helpful? 👍/👎"
    5_improve: "Use feedback to tune search, identify doc improvements"
  sources:
    - knowledge_base_docs
    - slack_saved_answers    # Curated from Slack threads
    - incident_postmortems
    - meeting_notes_tagged_as_knowledge
```
| Mechanism | Frequency | Format | Audience |
|---|---|---|---|
| "TIL" channel | Daily | Short post (1-3 sentences + link) | All |
| Brown bag lunch | Bi-weekly | 20-min presentation + Q&A | Cross-team |
| Architecture review | Monthly | 45-min deep dive + ADR | Engineering |
| Customer insight share | Monthly | Top 5 patterns + implications | Product + CS + Sales |
| Postmortem review | Per incident | Written + optional walkthrough | Engineering + ops |
| New tool/technique demo | As needed | 15-min demo + doc link | Relevant teams |
| Quarterly knowledge review | Quarterly | Dashboard + gap analysis | Leadership |
```yaml
knowledge_map:
  engineering:
    produces: ["Architecture docs", "Runbooks", "API specs", "ADRs"]
    consumes_from:
      product: ["PRDs", "User research", "Roadmap"]
      customer_success: ["Bug patterns", "Feature requests", "Usage data"]
      sales: ["Technical requirements", "Integration needs"]
  product:
    produces: ["PRDs", "User research", "Roadmap", "Release notes"]
    consumes_from:
      engineering: ["Technical feasibility", "Architecture constraints"]
      customer_success: ["Feature requests", "Churn reasons"]
      sales: ["Deal requirements", "Competitive intel"]
  customer_success:
    produces: ["FAQ", "Troubleshooting guides", "Best practices"]
    consumes_from:
      engineering: ["Release notes", "Known issues"]
      product: ["Feature docs", "Roadmap"]
  sales:
    produces: ["Battlecards", "Competitive intel", "Use case docs"]
    consumes_from:
      product: ["Feature docs", "Roadmap", "Pricing"]
      customer_success: ["Case studies", "Success metrics"]
      engineering: ["Technical capabilities", "Integration docs"]
```
| Metric | Target | Measurement |
|---|---|---|
| Time to answer | <5 min for documented topics | Sample timing tests |
| New hire time to productivity | Reduce by 30% | First solo task date |
| Repeated questions | Decrease 50% in 6 months | Support ticket analysis |
| Doc coverage | >80% of critical processes | Audit against process list |
| Freshness rate | >85% within review policy | Dashboard metric |
| Contribution rate | >40% of team contributing monthly | Contributor count |
| Search success rate | >80% find what they need | Search analytics |
| Failed search rate | <10% of searches | Search analytics |
| Knowledge reuse | >60% of team using KB weekly | Usage analytics |
Knowledge Management ROI:

Time Saved:
- Reduced question-answering = [hours/week] × [avg hourly cost] × 52
- Faster onboarding = [weeks saved] × [new hires/year] × [weekly cost]
- Faster incident resolution = [hours saved/incident] × [incidents/year] × [hourly cost]

Risk Reduced:
- Key person dependency = [probability of departure] × [knowledge reconstruction cost]
- Compliance documentation = [audit prep hours saved] × [hourly cost]

Quality Improved:
- Fewer repeated mistakes = [error rate reduction] × [cost per error]
- Consistent processes = [variance reduction] × [rework cost]

Total Annual Value = Time Saved + Risk Reduced + Quality Improved
Investment = tool cost + time spent maintaining KB + training
ROI = (Total Annual Value − Investment) / Investment × 100
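The ROI formula above, worked with illustrative numbers. Every input value here is an assumption you would replace with your own data; only the formula itself comes from the skill.

```python
def km_roi(time_saved: float, risk_reduced: float, quality_improved: float,
           investment: float) -> float:
    """ROI as a percentage: (value - investment) / investment * 100."""
    total_value = time_saved + risk_reduced + quality_improved
    return (total_value - investment) / investment * 100

# Time saved: 4 h/week of answered questions at $80/h, 52 weeks
time_saved = 4 * 80 * 52            # 16,640
# Risk reduced: 20% departure chance x $50k reconstruction cost
risk_reduced = 0.2 * 50_000         # 10,000
# Quality improved: 10 fewer errors/year at $500 each
quality_improved = 10 * 500         # 5,000
investment = 12_000                 # tooling + KB maintenance time + training

print(round(km_roi(time_saved, risk_reduced, quality_improved, investment)))  # 164
```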
| Dimension | Weight | 0-2 (Poor) | 3-5 (Adequate) | 6-8 (Good) | 9-10 (Excellent) |
|---|---|---|---|---|---|
| Accuracy | 20% | Unverified, possibly wrong | Mostly correct | Verified, accurate | Tested, peer-reviewed |
| Completeness | 15% | Major gaps | Covers basics | Comprehensive | Edge cases included |
| Clarity | 15% | Confusing, jargon-heavy | Understandable | Clear, well-structured | A new hire gets it |
| Findability | 10% | No tags, bad title | Some tags | Good tags, clear title | Synonyms, cross-refs |
| Freshness | 15% | >12 months stale | Within annual review | Within semi-annual | Within quarterly |
| Template compliance | 10% | No structure | Partial template | Full template | Template + extras |
| Actionability | 10% | Theory only | Some steps | Clear steps | Copy-paste ready |
| Ownership | 5% | No owner | Owner assigned | Owner active | Owner + backup |

Score interpretation:
- 90-100: Exemplary → reference model for other docs
- 75-89: Good → meets standards
- 60-74: Acceptable → needs minor improvements
- 40-59: Below standard → needs significant work
- 0-39: Critical → rewrite from scratch
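A sketch of computing the weighted score from the rubric above: rate each dimension 0-10, apply the table's weights, and scale to the 0-100 interpretation bands. The dictionary keys and function name are illustrative assumptions.

```python
# Weights mirror the rubric table; they sum to 1.0.
WEIGHTS = {
    "accuracy": 0.20, "completeness": 0.15, "clarity": 0.15,
    "findability": 0.10, "freshness": 0.15, "template_compliance": 0.10,
    "actionability": 0.10, "ownership": 0.05,
}

def quality_score(ratings: dict[str, float]) -> float:
    """Weighted average of 0-10 ratings, scaled to 0-100."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS) * 10

ratings = {dim: 8 for dim in WEIGHTS}        # a solid doc: 8/10 everywhere
print(round(quality_score(ratings), 1))      # 80.0 -> "Good: meets standards"
```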
| Dimension | Weight | Metric |
|---|---|---|
| Coverage | 20% | % of critical processes documented |
| Freshness | 20% | % of docs within review policy |
| Quality | 15% | Average document quality score |
| Usage | 15% | % of team using KB weekly |
| Contribution | 15% | % of team contributing monthly |
| Search effectiveness | 15% | % of searches finding results |
- Start with a single shared doc/wiki, not a full KB platform
- Focus on: runbooks for critical processes, onboarding guide, decision log
- One person owns KB health (part-time, not full-time)
- Review quarterly, not monthly
- Default to written over verbal knowledge sharing
- Record important meetings/decisions (not all meetings)
- Async-first: every decision documented, not just discussed
- Time zone coverage: ensure docs cover "what to do when the expert is asleep"
- Prioritize onboarding docs above all else
- Implement "new hire documents what they learn" from day 1
- Assign knowledge buddies → each new person paired with a doc mentor
- Weekly new-hire cohort Q&A → captured and documented
- Map compliance requirements to documentation requirements
- Version control with audit trail (who changed what, when)
- Approval workflows for regulated content
- Retention policies aligned with regulations
- Map both organizations' knowledge structures
- Identify overlaps and gaps
- Prioritize "how do we work NOW" docs over historical
- Freeze archives of legacy systems/processes
- Don't try to migrate everything → start fresh with new structure
- Import only still-accurate, frequently-used docs
- Redirect old locations to new ones
- Set a sunset date for the old system
- "If it's not in the new KB, it doesn't exist" (after migration period)
| Command | Action |
|---|---|
| "Audit our knowledge management" | Run Phase 1 assessment, generate risk register |
| "Design our KB structure" | Create taxonomy and navigation architecture |
| "Write a runbook for [X]" | Generate using runbook template |
| "Write an ADR for [X]" | Generate architecture decision record |
| "Create a troubleshooting guide for [X]" | Generate using troubleshooting template |
| "Review KB health" | Generate health dashboard and identify gaps |
| "Plan knowledge extraction for [person]" | Generate interview guide and schedule |
| "Set up freshness tracking" | Create review schedule and notification rules |
| "Design onboarding knowledge path for [role]" | Curate reading list from KB |
| "Analyze failed searches" | Review search gaps and create tasks |
| "Generate quarterly KB report" | Full metrics dashboard with recommendations |
| "Plan KB migration from [source]" | Create migration plan with prioritization |