Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Combine the best parts of multiple skills into one. Searches ClawHub, GitHub, skills.sh, skillsmp.com and other AI skill repos. Analyzes each safely, compares features, and builds a combined 'Frankenstein' skill with the best of each. Uses skill-auditor for security scanning and sandwrap for safe analysis. Use when: (1) Multiple skills exist for same purpose, (2) Want best-of-breed combination, (3) Building a comprehensive skill from fragments.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Default: Opus (or best available thinking model). Frankenstein requires deep reasoning to:
- Compare multiple skill approaches
- Identify subtle methodology differences
- Synthesize the best parts creatively
- Catch security/quality issues others miss

Only use a smaller model if the user explicitly requests it for cost reasons; synthesis quality depends heavily on reasoning depth.

Create monster skills by combining the best parts of existing ones.
Frankenstein me an SEO audit skill
Search EVERY AI skills repository for matching skills:
1. ClawHub (primary): `clawhub search "[topic]" --registry "https://clawhub.ai"`
2. GitHub: search "[topic] AI skill" OR "[topic] claude skill" OR "[topic] agent skill"; look for SKILL.md, CLAUDE.md, or similar agent instruction files
3. skills.sh: https://skills.sh/search?q=[topic]
4. skillsmp.com (Skills Marketplace): https://skillsmp.com/search/[topic]
5. Other sources to check: Anthropic's skill examples, OpenAI GPT configurations (convert to skill format), LangChain agent templates, AutoGPT/AgentGPT skill repos

Gather all candidates before filtering. More sources = a better Frankenstein.
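The multi-source search above can be sketched as a small shell script. Only `clawhub search` and the listed URLs come from this skill's docs; the install check around the CLI, the query-builder helper, and the space-to-`+` encoding are illustrative assumptions.

```shell
#!/usr/bin/env sh
# Sketch: gather candidate skills for one topic from the sources listed above.
topic="seo audit"

# Build the three-phrase GitHub query from the search list above.
github_query() {
  printf '"%s AI skill" OR "%s claude skill" OR "%s agent skill"' "$1" "$1" "$1"
}

# Minimal URL encoding for spaces (illustrative; a real script would encode fully).
slug() { printf '%s' "$1" | tr ' ' '+'; }

# 1. ClawHub (primary) -- only if the CLI is installed.
if command -v clawhub >/dev/null 2>&1; then
  clawhub search "$topic" --registry "https://clawhub.ai"
fi

# 2-4. Web sources: print the queries/URLs for a sub-agent (or a human) to run.
echo "GitHub: $(github_query "$topic")"
echo "https://skills.sh/search?q=$(slug "$topic")"
echo "https://skillsmp.com/search/$(slug "$topic")"
```

The web searches are printed rather than fetched because their result formats differ per site and are best reviewed by the analysis step.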
Run each skill through skill-auditor and skip any with HIGH risk scores. For each skill found:
1. Install to a temp directory
2. Run a skill-auditor scan
3. Score >= 7 = SAFE (proceed); score < 7 = RISKY (skip with a warning)
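A minimal sketch of this triage rule, assuming `skill-auditor scan` prints a bare numeric score; that invocation is a hypothetical interface, and only the ">= 7 = SAFE" threshold comes from the step above.

```shell
#!/usr/bin/env sh
# Sketch: apply the SAFE/RISKY threshold from the audit step.

triage() {  # $1 = skill-auditor score (integer)
  if [ "$1" -ge 7 ]; then echo SAFE; else echo RISKY; fi
}

scan_skill() {  # $1 = path to a skill installed in a temp directory
  if command -v skill-auditor >/dev/null 2>&1; then
    score=$(skill-auditor scan "$1")   # hypothetical: assumes a bare numeric score
  else
    score=0                            # no auditor available: treat as RISKY
  fi
  verdict=$(triage "$score")
  [ "$verdict" = "RISKY" ] && echo "warning: skipping $1 (score $score)" >&2
  echo "$verdict"
}
```

Failing closed (no auditor means RISKY) matches the step's intent of never analyzing an unscanned skill.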
Analyze safe skills in sandwrap read-only mode. For each safe skill, extract:
- Core features (what it does)
- Methodology (how it approaches the problem)
- Scripts/tools (reusable code)
- Unique strengths (what makes it special)
- Weaknesses (what's missing)
Build a comparison matrix:

| Feature | skill-A | skill-B | skill-C | WINNER |
| --- | --- | --- | --- | --- |
| Feature 1 | Yes | No | Yes | A, C |
| Feature 2 | Basic | Advanced | None | B |
| Feature 3 | No | No | Yes | C |
Take the winning approach for each feature:
- Feature 1 methodology from skill-A
- Feature 2 implementation from skill-B
- Feature 3 approach from skill-C
Use skill-creator to assemble the Frankenstein skill:
- Combine winning features
- Resolve conflicts (if two approaches clash)
- Write a unified SKILL.md
- Include scripts from winners
- Document sources
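One way the assembly step could lay out its output. The directory name, the SKILL.md skeleton, and the section headings below are illustrative assumptions, since skill-creator's actual output format is not specified here; only the requirement to document sources comes from the step above.

```shell
#!/usr/bin/env sh
# Sketch: write a combined-skill skeleton that documents its sources,
# mirroring the per-feature winners from the comparison matrix.

out="frankenstein-skill"
mkdir -p "$out/scripts"   # assumed layout: SKILL.md plus a scripts/ folder

cat > "$out/SKILL.md" <<'EOF'
# Frankenstein Skill

## Sources (attribution)
- Feature 1 methodology: skill-A
- Feature 2 implementation: skill-B
- Feature 3 approach: skill-C
EOF

echo "wrote $out/SKILL.md"
```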
Run a plan → test → improve loop until 3 stable passes:

Pass 1:
1. Read the draft
2. Try to break it (find holes, contradictions, gaps)
3. Document issues
4. Fix them

Pass 2:
1. Read the improved version
2. Actively try to find MORE issues
3. Fix any found

Pass 3+: continue until you genuinely try to improve but can't find significant issues.

What to look for each pass:
- Missing features that sources had
- Contradictions between combined approaches
- Vague instructions that aren't actionable
- Token waste (verbose where concise works)
- Security gaps
- Broken references to files/scripts

Document in VETTING-LOG.md:
- Each pass number
- Issues found
- Fixes applied
- Why it is considered stable

Only proceed when:
- 3 consecutive passes find no major issues
- Minor issues are documented as known limitations
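The stopping rule above (three consecutive clean passes, with the streak reset by any major issue) can be expressed as a tiny counter. The per-pass issue count here is a hard-coded stand-in for the real review, so the sketch terminates.

```shell
#!/usr/bin/env sh
# Sketch: the "3 stable passes" stopping rule from the vetting loop.

# Update the streak of clean passes; any major issue resets it to 0.
update_streak() {  # $1 = current streak, $2 = major issues found this pass
  if [ "$2" -eq 0 ]; then echo $(( $1 + 1 )); else echo 0; fi
}

streak=0
pass=1
while [ "$streak" -lt 3 ]; do
  # Hypothetical stand-in for the real review: "finds" issues only on
  # pass 1 so this sketch runs to completion.
  if [ "$pass" -eq 1 ]; then issues=2; else issues=0; fi
  streak=$(update_streak "$streak" "$issues")
  echo "pass $pass: $issues major issue(s), streak=$streak"  # record in VETTING-LOG.md in practice
  pass=$(( pass + 1 ))
done
```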
Present the vetted skill for approval:
- Show what came from where
- Highlight the conflicts that were resolved
- Show the vetting summary
- Ask for a final OK before saving
Creates a new skill with:
- Best features from all analyzed skills
- Clear attribution (credits source skills)
- Security-scanned components only
- Unified documentation
This skill uses:
- clawhub CLI (search/install)
- skill-auditor (security scanning)
- sandwrap (safe analysis)
- skill-creator (building)
When spawning analysis sub-agents, always use Opus (or the best available thinking model) unless the user explicitly requests otherwise:

sessions_spawn(
  task: "FRANKENSTEIN ANALYSIS: [topic]...",
  model: "opus"
)

Cheaper models miss nuances between skills and produce shallow combinations.
Limitations:
- Only combines publicly available skills
- Skips skills that fail the security scan
- Cannot resolve deep architectural conflicts
- Human judgment is needed for the final synthesis
- Quality depends on the available skills