Tencent SkillHub · AI

Skeall Skill Builder

Agent Skills (SKILL.md) builder, auditor, and improver for cross-platform LLM agents. Use for "skeall", "build a skill", "create skill", "improve skill", "au...

0 Downloads
0 Stars
0 Installs
0 Score
High Signal


Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: README.md, SKILL.md, references/advanced-patterns.md, references/anti-patterns.md, references/healthcheck.md, references/scoring.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.0.0

Documentation

Primary doc: SKILL.md (39 sections)

Skeall

Create, improve, and audit Agent Skills following the Agent Skills open standard. This skill encodes lessons from real-world skill development and cross-platform compatibility testing.

Quick start

```
/skeall --create                 # Interview, then scaffold new skill
/skeall --improve <path>         # Analyze and improve existing skill
/skeall --scan <path>            # Audit only, no changes (report)
/skeall --scan .                 # Audit skill in current directory
/skeall --scan-all               # Batch scan all skills in ~/.claude/skills/
/skeall --scan-all <dir>         # Batch scan all skills in custom directory
/skeall --healthcheck <path>     # Runtime check single skill (orphans, deps, env, URLs)
/skeall --healthcheck-all        # Runtime check all skills in ~/.openclaw/skills/
/skeall --healthcheck-all <dir>  # Runtime check all skills in custom directory
```

Mode 1: Create

Process

1. Interview the user (ask questions 1-4 always, then 5-6 if the user has not already specified complexity or distribution scope):
   1. What does this skill do? (one sentence)
   2. What category? Reference / Task / MCP Enhancement / Hybrid. See references/advanced-patterns.md
   3. What triggers should activate it? (keywords users would type)
   4. Does it accept arguments? (e.g., file path, topic -- use $ARGUMENTS or $ARGUMENTS[N] in body)
   5. How complex is it? (single file vs references/ needed)
   6. Will this skill be shared? (personal / project / public) -- affects README, license, metadata
2. Generate the skill structure:

   ```
   {skill-name}/
   ├── SKILL.md          # Core instructions (always loaded)
   ├── references/       # On-demand detail files
   │   ├── {topic-1}.md
   │   └── {topic-2}.md
   └── README.md         # GitHub-facing (optional)
   ```

3. Write SKILL.md following these rules:
   - YAML frontmatter with name and description (see Frontmatter section)
   - Body under 500 lines, under 5000 tokens
   - Instruction-based framing, not persona-based
   - Progressive disclosure: core in SKILL.md, details in references/
4. Show the generated SKILL.md to the user for review.
5. Run --scan on the generated skill. If any HIGH issues are found, fix them before delivering.
6. Next step: "Optimize with reprompter?" (optional, see Reprompter section). Then suggest installing the skill.

Mode 2: Improve

Process

1. Read SKILL.md first. Read reference files only if the scan identifies issues requiring them (broken links, routing table mismatches).
2. Run the scan checklist (see Mode 3).
3. For each issue found, propose a specific before/after edit.
4. Group edits by priority: HIGH first, then MEDIUM, then LOW.
5. Ask the user: "Fix all? Review one by one? Or just the HIGHs?" (recommended: fix all HIGHs automatically, review MEDIUMs)
6. Apply approved edits.
7. Re-scan once. If new issues appear, report them but do not enter an infinite fix loop.
8. Next step: "Run --scan to verify?" or "Commit changes?"

Common improvements

| Problem | Fix |
|---|---|
| Body over 5000 tokens | Move detail sections to references/ |
| Redundant content | Single source of truth, reference elsewhere |
| Persona-based framing | Switch to instruction-based framing |
| Missing trigger phrases | Add keywords to description field |
| Platform-specific patterns | Replace with universal formatting |
| No progressive disclosure | Add routing table to reference files |

Mode 3: Scan

Process

Read the skill's SKILL.md and directory structure. Check every item in the checklist below. Output a severity-tagged report.

Report format

```
## Skill Audit: {skill-name}
Score: X.X/10

STRUCTURE
[PASS] S1 -- SKILL.md exists at root
[FAIL] S3 HIGH -- name does not match directory name
[WARN] S5 MEDIUM -- No references/ directory

FRONTMATTER
[PASS] F2 -- Trigger phrases present
[FAIL] F1 HIGH -- description over 1024 characters

CONTENT
[WARN] C5 MEDIUM -- Persona-based framing ("You are an expert")
[FAIL] C3 HIGH -- Same content repeated 3 times (lines 45, 120, 280)

LLM-FRIENDLINESS
[WARN] L4 MEDIUM -- Unicode arrows instead of markdown tables
[PASS] L3 -- No emoji markers in headings

SECURITY
[PASS] SEC1 -- No XML angle brackets in frontmatter
[PASS] SEC3 -- No hardcoded secrets

CROSS-PLATFORM
[PASS] X1 -- No {baseDir} placeholders
[WARN] X4 LOW -- No multi-platform install instructions in README

Total: 3 HIGH | 4 MEDIUM | 1 LOW
```

Next step after scan: "Want me to fix these? Run /skeall --improve <path>"

Error handling

| Input | Response |
|---|---|
| No SKILL.md found at path | "No skill found at {path}. Did you mean --create?" |
| Empty directory for --scan-all | "No skills found in {dir}. Skills must have a SKILL.md file." |
| Invalid YAML frontmatter | Report the parse error, suggest fixing frontmatter first |
| --improve on non-skill file | "Not a valid skill (no YAML frontmatter). Try --create instead." |
| --improve on a skill scoring 10/10 | "Scan found 0 issues (score 10.0/10). No changes needed. Consider running trigger and functional tests." |

Frontmatter (required)

```yaml
---
name: my-skill-name
description: What this skill does and when to use it. Include trigger phrases.
---
```

name rules:
- Must match the parent directory name
- Lowercase alphanumeric with hyphens only (unicode lowercase allowed)
- 1-64 characters, no leading/trailing/consecutive hyphens
- No spaces, no special characters, no reserved words ("anthropic", "claude")
- Recommended: gerund form (processing-pdfs, testing-code) or descriptive noun (pdf-processor)

description rules:
- Explain WHAT it does AND WHEN to use it
- Write in third person ("Processes files", not "I can process" or "You can use")
- Include trigger phrases users would actually type
- Put the most important keyword first (platforms weight first words)
- Spec limit: 1024 characters. Recommended: under 300 for best matching
- Use noun-phrase style ("Guide for X"), not persona style ("Expert in X")
- No XML angle brackets (<, >) in any frontmatter value (injection risk)
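
The name rules above translate directly into a small validator. This is an illustrative sketch, not part of the skill itself; it checks ASCII names only, while the spec also allows unicode lowercase:

```python
import re

# Reserved words the spec forbids in skill names (per the rules above).
RESERVED = {"anthropic", "claude"}

def valid_skill_name(name: str) -> bool:
    """Sketch of the SKILL.md name rules: lowercase alphanumeric segments
    joined by single hyphens, 1-64 chars, no reserved words."""
    if not 1 <= len(name) <= 64:
        return False
    # No leading/trailing/consecutive hyphens, lowercase alphanumeric only.
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        return False
    # Reject names containing a reserved word as a segment.
    return not any(seg in RESERVED for seg in name.split("-"))
```

For example, `valid_skill_name("processing-pdfs")` passes, while `"Claude-Helper"`, `"a--b"`, and `"-leading"` all fail.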

Optional frontmatter fields

These are silently ignored by platforms that do not support them:

```yaml
license: MIT                    # For distributed skills
compatibility: "Node.js 18+"    # Environment requirements (max 500 chars)
metadata:                       # Arbitrary key-value (author, version)
  author: your-name
  version: 1.0.0
allowed-tools: "Bash Read"      # Experimental: space-delimited tool list
user-invocable: true            # Show in /slash menu (false = hidden but still callable)
disable-model-invocation: true  # Block Claude from auto-loading this skill
argument-hint: "<file-path>"    # Hint shown in /skill autocomplete
model: opus                     # Override model for this skill
context: fork                   # Run in isolated subagent
agent: general-purpose          # Subagent type: general-purpose, Explore, Plan, or custom
hooks:                          # Skill-scoped lifecycle hooks
  PostToolCall: "validate.sh"
```

Directory structure

```
skill-name/
├── SKILL.md       # REQUIRED -- core instructions
├── references/    # OPTIONAL -- on-demand detail files
├── scripts/       # OPTIONAL -- executable scripts
├── assets/        # OPTIONAL -- static assets (images, etc.)
└── README.md      # OPTIONAL -- GitHub-facing docs
```

Token budget

| Level | Content | Budget |
|---|---|---|
| Metadata (YAML frontmatter) | name + description | ~100 tokens |
| Instructions (SKILL.md body) | Always loaded by LLM | < 5000 tokens |
| References (each file) | Loaded on demand | ~2000-3000 tokens each |

Estimation: ~1.5 tokens per word for mixed code+prose markdown.

Progressive disclosure: SKILL.md body should handle ~70% of user requests. Reference files handle the remaining 30% (detailed workflows, complete examples, edge cases).

Line limits

| Guideline | Limit |
|---|---|
| SKILL.md body | Under 500 lines (under 300 for complex skills with many references) |
| Reference files | No hard limit, but keep each under 700 lines. Add TOC at top if over 100 lines |

Structure checks

| ID | Severity | Check |
|---|---|---|
| S1 | HIGH | SKILL.md exists at skill root |
| S2 | HIGH | YAML frontmatter present with --- delimiters |
| S3 | HIGH | name field present and valid (lowercase, hyphens, 1-64 chars, no consecutive hyphens) |
| S4 | HIGH | description field present |
| S5 | MEDIUM | References in references/ not loose at root |
| S6 | LOW | README.md present for GitHub-hosted skills |
| S7 | LOW | No unnecessary files (node_modules, .DS_Store, etc.) |
| S8 | HIGH | name field matches parent directory name |

Frontmatter checks

| ID | Severity | Check |
|---|---|---|
| F1 | HIGH | Description under 1024 characters (spec limit) |
| F1b | LOW | Description under 300 characters (recommended for matching) |
| F2 | HIGH | Description includes trigger phrases |
| F3 | MEDIUM | Description starts with noun phrase, not "Expert in" |
| F4 | MEDIUM | Name 1-64 characters, no leading/trailing/consecutive hyphens |
| F5 | LOW | No platform-specific fields (keeps universal compatibility) |

Content checks

| ID | Severity | Check |
|---|---|---|
| C1 | HIGH | Body under 500 lines |
| C2 | HIGH | Estimated tokens under 5000 |
| C3 | HIGH | No content repeated in SKILL.md body (controlled repetition across reference files is acceptable) |
| C4 | HIGH | Code examples use correct, verified patterns |
| C5 | MEDIUM | Instruction-based framing (not "You are an expert") |
| C6 | MEDIUM | Has routing table to reference files (if references/ exists) |
| C7 | MEDIUM | Troubleshooting section present (for skills with code blocks or CLI commands) |
| C8 | LOW | No deprecated content at the top (wastes prime token space) |
| C9 | MEDIUM | Routing table completeness: if references/ exists, SKILL.md lists ALL files in references/ |
| C10 | MEDIUM | Internal count consistency: claimed counts ("34 patterns", "8 phases") match actual content |
| C11 | MEDIUM | No stale references: documented APIs, functions, model names exist in actual source |

LLM-friendliness checks

| ID | Severity | Check |
|---|---|---|
| L1 | HIGH | Tables for structured data (not bullet lists with arrows) |
| L2 | HIGH | Imperative instructions ("Do X", not "You should consider X") |
| L3 | MEDIUM | No emoji in headings or structural markers (frontmatter metadata values are data, not markers) |
| L4 | MEDIUM | No Unicode arrows or special characters for data flow |
| L5 | MEDIUM | Consistent heading hierarchy (no skipped levels). Ignore headings inside fenced code blocks |
| L6 | MEDIUM | Code blocks have language tags |
| L7 | LOW | Sentence case headings (not Title Case) |
| L8 | LOW | No nested blockquotes (some LLMs parse poorly) |

Security checks

| ID | Severity | Check |
|---|---|---|
| SEC1 | HIGH | No XML angle brackets (<, >) in frontmatter values |
| SEC2 | HIGH | Name does not contain reserved words ("anthropic", "claude") |
| SEC3 | HIGH | No hardcoded API keys, tokens, or secrets in any skill file |
| SEC4 | MEDIUM | Scripts include error handling (not bare commands) |
| SEC5 | HIGH | No credential patterns (Bearer eyJ, sk-/pk- prefixes, api_key=/token= + long strings). Ignore $ENV_VAR refs and YOUR_KEY_HERE placeholders |
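
A SEC5-style credential sweep could look like the sketch below. The regexes here are illustrative assumptions matching the patterns named in the check, not the skill's actual implementation:

```python
import re

# Illustrative credential patterns for a SEC5-style check (assumptions,
# not the skill's actual regex set).
CREDENTIAL_PATTERNS = [
    re.compile(r"Bearer\s+eyJ[A-Za-z0-9_-]{10,}"),               # JWT-style bearer tokens
    re.compile(r"\b[sp]k-[A-Za-z0-9]{20,}"),                     # sk-/pk- prefixed keys
    re.compile(r"(api_key|token)\s*=\s*['\"]?[A-Za-z0-9_-]{20,}"),
]
# Skip env-var references and obvious placeholders, per the check's exceptions.
ALLOWED = re.compile(r"\$[A-Z_]+|YOUR_KEY_HERE")

def find_credentials(text: str) -> list[str]:
    """Return lines that look like hardcoded credentials."""
    hits = []
    for line in text.splitlines():
        if ALLOWED.search(line):
            continue
        if any(pat.search(line) for pat in CREDENTIAL_PATTERNS):
            hits.append(line.strip())
    return hits
```

A line like `api_key = $MY_API_KEY` is ignored, while a literal 20+ character value after `api_key =` is flagged.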

Cross-platform checks

| ID | Severity | Check |
|---|---|---|
| X1 | HIGH | No {baseDir} placeholders (breaks non-OpenClaw platforms) |
| X2 | MEDIUM | Relative paths from SKILL.md to references/ |
| X3 | MEDIUM | Internal links use standard markdown [text](path) |
| X4 | LOW | README has multi-platform install paths |

Runtime checks (healthcheck mode only)

| ID | Severity | Check |
|---|---|---|
| R1 | HIGH | Orphan skill: not referenced in any config or skill registry |
| R2 | HIGH | Duplicate name: same name field found in 2+ skill directories |
| R3 | HIGH | Trigger collision: description phrases 80%+ overlap with another skill |
| R4 | HIGH | Broken dependency: file referenced in SKILL.md does not exist |
| R5 | MEDIUM | Stale endpoint: URL in curl command returns 404 or times out |
| R6 | MEDIUM | Missing env var: $VAR reference found but not set in environment |
| R7 | LOW | Token cost: estimated tokens loaded per session |
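
R3's "80%+ overlap" could be approximated with a word-set overlap measure. The skill does not specify its exact algorithm (see references/healthcheck.md), so this is a hypothetical sketch:

```python
def description_overlap(a: str, b: str) -> float:
    """Fraction of the smaller description's word set shared with the other."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / min(len(wa), len(wb))

def trigger_collision(a: str, b: str, threshold: float = 0.8) -> bool:
    """R3-style check: flag descriptions with 80%+ overlapping words."""
    return description_overlap(a, b) >= threshold
```

Normalizing by the smaller set means a short description fully contained in a longer one still registers as a collision; a symmetric Jaccard index would be a stricter alternative.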

LLM-friendliness patterns

These patterns come from real cross-platform testing. Apply them when creating or improving skills.

Do

- Tables over prose for structured data (parameters, options, comparisons)
- Single source of truth for any concept explained more than once
- Instruction-based framing: "This skill provides instructions for X. Follow these patterns exactly."
- Imperative verbs: "Call X after Y", "Use Z for W"
- Compact routing table at the top pointing to reference files
- Parameter comments inline in code blocks: providerAddress, // 1st: wallet address
- Copyable progress checklists for multi-step workflows (LLM tracks completion)
- Validation feedback loops for quality-sensitive output (generate, score, retry if needed)
- Consistent freedom level per section -- do not mix exact scripts with vague guidance. See references/advanced-patterns.md

Do not

- Persona-based framing: "You are an expert in..." (Claude-leaning; other LLMs respond better to instructions)
- Emoji markers in headings or structural elements (token-expensive, parsed inconsistently). Emoji in frontmatter metadata values is data and acceptable
- Unicode arrows (→, ←) for data flow -- use tables or plain prose
- Blockquote warnings at top of SKILL.md (wastes prime token space, primes distrust)
- "When Users Ask" checklists with 10+ items (bury critical rules; use tables instead)
- Synonym cycling for the same concept (confuses LLMs about whether it's the same thing)
- Repeated content (wastes tokens, risks contradictions if copies drift)
- Assuming exclusive activation (other skills may load simultaneously -- declare dependencies explicitly)

Description field optimization

Good description pattern:

{Product/Tool name} guide for {primary use case}. Covers {feature list}. Use this skill for {trigger phrases separated by commas}.

Example:

```yaml
description: 0G Compute Network guide for decentralized AI inference and fine-tuning. Covers chatbots, image generation, speech-to-text, SDK integration, CLI commands. Use this skill for any 0G compute, 0G AI, or decentralized GPU question.
```

Universal format (works everywhere)

Only name and description in frontmatter. Standard markdown body. Relative paths. No platform-specific syntax.

Platform discovery paths

| Platform | User-wide | Project |
|---|---|---|
| Claude Code | ~/.claude/skills/{name}/ | .claude/skills/{name}/ |
| OpenAI Codex | ~/.agents/skills/{name}/ | .agents/skills/{name}/ |
| OpenClaw | ~/.openclaw/skills/{name}/ | .openclaw/skills/{name}/ |
| Cursor | Standard SKILL.md discovery | Project skills dir |
| Gemini CLI | Standard SKILL.md discovery | Project skills dir |

Codex-specific extensions

OpenAI Codex adds an optional openai.yaml file alongside SKILL.md for platform metadata (interface, policy, dependencies). SKILL.md itself stays cross-platform. See references/advanced-patterns.md for details.

Things that break cross-platform

| Pattern | Problem | Fix |
|---|---|---|
| {baseDir} placeholder | Only OpenClaw resolves it | Use relative paths |
| Platform-specific instructions | Confuse other LLMs | Keep instructions generic |
| Hardcoded paths | Break on other OS/platforms | Use relative from SKILL.md |

Token estimation

Estimate: wc -w SKILL.md × 1.5 (prose) or × 1.7 (code-heavy files).
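
The same heuristic as a few lines of code (the multipliers are the document's own estimates; this helper is illustrative, not part of the skill):

```python
def estimate_tokens(text: str, code_heavy: bool = False) -> int:
    """Approximate token count: word count x 1.5 (prose) or x 1.7 (code-heavy)."""
    words = len(text.split())  # roughly what `wc -w` counts
    return round(words * (1.7 if code_heavy else 1.5))
```

Run it on the SKILL.md body to check the C2 budget (under 5000 tokens) before shipping.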

Budget allocation guide

| Skill complexity | SKILL.md target | References needed? |
|---|---|---|
| Simple (one topic, few commands) | 100-200 lines / ~1500 tokens | No |
| Medium (multiple features, some code) | 200-350 lines / ~3000 tokens | 1-2 files |
| Complex (multi-domain, many patterns) | 300-450 lines / ~4500 tokens | 3-5 files |

Severity reference

| Severity | Meaning | Action |
|---|---|---|
| HIGH | Breaks spec compliance or causes LLM confusion | Must fix |
| MEDIUM | Reduces quality or cross-platform compatibility | Should fix |
| LOW | Minor improvement opportunity | Fix if time permits |

Mode 4: Batch scan (scan-all)

Scan every skill in a directory at once. Useful for auditing your entire skill collection.

Process

1. List all subdirectories containing SKILL.md in the target path (default: ~/.claude/skills/).
2. Run Mode 3 (scan) on each skill.
3. Output each skill's score as you complete it.
4. Output a summary table sorted by score ascending (worst first).
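
Step 1 of the process above can be sketched with pathlib. The default directory and the SKILL.md-at-root rule come from this document; the helper itself is an illustration:

```python
from pathlib import Path

def find_skills(root: str = "~/.claude/skills") -> list[Path]:
    """Return subdirectories of `root` that contain a SKILL.md at their root."""
    base = Path(root).expanduser()
    if not base.is_dir():
        return []
    return sorted(d for d in base.iterdir()
                  if d.is_dir() and (d / "SKILL.md").is_file())
```

Directories without a SKILL.md are skipped, which matches the troubleshooting note that batch scan only sees skills with SKILL.md at the directory root.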

Report format

```
## Batch Skill Audit

| Skill              | Score | HIGH | MEDIUM | LOW | Status     |
|--------------------|-------|------|--------|-----|------------|
| seo-optimizer      | 5/10  | 3    | 2      | 1   | NEEDS WORK |
| reprompter         | 6/10  | 2    | 3      | 0   | NEEDS WORK |
| blogger            | 7/10  | 1    | 1      | 2   | NEEDS WORK |
| humanizer-enhanced | 8/10  | 0    | 2      | 1   | PASS       |

Total: 4 skills scanned
PASS: 1 | NEEDS WORK: 3

Top issues across all skills:
1. [HIGH] C2 reprompter: Body exceeds 5000 tokens (est. 8,200)
2. [HIGH] C3 seo-optimizer: Content repeated 4 times
3. [HIGH] C5 reprompter: Persona-based framing
```

PASS threshold: Score 7+ with zero HIGH issues.

Next step: "Start with the lowest-scoring skill. Run /skeall --improve <path> on it."

Mode 5: Health check (runtime audit)

Checks whether a skill actually works at runtime -- beyond what static scan can catch. Run static scan (Mode 3) first and fix HIGH issues before health check.

Process

1. Run R1-R7 checks against the target skill.
2. For --healthcheck-all: cross-check all skills for duplicates (R2) and trigger collisions (R3).
3. Output a severity-tagged report with sections: RUNTIME, DUPLICATES, TRIGGER COLLISIONS.
4. Labels: [FAIL] for HIGH issues (must fix), [WARN] for MEDIUM (runtime risk), [INFO] for LOW.
5. For detection algorithms, report format examples, and batch output format, see references/healthcheck.md.

Scoring methodology

Formula:

```
Score = max(0, 10 - (HIGHs x 1.5) - min(MEDIUMs x 0.5, 3) - min(LOWs x 0.2, 1))
```

PASS threshold: Score 7+ AND zero HIGH issues. For detailed examples, see references/scoring.md.
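
The formula and PASS rule translate directly into code; a straightforward sketch of the stated methodology:

```python
def skill_score(highs: int, mediums: int, lows: int) -> float:
    """Score = max(0, 10 - HIGHs*1.5 - min(MEDIUMs*0.5, 3) - min(LOWs*0.2, 1))"""
    return max(0.0, 10 - highs * 1.5 - min(mediums * 0.5, 3) - min(lows * 0.2, 1))

def passes(highs: int, mediums: int, lows: int) -> bool:
    """PASS threshold: score 7+ AND zero HIGH issues."""
    return highs == 0 and skill_score(highs, mediums, lows) >= 7
```

Note that MEDIUM and LOW deductions are capped (at 3 and 1 points), so only HIGH issues can drive a score below 6 on their own, and any HIGH issue blocks a PASS regardless of score.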

Troubleshooting

| Issue | Fix |
|---|---|
| Token estimate seems wrong | Use wc -w and multiply by 1.5 (prose) or 1.7 (code-heavy) |
| Scan reports FAIL but skill works fine | HIGHs indicate spec/LLM issues, not runtime bugs. Fix them anyway. |
| Batch scan misses a skill | Skill directory must contain SKILL.md at root |
| Two fixes contradict each other | Flag the conflict, ask user to choose (e.g., "shorten file" vs "add section") |
| Score 7+ but still NEEDS WORK | Check for HIGH issues. Any HIGH = NEEDS WORK regardless of score |

References

For detailed checklists and examples, see:

- Anti-patterns with before/after examples
- Runtime health check algorithms and real examples
- Complete SKILL.md template
- Scoring methodology details
- Testing patterns and examples
- Advanced patterns: categories, freedom levels, distribution, MCP, workflows

Testing your skill: After create or improve, test trigger activation (3-5 keyword variants), functional output, and negative activation (unrelated queries should stay quiet). See references/testing.md.

MCP integration: Use fully qualified tool names (mcp__server__tool_name). Document required MCP servers and provide fallbacks. See references/advanced-patterns.md.

Reprompter integration (optional): After the --create interview, say "reprompter optimize" to score description variants and validate code examples. Works standalone if reprompter is not installed.

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package (6 docs):
  • SKILL.md (primary doc)
  • README.md
  • references/advanced-patterns.md
  • references/anti-patterns.md
  • references/healthcheck.md
  • references/scoring.md