Tencent SkillHub · AI

Hugging Face

Discover, evaluate, and run Hugging Face models, datasets, and spaces with license checks, benchmark prompts, and reproducible integration plans.

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md, discovery.md, evaluation.md, inference.md, memory-template.md, setup.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (17 sections)

Setup

On first use, read setup.md for integration guidelines and local memory initialization.

When to Use

Use this skill when the user needs to find the right Hugging Face model, dataset, or Space for a concrete task and move from browsing to reliable execution. The agent handles discovery, filtering, license checks, quick benchmarking, and integration-ready inference plans.

Architecture

Memory and reusable artifacts live in ~/hugging-face/. See memory-template.md for structure and status fields.

~/hugging-face/
|- memory.md       # Stable context, priorities, and defaults
|- shortlists.md   # Candidate models and datasets by use case
|- evaluations.md  # Benchmark runs, winners, and caveats
|- endpoints.md    # Approved endpoints and auth notes
`- exports/        # Saved outputs and comparison snapshots
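The memory layout above can be bootstrapped with a short script. This is a minimal sketch, not part of the package: the `init_memory` function name is illustrative, and setup.md remains the authoritative source for initialization.

```python
from pathlib import Path

# Files named in the memory layout; exports/ is a subdirectory.
MEMORY_FILES = ["memory.md", "shortlists.md", "evaluations.md", "endpoints.md"]

def init_memory(base: Path) -> Path:
    """Create the ~/hugging-face/ memory layout under `base` if missing."""
    base.mkdir(parents=True, exist_ok=True)
    (base / "exports").mkdir(exist_ok=True)
    for name in MEMORY_FILES:
        f = base / name
        if not f.exists():
            # Stub header only; real structure comes from memory-template.md.
            f.write_text(f"# {name}\n")
    return base
```

Passing the base path explicitly (rather than hardcoding the home directory) keeps the sketch testable and lets an agent sandbox the memory location.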

Quick Reference

Load only one focused file at a time to keep context small and decisions explicit.

  • Setup process: setup.md
  • Memory template: memory-template.md
  • Model and dataset discovery: discovery.md
  • Inference execution patterns: inference.md
  • Evaluation rubric and scoring: evaluation.md
  • Common failures and recovery: troubleshooting.md

1. Lock Objective and Constraints First

Before selecting any artifact, confirm task type, latency budget, cost boundary, and deployment target. Use this minimum scope packet:

  • Task type: chat, generation, embedding, classification, vision, or speech
  • Quality priority: best quality, best speed, or balanced
  • Runtime constraints: CPU only, specific GPU class, or hosted endpoint
  • Compliance constraints: license, region, or private data limits
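One way to enforce the scope packet is to capture it in a small data structure so incomplete briefs are caught before discovery begins. The `ScopePacket` name and field spellings below are illustrative assumptions, not part of the skill:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopePacket:
    """Hypothetical container for the minimum scope packet."""
    task_type: str         # chat, generation, embedding, classification, vision, speech
    quality_priority: str  # best_quality, best_speed, or balanced
    runtime: str           # cpu_only, a GPU class, or hosted
    compliance: tuple      # e.g. ("apache-2.0-only", "eu-region")

    def is_complete(self) -> bool:
        # Compliance may legitimately be empty; the other three may not.
        return all([self.task_type, self.quality_priority, self.runtime])
```

An agent can refuse to start discovery until `is_complete()` returns True, making the "lock constraints first" rule mechanical rather than advisory.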

2. Separate Discovery from Execution

Do not run inference on the first candidate found. First create a shortlist of at least three candidates, then execute only on finalists that pass compatibility and license checks.
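The shortlist-before-execution rule can be made explicit with a guard like the following sketch. The `build_shortlist` helper is an assumption for illustration; candidate dicts stand in for whatever metadata the discovery step returns:

```python
def build_shortlist(candidates, minimum=3):
    """Deduplicate candidates by id and refuse to proceed below `minimum`."""
    seen, shortlist = set(), []
    for c in candidates:
        if c["id"] not in seen:
            seen.add(c["id"])
            shortlist.append(c)
    if len(shortlist) < minimum:
        raise ValueError(f"need at least {minimum} candidates, have {len(shortlist)}")
    return shortlist
```

Raising instead of silently proceeding forces the agent back into discovery when the candidate pool is too thin to compare.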

3. Validate License and Access Before Recommendation

For every candidate, verify license, gated access status, model size, and framework compatibility. If any of these are unknown, mark the candidate as provisional and avoid production recommendation.
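The provisional-marking rule reduces to a null check over the required fields. This sketch assumes a flat candidate dict; the field names are illustrative, not a documented schema:

```python
# Fields that must be known before a production recommendation.
REQUIRED_FIELDS = ("license", "gated", "size_gb", "framework")

def classify_candidate(card: dict) -> str:
    """Mark a candidate 'provisional' if any required field is unknown."""
    if any(card.get(field) is None for field in REQUIRED_FIELDS):
        return "provisional"
    return "eligible"
```

Note the `is None` comparison: `gated=False` is a known (and good) value, so a plain truthiness check would wrongly demote ungated models.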

4. Benchmark with a Deterministic Mini Suite

Use the same prompt set and output checks across candidates so results are comparable. Minimum benchmark set:

  • One typical request
  • One edge-case request
  • One failure-prone request
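The mini suite can be run as a fixed mapping from case name to prompt and check, applied identically to every candidate. The harness below is a sketch; `generate` stands in for whatever callable wraps a candidate model:

```python
def run_mini_suite(generate, suite):
    """Run the same prompt set through `generate` and apply each output check.

    `suite` maps a case name to a (prompt, check) pair, where `check`
    takes the output and returns a truthy value on pass.
    """
    results = {}
    for name, (prompt, check) in suite.items():
        output = generate(prompt)
        results[name] = {"output": output, "passed": bool(check(output))}
    return results
```

Because the suite object is shared across candidates, differences in `passed` counts reflect the models, not the prompts.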

5. Minimize External Data

Send only what is required for the selected endpoint. Never send credentials, local paths, or unrelated private context in request payloads.
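A simple payload scrubber can back this rule up before any request leaves the machine. The patterns below are illustrative assumptions (a token-like prefix and Unix/Windows path shapes), not an exhaustive redaction policy:

```python
import re

# Illustrative patterns only: token-like strings and local filesystem paths.
SECRET_PATTERN = re.compile(r"(hf_[A-Za-z0-9]+|sk-[A-Za-z0-9]+)")
PATH_PATTERN = re.compile(r"(/home/\S+|[A-Z]:\\\S+)")

def sanitize_payload(text: str) -> str:
    """Strip token-like strings and local paths before sending a payload."""
    text = SECRET_PATTERN.sub("[REDACTED]", text)
    return PATH_PATTERN.sub("[PATH]", text)
```

Redaction like this is a last line of defense; the primary control is still composing payloads from only the fields the endpoint needs.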

6. Use a Fallback Ladder

If the preferred model fails, apply ordered fallback:

  1. Retry the same endpoint with a smaller payload.
  2. Switch to a compatible backup model.
  3. Switch to a local-only workflow if available.
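The ladder can be expressed as an ordered list of attempts, each tried in turn. This is a sketch under the assumption that each rung is a callable taking the payload; `shrink` stands in for whatever payload-reduction strategy applies:

```python
def call_with_fallback(primary, backup, local, payload, shrink):
    """Ordered fallback: primary, primary with a smaller payload, backup, local."""
    attempts = (
        lambda: primary(payload),
        lambda: primary(shrink(payload)),  # rung 1: retry with smaller payload
        lambda: backup(payload),           # rung 2: compatible backup model
        lambda: local(payload),            # rung 3: local-only workflow
    )
    for attempt in attempts:
        try:
            return attempt()
        except Exception:
            continue
    raise RuntimeError("all fallback steps failed")
```

Keeping the rungs in one tuple makes the order auditable and easy to log alongside the run record.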

7. Keep Runs Reproducible

Log selected model id, endpoint, key parameters, and evaluation result in local memory so future runs are consistent and auditable.
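One lightweight way to keep runs auditable is appending JSON-lines records to the local evaluation log. The `log_run` helper and record fields are illustrative; the skill's own memory-template.md defines the real structure:

```python
import datetime
import json

def log_run(memory_file, model_id, endpoint, params, result):
    """Append one reproducible run record (as a JSON line) to the log file."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "endpoint": endpoint,
        "params": params,
        "result": result,
    }
    with open(memory_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Append-only JSON lines survive partial writes better than rewriting a single document, and each line can be replayed to reproduce a run.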

Common Traps

  • Picking the highest download count as the only criterion -> often misses license, latency, or domain fit.
  • Ignoring gated model requirements -> integration fails at runtime due to access restrictions.
  • Comparing models with different prompts -> quality conclusions become unreliable.
  • Sending full user context to inference endpoints -> unnecessary privacy exposure.
  • Skipping fallback design -> workflows fail hard on transient endpoint errors.

External Endpoints

Use discovery endpoints before inference so candidate selection remains explainable and reproducible.

  • https://huggingface.co/api/models (sends: search terms, filter parameters): discover model candidates
  • https://huggingface.co/api/datasets (sends: search terms, filter parameters): discover dataset candidates
  • https://huggingface.co/api/spaces (sends: search terms, filter parameters): discover runnable Spaces
  • https://api-inference.huggingface.co/models/{model_id} (sends: prompt or task input payload, selected model id, auth token): run hosted inference

No other data is sent externally.
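Building the discovery URL separately from executing it keeps the query inspectable (and loggable) before anything is sent. This sketch only constructs the request; the `discovery_url` helper name is an assumption, and filter parameter names should be checked against the Hugging Face Hub API docs:

```python
from urllib.parse import urlencode

HF_API = "https://huggingface.co/api"

def discovery_url(kind: str, search: str, **filters) -> str:
    """Build a discovery query for models, datasets, or spaces (no request sent)."""
    if kind not in ("models", "datasets", "spaces"):
        raise ValueError(f"unknown kind: {kind}")
    params = {"search": search, **filters}
    return f"{HF_API}/{kind}?{urlencode(params)}"
```

Because the URL is a plain string, it can be written into shortlists.md next to each candidate, which keeps the selection step reproducible.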

Security & Privacy

Data that leaves your machine:

  • Search terms and filter inputs sent to Hugging Face discovery APIs.
  • Inference payloads sent to the Hugging Face Inference API when execution is requested.

Data that stays local:

  • Preferences, shortlists, evaluation notes, and endpoint decisions in ~/hugging-face/.

This skill does NOT:

  • Exfiltrate local files by default.
  • Send undeclared network requests.
  • Store raw secrets in local notes.
  • Modify its own skill definition file.

Trust

When you use this skill, selected request data is sent to Hugging Face services. Install it only if you trust Hugging Face with the inputs you choose to process.

Related Skills

Install with clawhub install <slug> if the user confirms:

  • ai: general AI strategy and model-selection framing
  • api: API-first integration patterns and HTTP debugging
  • data-analysis: dataset inspection and quality interpretation
  • data: structured data workflows and extraction patterns
  • code: implementation support for scripts and adapters

Feedback

If useful: clawhub star hugging-face
Stay updated: clawhub sync

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
6 Docs
  • SKILL.md (primary doc)
  • discovery.md
  • evaluation.md
  • inference.md
  • memory-template.md
  • setup.md