Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Discover, evaluate, and run Hugging Face models, datasets, and spaces with license checks, benchmark prompts, and reproducible integration plans.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md for integration guidelines and local memory initialization.
User needs to find the right Hugging Face model, dataset, or Space for a concrete task and move from browsing to reliable execution. Agent handles discovery, filtering, license checks, quick benchmarking, and integration-ready inference plans.
Memory and reusable artifacts live in ~/hugging-face/. See memory-template.md for structure and status fields.
~/hugging-face/
|- memory.md       # Stable context, priorities, and defaults
|- shortlists.md   # Candidate models and datasets by use case
|- evaluations.md  # Benchmark runs, winners, and caveats
|- endpoints.md    # Approved endpoints and auth notes
`- exports/        # Saved outputs and comparison snapshots
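A minimal sketch of initializing this layout on first use, assuming the file names above and that setup.md does not prescribe different stub contents:

```python
from pathlib import Path

MEMORY_ROOT = Path.home() / "hugging-face"
MEMORY_FILES = ("memory.md", "shortlists.md", "evaluations.md", "endpoints.md")

def init_memory() -> None:
    """Create the memory directory, exports/ subfolder, and stub files once."""
    (MEMORY_ROOT / "exports").mkdir(parents=True, exist_ok=True)
    for name in MEMORY_FILES:
        path = MEMORY_ROOT / name
        if not path.exists():
            # Stub header only; the real structure comes from memory-template.md.
            path.write_text(f"# {name}\n")
```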
Load only one focused file at a time to keep context small and decisions explicit.
- Setup process: setup.md
- Memory template: memory-template.md
- Model and dataset discovery: discovery.md
- Inference execution patterns: inference.md
- Evaluation rubric and scoring: evaluation.md
- Common failures and recovery: troubleshooting.md
Before selecting any artifact, confirm task type, latency budget, cost boundary, and deployment target. Use this minimum scope packet (see the sketch after this list):
- Task type: chat, generation, embedding, classification, vision, or speech
- Quality priority: best quality, best speed, or balanced
- Runtime constraints: CPU only, specific GPU class, or hosted endpoint
- Compliance constraints: license, region, or private data limits
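As a sketch, the packet can be captured as a small record so missing fields are obvious before selection begins; the field names mirror the list above and are not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ScopePacket:
    """Minimum scope packet confirmed before any artifact is selected."""
    task_type: str               # chat, generation, embedding, classification, vision, speech
    quality_priority: str        # best quality, best speed, or balanced
    runtime_constraints: str     # e.g. "CPU only", "A10G-class GPU", "hosted endpoint"
    compliance_constraints: str  # e.g. "apache-2.0 only, no data leaves EU"

packet = ScopePacket("embedding", "balanced", "CPU only", "permissive licenses only")
```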
Do not run inference on the first candidate found. First create a shortlist of at least three candidates, then execute only on finalists that pass compatibility and license checks.
For every candidate, verify license, gated access status, model size, and framework compatibility. If any of these are unknown, mark the candidate as provisional and avoid production recommendation.
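A hedged sketch of that check against the public per-model endpoint; the `gated`, `tags`, and `library_name` fields reflect the API at the time of writing, so treat any missing field as unknown and the candidate as provisional:

```python
import requests

def check_candidate(model_id: str) -> dict:
    """Fetch license, gated status, and framework tag for one candidate."""
    resp = requests.get(f"https://huggingface.co/api/models/{model_id}", timeout=10)
    resp.raise_for_status()
    info = resp.json()
    tags = info.get("tags", [])
    license_tag = next((t.split(":", 1)[1] for t in tags if t.startswith("license:")), None)
    return {
        "model_id": model_id,
        "license": license_tag,
        "gated": info.get("gated"),           # False, "auto", or "manual"
        "library": info.get("library_name"),  # framework compatibility signal
        # Unknown license or gating means no production recommendation.
        "provisional": license_tag is None or info.get("gated") is None,
    }
```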
Use the same prompt set and output checks across candidates so results are comparable. Minimum benchmark set (exercised in the sketch after this list):
- One typical request
- One edge-case request
- One failure-prone request
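A sketch of running that set against one finalist via the hosted inference endpoint; the prompts are placeholders to adapt per task, and `token` is a user-supplied Hugging Face access token:

```python
import requests

PROMPTS = {
    "typical": "Summarize: The quick brown fox jumps over the lazy dog.",
    "edge_case": "Summarize: ",                        # empty input
    "failure_prone": "Summarize: " + "word " * 2000,   # oversized input
}

def run_benchmark(model_id: str, token: str) -> dict:
    """Send the identical prompt set to one finalist and collect raw outputs."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {token}"}
    results = {}
    for name, prompt in PROMPTS.items():
        resp = requests.post(url, headers=headers, json={"inputs": prompt}, timeout=60)
        # Keep status and a truncated body so runs stay comparable across models.
        results[name] = {"status": resp.status_code, "body": resp.text[:500]}
    return results
```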
Send only what is required for the selected endpoint. Never send credentials, local paths, or unrelated private context in request payloads.
If the preferred model fails, apply this ordered fallback (sketched below):
1. Retry the same endpoint with a smaller payload
2. Switch to a compatible backup model
3. Switch to a local-only workflow if available
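A sketch of that chain, assuming a text `inputs` payload; truncating to 1,000 characters stands in for whatever "smaller payload" means for the task, and `local_fn` is a hypothetical local-only runner:

```python
import requests

def infer_with_fallback(payload: dict, models: list[str], token: str,
                        local_fn=None) -> dict:
    """Ordered fallback: smaller payload, then backup models, then local."""
    headers = {"Authorization": f"Bearer {token}"}
    smaller = {"inputs": payload["inputs"][:1000]}  # assumes a string payload
    for model_id in models:  # first entry is the preferred model
        url = f"https://api-inference.huggingface.co/models/{model_id}"
        for attempt in (payload, smaller):
            resp = requests.post(url, headers=headers, json=attempt, timeout=60)
            if resp.ok:
                return resp.json()
    if local_fn is not None:  # local-only workflow, if one is available
        return local_fn(payload)
    raise RuntimeError("All fallbacks exhausted")
```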
Log selected model id, endpoint, key parameters, and evaluation result in local memory so future runs are consistent and auditable.
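A sketch of one such record appended to evaluations.md; one JSON object per bullet is a convenience choice for easy grepping, not a format mandated by memory-template.md:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evaluation(model_id: str, endpoint: str, params: dict, result: str) -> None:
    """Append one auditable record to ~/hugging-face/evaluations.md."""
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "endpoint": endpoint,
        "params": params,  # key parameters only, never credentials
        "result": result,
    }
    path = Path.home() / "hugging-face" / "evaluations.md"
    with path.open("a") as f:
        f.write(f"- {json.dumps(record)}\n")
```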
- Picking the highest download count as the only criterion -> often misses license, latency, or domain fit.
- Ignoring gated model requirements -> integration fails at runtime due to access restrictions.
- Comparing models with different prompts -> quality conclusions become unreliable.
- Sending full user context to inference endpoints -> unnecessary privacy exposure.
- Skipping fallback design -> workflows fail hard on transient endpoint errors.
Use discovery endpoints before inference so candidate selection remains explainable and reproducible (see the discovery sketch below).
- https://huggingface.co/api/models: sends search terms and filter parameters; discovers model candidates
- https://huggingface.co/api/datasets: sends search terms and filter parameters; discovers dataset candidates
- https://huggingface.co/api/spaces: sends search terms and filter parameters; discovers runnable Spaces
- https://api-inference.huggingface.co/models/{model_id}: sends the prompt or task input payload, selected model id, and auth token; runs hosted inference
No other data is sent externally.
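A minimal discovery sketch against the models endpoint; `search`, `filter`, `sort`, and `limit` are standard query parameters of this API, though exact behavior should be verified against current Hub docs:

```python
import requests

def discover_models(query: str, task_tag: str | None = None, limit: int = 10) -> list[dict]:
    """Search model candidates; only search terms and filters leave the machine."""
    params = {"search": query, "sort": "downloads", "limit": limit}
    if task_tag:
        params["filter"] = task_tag  # e.g. "text-classification"
    resp = requests.get("https://huggingface.co/api/models", params=params, timeout=10)
    resp.raise_for_status()
    return [{"id": m["id"], "downloads": m.get("downloads", 0)} for m in resp.json()]
```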
Data that leaves your machine:
- Search terms and filter inputs sent to Hugging Face discovery APIs.
- Inference payloads sent to the Hugging Face Inference API when execution is requested.
Data that stays local:
- Preferences, shortlists, evaluation notes, and endpoint decisions in ~/hugging-face/.
This skill does NOT:
- Exfiltrate local files by default.
- Send undeclared network requests.
- Store raw secrets in local notes.
- Modify its own skill definition file.
By using this skill, selected request data is sent to Hugging Face services. Only install if you trust Hugging Face with the inputs you choose to process.
Install with clawhub install <slug> if user confirms:
- ai: general AI strategy and model-selection framing
- api: API-first integration patterns and HTTP debugging
- data-analysis: dataset inspection and quality interpretation
- data: structured data workflows and extraction patterns
- code: implementation support for scripts and adapters
If useful: clawhub star hugging-face
Stay updated: clawhub sync
Agent frameworks, memory systems, reasoning layers, and model-native orchestration.
Largest current source with strong distribution and engagement signals.