# Send Fine-Tuning to your agent
Hand the extracted package to your coding agent along with a concrete install brief instead of working out the install steps manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "fine-tuning",
    "name": "Fine-Tuning",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/ivangdavila/fine-tuning",
    "canonicalUrl": "https://clawhub.ai/ivangdavila/fine-tuning",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/fine-tuning",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=fine-tuning",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md",
      "compliance.md",
      "costs.md",
      "data-prep.md",
      "evaluation.md",
      "providers.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/fine-tuning"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/fine-tuning",
    "downloadUrl": "https://openagent3.xyz/downloads/fine-tuning",
    "agentUrl": "https://openagent3.xyz/skills/fine-tuning/agent",
    "manifestUrl": "https://openagent3.xyz/skills/fine-tuning/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/fine-tuning/agent.md"
  }
}
```
## Documentation

### When to Use

The user wants to fine-tune a language model, evaluate whether fine-tuning is worth it, or debug training issues.

### Quick Reference

| Topic | File |
| --- | --- |
| Provider comparison & pricing | providers.md |
| Data preparation & validation | data-prep.md |
| Training configuration | training.md |
| Evaluation & debugging | evaluation.md |
| Cost estimation & ROI | costs.md |
| Compliance & security | compliance.md |

### Core Capabilities

- **Decide fit** — Analyze if fine-tuning beats prompting for the use case
- **Prepare data** — Convert raw data to JSONL, deduplicate, validate format
- **Select provider** — Compare OpenAI, Anthropic (Bedrock), Google, open source based on constraints
- **Estimate costs** — Calculate training cost, inference savings, break-even point
- **Configure training** — Set hyperparameters (learning rate, epochs, LoRA rank)
- **Run evaluation** — Compare fine-tuned vs base model on task-specific metrics
- **Debug failures** — Diagnose loss curves, overfitting, catastrophic forgetting
- **Handle compliance** — Scan for PII, configure on-premise training, generate audit logs
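The "prepare data" step can be sketched as a small pipeline. The chat-style JSONL schema and function names below are illustrative assumptions; the authoritative format for your provider lives in data-prep.md:

```python
import hashlib
import json

def to_jsonl_records(pairs):
    """Convert (prompt, completion) pairs to chat-style records (assumed schema)."""
    for prompt, completion in pairs:
        yield {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]}

def dedupe(records):
    """Drop exact duplicates by hashing the serialized record."""
    seen = set()
    for rec in records:
        key = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            yield rec

def validate(rec):
    """Minimal format check: expected roles, non-empty string content."""
    msgs = rec.get("messages", [])
    return (
        [m.get("role") for m in msgs] == ["user", "assistant"]
        and all(isinstance(m.get("content"), str) and m["content"].strip() for m in msgs)
    )

def write_jsonl(pairs, path):
    """Convert, dedupe, validate, and write one JSON object per line."""
    kept = 0
    with open(path, "w", encoding="utf-8") as f:
        for rec in dedupe(to_jsonl_records(pairs)):
            if validate(rec):
                f.write(json.dumps(rec, ensure_ascii=False) + "\n")
                kept += 1
    return kept
```

Exact-match deduplication is the floor, not the ceiling; near-duplicate detection and the manual review pass described below still apply.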

### Decision Checklist

Before recommending fine-tuning, ask:

- What's the failure mode with prompting? (format, style, knowledge, cost)
- How many training examples are available? (minimum 50-100)
- Expected inference volume? (affects ROI calculation)
- Privacy constraints? (determines provider options)
- Budget for training + ongoing inference?
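The ROI question in the checklist reduces to simple arithmetic: a one-off training cost against per-request inference savings. The dollar figures below are placeholders, not quotes from costs.md:

```python
def break_even_requests(training_cost, base_cost_per_req, ft_cost_per_req):
    """Requests needed before training pays for itself.
    Returns None if the fine-tuned model is not cheaper per request."""
    saving = base_cost_per_req - ft_cost_per_req
    if saving <= 0:
        return None
    return training_cost / saving

# Placeholder numbers: a $200 training run, base model at $0.004/request,
# fine-tuned smaller model at $0.001/request.
n = break_even_requests(200.0, 0.004, 0.001)  # ~66,667 requests to break even
```

Compare the break-even point against the expected monthly volume from the checklist; if break-even takes many months, prompting likely wins.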

### Fine-Tune vs Prompt Decision

| Signal | Recommendation |
| --- | --- |
| Format/style inconsistency | Fine-tune ✓ |
| Missing domain knowledge | RAG first, then fine-tune if needed |
| High inference volume (>100K/mo) | Fine-tune for cost savings |
| Requirements change frequently | Stick with prompting |
| <50 quality examples | Prompting + few-shot |
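One way to make the table mechanical is a small helper; the thresholds mirror the table and are heuristics, not hard rules:

```python
def recommend(n_examples, monthly_requests, failure_mode):
    """Rough encoding of the decision table above.
    failure_mode: 'format', 'style', 'knowledge', 'cost', or 'changing'."""
    if n_examples < 50:
        return "prompting + few-shot"
    if failure_mode == "knowledge":
        return "RAG first, then fine-tune if needed"
    if failure_mode == "changing":
        return "stick with prompting"
    if failure_mode in ("format", "style") or monthly_requests > 100_000:
        return "fine-tune"
    return "prompting"
```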

### Critical Rules

- **Data quality > quantity** — 100 great examples beat 1000 noisy ones
- **LoRA first** — Never jump to full fine-tuning; LoRA is 10-100x cheaper
- **Hold out an eval set** — Always use an 80/10/10 split; never peek at test data
- **Same precision** — Train and serve at identical precision (4-bit, 16-bit)
- **Baseline first** — Run the eval on the base model before training to measure actual improvement
- **Expect iteration** — The first attempt is rarely optimal; plan for 2-3 cycles
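The 80/10/10 rule is easiest to honor with a seeded, deterministic split, so the test slice stays identical across iterations; a minimal sketch:

```python
import random

def split_80_10_10(records, seed=42):
    """Deterministic train/val/test split. Write the test slice out once
    and never inspect it while iterating on training runs."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (
        shuffled[:n_train],
        shuffled[n_train:n_train + n_val],
        shuffled[n_train + n_val:],
    )
```

Seeding a dedicated `random.Random` instance (rather than the global RNG) keeps the split reproducible even if other code also uses `random`.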

### Common Pitfalls

| Mistake | Fix |
| --- | --- |
| Training on inconsistent data | Manually review 100+ samples before training |
| Learning rate too high | Start with 2e-4 for SFT, 5e-6 for RLHF |
| Expecting new knowledge | Fine-tuning adjusts behavior, not knowledge — use RAG |
| No baseline comparison | Always test the base model on the same eval set |
| Ignoring forgetting | Mix in 20% general data to preserve capabilities |
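The forgetting mitigation in the last row, blending roughly 20% general-purpose data into the task set, can be sketched as follows; the 20% figure is the table's heuristic, and the sampling scheme is an assumption:

```python
import random

def mix_general(task_records, general_pool, general_fraction=0.2, seed=0):
    """Blend general examples into a task dataset so that the final mix
    is roughly `general_fraction` general data."""
    # Solve n_general / (n_task + n_general) = general_fraction for n_general.
    n_general = int(len(task_records) * general_fraction / (1 - general_fraction))
    n_general = min(n_general, len(general_pool))
    rng = random.Random(seed)
    mixed = list(task_records) + rng.sample(general_pool, n_general)
    rng.shuffle(mixed)
    return mixed
```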
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: ivangdavila
- Version: 1.0.0
## Source health
- Status: healthy
- Source download looks usable.
- Yavira can redirect you to the upstream package for this source.
- Health scope: source
- Reason: direct_download_ok
- Checked at: 2026-04-30T16:55:25.780Z
- Expires at: 2026-05-07T16:55:25.780Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/fine-tuning)
- [Send to Agent page](https://openagent3.xyz/skills/fine-tuning/agent)
- [JSON manifest](https://openagent3.xyz/skills/fine-tuning/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/fine-tuning/agent.md)
- [Download page](https://openagent3.xyz/downloads/fine-tuning)