# Send ML Model Eval Benchmark to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of working out the setup by hand.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "ml-model-eval-benchmark",
    "name": "Ml Model Eval Benchmark",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/0x-Professor/ml-model-eval-benchmark",
    "canonicalUrl": "https://clawhub.ai/0x-Professor/ml-model-eval-benchmark",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/ml-model-eval-benchmark",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=ml-model-eval-benchmark",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md",
      "agents/openai.yaml",
      "references/benchmarking-guide.md",
      "scripts/benchmark_models.py"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "ml-model-eval-benchmark",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-02T10:46:31.577Z",
      "expiresAt": "2026-05-09T10:46:31.577Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=ml-model-eval-benchmark",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=ml-model-eval-benchmark",
        "contentDisposition": "attachment; filename=\"ml-model-eval-benchmark-0.1.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "ml-model-eval-benchmark"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/ml-model-eval-benchmark"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/ml-model-eval-benchmark",
    "downloadUrl": "https://openagent3.xyz/downloads/ml-model-eval-benchmark",
    "agentUrl": "https://openagent3.xyz/skills/ml-model-eval-benchmark/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ml-model-eval-benchmark/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ml-model-eval-benchmark/agent.md"
  }
}
```
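An agent can also consume these fields programmatically instead of reading the rendered page. A minimal sketch, assuming the JSON manifest at `links.manifestUrl` is publicly reachable and matches the schema above (standard library only):

```python
import json
from urllib.request import urlopen

MANIFEST_URL = "https://openagent3.xyz/skills/ml-model-eval-benchmark/agent.json"

# Fetch and parse the machine-readable manifest.
with urlopen(MANIFEST_URL) as resp:
    manifest = json.load(resp)

install = manifest["install"]
print("Download from:", install["downloadUrl"])
print("Primary doc:", install["primaryDoc"])

# Surface the install checklist so the agent can track each step.
for step in install["validation"]["installChecklist"]:
    print("- [ ]", step)
```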
## Documentation

### Overview

Produce consistent model ranking outputs from metric-weighted evaluation inputs.

### Workflow

1. Define metric weights and accepted metric ranges.
2. Ingest model metrics for each candidate.
3. Compute the weighted score and ranking (sketched below).
4. Export the leaderboard and promotion recommendation.
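
The scoring step is a weighted sum over the candidate's metrics, with out-of-range values rejected before ranking. A minimal sketch of that workflow, assuming metrics are already normalized to comparable scales; the metric names, weights, ranges, and candidates here are illustrative, not part of the package:

```python
# Illustrative weights and accepted ranges; real values come from your eval config.
WEIGHTS = {"accuracy": 0.5, "latency_score": 0.3, "robustness": 0.2}
RANGES = {"accuracy": (0.0, 1.0), "latency_score": (0.0, 1.0), "robustness": (0.0, 1.0)}

def weighted_score(metrics: dict[str, float]) -> float:
    """Reject out-of-range metrics, then compute the weighted sum."""
    for name, (lo, hi) in RANGES.items():
        if not lo <= metrics[name] <= hi:
            raise ValueError(f"{name}={metrics[name]} outside accepted range [{lo}, {hi}]")
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# Hypothetical candidate metrics, one entry per model.
candidates = {
    "model-a": {"accuracy": 0.91, "latency_score": 0.70, "robustness": 0.85},
    "model-b": {"accuracy": 0.88, "latency_score": 0.95, "robustness": 0.80},
}

# Rank by score, highest first, and record the weighting assumptions in the output.
leaderboard = sorted(
    ((name, weighted_score(m)) for name, m in candidates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
report = {"weights": WEIGHTS, "leaderboard": leaderboard, "promote": leaderboard[0][0]}
print(report)
```

Ties and normalization choices are where implementations diverge; defer to `references/benchmarking-guide.md` for the package's tie-break rules.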

### Use bundled resources

- Run `scripts/benchmark_models.py` to generate benchmark outputs (basic invocation below).
- Read `references/benchmarking-guide.md` for weighting and tie-break guidance.
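
The script's argument interface is not documented in this brief, so check the script header or SKILL.md before running it. From the root of the extracted folder, the bare invocation is:

```text
python scripts/benchmark_models.py
```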

### Guardrails

- Keep metric names and scales consistent across candidates.
- Record weighting assumptions in the output.
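
The first guardrail is easy to enforce mechanically: before scoring, confirm every candidate reports exactly the same metric names. A minimal sketch; the helper below is illustrative and not part of the bundled script:

```python
def check_consistent_metrics(candidates: dict[str, dict[str, float]]) -> None:
    """Raise if candidates disagree on which metrics they report."""
    if not candidates:
        return
    expected = set(next(iter(candidates.values())))
    for name, metrics in candidates.items():
        if set(metrics) != expected:
            raise ValueError(
                f"{name}: missing={expected - set(metrics)} extra={set(metrics) - expected}"
            )
```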
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: 0x-Professor
- Version: 0.1.0
## Source health
- Status: healthy
- Summary: Item download looks usable.
- Detail: Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-05-02T10:46:31.577Z
- Expires at: 2026-05-09T10:46:31.577Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/ml-model-eval-benchmark)
- [Send to Agent page](https://openagent3.xyz/skills/ml-model-eval-benchmark/agent)
- [JSON manifest](https://openagent3.xyz/skills/ml-model-eval-benchmark/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/ml-model-eval-benchmark/agent.md)
- [Download page](https://openagent3.xyz/downloads/ml-model-eval-benchmark)