Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Verify claims, numbers, and facts in markdown drafts against source data. Use when: reviewing blog posts, reports, or documentation for accuracy before publishing.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Given a markdown draft file, this skill extracts every verifiable claim (numbers, dates, model names, scores, causal statements) and cross-references them against available source data to produce a verification report.
python3 skills/fact-checker/scripts/fact_check.py <draft.md>
python3 skills/fact-checker/scripts/fact_check.py <draft.md> --output report.md
- Numeric claims — integers and floats with surrounding context
- Model references — model/task (phi4/classify) and model:tag (phi4:latest)
- Dates — YYYY-MM-DD format
- Score values — decimal scores like 0.923, 1.000
- Percentages — 42%, 95.3%
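The extraction pass above can be sketched with a few regexes. This is a hypothetical illustration, not the actual implementation in fact_check.py; the pattern names mirror the claim types, and the overlap filter keeps the most specific match (e.g. a date is not also counted as three bare numbers).

```python
import re

# Hypothetical claim-type patterns; the real fact_check.py may differ.
# Dict order matters: more specific patterns claim their spans first.
CLAIM_PATTERNS = {
    "model_ref":  re.compile(r"\b[a-z][\w.-]*[:/][\w.-]+\b"),  # phi4/classify, phi4:latest
    "date":       re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),        # YYYY-MM-DD
    "percentage": re.compile(r"\b\d+(?:\.\d+)?%"),             # 42%, 95.3%
    "score":      re.compile(r"\b[01]\.\d{3}\b"),              # 0.923, 1.000
    "number":     re.compile(r"\b\d+(?:\.\d+)?\b"),            # bare integers/floats
}

def extract_claims(text: str) -> list[tuple[str, str]]:
    """Return (claim_type, matched_text) pairs, most specific type first."""
    claims, seen_spans = [], set()
    for claim_type, pattern in CLAIM_PATTERNS.items():
        for m in pattern.finditer(text):
            # Skip matches that overlap a span already claimed by an
            # earlier (more specific) pattern.
            if not any(m.start() < e and s < m.end() for s, e in seen_spans):
                seen_spans.add((m.start(), m.end()))
                claims.append((claim_type, m.group()))
    return claims
```

In practice the real script also records surrounding context for each match so the report can quote the claim, which this sketch omits.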
- projects/hybrid-control-plane/FINDINGS.md — primary source of truth
- Control Plane /status API at http://localhost:8765/status — live scored run data
- projects/hybrid-control-plane/data/scores/*.json — raw scored run files on disk
- memory/*.md — daily logs with timestamps and decisions
- git log in projects/hybrid-control-plane/ — commit hashes, dates, authorship
- projects/hybrid-control-plane/CHANGELOG.md — sprint history
Each claim produces one line:

✅ CONFIRMED: "phi4/classify scored 1.000" → /status API: phi4_latest_classify mean=1.000 n=23
⚠️ UNVERIFIABLE: "this took about a day" → no timestamp correlation found in logs
❌ CONTRADICTED: "909 runs" → /status API shows 958 total runs (stale number?)

Followed by a summary count of confirmed / unverifiable / contradicted claims.
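Tallying those per-claim lines into the summary counts is straightforward. This is a minimal sketch assuming the marker strings match the sample output above exactly; the real report format may vary.

```python
from collections import Counter

# Assumed line prefixes, taken from the sample report lines above.
MARKERS = ("✅ CONFIRMED", "⚠️ UNVERIFIABLE", "❌ CONTRADICTED")

def summarize(report_lines):
    """Count report lines by verdict, keyed by the verdict word."""
    counts = Counter()
    for line in report_lines:
        for marker in MARKERS:
            if line.startswith(marker):
                counts[marker.split()[-1]] += 1
    return counts
```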
When asked to "fact-check" or "verify" a draft blog post, report, or documentation file — run this skill and present the report to the user. If any claims are ❌ CONTRADICTED, flag them prominently and suggest corrections.
1. Run the script with the path to the draft file.
2. Parse the output report.
3. Summarise key findings — especially any ❌ CONTRADICTED claims.
4. Suggest specific corrections with the correct values from the evidence.
5. If the /status API is unavailable, note it and rely on FINDINGS.md + score files.
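The API-unavailable fallback in the last step could look like the following. The /status URL comes from the data-sources list above, but the timeout value and the shape of the returned dict are assumptions for illustration.

```python
import glob
import json
import urllib.error
import urllib.request

# URL from the data-sources list; timeout is an assumed value.
STATUS_URL = "http://localhost:8765/status"

def load_evidence():
    """Prefer the live /status API; fall back to score files on disk."""
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=3) as resp:
            return {"source": "status_api", "data": json.load(resp)}
    except (urllib.error.URLError, OSError):
        # API down: read the raw scored-run JSON files instead and flag
        # that FINDINGS.md should be treated as the primary reference.
        scores = []
        for path in sorted(glob.glob("projects/hybrid-control-plane/data/scores/*.json")):
            with open(path) as f:
                scores.append(json.load(f))
        return {"source": "disk_fallback", "scores": scores,
                "note": "/status API unavailable; using FINDINGS.md + score files"}
```

Either branch returns a dict with a "source" key, so downstream verification code can report which evidence path was used.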