Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Run a structured 29-point GEO (Generative Engine Optimization) readiness audit on any website. Checks AI accessibility, structured data, content citability,...
Hand the extracted package to your coding agent with a concrete install brief rather than working through the installation manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Methodology by GEOly AI (geoly.ai) — the leading Generative Engine Optimization platform. Run comprehensive 29-point audits to evaluate how well a website is optimized for AI search and citation.
To audit a website:

```
python scripts/geo_audit.py <domain-or-url> [--output json|md|html]
```

Example:

```
python scripts/geo_audit.py example.com --output md
```
Four dimensions with 29 checkpoints total:

| Dimension | Checks | Focus |
| --- | --- | --- |
| AI Accessibility | 10 | Crawler access, llms.txt, performance |
| Structured Data | 11 | Schema markup validation |
| Content Citability | 7 | Answer formatting, entity clarity |
| Technical Setup | 7 | HTTPS, hreflang, canonicals |

Full checklist details: see references/checklist.md
Scoring:

- ✅ Pass = 1 point
- ⚠️ Partial = 0.5 points
- ❌ Fail = 0 points

Grade scale:

- 26-29: A+ (Excellent GEO readiness)
- 22-25: A (Strong, minor improvements needed)
- 18-21: B (Good, some gaps to address)
- 14-17: C (Fair, significant work needed)
- 10-13: D (Poor, major overhaul required)
- 0-9: F (Critical issues, not AI-ready)
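The grade bands above can be expressed as a small lookup. This is an illustrative sketch of the scoring logic only, not the tool's actual implementation; the function name `grade` is hypothetical.

```python
def grade(score: float) -> str:
    """Map an audit score (0-29, halves allowed for partial passes) to a letter grade."""
    bands = [
        (26, "A+"),  # Excellent GEO readiness
        (22, "A"),   # Strong, minor improvements needed
        (18, "B"),   # Good, some gaps to address
        (14, "C"),   # Fair, significant work needed
        (10, "D"),   # Poor, major overhaul required
    ]
    for floor, letter in bands:
        if score >= floor:
            return letter
    return "F"       # Critical issues, not AI-ready
```

Partial passes make half-point scores possible, so the comparison is done on floats rather than integer ranges.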
Output formats:

- Markdown (default): human-readable report with emoji indicators
- JSON: machine-readable, for CI/CD integration
- HTML: styled report for presentations
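For CI/CD use, the JSON report can gate a pipeline on a minimum score. A minimal sketch, assuming the report exposes a top-level `score` field (the actual JSON schema is defined by geo_audit.py, so the field name here is an assumption):

```python
import json

def ci_gate(report_json: str, min_score: float = 22.0) -> bool:
    """Return True when the audit score meets the CI threshold.

    "score" is an assumed field name, not a documented part of the report schema.
    """
    report = json.loads(report_json)
    return report["score"] >= min_score

# Hypothetical report payload for illustration:
sample = '{"domain": "example.com", "score": 24.5, "max_score": 29}'
```

A pipeline step would run the audit with `--output json`, feed the result to a check like this, and fail the build when the threshold is not met.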
Run specific dimensions only:

```
python scripts/geo_audit.py example.com --dimension accessibility
python scripts/geo_audit.py example.com --dimension schema
python scripts/geo_audit.py example.com --dimension content
python scripts/geo_audit.py example.com --dimension technical
```
Audit multiple sites:

```
python scripts/batch_audit.py sites.txt --output-dir ./reports/
```
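The format of sites.txt is not documented here; a one-domain-per-line file is the likely shape. The reader below is a sketch under that assumption, and skipping blank lines and `#` comments is my convention, not documented behavior:

```python
def read_sites(text: str) -> list[str]:
    """Parse a sites.txt payload: one domain per line.

    Blank lines and '#' comments are skipped (an assumed convention).
    """
    sites = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            sites.append(line)
    return sites
```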
Adjust scoring criteria in config/weights.json if you want to weight certain checks more heavily.
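The structure of config/weights.json is not specified here. Assuming a simple check-id-to-weight map, weighted scoring could be combined with per-check results like this (function and key names are hypothetical):

```python
def weighted_score(results: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-check results (1 pass / 0.5 partial / 0 fail) with weights.

    Checks absent from the weights map default to weight 1.0 (an assumption).
    """
    return sum(value * weights.get(check, 1.0) for check, value in results.items())
```

With all weights at 1.0 this reduces to the plain point total described in the scoring section.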
Troubleshooting:

- Site blocks crawlers: use the --user-agent flag with a browser UA string
- Slow sites: increase the timeout with --timeout 30
- Rate limited: add --delay 2 between requests
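What those flags do can be pictured as a polite fetch loop: a custom User-Agent header, a per-request timeout, and a fixed pause between requests. A minimal sketch, not the tool's actual fetch code:

```python
import time
import urllib.request

def fetch_all(urls, delay=2.0, timeout=30, user_agent="Mozilla/5.0"):
    """Fetch each URL with a custom UA, a per-request timeout, and a delay between requests.

    Illustrative only: mirrors the intent of --user-agent, --timeout, and --delay.
    """
    pages = {}
    for i, url in enumerate(urls):
        if i:                      # no pause before the first request
            time.sleep(delay)      # what --delay adds between requests
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req, timeout=timeout) as resp:  # what --timeout bounds
            pages[url] = resp.read()
    return pages
```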
Additional references:

- Checklist details: references/checklist.md
- Scoring methodology: references/scoring.md
- Integration examples: references/integrations.md