Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Autonomous GitHub Issue Resolver Agent with guardrails. Use when the user wants to discover, analyze, and fix open issues in GitHub repositories. Triggers on...
Hand the extracted package to your coding agent with a concrete install brief instead of working through the install steps yourself.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Autonomous agent for discovering, analyzing, and fixing open GitHub issues — with a 5-layer guardrail system.
Every action goes through guardrails. Before any operation:
1. Load the guardrails.json config
2. Validate scope (repo, branch, path)
3. Check the action gate (auto/notify/approve)
4. Validate the command against the allowlist
5. Log to the audit trail
For guardrail details, see references/guardrails-guide.md.
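A minimal sketch of what that pre-flight sequence could look like. The function name and config keys (scope, gates, allowlist) are assumptions for illustration; the real enforcement logic lives in scripts/guardrails.py and the actual schema is documented in references/guardrails-guide.md.

```python
# Illustrative pre-flight check, NOT the actual scripts/guardrails.py implementation.
# Config keys ("scope", "gates", "allowlist") are assumptions about guardrails.json.
import datetime
import json
import pathlib
import shlex

def preflight(action: str, command: str, repo: str, branch: str,
              config_path: str = "guardrails.json") -> str:
    config = json.loads(pathlib.Path(config_path).read_text())

    # 1. Scope: is this repo/branch inside the allowed scope?
    scope = config.get("scope", {})
    if repo not in scope.get("repos", []) or branch in scope.get("protectedBranches", []):
        raise PermissionError(f"Out of scope: {repo}@{branch}")

    # 2. Action gate: auto / notify / approve (default to the strictest).
    gate = config.get("gates", {}).get(action, "approve")

    # 3. Command allowlist: only the first token is matched here for brevity.
    if shlex.split(command)[0] not in config.get("allowlist", []):
        raise PermissionError(f"Command not allowlisted: {command}")

    # 4. Audit trail: append-only log of every attempted action.
    pathlib.Path("audit").mkdir(exist_ok=True)
    with open("audit/actions.log", "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()} {gate} {action} {command}\n")

    return gate  # caller decides whether to proceed, notify, or ask for approval
```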
- Never touch protected branches (main, master, production)
- Never modify .env, secrets, CI configs, or credentials
- Never force push
- Never modify dependency files without explicit approval
- Never modify its own skill/plugin files
- One issue at a time: finish or abandon before starting a new one
- All dangerous actions (write code, commit, push, PR) require user approval
- Everything is logged to the audit/ directory
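As an illustration only, the branch and path rules above could be enforced by a check like the one below. The pattern lists are made-up examples, not the contents of guardrails.json.

```python
# Illustrative enforcement of the protection rules above; patterns are examples,
# not the actual guardrails.json contents.
from fnmatch import fnmatch

PROTECTED_BRANCHES = {"main", "master", "production"}
BLOCKED_PATHS = [".env", ".env.*", "*.pem", ".github/workflows/*"]   # never writable
APPROVAL_PATHS = ["package.json", "package-lock.json", "requirements*.txt", "Cargo.toml"]

def classify_write(branch: str, file_path: str) -> str:
    """Return 'blocked', 'approve', or 'allow' for a proposed file write."""
    if branch in PROTECTED_BRANCHES or any(fnmatch(file_path, p) for p in BLOCKED_PATHS):
        return "blocked"
    if any(fnmatch(file_path, p) for p in APPROVAL_PATHS):
        return "approve"   # dependency files need explicit user approval
    return "allow"
```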
Trigger: User provides a GitHub repository (owner/repo).

Steps:
1. Validate the repo against guardrails:
   python3 scripts/guardrails.py repo <owner> <repo>
   If blocked, tell the user and stop.
2. Fetch, score, and present issues using the recommendation engine:
   python3 scripts/recommend.py <owner> <repo>
   This automatically fetches open issues, filters out PRs, scores them by severity/impact/effort/freshness (see the scoring sketch below), and presents a formatted recommendation. Always use recommend.py — never manually format issue output. The script ensures consistent presentation every time.
   For raw JSON (e.g., for further processing):
   python3 scripts/recommend.py <owner> <repo> --json
3. ⏹️ STOP. Wait for the user to select an issue.
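The actual scoring rubric is documented in references/quick-reference.md. The sketch below only illustrates the general shape of a severity/impact/effort/freshness score; the weights and field names are assumptions, not taken from recommend.py.

```python
# Hypothetical issue-scoring shape; real weights and fields live in recommend.py
# and references/quick-reference.md.
from datetime import datetime, timezone

def score_issue(severity: float, impact: float, effort: float, updated_at: str) -> float:
    """All inputs normalized to 0..1 except updated_at (ISO-8601 timestamp)."""
    age_days = (datetime.now(timezone.utc) - datetime.fromisoformat(updated_at)).days
    freshness = max(0.0, 1.0 - age_days / 365)          # decays over a year
    return 0.35 * severity + 0.30 * impact + 0.20 * (1 - effort) + 0.15 * freshness

# Example: a severe, high-impact, low-effort issue updated recently scores near the top.
print(round(score_issue(0.9, 0.8, 0.2, "2025-01-15T12:00:00+00:00"), 3))
```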
After implementing:
1. Find and run tests (Gate: notify):
   python3 scripts/sandbox.py run npm test   # or pytest, cargo test, etc.
2. If tests fail AND autoRollbackOnTestFail is true:
   - Revert all changes
   - Notify the user
   - Suggest an alternative approach
3. If no tests exist, write basic tests covering the fix.
4. Report results to the user.
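A compact sketch of that test-then-rollback control flow, assuming the tests run as a subprocess and that uncommitted changes are reverted with git. The real command execution goes through scripts/sandbox.py; this only shows the shape of the logic.

```python
# Illustrative test-and-rollback flow; scripts/sandbox.py is the real execution wrapper.
import subprocess

def run_tests_with_rollback(test_cmd: list[str], auto_rollback: bool = True) -> bool:
    """Run the project's test command; optionally revert uncommitted changes on failure."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode == 0:
        print("Tests passed.")
        return True

    print("Tests failed:\n" + result.stdout + result.stderr)
    if auto_rollback:  # mirrors the autoRollbackOnTestFail setting
        # Discard uncommitted working-tree changes; untracked files are left alone here.
        subprocess.run(["git", "checkout", "--", "."], check=True)
        print("Changes reverted; consider an alternative approach.")
    return False

# Example usage:
# run_tests_with_rollback(["npm", "test"])
# run_tests_with_rollback(["pytest", "-q"])
```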
| Script | Purpose | Run Without Reading |
| --- | --- | --- |
| scripts/recommend.py | Primary entry point — fetch, score, and present issues | ✅ |
| scripts/fetch_issues.py | Raw issue fetcher (used internally by recommend.py) | ✅ |
| scripts/analyze_issue.py | Deep analysis of single issue | ✅ |
| scripts/create_pr.py | PR creation wrapper | ✅ |
| scripts/guardrails.py | Guardrail enforcement engine | ✅ |
| scripts/sandbox.py | Safe command execution wrapper | ✅ |
| scripts/audit.py | Action logger | ✅ |
- references/quick-reference.md — GitHub API reference, scoring rubric, test commands
- references/guardrails-guide.md — Full guardrails documentation and customization
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.