Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Autonomously handle coding tasks including writing, debugging, refactoring, testing, and git operations upon user request.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Your internal debugging instinct. When something breaks while you're working, this is how you fix it.
These rules apply every time, even for quick fixes. No exceptions.
- Read before edit. Never edit without reading first; `edit` requires an exact text match or it fails, and `write` overwrites the file entirely. Use `edit` for changes to existing files.
- Never delete without asking. Prefer safe deletion over `rm -rf`.
- Never push without asking. Run `git push` only when the user explicitly says to.
- Never commit without asking. Stage and commit only on request.
- Backup awareness. Before large refactors, suggest a branch or stash.
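A minimal sketch of the backup-awareness rule, run in a throwaway repo (the file name, branch name, and committer identity here are all hypothetical):

```shell
# Hypothetical sketch: snapshot state on a backup branch before a big refactor.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "v1" > app.txt
git add app.txt
git -c user.email=agent@example.com -c user.name=agent commit -qm "initial"
# Backup awareness: create a safety branch before touching anything large.
git branch backup/pre-refactor
echo "v2 (refactored)" > app.txt
# The backup branch still holds the pre-refactor content.
git show backup/pre-refactor:app.txt   # prints "v1"
```

`git stash` works just as well when the changes are not yet worth a branch; the point is that the pre-refactor state stays one command away.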
- Always verify your fix. After every change, re-run the failing command or tests. Never assume it worked.
- Tell the user what happened. After fixing, briefly explain what broke and what you changed.
- Read the error first. Don't guess at fixes; read the actual error message, stack trace, or test output before touching code.
- Minimal changes. Fix the bug, don't refactor the neighborhood. Keep diffs small and focused.
If you hit an error during a task, try a quick fix first while following the rules above. But if you:
- Get stuck: your first fix didn't work, and you see the same error or new ones
- Hit something complex: errors across multiple files, unfamiliar code, architectural issues
- Need structure: you're not sure where the bug is or where to start

Then activate Ultra Agent Stinct and follow the full structured workflows below step by step.
When you encounter an error or something breaks:
1. Reproduce. Run the failing command: `exec command:"<failing command>" workdir:"<project dir>"`
2. Read the error. Parse the stack trace; identify the file and line number.
3. Read the code. Read the relevant file(s): `read path:"<file from stack trace>"`
4. Trace the cause. Follow the call chain; read imports, dependencies, and config. Check for:
   - Typos, wrong variable names
   - Missing imports or dependencies
   - Type mismatches, null/undefined access
   - Wrong paths, missing env vars
   - Logic errors in conditionals
5. Fix. Apply the minimal correct fix: `read path:"<file>"` then `edit path:"<file>" old:"<exact broken code>" new:"<fixed code>"`
6. Verify. Re-run the original failing command and confirm the fix works.
7. Report. Tell the user what broke and what you fixed (briefly), then continue your original task.
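The first two steps above (reproduce, then read the error) can be sketched like this; the broken file and its error are invented for illustration:

```shell
# Sketch: reproduce a failure and read the real error before touching code.
workdir=$(mktemp -d)
cat > "$workdir/app.py" <<'EOF'
def main():
    return undefined_name
main()
EOF
# Step 1: reproduce. Capture the actual error output instead of guessing.
err=$(cd "$workdir" && python3 app.py 2>&1 || true)
# Step 2: read the error. The traceback names the file and line to open next.
echo "$err" | grep -o 'line [0-9]*' | tail -n 1
echo "$err" | tail -n 1
```

Everything after this point (read the code, trace, fix, verify) starts from the file and line the traceback just handed you, not from a hunch.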
When you need to create or modify code as part of a task:
1. Understand the project. Check existing patterns: `exec command:"ls -la" workdir:"<project dir>"`. Read package.json, pyproject.toml, Cargo.toml, or equivalent, and match existing style and conventions.
2. Plan first. Before writing, outline what you'll create; think through structure, dependencies, and edge cases.
3. Write. Create the file: `write path:"<new file path>" content:"<complete file content>"`
4. Verify. Run it and test it; make sure it actually works before moving on.
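A small sketch of the write-then-verify steps, with a hypothetical script name:

```shell
# Sketch: create a new file, then actually run it before moving on.
proj=$(mktemp -d)
cat > "$proj/greet.sh" <<'EOF'
#!/bin/sh
echo "hello, $1"
EOF
chmod +x "$proj/greet.sh"
# Verify: run it; never assume the freshly written file works.
"$proj/greet.sh" world   # prints "hello, world"
```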
1. Find the test runner:
   - Node.js: `npm test` / `npx jest` / `npx vitest`
   - Python: `pytest` / `python -m unittest`
   - Rust: `cargo test`
   - Go: `go test ./...`
2. Run tests: `exec command:"<test command>" workdir:"<project>" timeout:120`
3. On failure: read the failing test, read the source under test, and apply the Debug Workflow.
4. On success: report a summary and continue.
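The runner lookup in step 1 can be sketched as a manifest check; the function name and the fixed fallback order are assumptions, not part of the skill:

```shell
# Sketch: infer the test command from whichever manifest file exists.
detect_test_cmd() {
  if   [ -f package.json ];   then echo "npm test"
  elif [ -f pyproject.toml ]; then echo "pytest"
  elif [ -f Cargo.toml ];     then echo "cargo test"
  elif [ -f go.mod ];         then echo "go test ./..."
  else echo "unknown"
  fi
}
proj=$(mktemp -d)
cd "$proj"
touch Cargo.toml
detect_test_cmd   # prints "cargo test"
```

Real projects can carry several manifests at once (e.g. a Python repo with a package.json for docs tooling), so treat the first match as a guess to verify, not a certainty.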
Only when the user asks to commit, stage, or check git status:
- `exec command:"git status" workdir:"<project>"`
- `exec command:"git diff --stat" workdir:"<project>"`
- `exec command:"git add <specific files>" workdir:"<project>"`
- `exec command:"git commit -m '<message>'" workdir:"<project>"`

For detailed git workflows, see references/git-workflow.md.
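Run end to end in a throwaway repo, that sequence looks like this (file name, commit message, and committer identity are invented):

```shell
# Sketch: status -> add -> commit, and only after the user asked for it.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "patched" > bug.txt
git status --short          # shows bug.txt as untracked
git add bug.txt             # stage only the specific file, not "git add ."
git -c user.email=agent@example.com -c user.name=agent \
    commit -qm "fix: resolve bug"
git log --oneline
```

Staging named files instead of `git add .` keeps unrelated work out of the commit, which matches the minimal-changes rule above.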
For large tasks (multi-file refactors, entire features, long builds), spawn a background agent:
`exec pty:true workdir:"<project>" background:true command:"claude '<detailed task>'"`
Monitor it with:
- `process action:list`
- `process action:log sessionId:<id>`
- `process action:poll sessionId:<id>`

See references/escalation-guide.md for when to self-handle vs delegate.
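A generic stand-in for that pattern: `sleep`/`echo` replace the actual `claude` invocation, and the liveness check plays the role of `process action:poll`:

```shell
# Sketch: launch a long task in the background, then poll and read its log.
log=$(mktemp)
( sleep 1; echo "task done" ) > "$log" 2>&1 &
pid=$!
# Poll: kill -0 checks liveness without sending a real signal.
kill -0 "$pid" 2>/dev/null && echo "still running"
wait "$pid"
cat "$log"   # prints "task done"
```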
| Task | macOS/Linux | Windows (Git Bash) |
| --- | --- | --- |
| Find files | `find . -name "*.ts" -not -path "*/node_modules/*"` | Same |
| Search code | `grep -rn "pattern" --include="*.ts" .` | Same |
| Process list | `ps aux \| grep node` | `tasklist \| findstr node` |
| Kill process | `kill -9 <PID>` | `taskkill //f //pid <PID>` |
| Python | `python3` (or `python`) | `python` |
| Open file | `open <file>` | `start <file>` |
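One portable way to apply the Python row above is to resolve whichever interpreter exists before calling it (a sketch, assuming at least one of the two is on PATH):

```shell
# Sketch: prefer python3, fall back to python, per the table above.
py=$(command -v python3 || command -v python)
"$py" -c 'print("interpreter ok")'   # prints "interpreter ok"
```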
- Keep tool calls focused: one task per chain.
- Don't read files already in your system prompt.
- For large files, read targeted sections rather than the whole thing.
- If context is getting heavy, summarize findings before continuing.
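The targeted-read tip can be sketched with a `sed` line range instead of dumping the whole file (the file here is synthetic):

```shell
# Sketch: read only lines 4990-5010 of a large file, not all 10000.
f=$(mktemp)
seq 1 10000 > "$f"
sed -n '4990,5010p' "$f" | wc -l   # 21 lines instead of 10000
```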
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.