Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build cross-device tools without hardcoding paths or account names
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Methodology for building tools that work across different devices, naming schemes, and configurations. Based on lessons from an OAuth refresher debugging session (2026-01-23).
Never assume your device is the only device. Your local setup is just one of many possible configurations. Build for the general case, not the specific instance.
Before writing any code that reads configuration, data, or credentials, ask:

- File paths? (macOS vs Linux, different home dirs)
- Account names? (user123 vs default vs oauth)
- Service names? (slight variations in spelling/capitalization)
- Data structure? (different versions, different formats)
- Environment? (different shells, different tools available)

Example from the OAuth refresher:

- ❌ Assumed: account is always "claude"
- ✅ Reality: could be "claude", "Claude Code", "default", etc.

Action: list the variables, then make them configurable or auto-discoverable, as in the sketch below.
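A minimal sketch of that action, assuming the macOS `security` CLI used throughout this skill; the `KEYCHAIN_SERVICE`/`KEYCHAIN_ACCOUNT` variable names, the default service name, and the candidate list are illustrative, not part of the original script:

```bash
#!/usr/bin/env bash
# Every value that can vary between devices is an environment override
# with a sensible default -- nothing is hardcoded to one machine.
SERVICE="${KEYCHAIN_SERVICE:-Example-Service}"   # hypothetical service name
ACCOUNT="${KEYCHAIN_ACCOUNT:-}"                  # empty means auto-discover

# Auto-discover the account when none was configured.
if [[ -z "$ACCOUNT" ]]; then
  for candidate in "claude" "Claude Code" "default" "oauth"; do
    if security find-generic-password -s "$SERVICE" -a "$candidate" -w >/dev/null 2>&1; then
      ACCOUNT="$candidate"
      break
    fi
  done
fi

[[ -z "$ACCOUNT" ]] && { echo "No account found for service: $SERVICE" >&2; exit 1; }
```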
Before pushing to production, test:

- Wrong configuration (intentionally break the config)
- Missing data (remove expected fields)
- Multiple entries (the ambiguous case)
- Edge cases (empty values, special characters)

Example from the OAuth refresher:

- Test with keychain_account: "wrong-name" → fallback should work
- Test with incomplete keychain data → should fail gracefully with a helpful error

Action: test failure modes, not just the happy path; see the sketch below.
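One way to script those checks, assuming the tool is a script named `./refresh-tokens.sh` that honors the environment overrides sketched above and exits non-zero on failure (the script name and variables are hypothetical):

```bash
#!/usr/bin/env bash
# Failure-mode smoke tests: each case must fall back or fail loudly,
# never succeed silently with bad data.

# Wrong account name: the fallback chain should still find the entry.
if KEYCHAIN_ACCOUNT="wrong-name" ./refresh-tokens.sh; then
  echo "PASS: wrong account was recovered by fallback"
else
  echo "FAIL: fallback did not handle a wrong account"
fi

# Missing data: must exit non-zero with a diagnostic.
if KEYCHAIN_SERVICE="no-such-service" ./refresh-tokens.sh; then
  echo "FAIL: missing data did not produce an error"
else
  echo "PASS: missing data failed loudly"
fi
```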
❌ Wrong:

```bash
# Ambiguous - returns first match
security find-generic-password -s "Service" -w
```

✅ Correct:

```bash
# Explicit - returns specific entry
security find-generic-password -s "Service" -a "account" -w
```

Rule: if a command can be ambiguous, make it explicit.
❌ Wrong:

```bash
DATA=$(read_config)
USE_VALUE="$DATA"  # Hope it's valid
```

✅ Correct:

```bash
DATA=$(read_config)
if ! validate_structure "$DATA"; then
  error "Invalid data structure"
fi
USE_VALUE="$DATA"
```

Rule: never assume data has the expected structure.
❌ Wrong:

```bash
ACCOUNT="claude"  # Hardcoded
```

✅ Correct:

```bash
# Try configured → try common → error with help
ACCOUNT="${CONFIG_ACCOUNT}"
if ! has_data "$ACCOUNT"; then
  ACCOUNT=""  # reset so the final check fires if no fallback matches
  for fallback in "claude" "default" "oauth"; do
    if has_data "$fallback"; then
      ACCOUNT="$fallback"
      break
    fi
  done
fi
[[ -z "$ACCOUNT" ]] && error "No account found. Tried: ..."
```

Rule: provide automatic fallbacks for common variations.
Don't ask: "Is it broken?"

Ask: "What exact values do you see? How many entries exist? Which one has the data?"

Example:

```bash
# Vague
"Check keychain"

# Specific
"Run: security find-generic-password -l 'Service' | grep 'acct'"
"Tell me: 1. How many entries 2. Which has tokens 3. Last modified"
```
Don't think: "Works on my machine."

Think: "What if their setup differs in [X]?"

Checklist:

- Different account names?
- Different file paths?
- Different tools/versions?
- Different permissions?
- Different data formats?
- List all external dependencies (files, commands, services)
- Document what each dependency provides
- Identify which parts could vary between devices (see the preflight sketch below)
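A sketch of what that dependency map can look like as code; the dependency list (`security`, `jq`, `curl`) and the `APP_CONFIG` path are illustrative:

```bash
#!/usr/bin/env bash
# Preflight: verify every external dependency before doing any real work.

# Commands the tool shells out to:
for cmd in security jq curl; do
  command -v "$cmd" >/dev/null 2>&1 || {
    echo "Missing required command: $cmd" >&2
    exit 1
  }
done

# Files the tool reads (derived from $HOME, since the path varies per device):
CONFIG="${APP_CONFIG:-$HOME/.config/app.json}"
[[ -r "$CONFIG" ]] || {
  echo "Config not found or unreadable: $CONFIG" >&2
  exit 1
}
```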
- Make variations configurable (with sensible defaults)
- Add validation for each input
- Build fallback chains for common variations
- Add a --dry-run or --test mode (sketched below)
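A minimal sketch of a `--dry-run` mode; `write_token`, `$SERVICE`, and `$ACCOUNT` are hypothetical names standing in for the tool's real write path:

```bash
#!/usr/bin/env bash
# --dry-run: print what would change instead of changing it.
DRY_RUN=false
[[ "${1:-}" == "--dry-run" ]] && DRY_RUN=true

write_token() {
  local token="$1"
  if $DRY_RUN; then
    echo "[dry-run] would update keychain entry: $SERVICE / $ACCOUNT"
  else
    # -U updates the entry in place if it already exists
    security add-generic-password -U -s "$SERVICE" -a "$ACCOUNT" -w "$token"
  fi
}
```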
- Test with correct config → should work
- Test with wrong config → should fall back or fail gracefully
- Test with missing data → should give a helpful error
- Test with multiple entries → should handle the ambiguity
- Document default assumptions
- Document how to verify the local setup
- Document common variations and how to handle them
- Include a data flow diagram
- Add a troubleshooting section
```bash
# Assumes single entry, no validation, no fallback
KEYCHAIN_DATA=$(security find-generic-password -s "Service" -w)
REFRESH_TOKEN=$(echo "$KEYCHAIN_DATA" | jq -r '.refreshToken')
# Use token (hope it's valid)
```

Problems:

- Returns the first alphabetical match (possibly the wrong entry)
- No validation (could be empty/malformed)
- No fallback (fails if the account name differs)
```bash
# Explicit account with validation and fallback
validate_data() {
  echo "$1" | jq -e '.claudeAiOauth.refreshToken' > /dev/null 2>&1
}

# Try the configured account first
DATA=$(security find-generic-password -s "$SERVICE" -a "$ACCOUNT" -w 2>&1)

if validate_data "$DATA"; then
  log "✅ Using account: $ACCOUNT"
else
  log "⚠️ Trying fallback accounts..."
  for fallback in "claude" "Claude Code" "default"; do
    DATA=$(security find-generic-password -s "$SERVICE" -a "$fallback" -w 2>&1)
    if validate_data "$DATA"; then
      ACCOUNT="$fallback"
      log "✅ Found data in: $fallback"
      break
    fi
  done
fi

if [[ -z "$DATA" ]] || ! validate_data "$DATA"; then
  error "No valid data found
Tried accounts: $ACCOUNT, claude, Claude Code, default
Verify with: security find-generic-password -l '$SERVICE'"
fi

REFRESH_TOKEN=$(echo "$DATA" | jq -r '.claudeAiOauth.refreshToken')
```

Improvements:

- ✅ Explicit account parameter
- ✅ Validates the data structure
- ✅ Automatic fallback to common names
- ✅ Helpful error with a verification command
```bash
FILE="/Users/patrick/.config/app.json"  # Hardcoded path
```

Fix: use $HOME, detect the OS, or make it configurable, as sketched below.
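A sketch of that fix; the `APP_CONFIG` override and the per-OS defaults are illustrative choices, not the original script's behavior:

```bash
# Derive the path instead of hardcoding one user's home directory:
# explicit override first, then an OS-appropriate default under $HOME.
case "$(uname -s)" in
  Darwin) DEFAULT_DIR="$HOME/Library/Application Support" ;;
  *)      DEFAULT_DIR="${XDG_CONFIG_HOME:-$HOME/.config}" ;;
esac
FILE="${APP_CONFIG:-$DEFAULT_DIR/app.json}"
```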
```bash
TOKEN=$(cat config.json | jq -r '.token')
# What if .token doesn't exist? jq prints "null" and the script continues
```

Fix: validate before using.

```bash
TOKEN=$(cat config.json | jq -r '.token // empty')
[[ -z "$TOKEN" ]] && error "No token in config"
```
```bash
# If multiple entries exist, which one?
ENTRY=$(find_entry "service")
```

Fix: be explicit, or enumerate all.

```bash
ENTRY=$(find_entry "service" "account")  # Specific
# OR
ALL=$(find_all_entries "service")
for entry in $ALL; do
  validate_and_use "$entry"
done
```
```bash
process_data || true  # Ignore errors
```

Fix: fail loudly with context.

```bash
process_data || error "Failed to process
Data: $DATA
Expected: { ... }
Check: command_to_verify"
```
When building new skills:

- List what varies between devices
- Make it configurable or auto-discoverable
- Test with wrong config
- Document troubleshooting
Before writing code:

- What varies between devices?
- How do I prove this works?
- What happens when it breaks?

Mandatory patterns:

- Explicit over implicit
- Validate before use
- Fallback chains
- Helpful errors

Testing:

- Correct config → works
- Wrong config → fallback or helpful error
- Missing data → clear diagnostic

Documentation:

- Data flow diagram
- Common variations
- Troubleshooting guide
A tool is portable when:

- ✅ It works on different devices without modification
- ✅ It auto-discovers common variations in setup
- ✅ It fails gracefully with actionable error messages
- ✅ It can be debugged by reading the error output
- ✅ The documentation covers "what if my setup differs?"

Test: give it to someone with a different setup. If they need to ask you questions, the tool isn't portable yet.
This methodology emerged from debugging the OAuth refresher (2026-01-23):

- The script read the wrong keychain entry (it didn't specify an account)
- It assumed a single entry existed (multiple did)
- No validation (it used empty data)
- No fallback (it failed on different account names)

Patrick's approach:

- Asked for exact data (how many entries, which has tokens)
- Demanded proof (show BEFORE/AFTER tokens)
- Thought cross-device (what if naming differs?)

Result: the tool went from single-device and broken to universal and production-ready.

Key insight: the bugs weren't in the logic - they were in the assumptions.
Use when:

- Building tools that read system configuration
- Working with keychains, credentials, or environment variables
- Creating scripts that run on multiple machines
- Publishing skills to ClawdHub (others will use them)

Apply:

- Before implementing: answer the three questions
- During implementation: use the mandatory patterns
- Before testing: run the pre-flight checklist
- After testing: document variations and troubleshooting

Remember: your device is just one case. Build for the general case.