{
  "schemaVersion": "1.0",
  "item": {
    "slug": "openclaw-plus",
    "name": "openclaw-plus",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/Shindo957-Official/openclaw-plus",
    "canonicalUrl": "https://clawhub.ai/Shindo957-Official/openclaw-plus",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/openclaw-plus",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclaw-plus",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "CHANGELOG.md",
      "LICENSE.txt",
      "manifest.json",
      "PUBLISHING.md",
      "QUICKSTART.md",
      "README.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "slug": "openclaw-plus",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-12T11:28:59.398Z",
      "expiresAt": "2026-05-19T11:28:59.398Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclaw-plus",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=openclaw-plus",
        "contentDisposition": "attachment; filename=\"openclaw-plus-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "openclaw-plus"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/openclaw-plus"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/openclaw-plus",
    "agentPageUrl": "https://openagent3.xyz/skills/openclaw-plus/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclaw-plus/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclaw-plus/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "OpenClaw+ 🚀",
        "body": "A modular super-skill that combines essential developer tools and web capabilities into a unified, powerful workflow."
      },
      {
        "title": "Overview",
        "body": "OpenClaw+ integrates seven core capabilities into one streamlined skill:\n\nDeveloper Skills:\n\nrun_python - Execute Python code with proper environment management\ngit_status - Check repository status and track changes\ngit_commit - Commit changes with meaningful messages\ninstall_package - Install Python packages with dependency handling\n\nWeb Skills:\n\nfetch_url - Retrieve web content with robust error handling\ncall_api - Make API requests with authentication and response parsing\n\nThis modular design allows you to chain operations efficiently - install packages, run code, fetch data, commit results - all in one cohesive workflow."
      },
      {
        "title": "When to Use OpenClaw+",
        "body": "Use this skill when the user's request involves:\n\nRunning Python scripts or code snippets\nInstalling Python packages (pip, conda, system packages)\nChecking git repository status\nCommitting code changes\nFetching content from URLs\nMaking API calls (REST, GraphQL, etc.)\nCombining any of the above in a workflow\n\nCommon patterns:\n\n\"Install pandas and run this analysis\"\n\"Fetch data from this API and save it\"\n\"Check git status and commit my changes\"\n\"Run this script and call this endpoint\"\n\"Install these packages, run the code, then commit\""
      },
      {
        "title": "1. Python Execution (run_python)",
        "body": "Execute Python code with proper environment management and output capture.\n\nKey features:\n\nCaptures stdout, stderr, and return values\nHandles exceptions gracefully\nSupports multi-line scripts\nAccess to installed packages\nEnvironment variable support\n\nUsage patterns:\n\n# Simple execution\nresult = run_python(\"print('Hello, world!')\")\n\n# With installed packages\nrun_python(\"\"\"\nimport pandas as pd\nimport numpy as np\n\ndata = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})\nprint(data.describe())\n\"\"\")\n\n# File operations\nrun_python(\"\"\"\nwith open('output.txt', 'w') as f:\n    f.write('Results: ...')\n\"\"\")\n\nBest practices:\n\nAlways check for syntax errors before execution\nHandle file paths carefully (use absolute paths when needed)\nCapture exceptions and provide clear error messages\nFor large scripts, consider creating a .py file first"
      },
      {
        "title": "2. Package Installation (install_package)",
        "body": "Install Python packages with intelligent dependency resolution.\n\nKey features:\n\nPip package installation\nSystem package support (apt, brew, etc.)\nConda environment support\nDependency conflict detection\nVersion pinning\n\nUsage patterns:\n\n# Install single package\ninstall_package(\"pandas\")\n\n# Install specific version\ninstall_package(\"numpy==1.24.0\")\n\n# Install multiple packages\ninstall_package(\"requests beautifulsoup4 lxml\")\n\n# Install from requirements.txt\ninstall_package(\"-r requirements.txt\")\n\n# System packages (when needed)\ninstall_package(\"libpq-dev\", system=True)\n\nBest practices:\n\nAlways use --break-system-packages flag for pip in this environment\nCheck if package is already installed before installing\nHandle version conflicts explicitly\nProvide clear feedback on installation success/failure\n\nImplementation:\n\npip install <package> --break-system-packages"
      },
      {
        "title": "3. Git Status (git_status)",
        "body": "Check repository status and track changes.\n\nKey features:\n\nShows modified, added, deleted files\nDisplays untracked files\nShows current branch\nIndicates if ahead/behind remote\nSupports custom git directories\n\nUsage patterns:\n\n# Check current directory\ngit_status()\n\n# Check specific directory\ngit_status(\"/path/to/repo\")\n\n# Parse output for automation\nstatus = git_status()\nif \"modified:\" in status:\n    print(\"Changes detected\")\n\nBest practices:\n\nAlways check status before committing\nParse output to detect specific changes\nHandle cases where directory isn't a git repo\nProvide context about what changed\n\nImplementation:\n\ngit status\ngit diff --stat\ngit log -1 --oneline"
      },
      {
        "title": "4. Git Commit (git_commit)",
        "body": "Commit changes with meaningful messages following best practices.\n\nKey features:\n\nConventional commit format support\nMulti-line commit messages\nAutomatic staging option\nCommit message validation\nAmend support\n\nUsage patterns:\n\n# Simple commit\ngit_commit(\"Add new feature\")\n\n# Conventional commit\ngit_commit(\"feat: add user authentication\")\n\n# Multi-line with description\ngit_commit(\"\"\"\nfeat: add data processing pipeline\n\n- Implement CSV reader\n- Add data validation\n- Create output formatter\n\"\"\")\n\n# Stage and commit\ngit_commit(\"fix: resolve parsing error\", stage_all=True)\n\nBest practices:\n\nUse conventional commit format: type(scope): description\nTypes: feat, fix, docs, style, refactor, test, chore\nKeep first line under 50 characters\nAdd detailed description if needed\nReference issue numbers when applicable\n\nImplementation:\n\ngit add <files>  # if stage_all\ngit commit -m \"<message>\"\ngit log -1 --oneline  # confirm commit"
      },
      {
        "title": "5. URL Fetching (fetch_url)",
        "body": "Retrieve content from URLs with robust error handling.\n\nKey features:\n\nHTTP/HTTPS support\nCustom headers\nAuthentication support\nRedirect following\nTimeout handling\nResponse parsing (JSON, XML, HTML, text)\n\nUsage patterns:\n\n# Fetch HTML\nhtml = fetch_url(\"https://example.com\")\n\n# Fetch JSON\ndata = fetch_url(\"https://api.example.com/data\", \n                 parse_json=True)\n\n# With authentication\ncontent = fetch_url(\"https://api.example.com/protected\",\n                    headers={\"Authorization\": \"Bearer TOKEN\"})\n\n# With custom timeout\ncontent = fetch_url(\"https://slow-site.com\", timeout=30)\n\n# POST request\nresponse = fetch_url(\"https://api.example.com/submit\",\n                     method=\"POST\",\n                     data={\"key\": \"value\"})\n\nBest practices:\n\nAlways handle network errors gracefully\nSet appropriate timeouts\nValidate URLs before fetching\nParse response based on content type\nHandle rate limiting\nRespect robots.txt\n\nImplementation:\n\nimport requests\n\nresponse = requests.get(url, headers=headers, timeout=timeout)\nresponse.raise_for_status()\nreturn response.text  # or response.json()"
      },
      {
        "title": "6. API Calls (call_api)",
        "body": "Make API requests with authentication and response parsing.\n\nKey features:\n\nREST API support\nGraphQL support\nAuthentication (Bearer, Basic, API Key)\nRequest/response logging\nError handling with retries\nResponse validation\n\nUsage patterns:\n\n# Simple GET request\ndata = call_api(\"https://api.example.com/users\")\n\n# With authentication\ndata = call_api(\"https://api.example.com/data\",\n                auth_token=\"your-token\")\n\n# POST with JSON body\nresult = call_api(\"https://api.example.com/create\",\n                  method=\"POST\",\n                  json_data={\"name\": \"John\", \"age\": 30})\n\n# With custom headers\ndata = call_api(\"https://api.example.com/endpoint\",\n                headers={\"X-Custom-Header\": \"value\"})\n\n# GraphQL query\nresult = call_api(\"https://api.example.com/graphql\",\n                  method=\"POST\",\n                  json_data={\n                      \"query\": \"{ users { id name } }\"\n                  })\n\nBest practices:\n\nValidate API keys/tokens before use\nHandle rate limits with exponential backoff\nParse response format (JSON, XML, etc.)\nLog requests for debugging\nHandle pagination for large datasets\nValidate response schemas\nUse appropriate HTTP methods (GET, POST, PUT, DELETE, PATCH)\n\nImplementation:\n\nimport requests\n\nheaders = {\"Authorization\": f\"Bearer {token}\"}\nresponse = requests.request(\n    method=method,\n    url=url,\n    headers=headers,\n    json=json_data,\n    timeout=30\n)\nresponse.raise_for_status()\nreturn response.json()"
      },
      {
        "title": "Workflow Patterns",
        "body": "OpenClaw+ shines when combining multiple capabilities:"
      },
      {
        "title": "Pattern 1: Data Pipeline",
        "body": "# 1. Install dependencies\ninstall_package(\"pandas requests\")\n\n# 2. Fetch data from API\ndata = call_api(\"https://api.example.com/dataset\")\n\n# 3. Process with Python\nrun_python(\"\"\"\nimport pandas as pd\nimport json\n\nwith open('raw_data.json', 'r') as f:\n    data = json.load(f)\n\ndf = pd.DataFrame(data)\ndf_cleaned = df.dropna()\ndf_cleaned.to_csv('cleaned_data.csv', index=False)\nprint(f'Processed {len(df_cleaned)} records')\n\"\"\")\n\n# 4. Commit results\ngit_commit(\"feat: add cleaned dataset\")"
      },
      {
        "title": "Pattern 2: Web Scraping & Analysis",
        "body": "# 1. Install scraping tools\ninstall_package(\"beautifulsoup4 lxml requests\")\n\n# 2. Fetch webpage\nhtml = fetch_url(\"https://example.com/data-page\")\n\n# 3. Parse and analyze\nrun_python(\"\"\"\nfrom bs4 import BeautifulSoup\nimport json\n\nwith open('page.html', 'r') as f:\n    soup = BeautifulSoup(f, 'lxml')\n\ndata = []\nfor item in soup.find_all('div', class_='data-item'):\n    data.append({\n        'title': item.find('h2').text,\n        'value': item.find('span', class_='value').text\n    })\n\nwith open('scraped_data.json', 'w') as f:\n    json.dump(data, f, indent=2)\n\"\"\")\n\n# 4. Check and commit\ngit_status()\ngit_commit(\"chore: update scraped data\")"
      },
      {
        "title": "Pattern 3: API Integration Testing",
        "body": "# 1. Install testing tools\ninstall_package(\"pytest requests-mock\")\n\n# 2. Run tests\nrun_python(\"\"\"\nimport requests\nimport json\n\n# Test API endpoint\nresponse = requests.get('https://api.example.com/health')\nassert response.status_code == 200\n\n# Test with authentication\nheaders = {'Authorization': 'Bearer test-token'}\nresponse = requests.get('https://api.example.com/data', headers=headers)\nprint(f'Status: {response.status_code}')\nprint(f'Data: {response.json()}')\n\"\"\")\n\n# 3. Commit test results\ngit_commit(\"test: add API integration tests\")"
      },
      {
        "title": "Pattern 4: Automated Reporting",
        "body": "# 1. Fetch data from multiple sources\napi_data = call_api(\"https://api.example.com/metrics\")\nweb_data = fetch_url(\"https://example.com/reports/latest\")\n\n# 2. Process and generate report\ninstall_package(\"matplotlib pandas\")\nrun_python(\"\"\"\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport json\n\nwith open('api_data.json', 'r') as f:\n    data = json.load(f)\n\ndf = pd.DataFrame(data)\ndf['date'] = pd.to_datetime(df['date'])\n\nplt.figure(figsize=(10, 6))\nplt.plot(df['date'], df['value'])\nplt.title('Metrics Over Time')\nplt.savefig('report.png')\nprint('Report generated')\n\"\"\")\n\n# 3. Commit report\ngit_commit(\"docs: add automated metrics report\")"
      },
      {
        "title": "Error Handling",
        "body": "Each capability includes robust error handling:"
      },
      {
        "title": "Python Execution Errors",
        "body": "try:\n    result = run_python(code)\nexcept SyntaxError as e:\n    print(f\"Syntax error: {e}\")\nexcept RuntimeError as e:\n    print(f\"Runtime error: {e}\")"
      },
      {
        "title": "Package Installation Errors",
        "body": "# Handle already installed\nif package_installed(\"pandas\"):\n    print(\"Package already installed\")\nelse:\n    install_package(\"pandas\")\n\n# Handle installation failure\ntry:\n    install_package(\"nonexistent-package\")\nexcept Exception as e:\n    print(f\"Installation failed: {e}\")"
      },
      {
        "title": "Git Operation Errors",
        "body": "# Not a git repository\nif not is_git_repo():\n    print(\"Not a git repository\")\n    exit(1)\n\n# Nothing to commit\nstatus = git_status()\nif \"nothing to commit\" in status:\n    print(\"No changes to commit\")"
      },
      {
        "title": "Network Errors",
        "body": "# Handle timeouts\ntry:\n    data = fetch_url(url, timeout=5)\nexcept TimeoutError:\n    print(\"Request timed out\")\n\n# Handle HTTP errors\ntry:\n    response = call_api(url)\nexcept requests.HTTPError as e:\n    print(f\"HTTP error: {e.response.status_code}\")"
      },
      {
        "title": "1. Environment Management",
        "body": "Always use --break-system-packages for pip\nCheck if packages are installed before installing\nUse virtual environments when appropriate\nDocument package versions"
      },
      {
        "title": "2. Git Operations",
        "body": "Check status before committing\nUse meaningful commit messages\nFollow conventional commit format\nStage only relevant files"
      },
      {
        "title": "3. Code Execution",
        "body": "Validate syntax before running\nHandle exceptions gracefully\nCapture and log output\nClean up temporary files"
      },
      {
        "title": "4. API/Web Requests",
        "body": "Set appropriate timeouts\nHandle rate limiting\nValidate responses\nLog requests for debugging\nRespect API usage limits"
      },
      {
        "title": "5. Workflow Composition",
        "body": "Chain operations logically\nHandle errors at each step\nProvide progress feedback\nDocument dependencies"
      },
      {
        "title": "API Keys & Credentials",
        "body": "Never hardcode credentials\nUse environment variables\nValidate before use\nRotate regularly"
      },
      {
        "title": "Code Execution",
        "body": "Validate input code\nSandbox when possible\nLimit resource usage\nMonitor execution"
      },
      {
        "title": "Web Requests",
        "body": "Validate URLs\nUse HTTPS when possible\nHandle redirects carefully\nRespect robots.txt"
      },
      {
        "title": "Common Issues",
        "body": "Python execution fails:\n\nCheck syntax with python -m py_compile script.py\nVerify packages are installed\nCheck file paths\nReview error messages\n\nPackage installation fails:\n\nEnsure pip is up to date\nCheck internet connectivity\nVerify package name\nReview dependencies\n\nGit operations fail:\n\nVerify it's a git repository\nCheck file permissions\nEnsure clean working directory\nReview git configuration\n\nAPI/URL requests fail:\n\nVerify URL is correct\nCheck authentication\nReview rate limits\nCheck network connectivity"
      },
      {
        "title": "Example 1: Complete Data Pipeline",
        "body": "# User request: \"Fetch weather data, analyze it, and commit results\"\n\n# Step 1: Install dependencies\ninstall_package(\"requests pandas matplotlib\")\n\n# Step 2: Fetch data\nweather_data = call_api(\n    \"https://api.weather.com/data\",\n    auth_token=\"your-api-key\"\n)\n\n# Step 3: Save and analyze\nrun_python(\"\"\"\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport json\n\n# Load data\nwith open('weather_data.json', 'r') as f:\n    data = json.load(f)\n\n# Create DataFrame\ndf = pd.DataFrame(data['forecast'])\ndf['date'] = pd.to_datetime(df['date'])\n\n# Analyze\navg_temp = df['temperature'].mean()\nmax_temp = df['temperature'].max()\nmin_temp = df['temperature'].min()\n\n# Generate plot\nplt.figure(figsize=(12, 6))\nplt.plot(df['date'], df['temperature'], marker='o')\nplt.title('Temperature Forecast')\nplt.xlabel('Date')\nplt.ylabel('Temperature (°F)')\nplt.grid(True)\nplt.savefig('temperature_forecast.png')\n\n# Save summary\nsummary = {\n    'avg_temp': avg_temp,\n    'max_temp': max_temp,\n    'min_temp': min_temp,\n    'records': len(df)\n}\n\nwith open('weather_summary.json', 'w') as f:\n    json.dump(summary, f, indent=2)\n\nprint(f'Analysis complete: {len(df)} records processed')\nprint(f'Average temperature: {avg_temp:.1f}°F')\n\"\"\")\n\n# Step 4: Commit results\ngit_status()\ngit_commit(\"\"\"\nfeat: add weather data analysis\n\n- Fetch 7-day forecast from API\n- Generate temperature plot\n- Create summary statistics\n\"\"\")"
      },
      {
        "title": "Example 2: Web Scraping & Storage",
        "body": "# User request: \"Scrape product data and save to database\"\n\n# Step 1: Install tools\ninstall_package(\"beautifulsoup4 lxml requests sqlite3\")\n\n# Step 2: Fetch webpage\nhtml = fetch_url(\"https://example-shop.com/products\")\n\n# Step 3: Parse and store\nrun_python(\"\"\"\nfrom bs4 import BeautifulSoup\nimport sqlite3\nimport json\n\n# Parse HTML\nwith open('products.html', 'r') as f:\n    soup = BeautifulSoup(f, 'lxml')\n\nproducts = []\nfor item in soup.find_all('div', class_='product'):\n    product = {\n        'name': item.find('h3').text.strip(),\n        'price': float(item.find('span', class_='price').text.strip('$')),\n        'rating': float(item.find('span', class_='rating').text),\n        'url': item.find('a')['href']\n    }\n    products.append(product)\n\n# Store in SQLite\nconn = sqlite3.connect('products.db')\ncursor = conn.cursor()\n\ncursor.execute('''\n    CREATE TABLE IF NOT EXISTS products (\n        id INTEGER PRIMARY KEY,\n        name TEXT,\n        price REAL,\n        rating REAL,\n        url TEXT\n    )\n''')\n\nfor p in products:\n    cursor.execute('''\n        INSERT INTO products (name, price, rating, url)\n        VALUES (?, ?, ?, ?)\n    ''', (p['name'], p['price'], p['rating'], p['url']))\n\nconn.commit()\nconn.close()\n\nprint(f'Scraped and stored {len(products)} products')\n\"\"\")\n\n# Step 4: Commit\ngit_commit(\"chore: update product database\")"
      },
      {
        "title": "Example 3: API Testing Suite",
        "body": "# User request: \"Test our API endpoints and generate report\"\n\n# Step 1: Install testing framework\ninstall_package(\"pytest requests pytest-html\")\n\n# Step 2: Create test file and run\nrun_python(\"\"\"\nimport requests\nimport json\nfrom datetime import datetime\n\nBASE_URL = \"https://api.example.com\"\nresults = []\n\n# Test 1: Health check\ntry:\n    response = requests.get(f\"{BASE_URL}/health\")\n    results.append({\n        'test': 'Health Check',\n        'status': response.status_code,\n        'passed': response.status_code == 200,\n        'response_time': response.elapsed.total_seconds()\n    })\nexcept Exception as e:\n    results.append({\n        'test': 'Health Check',\n        'status': 'Error',\n        'passed': False,\n        'error': str(e)\n    })\n\n# Test 2: Authentication\ntry:\n    headers = {'Authorization': 'Bearer test-token'}\n    response = requests.get(f\"{BASE_URL}/auth/validate\", headers=headers)\n    results.append({\n        'test': 'Authentication',\n        'status': response.status_code,\n        'passed': response.status_code == 200,\n        'response_time': response.elapsed.total_seconds()\n    })\nexcept Exception as e:\n    results.append({\n        'test': 'Authentication',\n        'status': 'Error',\n        'passed': False,\n        'error': str(e)\n    })\n\n# Test 3: Data retrieval\ntry:\n    response = requests.get(f\"{BASE_URL}/data/users\")\n    data = response.json()\n    results.append({\n        'test': 'Data Retrieval',\n        'status': response.status_code,\n        'passed': response.status_code == 200 and len(data) > 0,\n        'records': len(data) if response.status_code == 200 else 0,\n        'response_time': response.elapsed.total_seconds()\n    })\nexcept Exception as e:\n    results.append({\n        'test': 'Data Retrieval',\n        'status': 'Error',\n        'passed': False,\n        'error': str(e)\n    })\n\n# Generate report\nreport = {\n    'timestamp': 
datetime.now().isoformat(),\n    'total_tests': len(results),\n    'passed': sum(1 for r in results if r.get('passed')),\n    'failed': sum(1 for r in results if not r.get('passed')),\n    'results': results\n}\n\nwith open('api_test_report.json', 'w') as f:\n    json.dump(report, f, indent=2)\n\nprint(f\"Tests complete: {report['passed']}/{report['total_tests']} passed\")\nfor r in results:\n    status = '✓' if r.get('passed') else '✗'\n    print(f\"{status} {r['test']}\")\n\"\"\")\n\n# Step 3: Check and commit\ngit_status()\ngit_commit(\"test: add API endpoint tests\")"
      },
      {
        "title": "Integration with Other Skills",
        "body": "OpenClaw+ works seamlessly with other skills:"
      },
      {
        "title": "With docx skill:",
        "body": "# Generate data, then create report\ncall_api(\"https://api.example.com/stats\")\nrun_python(\"process_stats.py\")\n# Then use docx skill to create formatted report"
      },
      {
        "title": "With xlsx skill:",
        "body": "# Fetch data, process with Python, export to Excel\nfetch_url(\"https://data-source.com/raw.csv\")\nrun_python(\"clean_and_transform.py\")\n# Then use xlsx skill to create formatted spreadsheet"
      },
      {
        "title": "With pptx skill:",
        "body": "# Generate charts and data visualizations\ninstall_package(\"matplotlib seaborn\")\nrun_python(\"generate_charts.py\")\n# Then use pptx skill to create presentation"
      },
      {
        "title": "Python Execution",
        "body": "run_python(code_string)"
      },
      {
        "title": "Package Management",
        "body": "install_package(\"package_name\")\ninstall_package(\"package==1.0.0\")\ninstall_package(\"-r requirements.txt\")"
      },
      {
        "title": "Git Operations",
        "body": "git_status()\ngit_commit(\"message\")\ngit_commit(\"message\", stage_all=True)"
      },
      {
        "title": "Web Requests",
        "body": "fetch_url(url, timeout=30)\ncall_api(url, method=\"GET\", auth_token=\"token\")"
      },
      {
        "title": "Conclusion",
        "body": "OpenClaw+ provides a unified, powerful toolkit for development and web automation workflows. By combining Python execution, package management, git operations, and web capabilities, it enables complex multi-step workflows with a single cohesive skill.\n\nKey strengths:\n\n✅ Modular design - use only what you need\n✅ Error handling - robust failure recovery\n✅ Workflow composition - chain operations easily\n✅ Production-ready - follows best practices\n✅ Well-documented - clear examples and patterns\n\nUse OpenClaw+ whenever your task involves code execution, package management, version control, or web interactions - or any combination thereof!"
      }
    ],
    "body": "OpenClaw+ 🚀\n\nA modular super-skill that combines essential developer tools and web capabilities into a unified, powerful workflow.\n\nOverview\n\nOpenClaw+ integrates seven core capabilities into one streamlined skill:\n\nDeveloper Skills:\n\nrun_python - Execute Python code with proper environment management\ngit_status - Check repository status and track changes\ngit_commit - Commit changes with meaningful messages\ninstall_package - Install Python packages with dependency handling\n\nWeb Skills:\n\nfetch_url - Retrieve web content with robust error handling\ncall_api - Make API requests with authentication and response parsing\n\nThis modular design allows you to chain operations efficiently - install packages, run code, fetch data, commit results - all in one cohesive workflow.\n\nWhen to Use OpenClaw+\n\nUse this skill when the user's request involves:\n\nRunning Python scripts or code snippets\nInstalling Python packages (pip, conda, system packages)\nChecking git repository status\nCommitting code changes\nFetching content from URLs\nMaking API calls (REST, GraphQL, etc.)\nCombining any of the above in a workflow\n\nCommon patterns:\n\n\"Install pandas and run this analysis\"\n\"Fetch data from this API and save it\"\n\"Check git status and commit my changes\"\n\"Run this script and call this endpoint\"\n\"Install these packages, run the code, then commit\"\nCore Capabilities\n1. 
Python Execution (run_python)\n\nExecute Python code with proper environment management and output capture.\n\nKey features:\n\nCaptures stdout, stderr, and return values\nHandles exceptions gracefully\nSupports multi-line scripts\nAccess to installed packages\nEnvironment variable support\n\nUsage patterns:\n\n# Simple execution\nresult = run_python(\"print('Hello, world!')\")\n\n# With installed packages\nrun_python(\"\"\"\nimport pandas as pd\nimport numpy as np\n\ndata = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})\nprint(data.describe())\n\"\"\")\n\n# File operations\nrun_python(\"\"\"\nwith open('output.txt', 'w') as f:\n    f.write('Results: ...')\n\"\"\")\n\n\nBest practices:\n\nAlways check for syntax errors before execution\nHandle file paths carefully (use absolute paths when needed)\nCapture exceptions and provide clear error messages\nFor large scripts, consider creating a .py file first\n2. Package Installation (install_package)\n\nInstall Python packages with intelligent dependency resolution.\n\nKey features:\n\nPip package installation\nSystem package support (apt, brew, etc.)\nConda environment support\nDependency conflict detection\nVersion pinning\n\nUsage patterns:\n\n# Install single package\ninstall_package(\"pandas\")\n\n# Install specific version\ninstall_package(\"numpy==1.24.0\")\n\n# Install multiple packages\ninstall_package(\"requests beautifulsoup4 lxml\")\n\n# Install from requirements.txt\ninstall_package(\"-r requirements.txt\")\n\n# System packages (when needed)\ninstall_package(\"libpq-dev\", system=True)\n\n\nBest practices:\n\nAlways use --break-system-packages flag for pip in this environment\nCheck if package is already installed before installing\nHandle version conflicts explicitly\nProvide clear feedback on installation success/failure\n\nImplementation:\n\npip install <package> --break-system-packages\n\n3. 
Git Status (git_status)\n\nCheck repository status and track changes.\n\nKey features:\n\nShows modified, added, deleted files\nDisplays untracked files\nShows current branch\nIndicates if ahead/behind remote\nSupports custom git directories\n\nUsage patterns:\n\n# Check current directory\ngit_status()\n\n# Check specific directory\ngit_status(\"/path/to/repo\")\n\n# Parse output for automation\nstatus = git_status()\nif \"modified:\" in status:\n    print(\"Changes detected\")\n\n\nBest practices:\n\nAlways check status before committing\nParse output to detect specific changes\nHandle cases where directory isn't a git repo\nProvide context about what changed\n\nImplementation:\n\ngit status\ngit diff --stat\ngit log -1 --oneline\n\n4. Git Commit (git_commit)\n\nCommit changes with meaningful messages following best practices.\n\nKey features:\n\nConventional commit format support\nMulti-line commit messages\nAutomatic staging option\nCommit message validation\nAmend support\n\nUsage patterns:\n\n# Simple commit\ngit_commit(\"Add new feature\")\n\n# Conventional commit\ngit_commit(\"feat: add user authentication\")\n\n# Multi-line with description\ngit_commit(\"\"\"\nfeat: add data processing pipeline\n\n- Implement CSV reader\n- Add data validation\n- Create output formatter\n\"\"\")\n\n# Stage and commit\ngit_commit(\"fix: resolve parsing error\", stage_all=True)\n\n\nBest practices:\n\nUse conventional commit format: type(scope): description\nTypes: feat, fix, docs, style, refactor, test, chore\nKeep first line under 50 characters\nAdd detailed description if needed\nReference issue numbers when applicable\n\nImplementation:\n\ngit add <files>  # if stage_all\ngit commit -m \"<message>\"\ngit log -1 --oneline  # confirm commit\n\n5. 
URL Fetching (fetch_url)\n\nRetrieve content from URLs with robust error handling.\n\nKey features:\n\nHTTP/HTTPS support\nCustom headers\nAuthentication support\nRedirect following\nTimeout handling\nResponse parsing (JSON, XML, HTML, text)\n\nUsage patterns:\n\n# Fetch HTML\nhtml = fetch_url(\"https://example.com\")\n\n# Fetch JSON\ndata = fetch_url(\"https://api.example.com/data\", \n                 parse_json=True)\n\n# With authentication\ncontent = fetch_url(\"https://api.example.com/protected\",\n                    headers={\"Authorization\": \"Bearer TOKEN\"})\n\n# With custom timeout\ncontent = fetch_url(\"https://slow-site.com\", timeout=30)\n\n# POST request\nresponse = fetch_url(\"https://api.example.com/submit\",\n                     method=\"POST\",\n                     data={\"key\": \"value\"})\n\n\nBest practices:\n\nAlways handle network errors gracefully\nSet appropriate timeouts\nValidate URLs before fetching\nParse response based on content type\nHandle rate limiting\nRespect robots.txt\n\nImplementation:\n\nimport requests\n\nresponse = requests.get(url, headers=headers, timeout=timeout)\nresponse.raise_for_status()\nreturn response.text  # or response.json()\n\n6. 
API Calls (call_api)\n\nMake API requests with authentication and response parsing.\n\nKey features:\n\nREST API support\nGraphQL support\nAuthentication (Bearer, Basic, API Key)\nRequest/response logging\nError handling with retries\nResponse validation\n\nUsage patterns:\n\n# Simple GET request\ndata = call_api(\"https://api.example.com/users\")\n\n# With authentication\ndata = call_api(\"https://api.example.com/data\",\n                auth_token=\"your-token\")\n\n# POST with JSON body\nresult = call_api(\"https://api.example.com/create\",\n                  method=\"POST\",\n                  json_data={\"name\": \"John\", \"age\": 30})\n\n# With custom headers\ndata = call_api(\"https://api.example.com/endpoint\",\n                headers={\"X-Custom-Header\": \"value\"})\n\n# GraphQL query\nresult = call_api(\"https://api.example.com/graphql\",\n                  method=\"POST\",\n                  json_data={\n                      \"query\": \"{ users { id name } }\"\n                  })\n\n\nBest practices:\n\nValidate API keys/tokens before use\nHandle rate limits with exponential backoff\nParse response format (JSON, XML, etc.)\nLog requests for debugging\nHandle pagination for large datasets\nValidate response schemas\nUse appropriate HTTP methods (GET, POST, PUT, DELETE, PATCH)\n\nImplementation:\n\nimport requests\n\nheaders = {\"Authorization\": f\"Bearer {token}\"}\nresponse = requests.request(\n    method=method,\n    url=url,\n    headers=headers,\n    json=json_data,\n    timeout=30\n)\nresponse.raise_for_status()\nreturn response.json()\n\nWorkflow Patterns\n\nOpenClaw+ shines when combining multiple capabilities:\n\nPattern 1: Data Pipeline\n# 1. Install dependencies\ninstall_package(\"pandas requests\")\n\n# 2. Fetch data from API\ndata = call_api(\"https://api.example.com/dataset\")\n\n# 3. 
Process with Python\nrun_python(\"\"\"\nimport pandas as pd\nimport json\n\n# (assumes the API response from step 2 was saved to raw_data.json)\nwith open('raw_data.json', 'r') as f:\n    data = json.load(f)\n\ndf = pd.DataFrame(data)\ndf_cleaned = df.dropna()\ndf_cleaned.to_csv('cleaned_data.csv', index=False)\nprint(f'Processed {len(df_cleaned)} records')\n\"\"\")\n\n# 4. Commit results\ngit_commit(\"feat: add cleaned dataset\")\n\nPattern 2: Web Scraping & Analysis\n# 1. Install scraping tools\ninstall_package(\"beautifulsoup4 lxml requests\")\n\n# 2. Fetch webpage and save it as page.html\nhtml = fetch_url(\"https://example.com/data-page\")\n\n# 3. Parse and analyze\nrun_python(\"\"\"\nfrom bs4 import BeautifulSoup\nimport json\n\nwith open('page.html', 'r') as f:\n    soup = BeautifulSoup(f, 'lxml')\n\ndata = []\nfor item in soup.find_all('div', class_='data-item'):\n    data.append({\n        'title': item.find('h2').text,\n        'value': item.find('span', class_='value').text\n    })\n\nwith open('scraped_data.json', 'w') as f:\n    json.dump(data, f, indent=2)\n\"\"\")\n\n# 4. Check and commit\ngit_status()\ngit_commit(\"chore: update scraped data\")\n\nPattern 3: API Integration Testing\n# 1. Install testing tools\ninstall_package(\"pytest requests-mock\")\n\n# 2. Run tests\nrun_python(\"\"\"\nimport requests\nimport json\n\n# Test API endpoint\nresponse = requests.get('https://api.example.com/health')\nassert response.status_code == 200\n\n# Test with authentication\nheaders = {'Authorization': 'Bearer test-token'}\nresponse = requests.get('https://api.example.com/data', headers=headers)\nprint(f'Status: {response.status_code}')\nprint(f'Data: {response.json()}')\n\"\"\")\n\n# 3. Commit test results\ngit_commit(\"test: add API integration tests\")\n\nPattern 4: Automated Reporting\n# 1. Fetch data from multiple sources\napi_data = call_api(\"https://api.example.com/metrics\")\nweb_data = fetch_url(\"https://example.com/reports/latest\")\n\n# 2. 
Process and generate report\ninstall_package(\"matplotlib pandas\")\nrun_python(\"\"\"\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport json\n\n# (assumes the API response from step 1 was saved to api_data.json)\nwith open('api_data.json', 'r') as f:\n    data = json.load(f)\n\ndf = pd.DataFrame(data)\ndf['date'] = pd.to_datetime(df['date'])\n\nplt.figure(figsize=(10, 6))\nplt.plot(df['date'], df['value'])\nplt.title('Metrics Over Time')\nplt.savefig('report.png')\nprint('Report generated')\n\"\"\")\n\n# 3. Commit report\ngit_commit(\"docs: add automated metrics report\")\n\nError Handling\n\nEach capability includes robust error handling:\n\nPython Execution Errors\ntry:\n    result = run_python(code)\nexcept SyntaxError as e:\n    print(f\"Syntax error: {e}\")\nexcept RuntimeError as e:\n    print(f\"Runtime error: {e}\")\n\nPackage Installation Errors\n# Handle already installed\nif package_installed(\"pandas\"):\n    print(\"Package already installed\")\nelse:\n    install_package(\"pandas\")\n\n# Handle installation failure\ntry:\n    install_package(\"nonexistent-package\")\nexcept Exception as e:\n    print(f\"Installation failed: {e}\")\n\nGit Operation Errors\n# Not a git repository\nif not is_git_repo():\n    print(\"Not a git repository\")\n    exit(1)\n\n# Nothing to commit\nstatus = git_status()\nif \"nothing to commit\" in status:\n    print(\"No changes to commit\")\n\nNetwork Errors\n# Handle timeouts (fetch_url is requests-based)\ntry:\n    data = fetch_url(url, timeout=5)\nexcept requests.Timeout:\n    print(\"Request timed out\")\n\n# Handle HTTP errors\ntry:\n    response = call_api(url)\nexcept requests.HTTPError as e:\n    print(f\"HTTP error: {e.response.status_code}\")\n\nBest Practices\n1. Environment Management\nAlways use --break-system-packages for pip\nCheck if packages are installed before installing\nUse virtual environments when appropriate\nDocument package versions\n2. Git Operations\nCheck status before committing\nUse meaningful commit messages\nFollow conventional commit format\nStage only relevant files\n3. 
Code Execution\nValidate syntax before running\nHandle exceptions gracefully\nCapture and log output\nClean up temporary files\n4. API/Web Requests\nSet appropriate timeouts\nHandle rate limiting\nValidate responses\nLog requests for debugging\nRespect API usage limits\n5. Workflow Composition\nChain operations logically\nHandle errors at each step\nProvide progress feedback\nDocument dependencies\nSecurity Considerations\nAPI Keys & Credentials\nNever hardcode credentials\nUse environment variables\nValidate before use\nRotate regularly\nCode Execution\nValidate input code\nSandbox when possible\nLimit resource usage\nMonitor execution\nWeb Requests\nValidate URLs\nUse HTTPS when possible\nHandle redirects carefully\nRespect robots.txt\nDebugging & Troubleshooting\nCommon Issues\n\nPython execution fails:\n\nCheck syntax with python -m py_compile script.py\nVerify packages are installed\nCheck file paths\nReview error messages\n\nPackage installation fails:\n\nEnsure pip is up to date\nCheck internet connectivity\nVerify package name\nReview dependencies\n\nGit operations fail:\n\nVerify it's a git repository\nCheck file permissions\nEnsure clean working directory\nReview git configuration\n\nAPI/URL requests fail:\n\nVerify URL is correct\nCheck authentication\nReview rate limits\nCheck network connectivity\nExamples\nExample 1: Complete Data Pipeline\n# User request: \"Fetch weather data, analyze it, and commit results\"\n\n# Step 1: Install dependencies\ninstall_package(\"requests pandas matplotlib\")\n\n# Step 2: Fetch data and save it as weather_data.json\nweather_data = call_api(\n    \"https://api.weather.com/data\",\n    auth_token=\"your-api-key\"\n)\n\n# Step 3: Save and analyze\nrun_python(\"\"\"\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport json\n\n# Load data\nwith open('weather_data.json', 'r') as f:\n    data = json.load(f)\n\n# Create DataFrame\ndf = pd.DataFrame(data['forecast'])\ndf['date'] = pd.to_datetime(df['date'])\n\n# Analyze\navg_temp = 
df['temperature'].mean()\nmax_temp = df['temperature'].max()\nmin_temp = df['temperature'].min()\n\n# Generate plot\nplt.figure(figsize=(12, 6))\nplt.plot(df['date'], df['temperature'], marker='o')\nplt.title('Temperature Forecast')\nplt.xlabel('Date')\nplt.ylabel('Temperature (°F)')\nplt.grid(True)\nplt.savefig('temperature_forecast.png')\n\n# Save summary\nsummary = {\n    'avg_temp': avg_temp,\n    'max_temp': max_temp,\n    'min_temp': min_temp,\n    'records': len(df)\n}\n\nwith open('weather_summary.json', 'w') as f:\n    json.dump(summary, f, indent=2)\n\nprint(f'Analysis complete: {len(df)} records processed')\nprint(f'Average temperature: {avg_temp:.1f}°F')\n\"\"\")\n\n# Step 4: Commit results\ngit_status()\ngit_commit(\"\"\"\nfeat: add weather data analysis\n\n- Fetch 7-day forecast from API\n- Generate temperature plot\n- Create summary statistics\n\"\"\")\n\nExample 2: Web Scraping & Storage\n# User request: \"Scrape product data and save to database\"\n\n# Step 1: Install tools (sqlite3 ships with Python's standard library)\ninstall_package(\"beautifulsoup4 lxml requests\")\n\n# Step 2: Fetch webpage and save it as products.html\nhtml = fetch_url(\"https://example-shop.com/products\")\n\n# Step 3: Parse and store\nrun_python(\"\"\"\nfrom bs4 import BeautifulSoup\nimport sqlite3\nimport json\n\n# Parse HTML\nwith open('products.html', 'r') as f:\n    soup = BeautifulSoup(f, 'lxml')\n\nproducts = []\nfor item in soup.find_all('div', class_='product'):\n    product = {\n        'name': item.find('h3').text.strip(),\n        'price': float(item.find('span', class_='price').text.strip('$')),\n        'rating': float(item.find('span', class_='rating').text),\n        'url': item.find('a')['href']\n    }\n    products.append(product)\n\n# Store in SQLite\nconn = sqlite3.connect('products.db')\ncursor = conn.cursor()\n\ncursor.execute('''\n    CREATE TABLE IF NOT EXISTS products (\n        id INTEGER PRIMARY KEY,\n        name TEXT,\n        price REAL,\n        rating REAL,\n        url TEXT\n    )\n''')\n\nfor p in 
products:\n    cursor.execute('''\n        INSERT INTO products (name, price, rating, url)\n        VALUES (?, ?, ?, ?)\n    ''', (p['name'], p['price'], p['rating'], p['url']))\n\nconn.commit()\nconn.close()\n\nprint(f'Scraped and stored {len(products)} products')\n\"\"\")\n\n# Step 4: Commit\ngit_commit(\"chore: update product database\")\n\nExample 3: API Testing Suite\n# User request: \"Test our API endpoints and generate report\"\n\n# Step 1: Install testing framework\ninstall_package(\"pytest requests pytest-html\")\n\n# Step 2: Create test file and run\nrun_python(\"\"\"\nimport requests\nimport json\nfrom datetime import datetime\n\nBASE_URL = \"https://api.example.com\"\nresults = []\n\n# Test 1: Health check\ntry:\n    response = requests.get(f\"{BASE_URL}/health\")\n    results.append({\n        'test': 'Health Check',\n        'status': response.status_code,\n        'passed': response.status_code == 200,\n        'response_time': response.elapsed.total_seconds()\n    })\nexcept Exception as e:\n    results.append({\n        'test': 'Health Check',\n        'status': 'Error',\n        'passed': False,\n        'error': str(e)\n    })\n\n# Test 2: Authentication\ntry:\n    headers = {'Authorization': 'Bearer test-token'}\n    response = requests.get(f\"{BASE_URL}/auth/validate\", headers=headers)\n    results.append({\n        'test': 'Authentication',\n        'status': response.status_code,\n        'passed': response.status_code == 200,\n        'response_time': response.elapsed.total_seconds()\n    })\nexcept Exception as e:\n    results.append({\n        'test': 'Authentication',\n        'status': 'Error',\n        'passed': False,\n        'error': str(e)\n    })\n\n# Test 3: Data retrieval\ntry:\n    response = requests.get(f\"{BASE_URL}/data/users\")\n    data = response.json()\n    results.append({\n        'test': 'Data Retrieval',\n        'status': response.status_code,\n        'passed': response.status_code == 200 and len(data) > 0,\n      
  'records': len(data) if response.status_code == 200 else 0,\n        'response_time': response.elapsed.total_seconds()\n    })\nexcept Exception as e:\n    results.append({\n        'test': 'Data Retrieval',\n        'status': 'Error',\n        'passed': False,\n        'error': str(e)\n    })\n\n# Generate report\nreport = {\n    'timestamp': datetime.now().isoformat(),\n    'total_tests': len(results),\n    'passed': sum(1 for r in results if r.get('passed')),\n    'failed': sum(1 for r in results if not r.get('passed')),\n    'results': results\n}\n\nwith open('api_test_report.json', 'w') as f:\n    json.dump(report, f, indent=2)\n\nprint(f\"Tests complete: {report['passed']}/{report['total_tests']} passed\")\nfor r in results:\n    status = '✓' if r.get('passed') else '✗'\n    print(f\"{status} {r['test']}\")\n\"\"\")\n\n# Step 3: Check and commit\ngit_status()\ngit_commit(\"test: add API endpoint tests\")\n\nIntegration with Other Skills\n\nOpenClaw+ works seamlessly with other skills:\n\nWith docx skill:\n# Generate data, then create report\ncall_api(\"https://api.example.com/stats\")\nrun_python(\"process_stats.py\")\n# Then use docx skill to create formatted report\n\nWith xlsx skill:\n# Fetch data, process with Python, export to Excel\nfetch_url(\"https://data-source.com/raw.csv\")\nrun_python(\"clean_and_transform.py\")\n# Then use xlsx skill to create formatted spreadsheet\n\nWith pptx skill:\n# Generate charts and data visualizations\ninstall_package(\"matplotlib seaborn\")\nrun_python(\"generate_charts.py\")\n# Then use pptx skill to create presentation\n\nQuick Reference\nPython Execution\nrun_python(code_string)\n\nPackage Management\ninstall_package(\"package_name\")\ninstall_package(\"package==1.0.0\")\ninstall_package(\"-r requirements.txt\")\n\nGit Operations\ngit_status()\ngit_commit(\"message\")\ngit_commit(\"message\", stage_all=True)\n\nWeb Requests\nfetch_url(url, timeout=30)\ncall_api(url, method=\"GET\", 
auth_token=\"token\")\n\nConclusion\n\nOpenClaw+ provides a unified, powerful toolkit for development and web automation workflows. By combining Python execution, package management, git operations, and web capabilities, it enables complex multi-step workflows with a single cohesive skill.\n\nKey strengths:\n\n✅ Modular design - use only what you need\n✅ Error handling - robust failure recovery\n✅ Workflow composition - chain operations easily\n✅ Production-ready - follows best practices\n✅ Well-documented - clear examples and patterns\n\nUse OpenClaw+ whenever your task involves code execution, package management, version control, or web interactions - or any combination thereof!"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Shindo957-Official/openclaw-plus",
    "publisherUrl": "https://clawhub.ai/Shindo957-Official/openclaw-plus",
    "owner": "Shindo957-Official",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/openclaw-plus",
    "downloadUrl": "https://openagent3.xyz/downloads/openclaw-plus",
    "agentUrl": "https://openagent3.xyz/skills/openclaw-plus/agent",
    "manifestUrl": "https://openagent3.xyz/skills/openclaw-plus/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/openclaw-plus/agent.md"
  }
}