{
  "schemaVersion": "1.0",
  "item": {
    "slug": "confidant",
    "name": "Confidant",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/ericsantos/confidant",
    "canonicalUrl": "https://clawhub.ai/ericsantos/confidant",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/confidant",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=confidant",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "scripts/check-server.sh",
      "scripts/request-secret.sh",
      "scripts/setup.sh"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/confidant"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/confidant",
    "agentPageUrl": "https://openagent3.xyz/skills/confidant/agent",
    "manifestUrl": "https://openagent3.xyz/skills/confidant/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/confidant/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Confidant",
        "body": "Receive secrets from humans securely — no chat exposure, no copy-paste, no history leaks."
      },
      {
        "title": "🚨 CRITICAL FLOW — Read This First",
        "body": "This is a human-in-the-loop process. You CANNOT retrieve the secret yourself.\n\nRun the script → you get a secure URL\nSEND the URL to the user in chat ← THIS IS MANDATORY\nWAIT for the user to open the URL in their browser and submit the secret\nThe script handles the rest (receives, saves to disk, confirms)\n\n❌ DO NOT curl/fetch the secret URL yourself — it's a web form for humans\n❌ DO NOT skip sharing the URL — the user MUST receive it in chat\n❌ DO NOT poll the API to check if the secret arrived — the script does this\n❌ DO NOT proceed without confirming the secret was received\n✅ Share URL → Wait → Confirm success → Use the secret silently"
      },
      {
        "title": "🔧 Setup (once per environment)",
        "body": "Run this once to install the CLI globally (avoids slow npx calls):\n\nbash {skill}/scripts/setup.sh\n\n{skill} is the absolute path to the directory containing this SKILL.md file. Agents can resolve it at runtime:\nSKILL_DIR=$(find \"$HOME\" -name \"SKILL.md\" -path \"*/confidant/skill*\" -exec dirname {} \\; 2>/dev/null | head -1)\n# Then use: bash \"$SKILL_DIR/scripts/setup.sh\""
      },
      {
        "title": "⚡ Quick Start",
        "body": "You need an API key from the user? One command:\n\nbash {skill}/scripts/request-secret.sh --label \"OpenAI API Key\" --service openai\n\nThe script handles everything:\n\n✅ Starts server if not running (or reuses existing one)\n✅ Creates a secure request with web form\n✅ Detects existing tunnels (ngrok or localtunnel)\n✅ Returns the URL to share with the user\n✅ Polls until the secret is submitted\n✅ Saves to ~/.config/openai/api_key (chmod 600) and exits\n\nIf the user is remote (not on the same network), add --tunnel:\n\nbash {skill}/scripts/request-secret.sh --label \"OpenAI API Key\" --service openai --tunnel\n\nThis starts a localtunnel automatically (no account needed) and returns a public URL.\n\nOutput example:\n\n🔐 Secure link created!\n\nURL: https://gentle-pig-42.loca.lt/requests/abc123\n  (tunnel: localtunnel | local: http://localhost:3000/requests/abc123)\nSave to: ~/.config/openai/api_key\n\nShare the URL above with the user. Secret expires after submission or 24h.\n\nShare the URL → user opens it → submits the secret → script saves to disk → done.\n\nWithout --service or --save, the script still polls and prints the secret to stdout (useful for piping or manual inspection)."
      },
      {
        "title": "request-secret.sh — Request, receive, and save a secret (recommended)",
        "body": "# Save to ~/.config/<service>/api_key (convention)\nbash {skill}/scripts/request-secret.sh --label \"SerpAPI Key\" --service serpapi\n\n# Save to explicit path\nbash {skill}/scripts/request-secret.sh --label \"Token\" --save ~/.credentials/token.txt\n\n# Save + set env var\nbash {skill}/scripts/request-secret.sh --label \"API Key\" --service openai --env OPENAI_API_KEY\n\n# Just receive (no auto-save)\nbash {skill}/scripts/request-secret.sh --label \"Password\"\n\n# Remote user — start tunnel automatically\nbash {skill}/scripts/request-secret.sh --label \"Key\" --service myapp --tunnel\n\n# JSON output (for automation)\nbash {skill}/scripts/request-secret.sh --label \"Key\" --service myapp --json\n\nFlagDescription--label <text>Description shown on the web form (required)--service <name>Auto-save to ~/.config/<name>/api_key--save <path>Auto-save to explicit file path--env <varname>Set env var (requires --service or --save)--tunnelStart localtunnel if no tunnel detected (for remote users)--port <number>Server port (default: 3000)--timeout <secs>Max wait for startup (default: 30)--jsonOutput JSON instead of human-readable text"
      },
      {
        "title": "check-server.sh — Server diagnostics (no side effects)",
        "body": "bash {skill}/scripts/check-server.sh\nbash {skill}/scripts/check-server.sh --json\n\nReports server status, port, PID, and tunnel state (ngrok or localtunnel)."
      },
      {
        "title": "⏱ Long-Running Process — Use tmux",
        "body": "The request-secret.sh script blocks until the secret is submitted (it polls continuously). Most agent runtimes (including OpenClaw's exec tool) impose execution timeouts that will kill the process before the user has time to submit.\n\nAlways run Confidant inside a tmux session:\n\n# 1. Start server in tmux\ntmux new-session -d -s confidant\ntmux send-keys -t confidant \"confidant serve --port 3000\" Enter\n\n# 2. Create request in a second tmux window\ntmux new-window -t confidant -n request\ntmux send-keys -t confidant:request \"confidant request --label 'API Key' --service openai\" Enter\n\n# 3. Share the URL with the user (read from tmux output)\ntmux capture-pane -p -t confidant:request -S -30\n\n# 4. After user submits, check the result\ntmux capture-pane -p -t confidant:request -S -10\n\nWhy not exec? Agent runtimes typically kill processes after 30-60s. Since the script waits for human input (which can take minutes), it gets SIGKILL before completion. tmux keeps the process alive independently.\n\nIf your agent platform supports long-running background processes without timeouts, exec with request-secret.sh works fine. But when in doubt, use tmux."
      },
      {
        "title": "Rules for Agents",
        "body": "NEVER ask users to paste secrets in chat — always use this skill\nNEVER reveal received secrets in chat — not even partially\nNEVER curl the Confidant API directly — use the scripts\nNEVER kill an existing server to start a new one\nNEVER try to expose the port directly (public IP, firewall rules, etc.) — use --tunnel instead\nALWAYS share the URL with the user in chat — this is the entire point of the tool\nALWAYS wait for the script to finish — it polls automatically and saves/outputs the secret; do not try to retrieve it yourself\nUse --tunnel when the user is remote (not on the same machine/network)\nPrefer --service for API keys — cleanest convention\nAfter receiving: confirm success, use the secret silently"
      },
      {
        "title": "Exit Codes (Scripts)",
        "body": "Agents can branch on exit codes for programmatic error handling:\n\nCodeConstantMeaning0—Success — secret received (saved to disk or printed to stdout)1MISSING_LABEL--label flag not provided2MISSING_DEPENDENCYcurl, jq, npm, or confidant not installed3SERVER_TIMEOUT / SERVER_CRASHServer failed to start or died during startup4REQUEST_FAILEDAPI returned empty URL — request not created≠0(from CLI)confidant request --poll failed (expired, not found, etc.)\n\nWith --json, all errors include a \"code\" field for programmatic branching:\n\n{ \"error\": \"...\", \"code\": \"MISSING_DEPENDENCY\", \"hint\": \"...\" }"
      },
      {
        "title": "Example Agent Conversation",
        "body": "This is what the interaction should look like:\n\nUser: Can you set up my OpenAI key?\nAgent: I'll create a secure link for you to submit your API key safely.\n       [runs: request-secret.sh --label \"OpenAI API Key\" --service openai --tunnel]\nAgent: Here's your secure link — open it in your browser and paste your key:\n       🔐 https://gentle-pig-42.loca.lt/requests/abc123\n       The link expires after you submit or after 24h.\nUser: Done, I submitted it.\nAgent: ✅ Received and saved to ~/.config/openai/api_key. You're all set!\n\n⚠️ Notice: the agent SENDS the URL and WAITS. It does NOT try to access the URL itself."
      },
      {
        "title": "How It Works",
        "body": "Script starts a Confidant server (or reuses existing one on port 3000)\nCreates a request via the API with a unique ID and secure web form\nOptionally starts a localtunnel for public access (or detects existing ngrok/localtunnel)\nPrints the URL — agent shares it with the user in chat\nDelegates polling to confidant request --poll which blocks until the secret is submitted\nWith --service or --save: secret is saved to disk (chmod 600), then destroyed on server\nWithout --service/--save: secret is printed to stdout, then destroyed on server"
      },
      {
        "title": "Tunnel Options",
        "body": "ProviderAccount neededHowlocaltunnel (default)No--tunnel flag or npx localtunnel --port 3000ngrokYes (free tier)Auto-detected if running on same port\n\nThe script auto-detects both. If neither is running and --tunnel is passed, it starts localtunnel."
      },
      {
        "title": "Advanced: Direct CLI Usage",
        "body": "For edge cases not covered by the scripts:\n\n# Start server only\nconfidant serve --port 3000 &\n\n# Start server + create request + poll (single command)\nconfidant serve-request --label \"Key\" --service myapp\n\n# Create request on running server\nconfidant request --label \"Key\" --service myapp\n\n# Submit a secret (agent-to-agent)\nconfidant fill \"<url>\" --secret \"<value>\"\n\n# Check status of a specific request\nconfidant get-request <id>\n\n# Retrieve a delivered secret (by secret ID, not request ID)\nconfidant get <secret-id>\n\nIf confidant is not installed globally, run bash {skill}/scripts/setup.sh first, or prefix with npx @aiconnect/confidant.\n\n⚠️ Only use direct CLI if the scripts don't cover your case."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/ericsantos/confidant",
    "publisherUrl": "https://clawhub.ai/ericsantos/confidant",
    "owner": "ericsantos",
    "version": "1.5.3",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/confidant",
    "downloadUrl": "https://openagent3.xyz/downloads/confidant",
    "agentUrl": "https://openagent3.xyz/skills/confidant/agent",
    "manifestUrl": "https://openagent3.xyz/skills/confidant/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/confidant/agent.md"
  }
}