{
  "schemaVersion": "1.0",
  "item": {
    "slug": "usewhisper-autohook",
    "name": "usewhisper-autohook",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/Alinxus/usewhisper-autohook",
    "canonicalUrl": "https://clawhub.ai/Alinxus/usewhisper-autohook",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/usewhisper-autohook",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=usewhisper-autohook",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md",
      "usewhisper-autohook.mjs"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/usewhisper-autohook"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/usewhisper-autohook",
    "agentPageUrl": "https://openagent3.xyz/skills/usewhisper-autohook/agent",
    "manifestUrl": "https://openagent3.xyz/skills/usewhisper-autohook/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/usewhisper-autohook/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "usewhisper-autohook (OpenClaw Skill)",
        "body": "This skill is a thin wrapper designed to make \"automatic memory\" easy:\n\nget_whisper_context(user_id, session_id, current_query) for pre-response context injection\ningest_whisper_turn(user_id, session_id, user_msg, assistant_msg) for post-response ingestion\n\nIt defaults to the token-saving settings you almost always want:\n\ncompress: true\ncompression_strategy: \"delta\"\nuse_cache: true\ninclude_memories: true\n\nIt also persists the last context_hash locally (per api_url + project + user_id + session_id) so delta compression works by default without you needing to pass previous_context_hash."
      },
      {
        "title": "Install (ClawHub)",
        "body": "npx clawhub@latest install usewhisper-autohook"
      },
      {
        "title": "Setup",
        "body": "Set env vars wherever OpenClaw runs your agent:\n\nWHISPER_CONTEXT_API_URL=https://context.usewhisper.dev\nWHISPER_CONTEXT_API_KEY=YOUR_KEY\nWHISPER_CONTEXT_PROJECT=openclaw-yourname\n\nNotes:\n\nWHISPER_CONTEXT_API_URL is optional (defaults to https://context.usewhisper.dev).\nThe helper will auto-create the project on first use if it does not exist yet."
      },
      {
        "title": "The \"Auto Loop\" Prompt (Copy/Paste)",
        "body": "Add this to your agent's system instruction (or equivalent):\n\nBefore you think or respond to any message:\n1) Call get_whisper_context with:\n   user_id = \"telegram:{from_id}\"\n   session_id = \"telegram:{chat_id}\"\n   current_query = the user's message text\n2) If the returned context is not empty, prepend it to your prompt as:\n   \"Relevant long-term memory:\\n{context}\\n\\nNow respond to:\\n{user_message}\"\n\nAfter you generate your final response:\n1) Call ingest_whisper_turn with the same user_id and session_id and:\n   user_msg = the full user message\n   assistant_msg = your full final reply\n\nAlways do this. Never skip.\n\nIf you are not on Telegram, keep the same structure: the important part is that user_id and session_id are stable."
      },
      {
        "title": "If Your Agent Still Replays Full Chat History (Proxy Mode)",
        "body": "If you cannot control how your agent/framework constructs prompts (it always sends the full conversation history), a system prompt cannot reduce token spend: the tokens are already sent to the model.\n\nIn that case, run the built-in OpenAI-compatible proxy so the network payload is actually reduced. The proxy:\n\nreceives POST /v1/chat/completions\nqueries Whisper memory\nstrips chat history down to system + last user message\ninjects Relevant long-term memory: ...\ncalls your upstream OpenAI-compatible provider\ningests the turn back into Whisper\n\nStart the proxy:\n\nexport OPENAI_API_KEY=\"YOUR_UPSTREAM_KEY\"\nnode usewhisper-autohook.mjs serve_openai_proxy --port 8787\n\nThen point your agent’s OpenAI base URL to http://127.0.0.1:8787 (exact env/config depends on your agent).\n\nIf your agent supports overriding the upstream base URL, you can set:\n\nOPENAI_BASE_URL (for OpenAI-compatible upstreams)\nANTHROPIC_BASE_URL (for Anthropic upstreams)\n\nOr pass --upstream_base_url when starting the proxy.\n\nFor correct per-user/session memory, pass headers on each request:\n\nx-whisper-user-id: telegram:{from_id}\nx-whisper-session-id: telegram:{chat_id}"
      },
      {
        "title": "Anthropic Native Proxy (/v1/messages)",
        "body": "If your agent uses Anthropic's native API (not OpenAI-compatible), run the Anthropic proxy instead:\n\nexport ANTHROPIC_API_KEY=\"YOUR_ANTHROPIC_KEY\"\nnode usewhisper-autohook.mjs serve_anthropic_proxy --port 8788\n\nThen point your agent’s Anthropic base URL to http://127.0.0.1:8788.\n\nPass IDs via headers (recommended):\n\nx-whisper-user-id: telegram:{from_id}\nx-whisper-session-id: telegram:{chat_id}\n\nIf you do not pass headers, the proxies will attempt to infer stable IDs from OpenClaw's system prompt / session key if present. This is best-effort; headers are still the most reliable."
      },
      {
        "title": "CLI Usage (what the tools call)",
        "body": "All commands print JSON to stdout."
      },
      {
        "title": "Get packed context",
        "body": "node usewhisper-autohook.mjs get_whisper_context \\\n  --current_query \"What did we decide last time?\" \\\n  --user_id \"telegram:123\" \\\n  --session_id \"telegram:456\""
      },
      {
        "title": "Ingest a completed turn",
        "body": "node usewhisper-autohook.mjs ingest_whisper_turn \\\n  --user_id \"telegram:123\" \\\n  --session_id \"telegram:456\" \\\n  --user_msg \"...\" \\\n  --assistant_msg \"...\"\n\nFor large content, pass JSON via stdin:\n\necho '{ \"user_msg\": \"....\", \"assistant_msg\": \"....\" }' | node usewhisper-autohook.mjs ingest_whisper_turn --session_id \"telegram:456\" --user_id \"telegram:123\" --turn_json -"
      },
      {
        "title": "Output Format",
        "body": "get_whisper_context returns:\n\ncontext: the packed context string to prepend\ncontext_hash: a short hash you can store and pass back as previous_context_hash next time (optional)\nmeta: cache hit and compression info (useful for debugging)"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Alinxus/usewhisper-autohook",
    "publisherUrl": "https://clawhub.ai/Alinxus/usewhisper-autohook",
    "owner": "Alinxus",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/usewhisper-autohook",
    "downloadUrl": "https://openagent3.xyz/downloads/usewhisper-autohook",
    "agentUrl": "https://openagent3.xyz/skills/usewhisper-autohook/agent",
    "manifestUrl": "https://openagent3.xyz/skills/usewhisper-autohook/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/usewhisper-autohook/agent.md"
  }
}