{
  "schemaVersion": "1.0",
  "item": {
    "slug": "prompt-assemble",
    "name": "Prompt Assemble",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/alexunitario-sketch/prompt-assemble",
    "canonicalUrl": "https://clawhub.ai/alexunitario-sketch/prompt-assemble",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/prompt-assemble",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=prompt-assemble",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "references/memory_standards.md",
      "references/token_estimation.md",
      "scripts/prompt_assemble.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=prompt-assemble",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=prompt-assemble",
        "contentDisposition": "attachment; filename=\"prompt-assemble-1.0.4.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/prompt-assemble"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/prompt-assemble",
    "agentPageUrl": "https://openagent3.xyz/skills/prompt-assemble/agent",
    "manifestUrl": "https://openagent3.xyz/skills/prompt-assemble/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/prompt-assemble/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Overview",
        "body": "A standardized, token-safe prompt assembly framework that keeps LLM API calls stable. Implements Two-Phase Context Construction and a Memory Safety Valve to prevent token overflow while maximizing relevant context.\n\nDesign Goals:\n\n✅ Never fail due to memory-related token overflow\n✅ Memory is always a discardable enhancement, never a rigid dependency\n✅ Token budget decisions are centralized at the prompt assembly layer"
      },
      {
        "title": "When to Use",
        "body": "Use this skill when:\n\nBuilding or modifying any agent that constructs prompts\nImplementing memory retrieval systems\nAdding new prompt-related logic to existing agents\nAny scenario where token budget safety is required"
      },
      {
        "title": "Core Workflow",
        "body": "User Input\n    ↓\nNeed-Memory Decision\n    ↓\nMinimal Context Build\n    ↓\nMemory Retrieval (Optional)\n    ↓\nMemory Summarization\n    ↓\nToken Estimation\n    ↓\nSafety Valve Decision\n    ↓\nFinal Prompt → LLM Call"
      },
      {
        "title": "Phase 0: Base Configuration",
        "body": "# Model Context Windows (2026-02-04)\n# - MiniMax-M2.1: 204,000 tokens (default)\n# - Claude 3.5 Sonnet: 200,000 tokens\n# - GPT-4o: 128,000 tokens\n\nMAX_TOKENS = 204000  # Set to your model's context limit\nSAFETY_MARGIN = 0.75 * MAX_TOKENS  # Conservative: 75% threshold = 153,000 tokens\nMEMORY_TOP_K = 3  # Max 3 memories retrieved\nMEMORY_SUMMARY_MAX = 3  # Max 3 lines per memory summary\n\nDesign Philosophy:\n\nLeave a 25% buffer for safety (model overhead, estimation errors, spikes)\nBetter to underutilize capacity than to overflow"
      },
      {
        "title": "Phase 1: Minimal Context",
        "body": "System prompt\nRecent N messages (N=3, trimmed)\nCurrent user input\nNo memory by default"
      },
      {
        "title": "Phase 2: Memory Need Decision",
        "body": "def need_memory(user_input):\n    triggers = [\n        \"previously\",\n        \"earlier we discussed\",\n        \"do you remember\",\n        \"as I mentioned before\",\n        \"continuing from\",\n        \"before we\",\n        \"last time\",\n        \"previously mentioned\"\n    ]\n    for trigger in triggers:\n        if trigger.lower() in user_input.lower():\n            return True\n    return False"
      },
      {
        "title": "Phase 3: Memory Retrieval (Optional)",
        "body": "memories = memory_search(query=user_input, top_k=MEMORY_TOP_K)\nsummarized_memories = []\nfor mem in memories:\n    summarized_memories.append(summarize(mem, max_lines=MEMORY_SUMMARY_MAX))"
      },
      {
        "title": "Phase 4: Token Estimation",
        "body": "Estimate the combined token count of base_context + summarized_memories (counting strategies are covered in references/token_estimation.md)."
      },
      {
        "title": "Phase 5: Safety Valve (Critical)",
        "body": "if estimated_tokens > SAFETY_MARGIN:\n    base_context.append(\"[System Notice] Relevant memory skipped due to token budget.\")\n    return assemble(base_context)\n\nHard Rules:\n\n❌ Never downgrade system prompt\n❌ Never truncate user input\n❌ No \"lucky splicing\"\n✅ Only memory layer is expendable"
      },
      {
        "title": "Phase 6: Final Assembly",
        "body": "final_prompt = assemble(base_context + summarized_memories)\nreturn final_prompt"
      },
      {
        "title": "Allowed in Long-Term Memory",
        "body": "✅ User preferences / identity / long-term goals\n✅ Confirmed important conclusions\n✅ System-level settings and rules"
      },
      {
        "title": "Forbidden in Long-Term Memory",
        "body": "❌ Raw conversation logs\n❌ Reasoning traces\n❌ Temporary discussions\n❌ Information recoverable from chat history"
      },
      {
        "title": "Quick Start",
        "body": "Copy scripts/prompt_assemble.py to your agent and use:\n\nfrom prompt_assemble import build_prompt\n\n# In your agent's prompt construction:\nfinal_prompt = build_prompt(user_input, memory_search_fn, get_recent_dialog_fn)"
      },
      {
        "title": "scripts/",
        "body": "prompt_assemble.py - Complete implementation with all phases (PromptAssembler class)"
      },
      {
        "title": "references/",
        "body": "memory_standards.md - Detailed memory content guidelines\ntoken_estimation.md - Token counting strategies"
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/alexunitario-sketch/prompt-assemble",
    "publisherUrl": "https://clawhub.ai/alexunitario-sketch/prompt-assemble",
    "owner": "alexunitario-sketch",
    "version": "1.0.4",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/prompt-assemble",
    "downloadUrl": "https://openagent3.xyz/downloads/prompt-assemble",
    "agentUrl": "https://openagent3.xyz/skills/prompt-assemble/agent",
    "manifestUrl": "https://openagent3.xyz/skills/prompt-assemble/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/prompt-assemble/agent.md"
  }
}