{
  "schemaVersion": "1.0",
  "item": {
    "slug": "prompting",
    "name": "Prompting",
    "source": "tencent",
    "type": "skill",
    "category": "AI Intelligence",
    "sourceUrl": "https://clawhub.ai/ivangdavila/prompting",
    "canonicalUrl": "https://clawhub.ai/ivangdavila/prompting",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/prompting",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=prompting",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "failures.md",
      "iteration.md",
      "memory-template.md",
      "models.md",
      "techniques.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=prompting",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=prompting",
        "contentDisposition": "attachment; filename=\"prompting-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/prompting"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/prompting",
    "agentPageUrl": "https://openagent3.xyz/skills/prompting/agent",
    "manifestUrl": "https://openagent3.xyz/skills/prompting/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/prompting/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Architecture",
        "body": "Prompt patterns and user preferences live in ~/prompting/.\n\n~/prompting/\n├── memory.md          # HOT: user voice, model preferences, learned corrections\n├── patterns/          # Reusable prompt templates by task type\n└── history.md         # Past prompts with outcomes\n\nSee memory-template.md for initial setup."
      },
      {
        "title": "Quick Reference",
        "body": "Topic\tFile\nCommon failure modes\tfailures.md\nModel-specific quirks\tmodels.md\nIteration workflow\titeration.md\nAdvanced techniques\ttechniques.md"
      },
      {
        "title": "1. Ask Before Assuming",
        "body": "Before writing any prompt, ask:\n\nWhat model? (GPT-4, Claude, Haiku, Gemini)\nWhat's the failure mode you're seeing? (if iterating)\nToken budget? (cost-sensitive vs. quality-first)\n\nNever default to verbose. Simpler often wins."
      },
      {
        "title": "2. Preserve What Works",
        "body": "When improving a failing prompt:\n\nChange ONE thing at a time\nNote what's currently working\nSurgical fixes > rewrites"
      },
      {
        "title": "3. Model-Specific Adaptation",
        "body": "See models.md — key differences:\n\nClaude: explicit constraints, less scaffolding needed\nGPT-4: benefits from step-by-step, tolerates verbose\nHaiku/fast models: brevity critical, skip examples when possible\n\nA prompt optimized for one model will underperform on others."
      },
      {
        "title": "4. Voice Lock",
        "body": "When user provides writing samples:\n\nExtract specific patterns (sentence length, punctuation, vocabulary)\nApply consistently throughout session\nCheck output against samples before delivering"
      },
      {
        "title": "5. True Variation",
        "body": "When generating alternatives, vary:\n\nStructure (not just synonyms)\nEmotional angle\nOpening hook\nCall-to-action style\n\n\"Top 5 reasons\" → \"The hidden truth about\" → \"What nobody tells you about\" = real variation."
      },
      {
        "title": "6. Failure Classification",
        "body": "When a prompt fails, classify the failure type:\n\nHallucination → add grounding, sources, constraints\nFormat break → strengthen output spec, add examples\nInstruction drift → move critical constraints earlier\nRefusal → rephrase intent, remove ambiguity\n\nDifferent failures need different fixes. See failures.md."
      },
      {
        "title": "7. Compression Bias",
        "body": "Default to removing words, not adding. Test: \"Does removing this line change the output?\" If no, remove.\n\nToken costs matter. A prompt that works with 50 tokens beats one that needs 500."
      },
      {
        "title": "8. Test Case Generation",
        "body": "When asked to test a prompt:\n\nGenerate edge cases (empty input, very long, special chars)\nInclude adversarial inputs\nTest boundary conditions\n\nDon't just test happy path."
      },
      {
        "title": "9. Platform-Native Output",
        "body": "For content prompts, know platform constraints:\n\nTwitter: 280 chars, no markdown\nLinkedIn: longer ok, hashtags matter\nInstagram: emoji-friendly, visual hooks\n\nPrompt should enforce format, not hope for it."
      },
      {
        "title": "10. Memory Persistence",
        "body": "Store in ~/prompting/memory.md:\n\nUser's preferred style (terse vs detailed)\nTarget models they commonly use\nPast corrections (\"I told you I don't want emojis\")\n\nReference before every prompting task."
      }
    ],
    "body": "Architecture\n\nPrompt patterns and user preferences live in ~/prompting/.\n\n~/prompting/\n├── memory.md          # HOT: user voice, model preferences, learned corrections\n├── patterns/          # Reusable prompt templates by task type\n└── history.md         # Past prompts with outcomes\n\n\nSee memory-template.md for initial setup.\n\nQuick Reference\nTopic\tFile\nCommon failure modes\tfailures.md\nModel-specific quirks\tmodels.md\nIteration workflow\titeration.md\nAdvanced techniques\ttechniques.md\nCore Rules\n1. Ask Before Assuming\n\nBefore writing any prompt, ask:\n\nWhat model? (GPT-4, Claude, Haiku, Gemini)\nWhat's the failure mode you're seeing? (if iterating)\nToken budget? (cost-sensitive vs. quality-first)\n\nNever default to verbose. Simpler often wins.\n\n2. Preserve What Works\n\nWhen improving a failing prompt:\n\nChange ONE thing at a time\nNote what's currently working\nSurgical fixes > rewrites\n3. Model-Specific Adaptation\n\nSee models.md — key differences:\n\nClaude: explicit constraints, less scaffolding needed\nGPT-4: benefits from step-by-step, tolerates verbose\nHaiku/fast models: brevity critical, skip examples when possible\n\nA prompt optimized for one model will underperform on others.\n\n4. Voice Lock\n\nWhen user provides writing samples:\n\nExtract specific patterns (sentence length, punctuation, vocabulary)\nApply consistently throughout session\nCheck output against samples before delivering\n5. True Variation\n\nWhen generating alternatives, vary:\n\nStructure (not just synonyms)\nEmotional angle\nOpening hook\nCall-to-action style\n\n\"Top 5 reasons\" → \"The hidden truth about\" → \"What nobody tells you about\" = real variation.\n\n6. Failure Classification\n\nWhen a prompt fails, classify the failure type:\n\nHallucination → add grounding, sources, constraints\nFormat break → strengthen output spec, add examples\nInstruction drift → move critical constraints earlier\nRefusal → rephrase intent, remove ambiguity\n\nDifferent failures need different fixes. See failures.md.\n\n7. Compression Bias\n\nDefault to removing words, not adding. Test: \"Does removing this line change the output?\" If no, remove.\n\nToken costs matter. A prompt that works with 50 tokens beats one that needs 500.\n\n8. Test Case Generation\n\nWhen asked to test a prompt:\n\nGenerate edge cases (empty input, very long, special chars)\nInclude adversarial inputs\nTest boundary conditions\n\nDon't just test happy path.\n\n9. Platform-Native Output\n\nFor content prompts, know platform constraints:\n\nTwitter: 280 chars, no markdown\nLinkedIn: longer ok, hashtags matter\nInstagram: emoji-friendly, visual hooks\n\nPrompt should enforce format, not hope for it.\n\n10. Memory Persistence\n\nStore in ~/prompting/memory.md:\n\nUser's preferred style (terse vs detailed)\nTarget models they commonly use\nPast corrections (\"I told you I don't want emojis\")\n\nReference before every prompting task."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/ivangdavila/prompting",
    "publisherUrl": "https://clawhub.ai/ivangdavila/prompting",
    "owner": "ivangdavila",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/prompting",
    "downloadUrl": "https://openagent3.xyz/downloads/prompting",
    "agentUrl": "https://openagent3.xyz/skills/prompting/agent",
    "manifestUrl": "https://openagent3.xyz/skills/prompting/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/prompting/agent.md"
  }
}