{
  "schemaVersion": "1.0",
  "item": {
    "slug": "ai-content-pipeline",
    "name": "Ai Content Pipeline",
    "source": "tencent",
    "type": "skill",
    "category": "内容创作",
    "sourceUrl": "https://clawhub.ai/okaris/ai-content-pipeline",
    "canonicalUrl": "https://clawhub.ai/okaris/ai-content-pipeline",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/ai-content-pipeline",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=ai-content-pipeline",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
        "contentDisposition": "attachment; filename=\"4claw-imageboard-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/ai-content-pipeline"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/ai-content-pipeline",
    "agentPageUrl": "https://openagent3.xyz/skills/ai-content-pipeline/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ai-content-pipeline/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ai-content-pipeline/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "AI Content Pipeline",
        "body": "Build multi-step content creation pipelines via inference.sh CLI."
      },
      {
        "title": "Quick Start",
        "body": "curl -fsSL https://cli.inference.sh | sh && infsh login\n\n# Simple pipeline: Generate image -> Animate to video\ninfsh app run falai/flux-dev --input '{\"prompt\": \"portrait of a woman smiling\"}' > image.json\ninfsh app run falai/wan-2-5 --input '{\"image_url\": \"<url-from-previous>\"}'\n\nInstall note: The install script only detects your OS/architecture, downloads the matching binary from dist.inference.sh, and verifies its SHA-256 checksum. No elevated permissions or background processes. Manual install & verification available."
      },
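      {
        "title": "Chaining Step Outputs",
        "body": "The <url-from-previous> placeholders in these pipelines are filled in by hand from each step's saved JSON. If a step's output exposes the result URL as a field (the exact field name varies by app, so treat .output.url below as an assumption to check against your own image.json), a jq one-liner can wire two steps together:\n\n# Illustrative sketch, not official CLI docs: verify the jq path against your output first\nIMAGE_URL=$(jq -r '.output.url' image.json)\ninfsh app run falai/wan-2-5 --input \"{\\\"image_url\\\": \\\"$IMAGE_URL\\\"}\""
      },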
      {
        "title": "Pattern 1: Image -> Video -> Audio",
        "body": "[FLUX Image] -> [Wan 2.5 Video] -> [Foley Sound]"
      },
      {
        "title": "Pattern 2: Script -> Speech -> Avatar",
        "body": "[LLM Script] -> [Kokoro TTS] -> [OmniHuman Avatar]"
      },
      {
        "title": "Pattern 3: Research -> Content -> Distribution",
        "body": "[Tavily Search] -> [Claude Summary] -> [FLUX Visual] -> [Twitter Post]"
      },
      {
        "title": "YouTube Short Pipeline",
        "body": "Create a complete short-form video from a topic.\n\n# 1. Generate script with Claude\ninfsh app run openrouter/claude-sonnet-45 --input '{\n  \"prompt\": \"Write a 30-second script about the future of AI. Make it engaging and conversational. Just the script, no stage directions.\"\n}' > script.json\n\n# 2. Generate voiceover with Kokoro\ninfsh app run infsh/kokoro-tts --input '{\n  \"text\": \"<script-text>\",\n  \"voice\": \"af_sarah\"\n}' > voice.json\n\n# 3. Generate background image with FLUX\ninfsh app run falai/flux-dev --input '{\n  \"prompt\": \"Futuristic city skyline at sunset, cyberpunk aesthetic, 4K wallpaper\"\n}' > background.json\n\n# 4. Animate image to video with Wan\ninfsh app run falai/wan-2-5 --input '{\n  \"image_url\": \"<background-url>\",\n  \"prompt\": \"slow camera pan across cityscape, subtle movement\"\n}' > video.json\n\n# 5. Add captions (manually or with another tool)\n\n# 6. Merge video with audio\ninfsh app run infsh/media-merger --input '{\n  \"video_url\": \"<video-url>\",\n  \"audio_url\": \"<voice-url>\"\n}'"
      },
      {
        "title": "Talking Head Video Pipeline",
        "body": "Create an AI avatar presenting content.\n\n# 1. Write the script\ninfsh app run openrouter/claude-sonnet-45 --input '{\n  \"prompt\": \"Write a 1-minute explainer script about quantum computing for beginners.\"\n}' > script.json\n\n# 2. Generate speech\ninfsh app run infsh/kokoro-tts --input '{\n  \"text\": \"<script>\",\n  \"voice\": \"am_michael\"\n}' > speech.json\n\n# 3. Generate or use a portrait image\ninfsh app run falai/flux-dev --input '{\n  \"prompt\": \"Professional headshot of a friendly tech presenter, neutral background, looking at camera\"\n}' > portrait.json\n\n# 4. Create talking head video\ninfsh app run bytedance/omnihuman-1-5 --input '{\n  \"image_url\": \"<portrait-url>\",\n  \"audio_url\": \"<speech-url>\"\n}' > talking_head.json"
      },
      {
        "title": "Product Demo Pipeline",
        "body": "Create a product showcase video.\n\n# 1. Generate product image\ninfsh app run falai/flux-dev --input '{\n  \"prompt\": \"Sleek wireless earbuds on white surface, studio lighting, product photography\"\n}' > product.json\n\n# 2. Animate product reveal\ninfsh app run falai/wan-2-5 --input '{\n  \"image_url\": \"<product-url>\",\n  \"prompt\": \"slow 360 rotation, smooth motion\"\n}' > product_video.json\n\n# 3. Upscale video quality\ninfsh app run falai/topaz-video-upscaler --input '{\n  \"video_url\": \"<product-video-url>\"\n}' > upscaled.json\n\n# 4. Add background music\ninfsh app run infsh/media-merger --input '{\n  \"video_url\": \"<upscaled-url>\",\n  \"audio_url\": \"https://your-music.mp3\",\n  \"audio_volume\": 0.3\n}'"
      },
      {
        "title": "Blog to Video Pipeline",
        "body": "Convert written content to video format.\n\n# 1. Summarize blog post\ninfsh app run openrouter/claude-haiku-45 --input '{\n  \"prompt\": \"Summarize this blog post into 5 key points for a video script: <blog-content>\"\n}' > summary.json\n\n# 2. Generate images for each point\nfor i in 1 2 3 4 5; do\n  infsh app run falai/flux-dev --input \"{\n    \\\"prompt\\\": \\\"Visual representing point $i: <point-text>\\\"\n  }\" > \"image_$i.json\"\ndone\n\n# 3. Animate each image\nfor i in 1 2 3 4 5; do\n  infsh app run falai/wan-2-5 --input \"{\n    \\\"image_url\\\": \\\"<image-$i-url>\\\"\n  }\" > \"video_$i.json\"\ndone\n\n# 4. Generate voiceover\ninfsh app run infsh/kokoro-tts --input '{\n  \"text\": \"<full-script>\",\n  \"voice\": \"bf_emma\"\n}' > narration.json\n\n# 5. Merge all clips\ninfsh app run infsh/media-merger --input '{\n  \"videos\": [\"<video1>\", \"<video2>\", \"<video3>\", \"<video4>\", \"<video5>\"],\n  \"audio_url\": \"<narration-url>\",\n  \"transition\": \"crossfade\"\n}'"
      },
      {
        "title": "Content Generation",
        "body": "StepAppPurposeScriptopenrouter/claude-sonnet-45Write contentResearchtavily/search-assistantGather informationSummaryopenrouter/claude-haiku-45Condense content"
      },
      {
        "title": "Visual Assets",
        "body": "StepAppPurposeImagefalai/flux-devGenerate imagesImagegoogle/imagen-3Alternative image genUpscalefalai/topaz-image-upscalerEnhance quality"
      },
      {
        "title": "Animation",
        "body": "StepAppPurposeI2Vfalai/wan-2-5Animate imagesT2Vgoogle/veo-3-1-fastGenerate from textAvatarbytedance/omnihuman-1-5Talking heads"
      },
      {
        "title": "Audio",
        "body": "StepAppPurposeTTSinfsh/kokoro-ttsVoice narrationMusicinfsh/ai-musicBackground musicFoleyinfsh/hunyuanvideo-foleySound effects"
      },
      {
        "title": "Post-Production",
        "body": "StepAppPurposeUpscalefalai/topaz-video-upscalerEnhance videoMergeinfsh/media-mergerCombine mediaCaptioninfsh/caption-videoAdd subtitles"
      },
      {
        "title": "Best Practices",
        "body": "Plan the pipeline first - Map out each step before running\nSave intermediate results - Store outputs for iteration\nUse appropriate quality - Fast models for drafts, quality for finals\nMatch resolutions - Keep consistent aspect ratios throughout\nTest each step - Verify outputs before proceeding"
      },
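      {
        "title": "Checkpointing Between Steps",
        "body": "A minimal shell sketch of the 'save intermediate results' and 'test each step' practices above: fail fast and keep every step's output on disk so a bad step never feeds the next one. This is an illustrative pattern, not part of the official CLI documentation.\n\nset -euo pipefail\n\n# Step 1: generate the image and keep the raw response for inspection\ninfsh app run falai/flux-dev --input '{\"prompt\": \"portrait of a woman smiling\"}' > image.json\n\n# Verify the output looks sane before animating it\njq . image.json"
      },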
      {
        "title": "Related Skills",
        "body": "# Video generation models\nnpx skills add inference-sh/skills@ai-video-generation\n\n# Image generation\nnpx skills add inference-sh/skills@ai-image-generation\n\n# Text-to-speech\nnpx skills add inference-sh/skills@text-to-speech\n\n# LLM models for scripts\nnpx skills add inference-sh/skills@llm-models\n\n# Full platform skill\nnpx skills add inference-sh/skills@inference-sh\n\nBrowse all apps: infsh app list"
      },
      {
        "title": "Documentation",
        "body": "Content Pipeline Example - Official pipeline guide\nBuilding Workflows - Workflow best practices"
      }
    ],
    "body": "AI Content Pipeline\n\nBuild multi-step content creation pipelines via inference.sh CLI.\n\nQuick Start\ncurl -fsSL https://cli.inference.sh | sh && infsh login\n\n# Simple pipeline: Generate image -> Animate to video\ninfsh app run falai/flux-dev --input '{\"prompt\": \"portrait of a woman smiling\"}' > image.json\ninfsh app run falai/wan-2-5 --input '{\"image_url\": \"<url-from-previous>\"}'\n\n\nInstall note: The install script only detects your OS/architecture, downloads the matching binary from dist.inference.sh, and verifies its SHA-256 checksum. No elevated permissions or background processes. Manual install & verification available.\n\nPipeline Patterns\nPattern 1: Image -> Video -> Audio\n[FLUX Image] -> [Wan 2.5 Video] -> [Foley Sound]\n\nPattern 2: Script -> Speech -> Avatar\n[LLM Script] -> [Kokoro TTS] -> [OmniHuman Avatar]\n\nPattern 3: Research -> Content -> Distribution\n[Tavily Search] -> [Claude Summary] -> [FLUX Visual] -> [Twitter Post]\n\nComplete Workflows\nYouTube Short Pipeline\n\nCreate a complete short-form video from a topic.\n\n# 1. Generate script with Claude\ninfsh app run openrouter/claude-sonnet-45 --input '{\n  \"prompt\": \"Write a 30-second script about the future of AI. Make it engaging and conversational. Just the script, no stage directions.\"\n}' > script.json\n\n# 2. Generate voiceover with Kokoro\ninfsh app run infsh/kokoro-tts --input '{\n  \"text\": \"<script-text>\",\n  \"voice\": \"af_sarah\"\n}' > voice.json\n\n# 3. Generate background image with FLUX\ninfsh app run falai/flux-dev --input '{\n  \"prompt\": \"Futuristic city skyline at sunset, cyberpunk aesthetic, 4K wallpaper\"\n}' > background.json\n\n# 4. Animate image to video with Wan\ninfsh app run falai/wan-2-5 --input '{\n  \"image_url\": \"<background-url>\",\n  \"prompt\": \"slow camera pan across cityscape, subtle movement\"\n}' > video.json\n\n# 5. Add captions (manually or with another tool)\n\n# 6. 
Merge video with audio\ninfsh app run infsh/media-merger --input '{\n  \"video_url\": \"<video-url>\",\n  \"audio_url\": \"<voice-url>\"\n}'\n\nTalking Head Video Pipeline\n\nCreate an AI avatar presenting content.\n\n# 1. Write the script\ninfsh app run openrouter/claude-sonnet-45 --input '{\n  \"prompt\": \"Write a 1-minute explainer script about quantum computing for beginners.\"\n}' > script.json\n\n# 2. Generate speech\ninfsh app run infsh/kokoro-tts --input '{\n  \"text\": \"<script>\",\n  \"voice\": \"am_michael\"\n}' > speech.json\n\n# 3. Generate or use a portrait image\ninfsh app run falai/flux-dev --input '{\n  \"prompt\": \"Professional headshot of a friendly tech presenter, neutral background, looking at camera\"\n}' > portrait.json\n\n# 4. Create talking head video\ninfsh app run bytedance/omnihuman-1-5 --input '{\n  \"image_url\": \"<portrait-url>\",\n  \"audio_url\": \"<speech-url>\"\n}' > talking_head.json\n\nProduct Demo Pipeline\n\nCreate a product showcase video.\n\n# 1. Generate product image\ninfsh app run falai/flux-dev --input '{\n  \"prompt\": \"Sleek wireless earbuds on white surface, studio lighting, product photography\"\n}' > product.json\n\n# 2. Animate product reveal\ninfsh app run falai/wan-2-5 --input '{\n  \"image_url\": \"<product-url>\",\n  \"prompt\": \"slow 360 rotation, smooth motion\"\n}' > product_video.json\n\n# 3. Upscale video quality\ninfsh app run falai/topaz-video-upscaler --input '{\n  \"video_url\": \"<product-video-url>\"\n}' > upscaled.json\n\n# 4. Add background music\ninfsh app run infsh/media-merger --input '{\n  \"video_url\": \"<upscaled-url>\",\n  \"audio_url\": \"https://your-music.mp3\",\n  \"audio_volume\": 0.3\n}'\n\nBlog to Video Pipeline\n\nConvert written content to video format.\n\n# 1. Summarize blog post\ninfsh app run openrouter/claude-haiku-45 --input '{\n  \"prompt\": \"Summarize this blog post into 5 key points for a video script: <blog-content>\"\n}' > summary.json\n\n# 2. 
Generate images for each point\nfor i in 1 2 3 4 5; do\n  infsh app run falai/flux-dev --input \"{\n    \\\"prompt\\\": \\\"Visual representing point $i: <point-text>\\\"\n  }\" > \"image_$i.json\"\ndone\n\n# 3. Animate each image\nfor i in 1 2 3 4 5; do\n  infsh app run falai/wan-2-5 --input \"{\n    \\\"image_url\\\": \\\"<image-$i-url>\\\"\n  }\" > \"video_$i.json\"\ndone\n\n# 4. Generate voiceover\ninfsh app run infsh/kokoro-tts --input '{\n  \"text\": \"<full-script>\",\n  \"voice\": \"bf_emma\"\n}' > narration.json\n\n# 5. Merge all clips\ninfsh app run infsh/media-merger --input '{\n  \"videos\": [\"<video1>\", \"<video2>\", \"<video3>\", \"<video4>\", \"<video5>\"],\n  \"audio_url\": \"<narration-url>\",\n  \"transition\": \"crossfade\"\n}'\n\nPipeline Building Blocks\nContent Generation\nStep\tApp\tPurpose\nScript\topenrouter/claude-sonnet-45\tWrite content\nResearch\ttavily/search-assistant\tGather information\nSummary\topenrouter/claude-haiku-45\tCondense content\nVisual Assets\nStep\tApp\tPurpose\nImage\tfalai/flux-dev\tGenerate images\nImage\tgoogle/imagen-3\tAlternative image gen\nUpscale\tfalai/topaz-image-upscaler\tEnhance quality\nAnimation\nStep\tApp\tPurpose\nI2V\tfalai/wan-2-5\tAnimate images\nT2V\tgoogle/veo-3-1-fast\tGenerate from text\nAvatar\tbytedance/omnihuman-1-5\tTalking heads\nAudio\nStep\tApp\tPurpose\nTTS\tinfsh/kokoro-tts\tVoice narration\nMusic\tinfsh/ai-music\tBackground music\nFoley\tinfsh/hunyuanvideo-foley\tSound effects\nPost-Production\nStep\tApp\tPurpose\nUpscale\tfalai/topaz-video-upscaler\tEnhance video\nMerge\tinfsh/media-merger\tCombine media\nCaption\tinfsh/caption-video\tAdd subtitles\nBest Practices\nPlan the pipeline first - Map out each step before running\nSave intermediate results - Store outputs for iteration\nUse appropriate quality - Fast models for drafts, quality for finals\nMatch resolutions - Keep consistent aspect ratios throughout\nTest each step - Verify outputs before proceeding\nRelated Skills\n# Video 
generation models\nnpx skills add inference-sh/skills@ai-video-generation\n\n# Image generation\nnpx skills add inference-sh/skills@ai-image-generation\n\n# Text-to-speech\nnpx skills add inference-sh/skills@text-to-speech\n\n# LLM models for scripts\nnpx skills add inference-sh/skills@llm-models\n\n# Full platform skill\nnpx skills add inference-sh/skills@inference-sh\n\n\nBrowse all apps: infsh app list\n\nDocumentation\nContent Pipeline Example - Official pipeline guide\nBuilding Workflows - Workflow best practices"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/okaris/ai-content-pipeline",
    "publisherUrl": "https://clawhub.ai/okaris/ai-content-pipeline",
    "owner": "okaris",
    "version": "0.1.5",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/ai-content-pipeline",
    "downloadUrl": "https://openagent3.xyz/downloads/ai-content-pipeline",
    "agentUrl": "https://openagent3.xyz/skills/ai-content-pipeline/agent",
    "manifestUrl": "https://openagent3.xyz/skills/ai-content-pipeline/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/ai-content-pipeline/agent.md"
  }
}