{
  "schemaVersion": "1.0",
  "item": {
    "slug": "falai",
    "name": "Fal Ai",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/Sxela/falai",
    "canonicalUrl": "https://clawhub.ai/Sxela/falai",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/falai",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=falai",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md",
      "references/models.json",
      "scripts/fal_client.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/falai"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/falai",
    "agentPageUrl": "https://openagent3.xyz/skills/falai/agent",
    "manifestUrl": "https://openagent3.xyz/skills/falai/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/falai/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "fal.ai Integration",
        "body": "Generate and edit images via fal.ai's queue-based API."
      },
      {
        "title": "Setup",
        "body": "Add your API key to TOOLS.md:\n\n### fal.ai\nFAL_KEY: your-key-here\n\nGet a key at: https://fal.ai/dashboard/keys\n\nThe script checks (in order): FAL_KEY env var → TOOLS.md"
      },
      {
        "title": "fal-ai/nano-banana-pro (Text → Image)",
        "body": "Google's Gemini 3 Pro for text-to-image generation.\n\ninput_data = {\n    \"prompt\": \"A cat astronaut on the moon\",      # required\n    \"aspect_ratio\": \"1:1\",                        # auto|21:9|16:9|3:2|4:3|5:4|1:1|4:5|3:4|2:3|9:16\n    \"resolution\": \"1K\",                           # 1K|2K|4K\n    \"output_format\": \"png\",                       # jpeg|png|webp\n    \"safety_tolerance\": \"4\"                       # 1 (strict) to 6 (permissive)\n}"
      },
      {
        "title": "fal-ai/nano-banana-pro/edit (Image → Image)",
        "body": "Gemini 3 Pro for image editing. Slower (~20s) but handles complex edits well.\n\ninput_data = {\n    \"prompt\": \"Transform into anime style\",       # required\n    \"image_urls\": [image_data_uri],               # required - array of URLs or base64 data URIs\n    \"aspect_ratio\": \"auto\",\n    \"resolution\": \"1K\",\n    \"output_format\": \"png\"\n}"
      },
      {
        "title": "fal-ai/flux/dev/image-to-image (Image → Image)",
        "body": "FLUX.1 dev model. Faster (~2-3s) for style transfers.\n\ninput_data = {\n    \"prompt\": \"Anime style portrait\",             # required\n    \"image_url\": image_data_uri,                  # required - single URL or base64 data URI\n    \"strength\": 0.85,                             # 0-1, higher = more change\n    \"num_inference_steps\": 40,\n    \"guidance_scale\": 7.5,\n    \"output_format\": \"png\"\n}"
      },
      {
        "title": "fal-ai/kling-video/o3/pro/video-to-video/edit (Video → Video)",
        "body": "Kling O3 Pro for video transformation with AI effects.\n\nLimits:\n\nFormats: .mp4, .mov only\nDuration: 3-10 seconds\nResolution: 720-2160px\nMax file size: 200MB\nMax elements: 4 total (elements + reference images combined)\n\ninput_data = {\n    # Required\n    \"prompt\": \"Change environment to be fully snow as @Image1. Replace animal with @Element1\",\n    \"video_url\": \"https://example.com/video.mp4\",    # .mp4/.mov, 3-10s, 720-2160px, max 200MB\n    \n    # Optional\n    \"image_urls\": [                                  # style/appearance references\n        \"https://example.com/snow_ref.jpg\"           # use as @Image1, @Image2 in prompt\n    ],\n    \"keep_audio\": True,                              # keep original audio (default: true)\n    \"elements\": [                                    # characters/objects to inject\n        {\n            \"reference_image_urls\": [                # reference images for the element\n                \"https://example.com/element_ref1.png\"\n            ],\n            \"frontal_image_url\": \"https://example.com/element_front.png\"  # frontal view (better results)\n        }\n    ],                                               # use as @Element1, @Element2 in prompt\n    \"shot_type\": \"customize\"                         # multi-shot type (default: customize)\n}\n\nPrompt references:\n\n@Video1 — the input video\n@Image1, @Image2 — reference images for style/appearance\n@Element1, @Element2 — elements (characters/objects) to inject"
      },
      {
        "title": "Input Validation",
        "body": "The skill validates inputs before submission. For multi-input models, ensure all required fields are provided:\n\n# Check what a model needs\npython3 scripts/fal_client.py model-info \"fal-ai/kling-video/o3/standard/video-to-video/edit\"\n\n# List all models with their requirements\npython3 scripts/fal_client.py models\n\nBefore submitting, verify:\n\n✅ All required fields are present and non-empty\n✅ File fields (image_url, video_url, etc.) are URLs or base64 data URIs\n✅ Arrays (image_urls) have at least one item\n✅ Video files are within limits (200MB, 720-2160p)\n\nExample validation output:\n\n⚠️  Note: Reference video in prompt as @Video1\n⚠️  Note: Max 4 total elements (video + images combined)\n❌ Validation failed:\n   - Missing required field: video_url"
      },
      {
        "title": "CLI Commands",
        "body": "# Check API key\npython3 scripts/fal_client.py check-key\n\n# Submit a request\npython3 scripts/fal_client.py submit \"fal-ai/nano-banana-pro\" '{\"prompt\": \"A sunset over mountains\"}'\n\n# Check status\npython3 scripts/fal_client.py status \"fal-ai/nano-banana-pro\" \"<request_id>\"\n\n# Get result\npython3 scripts/fal_client.py result \"fal-ai/nano-banana-pro\" \"<request_id>\"\n\n# Poll all pending requests\npython3 scripts/fal_client.py poll\n\n# List pending requests\npython3 scripts/fal_client.py list\n\n# Convert local image to base64 data URI\npython3 scripts/fal_client.py to-data-uri /path/to/image.jpg\n\n# Convert local video to base64 data URI (with validation)\npython3 scripts/fal_client.py video-to-uri /path/to/video.mp4"
      },
      {
        "title": "Python Usage",
        "body": "import sys\nsys.path.insert(0, 'scripts')\nfrom fal_client import submit, check_status, get_result, image_to_data_uri, poll_pending\n\n# Text to image\nresult = submit('fal-ai/nano-banana-pro', {\n    'prompt': 'A futuristic city at night'\n})\nprint(result['request_id'])\n\n# Image to image (with local file)\nimg_uri = image_to_data_uri('/path/to/photo.jpg')\nresult = submit('fal-ai/nano-banana-pro/edit', {\n    'prompt': 'Transform into watercolor painting',\n    'image_urls': [img_uri]\n})\n\n# Poll until complete\ncompleted = poll_pending()\nfor req in completed:\n    if 'result' in req:\n        print(req['result']['images'][0]['url'])"
      },
      {
        "title": "Queue System",
        "body": "fal.ai uses async queues. Requests go through stages:\n\nIN_QUEUE → waiting\nIN_PROGRESS → generating\nCOMPLETED → done, fetch result\nFAILED → error occurred\n\nPending requests are saved to ~/. openclaw/workspace/fal-pending.json and survive restarts."
      },
      {
        "title": "Polling Strategy",
        "body": "Manual: Run python3 scripts/fal_client.py poll periodically.\n\nHeartbeat: Add to HEARTBEAT.md:\n\n- Poll fal.ai pending requests if any exist\n\nCron: Schedule polling every few minutes for background jobs."
      },
      {
        "title": "Adding New Models",
        "body": "Find the model on fal.ai and check its /api page\nAdd entry to references/models.json with input/output schema\nTest with a simple request\n\nNote: Queue URLs use base model path (e.g., fal-ai/flux not fal-ai/flux/dev/image-to-image). The script handles this automatically."
      },
      {
        "title": "Files",
        "body": "skills/fal-ai/\n├── SKILL.md                    ← This file\n├── scripts/\n│   └── fal_client.py           ← CLI + Python library\n└── references/\n    └── models.json             ← Model schemas"
      },
      {
        "title": "Troubleshooting",
        "body": "\"No FAL_KEY found\" → Add key to TOOLS.md or set FAL_KEY env var\n\n405 Method Not Allowed → URL routing issue, ensure using base model path for status/result\n\nRequest stuck → Check fal-pending.json, may need manual cleanup"
      }
    ],
    "body": "fal.ai Integration\n\nGenerate and edit images via fal.ai's queue-based API.\n\nSetup\n\nAdd your API key to TOOLS.md:\n\n### fal.ai\nFAL_KEY: your-key-here\n\n\nGet a key at: https://fal.ai/dashboard/keys\n\nThe script checks (in order): FAL_KEY env var → TOOLS.md\n\nSupported Models\nfal-ai/nano-banana-pro (Text → Image)\n\nGoogle's Gemini 3 Pro for text-to-image generation.\n\ninput_data = {\n    \"prompt\": \"A cat astronaut on the moon\",      # required\n    \"aspect_ratio\": \"1:1\",                        # auto|21:9|16:9|3:2|4:3|5:4|1:1|4:5|3:4|2:3|9:16\n    \"resolution\": \"1K\",                           # 1K|2K|4K\n    \"output_format\": \"png\",                       # jpeg|png|webp\n    \"safety_tolerance\": \"4\"                       # 1 (strict) to 6 (permissive)\n}\n\nfal-ai/nano-banana-pro/edit (Image → Image)\n\nGemini 3 Pro for image editing. Slower (~20s) but handles complex edits well.\n\ninput_data = {\n    \"prompt\": \"Transform into anime style\",       # required\n    \"image_urls\": [image_data_uri],               # required - array of URLs or base64 data URIs\n    \"aspect_ratio\": \"auto\",\n    \"resolution\": \"1K\",\n    \"output_format\": \"png\"\n}\n\nfal-ai/flux/dev/image-to-image (Image → Image)\n\nFLUX.1 dev model. 
Faster (~2-3s) for style transfers.\n\ninput_data = {\n    \"prompt\": \"Anime style portrait\",             # required\n    \"image_url\": image_data_uri,                  # required - single URL or base64 data URI\n    \"strength\": 0.85,                             # 0-1, higher = more change\n    \"num_inference_steps\": 40,\n    \"guidance_scale\": 7.5,\n    \"output_format\": \"png\"\n}\n\nfal-ai/kling-video/o3/pro/video-to-video/edit (Video → Video)\n\nKling O3 Pro for video transformation with AI effects.\n\nLimits:\n\nFormats: .mp4, .mov only\nDuration: 3-10 seconds\nResolution: 720-2160px\nMax file size: 200MB\nMax elements: 4 total (elements + reference images combined)\ninput_data = {\n    # Required\n    \"prompt\": \"Change environment to be fully snow as @Image1. Replace animal with @Element1\",\n    \"video_url\": \"https://example.com/video.mp4\",    # .mp4/.mov, 3-10s, 720-2160px, max 200MB\n    \n    # Optional\n    \"image_urls\": [                                  # style/appearance references\n        \"https://example.com/snow_ref.jpg\"           # use as @Image1, @Image2 in prompt\n    ],\n    \"keep_audio\": True,                              # keep original audio (default: true)\n    \"elements\": [                                    # characters/objects to inject\n        {\n            \"reference_image_urls\": [                # reference images for the element\n                \"https://example.com/element_ref1.png\"\n            ],\n            \"frontal_image_url\": \"https://example.com/element_front.png\"  # frontal view (better results)\n        }\n    ],                                               # use as @Element1, @Element2 in prompt\n    \"shot_type\": \"customize\"                         # multi-shot type (default: customize)\n}\n\n\nPrompt references:\n\n@Video1 — the input video\n@Image1, @Image2 — reference images for style/appearance\n@Element1, @Element2 — elements (characters/objects) to inject\nInput 
Validation\n\nThe skill validates inputs before submission. For multi-input models, ensure all required fields are provided:\n\n# Check what a model needs\npython3 scripts/fal_client.py model-info \"fal-ai/kling-video/o3/standard/video-to-video/edit\"\n\n# List all models with their requirements\npython3 scripts/fal_client.py models\n\n\nBefore submitting, verify:\n\n✅ All required fields are present and non-empty\n✅ File fields (image_url, video_url, etc.) are URLs or base64 data URIs\n✅ Arrays (image_urls) have at least one item\n✅ Video files are within limits (200MB, 720-2160p)\n\nExample validation output:\n\n⚠️  Note: Reference video in prompt as @Video1\n⚠️  Note: Max 4 total elements (video + images combined)\n❌ Validation failed:\n   - Missing required field: video_url\n\nUsage\nCLI Commands\n# Check API key\npython3 scripts/fal_client.py check-key\n\n# Submit a request\npython3 scripts/fal_client.py submit \"fal-ai/nano-banana-pro\" '{\"prompt\": \"A sunset over mountains\"}'\n\n# Check status\npython3 scripts/fal_client.py status \"fal-ai/nano-banana-pro\" \"<request_id>\"\n\n# Get result\npython3 scripts/fal_client.py result \"fal-ai/nano-banana-pro\" \"<request_id>\"\n\n# Poll all pending requests\npython3 scripts/fal_client.py poll\n\n# List pending requests\npython3 scripts/fal_client.py list\n\n# Convert local image to base64 data URI\npython3 scripts/fal_client.py to-data-uri /path/to/image.jpg\n\n# Convert local video to base64 data URI (with validation)\npython3 scripts/fal_client.py video-to-uri /path/to/video.mp4\n\nPython Usage\nimport sys\nsys.path.insert(0, 'scripts')\nfrom fal_client import submit, check_status, get_result, image_to_data_uri, poll_pending\n\n# Text to image\nresult = submit('fal-ai/nano-banana-pro', {\n    'prompt': 'A futuristic city at night'\n})\nprint(result['request_id'])\n\n# Image to image (with local file)\nimg_uri = image_to_data_uri('/path/to/photo.jpg')\nresult = submit('fal-ai/nano-banana-pro/edit', {\n    
'prompt': 'Transform into watercolor painting',\n    'image_urls': [img_uri]\n})\n\n# Poll until complete\ncompleted = poll_pending()\nfor req in completed:\n    if 'result' in req:\n        print(req['result']['images'][0]['url'])\n\nQueue System\n\nfal.ai uses async queues. Requests go through stages:\n\nIN_QUEUE → waiting\nIN_PROGRESS → generating\nCOMPLETED → done, fetch result\nFAILED → error occurred\n\nPending requests are saved to ~/.openclaw/workspace/fal-pending.json and survive restarts.\n\nPolling Strategy\n\nManual: Run python3 scripts/fal_client.py poll periodically.\n\nHeartbeat: Add to HEARTBEAT.md:\n\n- Poll fal.ai pending requests if any exist\n\n\nCron: Schedule polling every few minutes for background jobs.\n\nAdding New Models\nFind the model on fal.ai and check its /api page\nAdd entry to references/models.json with input/output schema\nTest with a simple request\n\nNote: Queue URLs use base model path (e.g., fal-ai/flux not fal-ai/flux/dev/image-to-image). The script handles this automatically.\n\nFiles\nskills/fal-ai/\n├── SKILL.md                    ← This file\n├── scripts/\n│   └── fal_client.py           ← CLI + Python library\n└── references/\n    └── models.json             ← Model schemas\n\nTroubleshooting\n\n\"No FAL_KEY found\" → Add key to TOOLS.md or set FAL_KEY env var\n\n405 Method Not Allowed → URL routing issue, ensure using base model path for status/result\n\nRequest stuck → Check fal-pending.json, may need manual cleanup"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Sxela/falai",
    "publisherUrl": "https://clawhub.ai/Sxela/falai",
    "owner": "Sxela",
    "version": "1.0.2",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/falai",
    "downloadUrl": "https://openagent3.xyz/downloads/falai",
    "agentUrl": "https://openagent3.xyz/skills/falai/agent",
    "manifestUrl": "https://openagent3.xyz/skills/falai/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/falai/agent.md"
  }
}