{
  "schemaVersion": "1.0",
  "item": {
    "slug": "teamclaw",
    "name": "teamclaw_visualization",
    "source": "tencent",
    "type": "skill",
    "category": "Data Analysis",
    "sourceUrl": "https://clawhub.ai/Avalon-467/teamclaw",
    "canonicalUrl": "https://clawhub.ai/Avalon-467/teamclaw",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/teamclaw",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=teamclaw",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "OASIS_GUIDE.md",
      "README.md",
      "SKILL.md",
      "chatbot/QQbot.py",
      "chatbot/setup.py",
      "chatbot/telegrambot.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/teamclaw"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/teamclaw",
    "agentPageUrl": "https://openagent3.xyz/skills/teamclaw/agent",
    "manifestUrl": "https://openagent3.xyz/skills/teamclaw/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/teamclaw/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "TeamClaw: Agent Subsystem Skill",
        "body": "https://github.com/Avalon-467/Teamclaw"
      },
      {
        "title": "Introduction",
        "body": "TeamClaw is an OpenClaw-like multi-agent sub-platform with a built-in lightweight agent (similar to OpenClaw's), featuring computer use capabilities and social platform integrations (e.g., Telegram). It can run independently without blocking the main agent, or be directly controlled by an OpenClaw agent to orchestrate the built-in OASIS collaboration platform. It also supports exposing the frontend to the public internet via Cloudflare, enabling remote visual multi-agent workflow programming from mobile devices or any browser.\n\nTeamClaw is a versatile AI Agent service providing:\n\nConversational Agent: A LangGraph-based multi-tool AI assistant supporting streaming/non-streaming conversations\nOASIS Forum: A multi-expert parallel discussion/execution engine for orchestrating multiple agents\nScheduled Tasks: An APScheduler-based task scheduling center\nBark Push: Mobile push notifications\nFrontend Web UI: A complete chat interface"
      },
      {
        "title": "Skill Scripts",
        "body": "All scripts are located in selfskill/scripts/, invoked uniformly via the run.sh entry point, all non-interactive.\n\nselfskill/scripts/\n run.sh          # Main entry (start/stop/status/setup/add-user/configure)\n adduser.py      # Non-interactive user creation\n configure.py    # Non-interactive .env configuration management"
      },
      {
        "title": "Quick Start",
        "body": "All commands are executed in the project root directory.\n\nThree-step launch flow: setup → configure → start"
      },
      {
        "title": "1. First Deployment",
        "body": "# Install dependencies\nbash selfskill/scripts/run.sh setup\n\n# Initialize configuration file\nbash selfskill/scripts/run.sh configure --init\n\n# Configure LLM (required)\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-your-key \\\n  LLM_BASE_URL=https://api.deepseek.com \\\n  LLM_MODEL=deepseek-chat\n\n# ⚠️ Create user account (REQUIRED — without this you CANNOT log in to the Web UI or call API)\nbash selfskill/scripts/run.sh add-user system MySecurePass123\n\n⚠️ You MUST create at least one user account before starting the service!\n\nThe Web UI login page requires username + password.\nAll API calls require Authorization: Bearer <user_id>:<password> (or INTERNAL_TOKEN:<user_id>).\nIf you skip this step, you will be locked out of the entire system.\nYou can create multiple users. The first argument is the username, the second is the password."
      },
      {
        "title": "2. Start / Stop / Status",
        "body": "bash selfskill/scripts/run.sh start     # Start in background\nbash selfskill/scripts/run.sh status    # Check status\nbash selfskill/scripts/run.sh stop      # Stop service"
      },
      {
        "title": "3. Bark Push vs Chatbot (Telegram/QQ) — Startup Differences",
        "body": "| Component | How it starts | Configuration needed | Notes |\n| --- | --- | --- | --- |\n| Bark Push (port 58010) | Automatically started by launcher.py | None — works out of the box | A standalone binary (bin/bark-server), auto-downloaded on first setup. No env vars needed. |\n| Telegram Bot | Requires manual setup | TELEGRAM_BOT_TOKEN, TELEGRAM_ALLOWED_USERS in .env | launcher.py calls chatbot/setup.py, which has an interactive menu (input()) and will block in headless/background mode. To avoid blocking, configure the bot tokens in .env beforehand and start the bot separately: nohup python chatbot/telegrambot.py > logs/telegrambot.log 2>&1 & |\n| QQ Bot | Requires manual setup | QQ_APP_ID, QQ_BOT_SECRET, QQ_BOT_USERNAME in .env | Same as Telegram — interactive setup will block in headless mode. Start separately: nohup python chatbot/QQbot.py > logs/qqbot.log 2>&1 & |\n\n⚠️ Important for Agent/headless usage: The chatbot/setup.py script contains interactive input() prompts. When launcher.py runs in the background (via run.sh start), if chatbot/setup.py exists it will be called and block indefinitely waiting for user input. To prevent this:\n\nEither remove/rename chatbot/setup.py before starting, OR\nPre-configure all bot tokens in .env and start bots independently (bypassing setup.py)."
      },
      {
        "title": "4. Configuration Management",
        "body": "# View current configuration (sensitive values masked)\nbash selfskill/scripts/run.sh configure --show\n\n# Set a single item\nbash selfskill/scripts/run.sh configure PORT_AGENT 51200\n\n# Batch set\nbash selfskill/scripts/run.sh configure --batch TTS_MODEL=gemini-2.5-flash-preview-tts TTS_VOICE=charon"
      },
      {
        "title": "Configuration Options",
        "body": "| Option | Description | Default |\n| --- | --- | --- |\n| LLM_API_KEY | LLM API key (required) |  |\n| LLM_BASE_URL | LLM API URL | https://api.deepseek.com |\n| LLM_MODEL | Model name | deepseek-chat |\n| LLM_PROVIDER | Provider (google/anthropic/deepseek/openai, auto-inferred) | Auto |\n| LLM_VISION_SUPPORT | Vision support (auto-inferred) | Auto |\n| PORT_AGENT | Agent main service port | 51200 |\n| PORT_SCHEDULER | Scheduled task port | 51201 |\n| PORT_OASIS | OASIS forum port | 51202 |\n| PORT_FRONTEND | Web UI port | 51209 |\n| PORT_BARK | Bark push port | 58010 |\n| TTS_MODEL | TTS model (optional) |  |\n| TTS_VOICE | TTS voice (optional) |  |\n| OPENCLAW_API_URL | OpenClaw backend service URL (full path, including /v1/chat/completions) | http://127.0.0.1:18789/v1/chat/completions |\n| OPENCLAW_API_KEY | OpenClaw backend service API key (optional) |  |\n| OPENCLAW_SESSIONS_FILE | Absolute path to OpenClaw sessions.json file (required when using OpenClaw) | None |\n| INTERNAL_TOKEN | Internal communication secret (auto-generated) | Auto |"
      },
      {
        "title": "Ports & Services",
        "body": "| Port | Service |\n| --- | --- |\n| 51200 | AI Agent main service |\n| 51201 | Scheduled tasks |\n| 51202 | OASIS forum |\n| 51209 | Web UI |"
      },
      {
        "title": "Method 1: User Authentication",
        "body": "Authorization: Bearer <user_id>:<password>"
      },
      {
        "title": "Method 2: Internal Token (for inter-service calls, recommended)",
        "body": "Authorization: Bearer <INTERNAL_TOKEN>:<user_id>\n\nINTERNAL_TOKEN is auto-generated on first startup; view it via configure --show-raw."
      },
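      {
        "title": "Auth Examples (illustrative)",
        "body": "A minimal sketch of both auth methods against the chat endpoint, assuming the service runs on the default port 51200 and that a user named system with password MyPass123 exists (both are placeholders you must replace):\n\n# Method 1: user credentials\ncurl -X POST http://127.0.0.1:51200/v1/chat/completions \\\n  -H 'Authorization: Bearer system:MyPass123' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"stream\":false,\"session_id\":\"default\"}'\n\n# Method 2: internal token (read the real value via configure --show-raw)\ncurl -X POST http://127.0.0.1:51200/v1/chat/completions \\\n  -H 'Authorization: Bearer <INTERNAL_TOKEN>:system' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"stream\":false,\"session_id\":\"default\"}'"
      },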
      {
        "title": "Core API",
        "body": "Base URL: http://127.0.0.1:51200"
      },
      {
        "title": "Chat (OpenAI-compatible)",
        "body": "POST /v1/chat/completions\nAuthorization: Bearer <token>\n\n{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":true,\"session_id\":\"my-session\"}"
      },
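      {
        "title": "Example: Calling Chat from Python (illustrative)",
        "body": "A minimal non-streaming sketch using only the Python standard library, assuming the default port, a system/MyPass123 user (placeholders), and the OpenAI-compatible response shape (choices[0].message.content) implied by the endpoint above:\n\nimport json, urllib.request\n\n# Build the request against the OpenAI-compatible chat endpoint\nreq = urllib.request.Request(\n    'http://127.0.0.1:51200/v1/chat/completions',\n    data=json.dumps({\n        'model': 'mini-timebot',\n        'messages': [{'role': 'user', 'content': 'Hello'}],\n        'stream': False,\n        'session_id': 'default',\n    }).encode(),\n    headers={\n        'Content-Type': 'application/json',\n        'Authorization': 'Bearer system:MyPass123',\n    },\n)\nwith urllib.request.urlopen(req) as resp:\n    reply = json.load(resp)\nprint(reply['choices'][0]['message']['content'])"
      },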
      {
        "title": "System Trigger (internal call)",
        "body": "POST /system_trigger\nX-Internal-Token: <INTERNAL_TOKEN>\n\n{\"user_id\":\"system\",\"text\":\"Please execute a task\",\"session_id\":\"task-001\"}"
      },
      {
        "title": "Cancel Session",
        "body": "POST /cancel\n\n{\"user_id\":\"<user_id>\",\"session_id\":\"<session_id>\"}"
      },
      {
        "title": "OASIS Four Operating Modes (Default: Discussion Mode)",
        "body": "📖 Dedicated OASIS usage guide (especially for OpenClaw agent integration): OASIS_GUIDE.md\n\nThe \"four modes\" are two orthogonal switches:\n\nDiscussion vs Execution: Determines whether expert output is \"forum-style discussion/voting\" or \"workflow-style execution/deliverables\".\nSynchronous vs Detach: Determines whether the caller blocks waiting for results."
      },
      {
        "title": "1) Discussion Mode vs Execution Mode",
        "body": "Discussion Mode (discussion=true, default)\n\nPurpose: Multiple experts provide different perspectives, pros/cons analysis, clarify disputes, and can form consensus.\nUse case: Solution reviews, technical route selection, questions that need \"why\".\n\nExecution Mode (discussion=false)\n\nPurpose: Use OASIS as an orchestrator to complete tasks in planned sequential/parallel order, emphasizing direct output (code/scripts/checklists/finalized plans).\nUse case: Delivery tasks with clear objectives that don't need debate."
      },
      {
        "title": "2) Synchronous Mode vs Detach Mode",
        "body": "Detach (detach=true, default)\n\nBehavior: Returns topic_id immediately, continues running/discussing in the background; later use check_oasis_discussion(topic_id) to track progress and results.\nUse case: Most tasks, especially multi-round/multi-expert/long-running/tool-calling tasks.\n\nSynchronous (detach=false)\n\nBehavior: After calling post_to_oasis, waits for completion and returns the final result directly.\nUse case: Quick tasks where you need the deliverable immediately to continue iterating."
      },
      {
        "title": "3) Auto-selection Rules (Recommended Default Strategy)",
        "body": "When not explicitly specified, the following default strategy is recommended:\n\nDefault = Discussion + Detach\n\ndiscussion=true\ndetach=true\n\n\n\nSwitch to Execution Mode when these signals appear:\n\n\"Give me the final version / copy-pasteable / executable script / just conclusions no discussion\"\n\"Generate SOP / checklist / table step by step and finalize\"\n\n\n\nSwitch to Synchronous Mode when these signals appear:\n\n\"Wait for the result / I need it now / give me the answer directly\"\nQuick single-round tasks where the deliverable is needed immediately"
      },
      {
        "title": "4) Four Combinations Quick Reference",
        "body": "| Combination | Parameters | Returns | Use Case |\n| --- | --- | --- | --- |\n| Discussion + Detach (default) | discussion=true, detach=true | topic_id, check later | Decision/review/collect opinions |\n| Discussion + Sync | discussion=true, detach=false | See discussion & conclusion on the spot | Quick discussion needing an immediate result |\n| Execution + Detach | discussion=false, detach=true | topic_id, check later | Long execution/complex pipelines |\n| Execution + Sync | discussion=false, detach=false | Direct deliverables | Generate code/plans/checklists |"
      },
      {
        "title": "OASIS Four Agent Types",
        "body": "OASIS supports four types of agents, distinguished by the name format in schedule_yaml:\n\n| # | Type | Name Format | Engine Class | Description |\n| --- | --- | --- | --- | --- |\n| 1 | Direct LLM | tag#temp#N | ExpertAgent | Stateless single LLM call. Each round: read all posts → one LLM call → publish + vote. No cross-round memory. tag maps to a preset expert name/persona; N is the instance number (the same expert can have multiple copies). |\n| 2 | Oasis Session | tag#oasis#id | SessionExpert (oasis) | OASIS-managed stateful bot session. tag maps to a preset expert; the persona is injected as the system prompt on the first round. The bot retains conversation memory across rounds (incremental context). id can be any string; a new ID auto-creates a session on first use. |\n| 3 | Regular Agent | Title#session_id | SessionExpert (regular) | Connects to an existing agent session (e.g., Assistant#default, Coder#my-project). No identity injection; the session's own system prompt defines the agent. Suitable for bringing personal bot sessions into discussions. |\n| 4 | External API | tag#ext#id | ExternalExpert | Directly calls any OpenAI-compatible external API (DeepSeek, GPT-4, Ollama, another TeamClaw instance, etc.). Does not go through the local agent. The external service is assumed stateful. Supports custom request headers via the YAML headers field. |"
      },
      {
        "title": "Session ID Format",
        "body": "tag#temp#N            ExpertAgent   (stateless, direct LLM)\ntag#oasis#<id>        SessionExpert (oasis-managed, stateful bot)\nTitle#session_id      SessionExpert (regular agent session)\ntag#ext#<id>          ExternalExpert (external API, e.g. OpenClaw agent)\n\nSpecial Suffix:\n\nAppending #new to the end of any session name forces creation of a brand new session (ID replaced with a random UUID, ensuring no reuse):\n\ncreative#oasis#abc#new → #new stripped, ID replaced with UUID\nAssistant#my-session#new → Same processing\n\nOasis Session Conventions:\n\nOasis sessions are identified by #oasis# in session_id (e.g., creative#oasis#ab12cd34)\nStored in the regular Agent checkpoint DB (data/agent_memory.db), no separate storage\nAuto-created on first use, no pre-creation needed\nThe tag part maps to a preset expert configuration to find the persona"
      },
      {
        "title": "YAML Example",
        "body": "version: 1\nplan:\n  # Type 1: Direct LLM (stateless, fast)\n  - expert: \"creative#temp#1\"\n  - expert: \"critical#temp#2\"\n\n  # Type 2: Oasis session (stateful, with memory)\n  - expert: \"data#oasis#analysis01\"\n  - expert: \"synthesis#oasis#new#new\"   # Force new session\n\n  # Type 3: Regular agent session (your existing bot)\n  - expert: \"Assistant#default\"\n  - expert: \"Coder#my-project\"\n\n  # Type 4: External API (DeepSeek, GPT-4, etc.)\n  # Note: api_key is auto-read from OPENCLAW_API_KEY env var; use \"****\" mask in YAML (never write plaintext keys)\n  - expert: \"deepseek#ext#ds1\"\n\n  # Type 4: OpenClaw External API (local Agent service)\n  # api_key auto-resolved from OPENCLAW_API_KEY env var when set to \"****\"\n  - expert: \"coder#ext#oc1\"\n    api_url: \"http://127.0.0.1:23001/v1/chat/completions\"\n    api_key: \"****\"              # Masked — real key read from OPENCLAW_API_KEY env var at runtime\n    model: \"agent:main:test1\"    # agent:<agent_name>:<session>, session auto-created if not exists\n\n  # Parallel execution\n  - parallel:\n      - expert: \"creative#temp#1\"\n        instruction: \"Analyze from innovation perspective\"\n      - expert: \"critical#temp#2\"\n        instruction: \"Analyze from risk perspective\"\n\n  # All experts speak + manual injection\n  - all_experts: true\n  - manual:\n      author: \"Moderator\"\n      content: \"Please focus on feasibility\""
      },
      {
        "title": "DAG Mode — Dependency-Driven Parallel Execution",
        "body": "When the workflow has fan-in (a node has multiple predecessors) or fan-out (a node has multiple successors), use DAG mode with id and depends_on fields. The engine maximizes parallelism — each node starts as soon as all its dependencies are satisfied.\n\nDAG YAML Example:\n\nversion: 1\nrepeat: false\nplan:\n  - id: research\n    expert: \"creative#temp#1\"                # Root — starts immediately\n  - id: analysis\n    expert: \"critical#temp#1\"                # Root — runs in PARALLEL with research\n  - id: synthesis\n    expert: \"synthesis#temp#1\"\n    depends_on: [research, analysis]         # Fan-in: waits for BOTH to complete\n  - id: review\n    expert: \"data#temp#1\"\n    depends_on: [synthesis]                  # Runs after synthesis\n\nDAG Rules:\n\nEvery step must have a unique id field.\ndepends_on is a list of step ids that must complete before this step starts. Omit for root nodes.\nThe graph must be acyclic (no circular dependencies).\nSteps with no dependency relationship run in parallel automatically.\nThe visual Canvas auto-detects fan-in/fan-out and generates DAG format.\nmanual steps can also have id/depends_on."
      },
      {
        "title": "External API (Type 4) Detailed Configuration",
        "body": "Type 4 external agents support additional configuration fields in YAML steps:\n\nversion: 1\nplan:\n  - expert: \"#ext#analyst\"\n    api_url: \"https://api.deepseek.com\"          # Required: External API base URL (auto-completes to /v1/chat/completions)\n    api_key: \"****\"                               # Masked — real key auto-read from OPENCLAW_API_KEY env var at runtime\n    model: \"deepseek-chat\"                        # Optional: Model name, default gpt-3.5-turbo\n    headers:                                      # Optional: Custom HTTP headers (key-value dict)\n      X-Custom-Header: \"value\"\n\n🔒 API Key Security: You no longer need to write plaintext API keys in YAML. Set api_key: \"****\" (or omit it entirely) and the system will automatically read the real key from the OPENCLAW_API_KEY environment variable at runtime. The frontend canvas also displays **** instead of the real key. If you do write a plaintext key, it will still work (backward compatible).\n\nConfiguration Field Description:\n\n| Field | Required | Description |\n| --- | --- | --- |\n| api_url | Yes | External API address; the path is auto-completed to /v1/chat/completions |\n| api_key | No | Use the **** mask — auto-read from the OPENCLAW_API_KEY env var. Plaintext keys also work (backward compatible) |\n| model | No | Model name, default gpt-3.5-turbo |\n| headers | No | Any key-value dict, merged into the HTTP request headers |\n\nOpenClaw-specific Configuration:\n\nOpenClaw is a locally running OpenAI-compatible Agent service. After setting the OpenClaw-specific endpoints in .env, the frontend orchestration panel auto-fills api_url and api_key when you drag in an OpenClaw expert; no manual input is needed:\n\n# Configure OpenClaw endpoint and sessions file path\nbash selfskill/scripts/run.sh configure --batch \\\n  OPENCLAW_SESSIONS_FILE=./data/sessions.json \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENCLAW_API_KEY=your-openclaw-key-if-needed\n\n**Note:**\n\nOPENCLAW_SESSIONS_FILE is a prerequisite for the OpenClaw feature and must point to the absolute path of OpenClaw's sessions.json file. The frontend orchestration panel will not load OpenClaw sessions if it is unconfigured.\nPath Convention: ./agents/main/sessions/sessions.json is a common path structure for OpenClaw agent sessions. This convention lets the system properly access and orchestrate OpenClaw agents.\nSession Management: Access to session information is required for OpenClaw agent orchestration, enabling multi-agent workflow coordination and visual canvas operations.\nOPENCLAW_API_URL should contain the full path (including /v1/chat/completions); the system auto-strips the suffix to generate the base URL for YAML. The api_url field in YAML only needs the base URL (e.g., http://127.0.0.1:18789); the engine auto-completes the path.\nIf your OpenClaw service runs on a non-default port, be sure to adjust these settings.\n\nOpenClaw model Field Format:\n\nagent:<agent_name>:<session_name>\n\nagent_name: Agent name in OpenClaw, usually main\nsession_name: Session name, e.g., test1, default, etc. You can enter a non-existent session name to auto-create it\n\nExamples:\n\nagent:main:default → Use the main agent's default session\nagent:main:test1 → Use the main agent's test1 session (auto-created if it does not exist)\nagent:main:code-review → Use the main agent's code-review session\n\nRequest Header Assembly Logic:\nFinal request headers = Content-Type: application/json + Authorization: Bearer <api_key> (if present) + all key-value pairs from the YAML headers field.\n\nx-openclaw-session-key — Deterministic OpenClaw Session Routing:\n\nWhen calling an OpenClaw agent via External API (Type 4), the x-openclaw-session-key HTTP header is the key mechanism for routing requests to a specific, deterministic OpenClaw session. Without this header, OpenClaw may not correctly associate the request with the intended session.\n\nThe frontend orchestration panel automatically sets this header when you drag an OpenClaw session onto the canvas.\nWhen writing YAML manually or calling the API programmatically, you must include this header in the headers field to ensure session determinism.\n\n# Example: Connecting to a specific OpenClaw session\n- expert: \"coder#ext#oc1\"\n  api_url: \"http://127.0.0.1:18789\"\n  api_key: \"****\"                                      # ← Masked; real key from OPENCLAW_API_KEY env var\n  model: \"agent:main:my-session\"\n  headers:\n    x-openclaw-session-key: \"agent:main:my-session\"   # ← This header determines the exact OpenClaw session\n\nThe value of x-openclaw-session-key should match the model field's session identifier (format: agent:<agent_name>:<session_name>). This ensures the external request is routed to the correct OpenClaw agent session, maintaining conversation continuity and state."
      },
      {
        "title": "Using OASIS Server Independently",
        "body": "The OASIS Server (port 51202) can be used independently of the Agent main service. External scripts, other services, or manual curl can directly operate all OASIS features without going through MCP tools or Agent conversations.\n\nIndependent Use Scenarios:\n\nInitiate multi-expert discussions/executions from external scripts\nDebug workflow orchestration\nIntegrate OASIS as a microservice into other systems\nManage experts, sessions, workflows, and other resources\n\nPrerequisites:\n\nOASIS service is running (bash selfskill/scripts/run.sh start starts all services simultaneously)\nAll endpoints use the user_id parameter for user isolation (no Authorization header needed)\n\nAPI Overview:\n\n| Function | Method | Path |\n| --- | --- | --- |\n| List experts | GET | /experts?user_id=xxx |\n| Create custom expert | POST | /experts/user |\n| Update/delete custom expert | PUT/DELETE | /experts/user/{tag} |\n| List oasis sessions | GET | /sessions/oasis?user_id=xxx |\n| Save workflow | POST | /workflows |\n| List workflows | GET | /workflows?user_id=xxx |\n| YAML → Layout | POST | /layouts/from-yaml |\n| Create discussion/execution | POST | /topics |\n| View discussion details | GET | /topics/{topic_id}?user_id=xxx |\n| Get conclusion (blocking) | GET | /topics/{topic_id}/conclusion?user_id=xxx&timeout=300 |\n| SSE real-time stream | GET | /topics/{topic_id}/stream?user_id=xxx |\n| Cancel discussion | DELETE | /topics/{topic_id}?user_id=xxx |\n| List all topics | GET | /topics?user_id=xxx |\n\nThese endpoints share the same backend implementation as MCP tools, ensuring consistent behavior."
      },
      {
        "title": "OASIS Discussion/Execution",
        "body": "POST http://127.0.0.1:51202/topics\n\n{\"question\":\"Discussion topic\",\"user_id\":\"system\",\"max_rounds\":3,\"discussion\":true,\"schedule_file\":\"...\",\"schedule_yaml\":\"...\",\"callback_url\":\"http://127.0.0.1:51200/system_trigger\",\"callback_session_id\":\"my-session\"}\n\nPrefer schedule_file to avoid pasting the same YAML repeatedly; it is the absolute path to the YAML workflow file, usually under /XXXXX/TeamClaw/data/user_files/username."
      },
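      {
        "title": "Example: Create a Topic and Wait for the Conclusion (illustrative)",
        "body": "A minimal Python sketch against the /topics endpoints above, standard library only; the user_id, the inline YAML, and the topic_id response field name are assumptions/placeholders, not confirmed details:\n\nimport json, urllib.request\n\nBASE = 'http://127.0.0.1:51202'\n\n# Create a discussion topic (discussion mode)\nreq = urllib.request.Request(\n    BASE + '/topics',\n    data=json.dumps({\n        'user_id': 'system',\n        'question': 'Discussion topic',\n        'max_rounds': 3,\n        'discussion': True,\n        'schedule_yaml': 'version: 1\\nplan:\\n  - expert: \"creative#temp#1\"',\n    }).encode(),\n    headers={'Content-Type': 'application/json'},\n)\nwith urllib.request.urlopen(req) as resp:\n    topic = json.load(resp)\ntopic_id = topic['topic_id']  # field name assumed from the API description above\n\n# Block until the conclusion is ready (timeout in seconds)\nurl = f'{BASE}/topics/{topic_id}/conclusion?user_id=system&timeout=300'\nwith urllib.request.urlopen(url) as resp:\n    print(json.load(resp))"
      },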
      {
        "title": "Externally Participating in OASIS Server via curl (Complete Methods)",
        "body": "The OASIS Server (port 51202), in addition to being called by MCP tools, also supports direct curl operations for external scripts or debugging. All endpoints use the user_id parameter for user isolation.\n\n1. Expert Management\n\n# List all experts (public + user custom)\ncurl 'http://127.0.0.1:51202/experts?user_id=xinyuan'\n\n# Create custom expert\ncurl -X POST 'http://127.0.0.1:51202/experts/user' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"name\":\"Product Manager\",\"tag\":\"pm\",\"persona\":\"You are an experienced product manager skilled in requirements analysis and product planning\",\"temperature\":0.7}'\n\n# Update custom expert\ncurl -X PUT 'http://127.0.0.1:51202/experts/user/pm' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"persona\":\"Updated expert description\"}'\n\n# Delete custom expert\ncurl -X DELETE 'http://127.0.0.1:51202/experts/user/pm?user_id=xinyuan'\n\n2. Session Management\n\n# List OASIS-managed expert sessions (sessions containing #oasis#)\ncurl 'http://127.0.0.1:51202/sessions/oasis?user_id=xinyuan'\n\n3. Workflow Management\n\n# List user's saved workflows\ncurl 'http://127.0.0.1:51202/workflows?user_id=xinyuan'\n\n# Save workflow (auto-generate layout)\ncurl -X POST 'http://127.0.0.1:51202/workflows' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"name\":\"trio_discussion\",\"schedule_yaml\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"description\":\"Trio discussion\",\"save_layout\":true}'\n\n4. Layout Generation\n\n# Generate layout from YAML\ncurl -X POST 'http://127.0.0.1:51202/layouts/from-yaml' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"yaml_source\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"layout_name\":\"trio_layout\"}'\n\n5. Discussion/Execution\n\n# Create discussion topic (synchronous, wait for conclusion)\ncurl -X POST 'http://127.0.0.1:51202/topics' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"question\":\"Discussion topic\",\"max_rounds\":3,\"schedule_yaml\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"discussion\":true}'\n\n# Create discussion topic (async, returns topic_id)\ncurl -X POST 'http://127.0.0.1:51202/topics' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"question\":\"Discussion topic\",\"max_rounds\":3,\"schedule_yaml\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"discussion\":true,\"callback_url\":\"http://127.0.0.1:51200/system_trigger\",\"callback_session_id\":\"my-session\"}'\n\n# View discussion details\ncurl 'http://127.0.0.1:51202/topics/{topic_id}?user_id=xinyuan'\n\n# Get discussion conclusion (blocking wait)\ncurl 'http://127.0.0.1:51202/topics/{topic_id}/conclusion?user_id=xinyuan&timeout=300'\n\n# Cancel discussion\ncurl -X DELETE 'http://127.0.0.1:51202/topics/{topic_id}?user_id=xinyuan'\n\n# List all discussion topics\ncurl 'http://127.0.0.1:51202/topics?user_id=xinyuan'\n\n6. Real-time Stream\n\n# SSE real-time update stream (discussion mode)\ncurl 'http://127.0.0.1:51202/topics/{topic_id}/stream?user_id=xinyuan'\n\nStorage Locations:\n\nWorkflows (YAML): data/user_files/{user}/oasis/yaml/{file}.yaml (canvas layouts are converted from YAML in real-time, no longer stored as separate layout JSON)\nUser custom experts: data/oasis_user_experts/{user}.json\nDiscussion records: data/oasis_topics/{user}/{topic_id}.json\n\nNote: These endpoints share the same backend implementation as MCP tools list_oasis_experts, add_oasis_expert, update_oasis_expert, delete_oasis_expert, list_oasis_sessions, set_oasis_workflow, list_oasis_workflows, yaml_to_layout, post_to_oasis, check_oasis_discussion, cancel_oasis_discussion, list_oasis_topics, ensuring consistent behavior."
      },
      {
        "title": "Example Configuration Reference",
        "body": "Below is an actual running configuration example (sensitive info redacted):\n\nbash selfskill/scripts/run.sh configure --init\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxx4c74 \\\n  LLM_BASE_URL=https://deepseek.com \\\n  LLM_MODEL=deepseek-chat \\\n  LLM_VISION_SUPPORT=true \\\n  TTS_MODEL=gemini-2.5-flash-preview-tts \\\n  TTS_VOICE=charon \\\n  PORT_AGENT=51200 \\\n  PORT_SCHEDULER=51201 \\\n  PORT_OASIS=51202 \\\n  PORT_FRONTEND=51209 \\\n  PORT_BARK=58010 \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENAI_STANDARD_MODE=false\nbash selfskill/scripts/run.sh add-user system <your-password>\n\nOutput after configure --show:\n\nPORT_SCHEDULER=51201\n  PORT_AGENT=51200\n  PORT_FRONTEND=51209\n  PORT_OASIS=51202\n  OASIS_BASE_URL=http://127.0.0.1:51202\n  PORT_BARK=58010\n  INTERNAL_TOKEN=f1aa****57e7          # Auto-generated, do not leak\n  LLM_API_KEY=sk-7****4c74\n  LLM_BASE_URL=https://deepseek.com\n  LLM_MODEL=deepseek-chat\n  LLM_VISION_SUPPORT=true\n  TTS_MODEL=gemini-2.5-flash-preview-tts\n  TTS_VOICE=charon\n  OPENAI_STANDARD_MODE=false\n\nNote: INTERNAL_TOKEN is auto-generated on first startup; PUBLIC_DOMAIN / BARK_PUBLIC_URL are auto-written by the tunnel; no manual configuration needed."
      },
      {
        "title": "Typical Usage Flow",
        "body": "cd /home/avalon/TeamClaw\n\n# First-time configuration\nbash selfskill/scripts/run.sh setup\nbash selfskill/scripts/run.sh configure --init\nbash selfskill/scripts/run.sh configure --batch LLM_API_KEY=sk-xxx LLM_BASE_URL=https://api.deepseek.com LLM_MODEL=deepseek-chat\nbash selfskill/scripts/run.sh add-user system MyPass123\n\n# Start\nbash selfskill/scripts/run.sh start\n\n# Call API\ncurl -X POST http://127.0.0.1:51200/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer system:MyPass123\" \\\n  -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":false,\"session_id\":\"default\"}'\n\n# Stop\nbash selfskill/scripts/run.sh stop"
      },
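      {
        "title": "Scripted Smoke Test (example)",
        "body": "The usage flow above can be wrapped in a small script that waits for the agent port to come up before calling the API. A minimal sketch, not part of the upstream scripts; it assumes the system:MyPass123 user created in the flow above and the default port 51200:\n\n#!/usr/bin/env bash\nset -euo pipefail\ncd /home/avalon/TeamClaw\n\nbash selfskill/scripts/run.sh start\n\n# Wait up to ~30 s for the agent service to answer a chat request\nfor i in $(seq 1 30); do\n  curl -sf -o /dev/null http://127.0.0.1:51200/v1/chat/completions \\\n    -H 'Content-Type: application/json' \\\n    -H 'Authorization: Bearer system:MyPass123' \\\n    -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"stream\":false,\"session_id\":\"smoke\"}' && break\n  sleep 1\ndone\n\nbash selfskill/scripts/run.sh status"
      },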
      {
        "title": "Important Notes",
        "body": "All skill scripts live in selfskill/scripts/ and do not affect the original project's functionality.\n\nProcess management uses PID files; start is idempotent.\n\nDo not leak INTERNAL_TOKEN.\n\nLog path: logs/launcher.log\n\nBe sure to tell users how to open the visual interface and how to log in to an account to join discussions.\n\nThe OpenClaw session file path can be inferred from the example path and the current skill path. If you are an OpenClaw agent, be sure to complete the full OpenClaw-related configuration."
      },
      {
        "title": "Skill Scripts",
        "body": "All scripts are located in selfskill/scripts/ and invoked uniformly via the run.sh entry point; all are non-interactive.\n\nselfskill/scripts/\n run.sh          # Main entry (start/stop/status/setup/add-user/configure)\n adduser.py      # Non-interactive user creation\n configure.py    # Non-interactive .env configuration management"
      },
      {
        "title": "1. First Deployment",
        "body": "# Install dependencies\nbash selfskill/scripts/run.sh setup\n\n# Initialize configuration file\nbash selfskill/scripts/run.sh configure --init\n\n# Configure LLM (required)\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-your-key \\\n  LLM_BASE_URL=https://api.deepseek.com \\\n  LLM_MODEL=deepseek-chat\n\n# Create user account (required for Web UI login and API calls)\nbash selfskill/scripts/run.sh add-user system MySecurePass123"
      },
      {
        "title": "2. Start / Stop / Status",
        "body": "bash selfskill/scripts/run.sh start     # Start in background\nbash selfskill/scripts/run.sh status    # Check status\nbash selfskill/scripts/run.sh stop      # Stop service"
      },
      {
        "title": "3. Configuration Management",
        "body": "# View current configuration (sensitive values masked)\nbash selfskill/scripts/run.sh configure --show\n\n# Set a single item\nbash selfskill/scripts/run.sh configure PORT_AGENT 51200\n\n# Batch set\nbash selfskill/scripts/run.sh configure --batch TTS_MODEL=gemini-2.5-flash-preview-tts TTS_VOICE=charon"
      },
      {
        "title": "Method 1: User Authentication",
        "body": "Authorization: Bearer <user_id>:<password>"
      },
      {
        "title": "Method 2: Internal Token (recommended for inter-service calls)",
        "body": "Authorization: Bearer <INTERNAL_TOKEN>:<user_id>\n\nINTERNAL_TOKEN is auto-generated on first startup; view it via configure --show-raw."
      },
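      {
        "title": "Authentication Examples",
        "body": "Both header forms can be exercised with curl against the chat endpoint. A minimal sketch, not from the upstream docs; it assumes the system:MyPass123 user created earlier, the default port 51200, and that configure --show-raw prints KEY=value lines:\n\n# Method 1: user credentials\ncurl -s http://127.0.0.1:51200/v1/chat/completions \\\n  -H 'Content-Type: application/json' \\\n  -H 'Authorization: Bearer system:MyPass123' \\\n  -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"stream\":false,\"session_id\":\"auth-test\"}'\n\n# Method 2: internal token (read it first via configure --show-raw)\nTOKEN=$(bash selfskill/scripts/run.sh configure --show-raw | grep '^INTERNAL_TOKEN=' | cut -d= -f2)\ncurl -s http://127.0.0.1:51200/v1/chat/completions \\\n  -H 'Content-Type: application/json' \\\n  -H \"Authorization: Bearer $TOKEN:system\" \\\n  -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"stream\":false,\"session_id\":\"auth-test\"}'"
      },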
      {
        "title": "Core API",
        "body": "Base URL: http://127.0.0.1:51200"
      },
      {
        "title": "Chat (OpenAI-compatible)",
        "body": "POST /v1/chat/completions\nAuthorization: Bearer <token>\n\n{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":true,\"session_id\":\"my-session\"}"
      },
      {
        "title": "OASIS Four Operating Modes (Default: Discussion Mode)",
        "body": "📖 Dedicated OASIS usage guide (especially for OpenClaw agent integration): OASIS_GUIDE.md\n\nThe \"four modes\" are two orthogonal switches:\n\nDiscussion vs Execution: determines whether expert output is \"forum-style discussion/voting\" or \"workflow-style execution/deliverables\".\nSynchronous vs Detach: determines whether the caller blocks waiting for results."
      },
      {
        "title": "1) Discussion Mode vs Execution Mode",
        "body": "Discussion Mode (discussion=true, default)\n\nPurpose: multiple experts provide different perspectives and pros/cons analysis, clarify disputes, and can form consensus.\nUse case: solution reviews, technical route selection, questions that need a \"why\".\n\nExecution Mode (discussion=false)\n\nPurpose: use OASIS as an orchestrator to complete tasks in planned sequential/parallel order, emphasizing direct output (code/scripts/checklists/finalized plans).\nUse case: delivery tasks with clear objectives that don't need debate."
      },
      {
        "title": "2) Synchronous Mode vs Detach Mode",
        "body": "Detach (detach=true, default)\n\nBehavior: returns topic_id immediately and continues running/discussing in the background; later use check_oasis_discussion(topic_id) to track progress and results.\nUse case: most tasks, especially multi-round/multi-expert/long-running/tool-calling tasks.\n\nSynchronous (detach=false)\n\nBehavior: post_to_oasis waits for completion and returns the final result directly.\nUse case: quick tasks where you need the deliverable immediately to continue iterating."
      },
      {
        "title": "3) Auto-selection Rules (Recommended Default Strategy)",
        "body": "When not explicitly specified, default = Discussion + Detach:\n\ndiscussion=true\ndetach=true\n\nSwitch to Execution Mode on signals like:\n\n\"Give me the final version / copy-pasteable / executable script / just conclusions, no discussion\"\n\"Generate an SOP / checklist / table step by step and finalize it\"\n\nSwitch to Synchronous Mode on signals like:\n\n\"Wait for the result / I need it now / give me the answer directly\"\nQuick single-round tasks where the deliverable is needed immediately"
      },
      {
        "title": "4) Four Combinations Quick Reference",
        "body": "Combination\tParameters\tReturns\tUse Case\nDiscussion + Detach (default)\tdiscussion=true, detach=true\ttopic_id, check later\tDecision/review/collect opinions\nDiscussion + Sync\tdiscussion=true, detach=false\tDiscussion & conclusion on the spot\tQuick discussion needing an immediate result\nExecution + Detach\tdiscussion=false, detach=true\ttopic_id, check later\tLong execution/complex pipelines\nExecution + Sync\tdiscussion=false, detach=false\tDirect deliverables\tGenerate code/plans/checklists"
      },
      {
        "title": "OASIS Four Agent Types",
        "body": "OASIS supports four types of agents, distinguished by the name format in schedule_yaml:\n\n#\tType\tName Format\tEngine Class\tDescription\n1\tDirect LLM\ttag#temp#N\tExpertAgent\tStateless single LLM call per round; no cross-round memory. tag maps to a preset expert name/persona, N is the instance number (the same expert can have multiple copies).\n2\tOasis Session\ttag#oasis#id\tSessionExpert (oasis)\tOASIS-managed stateful bot session; persona injected as system prompt on the first round. The bot retains conversation memory across rounds (incremental context). id can be any string; a new id auto-creates the session on first use.\n3\tRegular Agent\tTitle#session_id\tSessionExpert (regular)\tConnects to an existing agent session (e.g., Assistant#default, Coder#my-project). No identity injection; the session's own system prompt defines the agent.\n4\tExternal API\ttag#ext#id\tExternalExpert\tCalls any OpenAI-compatible external API (DeepSeek, GPT-4, Ollama, another TeamClaw instance). Does not go through the local agent. Supports custom request headers via the YAML headers field."
      },
      {
        "title": "Session ID Format",
        "body": "tag#temp#N            ExpertAgent   (stateless, direct LLM)\ntag#oasis#<id>        SessionExpert (oasis-managed, stateful bot)\nTitle#session_id      SessionExpert (regular agent session)\ntag#ext#<id>          ExternalExpert (external API, e.g. OpenClaw agent)\n\nSpecial suffix: appending #new to any session name forces creation of a brand-new session (the ID is replaced with a random UUID, ensuring no reuse):\ncreative#oasis#abc#new → #new stripped, ID replaced with UUID\nAssistant#my-session#new → same processing\n\nOasis session conventions:\n\nOasis sessions are identified by #oasis# in session_id (e.g., creative#oasis#ab12cd34)\nStored in the regular Agent checkpoint DB (data/agent_memory.db); no separate storage\nAuto-created on first use, no pre-creation needed\nThe tag part maps to the preset expert configuration to find the persona"
      },
      {
        "title": "YAML Example",
        "body": "version: 1\nplan:\n  # Type 1: Direct LLM (stateless, fast)\n  - expert: \"creative#temp#1\"\n  - expert: \"critical#temp#2\"\n\n  # Type 2: Oasis session (stateful, with memory)\n  - expert: \"data#oasis#analysis01\"\n  - expert: \"synthesis#oasis#new#new\"   # Force new session\n\n  # Type 3: Regular agent session (your existing bot)\n  - expert: \"Assistant#default\"\n  - expert: \"Coder#my-project\"\n\n  # Type 4: External API (DeepSeek, GPT-4, etc.)\n  # Note: the real api_key is read from the OPENCLAW_API_KEY env var; use the \"****\" mask in YAML (never write plaintext keys)\n  - expert: \"deepseek#ext#ds1\"\n\n  # Type 4: OpenClaw External API agent\n  - expert: \"coder#ext#oc1\"\n    api_url: \"http://127.0.0.1:23001/v1/chat/completions\"\n    api_key: \"****\"              # Masked — real key read from OPENCLAW_API_KEY env var at runtime\n    model: \"agent:main:test1\"    # agent:<agent_name>:<session>; session auto-created if it does not exist\n\n  # Parallel execution\n  - parallel:\n      - expert: \"creative#temp#1\"\n        instruction: \"Analyze from an innovation perspective\"\n      - expert: \"critical#temp#2\"\n        instruction: \"Analyze from a risk perspective\"\n\n  # All experts speak + manual injection\n  - all_experts: true\n  - manual:\n      author: \"Moderator\"\n      content: \"Please focus on feasibility\""
      },
      {
        "title": "DAG Mode — Dependency-Driven Parallel Execution",
        "body": "When the workflow has fan-in (a node with multiple predecessors) or fan-out (a node with multiple successors), use DAG mode with id and depends_on fields. The engine maximizes parallelism: each node starts as soon as all of its dependencies complete, without waiting for unrelated nodes.\n\nDAG YAML example:\n\nversion: 1\nrepeat: false\nplan:\n  - id: research\n    expert: \"creative#temp#1\"                # Root node — starts immediately\n  - id: analysis\n    expert: \"critical#temp#1\"                # Root node — runs in parallel with research\n  - id: synthesis\n    expert: \"synthesis#temp#1\"\n    depends_on: [research, analysis]         # Fan-in: waits for both to complete\n  - id: review\n    expert: \"data#temp#1\"\n    depends_on: [synthesis]                  # Runs after synthesis\n\nDAG rules:\n\nEvery step must have a unique id field.\ndepends_on is the list of step ids that must complete before this step starts; omit it for root nodes.\nThe graph must be acyclic (no circular dependencies).\nSteps with no dependency relationship run in parallel automatically.\nThe visual Canvas auto-detects fan-in/fan-out and generates DAG format.\nmanual steps also support id/depends_on."
      },
      {
        "title": "External API (Type 4) Detailed Configuration",
        "body": "Type 4 external agents support additional configuration fields in YAML steps:\n\nversion: 1\nplan:\n  - expert: \"#ext#analyst\"\n    api_url: \"https://api.deepseek.com\"          # Required: external API base URL (auto-completes to /v1/chat/completions)\n    api_key: \"****\"                               # Masked — real key auto-read from OPENCLAW_API_KEY env var at runtime\n    model: \"deepseek-chat\"                        # Optional: model name, default gpt-3.5-turbo\n    headers:                                      # Optional: custom HTTP headers (key-value dict)\n      X-Custom-Header: \"value\"\n\n🔒 API Key security: you no longer need to write plaintext API keys in YAML. Set api_key: \"****\" (or omit it entirely) and the system reads the real key from the OPENCLAW_API_KEY environment variable at runtime. The frontend canvas also displays **** instead of the real key. A plaintext key still works (backward compatible).\n\nField reference:\n\napi_url: external API address; the path auto-completes to /v1/chat/completions\napi_key: use the **** mask (auto-read from OPENCLAW_API_KEY); plaintext keys also supported for backward compatibility\nmodel: defaults to gpt-3.5-turbo\nheaders: any key-value dict, merged into the HTTP request headers\n\nOpenClaw-specific configuration:\n\nOpenClaw is a locally running OpenAI-compatible Agent service. Configure its endpoint and sessions file in .env:\n\nbash selfskill/scripts/run.sh configure --batch \\\n  OPENCLAW_SESSIONS_FILE=./data/sessions.json \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENCLAW_API_KEY=your-openclaw-key-if-needed\n\nOPENCLAW_SESSIONS_FILE points at the OpenClaw sessions.json so TeamClaw can discover existing OpenClaw sessions.\nPath convention: ./agents/main/sessions/sessions.json under the OpenClaw agents directory.\nSession management: accessing session information is a necessary step for OpenClaw agent orchestration, enabling multi-agent workflow coordination and visual canvas operations.\nOPENCLAW_API_URL is the full URL including /v1/chat/completions; in YAML, api_url may be just the base URL (http://127.0.0.1:18789).\n\nOpenClaw model naming:\n\nagent:<agent_name>:<session_name>\n\nagent_name: the OpenClaw agent name, e.g. main\nsession_name: e.g. test1 or default; the session is auto-created if it does not exist\n\nagent:main:default → main agent, default session\nagent:main:test1 → main agent, test1 session\nagent:main:code-review → main agent, code-review session\n\nRequest headers = Content-Type: application/json + Authorization: Bearer <api_key> + YAML headers\n\nx-openclaw-session-key — deterministic OpenClaw session routing:\n\nWhen calling an OpenClaw agent via External API (Type 4), the x-openclaw-session-key HTTP header is the key mechanism that routes the request to a specific OpenClaw session. Without this header, OpenClaw may fail to associate the request with the target session.\n\nThe frontend orchestration panel sets this header automatically when you drag an OpenClaw session onto the canvas.\nWhen writing YAML by hand or calling the API directly, you must include this header in the headers field to make session routing deterministic.\n\n# Example: connect to a specific OpenClaw session\n- expert: \"coder#ext#oc1\"\n  api_url: \"http://127.0.0.1:18789\"\n  api_key: \"****\"                                      # Masked; real key read from OPENCLAW_API_KEY env var\n  model: \"agent:main:my-session\"\n  headers:\n    x-openclaw-session-key: \"agent:main:my-session\"   # This header determines the target OpenClaw session\n\nThe value of x-openclaw-session-key should match the session identifier in the model field (format: agent:<agent_name>:<session_name>). This ensures external requests are routed to the correct OpenClaw agent session, preserving conversation continuity and state."
      },
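      {
        "title": "Verifying OpenClaw Session Routing (example)",
        "body": "Before wiring an OpenClaw expert into YAML, the endpoint and session routing can be verified directly with curl. A minimal sketch, not from the upstream docs; it assumes the OpenClaw OpenAI-compatible interface is enabled on port 18789 and your key is exported as OPENCLAW_API_KEY:\n\ncurl -s http://127.0.0.1:18789/v1/chat/completions \\\n  -H 'Content-Type: application/json' \\\n  -H \"Authorization: Bearer $OPENCLAW_API_KEY\" \\\n  -H 'x-openclaw-session-key: agent:main:my-session' \\\n  -d '{\"model\":\"agent:main:my-session\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"stream\":false}'\n\nIf the reply continues the conversation state of agent:main:my-session, the same api_url, model, and header values can be copied into the Type 4 YAML step."
      },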
      {
        "title": "OASIS Server HTTP API",
        "body": "The OASIS Server (port 51202) exposes a plain HTTP API, so an external Agent can drive OASIS with curl instead of the MCP tools:\n\nManage experts (list/add/update/delete)\nManage workflows\nCreate and track OASIS discussions\nConvert workflow YAML to canvas layouts\n\nThe OASIS server is started as part of bash selfskill/scripts/run.sh start.\nRequests are identified by the user_id query/body parameter; no Authorization header is required.\n\nAPI endpoints:\n\nGET\t/experts?user_id=xxx\tList experts (preset + user)\nPOST\t/experts/user\tAdd custom expert\nPUT / DELETE\t/experts/user/{tag}\tUpdate / delete custom expert\nGET\t/sessions/oasis?user_id=xxx\tList oasis sessions\nPOST\t/workflows\tSave workflow\nGET\t/workflows?user_id=xxx\tList workflows\nPOST\t/layouts/from-yaml\tConvert YAML to canvas layout\nPOST\t/topics\tCreate discussion/execution topic\nGET\t/topics/{topic_id}?user_id=xxx\tView topic details\nGET\t/topics/{topic_id}/conclusion?user_id=xxx&timeout=300\tGet conclusion (blocking)\nGET\t/topics/{topic_id}/stream?user_id=xxx\tSSE real-time stream\nDELETE\t/topics/{topic_id}?user_id=xxx\tCancel topic\nGET\t/topics?user_id=xxx\tList all topics\n\nThese endpoints share the same backend implementation as the MCP tools, ensuring consistent behavior."
      },
      {
        "title": "OASIS Topic Creation (Discussion/Execution)",
        "body": "POST http://127.0.0.1:51202/topics\n\n{\"question\":\"...\",\"user_id\":\"system\",\"max_rounds\":3,\"discussion\":true,\"schedule_file\":\"...\",\"schedule_yaml\":\"...\",\"callback_url\":\"http://127.0.0.1:51200/system_trigger\",\"callback_session_id\":\"my-session\"}\n\nschedule_yaml is inline YAML; schedule_file names a saved YAML workflow resolved under the user directory (e.g. /XXXXX/TeamClaw/data/user_files/username)."
      },
      {
        "title": "⚠️ Before First Launch — Required Configuration",
        "body": "Before starting TeamClaw for the first time, the following environment variables must be configured. Without them the service will not function correctly."
      },
      {
        "title": "1. LLM Configuration (Required)",
        "body": "⚠️ LLM API ≠ OpenClaw API — They are two completely separate sets of credentials!\n\nLLM_API_KEY / LLM_BASE_URL / LLM_MODEL → Your LLM provider (DeepSeek, OpenAI, Google, etc.). Used for the built-in Agent's conversations and OASIS experts.\nOPENCLAW_API_URL / OPENCLAW_API_KEY → Your local OpenClaw service endpoint. Used only for orchestrating OpenClaw agents on the visual Canvas.\n\nDo NOT mix them up. They point to different services, use different keys, and serve different purposes.\n\nVariableDescriptionExampleLLM_API_KEYYour LLM provider's API key. This is mandatory.sk-xxxxxxxxxxxxxxxxLLM_BASE_URLBase URL of your LLM provider's API endpoint.https://api.deepseek.comLLM_MODELThe model name to use for conversations.deepseek-chat / gpt-4o / gemini-2.5-flash\n\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-your-key \\\n  LLM_BASE_URL=https://api.deepseek.com \\\n  LLM_MODEL=deepseek-chat"
      },
      {
        "title": "2. OpenClaw Integration (Required for visual workflow orchestration)",
        "body": "⚠️ Reminder: OpenClaw API is NOT the same as LLM API above!\nThe OPENCLAW_* variables below point to your locally running OpenClaw service, not to an external LLM provider. They have completely different URLs, keys, and purposes.\n\nThese variables are required if you intend to use the OASIS visual Canvas to orchestrate OpenClaw agents:\n\nVariableDescriptionExampleOPENCLAW_SESSIONS_FILEAbsolute path to the OpenClaw sessions.json file. Used to discover existing OpenClaw agent sessions and make them available for drag-and-drop orchestration on the visual Canvas. The frontend orchestration panel will NOT load OpenClaw sessions if this is not set./projects/.moltbot/agents/main/sessions/sessions.jsonOPENCLAW_API_URLThe OpenClaw backend API endpoint. This changes with the gateway port. You MUST first enable OpenClaw's OpenAI-compatible API interface before configuring this. Include the full path with /v1/chat/completions.http://127.0.0.1:18789/v1/chat/completionsOPENCLAW_API_KEYThe API key for accessing OpenClaw via its OpenAI-compatible endpoint. Required if your OpenClaw instance has authentication enabled.your-openclaw-key\n\nImportant: OPENCLAW_API_URL changes whenever the OpenClaw gateway port changes. Always verify the port is correct and that the OpenClaw OpenAI-compatible interface is enabled before starting TeamClaw.\n\nbash selfskill/scripts/run.sh configure --batch \\\n  OPENCLAW_SESSIONS_FILE=/projects/.moltbot/agents/main/sessions/sessions.json \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENCLAW_API_KEY=your-openclaw-key-if-needed"
      },
      {
        "title": "3. Cloudflare Tunnel (Optional — for remote access)",
        "body": "To expose the Web UI to the public internet for remote visual workflow programming (e.g., from a mobile phone):\n\nThe tunnel.py script will automatically write PUBLIC_DOMAIN and BARK_PUBLIC_URL into .env when a Cloudflare Tunnel is established.\nNo manual configuration is needed — just run the tunnel script and the frontend becomes accessible via HTTPS on the public domain.\nNon-blocking start: tunnel.py blocks the terminal by default (main thread joins tunnel threads). To start it without blocking the agent or terminal, run it in the background:\n\nnohup python scripts/tunnel.py > logs/tunnel.log 2>&1 &\nsleep 30  # Wait for tunnels to be established and PUBLIC_DOMAIN written to .env"
      }
    ],
    "body": "TeamClaw Agent Subsystem Skill\n\nhttps://github.com/Avalon-467/Teamclaw\n\nIntroduction\n\nTeamClaw is an OpenClaw-like multi-agent sub-platform with a built-in lightweight agent (similar to OpenClaw's), featuring computer use capabilities and social platform integrations (e.g., Telegram). It can run independently without blocking the main agent, or be directly controlled by an OpenClaw agent to orchestrate the built-in OASIS collaboration platform. It also supports exposing the frontend to the public internet via Cloudflare, enabling remote visual multi-agent workflow programming from mobile devices or any browser.\n\nTeamClaw is a versatile AI Agent service providing:\n\nConversational Agent: A LangGraph-based multi-tool AI assistant supporting streaming/non-streaming conversations\nOASIS Forum: A multi-expert parallel discussion/execution engine for orchestrating multiple agents\nScheduled Tasks: An APScheduler-based task scheduling center\nBark Push: Mobile push notifications\nFrontend Web UI: A complete chat interface\nSkill Scripts\n\nAll scripts are located in selfskill/scripts/, invoked uniformly via the run.sh entry point, all non-interactive.\n\nselfskill/scripts/\n run.sh          # Main entry (start/stop/status/setup/add-user/configure)\n adduser.py      # Non-interactive user creation\n configure.py    # Non-interactive .env configuration management\n\nQuick Start\n\nAll commands are executed in the project root directory.\n\nThree-step launch flow: setup → configure → start\n\n1. 
First Deployment\n# Install dependencies\nbash selfskill/scripts/run.sh setup\n\n# Initialize configuration file\nbash selfskill/scripts/run.sh configure --init\n\n# Configure LLM (required)\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-your-key \\\n  LLM_BASE_URL=https://api.deepseek.com \\\n  LLM_MODEL=deepseek-chat\n\n# ⚠️ Create user account (REQUIRED — without this you CANNOT log in to the Web UI or call API)\nbash selfskill/scripts/run.sh add-user system MySecurePass123\n\n\n⚠️ You MUST create at least one user account before starting the service!\n\nThe Web UI login page requires username + password.\nAll API calls require Authorization: Bearer <user_id>:<password> (or INTERNAL_TOKEN:<user_id>).\nIf you skip this step, you will be locked out of the entire system.\nYou can create multiple users. The first argument is the username, the second is the password.\n2. Start / Stop / Status\nbash selfskill/scripts/run.sh start     # Start in background\nbash selfskill/scripts/run.sh status    # Check status\nbash selfskill/scripts/run.sh stop      # Stop service\n\n3. Bark Push vs Chatbot (Telegram/QQ) — Startup Differences\nComponent\tHow it starts\tConfiguration needed\tNotes\nBark Push (port 58010)\tAutomatically started by launcher.py\tNone — works out of the box\tA standalone binary (bin/bark-server). Auto-downloaded on first setup. No env vars needed.\nTelegram Bot\tRequires manual setup\tTELEGRAM_BOT_TOKEN, TELEGRAM_ALLOWED_USERS in .env\tlauncher.py calls chatbot/setup.py which has an interactive menu (input()). In headless/background mode this will block. To avoid blocking, configure the bot tokens in .env beforehand and start the bot separately: nohup python chatbot/telegrambot.py > logs/telegrambot.log 2>&1 &\nQQ Bot\tRequires manual setup\tQQ_APP_ID, QQ_BOT_SECRET, QQ_BOT_USERNAME in .env\tSame as Telegram — interactive setup will block in headless mode. 
Start separately: nohup python chatbot/QQbot.py > logs/qqbot.log 2>&1 &\n\n⚠️ Important for Agent/headless usage: The chatbot/setup.py script contains interactive input() prompts. When launcher.py runs in the background (via run.sh start), if chatbot/setup.py exists it will be called and block indefinitely waiting for user input. To prevent this:\n\nEither remove/rename chatbot/setup.py before starting, OR\nPre-configure all bot tokens in .env and start bots independently (bypassing setup.py).\n4. Configuration Management\n# View current configuration (sensitive values masked)\nbash selfskill/scripts/run.sh configure --show\n\n# Set a single item\nbash selfskill/scripts/run.sh configure PORT_AGENT 51200\n\n# Batch set\nbash selfskill/scripts/run.sh configure --batch TTS_MODEL=gemini-2.5-flash-preview-tts TTS_VOICE=charon\n\nConfiguration Options\nOption\tDescription\tDefault\nLLM_API_KEY\tLLM API key (required)\t\nLLM_BASE_URL\tLLM API URL\thttps://api.deepseek.com\nLLM_MODEL\tModel name\tdeepseek-chat\nLLM_PROVIDER\tProvider (google/anthropic/deepseek/openai, auto-inferred)\tAuto\nLLM_VISION_SUPPORT\tVision support (auto-inferred)\tAuto\nPORT_AGENT\tAgent main service port\t51200\nPORT_SCHEDULER\tScheduled task port\t51201\nPORT_OASIS\tOASIS forum port\t51202\nPORT_FRONTEND\tWeb UI port\t51209\nPORT_BARK\tBark push port\t58010\nTTS_MODEL\tTTS model (optional)\t\nTTS_VOICE\tTTS voice (optional)\t\nOPENCLAW_API_URL\tOpenClaw backend service URL (full path, including /v1/chat/completions)\thttp://127.0.0.1:18789/v1/chat/completions\nOPENCLAW_API_KEY\tOpenClaw backend service API key (optional)\t\nOPENCLAW_SESSIONS_FILE\tAbsolute path to OpenClaw sessions.json file (required when using OpenClaw)\tNone\nINTERNAL_TOKEN\tInternal communication secret (auto-generated)\tAuto\nPorts & Services\nPort\tService\n51200\tAI Agent main service\n51201\tScheduled tasks\n51202\tOASIS forum\n51209\tWeb UI\nAPI Authentication\nMethod 1: User Authentication\nAuthorization: Bearer 
<user_id>:<password>\n\nMethod 2: Internal Token (for inter-service calls, recommended)\nAuthorization: Bearer <INTERNAL_TOKEN>:<user_id>\n\n\nINTERNAL_TOKEN is auto-generated on first startup; view it via configure --show-raw.\n\nCore API\n\nBase URL: http://127.0.0.1:51200\n\nChat (OpenAI-compatible)\nPOST /v1/chat/completions\nAuthorization: Bearer <token>\n\n{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":true,\"session_id\":\"my-session\"}\n\nSystem Trigger (internal call)\nPOST /system_trigger\nX-Internal-Token: <INTERNAL_TOKEN>\n\n{\"user_id\":\"system\",\"text\":\"Please execute a task\",\"session_id\":\"task-001\"}\n\nCancel Session\nPOST /cancel\n\n{\"user_id\":\"<user_id>\",\"session_id\":\"<session_id>\"}\n\nOASIS Four Operating Modes (Default: Discussion Mode)\n\n📖 Dedicated OASIS usage guide (especially for OpenClaw agent integration): OASIS_GUIDE.md\n\nThe \"four modes\" are two orthogonal switches:\n\nDiscussion vs Execution: Determines whether expert output is \"forum-style discussion/voting\" or \"workflow-style execution/deliverables\".\nSynchronous vs Detach: Determines whether the caller blocks waiting for results.\n1) Discussion Mode vs Execution Mode\n\nDiscussion Mode (discussion=true, default)\n\nPurpose: Multiple experts provide different perspectives, pros/cons analysis, clarify disputes, and can form consensus.\nUse case: Solution reviews, technical route selection, questions that need \"why\".\n\nExecution Mode (discussion=false)\n\nPurpose: Use OASIS as an orchestrator to complete tasks in planned sequential/parallel order, emphasizing direct output (code/scripts/checklists/finalized plans).\nUse case: Delivery tasks with clear objectives that don't need debate.\n2) Synchronous Mode vs Detach Mode\n\nDetach (detach=true, default)\n\nBehavior: Returns topic_id immediately, continues running/discussing in the background; later use check_oasis_discussion(topic_id) to track progress and 
results.\nUse case: Most tasks, especially multi-round/multi-expert/long-running/tool-calling tasks.\n\nSynchronous (detach=false)\n\nBehavior: After calling post_to_oasis, waits for completion and returns the final result directly.\nUse case: Quick tasks where you need the deliverable immediately to continue iterating.\n3) Auto-selection Rules (Recommended Default Strategy)\n\nWhen not explicitly specified, the following default strategy is recommended:\n\nDefault = Discussion + Detach\n\ndiscussion=true\ndetach=true\n\nSwitch to Execution Mode when these signals appear:\n\n\"Give me the final version / copy-pasteable / executable script / just conclusions no discussion\"\n\"Generate SOP / checklist / table step by step and finalize\"\n\nSwitch to Synchronous Mode when these signals appear:\n\n\"Wait for the result / I need it now / give me the answer directly\"\nQuick single-round tasks where the deliverable is needed immediately\n4) Four Combinations Quick Reference\nCombination\tParameters\tReturns\tUse Case\nDiscussion + Detach (default)\tdiscussion=true, detach=true\ttopic_id, check later\tDecision/review/collect opinions\nDiscussion + Sync\tdiscussion=true, detach=false\tSee discussion & conclusion on the spot\tQuick discussion needing immediate result\nExecution + Detach\tdiscussion=false, detach=true\ttopic_id, check later\tLong execution/complex pipelines\nExecution + Sync\tdiscussion=false, detach=false\tDirect deliverables\tGenerate code/plans/checklists\nOASIS Four Agent Types\n\nOASIS supports four types of agents, distinguished by the name format in schedule_yaml:\n\n#\tType\tName Format\tEngine Class\tDescription\n1\tDirect LLM\ttag#temp#N\tExpertAgent\tStateless single LLM call. Each round reads all posts one LLM call publish + vote. No cross-round memory. tag maps to preset expert name/persona, N is instance number (same expert can have multiple copies).\n2\tOasis Session\ttag#oasis#id\tSessionExpert (oasis)\tOASIS-managed stateful bot session. 
tag maps to preset expert, persona injected as system prompt on first round. Bot retains conversation memory across rounds (incremental context). id can be any string; new ID auto-creates session on first use.\n3\tRegular Agent\tTitle#session_id\tSessionExpert (regular)\tConnects to an existing agent session (e.g., Assistant#default, Coder#my-project). No identity injection; the session's own system prompt defines the agent. Suitable for bringing personal bot sessions into discussions.\n4\tExternal API\ttag#ext#id\tExternalExpert\tDirectly calls any OpenAI-compatible external API (DeepSeek, GPT-4, Ollama, another TeamClaw instance, etc.). Does not go through local agent. External service assumed stateful. Supports custom request headers via YAML headers field.\nSession ID Format\ntag#temp#N            ExpertAgent   (stateless, direct LLM)\ntag#oasis#<id>        SessionExpert (oasis-managed, stateful bot)\nTitle#session_id      SessionExpert (regular agent session)\ntag#ext#<id>          ExternalExpert (external API, e.g. 
OpenClaw agent)\n\n\nSpecial Suffix:\n\nAppending #new to the end of any session name forces creation of a brand new session (ID replaced with random UUID, ensuring no reuse):\ncreative#oasis#abc#new #new stripped, ID replaced with UUID\nAssistant#my-session#new Same processing\n\nOasis Session Conventions:\n\nOasis sessions are identified by #oasis# in session_id (e.g., creative#oasis#ab12cd34)\nStored in the regular Agent checkpoint DB (data/agent_memory.db), no separate storage\nAuto-created on first use, no pre-creation needed\ntag part maps to preset expert configuration to find persona\nYAML Example\nversion: 1\nplan:\n  # Type 1: Direct LLM (stateless, fast)\n  - expert: \"creative#temp#1\"\n  - expert: \"critical#temp#2\"\n\n  # Type 2: Oasis session (stateful, with memory)\n  - expert: \"data#oasis#analysis01\"\n  - expert: \"synthesis#oasis#new#new\"   # Force new session\n\n  # Type 3: Regular agent session (your existing bot)\n  - expert: \"Assistant#default\"\n  - expert: \"Coder#my-project\"\n\n  # Type 4: External API (DeepSeek, GPT-4, etc.)\n  # Note: api_key is auto-read from OPENCLAW_API_KEY env var; use \"****\" mask in YAML (never write plaintext keys)\n  - expert: \"deepseek#ext#ds1\"\n\n  # Type 4: OpenClaw External API (local Agent service)\n  # api_key auto-resolved from OPENCLAW_API_KEY env var when set to \"****\"\n  - expert: \"coder#ext#oc1\"\n    api_url: \"http://127.0.0.1:23001/v1/chat/completions\"\n    api_key: \"****\"              # Masked — real key read from OPENCLAW_API_KEY env var at runtime\n    model: \"agent:main:test1\"    # agent:<agent_name>:<session>, session auto-created if not exists\n\n  # Parallel execution\n  - parallel:\n      - expert: \"creative#temp#1\"\n        instruction: \"Analyze from innovation perspective\"\n      - expert: \"critical#temp#2\"\n        instruction: \"Analyze from risk perspective\"\n\n  # All experts speak + manual injection\n  - all_experts: true\n  - manual:\n      author: 
\"Moderator\"\n      content: \"Please focus on feasibility\"\n\nDAG Mode — Dependency-Driven Parallel Execution\n\nWhen the workflow has fan-in (a node has multiple predecessors) or fan-out (a node has multiple successors), use DAG mode with id and depends_on fields. The engine maximizes parallelism — each node starts as soon as all its dependencies are satisfied.\n\nDAG YAML Example:\n\nversion: 1\nrepeat: false\nplan:\n  - id: research\n    expert: \"creative#temp#1\"                # Root — starts immediately\n  - id: analysis\n    expert: \"critical#temp#1\"                # Root — runs in PARALLEL with research\n  - id: synthesis\n    expert: \"synthesis#temp#1\"\n    depends_on: [research, analysis]         # Fan-in: waits for BOTH to complete\n  - id: review\n    expert: \"data#temp#1\"\n    depends_on: [synthesis]                  # Runs after synthesis\n\n\nDAG Rules:\n\nEvery step must have a unique id field.\ndepends_on is a list of step ids that must complete before this step starts. 
Omit for root nodes.\nThe graph must be acyclic (no circular dependencies).\nSteps with no dependency relationship run in parallel automatically.\nThe visual Canvas auto-detects fan-in/fan-out and generates DAG format.\nmanual steps can also have id/depends_on.\nExternal API (Type 4) Detailed Configuration\n\nType 4 external agents support additional configuration fields in YAML steps:\n\nversion: 1\nplan:\n  - expert: \"#ext#analyst\"\n    api_url: \"https://api.deepseek.com\"          # Required: External API base URL (auto-completes to /v1/chat/completions)\n    api_key: \"****\"                               # Masked — real key auto-read from OPENCLAW_API_KEY env var at runtime\n    model: \"deepseek-chat\"                        # Optional: Model name, default gpt-3.5-turbo\n    headers:                                      # Optional: Custom HTTP headers (key-value dict)\n      X-Custom-Header: \"value\"\n\n\n🔒 API Key Security: You no longer need to write plaintext API keys in YAML. Set api_key: \"****\" (or omit it entirely) and the system will automatically read the real key from the OPENCLAW_API_KEY environment variable at runtime. The frontend canvas also displays **** instead of the real key. If you do write a plaintext key, it will still work (backward compatible).\n\nConfiguration Field Description:\n\nField\tRequired\tDescription\napi_url\tYes\tExternal API address, auto-completes path to /v1/chat/completions\napi_key\tNo\tUse **** mask — auto-read from OPENCLAW_API_KEY env var. Plaintext keys also supported (backward compatible)\nmodel\tNo\tModel name, default gpt-3.5-turbo\nheaders\tNo\tAny key-value dict, merged into HTTP request headers\n\nOpenClaw-specific Configuration:\n\nOpenClaw is a locally running OpenAI-compatible Agent service. 
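Requests to such an OpenAI-compatible endpoint are assembled from the step's YAML fields. The following is a minimal Python sketch (illustrative only, not TeamClaw source) of the header-merge rule described later in this guide: base Content-Type, then Authorization if an api_key resolves, then the YAML headers entries. The key and session values are hypothetical examples.

```python
# Illustrative sketch of external-expert header assembly (not TeamClaw source).
# Rule: Content-Type: application/json + Authorization: Bearer <api_key> (if
# present) + all key-value pairs from the step's YAML `headers` field.
def build_headers(api_key=None, custom_headers=None):
    headers = {'Content-Type': 'application/json'}
    if api_key:  # masked/omitted keys are resolved from the env var before this point
        headers['Authorization'] = f'Bearer {api_key}'
    headers.update(custom_headers or {})  # YAML `headers` entries win on conflicts
    return headers

# Hypothetical OpenClaw-style step: deterministic session routing via header
h = build_headers('sk-demo', {'x-openclaw-session-key': 'agent:main:my-session'})
print(h['Authorization'])            # Bearer sk-demo
print(h['x-openclaw-session-key'])   # agent:main:my-session
```

Custom headers are merged last, so a YAML headers entry can override the defaults if needed.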
After setting up OpenClaw-specific endpoints in .env, the frontend orchestration panel will auto-fill api_url and api_key when dragging in an OpenClaw expert; no manual input is needed:\n\n# Configure OpenClaw endpoint and sessions file path\nbash selfskill/scripts/run.sh configure --batch \\\n  OPENCLAW_SESSIONS_FILE=./data/sessions.json \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENCLAW_API_KEY=your-openclaw-key-if-needed\n\n\n**Note:**\n\nOPENCLAW_SESSIONS_FILE is a prerequisite for using the OpenClaw feature and must point to the absolute path of OpenClaw's sessions.json file. The frontend orchestration panel will not load OpenClaw sessions if it is unconfigured.\nPath Convention: ./agents/main/sessions/sessions.json is a common path structure for OpenClaw agent sessions. This convention lets the system locate and orchestrate OpenClaw agents.\nSession Management: reading the sessions file is required for OpenClaw agent orchestration; it is what enables multi-agent workflow coordination and visual canvas operations.\nOPENCLAW_API_URL should contain the full path (including /v1/chat/completions); the system will auto-strip the suffix to generate the base URL for YAML. The api_url field in YAML only needs the base URL (e.g., http://127.0.0.1:18789); the engine auto-completes the path.\nIf your OpenClaw service runs on a non-default port, be sure to modify these settings.\n\nOpenClaw model Field Format:\n\nagent:<agent_name>:<session_name>\n\nagent_name: Agent name in OpenClaw, usually main\nsession_name: Session name, e.g., test1, default, etc. 
You can enter a non-existent session name to auto-create\n\nExamples:\n\nagent:main:default Use main agent's default session\nagent:main:test1 Use main agent's test1 session (auto-created if not exists)\nagent:main:code-review Use main agent's code-review session\n\nRequest Header Assembly Logic: Final request headers = Content-Type: application/json + Authorization: Bearer <api_key> (if present) + all key-value pairs from YAML headers.\n\nx-openclaw-session-key — Deterministic OpenClaw Session Routing:\n\nWhen calling an OpenClaw agent via External API (Type 4), the x-openclaw-session-key HTTP header is the key mechanism for routing requests to a specific, deterministic OpenClaw session. Without this header, OpenClaw may not correctly associate the request with the intended session.\n\nThe frontend orchestration panel automatically sets this header when you drag an OpenClaw session onto the canvas.\nWhen writing YAML manually or calling the API programmatically, you must include this header in the headers field to ensure session determinism.\n# Example: Connecting to a specific OpenClaw session\n- expert: \"coder#ext#oc1\"\n  api_url: \"http://127.0.0.1:18789\"\n  api_key: \"****\"                                      # ← Masked; real key from OPENCLAW_API_KEY env var\n  model: \"agent:main:my-session\"\n  headers:\n    x-openclaw-session-key: \"agent:main:my-session\"   # ← This header determines the exact OpenClaw session\n\n\nThe value of x-openclaw-session-key should match the model field's session identifier (format: agent:<agent_name>:<session_name>). This ensures the external request is routed to the correct OpenClaw agent session, maintaining conversation continuity and state.\n\nUsing OASIS Server Independently\n\nThe OASIS Server (port 51202) can be used independently of the Agent main service. 
External scripts, other services, or manual curl can directly operate all OASIS features without going through MCP tools or Agent conversations.\n\nIndependent Use Scenarios:\n\nInitiate multi-expert discussions/executions from external scripts\nDebug workflow orchestration\nIntegrate OASIS as a microservice into other systems\nManage experts, sessions, workflows, and other resources\n\nPrerequisites:\n\nOASIS service is running (bash selfskill/scripts/run.sh start starts all services simultaneously)\nAll endpoints use the user_id parameter for user isolation (no Authorization header needed)\n\nAPI Overview:\n\nFunction\tMethod\tPath\nList experts\tGET\t/experts?user_id=xxx\nCreate custom expert\tPOST\t/experts/user\nUpdate/delete custom expert\tPUT/DELETE\t/experts/user/{tag}\nList oasis sessions\tGET\t/sessions/oasis?user_id=xxx\nSave workflow\tPOST\t/workflows\nList workflows\tGET\t/workflows?user_id=xxx\nYAML Layout\tPOST\t/layouts/from-yaml\nCreate discussion/execution\tPOST\t/topics\nView discussion details\tGET\t/topics/{topic_id}?user_id=xxx\nGet conclusion (blocking)\tGET\t/topics/{topic_id}/conclusion?user_id=xxx&timeout=300\nSSE real-time stream\tGET\t/topics/{topic_id}/stream?user_id=xxx\nCancel discussion\tDELETE\t/topics/{topic_id}?user_id=xxx\nList all topics\tGET\t/topics?user_id=xxx\n\nThese endpoints share the same backend implementation as MCP tools, ensuring consistent behavior.\n\nOASIS Discussion/Execution\nPOST http://127.0.0.1:51202/topics\n\n{\"question\":\"Discussion topic\",\"user_id\":\"system\",\"max_rounds\":3,\"discussion\":true,\"schedule_file\":\"...\",\"schedule_yaml\":\"...\",\"callback_url\":\"http://127.0.0.1:51200/system_trigger\",\"callback_session_id\":\"my-session\"}\n\n\nPrefer schedule_file over pasting the full YAML each time; it takes the absolute path of the YAML workflow file, usually under /XXXXX/TeamClaw/data/user_files/username.\n\nExternally Participating in OASIS Server via curl (Complete Methods)\n\nThe OASIS Server 
(port 51202), in addition to being called by MCP tools, also supports direct curl operations for external scripts or debugging. All endpoints use user_id parameter for user isolation.\n\n1. Expert Management\n# List all experts (public + user custom)\ncurl 'http://127.0.0.1:51202/experts?user_id=xinyuan'\n\n# Create custom expert\ncurl -X POST 'http://127.0.0.1:51202/experts/user' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"name\":\"Product Manager\",\"tag\":\"pm\",\"persona\":\"You are an experienced product manager skilled in requirements analysis and product planning\",\"temperature\":0.7}'\n\n# Update custom expert\ncurl -X PUT 'http://127.0.0.1:51202/experts/user/pm' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"persona\":\"Updated expert description\"}'\n\n# Delete custom expert\ncurl -X DELETE 'http://127.0.0.1:51202/experts/user/pm?user_id=xinyuan'\n\n2. Session Management\n# List OASIS-managed expert sessions (sessions containing #oasis#)\ncurl 'http://127.0.0.1:51202/sessions/oasis?user_id=xinyuan'\n\n3. Workflow Management\n# List user's saved workflows\ncurl 'http://127.0.0.1:51202/workflows?user_id=xinyuan'\n\n# Save workflow (auto-generate layout)\ncurl -X POST 'http://127.0.0.1:51202/workflows' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"name\":\"trio_discussion\",\"schedule_yaml\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"description\":\"Trio discussion\",\"save_layout\":true}'\n\n4. Layout Generation\n# Generate layout from YAML\ncurl -X POST 'http://127.0.0.1:51202/layouts/from-yaml' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"yaml_source\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"layout_name\":\"trio_layout\"}'\n\n5. 
Discussion/Execution\n# Create discussion topic (synchronous, wait for conclusion)\ncurl -X POST 'http://127.0.0.1:51202/topics' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"question\":\"Discussion topic\",\"max_rounds\":3,\"schedule_yaml\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"discussion\":true}'\n\n# Create discussion topic (async, returns topic_id)\ncurl -X POST 'http://127.0.0.1:51202/topics' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"user_id\":\"xinyuan\",\"question\":\"Discussion topic\",\"max_rounds\":3,\"schedule_yaml\":\"version:1\\nplan:\\n - expert: \\\"creative#temp#1\\\"\",\"discussion\":true,\"callback_url\":\"http://127.0.0.1:51200/system_trigger\",\"callback_session_id\":\"my-session\"}'\n\n# View discussion details\ncurl 'http://127.0.0.1:51202/topics/{topic_id}?user_id=xinyuan'\n\n# Get discussion conclusion (blocking wait)\ncurl 'http://127.0.0.1:51202/topics/{topic_id}/conclusion?user_id=xinyuan&timeout=300'\n\n# Cancel discussion\ncurl -X DELETE 'http://127.0.0.1:51202/topics/{topic_id}?user_id=xinyuan'\n\n# List all discussion topics\ncurl 'http://127.0.0.1:51202/topics?user_id=xinyuan'\n\n6. 
Real-time Stream\n# SSE real-time update stream (discussion mode)\ncurl 'http://127.0.0.1:51202/topics/{topic_id}/stream?user_id=xinyuan'\n\n\nStorage Locations:\n\nWorkflows (YAML): data/user_files/{user}/oasis/yaml/{file}.yaml (canvas layouts are converted from YAML in real-time, no longer stored as separate layout JSON)\nUser custom experts: data/oasis_user_experts/{user}.json\nDiscussion records: data/oasis_topics/{user}/{topic_id}.json\n\nNote: These endpoints share the same backend implementation as MCP tools list_oasis_experts, add_oasis_expert, update_oasis_expert, delete_oasis_expert, list_oasis_sessions, set_oasis_workflow, list_oasis_workflows, yaml_to_layout, post_to_oasis, check_oasis_discussion, cancel_oasis_discussion, list_oasis_topics, ensuring consistent behavior.\n\nExample Configuration Reference\n\nBelow is an actual running configuration example (sensitive info redacted):\n\nbash selfskill/scripts/run.sh configure --init\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxx4c74 \\\n  LLM_BASE_URL=https://deepseek.com \\\n  LLM_MODEL=deepseek-chat \\\n  LLM_VISION_SUPPORT=true \\\n  TTS_MODEL=gemini-2.5-flash-preview-tts \\\n  TTS_VOICE=charon \\\n  PORT_AGENT=51200 \\\n  PORT_SCHEDULER=51201 \\\n  PORT_OASIS=51202 \\\n  PORT_FRONTEND=51209 \\\n  PORT_BARK=58010 \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENAI_STANDARD_MODE=false\nbash selfskill/scripts/run.sh add-user system <your-password>\n\n\nOutput after configure --show:\n\n  PORT_SCHEDULER=51201\n  PORT_AGENT=51200\n  PORT_FRONTEND=51209\n  PORT_OASIS=51202\n  OASIS_BASE_URL=http://127.0.0.1:51202\n  PORT_BARK=58010\n  INTERNAL_TOKEN=f1aa****57e7          # Auto-generated, do not leak\n  LLM_API_KEY=sk-7****4c74\n  LLM_BASE_URL=https://deepseek.com\n  LLM_MODEL=deepseek-chat\n  LLM_VISION_SUPPORT=true\n  TTS_MODEL=gemini-2.5-flash-preview-tts\n  TTS_VOICE=charon\n  OPENAI_STANDARD_MODE=false\n\n\nNote: 
INTERNAL_TOKEN is auto-generated on first startup; PUBLIC_DOMAIN / BARK_PUBLIC_URL are auto-written by the tunnel; no manual configuration needed.\n\nTypical Usage Flow\ncd /home/avalon/TeamClaw\n\n# First-time configuration\nbash selfskill/scripts/run.sh setup\nbash selfskill/scripts/run.sh configure --init\nbash selfskill/scripts/run.sh configure --batch LLM_API_KEY=sk-xxx LLM_BASE_URL=https://api.deepseek.com LLM_MODEL=deepseek-chat\nbash selfskill/scripts/run.sh add-user system MyPass123\n\n# Start\nbash selfskill/scripts/run.sh start\n\n# Call API\ncurl -X POST http://127.0.0.1:51200/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer system:MyPass123\" \\\n  -d '{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":false,\"session_id\":\"default\"}'\n\n# Stop\nbash selfskill/scripts/run.sh stop\n\nImportant Notes\n\nAll skill scripts are in selfskill/scripts/, not affecting original project functionality\n\nProcess management via PID files, start supports idempotent calls\n\nDo not leak INTERNAL_TOKEN\n\nLog path: logs/launcher.log\n\nBe sure to tell users how to open the visual interface and how to log in to the account for discussions\n\nThe OpenClaw session file path can be inferred from the example path and current skill path. 
If you are an OpenClaw agent, be sure to complete the full OpenClaw-related configuration\n\nTeamClaw Agent Skill\n\nTeamClaw is an AI Agent platform that integrates OpenClaw agents, computer use, Telegram/QQ chat bots, OASIS multi-expert discussions, scheduled tasks, and Cloudflare tunnel access.\n\nCore components:\n\n**Agent**: LangGraph-based AI conversation service\n**OASIS**: multi-expert discussion/execution engine\n**Scheduler**: scheduled tasks via APScheduler\n**Bark**: push notification service\n**Web UI**: visual orchestration frontend\n\nSkill Scripts\n\nAll skill scripts live in selfskill/scripts/; run.sh is the unified entry point.\n\nselfskill/scripts/\n run.sh          # start/stop/status/setup/add-user/configure\n adduser.py      # user account creation\n configure.py    # .env configuration management\n\nFirst-time order: setup → configure → start.\n\n1. First-time Setup\n# One-time environment setup\nbash selfskill/scripts/run.sh setup\n\n# Initialize .env\nbash selfskill/scripts/run.sh configure --init\n\n# Configure the LLM\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-your-key \\\n  LLM_BASE_URL=https://api.deepseek.com \\\n  LLM_MODEL=deepseek-chat\n\n# Create a login user\nbash selfskill/scripts/run.sh add-user system MySecurePass123\n\n2. Start / Status / Stop\nbash selfskill/scripts/run.sh start     # Start all services\nbash selfskill/scripts/run.sh status    # Check service status\nbash selfskill/scripts/run.sh stop      # Stop all services\n\n3. Configuration Management\n# Show current configuration\nbash selfskill/scripts/run.sh configure --show\n\n# Set a single key\nbash selfskill/scripts/run.sh configure PORT_AGENT 51200\n\n# Batch set keys\nbash selfskill/scripts/run.sh configure --batch TTS_MODEL=gemini-2.5-flash-preview-tts TTS_VOICE=charon\n\nVariable\tDescription\tDefault\nLLM_API_KEY\tLLM API key (required)\t\nLLM_BASE_URL\tLLM API base URL\thttps://api.deepseek.com\nLLM_MODEL\tModel name\tdeepseek-chat\nLLM_PROVIDER\tProvider (google/anthropic/deepseek/openai)\t\nLLM_VISION_SUPPORT\tWhether the model supports vision input\t\nPORT_AGENT\tAgent service port\t51200\nPORT_SCHEDULER\tScheduler port\t51201\nPORT_OASIS\tOASIS port\t51202\nPORT_FRONTEND\tWeb UI port\t51209\nPORT_BARK\tBark port\t58010\nTTS_MODEL\tTTS model\t\nTTS_VOICE\tTTS voice\t\nOPENCLAW_API_URL\tOpenClaw endpoint including /v1/chat/completions\thttp://127.0.0.1:18789/v1/chat/completions\nOPENCLAW_API_KEY\tOpenClaw API Key\t\nOPENCLAW_SESSIONS_FILE\tPath to OpenClaw sessions.json (**required for OpenClaw integration**)\tNone\nINTERNAL_TOKEN\tInternal service token (auto-generated)\t\n\nPort\tService\n51200\tAI Agent\n51201\tScheduler\n51202\tOASIS\n51209\tWeb UI\n\nAPI Authentication\nMode 1 (user credentials):\nAuthorization: Bearer <user_id>:<password>\n\nMode 2 (internal token):\nAuthorization: Bearer <INTERNAL_TOKEN>:<user_id>\n\nThe raw INTERNAL_TOKEN can be shown with configure --show-raw.\n\nCore API\n\nBase URL: http://127.0.0.1:51200\n\nOpenAI-compatible chat:\nPOST /v1/chat/completions\nAuthorization: Bearer <token>\n\n{\"model\":\"mini-timebot\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}],\"stream\":true,\"session_id\":\"my-session\"}\n\nInternal trigger:\nPOST /system_trigger\nX-Internal-Token: <INTERNAL_TOKEN>\n\n{\"user_id\":\"system\",\"text\":\"\",\"session_id\":\"task-001\"}\n\nCancel a running session:\nPOST /cancel\n\n{\"user_id\":\"<user_id>\",\"session_id\":\"<session_id>\"}\n\nOASIS\n\n📖 Standalone guide focused on OASIS usage (especially OpenClaw agent integration): OASIS_GUIDE.md\n\nTwo independent switches control how a topic runs: discussion vs execution, and blocking vs detach.\n\n1) discussion vs execution\ndiscussion=true: multi-round expert discussion with a synthesized conclusion\ndiscussion=false: execution mode; OASIS runs the workflow steps without cross-round debate\n\n2) blocking vs detach\ndetach=true: returns a topic_id immediately; check progress later with check_oasis_discussion(topic_id)\ndetach=false: post_to_oasis blocks until the conclusion is ready\n\n3) Recommended combination\ndiscussion=true + detach=true suits long multi-perspective analysis and other long-running topics.\n\nMode\tParameters\tReturns\tTypical use\nDiscussion + detached\tdiscussion=true, detach=true\ttopic_id\tLong multi-round discussions\nDiscussion + blocking\tdiscussion=true, detach=false\tConclusion\tQuick discussions\nExecution + detached\tdiscussion=false, detach=true\ttopic_id\tLong-running executions\nExecution + blocking\tdiscussion=false, detach=false\tResult\tQuick executions\n\n⚠️ Before First Launch — Required Configuration\n\nBefore starting TeamClaw for the first time, the following environment variables must be configured. Without them the service will not function correctly.\n\n1. LLM Configuration (Required)\n\n⚠️ LLM API ≠ OpenClaw API — They are two completely separate sets of credentials!\n\nLLM_API_KEY / LLM_BASE_URL / LLM_MODEL → Your LLM provider (DeepSeek, OpenAI, Google, etc.). Used for the built-in Agent's conversations and OASIS experts.\nOPENCLAW_API_URL / OPENCLAW_API_KEY → Your local OpenClaw service endpoint. 
Used only for orchestrating OpenClaw agents on the visual Canvas.\n\nDo NOT mix them up. They point to different services, use different keys, and serve different purposes.\n\nVariable\tDescription\tExample\nLLM_API_KEY\tYour LLM provider's API key. This is mandatory.\tsk-xxxxxxxxxxxxxxxx\nLLM_BASE_URL\tBase URL of your LLM provider's API endpoint.\thttps://api.deepseek.com\nLLM_MODEL\tThe model name to use for conversations.\tdeepseek-chat / gpt-4o / gemini-2.5-flash\nbash selfskill/scripts/run.sh configure --batch \\\n  LLM_API_KEY=sk-your-key \\\n  LLM_BASE_URL=https://api.deepseek.com \\\n  LLM_MODEL=deepseek-chat\n\n2. OpenClaw Integration (Required for visual workflow orchestration)\n\n⚠️ Reminder: OpenClaw API is NOT the same as LLM API above!\n\nThe OPENCLAW_* variables below point to your locally running OpenClaw service, not to an external LLM provider. They have completely different URLs, keys, and purposes.\n\nThese variables are required if you intend to use the OASIS visual Canvas to orchestrate OpenClaw agents:\n\nVariable\tDescription\tExample\nOPENCLAW_SESSIONS_FILE\tAbsolute path to the OpenClaw sessions.json file. Used to discover existing OpenClaw agent sessions and make them available for drag-and-drop orchestration on the visual Canvas. The frontend orchestration panel will NOT load OpenClaw sessions if this is not set.\t/projects/.moltbot/agents/main/sessions/sessions.json\nOPENCLAW_API_URL\tThe OpenClaw backend API endpoint. This changes with the gateway port. You MUST first enable OpenClaw's OpenAI-compatible API interface before configuring this. Include the full path with /v1/chat/completions.\thttp://127.0.0.1:18789/v1/chat/completions\nOPENCLAW_API_KEY\tThe API key for accessing OpenClaw via its OpenAI-compatible endpoint. Required if your OpenClaw instance has authentication enabled.\tyour-openclaw-key\n\nImportant: OPENCLAW_API_URL changes whenever the OpenClaw gateway port changes. 
Always verify the port is correct and that the OpenClaw OpenAI-compatible interface is enabled before starting TeamClaw.\n\nbash selfskill/scripts/run.sh configure --batch \\\n  OPENCLAW_SESSIONS_FILE=/projects/.moltbot/agents/main/sessions/sessions.json \\\n  OPENCLAW_API_URL=http://127.0.0.1:18789/v1/chat/completions \\\n  OPENCLAW_API_KEY=your-openclaw-key-if-needed\n\n3. Cloudflare Tunnel (Optional — for remote access)\n\nTo expose the Web UI to the public internet for remote visual workflow programming (e.g., from a mobile phone):\n\nThe tunnel.py script will automatically write PUBLIC_DOMAIN and BARK_PUBLIC_URL into .env when a Cloudflare Tunnel is established.\nNo manual configuration is needed — just run the tunnel script and the frontend becomes accessible via HTTPS on the public domain.\nNon-blocking start: tunnel.py blocks the terminal by default (main thread joins tunnel threads). To start it without blocking the agent or terminal, run it in the background:\nnohup python scripts/tunnel.py > logs/tunnel.log 2>&1 &\nsleep 30  # Wait for tunnels to be established and PUBLIC_DOMAIN written to .env"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/Avalon-467/teamclaw",
    "publisherUrl": "https://clawhub.ai/Avalon-467/teamclaw",
    "owner": "Avalon-467",
    "version": "0.1.3",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/teamclaw",
    "downloadUrl": "https://openagent3.xyz/downloads/teamclaw",
    "agentUrl": "https://openagent3.xyz/skills/teamclaw/agent",
    "manifestUrl": "https://openagent3.xyz/skills/teamclaw/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/teamclaw/agent.md"
  }
}