{
  "schemaVersion": "1.0",
  "item": {
    "slug": "aws-strands",
    "name": "Strands",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/TrippingKelsea/aws-strands",
    "canonicalUrl": "https://clawhub.ai/TrippingKelsea/aws-strands",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/aws-strands",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=aws-strands",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "references/cheatsheet.md",
      "SKILL.md",
      "tests/test_imports.py",
      "scripts/create-agent.py",
      "scripts/run-agent.py"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/aws-strands"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/aws-strands",
    "agentPageUrl": "https://openagent3.xyz/skills/aws-strands/agent",
    "manifestUrl": "https://openagent3.xyz/skills/aws-strands/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/aws-strands/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Strands Agents SDK",
        "body": "Build AI agents in Python using the Strands SDK (Apache-2.0, from AWS).\n\nValidated against: strands-agents==1.23.0, strands-agents-tools==0.2.19"
      },
      {
        "title": "Prerequisites",
        "body": "# Install SDK + tools (via pipx for isolation — recommended)\npipx install strands-agents-builder  # includes strands-agents + strands-agents-tools + CLI\n\n# Or install directly\npip install strands-agents strands-agents-tools"
      },
      {
        "title": "Core Concept: Bedrock Is the Default",
        "body": "Agent() with no model= argument defaults to Amazon Bedrock — specifically us.anthropic.claude-sonnet-4-20250514-v1:0 in us-west-2. This requires AWS credentials. To use a different provider, pass model= explicitly.\n\nDefault model constant: strands.models.bedrock.DEFAULT_BEDROCK_MODEL_ID"
      },
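      {
        "title": "Default Provider Sketch",
        "body": "A minimal sketch of the default behavior described above, using only the BedrockModel class and the default-model constant named in this document (verify against your installed strands-agents version):\n\nfrom strands import Agent\nfrom strands.models import BedrockModel\nfrom strands.models.bedrock import DEFAULT_BEDROCK_MODEL_ID\n\n# Agent() with no model= falls back to the Bedrock default in us-west-2\nagent = Agent()\n\n# Equivalent explicit form; both require AWS credentials\nagent = Agent(model=BedrockModel(model_id=DEFAULT_BEDROCK_MODEL_ID))"
      },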
      {
        "title": "Quick Start — Local Agent (Ollama)",
        "body": "from strands import Agent\nfrom strands.models.ollama import OllamaModel\n\n# host is a required positional argument\nmodel = OllamaModel(\"http://localhost:11434\", model_id=\"qwen3:latest\")\nagent = Agent(model=model)\nresult = agent(\"What is the capital of France?\")\nprint(result)\n\nNote: Not all open-source models support tool-calling. Abliterated models often lose function-calling during the abliteration process. Test with a stock model (qwen3, llama3.x, mistral) first."
      },
      {
        "title": "Quick Start — Bedrock (Default Provider)",
        "body": "from strands import Agent\n\n# No model specified → BedrockModel (Claude Sonnet 4, us-west-2)\n# Requires AWS credentials (~/.aws/credentials or env vars)\nagent = Agent()\nresult = agent(\"Explain quantum computing\")\n\n# Explicit Bedrock model:\nfrom strands.models import BedrockModel\nmodel = BedrockModel(model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\")\nagent = Agent(model=model)"
      },
      {
        "title": "Quick Start — Anthropic (Direct API)",
        "body": "from strands import Agent\nfrom strands.models.anthropic import AnthropicModel\n\n# max_tokens is Required[int] — must be provided\nmodel = AnthropicModel(model_id=\"claude-sonnet-4-20250514\", max_tokens=4096)\nagent = Agent(model=model)\nresult = agent(\"Explain quantum computing\")\n\nRequires ANTHROPIC_API_KEY environment variable."
      },
      {
        "title": "Quick Start — OpenAI",
        "body": "from strands import Agent\nfrom strands.models.openai import OpenAIModel\n\nmodel = OpenAIModel(model_id=\"gpt-4.1\")\nagent = Agent(model=model)\n\nRequires OPENAI_API_KEY environment variable."
      },
      {
        "title": "Creating Custom Tools",
        "body": "Use the @tool decorator. Type hints become the schema; the docstring becomes the description:\n\nfrom strands import Agent, tool\n\n@tool\ndef read_file(path: str) -> str:\n    \"\"\"Read contents of a file at the given path.\n\n    Args:\n        path: Filesystem path to read.\n    \"\"\"\n    with open(path) as f:\n        return f.read()\n\n@tool\ndef write_file(path: str, content: str) -> str:\n    \"\"\"Write content to a file.\n\n    Args:\n        path: Filesystem path to write.\n        content: Text content to write.\n    \"\"\"\n    with open(path, 'w') as f:\n        f.write(content)\n    return f\"Wrote {len(content)} bytes to {path}\"\n\nagent = Agent(model=model, tools=[read_file, write_file])\nagent(\"Read /tmp/test.txt and summarize it\")"
      },
      {
        "title": "ToolContext",
        "body": "Tools can access agent state via ToolContext:\n\nfrom strands import tool\nfrom strands.types.tools import ToolContext\n\n@tool\ndef stateful_tool(query: str, tool_context: ToolContext) -> str:\n    \"\"\"A tool that accesses agent state.\n\n    Args:\n        query: Input query.\n    \"\"\"\n    # Access shared agent state\n    count = tool_context.state.get(\"call_count\", 0) + 1\n    tool_context.state[\"call_count\"] = count\n    return f\"Call #{count}: {query}\""
      },
      {
        "title": "Built-in Tools (46 available)",
        "body": "strands-agents-tools provides pre-built tools:\n\nfrom strands_tools import calculator, file_read, file_write, shell, http_request\nagent = Agent(model=model, tools=[calculator, file_read, shell])\n\nFull list: calculator, file_read, file_write, shell, http_request, editor, image_reader, python_repl, current_time, think, stop, sleep, environment, retrieve, search_video, chat_video, speak, generate_image, generate_image_stability, diagram, journal, memory, agent_core_memory, elasticsearch_memory, mongodb_memory, mem0_memory, rss, cron, batch, workflow, use_agent, use_llm, use_aws, use_computer, load_tool, handoff_to_user, slack, swarm, graph, a2a_client, mcp_client, exa, tavily, bright_data, nova_reels.\n\nHot reload: Agent(load_tools_from_directory=True) watches ./tools/ for changes."
      },
      {
        "title": "MCP Integration",
        "body": "Connect to any Model Context Protocol server. MCPClient implements ToolProvider — pass it directly in the tools list:\n\nfrom strands import Agent\nfrom strands.tools.mcp import MCPClient\nfrom mcp import stdio_client, StdioServerParameters\n\n# MCPClient takes a callable that returns the transport\nmcp = MCPClient(lambda: stdio_client(StdioServerParameters(\n    command=\"uvx\",\n    args=[\"some-mcp-server@latest\"]\n)))\n\n# Use as context manager — MCPClient is a ToolProvider\nwith mcp:\n    agent = Agent(model=model, tools=[mcp])\n    agent(\"Use the MCP tools to do something\")\n\nSSE transport:\n\nfrom mcp.client.sse import sse_client\nmcp = MCPClient(lambda: sse_client(\"http://localhost:8080/sse\"))"
      },
      {
        "title": "Agents as Tools",
        "body": "Nest agents — inner agents become tools for the outer agent:\n\nresearcher = Agent(model=model, system_prompt=\"You are a research assistant.\")\nwriter = Agent(model=model, system_prompt=\"You are a writer.\")\n\norchestrator = Agent(\n    model=model,\n    tools=[researcher, writer],\n    system_prompt=\"You coordinate research and writing tasks.\"\n)\norchestrator(\"Research quantum computing and write a blog post\")"
      },
      {
        "title": "Swarm Pattern",
        "body": "Self-organizing agent teams with shared context and autonomous handoff coordination:\n\nfrom strands.multiagent.swarm import Swarm\n\n# Agents need name + description for handoff identification\nresearcher = Agent(\n    model=model,\n    name=\"researcher\",\n    description=\"Finds and summarizes information\"\n)\nwriter = Agent(\n    model=model,\n    name=\"writer\",\n    description=\"Creates polished content\"\n)\n\nswarm = Swarm(\n    nodes=[researcher, writer],\n    entry_point=researcher,    # optional — defaults to first agent\n    max_handoffs=20,           # default\n    max_iterations=20,         # default\n    execution_timeout=900.0,   # 15 min default\n    node_timeout=300.0         # 5 min per node default\n)\nresult = swarm(\"Research AI agents, then hand off to writer for a blog post\")\n\nSwarm auto-injects a handoff_to_agent tool. Agents hand off by calling it with the target agent's name. Supports interrupt/resume, session persistence, and repetitive-handoff detection."
      },
      {
        "title": "Graph Pattern (DAG)",
        "body": "Deterministic dependency-based execution via GraphBuilder:\n\nfrom strands.multiagent.graph import GraphBuilder\n\nbuilder = GraphBuilder()\nresearch_node = builder.add_node(researcher, node_id=\"research\")\nwriting_node = builder.add_node(writer, node_id=\"writing\")\nbuilder.add_edge(\"research\", \"writing\")\nbuilder.set_entry_point(\"research\")\n\n# Optional: conditional edges\n# builder.add_edge(\"research\", \"writing\",\n#     condition=lambda state: \"complete\" in str(state.completed_nodes))\n\ngraph = builder.build()\nresult = graph(\"Write a blog post about AI agents\")\n\nSupports cycles (feedback loops) with builder.reset_on_revisit(True), execution timeouts, and nested graphs (Graph as a node in another Graph)."
      },
      {
        "title": "A2A Protocol (Agent-to-Agent)",
        "body": "Expose a Strands agent as an A2A-compatible server for inter-agent communication:\n\nfrom strands.multiagent.a2a import A2AServer\n\nserver = A2AServer(\n    agent=my_agent,\n    host=\"127.0.0.1\",\n    port=9000,\n    version=\"0.0.1\"\n)\nserver.start()  # runs uvicorn\n\nConnect to A2A agents with the a2a_client tool from strands-agents-tools. A2A implements Google's Agent-to-Agent protocol for standardized cross-process/cross-network agent communication."
      },
      {
        "title": "Session Persistence",
        "body": "Persist conversations across agent runs:\n\nfrom strands.session.file_session_manager import FileSessionManager\n\nsession = FileSessionManager(session_file_path=\"./sessions/my_session.json\")\nagent = Agent(model=model, session_manager=session)\n\n# Also available:\nfrom strands.session.s3_session_manager import S3SessionManager\nsession = S3SessionManager(bucket_name=\"my-bucket\", session_id=\"session-1\")\n\nBoth Swarm and Graph support session managers for persisting multi-agent state."
      },
      {
        "title": "Bidirectional Streaming (Experimental)",
        "body": "Real-time voice/text conversations with persistent audio streams:\n\nfrom strands.experimental.bidi.agent import BidiAgent\nfrom strands.experimental.bidi.models.nova_sonic import NovaSonicModel\n\n# Supports: NovaSonicModel, GeminiLiveModel, OpenAIRealtimeModel\nmodel = NovaSonicModel(region=\"us-east-1\")\nagent = BidiAgent(model=model, tools=[my_tool])\n\nSupports interruption detection, concurrent tool execution, and continuous back-and-forth audio. Experimental — API subject to change."
      },
      {
        "title": "System Prompts",
        "body": "agent = Agent(\n    model=model,\n    system_prompt=\"You are Hex, a sharp and witty AI assistant.\",\n    tools=[read_file, write_file]\n)\n\nStrands also supports list[SystemContentBlock] for structured system prompts with cache control."
      },
      {
        "title": "Observability",
        "body": "Native OpenTelemetry tracing:\n\nagent = Agent(\n    model=model,\n    trace_attributes={\"project\": \"my-agent\", \"environment\": \"dev\"}\n)\n\nEvery tool call, model invocation, handoff, and lifecycle event is instrumentable."
      },
      {
        "title": "Bedrock-Specific Features",
        "body": "Guardrails: guardrail_id + guardrail_version in BedrockModel config — content filtering, PII detection, input/output redaction\nCache points: System prompt and tool definition caching for cost optimization\nStreaming: On by default, disable with streaming=False\nRegion: Defaults to us-west-2, override via region_name param or AWS_REGION env\nCross-region inference: Model IDs prefixed with us. use cross-region inference profiles"
      },
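      {
        "title": "Bedrock Configuration Sketch",
        "body": "An illustrative BedrockModel configuration built from the parameter names listed above (guardrail_id, guardrail_version, streaming, region_name). The guardrail values are hypothetical placeholders; check the BedrockModel signature in your installed version before relying on them:\n\nfrom strands import Agent\nfrom strands.models import BedrockModel\n\nmodel = BedrockModel(\n    model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\",\n    region_name=\"us-east-1\",         # overrides the us-west-2 default\n    streaming=False,                 # streaming is on by default\n    guardrail_id=\"my-guardrail-id\",  # hypothetical guardrail\n    guardrail_version=\"1\",\n)\nagent = Agent(model=model)"
      },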
      {
        "title": "Scaffolding a New Agent",
        "body": "python3 {baseDir}/scripts/create-agent.py my-agent --provider ollama --model qwen3:latest\npython3 {baseDir}/scripts/create-agent.py my-agent --provider anthropic\npython3 {baseDir}/scripts/create-agent.py my-agent --provider bedrock\npython3 {baseDir}/scripts/create-agent.py my-agent --provider openai --model gpt-4.1\n\nCreates a ready-to-run agent directory with tools, config, and entry point."
      },
      {
        "title": "Running an Agent",
        "body": "python3 {baseDir}/scripts/run-agent.py path/to/agent.py \"Your prompt here\"\npython3 {baseDir}/scripts/run-agent.py path/to/agent.py --interactive"
      },
      {
        "title": "Model Providers Reference (11 total)",
        "body": "ProviderClassInitNotesBedrockBedrockModelBedrockModel(model_id=...)Default, eagerly importedOllamaOllamaModelOllamaModel(\"http://host:11434\", model_id=...)host is positionalAnthropicAnthropicModelAnthropicModel(model_id=..., max_tokens=4096)max_tokens requiredOpenAIOpenAIModelOpenAIModel(model_id=...)OPENAI_API_KEYGeminiGeminiModelGeminiModel(model_id=...)api_key in client_argsMistralMistralModelMistralModel(model_id=...)Mistral API keyLiteLLMLiteLLMModelLiteLLMModel(model_id=...)Meta-provider (Cohere, Groq, etc.)LlamaAPILlamaAPIModelLlamaAPIModel(model_id=...)Meta Llama APIllama.cppLlamaCppModelLlamaCppModel(...)Local server, OpenAI-compatibleSageMakerSageMakerAIModelSageMakerAIModel(...)Custom AWS endpointsWriterWriterModelWriterModel(model_id=...)Writer platform\n\nAll non-Bedrock providers are lazy-loaded — dependencies imported only when referenced.\n\nImport pattern: from strands.models.<provider> import <Class> (or from strands.models import <Class> for lazy-load)."
      },
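      {
        "title": "Provider Import Patterns (Examples)",
        "body": "Two equivalent ways to obtain a provider class, following the import pattern above; the model IDs are illustrative:\n\n# Explicit provider module\nfrom strands.models.ollama import OllamaModel\nmodel = OllamaModel(\"http://localhost:11434\", model_id=\"qwen3:latest\")\n\n# Package-root import; provider dependencies load lazily on first use\nfrom strands.models import AnthropicModel\nmodel = AnthropicModel(model_id=\"claude-sonnet-4-20250514\", max_tokens=4096)"
      },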
      {
        "title": "Tips",
        "body": "Agent() without model= requires AWS credentials (Bedrock default)\nAnthropicModel requires max_tokens — omitting it causes a runtime error\nOllamaModel host is positional: OllamaModel(\"http://...\", model_id=\"...\")\nAbliterated Ollama models often lose tool-calling support — use stock models for tool-using agents\nSwarm agents need name= and description= for handoff routing\nAgent(load_tools_from_directory=True) watches ./tools/ for hot-reloaded tool files\nUse agent.tool.my_tool() to call tools directly without LLM routing\nMCPClient is a ToolProvider — pass it directly in tools=[mcp], don't call list_tools_sync() manually when using with Agent\nSession managers work with Agent, Swarm, and Graph\nPin your strands-agents version — the SDK is young and APIs evolve between releases"
      },
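      {
        "title": "Direct Tool Invocation and Hot Reload (Sketch)",
        "body": "A small sketch of two tips above: calling a tool directly via agent.tool.<name>() and enabling hot reload from ./tools/. The add tool and the Ollama model ID are illustrative:\n\nfrom strands import Agent, tool\nfrom strands.models.ollama import OllamaModel\n\n@tool\ndef add(a: int, b: int) -> int:\n    \"\"\"Add two integers.\"\"\"\n    return a + b\n\nmodel = OllamaModel(\"http://localhost:11434\", model_id=\"qwen3:latest\")\nagent = Agent(model=model, tools=[add])\n\n# Direct invocation, no LLM routing\nresult = agent.tool.add(a=2, b=3)\n\n# Watch ./tools/ for hot-reloaded tool files\nhot_agent = Agent(model=model, load_tools_from_directory=True)"
      }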
    ],
    "body": "Strands Agents SDK\n\nBuild AI agents in Python using the Strands SDK (Apache-2.0, from AWS).\n\nValidated against: strands-agents==1.23.0, strands-agents-tools==0.2.19\n\nPrerequisites\n# Install SDK + tools (via pipx for isolation — recommended)\npipx install strands-agents-builder  # includes strands-agents + strands-agents-tools + CLI\n\n# Or install directly\npip install strands-agents strands-agents-tools\n\nCore Concept: Bedrock Is the Default\n\nAgent() with no model= argument defaults to Amazon Bedrock — specifically us.anthropic.claude-sonnet-4-20250514-v1:0 in us-west-2. This requires AWS credentials. To use a different provider, pass model= explicitly.\n\nDefault model constant: strands.models.bedrock.DEFAULT_BEDROCK_MODEL_ID\n\nQuick Start — Local Agent (Ollama)\nfrom strands import Agent\nfrom strands.models.ollama import OllamaModel\n\n# host is a required positional argument\nmodel = OllamaModel(\"http://localhost:11434\", model_id=\"qwen3:latest\")\nagent = Agent(model=model)\nresult = agent(\"What is the capital of France?\")\nprint(result)\n\n\nNote: Not all open-source models support tool-calling. Abliterated models often lose function-calling during the abliteration process. Test with a stock model (qwen3, llama3.x, mistral) first.\n\nQuick Start — Bedrock (Default Provider)\nfrom strands import Agent\n\n# No model specified → BedrockModel (Claude Sonnet 4, us-west-2)\n# Requires AWS credentials (~/.aws/credentials or env vars)\nagent = Agent()\nresult = agent(\"Explain quantum computing\")\n\n# Explicit Bedrock model:\nfrom strands.models import BedrockModel\nmodel = BedrockModel(model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\")\nagent = Agent(model=model)\n\nQuick Start — Anthropic (Direct API)\nfrom strands import Agent\nfrom strands.models.anthropic import AnthropicModel\n\n# max_tokens is Required[int] — must be provided\nmodel = AnthropicModel(model_id=\"claude-sonnet-4-20250514\", max_tokens=4096)\nagent = Agent(model=model)\nresult = agent(\"Explain quantum computing\")\n\n\nRequires ANTHROPIC_API_KEY environment variable.\n\nQuick Start — OpenAI\nfrom strands import Agent\nfrom strands.models.openai import OpenAIModel\n\nmodel = OpenAIModel(model_id=\"gpt-4.1\")\nagent = Agent(model=model)\n\n\nRequires OPENAI_API_KEY environment variable.\n\nCreating Custom Tools\n\nUse the @tool decorator. 
Type hints become the schema; the docstring becomes the description:\n\nfrom strands import Agent, tool\n\n@tool\ndef read_file(path: str) -> str:\n    \"\"\"Read contents of a file at the given path.\n\n    Args:\n        path: Filesystem path to read.\n    \"\"\"\n    with open(path) as f:\n        return f.read()\n\n@tool\ndef write_file(path: str, content: str) -> str:\n    \"\"\"Write content to a file.\n\n    Args:\n        path: Filesystem path to write.\n        content: Text content to write.\n    \"\"\"\n    with open(path, 'w') as f:\n        f.write(content)\n    return f\"Wrote {len(content)} bytes to {path}\"\n\nagent = Agent(model=model, tools=[read_file, write_file])\nagent(\"Read /tmp/test.txt and summarize it\")\n\nToolContext\n\nTools can access agent state via ToolContext:\n\nfrom strands import tool\nfrom strands.types.tools import ToolContext\n\n@tool\ndef stateful_tool(query: str, tool_context: ToolContext) -> str:\n    \"\"\"A tool that accesses agent state.\n\n    Args:\n        query: Input query.\n    \"\"\"\n    # Access shared agent state\n    count = tool_context.state.get(\"call_count\", 0) + 1\n    tool_context.state[\"call_count\"] = count\n    return f\"Call #{count}: {query}\"\n\nBuilt-in Tools (46 available)\n\nstrands-agents-tools provides pre-built tools:\n\nfrom strands_tools import calculator, file_read, file_write, shell, http_request\nagent = Agent(model=model, tools=[calculator, file_read, shell])\n\n\nFull list: calculator, file_read, file_write, shell, http_request, editor, image_reader, python_repl, current_time, think, stop, sleep, environment, retrieve, search_video, chat_video, speak, generate_image, generate_image_stability, diagram, journal, memory, agent_core_memory, elasticsearch_memory, mongodb_memory, mem0_memory, rss, cron, batch, workflow, use_agent, use_llm, use_aws, use_computer, load_tool, handoff_to_user, slack, swarm, graph, a2a_client, mcp_client, exa, tavily, bright_data, nova_reels.\n\nHot reload: Agent(load_tools_from_directory=True) watches ./tools/ for changes.\n\nMCP Integration\n\nConnect to any Model Context Protocol server. 
MCPClient implements ToolProvider — pass it directly in the tools list:\n\nfrom strands import Agent\nfrom strands.tools.mcp import MCPClient\nfrom mcp import stdio_client, StdioServerParameters\n\n# MCPClient takes a callable that returns the transport\nmcp = MCPClient(lambda: stdio_client(StdioServerParameters(\n    command=\"uvx\",\n    args=[\"some-mcp-server@latest\"]\n)))\n\n# Use as context manager — MCPClient is a ToolProvider\nwith mcp:\n    agent = Agent(model=model, tools=[mcp])\n    agent(\"Use the MCP tools to do something\")\n\n\nSSE transport:\n\nfrom mcp.client.sse import sse_client\nmcp = MCPClient(lambda: sse_client(\"http://localhost:8080/sse\"))\n\nMulti-Agent Patterns\nAgents as Tools\n\nNest agents — inner agents become tools for the outer agent:\n\nresearcher = Agent(model=model, system_prompt=\"You are a research assistant.\")\nwriter = Agent(model=model, system_prompt=\"You are a writer.\")\n\norchestrator = Agent(\n    model=model,\n    tools=[researcher, writer],\n    system_prompt=\"You coordinate research and writing tasks.\"\n)\norchestrator(\"Research quantum computing and write a blog post\")\n\nSwarm Pattern\n\nSelf-organizing agent teams with shared context and autonomous handoff coordination:\n\nfrom strands.multiagent.swarm import Swarm\n\n# Agents need name + description for handoff identification\nresearcher = Agent(\n    model=model,\n    name=\"researcher\",\n    description=\"Finds and summarizes information\"\n)\nwriter = Agent(\n    model=model,\n    name=\"writer\",\n    description=\"Creates polished content\"\n)\n\nswarm = Swarm(\n    nodes=[researcher, writer],\n    entry_point=researcher,    # optional — defaults to first agent\n    max_handoffs=20,           # default\n    max_iterations=20,         # default\n    execution_timeout=900.0,   # 15 min default\n    node_timeout=300.0         # 5 min per node default\n)\nresult = swarm(\"Research AI agents, then hand off to writer for a blog post\")\n\n\nSwarm auto-injects a handoff_to_agent tool. Agents hand off by calling it with the target agent's name. Supports interrupt/resume, session persistence, and repetitive-handoff detection.\n\nGraph Pattern (DAG)\n\nDeterministic dependency-based execution via GraphBuilder:\n\nfrom strands.multiagent.graph import GraphBuilder\n\nbuilder = GraphBuilder()\nresearch_node = builder.add_node(researcher, node_id=\"research\")\nwriting_node = builder.add_node(writer, node_id=\"writing\")\nbuilder.add_edge(\"research\", \"writing\")\nbuilder.set_entry_point(\"research\")\n\n# Optional: conditional edges\n# builder.add_edge(\"research\", \"writing\",\n#     condition=lambda state: \"complete\" in str(state.completed_nodes))\n\ngraph = builder.build()\nresult = graph(\"Write a blog post about AI agents\")\n\n\nSupports cycles (feedback loops) with builder.reset_on_revisit(True), execution timeouts, and nested graphs (Graph as a node in another Graph).\n\nA2A Protocol (Agent-to-Agent)\n\nExpose a Strands agent as an A2A-compatible server for inter-agent communication:\n\nfrom strands.multiagent.a2a import A2AServer\n\nserver = A2AServer(\n    agent=my_agent,\n    host=\"127.0.0.1\",\n    port=9000,\n    version=\"0.0.1\"\n)\nserver.start()  # runs uvicorn\n\n\nConnect to A2A agents with the a2a_client tool from strands-agents-tools. 
A2A implements Google's Agent-to-Agent protocol for standardized cross-process/cross-network agent communication.\n\nSession Persistence\n\nPersist conversations across agent runs:\n\nfrom strands.session.file_session_manager import FileSessionManager\n\nsession = FileSessionManager(session_file_path=\"./sessions/my_session.json\")\nagent = Agent(model=model, session_manager=session)\n\n# Also available:\nfrom strands.session.s3_session_manager import S3SessionManager\nsession = S3SessionManager(bucket_name=\"my-bucket\", session_id=\"session-1\")\n\n\nBoth Swarm and Graph support session managers for persisting multi-agent state.\n\nBidirectional Streaming (Experimental)\n\nReal-time voice/text conversations with persistent audio streams:\n\nfrom strands.experimental.bidi.agent import BidiAgent\nfrom strands.experimental.bidi.models.nova_sonic import NovaSonicModel\n\n# Supports: NovaSonicModel, GeminiLiveModel, OpenAIRealtimeModel\nmodel = NovaSonicModel(region=\"us-east-1\")\nagent = BidiAgent(model=model, tools=[my_tool])\n\n\nSupports interruption detection, concurrent tool execution, and continuous back-and-forth audio. Experimental — API subject to change.\n\nSystem Prompts\nagent = Agent(\n    model=model,\n    system_prompt=\"You are Hex, a sharp and witty AI assistant.\",\n    tools=[read_file, write_file]\n)\n\n\nStrands also supports list[SystemContentBlock] for structured system prompts with cache control.\n\nObservability\n\nNative OpenTelemetry tracing:\n\nagent = Agent(\n    model=model,\n    trace_attributes={\"project\": \"my-agent\", \"environment\": \"dev\"}\n)\n\n\nEvery tool call, model invocation, handoff, and lifecycle event is instrumentable.\n\nBedrock-Specific Features\nGuardrails: guardrail_id + guardrail_version in BedrockModel config — content filtering, PII detection, input/output redaction\nCache points: System prompt and tool definition caching for cost optimization\nStreaming: On by default, disable with streaming=False\nRegion: Defaults to us-west-2, override via region_name param or AWS_REGION env\nCross-region inference: Model IDs prefixed with us. 
use cross-region inference profiles\nScaffolding a New Agent\npython3 {baseDir}/scripts/create-agent.py my-agent --provider ollama --model qwen3:latest\npython3 {baseDir}/scripts/create-agent.py my-agent --provider anthropic\npython3 {baseDir}/scripts/create-agent.py my-agent --provider bedrock\npython3 {baseDir}/scripts/create-agent.py my-agent --provider openai --model gpt-4.1\n\n\nCreates a ready-to-run agent directory with tools, config, and entry point.\n\nRunning an Agent\npython3 {baseDir}/scripts/run-agent.py path/to/agent.py \"Your prompt here\"\npython3 {baseDir}/scripts/run-agent.py path/to/agent.py --interactive\n\nModel Providers Reference (11 total)\nProvider\tClass\tInit\tNotes\nBedrock\tBedrockModel\tBedrockModel(model_id=...)\tDefault, eagerly imported\nOllama\tOllamaModel\tOllamaModel(\"http://host:11434\", model_id=...)\thost is positional\nAnthropic\tAnthropicModel\tAnthropicModel(model_id=..., max_tokens=4096)\tmax_tokens required\nOpenAI\tOpenAIModel\tOpenAIModel(model_id=...)\tOPENAI_API_KEY\nGemini\tGeminiModel\tGeminiModel(model_id=...)\tapi_key in client_args\nMistral\tMistralModel\tMistralModel(model_id=...)\tMistral API key\nLiteLLM\tLiteLLMModel\tLiteLLMModel(model_id=...)\tMeta-provider (Cohere, Groq, etc.)\nLlamaAPI\tLlamaAPIModel\tLlamaAPIModel(model_id=...)\tMeta Llama API\nllama.cpp\tLlamaCppModel\tLlamaCppModel(...)\tLocal server, OpenAI-compatible\nSageMaker\tSageMakerAIModel\tSageMakerAIModel(...)\tCustom AWS endpoints\nWriter\tWriterModel\tWriterModel(model_id=...)\tWriter platform\n\nAll non-Bedrock providers are lazy-loaded — dependencies imported only when referenced.\n\nImport pattern: from strands.models.<provider> import <Class> (or from strands.models import <Class> for lazy-load).\n\nTips\nAgent() without model= requires AWS credentials (Bedrock default)\nAnthropicModel requires max_tokens — omitting it causes a runtime error\nOllamaModel host is positional: OllamaModel(\"http://...\", model_id=\"...\")\nAbliterated Ollama models often lose tool-calling support — use stock models for tool-using agents\nSwarm agents need name= and description= for handoff routing\nAgent(load_tools_from_directory=True) watches ./tools/ for hot-reloaded tool files\nUse agent.tool.my_tool() to call tools directly without LLM routing\nMCPClient is a ToolProvider — pass it directly in tools=[mcp], don't call list_tools_sync() manually when using with Agent\nSession managers work with Agent, Swarm, and Graph\nPin your strands-agents version — the SDK is young and APIs evolve between releases"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/TrippingKelsea/aws-strands",
    "publisherUrl": "https://clawhub.ai/TrippingKelsea/aws-strands",
    "owner": "TrippingKelsea",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/aws-strands",
    "downloadUrl": "https://openagent3.xyz/downloads/aws-strands",
    "agentUrl": "https://openagent3.xyz/skills/aws-strands/agent",
    "manifestUrl": "https://openagent3.xyz/skills/aws-strands/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/aws-strands/agent.md"
  }
}