{
  "schemaVersion": "1.0",
  "item": {
    "slug": "website-usability-test-nova-act",
    "name": "Website Usability Testing using Nova Act",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/adityak6798/website-usability-test-nova-act",
    "canonicalUrl": "https://clawhub.ai/adityak6798/website-usability-test-nova-act",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/website-usability-test-nova-act",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=website-usability-test-nova-act",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "skill.json",
      "README.md",
      "SKILL.md",
      ".gitignore",
      "CHANGELOG.md",
      "assets/report-template.html"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-07T17:22:31.273Z",
      "expiresAt": "2026-05-14T17:22:31.273Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-annual-report",
        "contentDisposition": "attachment; filename=\"afrexai-annual-report-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/website-usability-test-nova-act"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/website-usability-test-nova-act",
    "agentPageUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act/agent",
    "manifestUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Nova Act Usability Testing v1.0.2",
        "body": "AI-orchestrated usability testing with digital twin personas powered by Amazon Nova Act."
      },
      {
        "title": "⚠️ Prerequisites & Credentials",
        "body": "This skill requires an Amazon Nova Act API key.\n\nRequirementDetailsAPI KeyNova Act API key from AWS ConsoleConfig Location~/.openclaw/config/nova-act.jsonFormat{\"apiKey\": \"your-nova-act-api-key-here\"}Dependenciespip3 install nova-act pydantic playwrightBrowserplaywright install chromium (~300MB download)"
      },
      {
        "title": "🔒 Data & Privacy Notice",
        "body": "What this skill accesses:\n\nReads: ~/.openclaw/config/nova-act.json (your API key)\nWrites: ./nova_act_logs/ (trace files with screenshots), ./test_results_adaptive.json, ./nova_act_usability_report.html\n\nWhat trace files contain:\n\nScreenshots of every page visited\nFull page content (HTML, text)\nBrowser actions and AI decisions\n\nRecommendations:\n\nRun tests only on non-production or test environments\nBe aware traces may capture PII or sensitive data visible on tested pages\nReview/delete trace files after use if they contain sensitive content\nConsider running in a sandboxed environment (container/VM) for untrusted sites"
      },
      {
        "title": "Features",
        "body": "Agent-Driven Interpretation: The script no longer interprets responses. YOU (the agent) must:\n\nRun the test script → collect raw data\nRead JSON → interpret each raw_response\nSet goal_achieved and overall_success\nGenerate the report\n\nNo hardcoded regex. No extra API calls. The agent doing the work is already running."
      },
      {
        "title": "Quick Start (For AI Agents)",
        "body": "When a user asks to test a website, YOU (the AI agent) must complete ALL 4 phases:\n\nPhaseWhat HappensWho Does It1. SetupGenerate personas, run test scriptAgent + Script2. CollectScript captures raw Nova Act responsesScript3. InterpretRead JSON, determine goal_achieved for each stepAgent4. ReportGenerate HTML report with interpreted resultsAgent\n\n⚠️ The script does NOT interpret responses or generate the final report. You must do phases 3-4."
      },
      {
        "title": "🎯 Recommended: AI Agent Generates Personas",
        "body": "You're already an AI (Claude) - use your intelligence to generate contextual personas!\n\nimport subprocess\nimport os\nimport sys\nimport json\nimport tempfile\n\n# Step 1: Check dependencies\ntry:\n    import nova_act\n    print(\"✅ Dependencies ready\")\nexcept ImportError:\n    print(\"📦 Dependencies not installed. Please run:\")\n    print(\"   pip3 install nova-act pydantic playwright\")\n    print(\"   playwright install chromium\")\n    sys.exit(1)\n\n# Step 2: Verify Nova Act API key\nconfig_file = os.path.expanduser(\"~/.openclaw/config/nova-act.json\")\nwith open(config_file, 'r') as f:\n    config = json.load(f)\n    if config.get('apiKey') == 'your-nova-act-api-key-here':\n        print(f\"⚠️  Please add your Nova Act API key to {config_file}\")\n        sys.exit(1)\n\n# Step 3: YOU (the AI agent) generate personas\n# Example for https://www.pgatour.com/ (golf tournament site)\nwebsite_url = \"https://www.pgatour.com/\"\n\npersonas = [\n    {\n        \"name\": \"Marcus Chen\",\n        \"archetype\": \"tournament_follower\",\n        \"age\": 42,\n        \"tech_proficiency\": \"high\",\n        \"description\": \"Avid golf fan who follows multiple tours and tracks player stats\",\n        \"goals\": [\n            \"Check current tournament leaderboard\",\n            \"View recent tournament results\",\n            \"Track favorite player performance\"\n        ]\n    },\n    {\n        \"name\": \"Dorothy Williams\",\n        \"archetype\": \"casual_viewer\",\n        \"age\": 68,\n        \"tech_proficiency\": \"low\",\n        \"description\": \"Occasional golf viewer who watches major tournaments\",\n        \"goals\": [\n            \"Find when the next tournament is\",\n            \"See who won recently\",\n            \"Understand how to watch online\"\n        ]\n    }\n]\n\n# Step 4: Save personas and run test\nwith tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:\n    json.dump(personas, f, 
indent=2)\n    personas_file = f.name\n\nskill_dir = os.path.expanduser(\"~/.openclaw/skills/nova-act-usability\")\ntest_script = os.path.join(skill_dir, \"scripts\", \"run_adaptive_test.py\")\n\n# Run with AI-generated personas\nsubprocess.run([sys.executable, test_script, website_url, personas_file])\n\n# Clean up temp file\nos.unlink(personas_file)\n\nPersona Template:\n\n{\n  \"name\": \"FirstName LastName\",\n  \"archetype\": \"descriptive_identifier\",\n  \"age\": 30,\n  \"tech_proficiency\": \"low|medium|high\",\n  \"description\": \"One sentence about who they are\",\n  \"goals\": [\n    \"First goal relevant to this website\",\n    \"Second goal relevant to this website\",\n    \"Third goal relevant to this website\"\n  ]\n}"
      },
      {
        "title": "📝 Alternative: Simple Custom Persona",
        "body": "If user specifies a persona description, pass it as a string:\n\n# User: \"Test PGA Tour site as a golf enthusiast\"\nwebsite_url = \"https://www.pgatour.com/\"\nuser_persona = \"golf enthusiast who follows tournaments closely\"\n\nsubprocess.run([sys.executable, test_script, website_url, user_persona])\n# Script will parse this and create personas automatically"
      },
      {
        "title": "⚠️ Fallback: Auto-Generation (Not Recommended)",
        "body": "Let the script guess personas based on basic category keywords:\n\n# Generic, less contextual personas\nsubprocess.run([sys.executable, test_script, website_url])"
      },
      {
        "title": "Why YOU Should Generate Personas",
        "body": "✅ Advantages:\n\nBetter context: You have full conversation history and domain knowledge\nSmarter inference: You can analyze the URL, industry, and user intent\nNo duplicate API calls: You're already Claude - don't call yourself again!\nUser preferences: You can adapt based on stated preferences\nClarifying questions: You can ask the user about target demographics\n\n❌ What to avoid:\n\nDon't let Python script make its own Claude API call (wasteful)\nDon't rely on generic fallback personas (less accurate)\nDon't skip persona generation (hurts test quality)"
      },
      {
        "title": "💡 Tips for Persona Generation",
        "body": "Analyze the website:\n\nURL domain: .gov → citizens, .edu → students/faculty\nKeywords: \"shop\" → shoppers, \"book\" → travelers, \"play\" → gamers\nIndustry: Golf → fans/players, Banking → customers/businesses\n\nCreate diverse personas:\n\nMix experience levels (beginner, intermediate, expert)\nMix tech proficiency (low, medium, high)\nMix age ranges (young, middle-aged, senior)\nMix motivations (casual, professional, enthusiastic)\n\nGenerate realistic goals:\n\nSpecific to the website's purpose\nActionable and measurable\nMatch the persona's characteristics\n\nExamples by industry:\n\nE-commerce: bargain_hunter, comparison_shopper, impulse_buyer\nNews: daily_reader, topic_follower, casual_browser\nSports: die_hard_fan, casual_viewer, stats_tracker\nTravel: business_traveler, vacation_planner, deal_seeker\nSaaS: power_user, evaluator, beginner"
      },
      {
        "title": "User Invocation",
        "body": "Users can trigger this skill by saying:\n\n\"Test the usability of [website URL]\"\n\"Run a usability test on [website URL]\"\n\"Generate a usability report for [website URL]\"\n\"Evaluate the UX of [website URL]\"\n\"Analyze [website URL] for usability issues\"\nNEW: \"Test the booking flow on [website]\"\nNEW: \"Test the checkout process on [e-commerce site]\"\nNEW: \"Test posting workflow on [social media site]\"\n\nThe AI will automatically:\n\nLoad the Nova Act cookbook for guidance\nAnalyze the page to understand it\nDetect if it's a workflow-based site (booking, e-commerce, social, etc.)\nGenerate contextual personas:\n\nIf custom persona specified → Create persona matching that description\nIf no custom persona → Use Claude AI to infer the 3 most plausible real-world user types\nFallback to category-based personas if AI unavailable\n\n\nCreate realistic test cases (including full workflows when appropriate)\nRun adaptive, iterative tests with Nova Act\nNEW: Apply safety stops before material impact actions (payment, posting, account creation)\nGenerate comprehensive HTML report with trace links\nProvide viewing instructions"
      },
      {
        "title": "Workflow Testing",
        "body": "NEW in this version: The skill now tests complete user journeys, not just information-finding!"
      },
      {
        "title": "Supported Workflows",
        "body": "E-Commerce:\n\nProduct search → Add to cart → Checkout → STOP before payment\n\nFlight/Hotel Booking:\n\nSearch → Select → Fill details → STOP before booking\n\nSocial Media:\n\nCreate post → Add content → STOP before publishing\n\nAccount Signup:\n\nFill registration → STOP before final submission\n\nForm Submission:\n\nFill form → STOP before submit"
      },
      {
        "title": "Safety Guarantees",
        "body": "The skill will NEVER:\n\nComplete actual purchases\nCreate real accounts\nPost publicly\nSend emails/messages\nSubscribe to newsletters\nMake any action with monetary/legal/reputational impact\n\nThe skill will ALWAYS:\n\nTest up to (but not including) the final action\nVerify the final button exists and is accessible\nDocument the safety stop in observations"
      },
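      {
        "title": "Example: Honoring a Safety Stop (Illustrative)",
        "body": "The guarantees above can be sketched in code. This is an illustrative fragment, not part of the shipped scripts; it assumes the nova_session wrapper and BOOL_SCHEMA described later in this document, and the URL and prompts are placeholders:\n\nfrom scripts.nova_session import nova_session\nfrom nova_act import BOOL_SCHEMA\n\nobservations = []\n\nwith nova_session(\"https://example-shop.test\") as nova:\n    # Walk the checkout flow up to, but NOT including, payment\n    nova.act(\"Add the first listed product to the cart\")\n    nova.act(\"Open the cart and proceed to checkout\")\n\n    # SAFETY STOP: verify the final button exists, but never click it\n    button_present = nova.act_get(\n        \"Is there a visible button to complete the purchase?\",\n        schema=BOOL_SCHEMA\n    )\n\n    observations.append({\n        \"step\": \"safety_stop\",\n        \"action\": \"Verified purchase button without clicking\",\n        \"success\": button_present.parsed_response,\n        \"notes\": \"Stopped before payment per safety guarantees\"\n    })"
      },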
      {
        "title": "🧠 Agent Analysis (CRITICAL)",
        "body": "You (the AI agent) must analyze test results! The script collects raw responses but does NOT interpret them."
      },
      {
        "title": "Why Agent Analysis?",
        "body": "The script returns raw Nova Act responses like:\n\n\"No\" - Is there a pricing link?\n\"I don't see any documentation\" - Is there docs?\n\"Amazon Nova Act\" - What is the headline?\n\nYou must determine if each response means the goal was achieved:\n\nResponseGoal Achieved?\"No\"❌ NOT achieved\"I don't see...\"❌ NOT achieved\"Not found\"❌ NOT achieved\"Yes, I found...\"✅ Achieved\"Amazon Nova Act\" (content)✅ Achieved\"The pricing is $29/mo\"✅ Achieved"
      },
      {
        "title": "Result Data Structure",
        "body": "After the test script runs, read the JSON results. Each step contains:\n\n{\n    \"step_name\": \"check_nav_for_pricing\",\n    \"prompt\": \"Is there a pricing link in the navigation?\",\n    \"expected_outcome\": \"Find pricing in navigation\",\n    \"raw_response\": \"No\",\n    \"api_success\": true,\n    \"needs_agent_analysis\": true,\n    \"attempts\": [\n        {\n            \"prompt\": \"Is there a pricing link in the navigation?\",\n            \"response\": \"No\",\n            \"approach\": \"original\"\n        }\n    ]\n}\n\nKey fields you analyze:\n\nraw_response: The actual Nova Act response - YOU determine what it means\napi_success: Did the API call work? (script handles this)\nneeds_agent_analysis: Always true - your cue to interpret\nattempts: All attempts made (script tries up to 3 alternative approaches)"
      },
      {
        "title": "How to Analyze",
        "body": "For each step, determine:\n\ngoal_achieved: Did the response indicate success or failure?\nfriction_level: How hard was it? (attempts.length > 1 = friction)\nobservations: UX insights from the response\n\nAnalysis example:\n\nStep 1: \"Is there a pricing link?\" \n  → Response: \"No\" (1 attempt)\n  → Goal achieved: NO (explicit negative)\n  → Friction: HIGH (not discoverable)\n\nStep 2: \"What is the headline?\" \n  → Response: \"Amazon Nova Act\" (1 attempt)\n  → Goal achieved: YES (actual content)\n  → Friction: LOW (immediately visible)\n\nStep 3: \"Find documentation\" \n  → Response: \"I found a docs link in the footer\" (3 attempts)\n  → Goal achieved: YES (found eventually)\n  → Friction: MEDIUM (required multiple approaches)"
      },
      {
        "title": "Helper Functions (For Script Integration)",
        "body": "The response_interpreter.py provides helpers if you want structured prompts:\n\nfrom scripts.response_interpreter import (\n    format_for_agent_analysis,\n    create_agent_prompt_for_interpretation,\n    create_agent_prompt_for_alternative\n)\n\n# Format all results for analysis\nformatted = format_for_agent_analysis(results)\n\n# Get interpretation prompt for one step\nprompt = create_agent_prompt_for_interpretation(step_result)\n\n# Get retry prompt when goal not achieved  \nretry_prompt = create_agent_prompt_for_alternative(\n    original_prompt=\"Is there a pricing link?\",\n    failed_response=\"No\",\n    attempt_number=2\n)"
      },
      {
        "title": "Complete Analysis Workflow (MANDATORY)",
        "body": "The script does NOT generate the final report automatically. You (the agent) must:\n\nRun the test script → outputs test_results_adaptive.json with raw data\nRead the JSON into your context\nInterpret each step → set goal_achieved: true/false based on raw_response\nSet overall success → set overall_success: true/false on each test\nSave updated JSON\nCall report generator with interpreted results\n\nStep-by-step code for the agent to execute:\n\nimport json\nimport os\nimport sys\n\n# Add skill scripts to path\nsys.path.insert(0, os.path.expanduser(\"~/.openclaw/skills/nova-act-usability/scripts\"))\nfrom enhanced_report_generator import generate_enhanced_report\n\n# 1. Read raw results\nwith open('test_results_adaptive.json', 'r') as f:\n    results = json.load(f)\n\n# 2. YOU (the agent) interpret each step\nfor test in results:\n    goals_achieved = 0\n    for step in test.get('steps', []):\n        raw = step.get('raw_response', '')\n        \n        # AGENT INTERPRETS: Does this response indicate goal was achieved?\n        # You decide based on the response content and expected outcome\n        # Example interpretations:\n        #   \"No\" → goal_achieved = False\n        #   \"Leaderboard, News, Schedule, Players\" → goal_achieved = True (content found)\n        #   \"Yes\" → goal_achieved = True\n        #   \"I don't see any...\" → goal_achieved = False\n        \n        step['goal_achieved'] = ???  # YOU set this based on your interpretation\n        if step['goal_achieved']:\n            goals_achieved += 1\n    \n    # 3. Set overall success (e.g., >= 50% goals achieved)\n    total = len(test.get('steps', []))\n    test['goals_achieved'] = goals_achieved\n    test['overall_success'] = (goals_achieved / total >= 0.5) if total > 0 else False\n\n# 4. Save interpreted results\nwith open('test_results_adaptive.json', 'w') as f:\n    json.dump(results, f, indent=2)\n\n# 5. 
Generate report with interpreted data\npage_analysis = {\n    'title': '...',  # From your earlier analysis\n    'purpose': '...',\n    'navigation': [...]\n}\nall_traces = []\nfor r in results:\n    all_traces.extend(r.get('trace_files', []))\n\nreport_path = generate_enhanced_report(page_analysis, results, all_traces)\nprint(f\"Report: {report_path}\")\n\nWhy the agent must interpret:\n\nNo hardcoded regex or pattern matching\nYou understand context (what \"Yes\" means for this specific question)\nYou can reason about partial success, edge cases\nNo duplicate Claude API calls - you're already running!"
      },
      {
        "title": "⚠️ Critical: Keep Nova Act Prompts Simple",
        "body": "Nova Act is a browser automation tool, NOT a reasoning engine.\n\nThe Claude agent (you) does all reasoning about:\n\nWhat to test based on the persona\nWhether results are good or bad\nWhat the UX implications are\n\nNova Act just:\n\nClicks, types, scrolls\nReports what it sees"
      },
      {
        "title": "❌ WRONG: Asking Nova Act to reason",
        "body": "# DON'T ask Nova Act to think about personas\nnova.act(\"As a beginner user, can you easily find the documentation?\")\nnova.act(\"Would a business professional find the pricing clear?\")\nnova.act(\"Is this task accomplishable for someone with low technical skills?\")"
      },
      {
        "title": "✅ RIGHT: Simple, direct browser commands",
        "body": "# Simple browser actions\nnova.act(\"Click the Documentation link in the navigation\")\nnova.act(\"Find and click a link containing 'Pricing'\")\nnova.act_get(\"What text is displayed in the main heading?\")\nnova.act_get(\"List the navigation menu items visible on this page\")"
      },
      {
        "title": "The Correct Workflow",
        "body": "Agent (you) decides what to test based on persona: \"Dorothy is 68 with low tech skills - she wants to know how to watch golf online\"\nAgent generates simple Nova Act prompts: \"Click 'Watch & Listen' in the navigation\"\nNova Act executes browser task and returns raw results: \"Clicked Watch & Listen, now on video page\"\nAgent interprets results: \"Dorothy would find this confusing because the options are unclear...\""
      },
      {
        "title": "How This Works",
        "body": "You (the AI) are the orchestrator. This skill provides:\n\nNova Act cookbook (references/nova-act-cookbook.md) - Best practices, workflow patterns, and safety guidelines (automatically loaded at test start)\nAdaptive test orchestrator (run_adaptive_test.py) - Main execution script with workflow detection\nDynamic strategy generator (scripts/dynamic_exploration.py) - Generates workflow-appropriate test strategies\nSession management (scripts/nova_session.py) - Nova Act wrapper\nReport generator (enhanced_report_generator.py) - Auto-generated HTML reports\n\nExecution Flow:"
      },
      {
        "title": "CRITICAL: Check Dependencies First",
        "body": "Before running ANY test, check if dependencies are installed:\n\n# Check if nova-act is installed\npython3 -c \"import nova_act\" 2>/dev/null\nif [ $? -ne 0 ]; then\n    echo \"Dependencies not installed. Please run:\"\n    echo \"  pip3 install nova-act pydantic playwright\"\n    echo \"  playwright install chromium\"\n    exit 1\nfi\n\n# Check API key\nif ! grep -q '\"apiKey\":.*[^\"]' ~/.openclaw/config/nova-act.json; then\n    echo \"⚠️  Please add your Nova Act API key to ~/.openclaw/config/nova-act.json\"\n    exit 1\nfi\n\nOr use Python to check:\n\nimport sys\n\n# Check if nova-act is installed\ntry:\n    import nova_act\n    print(\"✅ Dependencies already installed\")\nexcept ImportError:\n    print(\"📦 Dependencies not installed. Please run:\")\n    print(\"   pip3 install nova-act pydantic playwright\")\n    print(\"   playwright install chromium\")\n    sys.exit(1)"
      },
      {
        "title": "Running Tests (After Dependencies Confirmed)",
        "body": "When a user asks for usability testing:\n\n# Find the skill directory\nSKILL_DIR=~/.openclaw/skills/nova-act-usability\n\n# Run the adaptive test script\npython3 \"$SKILL_DIR/scripts/run_adaptive_test.py\" \"https://example.com\"\n\n# This will:\n# - Create nova_act_logs/ in current directory\n# - Create test_results_adaptive.json in current directory\n# - Create nova_act_usability_report.html in current directory\n# - Provide 60-second status updates during test"
      },
      {
        "title": "⏱️ Timeout Guidance",
        "body": "Recommended timeout: 30 minutes (1800 seconds)\n\nFull usability tests with 3 personas × 3 goals = 9 tests can take 10-20+ minutes depending on:\n\nWebsite load times (slow sites like media-heavy sports sites take longer)\nNova Act API response times (each act() call takes 5-60 seconds)\nNetwork conditions\n\nGraceful shutdown: If the test is interrupted (timeout, SIGTERM, SIGINT), it will:\n\nSave all completed test results to test_results_adaptive.json\nGenerate a partial report clearly marked as incomplete\nShow how many tests completed vs planned\n\nFor shorter tests: Use fewer personas or goals:\n\n# Quick test with 1 persona\npersonas = [{\"name\": \"Test User\", \"archetype\": \"casual\", ...}]"
      },
      {
        "title": "What You (the AI) Need to Do:",
        "body": "Check dependencies (run the check above)\nIf missing: Tell user to run pip3 install nova-act pydantic playwright && playwright install chromium\nIf present: Extract the website URL from user's request\nRun the test with the URL as argument\nMonitor progress (status updates every 60 seconds)\nShare the report viewing instructions with user"
      },
      {
        "title": "Quick Start",
        "body": "When user requests usability testing:\n\nimport subprocess\nimport os\n\n# Get skill directory\nskill_dir = os.path.expanduser(\"~/.openclaw/skills/nova-act-usability\")\nif not os.path.exists(skill_dir):\n    # Try workspace location\n    skill_dir = os.path.join(os.getcwd(), \"nova-act-usability\")\n\nscript_path = os.path.join(skill_dir, \"scripts\", \"run_adaptive_test.py\")\n\n# Run test\nresult = subprocess.run(\n    [\"python3\", script_path, \"https://example.com\"],\n    env={**os.environ, \"NOVA_ACT_SKIP_PLAYWRIGHT_INSTALL\": \"1\"},\n    capture_output=True,\n    text=True\n)\n\nprint(result.stdout)"
      },
      {
        "title": "Detailed Workflow (Internal)",
        "body": "The adaptive test script (run_adaptive_test.py) handles:"
      },
      {
        "title": "Step 1: Page Analysis",
        "body": "Loads the page with Nova Act\nExtracts title, navigation, purpose\nIdentifies key elements (docs, demo, pricing)"
      },
      {
        "title": "Step 2: Contextual Persona Generation",
        "body": "Creates personas based on what the page offers\nDeveloper persona if API/code focused\nBusiness persona for evaluation\nBeginner persona if demo available"
      },
      {
        "title": "Step 3: Realistic Test Case Generation",
        "body": "Top 3 use cases per persona\nBased on actual page content\nMatched to persona goals"
      },
      {
        "title": "Step 4: Iterative Test Execution",
        "body": "For each persona + task combination:\n\nfrom scripts.nova_session import nova_session\nfrom nova_act import BOOL_SCHEMA\nimport time\n\nobservations = []\n\nwith nova_session(website_url) as nova:\n    start_time = time.time()\n    \n    # Initial navigation\n    observations.append({\n        \"step\": \"navigate\",\n        \"action\": f\"Loaded {website_url}\",\n        \"success\": True,\n        \"notes\": \"Initial page load\"\n    })\n    \n    # Execute task step-by-step (AI-orchestrated)\n    # Break into small act() calls based on cookbook guidance\n    \n    # Example: \"Find pricing information\" task\n    \n    # Step 1: Look for pricing link\n    nova.act(\"Look for a link or button for pricing, plans, or subscription\")\n    found = nova.act_get(\n        \"Is there a visible pricing or plans link?\",\n        schema=BOOL_SCHEMA\n    )\n    \n    observations.append({\n        \"step\": \"find_pricing_link\",\n        \"action\": \"Search for pricing navigation\",\n        \"success\": found.parsed_response,\n        \"notes\": \"Easy to find\" if found.parsed_response else \"Not immediately visible - UX friction\"\n    })\n    \n    if found.parsed_response:\n        # Step 2: Navigate to pricing\n        nova.act(\"Click on the pricing or plans link\")\n        \n        # Step 3: Analyze pricing page\n        is_clear = nova.act_get(\n            \"Is the pricing information clearly displayed with prices and features?\",\n            schema=BOOL_SCHEMA\n        )\n        \n        observations.append({\n            \"step\": \"view_pricing\",\n            \"action\": \"Accessed pricing page\",\n            \"success\": is_clear.parsed_response,\n            \"notes\": \"Clear pricing display\" if is_clear.parsed_response else \"Pricing unclear or confusing\"\n        })\n    else:\n        # Alternative path - try search\n        nova.act(\"Look for a search function\")\n        # ... 
continue orchestrating\n    \n    duration = time.time() - start_time\n    \n    # Document overall task result\n    task_success = all(obs[\"success\"] for obs in observations if obs[\"success\"] is not None)\n    \n    results.append({\n        \"persona\": persona_name,\n        \"task\": task_description,\n        \"success\": task_success,\n        \"duration\": duration,\n        \"observations\": observations,\n        \"friction_points\": [obs for obs in observations if not obs.get(\"success\")]\n    })"
      },
      {
        "title": "Step 5: Pool and Analyze Results",
        "body": "After all tests:\n\nIdentify common friction points across personas\nNote accessibility issues for low-tech personas\nFlag efficiency problems (too many steps)\nDocument task failures (major UX issues)"
      },
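      {
        "title": "Example: Pooling Friction Points (Illustrative)",
        "body": "The pooling step above can be sketched as follows. This is an illustrative fragment, not part of the shipped scripts; it assumes each entry in results has the persona, task, success, and friction_points fields shown in Step 4:\n\nfrom collections import Counter\n\n# Count how often each step caused friction across personas\nfriction_counts = Counter(\n    obs[\"step\"]\n    for r in results\n    for obs in r.get(\"friction_points\", [])\n)\n\n# Steps that tripped up more than one persona are likely systemic UX issues\ncommon_friction = [step for step, n in friction_counts.items() if n > 1]\n\n# Outright task failures are the most severe findings\nfailures = [(r[\"persona\"], r[\"task\"]) for r in results if not r[\"success\"]]"
      },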
      {
        "title": "Step 6: Generate Report",
        "body": "import json\nfrom scripts.enhanced_report_generator import generate_enhanced_report\n\n# Save results\nwith open(\"test_results_adaptive.json\", \"w\") as f:\n    json.dump(results, f, indent=2)\n\n# Generate HTML report\nreport_path = generate_enhanced_report(\n    page_analysis=page_analysis,\n    results=test_results\n)\n\nprint(f\"Report: {report_path}\")"
      },
      {
        "title": "Dynamic Task Decomposition",
        "body": "The AI should decide how to break down each task based on:\n\nWebsite complexity\nPersona's technical level\nTask nature (navigation vs data entry vs search)\n\nLow-tech persona example:\n\n# More explicit, step-by-step\nnova.act(\"Look for a button labeled 'Contact' or 'Contact Us'\")\nnova.act(\"Click on the Contact button\")\nresult = nova.act_get(\"Is there a phone number or email address visible?\")\n\nHigh-tech persona example:\n\n# Test efficiency features\nnova.act(\"Look for keyboard shortcuts or quick access features\")\nnova.act(\"Try to use search (Ctrl+K or Cmd+K)\")"
      },
      {
        "title": "Real-Time Observation",
        "body": "After EVERY act() call, analyze:\n\nDid it succeed?\nWas the UI element easy to find?\nWas the label clear?\nHow many attempts needed?\nAny error messages?\n\nDocument friction immediately in observations."
      },
      {
        "title": "Persona-Aware Prompting",
        "body": "Adapt act() prompts to persona characteristics:\n\nElderly/low-tech: Look for obvious, labeled buttons; read everything\nPower users: Try keyboard shortcuts, advanced features\nMobile users: Test mobile responsiveness, touch targets\nScreen reader users: Test keyboard navigation, ARIA labels"
      },
      {
        "title": "references/nova-act-cookbook.md",
        "body": "MUST READ before starting any test. Contains best practices for:\n\nEffective act() prompting\nTask decomposition strategies\nData extraction patterns\nError handling\nPersona adaptation"
      },
      {
        "title": "references/persona-examples.md",
        "body": "Template personas with detailed profiles:\n\nTech-savvy millennial\nElderly first-timer\nBusy professional\nStudent/budget-conscious\nAccessibility-focused\nInternational/non-native speaker"
      },
      {
        "title": "scripts/nova_session.py",
        "body": "Thin wrapper providing Nova Act session primitive:\n\nwith nova_session(url, headless=True, logs_dir=\"./logs\") as nova:\n    nova.act(\"action\")\n    result = nova.act_get(\"query\", schema=Schema)"
      },
      {
        "title": "scripts/enhanced_report_generator.py",
        "body": "Compiles observations into HTML usability report with trace file links."
      },
      {
        "title": "assets/report-template.html",
        "body": "Professional HTML template for usability reports."
      },
      {
        "title": "⚠️ IMPORTANT: First-Time Setup Required",
        "body": "This skill requires dependencies that must be installed before use."
      },
      {
        "title": "For AI Agents: Dependency Check",
        "body": "ALWAYS check if dependencies are installed before running tests:\n\n# Quick dependency check\ntry:\n    import nova_act\n    print(\"✅ Dependencies installed\")\nexcept ImportError:\n    print(\"📦 Dependencies not installed. Please run:\")\n    print(\"   pip3 install nova-act pydantic playwright\")\n    print(\"   playwright install chromium\")\n    print(\"\")\n    print(\"This will take 2-3 minutes to download browsers (~300MB)\")"
      },
      {
        "title": "For Users: One-Time Setup",
        "body": "Step 1: Install Python packages\n\npip3 install nova-act pydantic playwright\n\nStep 2: Install Playwright browser\n\nplaywright install chromium\n\nStep 3: Configure API key\n\nGet your Nova Act API key from AWS Console\nCreate config file:\n\nmkdir -p ~/.openclaw/config\necho '{\"apiKey\": \"your-key-here\"}' > ~/.openclaw/config/nova-act.json\n\nReplace your-key-here with your actual Nova Act API key"
      },
      {
        "title": "Example: AI-Orchestrated Test",
        "body": "User request: \"Test example.com for elderly users\"\n\nAI orchestration:\n\nRead references/nova-act-cookbook.md\nRead references/persona-examples.md\nGenerate elderly persona (Dorothy, 72, low tech proficiency)\nGenerate tasks:\n\n\"Find contact information\"\n\"Read about services\"\n\"Navigate to FAQ\"\n\n\nFor each task, dynamically orchestrate Nova Act:\n\nStart session\nExecute small act() steps\nObserve and analyze each result\nTake notes on friction (small text, unclear labels, etc.)\nContinue or adapt based on observations\n\n\nPool observations\nGenerate HTML report with findings and recommendations\n\nThe AI decides every step. The skill just provides tools and guidance."
      },
      {
        "title": "File Structure",
        "body": "nova-act-usability/\n├── SKILL.md                          # This file\n├── README.md                         # User documentation\n├── skill.json                        # Skill manifest\n├── scripts/\n│   ├── run_adaptive_test.py          # Main orchestrator (accepts URL arg)\n│   ├── nova_session.py               # Session wrapper\n│   ├── enhanced_report_generator.py  # HTML report generator\n│   └── trace_finder.py               # Extract trace file paths\n├── references/\n│   ├── nova-act-cookbook.md          # Best practices\n│   └── persona-examples.md           # Template personas\n└── assets/\n    └── report-template.html          # HTML template"
      },
      {
        "title": "Output Files (Created in Working Directory)",
        "body": "When you run a test, these files are created in your current working directory:\n\n./\n├── nova_act_logs/                    # Nova Act trace files\n│   ├── act_<id>_output.html         # Session recordings\n│   └── ...\n├── test_results_adaptive.json        # Raw test results\n└── nova_act_usability_report.html   # Final report\n\nAll paths are relative - works from any installation location!"
      }
    ],
    "body": "Nova Act Usability Testing v1.0.2\n\nAI-orchestrated usability testing with digital twin personas powered by Amazon Nova Act.\n\n⚠️ Prerequisites & Credentials\n\nThis skill requires an Amazon Nova Act API key.\n\nRequirement\tDetails\nAPI Key\tNova Act API key from AWS Console\nConfig Location\t~/.openclaw/config/nova-act.json\nFormat\t{\"apiKey\": \"your-nova-act-api-key-here\"}\nDependencies\tpip3 install nova-act pydantic playwright\nBrowser\tplaywright install chromium (~300MB download)\n🔒 Data & Privacy Notice\n\nWhat this skill accesses:\n\nReads: ~/.openclaw/config/nova-act.json (your API key)\nWrites: ./nova_act_logs/ (trace files with screenshots), ./test_results_adaptive.json, ./nova_act_usability_report.html\n\nWhat trace files contain:\n\nScreenshots of every page visited\nFull page content (HTML, text)\nBrowser actions and AI decisions\n\nRecommendations:\n\nRun tests only on non-production or test environments\nBe aware traces may capture PII or sensitive data visible on tested pages\nReview/delete trace files after use if they contain sensitive content\nConsider running in a sandboxed environment (container/VM) for untrusted sites\nFeatures\n\nAgent-Driven Interpretation: The script no longer interprets responses. YOU (the agent) must:\n\nRun the test script → collect raw data\nRead JSON → interpret each raw_response\nSet goal_achieved and overall_success\nGenerate the report\n\nNo hardcoded regex. No extra API calls. The agent doing the work is already running.\n\nQuick Start (For AI Agents)\n\nWhen a user asks to test a website, YOU (the AI agent) must complete ALL 4 phases:\n\nPhase\tWhat Happens\tWho Does It\n1. Setup\tGenerate personas, run test script\tAgent + Script\n2. Collect\tScript captures raw Nova Act responses\tScript\n3. Interpret\tRead JSON, determine goal_achieved for each step\tAgent\n4. 
Report\tGenerate HTML report with interpreted results\tAgent\n\n⚠️ The script does NOT interpret responses or generate the final report. You must do phases 3-4.\n\n🎯 Recommended: AI Agent Generates Personas\n\nYou're already an AI (Claude) - use your intelligence to generate contextual personas!\n\nimport subprocess\nimport os\nimport sys\nimport json\nimport tempfile\n\n# Step 1: Check dependencies\ntry:\n    import nova_act\n    print(\"✅ Dependencies ready\")\nexcept ImportError:\n    print(\"📦 Dependencies not installed. Please run:\")\n    print(\"   pip3 install nova-act pydantic playwright\")\n    print(\"   playwright install chromium\")\n    sys.exit(1)\n\n# Step 2: Verify Nova Act API key\nconfig_file = os.path.expanduser(\"~/.openclaw/config/nova-act.json\")\nwith open(config_file, 'r') as f:\n    config = json.load(f)\n    if config.get('apiKey') == 'your-nova-act-api-key-here':\n        print(f\"⚠️  Please add your Nova Act API key to {config_file}\")\n        sys.exit(1)\n\n# Step 3: YOU (the AI agent) generate personas\n# Example for https://www.pgatour.com/ (golf tournament site)\nwebsite_url = \"https://www.pgatour.com/\"\n\npersonas = [\n    {\n        \"name\": \"Marcus Chen\",\n        \"archetype\": \"tournament_follower\",\n        \"age\": 42,\n        \"tech_proficiency\": \"high\",\n        \"description\": \"Avid golf fan who follows multiple tours and tracks player stats\",\n        \"goals\": [\n            \"Check current tournament leaderboard\",\n            \"View recent tournament results\",\n            \"Track favorite player performance\"\n        ]\n    },\n    {\n        \"name\": \"Dorothy Williams\",\n        \"archetype\": \"casual_viewer\",\n        \"age\": 68,\n        \"tech_proficiency\": \"low\",\n        \"description\": \"Occasional golf viewer who watches major tournaments\",\n        \"goals\": [\n            \"Find when the next tournament is\",\n            \"See who won recently\",\n            \"Understand how 
to watch online\"\n        ]\n    }\n]\n\n# Step 4: Save personas and run test\nwith tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:\n    json.dump(personas, f, indent=2)\n    personas_file = f.name\n\nskill_dir = os.path.expanduser(\"~/.openclaw/skills/nova-act-usability\")\ntest_script = os.path.join(skill_dir, \"scripts\", \"run_adaptive_test.py\")\n\n# Run with AI-generated personas\nsubprocess.run([sys.executable, test_script, website_url, personas_file])\n\n# Clean up temp file\nos.unlink(personas_file)\n\n\nPersona Template:\n\n{\n  \"name\": \"FirstName LastName\",\n  \"archetype\": \"descriptive_identifier\",\n  \"age\": 30,\n  \"tech_proficiency\": \"low|medium|high\",\n  \"description\": \"One sentence about who they are\",\n  \"goals\": [\n    \"First goal relevant to this website\",\n    \"Second goal relevant to this website\",\n    \"Third goal relevant to this website\"\n  ]\n}\n\n📝 Alternative: Simple Custom Persona\n\nIf user specifies a persona description, pass it as a string:\n\n# User: \"Test PGA Tour site as a golf enthusiast\"\nwebsite_url = \"https://www.pgatour.com/\"\nuser_persona = \"golf enthusiast who follows tournaments closely\"\n\nsubprocess.run([sys.executable, test_script, website_url, user_persona])\n# Script will parse this and create personas automatically\n\n⚠️ Fallback: Auto-Generation (Not Recommended)\n\nLet the script guess personas based on basic category keywords:\n\n# Generic, less contextual personas\nsubprocess.run([sys.executable, test_script, website_url])\n\nWhy YOU Should Generate Personas\n\n✅ Advantages:\n\nBetter context: You have full conversation history and domain knowledge\nSmarter inference: You can analyze the URL, industry, and user intent\nNo duplicate API calls: You're already Claude - don't call yourself again!\nUser preferences: You can adapt based on stated preferences\nClarifying questions: You can ask the user about target demographics\n\n❌ What to avoid:\n\nDon't let 
Python script make its own Claude API call (wasteful)\nDon't rely on generic fallback personas (less accurate)\nDon't skip persona generation (hurts test quality)\n💡 Tips for Persona Generation\n\nAnalyze the website:\n\nURL domain: .gov → citizens, .edu → students/faculty\nKeywords: \"shop\" → shoppers, \"book\" → travelers, \"play\" → gamers\nIndustry: Golf → fans/players, Banking → customers/businesses\n\nCreate diverse personas:\n\nMix experience levels (beginner, intermediate, expert)\nMix tech proficiency (low, medium, high)\nMix age ranges (young, middle-aged, senior)\nMix motivations (casual, professional, enthusiastic)\n\nGenerate realistic goals:\n\nSpecific to the website's purpose\nActionable and measurable\nMatch the persona's characteristics\n\nExamples by industry:\n\nE-commerce: bargain_hunter, comparison_shopper, impulse_buyer\nNews: daily_reader, topic_follower, casual_browser\nSports: die_hard_fan, casual_viewer, stats_tracker\nTravel: business_traveler, vacation_planner, deal_seeker\nSaaS: power_user, evaluator, beginner\nUser Invocation\n\nUsers can trigger this skill by saying:\n\n\"Test the usability of [website URL]\"\n\"Run a usability test on [website URL]\"\n\"Generate a usability report for [website URL]\"\n\"Evaluate the UX of [website URL]\"\n\"Analyze [website URL] for usability issues\"\nNEW: \"Test the booking flow on [website]\"\nNEW: \"Test the checkout process on [e-commerce site]\"\nNEW: \"Test posting workflow on [social media site]\"\n\nThe AI will automatically:\n\nLoad the Nova Act cookbook for guidance\nAnalyze the page to understand it\nDetect if it's a workflow-based site (booking, e-commerce, social, etc.)\nGenerate contextual personas:\nIf custom persona specified → Create persona matching that description\nIf no custom persona → Use Claude AI to infer the 3 most plausible real-world user types\nFallback to category-based personas if AI unavailable\nCreate realistic test cases (including full workflows when 
appropriate)\nRun adaptive, iterative tests with Nova Act\nNEW: Apply safety stops before material impact actions (payment, posting, account creation)\nGenerate comprehensive HTML report with trace links\nProvide viewing instructions\nWorkflow Testing\n\nNEW in this version: The skill now tests complete user journeys, not just information-finding!\n\nSupported Workflows\n\nE-Commerce:\n\nProduct search → Add to cart → Checkout → STOP before payment\n\nFlight/Hotel Booking:\n\nSearch → Select → Fill details → STOP before booking\n\nSocial Media:\n\nCreate post → Add content → STOP before publishing\n\nAccount Signup:\n\nFill registration → STOP before final submission\n\nForm Submission:\n\nFill form → STOP before submit\nSafety Guarantees\n\nThe skill will NEVER:\n\nComplete actual purchases\nCreate real accounts\nPost publicly\nSend emails/messages\nSubscribe to newsletters\nMake any action with monetary/legal/reputational impact\n\nThe skill will ALWAYS:\n\nTest up to (but not including) the final action\nVerify the final button exists and is accessible\nDocument the safety stop in observations\n🧠 Agent Analysis (CRITICAL)\n\nYou (the AI agent) must analyze test results! The script collects raw responses but does NOT interpret them.\n\nWhy Agent Analysis?\n\nThe script returns raw Nova Act responses like:\n\n\"No\" - Is there a pricing link?\n\"I don't see any documentation\" - Is there docs?\n\"Amazon Nova Act\" - What is the headline?\n\nYou must determine if each response means the goal was achieved:\n\nResponse\tGoal Achieved?\n\"No\"\t❌ NOT achieved\n\"I don't see...\"\t❌ NOT achieved\n\"Not found\"\t❌ NOT achieved\n\"Yes, I found...\"\t✅ Achieved\n\"Amazon Nova Act\" (content)\t✅ Achieved\n\"The pricing is $29/mo\"\t✅ Achieved\nResult Data Structure\n\nAfter the test script runs, read the JSON results. 
Each step contains:\n\n{\n    \"step_name\": \"check_nav_for_pricing\",\n    \"prompt\": \"Is there a pricing link in the navigation?\",\n    \"expected_outcome\": \"Find pricing in navigation\",\n    \"raw_response\": \"No\",\n    \"api_success\": true,\n    \"needs_agent_analysis\": true,\n    \"attempts\": [\n        {\n            \"prompt\": \"Is there a pricing link in the navigation?\",\n            \"response\": \"No\",\n            \"approach\": \"original\"\n        }\n    ]\n}\n\n\nKey fields you analyze:\n\nraw_response: The actual Nova Act response - YOU determine what it means\napi_success: Did the API call work? (script handles this)\nneeds_agent_analysis: Always true - your cue to interpret\nattempts: All attempts made (script tries up to 3 alternative approaches)\nHow to Analyze\n\nFor each step, determine:\n\ngoal_achieved: Did the response indicate success or failure?\nfriction_level: How hard was it? (attempts.length > 1 = friction)\nobservations: UX insights from the response\n\nAnalysis example:\n\nStep 1: \"Is there a pricing link?\" \n  → Response: \"No\" (1 attempt)\n  → Goal achieved: NO (explicit negative)\n  → Friction: HIGH (not discoverable)\n\nStep 2: \"What is the headline?\" \n  → Response: \"Amazon Nova Act\" (1 attempt)\n  → Goal achieved: YES (actual content)\n  → Friction: LOW (immediately visible)\n\nStep 3: \"Find documentation\" \n  → Response: \"I found a docs link in the footer\" (3 attempts)\n  → Goal achieved: YES (found eventually)\n  → Friction: MEDIUM (required multiple approaches)\n\nHelper Functions (For Script Integration)\n\nThe response_interpreter.py provides helpers if you want structured prompts:\n\nfrom scripts.response_interpreter import (\n    format_for_agent_analysis,\n    create_agent_prompt_for_interpretation,\n    create_agent_prompt_for_alternative\n)\n\n# Format all results for analysis\nformatted = format_for_agent_analysis(results)\n\n# Get interpretation prompt for one step\nprompt = 
create_agent_prompt_for_interpretation(step_result)\n\n# Get retry prompt when goal not achieved  \nretry_prompt = create_agent_prompt_for_alternative(\n    original_prompt=\"Is there a pricing link?\",\n    failed_response=\"No\",\n    attempt_number=2\n)\n\nComplete Analysis Workflow (MANDATORY)\n\nThe script does NOT generate the final report automatically. You (the agent) must:\n\nRun the test script → outputs test_results_adaptive.json with raw data\nRead the JSON into your context\nInterpret each step → set goal_achieved: true/false based on raw_response\nSet overall success → set overall_success: true/false on each test\nSave updated JSON\nCall report generator with interpreted results\n\nStep-by-step code for the agent to execute:\n\nimport json\nimport os\nimport sys\n\n# Add skill scripts to path\nsys.path.insert(0, os.path.expanduser(\"~/.openclaw/skills/nova-act-usability/scripts\"))\nfrom enhanced_report_generator import generate_enhanced_report\n\n# 1. Read raw results\nwith open('test_results_adaptive.json', 'r') as f:\n    results = json.load(f)\n\n# 2. YOU (the agent) interpret each step\nfor test in results:\n    goals_achieved = 0\n    for step in test.get('steps', []):\n        raw = step.get('raw_response', '')\n        \n        # AGENT INTERPRETS: Does this response indicate goal was achieved?\n        # You decide based on the response content and expected outcome\n        # Example interpretations:\n        #   \"No\" → goal_achieved = False\n        #   \"Leaderboard, News, Schedule, Players\" → goal_achieved = True (content found)\n        #   \"Yes\" → goal_achieved = True\n        #   \"I don't see any...\" → goal_achieved = False\n        \n        step['goal_achieved'] = None  # YOU replace None with True/False based on your interpretation\n        if step['goal_achieved']:\n            goals_achieved += 1\n    \n    # 3. 
Set overall success (e.g., >= 50% goals achieved)\n    total = len(test.get('steps', []))\n    test['goals_achieved'] = goals_achieved\n    test['overall_success'] = (goals_achieved / total >= 0.5) if total > 0 else False\n\n# 4. Save interpreted results\nwith open('test_results_adaptive.json', 'w') as f:\n    json.dump(results, f, indent=2)\n\n# 5. Generate report with interpreted data\npage_analysis = {\n    'title': '...',  # From your earlier analysis\n    'purpose': '...',\n    'navigation': [...]\n}\nall_traces = []\nfor r in results:\n    all_traces.extend(r.get('trace_files', []))\n\nreport_path = generate_enhanced_report(page_analysis, results, all_traces)\nprint(f\"Report: {report_path}\")\n\n\nWhy the agent must interpret:\n\nNo hardcoded regex or pattern matching\nYou understand context (what \"Yes\" means for this specific question)\nYou can reason about partial success, edge cases\nNo duplicate Claude API calls - you're already running!\n⚠️ Critical: Keep Nova Act Prompts Simple\n\nNova Act is a browser automation tool, NOT a reasoning engine.\n\nThe Claude agent (you) does all reasoning about:\n\nWhat to test based on the persona\nWhether results are good or bad\nWhat the UX implications are\n\nNova Act just:\n\nClicks, types, scrolls\nReports what it sees\n❌ WRONG: Asking Nova Act to reason\n# DON'T ask Nova Act to think about personas\nnova.act(\"As a beginner user, can you easily find the documentation?\")\nnova.act(\"Would a business professional find the pricing clear?\")\nnova.act(\"Is this task accomplishable for someone with low technical skills?\")\n\n✅ RIGHT: Simple, direct browser commands\n# Simple browser actions\nnova.act(\"Click the Documentation link in the navigation\")\nnova.act(\"Find and click a link containing 'Pricing'\")\nnova.act_get(\"What text is displayed in the main heading?\")\nnova.act_get(\"List the navigation menu items visible on this page\")\n\nThe Correct Workflow\nAgent (you) decides what to test based on persona: 
\"Dorothy is 68 with low tech skills - she wants to know how to watch golf online\"\nAgent generates simple Nova Act prompts: \"Click 'Watch & Listen' in the navigation\"\nNova Act executes browser task and returns raw results: \"Clicked Watch & Listen, now on video page\"\nAgent interprets results: \"Dorothy would find this confusing because the options are unclear...\"\nHow This Works\n\nYou (the AI) are the orchestrator. This skill provides:\n\nNova Act cookbook (references/nova-act-cookbook.md) - Best practices, workflow patterns, and safety guidelines (automatically loaded at test start)\nAdaptive test orchestrator (run_adaptive_test.py) - Main execution script with workflow detection\nDynamic strategy generator (scripts/dynamic_exploration.py) - Generates workflow-appropriate test strategies\nSession management (scripts/nova_session.py) - Nova Act wrapper\nReport generator (enhanced_report_generator.py) - Auto-generated HTML reports\n\nExecution Flow:\n\nCRITICAL: Check Dependencies First\n\nBefore running ANY test, check if dependencies are installed:\n\n# Check if nova-act is installed\npython3 -c \"import nova_act\" 2>/dev/null\nif [ $? -ne 0 ]; then\n    echo \"Dependencies not installed. Please run:\"\n    echo \"  pip3 install nova-act pydantic playwright\"\n    echo \"  playwright install chromium\"\n    exit 1\nfi\n\n# Check API key\nif ! grep -q '\"apiKey\":.*[^\"]' ~/.openclaw/config/nova-act.json; then\n    echo \"⚠️  Please add your Nova Act API key to ~/.openclaw/config/nova-act.json\"\n    exit 1\nfi\n\n\nOr use Python to check:\n\nimport sys\n\n# Check if nova-act is installed\ntry:\n    import nova_act\n    print(\"✅ Dependencies already installed\")\nexcept ImportError:\n    print(\"📦 Dependencies not installed. 
Please run:\")\n    print(\"   pip3 install nova-act pydantic playwright\")\n    print(\"   playwright install chromium\")\n    sys.exit(1)\n\nRunning Tests (After Dependencies Confirmed)\n\nWhen a user asks for usability testing:\n\n# Find the skill directory\nSKILL_DIR=~/.openclaw/skills/nova-act-usability\n\n# Run the adaptive test script\npython3 \"$SKILL_DIR/scripts/run_adaptive_test.py\" \"https://example.com\"\n\n# This will:\n# - Create nova_act_logs/ in current directory\n# - Create test_results_adaptive.json in current directory\n# - Create nova_act_usability_report.html in current directory\n# - Provide 60-second status updates during test\n\n⏱️ Timeout Guidance\n\nRecommended timeout: 30 minutes (1800 seconds)\n\nFull usability tests with 3 personas × 3 goals = 9 tests can take 10-20+ minutes depending on:\n\nWebsite load times (slow sites like media-heavy sports sites take longer)\nNova Act API response times (each act() call takes 5-60 seconds)\nNetwork conditions\n\nGraceful shutdown: If the test is interrupted (timeout, SIGTERM, SIGINT), it will:\n\nSave all completed test results to test_results_adaptive.json\nGenerate a partial report clearly marked as incomplete\nShow how many tests completed vs planned\n\nFor shorter tests: Use fewer personas or goals:\n\n# Quick test with 1 persona\npersonas = [{\"name\": \"Test User\", \"archetype\": \"casual\", ...}]\n\nWhat You (the AI) Need to Do:\nCheck dependencies (run the check above)\nIf missing: Tell user to run pip3 install nova-act pydantic playwright && playwright install chromium\nIf present: Extract the website URL from user's request\nRun the test with the URL as argument\nMonitor progress (status updates every 60 seconds)\nShare the report viewing instructions with user\nQuick Start\n\nWhen user requests usability testing:\n\nimport subprocess\nimport os\n\n# Get skill directory\nskill_dir = os.path.expanduser(\"~/.openclaw/skills/nova-act-usability\")\nif not os.path.exists(skill_dir):\n    # 
Try workspace location\n    skill_dir = os.path.join(os.getcwd(), \"nova-act-usability\")\n\nscript_path = os.path.join(skill_dir, \"scripts\", \"run_adaptive_test.py\")\n\n# Run test\nresult = subprocess.run(\n    [\"python3\", script_path, \"https://example.com\"],\n    env={**os.environ, \"NOVA_ACT_SKIP_PLAYWRIGHT_INSTALL\": \"1\"},\n    capture_output=True,\n    text=True\n)\n\nprint(result.stdout)\n\nDetailed Workflow (Internal)\n\nThe adaptive test script (run_adaptive_test.py) handles:\n\nStep 1: Page Analysis\nLoads the page with Nova Act\nExtracts title, navigation, purpose\nIdentifies key elements (docs, demo, pricing)\nStep 2: Contextual Persona Generation\nCreates personas based on what the page offers\nDeveloper persona if API/code focused\nBusiness persona for evaluation\nBeginner persona if demo available\nStep 3: Realistic Test Case Generation\nTop 3 use cases per persona\nBased on actual page content\nMatched to persona goals\nStep 4: Iterative Test Execution\n\nFor each persona + task combination:\n\nfrom scripts.nova_session import nova_session\nfrom nova_act import BOOL_SCHEMA\nimport time\n\nobservations = []\n\nwith nova_session(website_url) as nova:\n    start_time = time.time()\n    \n    # Initial navigation\n    observations.append({\n        \"step\": \"navigate\",\n        \"action\": f\"Loaded {website_url}\",\n        \"success\": True,\n        \"notes\": \"Initial page load\"\n    })\n    \n    # Execute task step-by-step (AI-orchestrated)\n    # Break into small act() calls based on cookbook guidance\n    \n    # Example: \"Find pricing information\" task\n    \n    # Step 1: Look for pricing link\n    nova.act(\"Look for a link or button for pricing, plans, or subscription\")\n    found = nova.act_get(\n        \"Is there a visible pricing or plans link?\",\n        schema=BOOL_SCHEMA\n    )\n    \n    observations.append({\n        \"step\": \"find_pricing_link\",\n        \"action\": \"Search for pricing navigation\",\n        
\"success\": found.parsed_response,\n        \"notes\": \"Easy to find\" if found.parsed_response else \"Not immediately visible - UX friction\"\n    })\n    \n    if found.parsed_response:\n        # Step 2: Navigate to pricing\n        nova.act(\"Click on the pricing or plans link\")\n        \n        # Step 3: Analyze pricing page\n        is_clear = nova.act_get(\n            \"Is the pricing information clearly displayed with prices and features?\",\n            schema=BOOL_SCHEMA\n        )\n        \n        observations.append({\n            \"step\": \"view_pricing\",\n            \"action\": \"Accessed pricing page\",\n            \"success\": is_clear.parsed_response,\n            \"notes\": \"Clear pricing display\" if is_clear.parsed_response else \"Pricing unclear or confusing\"\n        })\n    else:\n        # Alternative path - try search\n        nova.act(\"Look for a search function\")\n        # ... continue orchestrating\n    \n    duration = time.time() - start_time\n    \n    # Document overall task result\n    task_success = all(obs[\"success\"] for obs in observations if obs[\"success\"] is not None)\n    \n    results.append({\n        \"persona\": persona_name,\n        \"task\": task_description,\n        \"success\": task_success,\n        \"duration\": duration,\n        \"observations\": observations,\n        \"friction_points\": [obs for obs in observations if not obs.get(\"success\")]\n    })\n\nStep 5: Pool and Analyze Results\n\nAfter all tests:\n\nIdentify common friction points across personas\nNote accessibility issues for low-tech personas\nFlag efficiency problems (too many steps)\nDocument task failures (major UX issues)\nStep 6: Generate Report\nimport json\nfrom scripts.enhanced_report_generator import generate_enhanced_report\n\n# Save results\nwith open(\"test_results_adaptive.json\", \"w\") as f:\n    json.dump(results, f, indent=2)\n\n# Generate HTML report\nreport_path = generate_enhanced_report(\n    
page_analysis=page_analysis,\n    results=results\n)\n\nprint(f\"Report: {report_path}\")\n\nKey Principles\nDynamic Task Decomposition\n\nThe AI should decide how to break down each task based on:\n\nWebsite complexity\nPersona's technical level\nTask nature (navigation vs data entry vs search)\n\nLow-tech persona example:\n\n# More explicit, step-by-step\nnova.act(\"Look for a button labeled 'Contact' or 'Contact Us'\")\nnova.act(\"Click on the Contact button\")\nresult = nova.act_get(\"Is there a phone number or email address visible?\")\n\n\nHigh-tech persona example:\n\n# Test efficiency features\nnova.act(\"Look for keyboard shortcuts or quick access features\")\nnova.act(\"Try to use search (Ctrl+K or Cmd+K)\")\n\nReal-Time Observation\n\nAfter EVERY act() call, analyze:\n\nDid it succeed?\nWas the UI element easy to find?\nWas the label clear?\nHow many attempts were needed?\nWere there any error messages?\n\nDocument friction immediately in observations.\n\nPersona-Aware Prompting\n\nAdapt act() prompts to persona characteristics:\n\nElderly/low-tech: Look for obvious, labeled buttons; read everything\nPower users: Try keyboard shortcuts, advanced features\nMobile users: Test mobile responsiveness, touch targets\nScreen reader users: Test keyboard navigation, ARIA labels\nResources\nreferences/nova-act-cookbook.md\n\nMUST READ before starting any test. 
Contains best practices for:\n\nEffective act() prompting\nTask decomposition strategies\nData extraction patterns\nError handling\nPersona adaptation\nreferences/persona-examples.md\n\nTemplate personas with detailed profiles:\n\nTech-savvy millennial\nElderly first-timer\nBusy professional\nStudent/budget-conscious\nAccessibility-focused\nInternational/non-native speaker\nscripts/nova_session.py\n\nThin wrapper providing Nova Act session primitive:\n\nwith nova_session(url, headless=True, logs_dir=\"./logs\") as nova:\n    nova.act(\"action\")\n    result = nova.act_get(\"query\", schema=Schema)\n\nscripts/enhanced_report_generator.py\n\nCompiles observations into HTML usability report with trace file links.\n\nassets/report-template.html\n\nProfessional HTML template for usability reports.\n\n⚠️ IMPORTANT: First-Time Setup Required\n\nThis skill requires dependencies that must be installed before use.\n\nFor AI Agents: Dependency Check\n\nALWAYS check if dependencies are installed before running tests:\n\n# Quick dependency check\ntry:\n    import nova_act\n    print(\"✅ Dependencies installed\")\nexcept ImportError:\n    print(\"📦 Dependencies not installed. 
Please run:\")\n    print(\"   pip3 install nova-act pydantic playwright\")\n    print(\"   playwright install chromium\")\n    print(\"\")\n    print(\"This will take 2-3 minutes to download browsers (~300MB)\")\n\nFor Users: One-Time Setup\n\nStep 1: Install Python packages\n\npip3 install nova-act pydantic playwright\n\n\nStep 2: Install Playwright browser\n\nplaywright install chromium\n\n\nStep 3: Configure API key\n\nGet your Nova Act API key from AWS Console\nCreate config file:\nmkdir -p ~/.openclaw/config\necho '{\"apiKey\": \"your-key-here\"}' > ~/.openclaw/config/nova-act.json\n\nReplace your-key-here with your actual Nova Act API key\nExample: AI-Orchestrated Test\n\nUser request: \"Test example.com for elderly users\"\n\nAI orchestration:\n\nRead references/nova-act-cookbook.md\nRead references/persona-examples.md\nGenerate elderly persona (Dorothy, 72, low tech proficiency)\nGenerate tasks:\n\"Find contact information\"\n\"Read about services\"\n\"Navigate to FAQ\"\nFor each task, dynamically orchestrate Nova Act:\nStart session\nExecute small act() steps\nObserve and analyze each result\nTake notes on friction (small text, unclear labels, etc.)\nContinue or adapt based on observations\nPool observations\nGenerate HTML report with findings and recommendations\n\nThe AI decides every step. 
The skill just provides tools and guidance.\n\nFile Structure\nnova-act-usability/\n├── SKILL.md                          # This file\n├── README.md                         # User documentation\n├── skill.json                        # Skill manifest\n├── scripts/\n│   ├── run_adaptive_test.py          # Main orchestrator (accepts URL arg)\n│   ├── nova_session.py               # Session wrapper\n│   ├── enhanced_report_generator.py  # HTML report generator\n│   └── trace_finder.py               # Extract trace file paths\n├── references/\n│   ├── nova-act-cookbook.md          # Best practices\n│   └── persona-examples.md           # Template personas\n└── assets/\n    └── report-template.html          # HTML template\n\n\nOutput Files (Created in Working Directory)\n\nWhen you run a test, these files are created in your current working directory:\n\n./\n├── nova_act_logs/                    # Nova Act trace files\n│   ├── act_<id>_output.html         # Session recordings\n│   └── ...\n├── test_results_adaptive.json        # Raw test results\n└── nova_act_usability_report.html   # Final report\n\n\nAll paths are relative - works from any installation location!"
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/adityak6798/website-usability-test-nova-act",
    "publisherUrl": "https://clawhub.ai/adityak6798/website-usability-test-nova-act",
    "owner": "adityak6798",
    "version": "1.0.3",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act",
    "downloadUrl": "https://openagent3.xyz/downloads/website-usability-test-nova-act",
    "agentUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act/agent",
    "manifestUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/website-usability-test-nova-act/agent.md"
  }
}