# Send Website Usability Test Nova Act to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "nova-act-usability",
    "name": "Website Usability Test Nova Act",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/zouchaoqun/nova-act-usability",
    "canonicalUrl": "https://clawhub.ai/zouchaoqun/nova-act-usability",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/nova-act-usability",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=nova-act-usability",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "CHANGELOG.md",
      "README.md",
      "SKILL.md",
      "_meta.json",
      "assets/report-template.html",
      "references/nova-act-cookbook.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "nova-act-usability",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-10T23:56:15.507Z",
      "expiresAt": "2026-05-17T23:56:15.507Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=nova-act-usability",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=nova-act-usability",
        "contentDisposition": "attachment; filename=\"nova-act-usability-1.0.5.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "nova-act-usability"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/nova-act-usability"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/nova-act-usability",
    "downloadUrl": "https://openagent3.xyz/downloads/nova-act-usability",
    "agentUrl": "https://openagent3.xyz/skills/nova-act-usability/agent",
    "manifestUrl": "https://openagent3.xyz/skills/nova-act-usability/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/nova-act-usability/agent.md"
  }
}
```
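If your agent consumes this manifest programmatically, the fields it typically needs live under `install`. A minimal sketch (the dict literal below is a trimmed stand-in for the full JSON document above, which a real agent would load with `json.load()`):

```python
# Trimmed stand-in for the manifest above; field names match the JSON document.
manifest = {
    "install": {
        "primaryDoc": "SKILL.md",
        "downloadUrl": "/downloads/nova-act-usability",
        "validation": {"installChecklist": ["Use the Yavira download entry."]},
    },
}

install = manifest["install"]
print("Read first:", install["primaryDoc"])        # the doc the agent should open first
print("Download from:", install["downloadUrl"])
for step in install["validation"]["installChecklist"]:
    print("-", step)
```

The same traversal works on the full manifest; only the nesting shown here matters.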
## Documentation

### Nova Act Usability Testing v1.0.5

AI-orchestrated usability testing with digital twin personas powered by Amazon Nova Act.

### ⚠️ Prerequisites & Credentials

This skill requires an Amazon Nova Act API key.

| Requirement | Details |
| --- | --- |
| API Key | Nova Act API key from AWS Console |
| Config Location | `~/.openclaw/config/nova-act.json` |
| Format | `{"apiKey": "your-nova-act-api-key-here"}` |
| Dependencies | `pip3 install nova-act pydantic playwright` |
| Browser | `playwright install chromium` (~300MB download) |

### 🔒 Data & Privacy Notice

What this skill accesses:

- Reads: `~/.openclaw/config/nova-act.json` (your API key)
- Writes: `./nova_act_logs/` (trace files with screenshots), `./test_results_adaptive.json`, `./nova_act_usability_report.html`

What trace files contain:

- Screenshots of every page visited
- Full page content (HTML, text)
- Browser actions and AI decisions

Recommendations:

- Run tests only on non-production or test environments
- Be aware traces may capture PII or sensitive data visible on tested pages
- Review/delete trace files after use if they contain sensitive content
- Consider running in a sandboxed environment (container/VM) for untrusted sites

### Features

Agent-Driven Interpretation: the script no longer interprets responses. YOU (the agent) must:

1. Run the test script → collect raw data
2. Read the JSON → interpret each `raw_response`
3. Set `goal_achieved` and `overall_success`
4. Generate the report

No hardcoded regex. No extra API calls. The agent doing the work is already running.

### Quick Start (For AI Agents)

When a user asks to test a website, YOU (the AI agent) must complete ALL 4 phases:

| Phase | What Happens | Who Does It |
| --- | --- | --- |
| 1. Setup | Generate personas, run test script | Agent + Script |
| 2. Collect | Script captures raw Nova Act responses | Script |
| 3. Interpret | Read JSON, determine `goal_achieved` for each step | Agent |
| 4. Report | Generate HTML report with interpreted results | Agent |

⚠️ The script does NOT interpret responses or generate the final report. You must do phases 3-4.

### 🎯 Recommended: AI Agent Generates Personas

You're already an AI (Claude) - use your intelligence to generate contextual personas!

```python
import subprocess
import os
import sys
import json
import tempfile

# Step 1: Check dependencies
try:
    import nova_act
    print("✅ Dependencies ready")
except ImportError:
    print("📦 Dependencies not installed. Please run:")
    print("   pip3 install nova-act pydantic playwright")
    print("   playwright install chromium")
    sys.exit(1)

# Step 2: Verify Nova Act API key
config_file = os.path.expanduser("~/.openclaw/config/nova-act.json")
if not os.path.exists(config_file):
    print(f"⚠️  Config file not found: {config_file}")
    sys.exit(1)
with open(config_file, 'r') as f:
    config = json.load(f)
    if config.get('apiKey') == 'your-nova-act-api-key-here':
        print(f"⚠️  Please add your Nova Act API key to {config_file}")
        sys.exit(1)

# Step 3: YOU (the AI agent) generate personas
# Example for https://www.pgatour.com/ (golf tournament site)
website_url = "https://www.pgatour.com/"

personas = [
    {
        "name": "Marcus Chen",
        "archetype": "tournament_follower",
        "age": 42,
        "tech_proficiency": "high",
        "description": "Avid golf fan who follows multiple tours and tracks player stats",
        "goals": [
            "Check current tournament leaderboard",
            "View recent tournament results",
            "Track favorite player performance"
        ]
    },
    {
        "name": "Dorothy Williams",
        "archetype": "casual_viewer",
        "age": 68,
        "tech_proficiency": "low",
        "description": "Occasional golf viewer who watches major tournaments",
        "goals": [
            "Find when the next tournament is",
            "See who won recently",
            "Understand how to watch online"
        ]
    }
]

# Step 4: Save personas and run test
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    json.dump(personas, f, indent=2)
    personas_file = f.name

skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
test_script = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")

# Run with AI-generated personas
subprocess.run([sys.executable, test_script, website_url, personas_file])

# Clean up temp file
os.unlink(personas_file)
```

Persona Template:

```json
{
  "name": "FirstName LastName",
  "archetype": "descriptive_identifier",
  "age": 30,
  "tech_proficiency": "low|medium|high",
  "description": "One sentence about who they are",
  "goals": [
    "First goal relevant to this website",
    "Second goal relevant to this website",
    "Third goal relevant to this website"
  ]
}
```
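The template above can be checked mechanically before handing personas to the test script. A hedged sketch (the validator is illustrative, not part of the skill; field names follow the template):

```python
# Minimal persona sanity check based on the template fields above.
REQUIRED_FIELDS = {"name", "archetype", "age", "tech_proficiency", "description", "goals"}
PROFICIENCY_LEVELS = {"low", "medium", "high"}

def validate_persona(persona: dict) -> list:
    """Return a list of problems; an empty list means the persona looks usable."""
    problems = []
    missing = REQUIRED_FIELDS - persona.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if persona.get("tech_proficiency") not in PROFICIENCY_LEVELS:
        problems.append("tech_proficiency must be low, medium, or high")
    goals = persona.get("goals", [])
    if not isinstance(goals, list) or not goals:
        problems.append("goals must be a non-empty list")
    return problems

persona = {
    "name": "Test User",
    "archetype": "casual_viewer",
    "age": 30,
    "tech_proficiency": "medium",
    "description": "Example persona",
    "goals": ["Find the homepage headline"],
}
print(validate_persona(persona))  # prints []
```

Running this before the test script catches malformed personas early instead of mid-run.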

### 📝 Alternative: Simple Custom Persona

If the user specifies a persona description, pass it as a string:

```python
# User: "Test PGA Tour site as a golf enthusiast"
# (test_script, subprocess, and sys as defined in the previous example)
website_url = "https://www.pgatour.com/"
user_persona = "golf enthusiast who follows tournaments closely"

subprocess.run([sys.executable, test_script, website_url, user_persona])
# Script will parse this and create personas automatically
```

### ⚠️ Fallback: Auto-Generation (Not Recommended)

Let the script guess personas based on basic category keywords:

```python
# Generic, less contextual personas
subprocess.run([sys.executable, test_script, website_url])
```

### Why YOU Should Generate Personas

✅ Advantages:

- Better context: you have full conversation history and domain knowledge
- Smarter inference: you can analyze the URL, industry, and user intent
- No duplicate API calls: you're already Claude, so don't call yourself again!
- User preferences: you can adapt based on stated preferences
- Clarifying questions: you can ask the user about target demographics

❌ What to avoid:

- Don't let the Python script make its own Claude API call (wasteful)
- Don't rely on generic fallback personas (less accurate)
- Don't skip persona generation (hurts test quality)

### 💡 Tips for Persona Generation

Analyze the website:

- URL domain: `.gov` → citizens, `.edu` → students/faculty
- Keywords: "shop" → shoppers, "book" → travelers, "play" → gamers
- Industry: golf → fans/players, banking → customers/businesses

Create diverse personas:

- Mix experience levels (beginner, intermediate, expert)
- Mix tech proficiency (low, medium, high)
- Mix age ranges (young, middle-aged, senior)
- Mix motivations (casual, professional, enthusiastic)

Generate realistic goals:

- Specific to the website's purpose
- Actionable and measurable
- Matched to the persona's characteristics

Examples by industry:

- E-commerce: bargain_hunter, comparison_shopper, impulse_buyer
- News: daily_reader, topic_follower, casual_browser
- Sports: die_hard_fan, casual_viewer, stats_tracker
- Travel: business_traveler, vacation_planner, deal_seeker
- SaaS: power_user, evaluator, beginner

### User Invocation

Users can trigger this skill by saying:

- "Test the usability of [website URL]"
- "Run a usability test on [website URL]"
- "Generate a usability report for [website URL]"
- "Evaluate the UX of [website URL]"
- "Analyze [website URL] for usability issues"
- NEW: "Test the booking flow on [website]"
- NEW: "Test the checkout process on [e-commerce site]"
- NEW: "Test posting workflow on [social media site]"

The AI will automatically:

1. Load the Nova Act cookbook for guidance
2. Analyze the page to understand it
3. Detect whether it's a workflow-based site (booking, e-commerce, social, etc.)
4. Generate contextual personas:
   - If a custom persona is specified → create a persona matching that description
   - If no custom persona → use Claude to infer the 3 most plausible real-world user types
   - Fall back to category-based personas if AI is unavailable
5. Create realistic test cases (including full workflows when appropriate)
6. Run adaptive, iterative tests with Nova Act
7. NEW: Apply safety stops before material-impact actions (payment, posting, account creation)
8. Generate a comprehensive HTML report with trace links
9. Provide viewing instructions

### Workflow Testing

NEW in this version: The skill now tests complete user journeys, not just information-finding!

### Supported Workflows

E-Commerce:

- Product search → Add to cart → Checkout → STOP before payment

Flight/Hotel Booking:

- Search → Select → Fill details → STOP before booking

Social Media:

- Create post → Add content → STOP before publishing

Account Signup:

- Fill registration → STOP before final submission

Form Submission:

- Fill form → STOP before submit

### Safety Guarantees

The skill will NEVER:

- Complete actual purchases
- Create real accounts
- Post publicly
- Send emails/messages
- Subscribe to newsletters
- Take any action with monetary, legal, or reputational impact

The skill will ALWAYS:

- Test up to (but not including) the final action
- Verify the final button exists and is accessible
- Document the safety stop in observations

### 🧠 Agent Analysis (CRITICAL)

You (the AI agent) must analyze test results! The script collects raw responses but does NOT interpret them.

### Why Agent Analysis?

The script returns raw Nova Act responses like:

- "No" - Is there a pricing link?
- "I don't see any documentation" - Are there docs?
- "Amazon Nova Act" - What is the headline?

You must determine whether each response means the goal was achieved:

| Response | Goal Achieved? |
| --- | --- |
| "No" | ❌ NOT achieved |
| "I don't see..." | ❌ NOT achieved |
| "Not found" | ❌ NOT achieved |
| "Yes, I found..." | ✅ Achieved |
| "Amazon Nova Act" (content) | ✅ Achieved |
| "The pricing is $29/mo" | ✅ Achieved |

### Result Data Structure

After the test script runs, read the JSON results. Each step contains:

```json
{
    "step_name": "check_nav_for_pricing",
    "prompt": "Is there a pricing link in the navigation?",
    "expected_outcome": "Find pricing in navigation",
    "raw_response": "No",
    "api_success": true,
    "needs_agent_analysis": true,
    "attempts": [
        {
            "prompt": "Is there a pricing link in the navigation?",
            "response": "No",
            "approach": "original"
        }
    ]
}
```

Key fields you analyze:

- `raw_response`: the actual Nova Act response; YOU determine what it means
- `api_success`: did the API call work? (the script handles this)
- `needs_agent_analysis`: always true; your cue to interpret
- `attempts`: all attempts made (the script tries up to 3 alternative approaches)

### How to Analyze

For each step, determine:

- `goal_achieved`: did the response indicate success or failure?
- `friction_level`: how hard was it? (`attempts.length > 1` = friction)
- `observations`: UX insights from the response

Analysis example:

```text
Step 1: "Is there a pricing link?"
  → Response: "No" (1 attempt)
  → Goal achieved: NO (explicit negative)
  → Friction: HIGH (not discoverable)

Step 2: "What is the headline?"
  → Response: "Amazon Nova Act" (1 attempt)
  → Goal achieved: YES (actual content)
  → Friction: LOW (immediately visible)

Step 3: "Find documentation"
  → Response: "I found a docs link in the footer" (3 attempts)
  → Goal achieved: YES (found eventually)
  → Friction: MEDIUM (required multiple approaches)
```
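The friction labels in these examples follow a simple pattern: failure means high friction, success after retries means medium, first-try success means low. A hedged sketch of that heuristic (the thresholds are illustrative, not part of the skill's scripts):

```python
# Map interpretation outcome + attempt count to a friction label,
# mirroring the three worked examples above.
def friction_level(goal_achieved: bool, attempts: int) -> str:
    if not goal_achieved:
        return "HIGH"    # goal never reached: not discoverable
    if attempts > 1:
        return "MEDIUM"  # reached, but needed alternative approaches
    return "LOW"         # reached on the first attempt

print(friction_level(False, 1))  # HIGH
print(friction_level(True, 3))   # MEDIUM
print(friction_level(True, 1))   # LOW
```

You can still override the label when the response itself suggests friction (e.g. success via an obscure footer link).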

### Helper Functions (For Script Integration)

`response_interpreter.py` provides helpers if you want structured prompts:

```python
from scripts.response_interpreter import (
    format_for_agent_analysis,
    create_agent_prompt_for_interpretation,
    create_agent_prompt_for_alternative
)

# Format all results for analysis
formatted = format_for_agent_analysis(results)

# Get interpretation prompt for one step
prompt = create_agent_prompt_for_interpretation(step_result)

# Get retry prompt when goal not achieved
retry_prompt = create_agent_prompt_for_alternative(
    original_prompt="Is there a pricing link?",
    failed_response="No",
    attempt_number=2
)
```

### Complete Analysis Workflow (MANDATORY)

The script does NOT generate the final report automatically. You (the agent) must:

1. Run the test script → outputs `test_results_adaptive.json` with raw data
2. Read the JSON into your context
3. Interpret each step → set `goal_achieved: true/false` based on `raw_response`
4. Set overall success → set `overall_success: true/false` on each test
5. Save the updated JSON
6. Call the report generator with the interpreted results
Step-by-step code for the agent to execute:

```python
import json
import os
import sys

# Add skill scripts to path
sys.path.insert(0, os.path.expanduser("~/.openclaw/skills/nova-act-usability/scripts"))
from enhanced_report_generator import generate_enhanced_report

# 1. Read raw results
with open('test_results_adaptive.json', 'r') as f:
    results = json.load(f)

# 2. YOU (the agent) interpret each step
for test in results:
    goals_achieved = 0
    for step in test.get('steps', []):
        raw = step.get('raw_response', '')

        # AGENT INTERPRETS: Does this response indicate the goal was achieved?
        # You decide based on the response content and expected outcome.
        # Example interpretations:
        #   "No" → goal_achieved = False
        #   "Leaderboard, News, Schedule, Players" → goal_achieved = True (content found)
        #   "Yes" → goal_achieved = True
        #   "I don't see any..." → goal_achieved = False

        step['goal_achieved'] = ...  # placeholder: YOU set True/False per your interpretation
        if step['goal_achieved']:
            goals_achieved += 1

    # 3. Set overall success (e.g., >= 50% goals achieved)
    total = len(test.get('steps', []))
    test['goals_achieved'] = goals_achieved
    test['overall_success'] = (goals_achieved / total >= 0.5) if total > 0 else False

# 4. Save interpreted results
with open('test_results_adaptive.json', 'w') as f:
    json.dump(results, f, indent=2)

# 5. Generate report with interpreted data
page_analysis = {
    'title': '...',  # From your earlier analysis
    'purpose': '...',
    'navigation': [...]
}
all_traces = []
for r in results:
    all_traces.extend(r.get('trace_files', []))

report_path = generate_enhanced_report(page_analysis, results, all_traces)
print(f"Report: {report_path}")
```

Why the agent must interpret:

- No hardcoded regex or pattern matching
- You understand context (what "Yes" means for this specific question)
- You can reason about partial success and edge cases
- No duplicate Claude API calls; you're already running!

### ⚠️ Critical: Keep Nova Act Prompts Simple

Nova Act is a browser automation tool, NOT a reasoning engine.

The Claude agent (you) does all reasoning about:

- What to test based on the persona
- Whether results are good or bad
- What the UX implications are

Nova Act just:

- Clicks, types, scrolls
- Reports what it sees

### ❌ WRONG: Asking Nova Act to reason

```python
# DON'T ask Nova Act to think about personas
nova.act("As a beginner user, can you easily find the documentation?")
nova.act("Would a business professional find the pricing clear?")
nova.act("Is this task accomplishable for someone with low technical skills?")
```

### ✅ RIGHT: Simple, direct browser commands

```python
# Simple browser actions
nova.act("Click the Documentation link in the navigation")
nova.act("Find and click a link containing 'Pricing'")
nova.act_get("What text is displayed in the main heading?")
nova.act_get("List the navigation menu items visible on this page")
```

### The Correct Workflow

1. The agent (you) decides what to test based on the persona: "Dorothy is 68 with low tech skills; she wants to know how to watch golf online"
2. The agent generates simple Nova Act prompts: "Click 'Watch & Listen' in the navigation"
3. Nova Act executes the browser task and returns raw results: "Clicked Watch & Listen, now on video page"
4. The agent interprets the results: "Dorothy would find this confusing because the options are unclear..."

### How This Works

You (the AI) are the orchestrator. This skill provides:

- Nova Act cookbook (`references/nova-act-cookbook.md`): best practices, workflow patterns, and safety guidelines (automatically loaded at test start)
- Adaptive test orchestrator (`run_adaptive_test.py`): main execution script with workflow detection
- Dynamic strategy generator (`scripts/dynamic_exploration.py`): generates workflow-appropriate test strategies
- Session management (`scripts/nova_session.py`): Nova Act wrapper
- Report generator (`enhanced_report_generator.py`): auto-generated HTML reports

Execution Flow:

### CRITICAL: Check Dependencies First

Before running ANY test, check if dependencies are installed:

```bash
# Check if nova-act is installed
python3 -c "import nova_act" 2>/dev/null
if [ $? -ne 0 ]; then
    echo "Dependencies not installed. Please run:"
    echo "  pip3 install nova-act pydantic playwright"
    echo "  playwright install chromium"
    exit 1
fi

# Check API key
if ! grep -q '"apiKey":.*[^"]' ~/.openclaw/config/nova-act.json; then
    echo "⚠️  Please add your Nova Act API key to ~/.openclaw/config/nova-act.json"
    exit 1
fi
```

Or use Python to check:

```python
import sys

# Check if nova-act is installed
try:
    import nova_act
    print("✅ Dependencies already installed")
except ImportError:
    print("📦 Dependencies not installed. Please run:")
    print("   pip3 install nova-act pydantic playwright")
    print("   playwright install chromium")
    sys.exit(1)
```

### Running Tests (After Dependencies Confirmed)

When a user asks for usability testing:

```bash
# Find the skill directory
SKILL_DIR=~/.openclaw/skills/nova-act-usability

# Run the adaptive test script
python3 "$SKILL_DIR/scripts/run_adaptive_test.py" "https://example.com"

# This will:
# - Create nova_act_logs/ in current directory
# - Create test_results_adaptive.json in current directory
# - Create nova_act_usability_report.html in current directory
# - Provide 60-second status updates during test
```

### ⏱️ Timeout Guidance

Recommended timeout: 30 minutes (1800 seconds).

Full usability tests with 3 personas × 3 goals = 9 tests can take 10-20+ minutes depending on:

- Website load times (slow sites like media-heavy sports sites take longer)
- Nova Act API response times (each act() call takes 5-60 seconds)
- Network conditions

Graceful shutdown: if the test is interrupted (timeout, SIGTERM, SIGINT), it will:

- Save all completed test results to `test_results_adaptive.json`
- Generate a partial report clearly marked as incomplete
- Show how many tests completed vs. planned

For shorter tests, use fewer personas or goals:

```python
# Quick test with 1 persona
personas = [{"name": "Test User", "archetype": "casual", ...}]
```
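One way to enforce the recommended ceiling is the `timeout` parameter of `subprocess.run`; a minimal sketch (the commented usage assumes the default skill install path, which may differ on your machine):

```python
import os
import subprocess
import sys

def run_with_timeout(cmd, seconds=1800):
    """Run a command; return False if it exceeded the timeout."""
    try:
        subprocess.run(cmd, timeout=seconds)
        return True
    except subprocess.TimeoutExpired:
        # run_adaptive_test.py saves partial results on interruption,
        # so test_results_adaptive.json is still worth reading here.
        return False

# Example usage (path assumes the default skill location):
# script = os.path.expanduser(
#     "~/.openclaw/skills/nova-act-usability/scripts/run_adaptive_test.py")
# run_with_timeout([sys.executable, script, "https://example.com"], 1800)
```

If the call returns False, report the partial results rather than treating the run as a total failure.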

### What You (the AI) Need to Do:

1. Check dependencies (run the check above)
2. If missing: tell the user to run `pip3 install nova-act pydantic playwright && playwright install chromium`
3. If present: extract the website URL from the user's request
4. Run the test with the URL as an argument
5. Monitor progress (status updates every 60 seconds)
6. Share the report viewing instructions with the user

### Quick Start

When user requests usability testing:

```python
import subprocess
import os

# Get skill directory
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
if not os.path.exists(skill_dir):
    # Try workspace location
    skill_dir = os.path.join(os.getcwd(), "nova-act-usability")

script_path = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")

# Run test
result = subprocess.run(
    ["python3", script_path, "https://example.com"],
    env={**os.environ, "NOVA_ACT_SKIP_PLAYWRIGHT_INSTALL": "1"},
    capture_output=True,
    text=True
)

print(result.stdout)
```

### Detailed Workflow (Internal)

The adaptive test script (run_adaptive_test.py) handles:

### Step 1: Page Analysis

- Loads the page with Nova Act
- Extracts title, navigation, purpose
- Identifies key elements (docs, demo, pricing)

### Step 2: Contextual Persona Generation

- Creates personas based on what the page offers
- Developer persona if API/code focused
- Business persona for evaluation
- Beginner persona if a demo is available

### Step 3: Realistic Test Case Generation

- Top 3 use cases per persona
- Based on actual page content
- Matched to persona goals

### Step 4: Iterative Test Execution

For each persona + task combination:

```python
# Illustrative orchestration loop body. Assumes results (a list), persona_name,
# task_description, and website_url are defined by the surrounding loop.
from scripts.nova_session import nova_session
from nova_act import BOOL_SCHEMA
import time

observations = []

with nova_session(website_url) as nova:
    start_time = time.time()

    # Initial navigation
    observations.append({
        "step": "navigate",
        "action": f"Loaded {website_url}",
        "success": True,
        "notes": "Initial page load"
    })

    # Execute task step-by-step (AI-orchestrated)
    # Break into small act() calls based on cookbook guidance

    # Example: "Find pricing information" task

    # Step 1: Look for pricing link
    nova.act("Look for a link or button for pricing, plans, or subscription")
    found = nova.act_get(
        "Is there a visible pricing or plans link?",
        schema=BOOL_SCHEMA
    )

    observations.append({
        "step": "find_pricing_link",
        "action": "Search for pricing navigation",
        "success": found.parsed_response,
        "notes": "Easy to find" if found.parsed_response else "Not immediately visible - UX friction"
    })

    if found.parsed_response:
        # Step 2: Navigate to pricing
        nova.act("Click on the pricing or plans link")

        # Step 3: Analyze pricing page
        is_clear = nova.act_get(
            "Is the pricing information clearly displayed with prices and features?",
            schema=BOOL_SCHEMA
        )

        observations.append({
            "step": "view_pricing",
            "action": "Accessed pricing page",
            "success": is_clear.parsed_response,
            "notes": "Clear pricing display" if is_clear.parsed_response else "Pricing unclear or confusing"
        })
    else:
        # Alternative path - try search
        nova.act("Look for a search function")
        # ... continue orchestrating

    duration = time.time() - start_time

    # Document overall task result
    task_success = all(obs["success"] for obs in observations if obs["success"] is not None)

    results.append({
        "persona": persona_name,
        "task": task_description,
        "success": task_success,
        "duration": duration,
        "observations": observations,
        "friction_points": [obs for obs in observations if not obs.get("success")]
    })
```

### Step 5: Pool and Analyze Results

After all tests:

Identify common friction points across personas
Note accessibility issues for low-tech personas
Flag efficiency problems (too many steps)
Document task failures (major UX issues)

### Step 6: Generate Report

```python
import json
from scripts.enhanced_report_generator import generate_enhanced_report

# Save results
with open("test_results_adaptive.json", "w") as f:
    json.dump(results, f, indent=2)

# Generate HTML report (page_analysis and results come from the earlier steps)
report_path = generate_enhanced_report(
    page_analysis=page_analysis,
    results=results
)

print(f"Report: {report_path}")
```

### Dynamic Task Decomposition

The AI should decide how to break down each task based on:

- Website complexity
- The persona's technical level
- Task nature (navigation vs. data entry vs. search)

Low-tech persona example:

```python
# More explicit, step-by-step
nova.act("Look for a button labeled 'Contact' or 'Contact Us'")
nova.act("Click on the Contact button")
result = nova.act_get("Is there a phone number or email address visible?")
```

High-tech persona example:

```python
# Test efficiency features
nova.act("Look for keyboard shortcuts or quick access features")
nova.act("Try to use search (Ctrl+K or Cmd+K)")
```

### Real-Time Observation

After EVERY act() call, analyze:

- Did it succeed?
- Was the UI element easy to find?
- Was the label clear?
- How many attempts were needed?
- Any error messages?

Document friction immediately in observations.

### Persona-Aware Prompting

Adapt act() prompts to persona characteristics:

Elderly/low-tech: Look for obvious, labeled buttons; read everything
Power users: Try keyboard shortcuts, advanced features
Mobile users: Test mobile responsiveness, touch targets
Screen reader users: Test keyboard navigation, ARIA labels

### references/nova-act-cookbook.md

MUST READ before starting any test. Contains best practices for:

- Effective act() prompting
- Task decomposition strategies
- Data extraction patterns
- Error handling
- Persona adaptation

### references/persona-examples.md

Template personas with detailed profiles:

- Tech-savvy millennial
- Elderly first-timer
- Busy professional
- Student/budget-conscious
- Accessibility-focused
- International/non-native speaker

### scripts/nova_session.py

Thin wrapper providing the Nova Act session primitive:

```python
with nova_session(url, headless=True, logs_dir="./logs") as nova:
    nova.act("action")
    result = nova.act_get("query", schema=Schema)
```

### scripts/enhanced_report_generator.py

Compiles observations into HTML usability report with trace file links.

### assets/report-template.html

Professional HTML template for usability reports.

### ⚠️ IMPORTANT: First-Time Setup Required

This skill requires dependencies that must be installed before use.

### For AI Agents: Dependency Check

ALWAYS check if dependencies are installed before running tests:

```python
# Quick dependency check
try:
    import nova_act
    print("✅ Dependencies installed")
except ImportError:
    print("📦 Dependencies not installed. Please run:")
    print("   pip3 install nova-act pydantic playwright")
    print("   playwright install chromium")
    print("")
    print("This will take 2-3 minutes to download browsers (~300MB)")
```

### For Users: One-Time Setup

Step 1: Install Python packages

```bash
pip3 install nova-act pydantic playwright
```

Step 2: Install Playwright browser

```bash
playwright install chromium
```

Step 3: Configure API key

1. Get your Nova Act API key from the AWS Console
2. Create the config file:

   ```bash
   mkdir -p ~/.openclaw/config
   echo '{"apiKey": "your-key-here"}' > ~/.openclaw/config/nova-act.json
   ```

3. Replace `your-key-here` with your actual Nova Act API key

### Example: AI-Orchestrated Test

User request: "Test example.com for elderly users"

AI orchestration:

1. Read `references/nova-act-cookbook.md`
2. Read `references/persona-examples.md`
3. Generate an elderly persona (Dorothy, 72, low tech proficiency)
4. Generate tasks:
   - "Find contact information"
   - "Read about services"
   - "Navigate to FAQ"
5. For each task, dynamically orchestrate Nova Act:
   - Start a session
   - Execute small act() steps
   - Observe and analyze each result
   - Take notes on friction (small text, unclear labels, etc.)
   - Continue or adapt based on observations
6. Pool observations
7. Generate an HTML report with findings and recommendations

The AI decides every step. The skill just provides tools and guidance.

### File Structure

```text
nova-act-usability/
├── SKILL.md                          # This file
├── README.md                         # User documentation
├── skill.json                        # Skill manifest
├── scripts/
│   ├── run_adaptive_test.py          # Main orchestrator (accepts URL arg)
│   ├── nova_session.py               # Session wrapper
│   ├── enhanced_report_generator.py  # HTML report generator
│   └── trace_finder.py               # Extract trace file paths
├── references/
│   ├── nova-act-cookbook.md          # Best practices
│   └── persona-examples.md           # Template personas
└── assets/
    └── report-template.html          # HTML template
```

### Output Files (Created in Working Directory)

When you run a test, these files are created in your current working directory:

```text
./
├── nova_act_logs/                    # Nova Act trace files
│   ├── act_<id>_output.html          # Session recordings
│   └── ...
├── test_results_adaptive.json        # Raw test results
└── nova_act_usability_report.html    # Final report
```

All paths are relative, so the skill works from any installation location.
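As a post-run sanity check, an agent can confirm the three outputs listed above exist before reporting success. A minimal sketch (file names taken from the listing; the helper itself is not part of the skill):

```python
import os

# Expected outputs in the working directory, per the listing above.
EXPECTED_OUTPUTS = [
    "nova_act_logs",
    "test_results_adaptive.json",
    "nova_act_usability_report.html",
]

def missing_outputs(workdir="."):
    """Return the expected output names not present in workdir."""
    return [name for name in EXPECTED_OUTPUTS
            if not os.path.exists(os.path.join(workdir, name))]
```

An empty return value means the run produced everything the report step needs; a non-empty one is worth surfacing to the user as an incomplete run.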
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: zouchaoqun
- Version: 1.0.5
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-05-10T23:56:15.507Z
- Expires at: 2026-05-17T23:56:15.507Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/nova-act-usability)
- [Send to Agent page](https://openagent3.xyz/skills/nova-act-usability/agent)
- [JSON manifest](https://openagent3.xyz/skills/nova-act-usability/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/nova-act-usability/agent.md)
- [Download page](https://openagent3.xyz/downloads/nova-act-usability)