# Send Business Automation Strategy to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "afrexai-automation-strategy",
    "name": "Business Automation Strategy",
    "source": "tencent",
    "type": "skill",
    "category": "开发工具",
    "sourceUrl": "https://clawhub.ai/1kalin/afrexai-automation-strategy",
    "canonicalUrl": "https://clawhub.ai/1kalin/afrexai-automation-strategy",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/afrexai-automation-strategy",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-automation-strategy",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "README.md",
      "SKILL.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "afrexai-automation-strategy",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-01T04:31:43.185Z",
      "expiresAt": "2026-05-08T04:31:43.185Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-automation-strategy",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-automation-strategy",
        "contentDisposition": "attachment; filename=\"afrexai-automation-strategy-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "afrexai-automation-strategy"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/afrexai-automation-strategy"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/afrexai-automation-strategy",
    "downloadUrl": "https://openagent3.xyz/downloads/afrexai-automation-strategy",
    "agentUrl": "https://openagent3.xyz/skills/afrexai-automation-strategy/agent",
    "manifestUrl": "https://openagent3.xyz/skills/afrexai-automation-strategy/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/afrexai-automation-strategy/agent.md"
  }
}
```
## Documentation

### Business Automation Strategy — AfrexAI

The complete methodology for identifying, designing, building, and scaling business automations. Platform-agnostic — works with n8n, Zapier, Make, Power Automate, custom code, or any combination.

### Phase 1: Automation Audit — Find the Gold

Before building anything, map where time and money leak.

### Quick ROI Triage

Ask these 5 questions about any process:

1. How often does it happen? (frequency)
2. How long does it take? (duration per occurrence)
3. How many people touch it? (handoffs)
4. How error-prone is it? (failure rate)
5. How much does failure cost? (impact)

### Process Inventory Template

```yaml
process_inventory:
  process_name: "[Name]"
  department: "[Sales/Marketing/Ops/Finance/HR/Engineering]"
  owner: "[Person responsible]"
  frequency: "[X per day/week/month]"
  duration_minutes: [time per occurrence]
  monthly_volume: [total occurrences]
  monthly_hours: [volume × duration ÷ 60]
  hourly_cost: [fully loaded employee cost]
  monthly_cost: "$[hours × hourly cost]"
  error_rate: "[X%]"
  error_cost_per_incident: "$[average]"
  handoffs: [number of people involved]
  current_tools: ["tool1", "tool2"]
  automation_potential: "[Full/Partial/Assist/None]"
  complexity: "[Simple/Medium/Complex/Enterprise]"
  dependencies: ["system1", "system2"]
  notes: "[Pain points, workarounds, tribal knowledge]"
```

### Automation Potential Classification

| Level | Description | Human Role | Example |
|---|---|---|---|
| Full | End-to-end automated, no human needed | Monitor exceptions | Invoice processing, data sync |
| Partial | Automated with human approval gates | Review & approve | Contract generation, hiring workflow |
| Assist | Human does work, automation helps | Execute with AI assistance | Customer support, content creation |
| None | Requires human judgment/creativity | Full ownership | Strategy, relationship building |

### ROI Calculation

```text
Annual savings = (monthly_hours × 12 × hourly_cost) + (error_rate × volume × 12 × error_cost)
Build cost     = development_hours × developer_rate + tool_costs
Payback period = build_cost ÷ (annual_savings ÷ 12) months
ROI            = ((annual_savings - annual_tool_cost) ÷ build_cost) × 100%
```

Decision rules:

- Payback < 3 months → Build immediately
- Payback 3-6 months → Build this quarter
- Payback 6-12 months → Evaluate against alternatives
- Payback > 12 months → Reconsider (unless strategic)
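The formulas and decision rules above translate directly into code. A minimal Python sketch; the function name and example figures are illustrative, not from the package:

```python
def automation_roi(monthly_hours, hourly_cost, error_rate, monthly_volume,
                   error_cost, build_hours, dev_rate,
                   tool_costs=0.0, annual_tool_cost=0.0):
    """Apply the ROI formulas above and return the key figures."""
    annual_savings = (monthly_hours * 12 * hourly_cost
                      + error_rate * monthly_volume * 12 * error_cost)
    build_cost = build_hours * dev_rate + tool_costs
    payback_months = build_cost / (annual_savings / 12)
    roi_pct = (annual_savings - annual_tool_cost) / build_cost * 100
    return {"annual_savings": annual_savings, "build_cost": build_cost,
            "payback_months": payback_months, "roi_pct": roi_pct}

# Example: 20 h/month saved at $60/h, a 5% error rate on 100 runs/month at
# $40 per incident, built in 30 hours at $100/h.
figures = automation_roi(20, 60, 0.05, 100, 40, 30, 100)
# payback lands around 2.1 months, so the decision rules say build immediately
```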

### ICE-R Scoring (0-10 each)

| Dimension | Weight | Scoring Guide |
|---|---|---|
| Impact | 30% | 10 = saves >$50K/yr, 7 = saves >$20K/yr, 5 = saves >$5K/yr, 3 = saves >$1K/yr |
| Confidence | 20% | 10 = proven pattern, 7 = similar done before, 5 = feasible but new, 3 = uncertain |
| Ease | 25% | 10 = <1 day, 7 = <1 week, 5 = <1 month, 3 = <3 months, 1 = >3 months |
| Reliability | 25% | 10 = deterministic, 7 = 95%+ success, 5 = 80%+ success, 3 = needs frequent fixes |

Score = (Impact × 0.30) + (Confidence × 0.20) + (Ease × 0.25) + (Reliability × 0.25)
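A quick way to sanity-check the weights is to compute the score in code; a minimal sketch:

```python
def ice_r(impact, confidence, ease, reliability):
    """Weighted ICE-R score per the formula above; each dimension is 0-10."""
    return (impact * 0.30 + confidence * 0.20
            + ease * 0.25 + reliability * 0.25)

# A proven pattern saving >$20K/yr, built in under a week, 95%+ reliable:
score = ice_r(impact=7, confidence=10, ease=7, reliability=7)  # ≈ 7.6
```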

### Quick Win Identification

Automate FIRST (highest ROI, lowest risk):

- Data entry / copy-paste between systems
- Notification routing (email → Slack → SMS based on rules)
- Report generation and distribution
- File organization and naming
- Status updates across tools
- Meeting scheduling and follow-ups
- Invoice creation from templates
- Lead capture → CRM entry
- Onboarding checklists
- Backup and archival

Automate LAST (complex, high risk):

- Anything involving money transfers without approval
- Customer-facing responses without review
- Legal/compliance decisions
- Hiring/firing workflows
- Security-sensitive operations

### Platform Decision Matrix

| Factor | No-Code (Zapier/Make) | Low-Code (n8n/Power Automate) | Custom Code | AI Agent |
|---|---|---|---|---|
| Best for | Simple integrations | Complex workflows | Unique logic | Judgment calls |
| Build speed | Hours | Days | Weeks | Days-weeks |
| Maintenance | Low | Medium | High | Medium |
| Flexibility | Limited | High | Unlimited | High |
| Cost at scale | Expensive | Moderate | Cheap | Varies |
| Error handling | Basic | Good | Full control | Variable |
| Team skill needed | Business user | Technical BA | Developer | AI engineer |
| Vendor lock-in | High | Medium | None | Low-medium |

### Selection Decision Tree

```text
Is the process deterministic (same input → same output)?
├── YES: Does it involve >3 systems?
│   ├── YES: Does it need complex branching logic?
│   │   ├── YES → Low-code (n8n/Power Automate)
│   │   └── NO → No-code (Zapier/Make) if budget allows, else n8n
│   └── NO: Is it performance-critical?
│       ├── YES → Custom code
│       └── NO → No-code (simplest wins)
└── NO: Does it need judgment/reasoning?
    ├── YES: Is the judgment pattern learnable?
    │   ├── YES → AI agent with human review
    │   └── NO → Human-assisted automation
    └── NO → Partial automation with human gates
```

### Cost Comparison by Scale

| Monthly Tasks | Zapier | Make | n8n (self-hosted) | Custom Code |
|---|---|---|---|---|
| 1,000 | $30 | $10 | $5 (hosting) | $50+ (hosting) |
| 10,000 | $100 | $30 | $5 | $50+ |
| 100,000 | $500+ | $150 | $10 | $50+ |
| 1,000,000 | $2,000+ | $500+ | $20 | $100+ |

Rule: If you're spending >$200/mo on Zapier/Make, evaluate self-hosted n8n.

### Workflow Blueprint Template

```yaml
workflow_blueprint:
  name: "[Descriptive name]"
  id: "WF-[DEPT]-[NUMBER]"
  version: "1.0.0"
  owner: "[Person]"
  priority: "[P0-P3]"

  trigger:
    type: "[webhook/schedule/event/manual/condition]"
    source: "[System or schedule]"
    conditions: "[When to fire]"
    dedup_strategy: "[How to prevent double-processing]"

  inputs:
    - name: "[field]"
      type: "[string/number/date/object]"
      required: true
      validation: "[rules]"
      source: "[where it comes from]"

  steps:
    - id: "step_1"
      action: "[verb: fetch/transform/validate/send/create/update/delete]"
      system: "[target system]"
      description: "[what this step does]"
      input: "[from trigger or previous step]"
      output: "[what it produces]"
      error_handling: "[retry/skip/alert/abort]"
      timeout_seconds: 30

    - id: "step_2_branch"
      type: "condition"
      condition: "[expression]"
      true_path: "step_3a"
      false_path: "step_3b"

  error_handling:
    retry_policy:
      max_attempts: 3
      backoff: "exponential"
      initial_delay_seconds: 5
    on_failure: "[alert/queue-for-review/fallback]"
    alert_channel: "[Slack/email/SMS]"
    dead_letter_queue: true

  monitoring:
    success_metric: "[what defines success]"
    expected_duration_seconds: [max]
    alert_on_duration_exceeded: true
    log_level: "[info/debug/error]"

  testing:
    test_data: "[how to generate test inputs]"
    expected_output: "[what success looks like]"
    edge_cases: ["empty input", "duplicate", "malformed data"]
```

### 7 Workflow Design Principles

1. Idempotent by default — Running the same workflow twice with the same input should produce the same result, not duplicates
2. Fail loudly — Silent failures are worse than crashes. Every error must notify someone
3. Checkpoint progress — Long workflows should save state so they can resume, not restart
4. Validate early — Check inputs at the start, not after 10 expensive API calls
5. Separate concerns — One workflow, one job. Chain workflows, don't build monoliths
6. Log everything — Timestamps, inputs, outputs, decisions. You WILL need to debug
7. Human escape hatch — Every automated workflow needs a manual override path
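Idempotency is commonly implemented with a deduplication key derived from the input. A minimal sketch; the hashing scheme and in-memory store are illustrative (production would use a persistent store such as a database table or KV cache):

```python
import hashlib
import json

processed = set()  # stands in for a persistent dedup store

def dedup_key(event: dict) -> str:
    """Stable hash of the payload; identical input yields an identical key."""
    canonical = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def handle(event: dict) -> str:
    key = dedup_key(event)
    if key in processed:
        return "skipped"     # same event delivered twice: no duplicate side effects
    processed.add(key)
    # ... perform the actual side effects here ...
    return "processed"
```

Because the key is a hash of the sorted payload, replaying the same event (even with fields in a different order) hits the `skipped` path instead of creating duplicates.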

### Common Workflow Patterns

| Pattern | When to Use | Example |
|---|---|---|
| Sequential | Steps depend on each other | Lead → Enrich → Score → Route |
| Parallel fan-out | Independent steps | Send email + Update CRM + Log analytics |
| Conditional branch | Different paths by data | High value → Sales, Low value → Nurture |
| Loop/batch | Process collections | For each row in CSV, create record |
| Approval gate | Human judgment needed | Contract review before sending |
| Event-driven chain | Workflow triggers workflow | Order placed → Fulfillment → Shipping → Notification |
| Retry with fallback | Unreliable external APIs | Try API → Retry 3x → Use cached data → Alert |
| Scheduled sweep | Periodic cleanup/sync | Nightly: sync CRM → accounting |

### Integration Quality Checklist

For every system integration:

- [ ] API documentation reviewed
- [ ] Authentication method confirmed (OAuth2/API key/JWT)
- [ ] Rate limits documented (requests/min, requests/day)
- [ ] Webhook support checked (push vs poll)
- [ ] Error response format understood
- [ ] Pagination handling planned
- [ ] Data format confirmed (JSON/XML/CSV)
- [ ] Field mapping documented
- [ ] Test environment available
- [ ] Sandbox/production separation configured

### Data Mapping Template

```yaml
data_mapping:
  source_system: "[System A]"
  target_system: "[System B]"
  sync_direction: "[one-way/bidirectional]"
  sync_frequency: "[real-time/5min/hourly/daily]"
  conflict_resolution: "[source wins/target wins/newest wins/manual]"

  field_mappings:
    - source_field: "contact.email"
      target_field: "customer.email_address"
      transform: "lowercase"
      required: true
    - source_field: "contact.company"
      target_field: "customer.organization"
      transform: "trim"
      default: "Unknown"
    - source_field: "contact.created_at"
      target_field: "customer.signup_date"
      transform: "ISO8601 → YYYY-MM-DD"
```

### Rate Limit Strategy

| Approach | When | Implementation |
|---|---|---|
| Queue + throttle | Predictable volume | Process queue at 80% of rate limit |
| Exponential backoff | Burst traffic | Wait 1s, 2s, 4s, 8s on 429 errors |
| Batch API calls | High-volume CRUD | Group 50-100 records per call |
| Cache responses | Repeated lookups | Cache for TTL matching data freshness needs |
| Off-peak scheduling | Non-urgent syncs | Run heavy syncs at 2-4 AM |
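The exponential backoff approach (wait 1s, 2s, 4s, 8s on 429s) can be sketched as follows; `RateLimitError` is a stand-in for whatever exception your HTTP client raises on a 429:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 from the target API."""

def call_with_backoff(fn, max_attempts=4, initial_delay=1.0):
    """Retry fn on rate-limit errors, doubling the wait between attempts."""
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts:
                raise          # out of retries: surface the error
            time.sleep(delay)  # 1s, then 2s, then 4s, ...
            delay *= 2
```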

### Error Classification

| Type | Example | Response | Priority |
|---|---|---|---|
| Transient | API timeout, 503 | Retry with backoff | Auto-handle |
| Rate limit | 429 Too Many Requests | Queue + throttle | Auto-handle |
| Data validation | Missing required field | Log + skip + alert | Review daily |
| Auth failure | Token expired | Refresh + retry, else alert | P1 — fix within 1h |
| Logic error | Unexpected state | Halt + alert + queue | P0 — fix immediately |
| External change | API schema changed | Halt + alert | P0 — fix immediately |
| Capacity | Queue overflow | Scale + alert | P1 — fix within 4h |

### Dead Letter Queue Pattern

Every workflow should have a DLQ:

1. Capture — Failed items go to DLQ with full context (input, error, timestamp, step)
2. Alert — Notify on DLQ growth (>10 items or >1% failure rate)
3. Review — Daily check of DLQ items
4. Replay — Ability to reprocess DLQ items after fix
5. Expire — Auto-archive items older than 30 days with summary
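The capture and replay steps of the pattern might look like this in Python; the queue here is an in-memory list standing in for a persistent table:

```python
import time

dead_letter_queue = []  # stands in for a persistent queue or DB table

def run_step(item, step_name, fn):
    """Run one workflow step; on failure, capture full context to the DLQ."""
    try:
        return fn(item)
    except Exception as exc:
        dead_letter_queue.append({
            "input": item, "step": step_name,
            "error": str(exc), "timestamp": time.time(),
        })
        return None

def replay(fn):
    """Reprocess DLQ items after a fix; keep only those that still fail."""
    remaining = []
    for entry in dead_letter_queue:
        try:
            fn(entry["input"])
        except Exception:
            remaining.append(entry)
    dead_letter_queue[:] = remaining
```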

### Circuit Breaker Pattern

States: CLOSED (normal) → OPEN (failing) → HALF-OPEN (testing)

```text
CLOSED: Process normally, track failures
  → If failure_count > threshold in window → OPEN

OPEN: Reject all requests, return cached/default
  → After cool_down_period → HALF-OPEN

HALF-OPEN: Allow 1 test request
  → If success → CLOSED
  → If failure → OPEN (reset cool_down)
```

Thresholds:

- Simple integrations: 5 failures in 60 seconds
- Critical paths: 3 failures in 30 seconds
- Non-critical: 10 failures in 300 seconds
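The state machine above fits in a small class. A minimal sketch (thresholds, clock handling, and the caching of responses while OPEN are simplified):

```python
import time

class CircuitBreaker:
    """CLOSED → OPEN → HALF-OPEN state machine, as diagrammed above."""

    def __init__(self, threshold=5, window=60.0, cool_down=30.0):
        self.threshold, self.window, self.cool_down = threshold, window, cool_down
        self.state = "CLOSED"
        self.failures = []   # timestamps of recent failures
        self.opened_at = 0.0

    def allow(self) -> bool:
        """Should the next request be attempted at all?"""
        if self.state == "OPEN" and time.time() - self.opened_at >= self.cool_down:
            self.state = "HALF-OPEN"   # let one test request through
        return self.state != "OPEN"

    def record(self, success: bool):
        """Report the outcome of an attempted request."""
        now = time.time()
        if success:
            self.state, self.failures = "CLOSED", []
            return
        if self.state == "HALF-OPEN":
            self.state, self.opened_at = "OPEN", now   # test failed: reopen
            return
        self.failures = [t for t in self.failures if now - t < self.window]
        self.failures.append(now)
        if len(self.failures) >= self.threshold:
            self.state, self.opened_at = "OPEN", now
```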

### Automation Test Pyramid

| Level | What | How | When |
|---|---|---|---|
| Unit | Individual step logic | Mock inputs, verify output | Every change |
| Integration | System connections | Test with sandbox APIs | Weekly + after changes |
| End-to-end | Full workflow path | Run with test data | Before deploy + weekly |
| Chaos | Failure scenarios | Kill steps, corrupt data | Monthly |
| Load | Volume handling | 10x normal volume | Before scaling |

### Test Scenario Checklist

For every workflow, test:

- [ ] Happy path (normal input, expected output)
- [ ] Empty/null input (missing required fields)
- [ ] Duplicate input (same event twice)
- [ ] Malformed input (wrong types, encoding issues)
- [ ] Boundary values (max length, zero, negative)
- [ ] API down (target system unavailable)
- [ ] Slow response (timeout handling)
- [ ] Partial failure (step 3 of 5 fails)
- [ ] Concurrent execution (two runs at same time)
- [ ] Clock skew / timezone issues
- [ ] Large payload (oversized data)
- [ ] Permission denied (auth issues)

### Validation Before Go-Live

```yaml
go_live_checklist:
  functionality:
    - [ ] All test scenarios pass
    - [ ] Edge cases documented and handled
    - [ ] Error messages are actionable

  reliability:
    - [ ] Retry logic tested
    - [ ] Circuit breaker configured
    - [ ] Dead letter queue active
    - [ ] Idempotency verified (run twice, same result)

  monitoring:
    - [ ] Success/failure alerts configured
    - [ ] Duration alerts set
    - [ ] Log retention configured
    - [ ] Dashboard created

  documentation:
    - [ ] Workflow blueprint updated
    - [ ] Runbook written
    - [ ] Team trained on manual override

  rollback:
    - [ ] Previous version preserved
    - [ ] Rollback procedure tested
    - [ ] Data cleanup plan for partial runs
```

### Automation Health Dashboard

```yaml
automation_dashboard:
  period: "weekly"

  summary:
    total_workflows: [count]
    total_executions: [count]
    success_rate: "[X%]"
    avg_duration: "[X seconds]"
    errors_this_period: [count]
    time_saved_hours: [calculated]
    cost_saved: "$[calculated]"

  by_workflow:
    - name: "[Workflow name]"
      executions: [count]
      success_rate: "[X%]"
      avg_duration: "[X seconds]"
      p95_duration: "[X seconds]"
      errors: [count]
      error_types: ["type1: count", "type2: count"]
      dlq_items: [count]
      status: "[healthy/degraded/failing]"

  alerts_fired: [count]
  manual_interventions: [count]

  top_issues:
    - "[Issue 1: description + fix status]"
    - "[Issue 2: description + fix status]"

  cost:
    platform_cost: "$[monthly]"
    api_calls_cost: "$[monthly]"
    compute_cost: "$[monthly]"
    total: "$[monthly]"
    cost_per_execution: "$[calculated]"
```

### Alert Rules

| Metric | Warning | Critical | Action |
|---|---|---|---|
| Success rate | <95% | <90% | Investigate + fix |
| Duration | >2x average | >5x average | Check for bottleneck |
| DLQ size | >10 items | >50 items | Review + reprocess |
| Error spike | 5 errors/hour | 20 errors/hour | Pause + investigate |
| Queue depth | >100 pending | >1000 pending | Scale + investigate |
| Cost spike | >150% of average | >300% of average | Audit + optimize |

### Weekly Review Questions

1. Which workflows had the lowest success rate? Why?
2. Are any workflows consistently slow? What's the bottleneck?
3. How many manual interventions were needed? Can we eliminate them?
4. What's in the DLQ? Patterns?
5. Are we approaching any rate limits?
6. Total cost vs total time saved — still positive ROI?

### Scaling Checklist

Before scaling any automation:

- [ ] Load tested at 10x current volume
- [ ] Rate limits mapped for all APIs
- [ ] Queue-based architecture (not synchronous chains)
- [ ] Database indexes optimized
- [ ] Caching layer in place
- [ ] Monitoring alerts adjusted for new thresholds
- [ ] Cost projections at scale calculated
- [ ] Fallback/degradation plan documented

### Performance Optimization Priority

1. Eliminate unnecessary API calls — Cache lookups, batch operations
2. Parallelize independent steps — Don't wait when you don't have to
3. Optimize data payloads — Only fetch/send fields you need
4. Use webhooks over polling — Real-time + fewer API calls
5. Batch processing — Group operations (50-100 per batch)
6. Async where possible — Don't block on non-critical steps
7. CDN/cache for static lookups — Country codes, categories, templates
8. Database query optimization — Indexes, query plans, connection pooling

### When to Migrate Platforms

| Signal | From | To |
|---|---|---|
| Spending >$500/mo on Zapier/Make | No-code | Self-hosted n8n |
| Need custom logic in >50% of workflows | No-code | Low-code or code |
| >100K executions/day | Any hosted | Self-hosted or custom |
| Complex branching breaking visual tools | Low-code | Custom code |
| Multiple teams building automations | Single tool | Platform + governance |
| AI judgment needed in workflows | Traditional | AI agent integration |

### Automation Registry

Every automation must be registered:

```yaml
automation_registry_entry:
  id: "WF-[DEPT]-[NUMBER]"
  name: "[Descriptive name]"
  description: "[What it does in one sentence]"
  owner: "[Person]"
  team: "[Department]"
  platform: "[n8n/Zapier/Make/custom]"
  status: "[active/paused/deprecated/testing]"
  created: "[date]"
  last_modified: "[date]"
  last_reviewed: "[date]"
  review_frequency: "[monthly/quarterly]"

  business_impact:
    time_saved_monthly_hours: [X]
    cost_saved_monthly: "$[X]"
    error_reduction: "[X%]"

  technical:
    trigger: "[type]"
    systems_connected: ["system1", "system2"]
    avg_daily_executions: [X]
    success_rate: "[X%]"

  dependencies:
    upstream: ["WF-XXX"]
    downstream: ["WF-YYY"]

  documentation:
    blueprint: "[link]"
    runbook: "[link]"
    test_plan: "[link]"
```

### Naming Conventions

```text
Pattern: [DEPT]-[ACTION]-[OBJECT]-[QUALIFIER]

Examples:
  SALES-sync-leads-from-typeform
  FINANCE-generate-invoice-monthly
  HR-onboard-employee-new-hire
  MARKETING-post-content-social-scheduled
  OPS-backup-database-nightly
```

### Change Management for Automations

| Change Type | Approval | Testing | Rollback Plan |
|---|---|---|---|
| Config change (threshold, timing) | Owner | Quick smoke test | Revert config |
| Logic change (new branch, new step) | Owner + reviewer | Full test suite | Previous version |
| Integration change (new API, new system) | Owner + tech lead | Integration + E2E | Disconnect + manual |
| New workflow | Owner + stakeholder | Full test + pilot | Disable workflow |
| Deprecation | Owner + affected teams | Verify replacements | Re-enable |

### Quarterly Automation Review

1. Inventory check — Are all automations in the registry? Any rogue workflows?
2. ROI validation — Is each automation still delivering value?
3. Health review — Success rates, error trends, DLQ patterns
4. Cost audit — Platform costs trending up? Optimization opportunities?
5. Security review — API keys rotated? Permissions still appropriate?
6. Deprecation candidates — Any automations that should be retired?
7. Opportunity scan — New processes to automate? Existing ones to improve?

### When to Add AI to Automations

| Scenario | AI Type | Example |
|---|---|---|
| Classify unstructured text | LLM | Categorize support tickets |
| Extract data from documents | LLM + OCR | Parse invoices, contracts |
| Generate content from templates | LLM | Personalized emails, reports |
| Make judgment calls | LLM + rules | Lead scoring, risk assessment |
| Summarize information | LLM | Meeting notes, research briefs |
| Route based on intent | LLM | Customer request → right team |

### AI Integration Best Practices

1. Always validate AI output — LLMs hallucinate. Add validation checks
2. Set confidence thresholds — Below threshold → human review queue
3. Log AI decisions — Input, output, confidence, model version
4. A/B test AI vs rules — Prove AI adds value before committing
5. Cost-control AI calls — Cache similar inputs, batch where possible
6. Fallback to rules — If AI is unavailable, have deterministic backup
7. Review AI decisions weekly — Spot check for quality drift

### AI Agent Integration Pattern

```yaml
ai_agent_step:
  type: "ai_judgment"
  model: "[model name]"

  input:
    context: "[relevant data from previous steps]"
    task: "[specific instruction — be precise]"
    output_format: "[JSON schema or structured format]"
    constraints: ["must not", "must always", "if unsure"]

  validation:
    confidence_threshold: 0.85
    required_fields: ["field1", "field2"]
    value_ranges:
      score: [0, 100]
      category: ["A", "B", "C"]

  on_low_confidence:
    action: "route_to_human"
    queue: "[review queue name]"

  on_failure:
    action: "fallback_to_rules"
    rules_engine: "[rule set name]"

  monitoring:
    log_all_decisions: true
    sample_rate_for_review: 0.10
    alert_on_confidence_drop: true
```
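The validation and routing logic of the pattern above can be sketched as a single gate function. The field names and thresholds mirror the template; the function itself is illustrative, not from the package:

```python
def validate_ai_output(result: dict, threshold=0.85,
                       required=("score", "category"),
                       categories=("A", "B", "C")):
    """Validate an AI step's output, then gate on confidence."""
    for field in required:
        if field not in result:
            return "fallback_to_rules"        # malformed output: use rules engine
    if not (0 <= result["score"] <= 100) or result["category"] not in categories:
        return "fallback_to_rules"            # value out of declared range
    if result.get("confidence", 0.0) < threshold:
        return "route_to_human"               # low confidence: review queue
    return "accept"
```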

### 5 Levels of Automation Maturity

| Level | Name | Description | Indicators |
|---|---|---|---|
| 1 | Ad Hoc | Manual processes, maybe a few scripts | No registry, tribal knowledge |
| 2 | Reactive | Automate pain points as they arise | Some workflows, no standards |
| 3 | Systematic | Planned automation program | Registry, testing, monitoring |
| 4 | Optimized | Continuous improvement, governance | ROI tracking, quarterly reviews |
| 5 | Intelligent | AI-augmented, self-healing | Adaptive workflows, predictive |

### Maturity Assessment (Score 1-5 per dimension)

```yaml
automation_maturity:
  dimensions:
    strategy: [1-5]  # Planned roadmap vs ad hoc
    architecture: [1-5]  # Patterns, standards, reuse
    reliability: [1-5]  # Error handling, monitoring, uptime
    governance: [1-5]  # Registry, change management, reviews
    testing: [1-5]  # Test coverage, validation, chaos
    documentation: [1-5]  # Blueprints, runbooks, training
    optimization: [1-5]  # Performance, cost, continuous improvement
    ai_integration: [1-5]  # AI-powered decisions, self-healing

  total: [sum ÷ 8]
  grade: "[A/B/C/D/F]"
  # A: 4.5+ | B: 3.5-4.4 | C: 2.5-3.4 | D: 1.5-2.4 | F: <1.5

  top_gap: "[lowest scoring dimension]"
  next_action: "[specific improvement for top gap]"
```

### 100-Point Quality Rubric

| Dimension | Weight | 0-2 (Poor) | 3-5 (Basic) | 6-8 (Good) | 9-10 (Excellent) |
|---|---|---|---|---|---|
| Design | 15% | No blueprint, ad hoc | Basic flow documented | Full blueprint with error handling | Blueprint + edge cases + optimization |
| Reliability | 20% | No error handling | Basic retries | DLQ + circuit breaker + fallback | Self-healing + auto-scaling |
| Testing | 15% | No tests | Happy path only | Full test pyramid | Chaos testing + load testing |
| Monitoring | 15% | No visibility | Basic success/fail logs | Dashboard + alerts | Predictive monitoring |
| Documentation | 10% | None | README exists | Blueprint + runbook | Full docs + training materials |
| Security | 10% | Hardcoded credentials | Encrypted secrets | Least privilege + rotation | Zero-trust + audit trail |
| Performance | 10% | Works but slow | Acceptable speed | Optimized + cached | Auto-scaling + sub-second |
| Governance | 5% | No registry | Listed somewhere | Full registry + reviews | Change management + compliance |

Score: (weighted sum) → Grade: A (90+) B (80-89) C (70-79) D (60-69) F (<60)
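The weighted sum and grading cut-offs can be checked in code; a minimal sketch (the dimension keys are illustrative shorthand for the rubric rows above):

```python
WEIGHTS = {"design": 0.15, "reliability": 0.20, "testing": 0.15,
           "monitoring": 0.15, "documentation": 0.10, "security": 0.10,
           "performance": 0.10, "governance": 0.05}

def rubric_score(scores: dict):
    """Weight the 0-10 dimension scores, scale to 100, and grade."""
    total = round(sum(scores[d] * w for d, w in WEIGHTS.items()) * 10, 6)
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if total >= cutoff:
            return total, grade
    return total, "F"
```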

### 10 Automation Killers

| # | Mistake | Fix |
|---|---|---|
| 1 | Automating a broken process | Fix the process FIRST, then automate |
| 2 | No error handling | Every step needs a failure path |
| 3 | Silent failures | If it fails and nobody knows, it's worse than manual |
| 4 | Not testing edge cases | Test empty, duplicate, malformed, concurrent |
| 5 | Hardcoded values | Use config/environment variables for everything |
| 6 | No monitoring | You can't fix what you can't see |
| 7 | Building monolith workflows | One workflow, one job. Chain them together |
| 8 | Ignoring rate limits | Design for API limits from day one |
| 9 | No documentation | Future-you will hate present-you |
| 10 | Over-automating | Not everything should be automated. Human judgment exists for a reason |

### Small Team / Solo Founder

- Start with Zapier/Make — speed over flexibility
- Automate the 3 most time-consuming tasks first
- Graduate to n8n when spending >$100/mo on no-code

### Regulated Industry

- Add approval gates at every decision point
- Log all automated actions for audit trail
- Review automations quarterly with compliance team
- Document data flow for privacy impact assessments

### Legacy Systems

- Use middleware/iPaaS for legacy integration
- Build adapters that normalize legacy data formats
- Plan for eventual migration, not permanent workarounds

### Multi-Team / Enterprise

- Establish automation Center of Excellence (CoE)
- Standardize on 1-2 platforms max
- Shared component library for common patterns
- Governance board for cross-team automations

### AI-Heavy Workflows

- Always keep human-in-the-loop for high-stakes decisions
- Monitor AI output quality continuously
- Budget for AI API costs separately (they scale differently)
- Version-pin AI models — don't auto-upgrade in production

### Natural Language Commands

Use these to invoke specific phases:

- audit my processes for automation opportunities → Phase 1
- prioritize automations by ROI → Phase 2
- recommend automation platform for [process] → Phase 3
- design workflow blueprint for [process] → Phase 4
- plan integration between [system A] and [system B] → Phase 5
- design error handling for [workflow] → Phase 6
- create test plan for [automation] → Phase 7
- set up monitoring for [workflow] → Phase 8
- optimize [workflow] for scale → Phase 9
- review automation governance → Phase 10
- add AI to [workflow] → Phase 11
- assess automation maturity → Phase 12
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: 1kalin
- Version: 1.0.0
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-05-01T04:31:43.185Z
- Expires at: 2026-05-08T04:31:43.185Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/afrexai-automation-strategy)
- [Send to Agent page](https://openagent3.xyz/skills/afrexai-automation-strategy/agent)
- [JSON manifest](https://openagent3.xyz/skills/afrexai-automation-strategy/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/afrexai-automation-strategy/agent.md)
- [Download page](https://openagent3.xyz/downloads/afrexai-automation-strategy)