โ† All skills
Tencent SkillHub · Developer Tools

Business Automation Strategy

Expertise in auditing, prioritizing, selecting platforms, and architecting workflows to identify, build, and scale effective business automations across any...

skill · openclawclawhub · Free
0 Downloads · 0 Stars · 0 Installs · 0 Score · High Signal


⬇ 0 downloads · ★ 0 stars · Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
README.md, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (46 sections)

Business Automation Strategy - AfrexAI

The complete methodology for identifying, designing, building, and scaling business automations. Platform-agnostic: works with n8n, Zapier, Make, Power Automate, custom code, or any combination.

Phase 1: Automation Audit - Find the Gold

Before building anything, map where time and money leak.

Quick ROI Triage

Ask these five questions about any process:
  1. How often does it happen? (frequency)
  2. How long does it take? (duration per occurrence)
  3. How many people touch it? (handoffs)
  4. How error-prone is it? (failure rate)
  5. How much does failure cost? (impact)

Process Inventory Template

```yaml
process_inventory:
  process_name: "[Name]"
  department: "[Sales/Marketing/Ops/Finance/HR/Engineering]"
  owner: "[Person responsible]"
  frequency: "[X per day/week/month]"
  duration_minutes: [time per occurrence]
  monthly_volume: [total occurrences]
  monthly_hours: [volume × duration ÷ 60]
  hourly_cost: [fully loaded employee cost]
  monthly_cost: "$[hours × hourly cost]"
  error_rate: "[X%]"
  error_cost_per_incident: "$[average]"
  handoffs: [number of people involved]
  current_tools: ["tool1", "tool2"]
  automation_potential: "[Full/Partial/Assist/None]"
  complexity: "[Simple/Medium/Complex/Enterprise]"
  dependencies: ["system1", "system2"]
  notes: "[Pain points, workarounds, tribal knowledge]"
```

Automation Potential Classification

| Level | Description | Human Role | Example |
|---|---|---|---|
| Full | End-to-end automated, no human needed | Monitor exceptions | Invoice processing, data sync |
| Partial | Automated with human approval gates | Review & approve | Contract generation, hiring workflow |
| Assist | Human does work, automation helps | Execute with AI assistance | Customer support, content creation |
| None | Requires human judgment/creativity | Full ownership | Strategy, relationship building |

ROI Calculation

Annual savings = (monthly_hours × 12 × hourly_cost) + (error_rate × volume × 12 × error_cost)
Build cost = development_hours × developer_rate + tool_costs
Payback period = build_cost ÷ (annual_savings ÷ 12) months
ROI = ((annual_savings - annual_tool_cost) ÷ build_cost) × 100%

Decision rules:
  • Payback < 3 months → Build immediately
  • Payback 3-6 months → Build this quarter
  • Payback 6-12 months → Evaluate against alternatives
  • Payback > 12 months → Reconsider (unless strategic)
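
The formulas and decision rules above can be sketched in Python; the function and argument names here are illustrative, not part of the skill package:

```python
def automation_roi(monthly_hours, hourly_cost, error_rate, monthly_volume,
                   error_cost, build_cost, annual_tool_cost=0.0):
    """Return (annual_savings, payback_months, roi_pct) per the formulas above."""
    annual_savings = (monthly_hours * 12 * hourly_cost) \
        + (error_rate * monthly_volume * 12 * error_cost)
    payback_months = build_cost / (annual_savings / 12)
    roi_pct = (annual_savings - annual_tool_cost) / build_cost * 100
    return annual_savings, payback_months, roi_pct

def build_decision(payback_months):
    """Map a payback period onto the decision rules above."""
    if payback_months < 3:
        return "Build immediately"
    if payback_months <= 6:
        return "Build this quarter"
    if payback_months <= 12:
        return "Evaluate against alternatives"
    return "Reconsider (unless strategic)"
```

For example, a process consuming 20 hours/month at $50/hour with a 5% error rate over 200 monthly occurrences ($25 per incident) and a $3,000 build cost saves $15,000/year and pays back in about 2.4 months.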

ICE-R Scoring (0-10 each)

| Dimension | Weight | Scoring Guide |
|---|---|---|
| Impact | 30% | 10 = saves >$50K/yr, 7 = >$20K/yr, 5 = >$5K/yr, 3 = >$1K/yr |
| Confidence | 20% | 10 = proven pattern, 7 = similar done before, 5 = feasible but new, 3 = uncertain |
| Ease | 25% | 10 = <1 day, 7 = <1 week, 5 = <1 month, 3 = <3 months, 1 = >3 months |
| Reliability | 25% | 10 = deterministic, 7 = 95%+ success, 5 = 80%+ success, 3 = needs frequent fixes |

Score = (Impact × 0.30) + (Confidence × 0.20) + (Ease × 0.25) + (Reliability × 0.25)
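
A minimal scorer for the weights above (illustrative helper, not part of the package):

```python
def ice_r_score(impact, confidence, ease, reliability):
    """Weighted ICE-R score; each dimension is rated 0-10 per the guide above."""
    for value in (impact, confidence, ease, reliability):
        if not 0 <= value <= 10:
            raise ValueError("each dimension is scored 0-10")
    return (impact * 0.30 + confidence * 0.20
            + ease * 0.25 + reliability * 0.25)
```

Scoring a candidate at Impact 10, Confidence 7, Ease 5, Reliability 7 gives 7.4, making it easy to rank a backlog by calling this over each entry.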

Quick Win Identification

Automate FIRST (highest ROI, lowest risk):
  • Data entry / copy-paste between systems
  • Notification routing (email → Slack → SMS based on rules)
  • Report generation and distribution
  • File organization and naming
  • Status updates across tools
  • Meeting scheduling and follow-ups
  • Invoice creation from templates
  • Lead capture → CRM entry
  • Onboarding checklists
  • Backup and archival

Automate LAST (complex, high risk):
  • Anything involving money transfers without approval
  • Customer-facing responses without review
  • Legal/compliance decisions
  • Hiring/firing workflows
  • Security-sensitive operations

Platform Decision Matrix

| Factor | No-Code (Zapier/Make) | Low-Code (n8n/Power Automate) | Custom Code | AI Agent |
|---|---|---|---|---|
| Best for | Simple integrations | Complex workflows | Unique logic | Judgment calls |
| Build speed | Hours | Days | Weeks | Days-weeks |
| Maintenance | Low | Medium | High | Medium |
| Flexibility | Limited | High | Unlimited | High |
| Cost at scale | Expensive | Moderate | Cheap | Varies |
| Error handling | Basic | Good | Full control | Variable |
| Team skill needed | Business user | Technical BA | Developer | AI engineer |
| Vendor lock-in | High | Medium | None | Low-medium |

Selection Decision Tree

```
Is the process deterministic (same input → same output)?
├── YES: Does it involve >3 systems?
│   ├── YES: Does it need complex branching logic?
│   │   ├── YES → Low-code (n8n/Power Automate)
│   │   └── NO → No-code (Zapier/Make) if budget allows, else n8n
│   └── NO: Is it performance-critical?
│       ├── YES → Custom code
│       └── NO → No-code (simplest wins)
└── NO: Does it need judgment/reasoning?
    ├── YES: Is the judgment pattern learnable?
    │   ├── YES → AI agent with human review
    │   └── NO → Human-assisted automation
    └── NO → Partial automation with human gates
```

Cost Comparison by Scale

| Monthly Tasks | Zapier | Make | n8n (self-hosted) | Custom Code |
|---|---|---|---|---|
| 1,000 | $30 | $10 | $5 (hosting) | $50+ (hosting) |
| 10,000 | $100 | $30 | $5 | $50+ |
| 100,000 | $500+ | $150 | $10 | $50+ |
| 1,000,000 | $2,000+ | $500+ | $20 | $100+ |

Rule: If you're spending >$200/mo on Zapier/Make, evaluate self-hosted n8n.

Workflow Blueprint Template

```yaml
workflow_blueprint:
  name: "[Descriptive name]"
  id: "WF-[DEPT]-[NUMBER]"
  version: "1.0.0"
  owner: "[Person]"
  priority: "[P0-P3]"
  trigger:
    type: "[webhook/schedule/event/manual/condition]"
    source: "[System or schedule]"
    conditions: "[When to fire]"
    dedup_strategy: "[How to prevent double-processing]"
  inputs:
    - name: "[field]"
      type: "[string/number/date/object]"
      required: true
      validation: "[rules]"
      source: "[where it comes from]"
  steps:
    - id: "step_1"
      action: "[verb: fetch/transform/validate/send/create/update/delete]"
      system: "[target system]"
      description: "[what this step does]"
      input: "[from trigger or previous step]"
      output: "[what it produces]"
      error_handling: "[retry/skip/alert/abort]"
      timeout_seconds: 30
    - id: "step_2_branch"
      type: "condition"
      condition: "[expression]"
      true_path: "step_3a"
      false_path: "step_3b"
  error_handling:
    retry_policy:
      max_attempts: 3
      backoff: "exponential"
      initial_delay_seconds: 5
    on_failure: "[alert/queue-for-review/fallback]"
    alert_channel: "[Slack/email/SMS]"
    dead_letter_queue: true
  monitoring:
    success_metric: "[what defines success]"
    expected_duration_seconds: [max]
    alert_on_duration_exceeded: true
    log_level: "[info/debug/error]"
  testing:
    test_data: "[how to generate test inputs]"
    expected_output: "[what success looks like]"
    edge_cases: ["empty input", "duplicate", "malformed data"]
```

7 Workflow Design Principles

  1. Idempotent by default - Running the same workflow twice with the same input should produce the same result, not duplicates
  2. Fail loudly - Silent failures are worse than crashes. Every error must notify someone
  3. Checkpoint progress - Long workflows should save state so they can resume, not restart
  4. Validate early - Check inputs at the start, not after 10 expensive API calls
  5. Separate concerns - One workflow, one job. Chain workflows, don't build monoliths
  6. Log everything - Timestamps, inputs, outputs, decisions. You WILL need to debug
  7. Human escape hatch - Every automated workflow needs a manual override path
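
The first principle is commonly implemented with a deduplication key derived from the event payload. A minimal sketch, assuming an in-memory dict stands in for a real persistent store:

```python
import hashlib
import json

def dedup_key(event: dict) -> str:
    """Stable key for an event: a re-delivered payload hashes to the same key."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

class IdempotentRunner:
    """Run a handler at most once per distinct payload; replays return the cached result."""
    def __init__(self):
        self.seen = {}  # dedup_key -> prior result

    def run(self, event, handler):
        key = dedup_key(event)
        if key in self.seen:
            return self.seen[key]  # duplicate delivery: no side effects
        result = handler(event)
        self.seen[key] = result
        return result
```

In production the `seen` store would live in a database or cache shared by all workers, but the contract is the same: same input, same result, no duplicates.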

Common Workflow Patterns

| Pattern | When to Use | Example |
|---|---|---|
| Sequential | Steps depend on each other | Lead → Enrich → Score → Route |
| Parallel fan-out | Independent steps | Send email + Update CRM + Log analytics |
| Conditional branch | Different paths by data | High value → Sales, Low value → Nurture |
| Loop/batch | Process collections | For each row in CSV, create record |
| Approval gate | Human judgment needed | Contract review before sending |
| Event-driven chain | Workflow triggers workflow | Order placed → Fulfillment → Shipping → Notification |
| Retry with fallback | Unreliable external APIs | Try API → Retry 3x → Use cached data → Alert |
| Scheduled sweep | Periodic cleanup/sync | Nightly: sync CRM → accounting |

Integration Quality Checklist

For every system integration:
  • API documentation reviewed
  • Authentication method confirmed (OAuth2/API key/JWT)
  • Rate limits documented (requests/min, requests/day)
  • Webhook support checked (push vs poll)
  • Error response format understood
  • Pagination handling planned
  • Data format confirmed (JSON/XML/CSV)
  • Field mapping documented
  • Test environment available
  • Sandbox/production separation configured

Data Mapping Template

```yaml
data_mapping:
  source_system: "[System A]"
  target_system: "[System B]"
  sync_direction: "[one-way/bidirectional]"
  sync_frequency: "[real-time/5min/hourly/daily]"
  conflict_resolution: "[source wins/target wins/newest wins/manual]"
  field_mappings:
    - source_field: "contact.email"
      target_field: "customer.email_address"
      transform: "lowercase"
      required: true
    - source_field: "contact.company"
      target_field: "customer.organization"
      transform: "trim"
      default: "Unknown"
    - source_field: "contact.created_at"
      target_field: "customer.signup_date"
      transform: "ISO8601 → YYYY-MM-DD"
```
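
One way to execute such a mapping is a small interpreter over (source, target, transform, default) tuples. The field names below just mirror the template above; the dotted-path lookup is an assumption about how nested records are addressed:

```python
FIELD_MAPPINGS = [
    # (source_field, target_field, transform, default)
    ("contact.email", "customer.email_address", str.lower, None),
    ("contact.company", "customer.organization", str.strip, "Unknown"),
]

def get_path(record, dotted):
    """Resolve a dotted path like 'contact.email' against nested dicts."""
    for part in dotted.split("."):
        record = record.get(part) if isinstance(record, dict) else None
    return record

def apply_mappings(record, mappings=FIELD_MAPPINGS):
    """Build the target record, applying transforms and falling back to defaults."""
    out = {}
    for src, dst, transform, default in mappings:
        value = get_path(record, src)
        out[dst] = transform(value) if value is not None else default
    return out
```

Keeping mappings as data rather than code makes the sync direction and conflict-resolution policy easy to review alongside the YAML template.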

Rate Limit Strategy

| Approach | When | Implementation |
|---|---|---|
| Queue + throttle | Predictable volume | Process queue at 80% of rate limit |
| Exponential backoff | Burst traffic | Wait 1s, 2s, 4s, 8s on 429 errors |
| Batch API calls | High volume CRUD | Group 50-100 records per call |
| Cache responses | Repeated lookups | Cache for TTL matching data freshness needs |
| Off-peak scheduling | Non-urgent syncs | Run heavy syncs at 2-4 AM |
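
The exponential-backoff row can be sketched as a retry wrapper. `RateLimitError` here is a stand-in for whatever your HTTP client raises on a 429, and the injectable `sleep` exists only so the delays are testable:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 from the client library."""

def call_with_backoff(fn, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry fn on RateLimitError, waiting 1s, 2s, 4s, ... between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            sleep(base_delay * 2 ** attempt)
```

Real implementations usually also honor a `Retry-After` header when the API provides one, and add jitter so many workers don't retry in lockstep.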

Error Classification

| Type | Example | Response | Priority |
|---|---|---|---|
| Transient | API timeout, 503 | Retry with backoff | Auto-handle |
| Rate limit | 429 Too Many Requests | Queue + throttle | Auto-handle |
| Data validation | Missing required field | Log + skip + alert | Review daily |
| Auth failure | Token expired | Refresh + retry, else alert | P1: fix within 1h |
| Logic error | Unexpected state | Halt + alert + queue | P0: fix immediately |
| External change | API schema changed | Halt + alert | P0: fix immediately |
| Capacity | Queue overflow | Scale + alert | P1: fix within 4h |

Dead Letter Queue Pattern

Every workflow should have a DLQ:
  1. Capture - Failed items go to DLQ with full context (input, error, timestamp, step)
  2. Alert - Notify on DLQ growth (>10 items or >1% failure rate)
  3. Review - Daily check of DLQ items
  4. Replay - Ability to reprocess DLQ items after fix
  5. Expire - Auto-archive items older than 30 days with summary
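
A minimal in-memory sketch of the capture, alert, and replay steps (a real DLQ would persist items to a queue or table, and `alert` would post to Slack/email rather than accept an arbitrary callable):

```python
import datetime

class DeadLetterQueue:
    """Capture failed items with context, alert on growth, replay after a fix."""
    def __init__(self, alert_threshold=10, alert=print):
        self.items = []
        self.alert_threshold = alert_threshold
        self.alert = alert

    def capture(self, step, payload, error):
        self.items.append({
            "step": step,
            "input": payload,
            "error": repr(error),
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        if len(self.items) > self.alert_threshold:
            self.alert(f"DLQ above threshold: {len(self.items)} items")

    def replay(self, handler):
        """Reprocess queued items; anything that fails again stays queued."""
        remaining = []
        for item in self.items:
            try:
                handler(item["input"])
            except Exception as exc:
                item["error"] = repr(exc)
                remaining.append(item)
        self.items = remaining
```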

Circuit Breaker Pattern

States: CLOSED (normal) → OPEN (failing) → HALF-OPEN (testing)

  • CLOSED: Process normally, track failures → if failure_count > threshold in window → OPEN
  • OPEN: Reject all requests, return cached/default → after cool_down_period → HALF-OPEN
  • HALF-OPEN: Allow 1 test request → if success → CLOSED; if failure → OPEN (reset cool_down)

Thresholds:
  • Simple integrations: 5 failures in 60 seconds
  • Critical paths: 3 failures in 30 seconds
  • Non-critical: 10 failures in 300 seconds
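
The state machine above can be sketched as a small class. The injectable `clock` is there purely so the cool-down transition is testable; defaults match the "simple integrations" threshold:

```python
import time

class CircuitBreaker:
    """Minimal CLOSED → OPEN → HALF-OPEN cycle over a sliding failure window."""
    def __init__(self, threshold=5, window=60.0, cool_down=30.0,
                 clock=time.monotonic):
        self.threshold, self.window, self.cool_down = threshold, window, cool_down
        self.clock = clock
        self.state = "CLOSED"
        self.failures = []     # timestamps of recent failures
        self.opened_at = None

    def allow(self):
        """Should the next request be attempted?"""
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.cool_down:
                self.state = "HALF-OPEN"  # permit one test request
                return True
            return False  # caller should serve cached/default instead
        return True

    def record_success(self):
        self.state = "CLOSED"
        self.failures.clear()

    def record_failure(self):
        now = self.clock()
        if self.state == "HALF-OPEN":
            self.state, self.opened_at = "OPEN", now  # failed test: reopen
            return
        self.failures = [t for t in self.failures if now - t < self.window] + [now]
        if len(self.failures) >= self.threshold:
            self.state, self.opened_at = "OPEN", now
```

Callers check `allow()` before each request and report the outcome, falling back to cached data while the breaker is OPEN.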

Automation Test Pyramid

| Level | What | How | When |
|---|---|---|---|
| Unit | Individual step logic | Mock inputs, verify output | Every change |
| Integration | System connections | Test with sandbox APIs | Weekly + after changes |
| End-to-end | Full workflow path | Run with test data | Before deploy + weekly |
| Chaos | Failure scenarios | Kill steps, corrupt data | Monthly |
| Load | Volume handling | 10x normal volume | Before scaling |

Test Scenario Checklist

For every workflow, test:
  • Happy path (normal input, expected output)
  • Empty/null input (missing required fields)
  • Duplicate input (same event twice)
  • Malformed input (wrong types, encoding issues)
  • Boundary values (max length, zero, negative)
  • API down (target system unavailable)
  • Slow response (timeout handling)
  • Partial failure (step 3 of 5 fails)
  • Concurrent execution (two runs at same time)
  • Clock skew / timezone issues
  • Large payload (oversized data)
  • Permission denied (auth issues)

Validation Before Go-Live

```yaml
go_live_checklist:
  functionality:
    - [ ] All test scenarios pass
    - [ ] Edge cases documented and handled
    - [ ] Error messages are actionable
  reliability:
    - [ ] Retry logic tested
    - [ ] Circuit breaker configured
    - [ ] Dead letter queue active
    - [ ] Idempotency verified (run twice, same result)
  monitoring:
    - [ ] Success/failure alerts configured
    - [ ] Duration alerts set
    - [ ] Log retention configured
    - [ ] Dashboard created
  documentation:
    - [ ] Workflow blueprint updated
    - [ ] Runbook written
    - [ ] Team trained on manual override
  rollback:
    - [ ] Previous version preserved
    - [ ] Rollback procedure tested
    - [ ] Data cleanup plan for partial runs
```

Automation Health Dashboard

```yaml
automation_dashboard:
  period: "weekly"
  summary:
    total_workflows: [count]
    total_executions: [count]
    success_rate: "[X%]"
    avg_duration: "[X seconds]"
    errors_this_period: [count]
    time_saved_hours: [calculated]
    cost_saved: "$[calculated]"
  by_workflow:
    - name: "[Workflow name]"
      executions: [count]
      success_rate: "[X%]"
      avg_duration: "[X seconds]"
      p95_duration: "[X seconds]"
      errors: [count]
      error_types: ["type1: count", "type2: count"]
      dlq_items: [count]
      status: "[healthy/degraded/failing]"
      alerts_fired: [count]
      manual_interventions: [count]
  top_issues:
    - "[Issue 1: description + fix status]"
    - "[Issue 2: description + fix status]"
  cost:
    platform_cost: "$[monthly]"
    api_calls_cost: "$[monthly]"
    compute_cost: "$[monthly]"
    total: "$[monthly]"
    cost_per_execution: "$[calculated]"
```

Alert Rules

| Metric | Warning | Critical | Action |
|---|---|---|---|
| Success rate | <95% | <90% | Investigate + fix |
| Duration | >2x average | >5x average | Check for bottleneck |
| DLQ size | >10 items | >50 items | Review + reprocess |
| Error spike | 5 errors/hour | 20 errors/hour | Pause + investigate |
| Queue depth | >100 pending | >1000 pending | Scale + investigate |
| Cost spike | >150% of average | >300% of average | Audit + optimize |

Weekly Review Questions

  1. Which workflows had the lowest success rate? Why?
  2. Are any workflows consistently slow? What's the bottleneck?
  3. How many manual interventions were needed? Can we eliminate them?
  4. What's in the DLQ? Any patterns?
  5. Are we approaching any rate limits?
  6. Total cost vs total time saved - still positive ROI?

Scaling Checklist

Before scaling any automation:
  • Load tested at 10x current volume
  • Rate limits mapped for all APIs
  • Queue-based architecture (not synchronous chains)
  • Database indexes optimized
  • Caching layer in place
  • Monitoring alerts adjusted for new thresholds
  • Cost projections at scale calculated
  • Fallback/degradation plan documented

Performance Optimization Priority

  1. Eliminate unnecessary API calls - Cache lookups, batch operations
  2. Parallelize independent steps - Don't wait when you don't have to
  3. Optimize data payloads - Only fetch/send fields you need
  4. Use webhooks over polling - Real-time + fewer API calls
  5. Batch processing - Group operations (50-100 per batch)
  6. Async where possible - Don't block on non-critical steps
  7. CDN/cache for static lookups - Country codes, categories, templates
  8. Database query optimization - Indexes, query plans, connection pooling
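
Batching (item 5) is small enough to keep as a shared helper; the 50-100 guidance from the rate-limit section maps directly to the `size` parameter:

```python
def batches(items, size=100):
    """Yield fixed-size chunks so grouped API calls stay within batch limits."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

For example, 250 records at `size=100` become three calls of 100, 100, and 50 records instead of 250 individual requests.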

When to Migrate Platforms

| Signal | From | To |
|---|---|---|
| Spending >$500/mo on Zapier/Make | No-code | Self-hosted n8n |
| Need custom logic in >50% of workflows | No-code | Low-code or code |
| >100K executions/day | Any hosted | Self-hosted or custom |
| Complex branching breaking visual tools | Low-code | Custom code |
| Multiple teams building automations | Single tool | Platform + governance |
| AI judgment needed in workflows | Traditional | AI agent integration |

Automation Registry

Every automation must be registered:

```yaml
automation_registry_entry:
  id: "WF-[DEPT]-[NUMBER]"
  name: "[Descriptive name]"
  description: "[What it does in one sentence]"
  owner: "[Person]"
  team: "[Department]"
  platform: "[n8n/Zapier/Make/custom]"
  status: "[active/paused/deprecated/testing]"
  created: "[date]"
  last_modified: "[date]"
  last_reviewed: "[date]"
  review_frequency: "[monthly/quarterly]"
  business_impact:
    time_saved_monthly_hours: [X]
    cost_saved_monthly: "$[X]"
    error_reduction: "[X%]"
  technical:
    trigger: "[type]"
    systems_connected: ["system1", "system2"]
    avg_daily_executions: [X]
    success_rate: "[X%]"
  dependencies:
    upstream: ["WF-XXX"]
    downstream: ["WF-YYY"]
  documentation:
    blueprint: "[link]"
    runbook: "[link]"
    test_plan: "[link]"
```

Naming Conventions

Pattern: [DEPT]-[ACTION]-[OBJECT]-[QUALIFIER]

Examples:
  • SALES-sync-leads-from-typeform
  • FINANCE-generate-invoice-monthly
  • HR-onboard-employee-new-hire
  • MARKETING-post-content-social-scheduled
  • OPS-backup-database-nightly
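
The convention can be enforced with a regex, for example in a CI check over the registry. This sketch assumes one uppercase department segment followed by at least three lowercase segments, which matches the examples above:

```python
import re

# [DEPT]-[ACTION]-[OBJECT]-[QUALIFIER], allowing extra qualifier segments
NAME_RE = re.compile(r"^[A-Z]+(?:-[a-z0-9]+){3,}$")

def valid_name(name: str) -> bool:
    """True if a workflow name follows the naming convention."""
    return bool(NAME_RE.match(name))
```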

Change Management for Automations

| Change Type | Approval | Testing | Rollback Plan |
|---|---|---|---|
| Config change (threshold, timing) | Owner | Quick smoke test | Revert config |
| Logic change (new branch, new step) | Owner + reviewer | Full test suite | Previous version |
| Integration change (new API, new system) | Owner + tech lead | Integration + E2E | Disconnect + manual |
| New workflow | Owner + stakeholder | Full test + pilot | Disable workflow |
| Deprecation | Owner + affected teams | Verify replacements | Re-enable |

Quarterly Automation Review

  1. Inventory check - Are all automations in the registry? Any rogue workflows?
  2. ROI validation - Is each automation still delivering value?
  3. Health review - Success rates, error trends, DLQ patterns
  4. Cost audit - Platform costs trending up? Optimization opportunities?
  5. Security review - API keys rotated? Permissions still appropriate?
  6. Deprecation candidates - Any automations that should be retired?
  7. Opportunity scan - New processes to automate? Existing ones to improve?

When to Add AI to Automations

| Scenario | AI Type | Example |
|---|---|---|
| Classify unstructured text | LLM | Categorize support tickets |
| Extract data from documents | LLM + OCR | Parse invoices, contracts |
| Generate content from templates | LLM | Personalized emails, reports |
| Make judgment calls | LLM + rules | Lead scoring, risk assessment |
| Summarize information | LLM | Meeting notes, research briefs |
| Route based on intent | LLM | Customer request → right team |

AI Integration Best Practices

  1. Always validate AI output - LLMs hallucinate. Add validation checks
  2. Set confidence thresholds - Below threshold → human review queue
  3. Log AI decisions - Input, output, confidence, model version
  4. A/B test AI vs rules - Prove AI adds value before committing
  5. Cost-control AI calls - Cache similar inputs, batch where possible
  6. Fallback to rules - If AI is unavailable, have deterministic backup
  7. Review AI decisions weekly - Spot check for quality drift

AI Agent Integration Pattern

```yaml
ai_agent_step:
  type: "ai_judgment"
  model: "[model name]"
  input:
    context: "[relevant data from previous steps]"
    task: "[specific instruction - be precise]"
    output_format: "[JSON schema or structured format]"
    constraints: ["must not", "must always", "if unsure"]
  validation:
    confidence_threshold: 0.85
    required_fields: ["field1", "field2"]
    value_ranges:
      score: [0, 100]
      category: ["A", "B", "C"]
  on_low_confidence:
    action: "route_to_human"
    queue: "[review queue name]"
  on_failure:
    action: "fallback_to_rules"
    rules_engine: "[rule set name]"
  monitoring:
    log_all_decisions: true
    sample_rate_for_review: 0.10
    alert_on_confidence_drop: true
```
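
The validation block above translates to a small gate function that decides whether an AI result proceeds or is routed to human review. The thresholds, field names, and allowed categories mirror the template and are illustrative:

```python
def validate_ai_output(output: dict, confidence_threshold=0.85,
                       required_fields=("category", "score"),
                       allowed_categories=("A", "B", "C")):
    """Return (ok, reason); any failure routes the item to the review queue."""
    for field in required_fields:
        if field not in output:
            return False, f"missing field: {field}"
    if output.get("confidence", 0.0) < confidence_threshold:
        return False, "low confidence"
    if output["category"] not in allowed_categories:
        return False, "category out of range"
    if not 0 <= output["score"] <= 100:
        return False, "score out of range"
    return True, "ok"
```

The caller logs every (input, output, reason) tuple regardless of outcome, which gives the weekly review its sample.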

5 Levels of Automation Maturity

| Level | Name | Description | Indicators |
|---|---|---|---|
| 1 | Ad Hoc | Manual processes, maybe a few scripts | No registry, tribal knowledge |
| 2 | Reactive | Automate pain points as they arise | Some workflows, no standards |
| 3 | Systematic | Planned automation program | Registry, testing, monitoring |
| 4 | Optimized | Continuous improvement, governance | ROI tracking, quarterly reviews |
| 5 | Intelligent | AI-augmented, self-healing | Adaptive workflows, predictive |

Maturity Assessment (Score 1-5 per dimension)

```yaml
automation_maturity:
  dimensions:
    strategy: [1-5]        # Planned roadmap vs ad hoc
    architecture: [1-5]    # Patterns, standards, reuse
    reliability: [1-5]     # Error handling, monitoring, uptime
    governance: [1-5]      # Registry, change management, reviews
    testing: [1-5]         # Test coverage, validation, chaos
    documentation: [1-5]   # Blueprints, runbooks, training
    optimization: [1-5]    # Performance, cost, continuous improvement
    ai_integration: [1-5]  # AI-powered decisions, self-healing
  total: [sum ÷ 8]
  grade: "[A/B/C/D/F]"     # A: 4.5+ | B: 3.5-4.4 | C: 2.5-3.4 | D: 1.5-2.4 | F: <1.5
  top_gap: "[lowest scoring dimension]"
  next_action: "[specific improvement for top gap]"
```
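
The total, grade, and top-gap fields can be computed mechanically from the eight dimension scores (illustrative helper; grade cutoffs are taken from the comment in the template):

```python
def maturity_grade(scores: dict):
    """Return (total, grade, top_gap) from eight 1-5 dimension scores."""
    total = sum(scores.values()) / len(scores)
    for cutoff, grade in ((4.5, "A"), (3.5, "B"), (2.5, "C"), (1.5, "D")):
        if total >= cutoff:
            break
    else:
        grade = "F"
    top_gap = min(scores, key=scores.get)  # lowest-scoring dimension
    return round(total, 2), grade, top_gap
```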

100-Point Quality Rubric

| Dimension | Weight | 0-2 (Poor) | 3-5 (Basic) | 6-8 (Good) | 9-10 (Excellent) |
|---|---|---|---|---|---|
| Design | 15% | No blueprint, ad hoc | Basic flow documented | Full blueprint with error handling | Blueprint + edge cases + optimization |
| Reliability | 20% | No error handling | Basic retries | DLQ + circuit breaker + fallback | Self-healing + auto-scaling |
| Testing | 15% | No tests | Happy path only | Full test pyramid | Chaos testing + load testing |
| Monitoring | 15% | No visibility | Basic success/fail logs | Dashboard + alerts | Predictive monitoring |
| Documentation | 10% | None | README exists | Blueprint + runbook | Full docs + training materials |
| Security | 10% | Hardcoded credentials | Encrypted secrets | Least privilege + rotation | Zero-trust + audit trail |
| Performance | 10% | Works but slow | Acceptable speed | Optimized + cached | Auto-scaling + sub-second |
| Governance | 5% | No registry | Listed somewhere | Full registry + reviews | Change management + compliance |

Score: weighted sum → Grade: A (90+), B (80-89), C (70-79), D (60-69), F (<60)

10 Automation Killers

| # | Mistake | Fix |
|---|---|---|
| 1 | Automating a broken process | Fix the process FIRST, then automate |
| 2 | No error handling | Every step needs a failure path |
| 3 | Silent failures | If it fails and nobody knows, it's worse than manual |
| 4 | Not testing edge cases | Test empty, duplicate, malformed, concurrent |
| 5 | Hardcoded values | Use config/environment variables for everything |
| 6 | No monitoring | You can't fix what you can't see |
| 7 | Building monolith workflows | One workflow, one job. Chain them together |
| 8 | Ignoring rate limits | Design for API limits from day one |
| 9 | No documentation | Future-you will hate present-you |
| 10 | Over-automating | Not everything should be automated. Human judgment exists for a reason |

Small Team / Solo Founder

  • Start with Zapier/Make - speed over flexibility
  • Automate the 3 most time-consuming tasks first
  • Graduate to n8n when spending >$100/mo on no-code

Regulated Industry

  • Add approval gates at every decision point
  • Log all automated actions for an audit trail
  • Review automations quarterly with the compliance team
  • Document data flow for privacy impact assessments

Legacy Systems

  • Use middleware/iPaaS for legacy integration
  • Build adapters that normalize legacy data formats
  • Plan for eventual migration, not permanent workarounds

Multi-Team / Enterprise

  • Establish an automation Center of Excellence (CoE)
  • Standardize on 1-2 platforms max
  • Build a shared component library for common patterns
  • Form a governance board for cross-team automations

AI-Heavy Workflows

  • Always keep a human in the loop for high-stakes decisions
  • Monitor AI output quality continuously
  • Budget for AI API costs separately (they scale differently)
  • Version-pin AI models - don't auto-upgrade in production

Natural Language Commands

Use these to invoke specific phases:
  • "audit my processes for automation opportunities" → Phase 1
  • "prioritize automations by ROI" → Phase 2
  • "recommend automation platform for [process]" → Phase 3
  • "design workflow blueprint for [process]" → Phase 4
  • "plan integration between [system A] and [system B]" → Phase 5
  • "design error handling for [workflow]" → Phase 6
  • "create test plan for [automation]" → Phase 7
  • "set up monitoring for [workflow]" → Phase 8
  • "optimize [workflow] for scale" → Phase 9
  • "review automation governance" → Phase 10
  • "add AI to [workflow]" → Phase 11
  • "assess automation maturity" → Phase 12

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 Docs
  • SKILL.md Primary doc
  • README.md Docs