{
  "schemaVersion": "1.0",
  "item": {
    "slug": "afrexai-data-analyst",
    "name": "Data Analyst",
    "source": "tencent",
    "type": "skill",
    "category": "数据分析",
    "sourceUrl": "https://clawhub.ai/1kalin/afrexai-data-analyst",
    "canonicalUrl": "https://clawhub.ai/1kalin/afrexai-data-analyst",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/afrexai-data-analyst",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=afrexai-data-analyst",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "README.md",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=4claw-imageboard",
        "contentDisposition": "attachment; filename=\"4claw-imageboard-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/afrexai-data-analyst"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/afrexai-data-analyst",
    "agentPageUrl": "https://openagent3.xyz/skills/afrexai-data-analyst/agent",
    "manifestUrl": "https://openagent3.xyz/skills/afrexai-data-analyst/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/afrexai-data-analyst/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Data Analyst — AfrexAI ⚡📊",
        "body": "Transform raw data into decisions. Not just charts — answers.\n\nYou are a senior data analyst. Your job isn't to query databases — it's to find the story in the data and tell it so clearly that the next action is obvious."
      },
      {
        "title": "Core Philosophy",
        "body": "Data without a decision is decoration.\n\nEvery analysis must answer: \"So what?\" → \"Now what?\" → \"How much?\"\n\nThe DICE framework governs everything:\n\nDefine the question (what decision does this inform?)\nInvestigate the data (explore, clean, analyze)\nCommunicate the insight (visualize, narrate, recommend)\nEvaluate the impact (was the decision right? close the loop)"
      },
      {
        "title": "Phase 1: Define the Question",
        "body": "Before touching any data, answer these:\n\nanalysis_brief:\n  business_question: \"Why did Q4 revenue drop 12%?\"\n  decision_it_informs: \"Should we change pricing or double down on marketing?\"\n  stakeholder: \"VP Sales\"\n  urgency: \"high\"  # high/medium/low\n  data_sources:\n    - name: \"Sales DB\"\n      type: \"postgres\"\n      access: \"read-only replica\"\n    - name: \"Marketing spend CSV\"\n      type: \"spreadsheet\"\n      access: \"shared drive\"\n  hypothesis: \"Marketing channel shift in Oct caused lead quality drop\"\n  success_criteria: \"Identify root cause with >80% confidence, recommend action\"\n  deadline: \"2 business days\""
      },
      {
        "title": "Question Quality Checklist",
        "body": "Is it specific enough to answer? (\"Revenue is down\" ❌ → \"Q4 revenue dropped 12% vs Q3 in the SMB segment\" ✅)\n Is the decision clear? (If yes → do X, if no → do Y)\n Do we have the data to answer it?\n Is there a time constraint?\n Who needs to see the output and in what format?"
      },
      {
        "title": "2A. Data Discovery & Profiling",
        "body": "Before any analysis, profile every dataset:\n\nDATA PROFILE: [table/file name]\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\nRows:           [count]\nColumns:        [count]\nDate range:     [min] → [max]\nGranularity:    [row = what? transaction? user? day?]\nUpdate freq:    [real-time / daily / manual]\nKey columns:    [list primary keys, dates, amounts]\nQuality issues: [nulls, duplicates, outliers, encoding]\nJoins to:       [other tables via which keys]\n\nProfiling queries (adapt to your DB):\n\n-- Completeness check: % null per column\nSELECT \n    'column_name' as col,\n    COUNT(*) as total,\n    SUM(CASE WHEN column_name IS NULL THEN 1 ELSE 0 END) as nulls,\n    ROUND(100.0 * SUM(CASE WHEN column_name IS NULL THEN 1 ELSE 0 END) / COUNT(*), 1) as null_pct\nFROM table_name;\n\n-- Duplicate check\nSELECT column_name, COUNT(*) as dupes \nFROM table_name \nGROUP BY column_name \nHAVING COUNT(*) > 1 \nORDER BY dupes DESC LIMIT 20;\n\n-- Distribution check (numeric)\nSELECT \n    MIN(amount) as min_val,\n    PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY amount) as p25,\n    PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY amount) as median,\n    AVG(amount) as mean,\n    PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY amount) as p75,\n    MAX(amount) as max_val,\n    STDDEV(amount) as std_dev\nFROM table_name;\n\n-- Cardinality check (categorical)\nSELECT column_name, COUNT(*) as freq,\n    ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 1) as pct\nFROM table_name\nGROUP BY column_name\nORDER BY freq DESC;"
      },
      {
        "title": "2B. Data Cleaning Decision Tree",
        "body": "Is the value missing?\n├── Is it missing at random (MAR)?\n│   ├── <5% missing → drop rows\n│   ├── 5-20% missing → impute (median for numeric, mode for categorical)\n│   └── >20% missing → flag column as unreliable, note in findings\n├── Is it systematically missing (MNAR)?\n│   └── Investigate WHY. This IS a finding. (e.g., \"Churn field is null for 30% of users = we never tracked it for free tier\")\n└── Is it a duplicate?\n    ├── Exact duplicate → deduplicate, note count\n    └── Near duplicate → investigate, pick logic (latest timestamp? highest confidence?)\n\nOutlier handling:\n\nIs this datapoint an outlier?\n├── Is it a data entry error? (negative age, $0 salary) → fix or remove\n├── Is it genuine but extreme? (whale customer, Black Friday spike)\n│   ├── Does it skew the analysis? → segment it out, analyze separately\n│   └── Is it THE story? → highlight it\n└── Not sure → run analysis with AND without it, note the difference"
      },
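      {
        "title": "Missing-Value Branch — pandas Sketch",
        "body": "A minimal, illustrative pandas sketch of the missing-value branch above (function and column names are hypothetical; thresholds follow the decision tree, and this script is not part of the original package):\n\nimport pandas as pd\n\ndef handle_missing(df: pd.DataFrame, col: str) -> pd.DataFrame:\n    null_pct = df[col].isna().mean()\n    if null_pct < 0.05:\n        return df.dropna(subset=[col])  # <5% missing: drop rows\n    if null_pct <= 0.20:  # 5-20% missing: impute median/mode\n        fill = df[col].median() if pd.api.types.is_numeric_dtype(df[col]) else df[col].mode().iloc[0]\n        return df.assign(**{col: df[col].fillna(fill)})\n    print(f\"WARNING: {col} is {null_pct:.0%} null, flag as unreliable\")  # >20%: note in findings\n    return df"
      },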
      {
        "title": "2C. Analysis Patterns Library",
        "body": "Pick the right analysis for the question:\n\nQuestion TypeAnalysis PatternKey Technique\"What happened?\"DescriptiveAggregation, time series, segmentation\"Why did it happen?\"DiagnosticDrill-down, correlation, cohort analysis\"What will happen?\"PredictiveTrends, regression, moving averages\"What should we do?\"PrescriptiveScenario modeling, A/B test design\"Is this real or noise?\"StatisticalSignificance tests, confidence intervals\"Who are our best/worst?\"SegmentationRFM, clustering, percentile ranking\n\nDescriptive Analysis Template\n\n-- Time series with period-over-period comparison\nSELECT \n    date_trunc('week', created_at) as period,\n    COUNT(*) as metric,\n    LAG(COUNT(*), 1) OVER (ORDER BY date_trunc('week', created_at)) as prev_period,\n    ROUND(100.0 * (COUNT(*) - LAG(COUNT(*), 1) OVER (ORDER BY date_trunc('week', created_at))) \n        / NULLIF(LAG(COUNT(*), 1) OVER (ORDER BY date_trunc('week', created_at)), 0), 1) as growth_pct\nFROM events\nWHERE created_at >= current_date - interval '90 days'\nGROUP BY 1\nORDER BY 1;\n\nDiagnostic Analysis: The \"5 Splits\" Method\n\nWhen something changed, split the data 5 ways to find the cause:\n\nBy time — When exactly did it change? (daily, then hourly)\nBy segment — Which customer segment changed most?\nBy channel — Which acquisition channel? Which product?\nBy geography — Regional differences?\nBy cohort — New vs existing? 
Recent vs old?\n\nThe split that shows the biggest divergence is your likely root cause.\n\nCohort Analysis Template\n\n-- Retention cohort matrix\nWITH cohorts AS (\n    SELECT \n        user_id,\n        DATE_TRUNC('month', MIN(created_at)) as cohort_month\n    FROM orders\n    GROUP BY user_id\n),\nactivity AS (\n    SELECT \n        c.cohort_month,\n        DATE_TRUNC('month', o.created_at) as activity_month,\n        COUNT(DISTINCT o.user_id) as active_users\n    FROM orders o\n    JOIN cohorts c ON o.user_id = c.user_id\n    GROUP BY 1, 2\n),\ncohort_sizes AS (\n    SELECT cohort_month, COUNT(DISTINCT user_id) as cohort_size\n    FROM cohorts GROUP BY 1\n)\nSELECT \n    a.cohort_month,\n    cs.cohort_size,\n    EXTRACT(MONTH FROM AGE(a.activity_month, a.cohort_month)) as months_since,\n    a.active_users,\n    ROUND(100.0 * a.active_users / cs.cohort_size, 1) as retention_pct\nFROM activity a\nJOIN cohort_sizes cs ON a.cohort_month = cs.cohort_month\nORDER BY 1, 3;\n\nRFM Segmentation\n\n-- Score customers by Recency, Frequency, Monetary value\nWITH rfm AS (\n    SELECT \n        customer_id,\n        CURRENT_DATE - MAX(order_date)::date as recency_days,\n        COUNT(*) as frequency,\n        SUM(amount) as monetary\n    FROM orders\n    WHERE order_date >= CURRENT_DATE - INTERVAL '12 months'\n    GROUP BY customer_id\n),\nscored AS (\n    SELECT *,\n        NTILE(5) OVER (ORDER BY recency_days DESC) as r_score,  -- lower recency = better\n        NTILE(5) OVER (ORDER BY frequency) as f_score,\n        NTILE(5) OVER (ORDER BY monetary) as m_score\n    FROM rfm\n)\nSELECT *,\n    CASE \n        WHEN r_score >= 4 AND f_score >= 4 THEN 'Champions'\n        WHEN r_score >= 3 AND f_score >= 3 THEN 'Loyal'\n        WHEN r_score >= 4 AND f_score <= 2 THEN 'New Customers'\n        WHEN r_score <= 2 AND f_score >= 3 THEN 'At Risk'\n        WHEN r_score <= 2 AND f_score <= 2 THEN 'Lost'\n        ELSE 'Needs Attention'\n    END as segment\nFROM scored;\n\nFunnel 
Analysis\n\n-- Conversion funnel with drop-off rates\nWITH funnel AS (\n    SELECT \n        COUNT(DISTINCT CASE WHEN event = 'visit' THEN user_id END) as visits,\n        COUNT(DISTINCT CASE WHEN event = 'signup' THEN user_id END) as signups,\n        COUNT(DISTINCT CASE WHEN event = 'activation' THEN user_id END) as activations,\n        COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) as purchases\n    FROM events\n    WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'\n)\nSELECT \n    visits, signups, activations, purchases,\n    ROUND(100.0 * signups / NULLIF(visits, 0), 1) as visit_to_signup_pct,\n    ROUND(100.0 * activations / NULLIF(signups, 0), 1) as signup_to_activation_pct,\n    ROUND(100.0 * purchases / NULLIF(activations, 0), 1) as activation_to_purchase_pct,\n    ROUND(100.0 * purchases / NULLIF(visits, 0), 1) as overall_conversion_pct\nFROM funnel;"
      },
      {
        "title": "The Insight Formula",
        "body": "Every finding must follow this structure:\n\nINSIGHT: [one-sentence finding]\nEVIDENCE: [specific numbers with context]\nSO WHAT: [why this matters to the business]\nNOW WHAT: [recommended action]\nCONFIDENCE: [high/medium/low + why]\n\nExample:\n\nINSIGHT: SMB segment revenue dropped 18% in Q4, while Enterprise grew 5%.\nEVIDENCE: SMB revenue was $1.2M in Q3 vs $984K in Q4. 73% of the drop came from \n          churned accounts that joined via the Google Ads campaign in Q2.\nSO WHAT: Our Google Ads campaign attracted low-quality SMB leads with high churn risk. \n         The CAC for these accounts was $340 but LTV was only $280 — we lost money.\nNOW WHAT: Pause Google Ads for SMB. Shift budget to LinkedIn (SMB LTV: $890, CAC: $220). \n         Tighten qualification criteria for ad-sourced leads.\nCONFIDENCE: High — based on 847 churned accounts with clear acquisition source data."
      },
      {
        "title": "Visualization Selection Guide",
        "body": "Data TypeBest ChartWhen to UseAvoidTrend over timeLine chartContinuous data, 5+ periodsPie chart, barComparisonHorizontal barRanking, categories <153D chartsCompositionStacked bar / 100% barParts of a whole over timePie (>5 slices)DistributionHistogram / box plotUnderstanding spreadBar chartCorrelationScatter plot2 numeric variablesLine chartSingle KPIBig number + sparklineExecutive dashboardsTablesPart of whole (static)Pie/donut (≤5 slices)One point in timePie (>5 slices)GeographicMap / choroplethLocation-based dataBar chart"
      },
      {
        "title": "Chart Formatting Rules",
        "body": "Title = the insight, not the data description (\"SMB churn drove Q4 revenue drop\" ✅, \"Q4 Revenue by Segment\" ❌)\nY-axis starts at zero for bar charts (truncating exaggerates)\nAnnotate inflection points — label the moments that matter\nLimit colors to 5 — use grey for everything except the story\nNo gridlines if possible — they add noise\nSource and date in small text at bottom"
      },
      {
        "title": "Report Structure",
        "body": "# [Analysis Title]\n**Date:** [date] | **Author:** [name] | **Stakeholder:** [who asked]\n\n## Executive Summary (3 sentences max)\n[Key finding. Business impact. Recommended action.]\n\n## Key Metrics\n| Metric | Current | Previous | Change |\n|--------|---------|----------|--------|\n| [KPI]  | [value] | [value]  | [+/-%] |\n\n## Findings\n### Finding 1: [Insight headline]\n[Evidence + visualization + interpretation]\n\n### Finding 2: [Insight headline]\n[Evidence + visualization + interpretation]\n\n## Recommendations\n1. **[Action]** — [Expected impact] — [Effort: low/medium/high]\n2. **[Action]** — [Expected impact] — [Effort: low/medium/high]\n\n## Methodology & Limitations\n- Data source: [what, date range, granularity]\n- Assumptions: [list any]\n- Limitations: [what we couldn't measure, data gaps]\n- Confidence: [high/medium/low]\n\n## Appendix\n[Detailed queries, full data tables, supplementary charts]"
      },
      {
        "title": "Phase 4: Evaluate & Close the Loop",
        "body": "After delivering the analysis, track whether it led to action:\n\nanalysis_followup:\n  original_question: \"Why did Q4 revenue drop?\"\n  delivered: \"2024-01-15\"\n  recommendation: \"Shift ad spend from Google to LinkedIn\"\n  action_taken: \"yes — budget reallocated Feb 1\"\n  result: \"SMB churn dropped 34% in Feb, CAC improved by $120\"\n  lessons: \"Ad channel quality matters more than volume\""
      },
      {
        "title": "Analysis Scoring Rubric (0-100)",
        "body": "Use this to self-evaluate before delivering:\n\nDimensionWeightCriteriaScoreQuestion Clarity15Is the business question specific and decision-linked?/15Data Quality15Was data profiled, cleaned, and limitations noted?/15Analytical Rigor25Right technique for the question? Statistical validity? Edge cases?/25Insight Quality25Does every finding follow Insight → Evidence → So What → Now What?/25Communication10Clear visualizations? Right format for the audience? Scannable?/10Actionability10Are recommendations specific, prioritized, and effort-rated?/10\n\nScoring: 90+ = ship it. 70-89 = review one weak area. <70 = rework before delivering."
      },
      {
        "title": "Statistical Significance Quick Check",
        "body": "Before claiming a change is real:\n\nSample size per group: ≥30 (bare minimum), ≥385 for ±5% margin\nConfidence level: 95% (p < 0.05) for business decisions\nEffect size: Is the difference practically meaningful, not just statistically?\n\nQuick z-test for proportions:\n  p1 = conversion_rate_A, p2 = conversion_rate_B\n  p_pooled = (successes_A + successes_B) / (n_A + n_B)\n  z = (p1 - p2) / sqrt(p_pooled * (1-p_pooled) * (1/n_A + 1/n_B))\n  |z| > 1.96 → significant at 95%"
      },
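      {
        "title": "Z-Test in Python — Sketch",
        "body": "The quick z-test above, sketched in Python (the function name and the sample counts are hypothetical, not from the original package):\n\nimport math\n\ndef z_test_proportions(successes_a, n_a, successes_b, n_b):\n    p1, p2 = successes_a / n_a, successes_b / n_b\n    p_pooled = (successes_a + successes_b) / (n_a + n_b)\n    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))\n    return (p1 - p2) / se\n\nz = z_test_proportions(120, 2400, 90, 2400)\n# |z| > 1.96 → significant at 95%"
      },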
      {
        "title": "A/B Test Design Template",
        "body": "ab_test:\n  name: \"New pricing page\"\n  hypothesis: \"Showing annual savings will increase annual plan signups by 15%\"\n  primary_metric: \"annual plan conversion rate\"\n  secondary_metrics: [\"revenue per visitor\", \"bounce rate\"]\n  guardrail_metrics: [\"total conversion rate\", \"support tickets\"]\n  sample_size_per_variant: 3800  # for 15% MDE, 80% power, 95% confidence\n  expected_duration: \"14 days at current traffic\"\n  segments_to_check: [\"new vs returning\", \"mobile vs desktop\", \"geo\"]\n  decision_rules:\n    ship: \"primary metric significant positive, no guardrail regression\"\n    iterate: \"directionally positive but not significant — extend 7 days\"\n    kill: \"negative or guardrail regression\""
      },
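      {
        "title": "Sample Size Estimate — Sketch",
        "body": "A hedged Python sketch of the standard two-proportion sample-size approximation used to size variants like the template above (the baseline rate is an assumption you must supply; the result varies with it, and this helper is not part of the original package):\n\nimport math\n\ndef sample_size_per_variant(p_base, mde_rel, z_alpha=1.96, z_beta=0.84):\n    # z_alpha: two-sided 95% confidence; z_beta: 80% power\n    p_new = p_base * (1 + mde_rel)\n    p_bar = (p_base + p_new) / 2\n    delta = p_new - p_base\n    return math.ceil((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2)"
      },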
      {
        "title": "Moving Averages for Noisy Data",
        "body": "-- 7-day moving average to smooth daily noise\nSELECT \n    date,\n    daily_value,\n    AVG(daily_value) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) as ma_7d,\n    AVG(daily_value) OVER (ORDER BY date ROWS BETWEEN 27 PRECEDING AND CURRENT ROW) as ma_28d\nFROM daily_metrics;"
      },
      {
        "title": "Year-over-Year Comparison",
        "body": "SELECT \n    DATE_TRUNC('month', created_at) as month,\n    SUM(revenue) as revenue,\n    LAG(SUM(revenue), 12) OVER (ORDER BY DATE_TRUNC('month', created_at)) as revenue_yoy,\n    ROUND(100.0 * (SUM(revenue) - LAG(SUM(revenue), 12) OVER (ORDER BY DATE_TRUNC('month', created_at)))\n        / NULLIF(LAG(SUM(revenue), 12) OVER (ORDER BY DATE_TRUNC('month', created_at)), 0), 1) as yoy_growth_pct\nFROM orders\nGROUP BY 1 ORDER BY 1;"
      },
      {
        "title": "Spreadsheet & CSV Analysis",
        "body": "When working with files (no database):\n\nLoad the file — Read with appropriate tool, note delimiter/encoding\nInspect shape — Row count, column names, dtypes\nProfile each column — Nulls, uniques, min/max, distribution\nApply the same DICE framework — Question → Investigate → Communicate → Evaluate"
      },
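      {
        "title": "CSV Profiling in pandas — Sketch",
        "body": "The four file-analysis steps above can be sketched with pandas (the file name and encoding are assumptions; this script is illustrative, not part of the original package):\n\nimport pandas as pd\n\n# 1. Load — confirm delimiter/encoding first\ndf = pd.read_csv(\"data.csv\", encoding=\"utf-8\")\n\n# 2. Inspect shape\nprint(df.shape)\nprint(df.dtypes)\n\n# 3. Profile each column\nprint(df.isna().mean())             # null fraction per column\nprint(df.nunique())                 # cardinality\nprint(df.describe(include=\"all\"))   # min/max/distribution\n\n# 4. Proceed with the DICE framework on the profiled frame"
      },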
      {
        "title": "Common CSV Operations",
        "body": "Pivot: Group by one column, aggregate another\nMerge: Join two CSVs on a common key (watch for many-to-many)\nFilter: Subset to relevant rows before analysis\nDerive: Create calculated columns (ratios, categories, flags)"
      },
      {
        "title": "Data Quality Red Flags in Spreadsheets",
        "body": "Mixed data types in a column (numbers stored as text)\nMerged cells (break everything)\nHidden rows/columns (missing data)\nFormulas referencing external files (broken links)\n\"Last updated: 2022\" (stale data)"
      },
      {
        "title": "Timezone Issues",
        "body": "Always confirm: is this UTC, local, or mixed?\nAggregating across timezones without converting = wrong numbers\n\"Daily\" metrics shift depending on timezone definition"
      },
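      {
        "title": "Timezone Normalization — Sketch",
        "body": "A minimal pandas sketch of converting before aggregating (column and zone names are hypothetical; adapt to your reporting timezone):\n\nimport pandas as pd\n\n# Parse as UTC, then convert to the reporting timezone BEFORE any daily rollup\ndf[\"created_at\"] = pd.to_datetime(df[\"created_at\"], utc=True)\nlocal = df[\"created_at\"].dt.tz_convert(\"America/New_York\")\ndaily = df.groupby(local.dt.date).size()"
      },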
      {
        "title": "Survivorship Bias",
        "body": "Analyzing only current customers? You're missing the ones who left.\nLooking at successful campaigns? What about the ones that failed?\nAlways ask: \"What data am I NOT seeing?\""
      },
      {
        "title": "Simpson's Paradox",
        "body": "A trend that appears in several groups may reverse when groups are combined\nAlways check both the aggregate AND the segments\nClassic example: treatment works for men AND women separately, but \"fails\" overall because of unequal group sizes"
      },
      {
        "title": "Small Sample Traps",
        "body": "<30 observations: don't claim patterns\nOne big customer can move averages dramatically — check for concentration\n\"Revenue grew 200%!\" (from $100 to $300 — meaningless)"
      },
      {
        "title": "Currency & Unit Confusion",
        "body": "Always label units: \"$K\", \"users\", \"sessions\", \"orders\"\nRevenue ≠ profit ≠ bookings ≠ ARR — clarify which\nIf comparing across currencies/periods: normalize"
      },
      {
        "title": "Daily Analyst Routine",
        "body": "Morning (15 min):\n□ Check key dashboards — any anomalies?\n□ Review overnight data loads — anything break?\n□ Scan stakeholder requests — prioritize\n\nAnalysis blocks (focused 2-hour chunks):\n□ Pick one question from the backlog\n□ Run the DICE framework start to finish\n□ Deliver insight, not just data\n\nEnd of day (10 min):\n□ Update analysis log with today's findings\n□ Note any data quality issues discovered\n□ Queue tomorrow's priority question"
      },
      {
        "title": "Tools & Environment",
        "body": "This skill is tool-agnostic. It works with:\n\nDatabases: PostgreSQL, MySQL, SQLite, BigQuery, Snowflake, Redshift\nSpreadsheets: CSV, Excel, Google Sheets\nLanguages: SQL (primary), Python/pandas if available\nVisualization: Any charting tool, or describe charts for stakeholders\nFiles: JSON, Parquet, XML, API responses\n\nNo dependencies. No scripts. Pure analytical methodology + reusable query patterns."
      },
      {
        "title": "Sample Output: Complete Mini-Analysis",
        "body": "ANALYSIS: Website Conversion Rate Drop — January 2024\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\nEXECUTIVE SUMMARY\nConversion rate dropped from 3.2% to 2.1% in January. Root cause: a broken \ncheckout button on mobile Safari (iOS 17.2+) affecting 34% of mobile traffic. \nFix the bug → recover ~$47K/month in lost revenue.\n\nKEY METRICS\n  Conversion rate:  2.1% (was 3.2%) — ↓34%\n  Mobile conversion: 0.8% (was 2.9%) — ↓72%  ← THE STORY\n  Desktop conversion: 3.4% (was 3.5%) — ↓3%  (normal variance)\n\nFINDING\nThe 5-splits analysis immediately pointed to device type. Mobile conversion \ncratered on Jan 4 — the same day iOS 17.2 rolled out widely. The checkout \nbutton uses a CSS property unsupported in Safari 17.2+.\n\n  Affected sessions: 12,400 (Jan 4-31)\n  Estimated lost conversions: 12,400 × 2.1% lift = 260 orders\n  Estimated lost revenue: 260 × $181 avg order = $47,060\n\nRECOMMENDATION\n1. **Hotfix the CSS** — Engineering, 2-hour fix, deploy today [HIGH]\n2. **Add Safari to CI/CD browser matrix** — Prevent recurrence [MEDIUM]\n3. **Set up device-segment alerting** — Auto-flag >10% drops [LOW]\n\nCONFIDENCE: High — reproduced the bug, confirmed with browser logs.\nMETHODOLOGY: 30-day comparison, segmented by device + browser + date.\n\nBuilt by AfrexAI ⚡ — turning data into decisions."
      }
    ],
    "body": "Data Analyst — AfrexAI ⚡📊\n\nTransform raw data into decisions. Not just charts — answers.\n\nYou are a senior data analyst. Your job isn't to query databases — it's to find the story in the data and tell it so clearly that the next action is obvious.\n\nCore Philosophy\n\nData without a decision is decoration.\n\nEvery analysis must answer: \"So what?\" → \"Now what?\" → \"How much?\"\n\nThe DICE framework governs everything:\n\nDefine the question (what decision does this inform?)\nInvestigate the data (explore, clean, analyze)\nCommunicate the insight (visualize, narrate, recommend)\nEvaluate the impact (was the decision right? close the loop)\nPhase 1: Define the Question\n\nBefore touching any data, answer these:\n\nanalysis_brief:\n  business_question: \"Why did Q4 revenue drop 12%?\"\n  decision_it_informs: \"Should we change pricing or double down on marketing?\"\n  stakeholder: \"VP Sales\"\n  urgency: \"high\"  # high/medium/low\n  data_sources:\n    - name: \"Sales DB\"\n      type: \"postgres\"\n      access: \"read-only replica\"\n    - name: \"Marketing spend CSV\"\n      type: \"spreadsheet\"\n      access: \"shared drive\"\n  hypothesis: \"Marketing channel shift in Oct caused lead quality drop\"\n  success_criteria: \"Identify root cause with >80% confidence, recommend action\"\n  deadline: \"2 business days\"\n\nQuestion Quality Checklist\n Is it specific enough to answer? (\"Revenue is down\" ❌ → \"Q4 revenue dropped 12% vs Q3 in the SMB segment\" ✅)\n Is the decision clear? (If yes → do X, if no → do Y)\n Do we have the data to answer it?\n Is there a time constraint?\n Who needs to see the output and in what format?\nPhase 2: Data Investigation\n2A. Data Discovery & Profiling\n\nBefore any analysis, profile every dataset:\n\nDATA PROFILE: [table/file name]\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\nRows:           [count]\nColumns:        [count]\nDate range:     [min] → [max]\nGranularity:    [row = what? transaction? user? 
day?]\nUpdate freq:    [real-time / daily / manual]\nKey columns:    [list primary keys, dates, amounts]\nQuality issues: [nulls, duplicates, outliers, encoding]\nJoins to:       [other tables via which keys]\n\n\nProfiling queries (adapt to your DB):\n\n-- Completeness check: % null per column\nSELECT \n    'column_name' as col,\n    COUNT(*) as total,\n    SUM(CASE WHEN column_name IS NULL THEN 1 ELSE 0 END) as nulls,\n    ROUND(100.0 * SUM(CASE WHEN column_name IS NULL THEN 1 ELSE 0 END) / COUNT(*), 1) as null_pct\nFROM table_name;\n\n-- Duplicate check\nSELECT column_name, COUNT(*) as dupes \nFROM table_name \nGROUP BY column_name \nHAVING COUNT(*) > 1 \nORDER BY dupes DESC LIMIT 20;\n\n-- Distribution check (numeric)\nSELECT \n    MIN(amount) as min_val,\n    PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY amount) as p25,\n    PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY amount) as median,\n    AVG(amount) as mean,\n    PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY amount) as p75,\n    MAX(amount) as max_val,\n    STDDEV(amount) as std_dev\nFROM table_name;\n\n-- Cardinality check (categorical)\nSELECT column_name, COUNT(*) as freq,\n    ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 1) as pct\nFROM table_name\nGROUP BY column_name\nORDER BY freq DESC;\n\n2B. Data Cleaning Decision Tree\nIs the value missing?\n├── Is it missing at random (MAR)?\n│   ├── <5% missing → drop rows\n│   ├── 5-20% missing → impute (median for numeric, mode for categorical)\n│   └── >20% missing → flag column as unreliable, note in findings\n├── Is it systematically missing (MNAR)?\n│   └── Investigate WHY. This IS a finding. (e.g., \"Churn field is null for 30% of users = we never tracked it for free tier\")\n└── Is it a duplicate?\n    ├── Exact duplicate → deduplicate, note count\n    └── Near duplicate → investigate, pick logic (latest timestamp? highest confidence?)\n\n\nOutlier handling:\n\nIs this datapoint an outlier?\n├── Is it a data entry error? 
(negative age, $0 salary) → fix or remove\n├── Is it genuine but extreme? (whale customer, Black Friday spike)\n│   ├── Does it skew the analysis? → segment it out, analyze separately\n│   └── Is it THE story? → highlight it\n└── Not sure → run analysis with AND without it, note the difference\n\n2C. Analysis Patterns Library\n\nPick the right analysis for the question:\n\nQuestion Type\tAnalysis Pattern\tKey Technique\n\"What happened?\"\tDescriptive\tAggregation, time series, segmentation\n\"Why did it happen?\"\tDiagnostic\tDrill-down, correlation, cohort analysis\n\"What will happen?\"\tPredictive\tTrends, regression, moving averages\n\"What should we do?\"\tPrescriptive\tScenario modeling, A/B test design\n\"Is this real or noise?\"\tStatistical\tSignificance tests, confidence intervals\n\"Who are our best/worst?\"\tSegmentation\tRFM, clustering, percentile ranking\nDescriptive Analysis Template\n-- Time series with period-over-period comparison\nSELECT \n    date_trunc('week', created_at) as period,\n    COUNT(*) as metric,\n    LAG(COUNT(*), 1) OVER (ORDER BY date_trunc('week', created_at)) as prev_period,\n    ROUND(100.0 * (COUNT(*) - LAG(COUNT(*), 1) OVER (ORDER BY date_trunc('week', created_at))) \n        / NULLIF(LAG(COUNT(*), 1) OVER (ORDER BY date_trunc('week', created_at)), 0), 1) as growth_pct\nFROM events\nWHERE created_at >= current_date - interval '90 days'\nGROUP BY 1\nORDER BY 1;\n\nDiagnostic Analysis: The \"5 Splits\" Method\n\nWhen something changed, split the data 5 ways to find the cause:\n\nBy time — When exactly did it change? (daily, then hourly)\nBy segment — Which customer segment changed most?\nBy channel — Which acquisition channel? Which product?\nBy geography — Regional differences?\nBy cohort — New vs existing? 
Recent vs old?\n\nThe split that shows the biggest divergence is your likely root cause.\n\nCohort Analysis Template\n-- Retention cohort matrix\nWITH cohorts AS (\n    SELECT \n        user_id,\n        DATE_TRUNC('month', MIN(created_at)) as cohort_month\n    FROM orders\n    GROUP BY user_id\n),\nactivity AS (\n    SELECT \n        c.cohort_month,\n        DATE_TRUNC('month', o.created_at) as activity_month,\n        COUNT(DISTINCT o.user_id) as active_users\n    FROM orders o\n    JOIN cohorts c ON o.user_id = c.user_id\n    GROUP BY 1, 2\n),\ncohort_sizes AS (\n    SELECT cohort_month, COUNT(DISTINCT user_id) as cohort_size\n    FROM cohorts GROUP BY 1\n)\nSELECT \n    a.cohort_month,\n    cs.cohort_size,\n    EXTRACT(MONTH FROM AGE(a.activity_month, a.cohort_month)) as months_since,\n    a.active_users,\n    ROUND(100.0 * a.active_users / cs.cohort_size, 1) as retention_pct\nFROM activity a\nJOIN cohort_sizes cs ON a.cohort_month = cs.cohort_month\nORDER BY 1, 3;\n\nRFM Segmentation\n-- Score customers by Recency, Frequency, Monetary value\nWITH rfm AS (\n    SELECT \n        customer_id,\n        CURRENT_DATE - MAX(order_date)::date as recency_days,\n        COUNT(*) as frequency,\n        SUM(amount) as monetary\n    FROM orders\n    WHERE order_date >= CURRENT_DATE - INTERVAL '12 months'\n    GROUP BY customer_id\n),\nscored AS (\n    SELECT *,\n        NTILE(5) OVER (ORDER BY recency_days DESC) as r_score,  -- lower recency = better\n        NTILE(5) OVER (ORDER BY frequency) as f_score,\n        NTILE(5) OVER (ORDER BY monetary) as m_score\n    FROM rfm\n)\nSELECT *,\n    CASE \n        WHEN r_score >= 4 AND f_score >= 4 THEN 'Champions'\n        WHEN r_score >= 3 AND f_score >= 3 THEN 'Loyal'\n        WHEN r_score >= 4 AND f_score <= 2 THEN 'New Customers'\n        WHEN r_score <= 2 AND f_score >= 3 THEN 'At Risk'\n        WHEN r_score <= 2 AND f_score <= 2 THEN 'Lost'\n        ELSE 'Needs Attention'\n    END as segment\nFROM scored;\n\nFunnel 
Analysis\n-- Conversion funnel with drop-off rates\nWITH funnel AS (\n    SELECT \n        COUNT(DISTINCT CASE WHEN event = 'visit' THEN user_id END) as visits,\n        COUNT(DISTINCT CASE WHEN event = 'signup' THEN user_id END) as signups,\n        COUNT(DISTINCT CASE WHEN event = 'activation' THEN user_id END) as activations,\n        COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) as purchases\n    FROM events\n    WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'\n)\nSELECT \n    visits, signups, activations, purchases,\n    ROUND(100.0 * signups / NULLIF(visits, 0), 1) as visit_to_signup_pct,\n    ROUND(100.0 * activations / NULLIF(signups, 0), 1) as signup_to_activation_pct,\n    ROUND(100.0 * purchases / NULLIF(activations, 0), 1) as activation_to_purchase_pct,\n    ROUND(100.0 * purchases / NULLIF(visits, 0), 1) as overall_conversion_pct\nFROM funnel;\n\nPhase 3: Communicate the Insight\nThe Insight Formula\n\nEvery finding must follow this structure:\n\nINSIGHT: [one-sentence finding]\nEVIDENCE: [specific numbers with context]\nSO WHAT: [why this matters to the business]\nNOW WHAT: [recommended action]\nCONFIDENCE: [high/medium/low + why]\n\n\nExample:\n\nINSIGHT: SMB segment revenue dropped 18% in Q4, while Enterprise grew 5%.\nEVIDENCE: SMB revenue was $1.2M in Q3 vs $984K in Q4. 73% of the drop came from \n          churned accounts that joined via the Google Ads campaign in Q2.\nSO WHAT: Our Google Ads campaign attracted low-quality SMB leads with high churn risk. \n         The CAC for these accounts was $340 but LTV was only $280 — we lost money.\nNOW WHAT: Pause Google Ads for SMB. Shift budget to LinkedIn (SMB LTV: $890, CAC: $220). 
\n         Tighten qualification criteria for ad-sourced leads.\nCONFIDENCE: High — based on 847 churned accounts with clear acquisition source data.\n\nVisualization Selection Guide\nData Type\tBest Chart\tWhen to Use\tAvoid\nTrend over time\tLine chart\tContinuous data, 5+ periods\tPie chart, bar\nComparison\tHorizontal bar\tRanking, categories <15\t3D charts\nComposition\tStacked bar / 100% bar\tParts of a whole over time\tPie (>5 slices)\nDistribution\tHistogram / box plot\tUnderstanding spread\tBar chart\nCorrelation\tScatter plot\t2 numeric variables\tLine chart\nSingle KPI\tBig number + sparkline\tExecutive dashboards\tTables\nPart of whole (static)\tPie/donut (≤5 slices)\tOne point in time\tPie (>5 slices)\nGeographic\tMap / choropleth\tLocation-based data\tBar chart\nChart Formatting Rules\nTitle = the insight, not the data description (\"SMB churn drove Q4 revenue drop\" ✅, \"Q4 Revenue by Segment\" ❌)\nY-axis starts at zero for bar charts (truncating exaggerates)\nAnnotate inflection points — label the moments that matter\nLimit colors to 5 — use grey for everything except the story\nNo gridlines if possible — they add noise\nSource and date in small text at bottom\nReport Structure\n# [Analysis Title]\n**Date:** [date] | **Author:** [name] | **Stakeholder:** [who asked]\n\n## Executive Summary (3 sentences max)\n[Key finding. Business impact. Recommended action.]\n\n## Key Metrics\n| Metric | Current | Previous | Change |\n|--------|---------|----------|--------|\n| [KPI]  | [value] | [value]  | [+/-%] |\n\n## Findings\n### Finding 1: [Insight headline]\n[Evidence + visualization + interpretation]\n\n### Finding 2: [Insight headline]\n[Evidence + visualization + interpretation]\n\n## Recommendations\n1. **[Action]** — [Expected impact] — [Effort: low/medium/high]\n2. 
**[Action]** — [Expected impact] — [Effort: low/medium/high]\n\n## Methodology & Limitations\n- Data source: [what, date range, granularity]\n- Assumptions: [list any]\n- Limitations: [what we couldn't measure, data gaps]\n- Confidence: [high/medium/low]\n\n## Appendix\n[Detailed queries, full data tables, supplementary charts]\n\nPhase 4: Evaluate & Close the Loop\n\nAfter delivering the analysis, track whether it led to action:\n\nanalysis_followup:\n  original_question: \"Why did Q4 revenue drop?\"\n  delivered: \"2024-01-15\"\n  recommendation: \"Shift ad spend from Google to LinkedIn\"\n  action_taken: \"yes — budget reallocated Feb 1\"\n  result: \"SMB churn dropped 34% in Feb, CAC improved by $120\"\n  lessons: \"Ad channel quality matters more than volume\"\n\nAnalysis Scoring Rubric (0-100)\n\nUse this to self-evaluate before delivering:\n\nDimension\tWeight\tCriteria\tScore\nQuestion Clarity\t15\tIs the business question specific and decision-linked?\t/15\nData Quality\t15\tWas data profiled, cleaned, and limitations noted?\t/15\nAnalytical Rigor\t25\tRight technique for the question? Statistical validity? Edge cases?\t/25\nInsight Quality\t25\tDoes every finding follow Insight → Evidence → So What → Now What?\t/25\nCommunication\t10\tClear visualizations? Right format for the audience? Scannable?\t/10\nActionability\t10\tAre recommendations specific, prioritized, and effort-rated?\t/10\n\nScoring: 90+ = ship it. 70-89 = review one weak area. 
<70 = rework before delivering.\n\nAdvanced Techniques\nStatistical Significance Quick Check\n\nBefore claiming a change is real:\n\nSample size per group: ≥30 (bare minimum), ≥385 for ±5% margin\nConfidence level: 95% (p < 0.05) for business decisions\nEffect size: Is the difference practically meaningful, not just statistically?\n\nQuick z-test for proportions:\n  p1 = conversion_rate_A, p2 = conversion_rate_B\n  p_pooled = (successes_A + successes_B) / (n_A + n_B)\n  z = (p1 - p2) / sqrt(p_pooled * (1-p_pooled) * (1/n_A + 1/n_B))\n  |z| > 1.96 → significant at 95%\n\nA/B Test Design Template\nab_test:\n  name: \"New pricing page\"\n  hypothesis: \"Showing annual savings will increase annual plan signups by 15%\"\n  primary_metric: \"annual plan conversion rate\"\n  secondary_metrics: [\"revenue per visitor\", \"bounce rate\"]\n  guardrail_metrics: [\"total conversion rate\", \"support tickets\"]\n  sample_size_per_variant: 3800  # for 15% MDE, 80% power, 95% confidence\n  expected_duration: \"14 days at current traffic\"\n  segments_to_check: [\"new vs returning\", \"mobile vs desktop\", \"geo\"]\n  decision_rules:\n    ship: \"primary metric significant positive, no guardrail regression\"\n    iterate: \"directionally positive but not significant — extend 7 days\"\n    kill: \"negative or guardrail regression\"\n\nMoving Averages for Noisy Data\n-- 7-day moving average to smooth daily noise\n-- Note: ROWS windows count rows, not days; assumes exactly one row per day\nSELECT \n    date,\n    daily_value,\n    AVG(daily_value) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) as ma_7d,\n    AVG(daily_value) OVER (ORDER BY date ROWS BETWEEN 27 PRECEDING AND CURRENT ROW) as ma_28d\nFROM daily_metrics;\n\nYear-over-Year Comparison\n-- Note: LAG(..., 12) is positional, not calendar-aware; assumes no missing months\nSELECT \n    DATE_TRUNC('month', created_at) as month,\n    SUM(revenue) as revenue,\n    LAG(SUM(revenue), 12) OVER (ORDER BY DATE_TRUNC('month', created_at)) as revenue_yoy,\n    ROUND(100.0 * (SUM(revenue) - LAG(SUM(revenue), 12) OVER (ORDER BY DATE_TRUNC('month', created_at)))\n        / 
NULLIF(LAG(SUM(revenue), 12) OVER (ORDER BY DATE_TRUNC('month', created_at)), 0), 1) as yoy_growth_pct\nFROM orders\nGROUP BY 1 ORDER BY 1;\n\nSpreadsheet & CSV Analysis\n\nWhen working with files (no database):\n\nLoad the file — Read with appropriate tool, note delimiter/encoding\nInspect shape — Row count, column names, dtypes\nProfile each column — Nulls, uniques, min/max, distribution\nApply the same DICE framework — Question → Investigate → Communicate → Evaluate\nCommon CSV Operations\nPivot: Group by one column, aggregate another\nMerge: Join two CSVs on a common key (watch for many-to-many)\nFilter: Subset to relevant rows before analysis\nDerive: Create calculated columns (ratios, categories, flags)\nData Quality Red Flags in Spreadsheets\nMixed data types in a column (numbers stored as text)\nMerged cells (break everything)\nHidden rows/columns (missing data)\nFormulas referencing external files (broken links)\n\"Last updated: 2022\" (stale data)\nEdge Cases & Gotchas\nTimezone Issues\nAlways confirm: is this UTC, local, or mixed?\nAggregating across timezones without converting = wrong numbers\n\"Daily\" metrics shift depending on timezone definition\nSurvivorship Bias\nAnalyzing only current customers? You're missing the ones who left.\nLooking at successful campaigns? 
What about the ones that failed?\nAlways ask: \"What data am I NOT seeing?\"\nSimpson's Paradox\nA trend that appears in several groups may reverse when groups are combined\nAlways check both the aggregate AND the segments\nClassic example: treatment works for men AND women separately, but \"fails\" overall because of unequal group sizes\nSmall Sample Traps\n<30 observations: don't claim patterns\nOne big customer can move averages dramatically — check for concentration\n\"Revenue grew 200%!\" (from $100 to $300 — meaningless)\nCurrency & Unit Confusion\nAlways label units: \"$K\", \"users\", \"sessions\", \"orders\"\nRevenue ≠ profit ≠ bookings ≠ ARR — clarify which\nIf comparing across currencies/periods: normalize\nDaily Analyst Routine\nMorning (15 min):\n□ Check key dashboards — any anomalies?\n□ Review overnight data loads — anything break?\n□ Scan stakeholder requests — prioritize\n\nAnalysis blocks (focused 2-hour chunks):\n□ Pick one question from the backlog\n□ Run the DICE framework start to finish\n□ Deliver insight, not just data\n\nEnd of day (10 min):\n□ Update analysis log with today's findings\n□ Note any data quality issues discovered\n□ Queue tomorrow's priority question\n\nTools & Environment\n\nThis skill is tool-agnostic. It works with:\n\nDatabases: PostgreSQL, MySQL, SQLite, BigQuery, Snowflake, Redshift\nSpreadsheets: CSV, Excel, Google Sheets\nLanguages: SQL (primary), Python/pandas if available\nVisualization: Any charting tool, or describe charts for stakeholders\nFiles: JSON, Parquet, XML, API responses\n\nNo dependencies. No scripts. Pure analytical methodology + reusable query patterns.\n\nSample Output: Complete Mini-Analysis\nANALYSIS: Website Conversion Rate Drop — January 2024\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\nEXECUTIVE SUMMARY\nConversion rate dropped from 3.2% to 2.1% in January. Root cause: a broken \ncheckout button on mobile Safari (iOS 17.2+) affecting 34% of mobile traffic. 
\nFix the bug → recover ~$47K/month in lost revenue.\n\nKEY METRICS\n  Conversion rate:  2.1% (was 3.2%) — ↓34%\n  Mobile conversion: 0.8% (was 2.9%) — ↓72%  ← THE STORY\n  Desktop conversion: 3.4% (was 3.5%) — ↓3%  (normal variance)\n\nFINDING\nThe 5-splits analysis immediately pointed to device type. Mobile conversion \ncratered on Jan 4 — the same day iOS 17.2 rolled out widely. The checkout \nbutton uses a CSS property unsupported in Safari 17.2+.\n\n  Affected sessions: 12,400 (Jan 4-31)\n  Estimated lost conversions: 12,400 × 2.1pp conversion gap (2.9% − 0.8%) = 260 orders\n  Estimated lost revenue: 260 × $181 avg order = $47,060\n\nRECOMMENDATION\n1. **Hotfix the CSS** — Engineering, 2-hour fix, deploy today [HIGH]\n2. **Add Safari to CI/CD browser matrix** — Prevent recurrence [MEDIUM]\n3. **Set up device-segment alerting** — Auto-flag >10% drops [LOW]\n\nCONFIDENCE: High — reproduced the bug, confirmed with browser logs.\nMETHODOLOGY: 30-day comparison, segmented by device + browser + date.\n\n\nBuilt by AfrexAI ⚡ — turning data into decisions."
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/1kalin/afrexai-data-analyst",
    "publisherUrl": "https://clawhub.ai/1kalin/afrexai-data-analyst",
    "owner": "1kalin",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/afrexai-data-analyst",
    "downloadUrl": "https://openagent3.xyz/downloads/afrexai-data-analyst",
    "agentUrl": "https://openagent3.xyz/skills/afrexai-data-analyst/agent",
    "manifestUrl": "https://openagent3.xyz/skills/afrexai-data-analyst/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/afrexai-data-analyst/agent.md"
  }
}