Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Comprehensive statistical testing library with 37+ methods for normality tests, location tests, correlation tests, time series tests, and model diagnostics....
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Comprehensive statistical testing library for hypothesis testing, A/B testing, and data analysis.
```python
from pywayne.statistics import NormalityTests, LocationTests
import numpy as np

# Test data normality
nt = NormalityTests()
data = np.random.normal(0, 1, 100)
result = nt.shapiro_wilk(data)
print(f"p-value: {result.p_value:.4f}, is_normal: {not result.reject_null}")

# Compare two groups
lt = LocationTests()
group_a = np.random.normal(100, 15, 50)
group_b = np.random.normal(105, 15, 50)
result = lt.two_sample_ttest(group_a, group_b)
print(f"Significant difference: {result.reject_null}")
```
Test if data follows a normal distribution or other specified distributions.

| Method | Description | Use Case |
| --- | --- | --- |
| shapiro_wilk | Shapiro-Wilk test | Small-medium samples (n ≤ 5000) |
| ks_test_normal | K-S normality test | Medium-large samples |
| ks_test_two_sample | Two-sample K-S test | Compare two sample distributions |
| anderson_darling | Anderson-Darling test | Tail-sensitive normality test |
| dagostino_pearson | D'Agostino-Pearson K² | Based on skewness and kurtosis |
| jarque_bera | Jarque-Bera test | Large samples, regression residuals |
| chi_square_goodness_of_fit | Chi-square goodness-of-fit | Categorical data |
| lilliefors_test | Lilliefors test | K-S test with unknown parameters |

Example:

```python
from pywayne.statistics import NormalityTests

nt = NormalityTests()
result = nt.shapiro_wilk(data)
if result.p_value < 0.05:
    print("Data is NOT normally distributed")
else:
    print("Data follows normal distribution")
```
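These are standard tests, so results can be sanity-checked without pywayne installed. A minimal sketch using SciPy's own Shapiro-Wilk routine directly (this mirrors the table above, not the library's API):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(0, 1, 100)

# scipy.stats.shapiro returns (statistic, pvalue);
# W is close to 1 for normal-looking data
stat, p = stats.shapiro(data)
print(f"W={stat:.4f}, p={p:.4f}, reject_null={p < 0.05}")
```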
Compare means or medians across groups (parametric and non-parametric).

| Method | Description | Use Case |
| --- | --- | --- |
| one_sample_ttest | One-sample t-test | Compare sample mean to a value |
| two_sample_ttest | Two-sample t-test | Compare two independent group means |
| paired_ttest | Paired t-test | Compare before/after measurements |
| one_way_anova | One-way ANOVA | Compare 3+ group means |
| mann_whitney_u | Mann-Whitney U test | Non-parametric two-sample test |
| wilcoxon_signed_rank | Wilcoxon signed-rank | Non-parametric paired test |
| kruskal_wallis | Kruskal-Wallis H test | Non-parametric multi-group test |

Example (A/B testing):

```python
from pywayne.statistics import LocationTests, NormalityTests

lt = LocationTests()
nt = NormalityTests()

# Check normality first
if nt.shapiro_wilk(control).p_value > 0.05:
    result = lt.two_sample_ttest(control, treatment)
else:
    result = lt.mann_whitney_u(control, treatment)
print(f"Effect significant: {result.reject_null}")
```
Test correlation between variables and independence of categorical variables.

| Method | Description | Use Case |
| --- | --- | --- |
| pearson_correlation | Pearson correlation | Linear relationship |
| spearman_correlation | Spearman's rank | Monotonic relationship |
| kendall_tau | Kendall's tau | Rank correlation, small samples |
| chi_square_independence | Chi-square independence | Categorical variables |
| fisher_exact_test | Fisher's exact test | 2×2 contingency table |
| mcnemar_test | McNemar's test | Paired categorical data |

Example:

```python
from pywayne.statistics import CorrelationTests

ct = CorrelationTests()
result = ct.pearson_correlation(x, y)
print(f"Correlation: {result.statistic:.3f}, p-value: {result.p_value:.4f}")
```
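For intuition about what `result.statistic` measures in the Pearson case, the coefficient is simple to compute by hand. A dependency-free sketch (the helper name `pearson_r` is ours, not the library's):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient (statistic only, no p-value)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear relationship gives r = 1
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```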
Test time series properties: stationarity, autocorrelation, cointegration.

| Method | Description | Use Case |
| --- | --- | --- |
| adf_test | Augmented Dickey-Fuller | Unit root test for stationarity |
| kpss_test | KPSS test | Stationarity test (complements ADF) |
| ljung_box_test | Ljung-Box Q test | Overall autocorrelation |
| runs_test | Runs test | Randomness testing |
| arch_test | ARCH effect test | Heteroscedasticity |
| granger_causality | Granger causality | Causal relationship |
| engle_granger_cointegration | Engle-Granger cointegration | Long-term equilibrium |
| breusch_godfrey_test | Breusch-Godfrey | Higher-order autocorrelation |

Example:

```python
from pywayne.statistics import TimeSeriesTests

tst = TimeSeriesTests()
adf_result = tst.adf_test(time_series_data)
kpss_result = tst.kpss_test(time_series_data)

if adf_result.reject_null:
    print("Series is stationary")
else:
    print("Series has unit root (non-stationary)")
```
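To make one of these concrete: the runs test reduces to counting sign runs and comparing against their expected count. A minimal sketch, dichotomizing around the sample mean for simplicity (implementations typically use the median; the helper name is ours):

```python
import math

def runs_test_z(xs):
    """Wald-Wolfowitz runs test sketch: z-score for the number of runs
    above/below the sample mean. |z| far from 0 suggests non-randomness."""
    mean = sum(xs) / len(xs)
    signs = [x > mean for x in xs]
    n1 = sum(signs)           # observations above the mean
    n2 = len(signs) - n1      # observations at or below the mean
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / math.sqrt(var)

# A strictly alternating series has far too many runs: large positive z
z = runs_test_z([1, 9, 1, 9, 1, 9, 1, 9, 1, 9])
```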
Regression model diagnostics: heteroscedasticity, autocorrelation, multicollinearity.

| Method | Description | Use Case |
| --- | --- | --- |
| breusch_pagan_test | Breusch-Pagan | Heteroscedasticity test |
| white_test | White's test | General heteroscedasticity |
| goldfeld_quandt_test | Goldfeld-Quandt | Structural break heteroscedasticity |
| durbin_watson_test | Durbin-Watson | First-order autocorrelation |
| variance_inflation_factor | VIF | Multicollinearity diagnosis |
| levene_test | Levene's test | Homogeneity of variance |
| bartlett_test | Bartlett's test | Homogeneity (normality assumption) |
| residual_normality_test | Residual normality | Regression assumption check |

Example:

```python
from pywayne.statistics import ModelDiagnostics

md = ModelDiagnostics()
residuals = y - model.predict(X)

# Check assumptions
bp_result = md.breusch_pagan_test(residuals, X)
dw_result = md.durbin_watson_test(residuals)
if bp_result.reject_null:
    print("Warning: heteroscedasticity detected")
```
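Among these, the Durbin-Watson statistic has a direct closed form, DW = Σ(e_t − e_{t−1})² / Σ e_t², with values near 2 indicating no first-order autocorrelation (near 0: positive, near 4: negative). A dependency-free sketch (the helper name is ours):

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic sketch: ratio of squared successive
    differences to the total sum of squared residuals."""
    num = sum((b - a) ** 2 for a, b in zip(residuals, residuals[1:]))
    den = sum(r ** 2 for r in residuals)
    return num / den

# Alternating residuals are negatively autocorrelated: DW well above 2
dw = durbin_watson([1, -1, 1, -1, 1, -1])
```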
All test methods return a unified TestResult object:

```python
result = nt.shapiro_wilk(data)

# Access results
result.test_name            # Test method name
result.statistic            # Test statistic value
result.p_value              # P-value
result.reject_null          # True if null hypothesis is rejected
result.critical_value       # Critical value (if applicable)
result.confidence_interval  # Tuple (lower, upper) if applicable
result.effect_size          # Effect size if applicable
result.additional_info      # Dict with additional information
```
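For illustration only, the shape of this object can be pictured as a plain dataclass. This sketch mirrors the documented fields; it is not the library's actual definition:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional, Tuple

@dataclass
class TestResultSketch:
    """Illustrative stand-in for pywayne's unified TestResult."""
    test_name: str
    statistic: float
    p_value: float
    reject_null: bool
    critical_value: Optional[float] = None
    confidence_interval: Optional[Tuple[float, float]] = None
    effect_size: Optional[float] = None
    additional_info: Dict[str, Any] = field(default_factory=dict)

r = TestResultSketch("shapiro_wilk", statistic=0.98, p_value=0.42,
                     reject_null=False)
```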
List all available test methods across all modules.

```python
from pywayne.statistics import list_all_tests
print(list_all_tests())
```
Display usage and documentation for a specific test.

```python
from pywayne.statistics import show_test_usage
show_test_usage('shapiro_wilk')
```
| Sample Size | Recommended Method |
| --- | --- |
| n < 30 | Shapiro-Wilk |
| 30 ≤ n ≤ 300 | Shapiro-Wilk, D'Agostino-Pearson |
| n > 300 | Jarque-Bera, Kolmogorov-Smirnov |
| Condition | Parametric | Non-parametric |
| --- | --- | --- |
| Normal data | t-test, ANOVA | - |
| Non-normal data | - | Mann-Whitney U, Kruskal-Wallis |
| Paired data | Paired t-test | Wilcoxon signed-rank |
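The decision rule in this table can be scripted. A hedged sketch using SciPy directly rather than pywayne's wrappers (the function name `compare_groups` is ours):

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick t-test vs Mann-Whitney U based on a normality pre-check."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        name, res = "two_sample_ttest", stats.ttest_ind(a, b)
    else:
        name, res = "mann_whitney_u", stats.mannwhitneyu(
            a, b, alternative="two-sided")
    return name, res.pvalue

rng = np.random.default_rng(1)
# Groups differ by one standard deviation: a large, detectable effect
test_used, p = compare_groups(rng.normal(0, 1, 50), rng.normal(1, 1, 50))
```

Note the pre-check on only part of the data in large samples: Shapiro-Wilk becomes oversensitive as n grows.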
When performing multiple tests, apply a p-value correction:

```python
from statsmodels.stats.multitest import multipletests

p_values = [r.p_value for r in results]
rejected, p_corrected, _, _ = multipletests(
    p_values, alpha=0.05, method='fdr_bh'
)
```
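For intuition, `method='fdr_bh'` is the Benjamini-Hochberg step-up procedure, which a few lines of plain Python can reproduce: sort the p-values, find the largest rank k with p_(k) ≤ (k/m)·α, and reject the k smallest.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR control (sketch). Returns a list of
    booleans, True where the corresponding hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k whose sorted p-value clears the k/m * alpha line
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

flags = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60])
```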
```python
import numpy as np
from pywayne.statistics import NormalityTests

def data_quality_check(data):
    nt = NormalityTests()
    normality = nt.shapiro_wilk(data)

    # Outlier detection (IQR rule)
    Q1, Q3 = np.percentile(data, [25, 75])
    IQR = Q3 - Q1
    outliers = data[(data < Q1 - 1.5 * IQR) | (data > Q3 + 1.5 * IQR)]

    return {
        'size': len(data),
        'is_normal': not normality.reject_null,
        'p_value': normality.p_value,
        'outliers': len(outliers),
    }
```
```python
from pywayne.statistics import NormalityTests, LocationTests

def ab_test_analysis(control, treatment):
    nt = NormalityTests()
    lt = LocationTests()

    # Check normality (Shapiro-Wilk on at most 100 points per group)
    norm_c = nt.shapiro_wilk(control[:100])
    norm_t = nt.shapiro_wilk(treatment[:100])

    # Choose the appropriate test
    if norm_c.p_value > 0.05 and norm_t.p_value > 0.05:
        result = lt.two_sample_ttest(control, treatment)
    else:
        result = lt.mann_whitney_u(control, treatment)

    return {
        'test_used': result.test_name,
        'p_value': result.p_value,
        'significant': result.reject_null,
        'effect_size': result.effect_size,
    }
```
```python
from pywayne.statistics import ModelDiagnostics

def diagnose_model(y, X, model):
    md = ModelDiagnostics()
    residuals = y - model.predict(X)

    return {
        'heteroscedasticity_bp': md.breusch_pagan_test(residuals, X).reject_null,
        'autocorrelation_dw': md.durbin_watson_test(residuals).statistic,
        'residuals_normal': md.residual_normality_test(residuals).p_value,
        'vif_max': max(md.variance_inflation_factor(X)),
    }
```
- All methods accept np.ndarray or list as input.
- All methods return a TestResult with a consistent interface.
- Always validate test assumptions before applying parametric tests.
- Apply multiple-testing correction when performing several tests.
- Report effect sizes alongside p-values for complete interpretation.