Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Solve optimization, statistics, signal processing, and linear algebra problems with SciPy recipes and ready-to-run code.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
On first use, read setup.md for guidance on how to help the user effectively.
User needs scientific computing in Python: optimization, curve fitting, statistical tests, signal processing, interpolation, integration, or linear algebra. Agent provides working code, not theory.
This skill is stateless; no persistent storage is needed. All code runs in the user's Python environment. See memory-template.md for optional preference tracking.
| Topic | File |
|---|---|
| Usage guidance | setup.md |
| Optional preferences | memory-template.md |
Every response includes runnable code. No pseudocode, no "implement this yourself".

```python
# Always include imports
from scipy import optimize
import numpy as np

# Complete, working example
result = optimize.minimize(lambda x: x**2, x0=1.0)
print(f"Minimum at x={result.x[0]:.4f}")
```
| Problem | Module | Key function |
|---|---|---|
| Find minimum/maximum | `scipy.optimize` | `minimize`, `minimize_scalar` |
| Curve fitting | `scipy.optimize` | `curve_fit` |
| Root finding | `scipy.optimize` | `root`, `brentq`, `fsolve` |
| Statistical tests | `scipy.stats` | `ttest_ind`, `chi2_contingency` |
| Distributions | `scipy.stats` | `norm`, `poisson`, `expon` |
| Filter signals | `scipy.signal` | `butter`, `filtfilt`, `savgol_filter` |
| FFT | `scipy.fft` | `fft`, `ifft`, `fftfreq` |
| Interpolation | `scipy.interpolate` | `interp1d`, `UnivariateSpline` |
| Integration | `scipy.integrate` | `quad`, `solve_ivp` |
| Linear algebra | `scipy.linalg` | `solve`, `eig`, `svd` |
| Sparse matrices | `scipy.sparse` | `csr_matrix`, `linalg.spsolve` |
| Spatial data | `scipy.spatial` | `KDTree`, `distance` |
| Image processing | `scipy.ndimage` | `gaussian_filter`, `label` |
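The reference table lists root finding, but no recipe for it appears below. A minimal sketch using `brentq`, which needs a bracketing interval where the function changes sign (the interval here is chosen for illustration):

```python
from scipy.optimize import brentq
import numpy as np

# Solve cos(x) = x; the root lies in [0, 1] because
# cos(0) - 0 > 0 and cos(1) - 1 < 0 (sign change brackets it)
root = brentq(lambda x: np.cos(x) - x, 0, 1)
print(f"Root: {root:.6f}")  # ~0.739085
```

`brentq` is the usual first choice when a bracket is known; `fsolve` handles multivariate systems but only needs (and only uses) an initial guess.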
When code uses non-obvious parameters, explain why:

```python
# method='L-BFGS-B' supports bounded optimization
# bounds prevent physically impossible values
result = optimize.minimize(
    objective, x0,
    method='L-BFGS-B',
    bounds=[(0, None), (0, 100)]  # x1 >= 0, 0 <= x2 <= 100
)
```
Always include sanity checks:

```python
result = optimize.minimize(func, x0)
if not result.success:
    print(f"⚠️ Optimization failed: {result.message}")
else:
    print(f"✓ Converged in {result.nit} iterations")
```
SciPy builds on NumPy. Use vectorized operations:

```python
# ✓ Vectorized (fast)
x = np.linspace(0, 10, 1000)
y = np.sin(x)

# ✗ Loop (slow)
y = [np.sin(xi) for xi in x]
```
```python
from scipy.optimize import minimize
import numpy as np

# Rosenbrock function (classic test)
def rosenbrock(x):
    return sum(100*(x[1:]-x[:-1]**2)**2 + (1-x[:-1])**2)

x0 = np.array([0, 0])
result = minimize(rosenbrock, x0, method='BFGS')
print(f"Minimum at: {result.x}")
print(f"Function value: {result.fun}")
print(f"Converged: {result.success}")
```
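For one-dimensional problems, `minimize_scalar` (listed in the table above) is simpler than `minimize`. A small sketch with an illustrative quadratic:

```python
from scipy.optimize import minimize_scalar

# Bounded 1-D minimization: f(x) = (x - 2)^2 on [0, 10]
result = minimize_scalar(lambda x: (x - 2)**2,
                         bounds=(0, 10), method='bounded')
print(f"Minimum at x = {result.x:.4f}")  # close to 2.0
```

`method='bounded'` requires the `bounds` pair; without bounds, the default Brent method searches unconstrained.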
```python
from scipy.optimize import minimize

# Minimize f(x,y) = x² + y² subject to x + y = 1
def objective(x):
    return x[0]**2 + x[1]**2

def constraint(x):
    return x[0] + x[1] - 1  # Must equal 0

result = minimize(
    objective,
    x0=[0.5, 0.5],
    constraints={'type': 'eq', 'fun': constraint}
)
```
```python
from scipy.optimize import curve_fit
import numpy as np

# Fit exponential decay
def model(t, a, tau):
    return a * np.exp(-t / tau)

t_data = np.array([0, 1, 2, 3, 4, 5])
y_data = np.array([10, 6.1, 3.7, 2.2, 1.4, 0.8])

params, covariance = curve_fit(model, t_data, y_data)
a_fit, tau_fit = params
errors = np.sqrt(np.diag(covariance))
print(f"a = {a_fit:.2f} ± {errors[0]:.2f}")
print(f"τ = {tau_fit:.2f} ± {errors[1]:.2f}")
```
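`curve_fit` also accepts an initial guess `p0` and parameter `bounds`, which matter for harder fits. A sketch reusing the same decay model (the specific `p0` values here are illustrative):

```python
from scipy.optimize import curve_fit
import numpy as np

def model(t, a, tau):
    return a * np.exp(-t / tau)

t_data = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y_data = np.array([10, 6.1, 3.7, 2.2, 1.4, 0.8])

# p0 gives the solver a reasonable starting point;
# bounds=(0, np.inf) keeps both parameters non-negative
params, _ = curve_fit(model, t_data, y_data,
                      p0=[10, 2], bounds=(0, np.inf))
print(f"a = {params[0]:.2f}, tau = {params[1]:.2f}")
```

With bounds set, `curve_fit` switches from Levenberg-Marquardt to a trust-region method internally.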
```python
from scipy import stats

# Compare two groups (independent t-test)
group_a = [23, 25, 28, 24, 26]
group_b = [30, 32, 29, 31, 33]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("✓ Significant difference (p < 0.05)")
else:
    print("✗ No significant difference")
```
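The default `ttest_ind` assumes equal variances in both groups. When that assumption is doubtful, `equal_var=False` runs Welch's t-test instead; a sketch on the same data:

```python
from scipy import stats

group_a = [23, 25, 28, 24, 26]
group_b = [30, 32, 29, 31, 33]

# equal_var=False -> Welch's t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t = {t_stat:.3f}, p = {p_value:.4f}")
```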
```python
from scipy import stats
import numpy as np

data = np.random.exponential(scale=2.0, size=1000)

# Fit exponential distribution
loc, scale = stats.expon.fit(data)
print(f"Fitted scale (λ⁻¹): {scale:.3f}")

# Test goodness of fit
ks_stat, ks_p = stats.kstest(data, 'expon', args=(loc, scale))
print(f"KS test: p = {ks_p:.4f}")
```
```python
from scipy import stats
import numpy as np

data = [2.3, 2.5, 2.1, 2.8, 2.4, 2.6, 2.2]
confidence = 0.95

mean = np.mean(data)
sem = stats.sem(data)
ci = stats.t.interval(confidence, len(data)-1, loc=mean, scale=sem)

print(f"Mean: {mean:.2f}")
print(f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
```
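The t-interval above assumes roughly normal data. A distribution-free alternative is a bootstrap interval via `scipy.stats.bootstrap` (available in SciPy 1.7+); a sketch on the same sample:

```python
from scipy import stats
import numpy as np

data = [2.3, 2.5, 2.1, 2.8, 2.4, 2.6, 2.2]

# Resampling-based CI; no normality assumption
res = stats.bootstrap((data,), np.mean, confidence_level=0.95,
                      random_state=np.random.default_rng(0))
ci = res.confidence_interval
print(f"95% bootstrap CI: [{ci.low:.2f}, {ci.high:.2f}]")
```

With only seven points the resampling interval is rough; it shines on larger samples and statistics without a closed-form standard error.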
```python
from scipy import signal
import numpy as np

# Create noisy signal
fs = 1000  # Sample rate
t = np.linspace(0, 1, fs)
clean = np.sin(2 * np.pi * 10 * t)  # 10 Hz
noisy = clean + 0.5 * np.random.randn(len(t))

# Design and apply Butterworth filter
cutoff = 20  # Hz
order = 4
b, a = signal.butter(order, cutoff / (fs/2), btype='low')
filtered = signal.filtfilt(b, a, noisy)  # Zero-phase filtering
```
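For higher filter orders, `(b, a)` transfer-function coefficients can become numerically unstable; second-order sections (`output='sos'` with `sosfiltfilt`) are the safer form. A sketch of the same low-pass design:

```python
from scipy import signal
import numpy as np

fs = 1000
t = np.linspace(0, 1, fs)
noisy = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))

# Second-order sections: better conditioned than (b, a),
# especially as the order grows
sos = signal.butter(4, 20 / (fs / 2), btype='low', output='sos')
filtered = signal.sosfiltfilt(sos, noisy)  # still zero-phase
```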
```python
from scipy.fft import fft, fftfreq
import numpy as np

# Sample signal
fs = 1000
t = np.linspace(0, 1, fs)
signal_data = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t)

# Compute FFT
yf = fft(signal_data)
xf = fftfreq(len(t), 1/fs)

# Get magnitude spectrum (positive frequencies only);
# normalize by the full length, with factor 2 for the folded half
n = len(t) // 2
freqs = xf[:n]
magnitudes = 2/len(t) * np.abs(yf[:n])

# Find dominant frequency
peak_idx = np.argmax(magnitudes)
print(f"Dominant frequency: {freqs[peak_idx]:.1f} Hz")
```
```python
from scipy.interpolate import interp1d, UnivariateSpline
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5])
y = np.array([0, 0.8, 0.9, 0.1, -0.8, -1])

# Linear interpolation
f_linear = interp1d(x, y, kind='linear')

# Cubic interpolation (smoother)
f_cubic = interp1d(x, y, kind='cubic')

# Smoothing spline (handles noise)
spline = UnivariateSpline(x, y, s=0.5)

x_new = np.linspace(0, 5, 100)
y_cubic = f_cubic(x_new)
```
```python
from scipy.integrate import quad
import numpy as np

# Integrate sin(x) from 0 to π
result, error = quad(np.sin, 0, np.pi)
print(f"∫sin(x)dx from 0 to π = {result:.6f} ± {error:.2e}")
# Expected: 2.0
```
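`quad` also handles infinite limits directly, no change of variables needed. A sketch with the Gaussian integral, whose exact value is √π:

```python
from scipy.integrate import quad
import numpy as np

# Integrate exp(-x^2) over the whole real line
result, error = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(f"Result: {result:.6f} (sqrt(pi) = {np.sqrt(np.pi):.6f})")
```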
```python
from scipy.integrate import solve_ivp
import numpy as np

# dy/dt = -2y, y(0) = 1 (exponential decay)
def dydt(t, y):
    return -2 * y

sol = solve_ivp(dydt, [0, 5], [1], t_eval=np.linspace(0, 5, 100))
# sol.t contains time points
# sol.y[0] contains y values
```
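This ODE has the analytic solution y(t) = e^(-2t), which makes it a convenient check that the solver tolerances are set sensibly. A sketch with tightened `rtol`/`atol` (the specific tolerance values are illustrative):

```python
from scipy.integrate import solve_ivp
import numpy as np

def dydt(t, y):
    return -2 * y

sol = solve_ivp(dydt, [0, 5], [1],
                t_eval=np.linspace(0, 5, 100),
                rtol=1e-8, atol=1e-10)

# Compare numerical result against the exact solution exp(-2t)
max_err = np.max(np.abs(sol.y[0] - np.exp(-2 * sol.t)))
print(f"Max error vs analytic solution: {max_err:.2e}")
```

The defaults (`rtol=1e-3`, `atol=1e-6`) are loose; tighten them when accuracy matters.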
```python
from scipy import linalg
import numpy as np

# Solve Ax = b
A = np.array([[3, 1], [1, 2]])
b = np.array([9, 8])

x = linalg.solve(A, b)
print(f"Solution: x = {x}")

# Verify
print(f"Check: A @ x = {A @ x}")
```
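For large systems with mostly zero entries, the table above points to `scipy.sparse`. A sketch solving the same 2×2 system in sparse form (tiny here, purely for illustration):

```python
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve
import numpy as np

# Sparse storage + spsolve avoids building a dense matrix,
# which is what pays off at realistic sizes
A = csc_matrix(np.array([[3.0, 1.0], [1.0, 2.0]]))
b = np.array([9.0, 8.0])

x_sparse = spsolve(A, b)
print(f"Solution: {x_sparse}")
```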
```python
from scipy import linalg
import numpy as np

A = np.array([[1, 2], [2, 1]])
eigenvalues, eigenvectors = linalg.eig(A)
print(f"Eigenvalues: {eigenvalues}")
print(f"Eigenvectors:\n{eigenvectors}")
```
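When the matrix is symmetric (as this one is), `linalg.eigh` is the better call: it returns real eigenvalues sorted in ascending order, instead of the complex values `eig` yields. A sketch on the same matrix:

```python
from scipy import linalg
import numpy as np

A = np.array([[1, 2], [2, 1]])

# eigh exploits symmetry: real eigenvalues, ascending order
eigenvalues, eigenvectors = linalg.eigh(A)
print(f"Eigenvalues: {eigenvalues}")  # [-1.  3.]
```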
- Wrong bounds format in `minimize`: `bounds` must be a list of `(min, max)` tuples, one per variable
- Forgetting to check `result.success`: optimization can fail silently; always check
- Using `interp1d` outside the data range: raises an error by default; use `fill_value='extrapolate'` or `bounds_error=False`
- `filtfilt` vs `lfilter`: use `filtfilt` for zero-phase filtering; `lfilter` introduces phase shift
- `curve_fit` with a bad initial guess: can converge to the wrong solution; always provide a reasonable `p0`
- Integer division in Python 3: use `x / 2`, not `x // 2`, for float division in formulas
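The extrapolation pitfall above is easy to demonstrate. A sketch with illustrative data, showing `fill_value='extrapolate'` continuing the interpolant past the sample range:

```python
from scipy.interpolate import interp1d
import numpy as np

x = np.array([0, 1, 2, 3])
y = x ** 2  # samples of a parabola

# Default (bounds_error=True) raises ValueError outside [0, 3];
# fill_value='extrapolate' extends the last linear segment instead
f = interp1d(x, y, kind='linear', fill_value='extrapolate')
value = float(f(4.0))
print(value)  # last segment has slope 5, so 9 + 5*1 = 14.0
```

Note that extrapolated values follow the interpolant, not the underlying function (the true parabola gives 16 at x = 4), so extrapolate with care.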
Data that stays local:
- All computations run in the user's Python environment
- No data leaves the machine

This skill does NOT:
- Send data externally
- Create persistent files
- Access network resources
Install with `clawhub install <slug>` if the user confirms:
- `math`: mathematical concepts
- `data-analysis`: data exploration
- `data`: data handling patterns
If useful: `clawhub star scipy`
Stay updated: `clawhub sync`