[Quant Lecture] The Hypothesis-Testing Framework
Statistics for algorithmic traders
This chapter turns estimation into a decision process: from a single backtest path to a formal, falsifiable test that separates repeatable edge from noise and supports real capital decisions. It adopts a skeptical starting point ("no edge") and advances with robust test statistics, defensible p-values/alphas, confidence-interval thinking, and explicit power analysis.
What's inside:
From estimation to decision. Why measuring uncertainty (point & interval estimates) isn't enough, and how hypothesis testing supplies the courtroom-style procedure for a go/no-go verdict.
Defining the claims. How to state precise null vs. alternative hypotheses for traders: mean excess return, Sharpe, regression alpha, correlations, and model comparisons (including costs via "µ ≤ friction").
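A cost-aware null like "µ ≤ friction" can be tested directly as a one-sided, one-sample t-test against the friction hurdle rather than against zero. A minimal sketch, assuming hypothetical daily returns and an illustrative cost estimate (both made up here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical daily strategy returns and an assumed average daily cost drag.
daily_returns = rng.normal(loc=0.0008, scale=0.01, size=500)
friction = 0.0003

# H0: mu <= friction   vs   H1: mu > friction (one-sided test).
# `alternative="greater"` requires scipy >= 1.6.
t_stat, p_value = stats.ttest_1samp(daily_returns, popmean=friction,
                                    alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Testing against the hurdle instead of zero bakes transaction costs into the null, so a rejection already implies economic, not just statistical, significance.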
Test statistics done right. Distilling performance into "signal ÷ noise," using HAC/Newey-West errors and joint tests (F-tests) when factors are assessed together.
From numbers to evidence. Mapping a test statistic to a p-value and choosing α as a business threshold; what p-values are, and aren't.
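The mapping from statistic to evidence is mechanical: a t-statistic and its degrees of freedom give a p-value via the t-distribution's tail, then the chosen α decides. A sketch with hypothetical numbers (t-stat, degrees of freedom, and α are all assumed here):

```python
from scipy import stats

t_stat = 2.1   # hypothetical test statistic from some performance test
df = 251       # hypothetical degrees of freedom (about one year of daily data)
p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided tail probability
alpha = 0.05   # business-chosen tolerance for false discoveries
print(f"p = {p_value:.4f}; reject H0: {p_value < alpha}")
```

The p-value is the probability of a statistic at least this extreme if the null were true; it is not the probability that the strategy has no edge.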
Intervals as decisions. Duality between tests and confidence intervals; using bootstrap CIs (e.g., Sharpe) and deciding on the lower bound vs. a cost-aware hurdle.
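The interval-as-decision idea for Sharpe can be sketched with a percentile bootstrap: resample the returns with replacement, recompute the Sharpe each time, and compare the interval's lower bound to a hurdle. Returns, resample count, and the hurdle below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.0006, 0.01, size=756)  # hypothetical ~3y of daily returns

def sharpe(x):
    """Annualized Sharpe ratio from daily returns (assuming 252 trading days)."""
    return np.sqrt(252) * x.mean() / x.std(ddof=1)

# Percentile bootstrap: resample with replacement, recompute Sharpe each draw.
boot = np.array([sharpe(rng.choice(returns, size=returns.size, replace=True))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
hurdle = 0.5  # assumed cost-aware Sharpe hurdle
print(f"Sharpe 95% CI: [{lo:.2f}, {hi:.2f}]; deploy only if {lo:.2f} > {hurdle}")
```

Deciding on the lower bound rather than the point estimate is the test-interval duality in action: the decision rejects "Sharpe ≤ hurdle" only when the whole plausible range clears the bar.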
Errors and power. Type I (false edge) vs. Type II (missed edge), their costs, and the levers that set statistical power: α, effect size, variance, and sample size.
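The interplay of those levers can be made concrete with a standard normal-approximation power calculation: how many observations are needed to detect a given Sharpe at a chosen α and power. The target Sharpe, α, and power below are illustrative assumptions:

```python
from scipy import stats

alpha, power = 0.05, 0.80
sharpe_annual = 1.0                    # effect size we want to be able to detect
sr_daily = sharpe_annual / 252 ** 0.5  # per-observation signal-to-noise ratio

# n ~ ((z_{1-alpha} + z_{power}) / effect)^2, the usual normal approximation.
z_a = stats.norm.ppf(1 - alpha)  # one-sided test
z_b = stats.norm.ppf(power)
n = ((z_a + z_b) / sr_daily) ** 2
print(f"~{n:.0f} daily obs (~{n / 252:.1f} years) to detect SR=1 at 80% power")
```

Roughly six years of daily data are needed in this configuration, which is exactly why halving the effect size (quadrupling the required sample) or cutting the sample short so often leaves backtests underpowered.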
Backtest length trade-off. Long samples boost power but threaten stationarity; short samples fit the prevailing regime but risk underpowered conclusions, plus ways to balance the tension.
Advanced testing via Likelihood Ratio. A general, nested-model framework (with χ² reference distribution) to test structural breaks, factor redundancy, and justified model complexity.
A disciplined path from "looks good" to statistically defensible and economically relevant deployment decisions.
Check a sample of what you will find inside: