Trading the Breaking

Alpha Lab

Processing: Smoothing algorithms

A roast of fancy algorithms and the shockingly simple truth about machine learning

Quant Beckman
Feb 15, 2025


Table of contents:

  1. Introduction.

  2. Particle filter.

  3. Wavelet denoising.

  4. HP filter.


Introduction

Stock prices are like hyper kids at recess: wild, unpredictable, and loud enough to give you a headache. Traditional methods, like simple moving averages, are like that one supervisor who shows up late to the party, only to see the mess after the chaos has already happened.

And then there’s machine learning—oh, machine learning! It’s like that friend who overcomplicates a simple plan. So, the million-dollar question: are ML algorithms better than old-school methods? Well, the answer is: it depends 😇

It depends, particularly on the choice of the ML model and the analytical method used. Now, personally, I’m all about non-directional methods—they’re like the chill yoga instructors of trading. But, by popular demand, I’ll take a look at these three directional ML algorithms:

  1. Particle filters: Think of them as a swarm of cookie-seeking robots, each trying to locate the true cookie jar–the hidden trend–amidst all the chaos.

  2. Wavelet denoising: This method acts like a LEGO-building microscope that breaks down data into small pieces, cleans out the noise, and reconstructs the signal.

  3. Hodrick-Prescott Filter: Picture a tightrope walker who must maintain balance; the HP filter smooths the trend while keeping close to the original data.

Let’s just say, I’m about to remember why I swiped left on them a long time ago. Aaah! ML and nostalgia... Get ready for some laughs and facepalms—this is gonna be fun!

Let us begin with the first tool, the cookie-seeking robot swarm.

Particle filter

Particle Filters are designed to estimate hidden states in a dynamic system. In our context, the hidden state represents the true underlying stock trend, and the observable measurements are the noisy stock prices.

We start with a state-space model that consists of two equations:

  1. State evolution equation:
    The hidden state \(x_t\) at time \(t\) evolves from the previous state \(x_{t-1}\) plus some random fluctuations–think of it as cookie crumbs left behind by sneaky cookie thieves:

\(x_t = x_{t-1} + \omega_t, \)

where:

\(\omega_t \sim \mathcal{N}(0, \sigma_{\omega}^2) \text{ represents process noise.} \)

  2. Observation equation:
    The observed price \(z_t\) is modeled as the hidden state plus some measurement noise–like wind dispersing the cookie smell:

\(z_t = x_t + \nu_t\)

where \(\nu_t\) is modeled by a heavy‑tailed distribution such as the Student’s t‑distribution:

\(\nu_t \sim t(\text{df}, 0, \sigma_{\nu}).\)

The degrees of freedom df control how heavy the tails are—a low df means extreme events (or extra crunchy cookies) are more likely.
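To make the model concrete, here is a minimal simulation sketch in Python. The noise scales, degrees of freedom, and series length are illustrative assumptions on my part, not calibrated values from the post:

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(42)

T = 500          # number of observations (illustrative)
sigma_w = 0.05   # process-noise std dev (assumed)
sigma_v = 0.5    # observation-noise scale (assumed)
df = 3           # low df -> heavy tails, so extreme moves are common

# State evolution: x_t = x_{t-1} + w_t, with w_t ~ N(0, sigma_w^2)
x = np.cumsum(rng.normal(0.0, sigma_w, size=T))

# Observation: z_t = x_t + v_t, with v_t ~ Student-t(df, 0, sigma_v)
z = x + student_t.rvs(df, loc=0.0, scale=sigma_v, size=T, random_state=rng)
```

The hidden trend x is a plain random walk; the fat-tailed t noise occasionally throws large spikes into z, which is exactly what the filter below has to see through.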

Updating the particle filter:

The Particle Filter approximates the probability distribution of \(x_t\) using a set of particles

\(\{ x_t^{(i)} \}_{i=1}^{N} \text{ with corresponding weights } \{ w_t^{(i)} \}_{i=1}^{N}. \)

The algorithm follows these steps:

  1. Initialization: Start with N particles distributed according to a prior belief about \(x_0\).

  2. Prediction–motion update: For each particle, update its state using the state evolution equation:

\(x_t^{(i)} = x_{t-1}^{(i)} + \omega_t^{(i)}\)

Note: For highly non‑stationary situations, the process‑noise variance \(\sigma_{\omega}^2\) is adjusted dynamically.

  3. Update–measurement update: The likelihood for each particle is computed using the Student’s t‑distribution:

\(p(z_t \mid x_t^{(i)}) = t_{\text{pdf}}(z_t - x_t^{(i)}; \text{df}, 0, \sigma_{\nu}), \)

and the weights are updated as:

\(w_t^{(i)} \propto w_{t-1}^{(i)} \, t_{\text{pdf}}(z_t - x_t^{(i)}; \text{df}, 0, \sigma_{\nu}). \)

  4. Resampling: When the effective number of particles

\(N_{\text{eff}} = \frac{1}{\sum_{i=1}^{N} (w_t^{(i)})^2}\)

falls below a threshold, resample the particles to focus on the most promising cookie jar locations.
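Putting the four steps together, here is a minimal bootstrap particle filter sketch under the same assumed noise parameters as the simulation above. The function and parameter names are mine, not from the post:

```python
import numpy as np
from scipy.stats import t as student_t

def particle_filter(z, n_particles=1000, sigma_w=0.05, sigma_v=0.5,
                    df=3, resample_frac=0.5, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + w_t, z_t = x_t + v_t."""
    rng = np.random.default_rng(seed)
    n = len(z)

    # 1. Initialization: spread particles around the first observation (a simple prior)
    particles = z[0] + rng.normal(0.0, sigma_v, size=n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    trend = np.empty(n)

    for t_idx in range(n):
        # 2. Prediction: propagate each particle through the state equation
        particles = particles + rng.normal(0.0, sigma_w, size=n_particles)

        # 3. Update: reweight by the Student-t likelihood of the new observation
        weights = weights * student_t.pdf(z[t_idx] - particles, df,
                                          loc=0.0, scale=sigma_v)
        weights = weights / (weights.sum() + 1e-300)  # guard against underflow

        # Posterior mean of the particles is the trend estimate at time t
        trend[t_idx] = np.dot(weights, particles)

        # 4. Resampling: when N_eff falls below the threshold, resample
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < resample_frac * n_particles:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)

    return trend
```

Running trend = particle_filter(z) on the simulated series should track the hidden random walk while shrugging off the fat-tailed spikes; raising n_particles improves the approximation at linear cost in compute.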

This framework is flexible—it can accommodate time‑varying parameters to handle non‑stationarity, while the heavy‑tailed likelihood makes it robust against extreme market moves.
