Processing: Smoothing algorithms
A roast of fancy algorithms and the shockingly simple truth about machine learning
Table of contents:
Introduction.
Particle filter.
Wavelet denoising.
HP filter.
Introduction
Stock prices are like hyper kids at recess: wild, unpredictable, and loud enough to give you a headache. Traditional methods, like simple moving averages, are like that one supervisor who shows up late to the party, only to see the mess after the chaos has already happened.
And then there’s machine learning—oh, machine learning! It’s like that friend who overcomplicates a simple plan. So, the million-dollar question: are ML algorithms better than old-school methods? Well, the answer is: it depends 😇
It depends, particularly on the choice of the ML model and the analytical method used. Now, personally, I’m all about non-directional methods—they’re like the chill yoga instructors of trading. But, by popular demand, I’ll take a look at these three directional ML algorithms:
Particle filters: Think of them as a swarm of cookie-seeking robots, each trying to locate the true cookie jar–the hidden trend–amidst all the chaos.
Wavelet denoising: This method acts like a LEGO-building microscope that breaks down data into small pieces, cleans out the noise, and reconstructs the signal.
Hodrick-Prescott Filter: Picture a tightrope walker who must maintain balance; the HP filter smooths the trend while keeping close to the original data.
Let’s just say, I’m about to remember why I swiped left on them a long time ago. Aaah! ML and nostalgia... Get ready for some laughs and facepalms—this is gonna be fun!
Let us begin with the first tool, the cookie-seeking robot swarm.
Particle filter
Particle Filters are designed to estimate hidden states in a dynamic system. In our context, the hidden state represents the true underlying stock trend, and the observable measurements are the noisy stock prices.
We start with a state-space model that consists of two equations:
State evolution equation:
The hidden state $x_t$ at time $t$ evolves from the previous state $x_{t-1}$ plus some random fluctuations–think of it as cookie crumbs left behind by sneaky cookie thieves:

$$x_t = x_{t-1} + w_t$$

where $w_t \sim \mathcal{N}(0, \sigma_w^2)$ is the process noise.
Observation equation:
The observed price $z_t$ is modeled as the hidden state plus some measurement noise–like wind dispersing the cookie smell:

$$z_t = x_t + v_t$$

where $v_t$ is modeled by a heavy‑tailed distribution such as the Student’s t‑distribution:

$$v_t \sim t_{df}(0, \sigma_v)$$

The degrees of freedom $df$ control how heavy the tails are—a low $df$ means extreme events (or extra crunchy cookies) are more likely.
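To make the model concrete, here is a minimal Python sketch that simulates this state-space model. The noise scales `sigma_w` and `sigma_v`, the degrees of freedom `df`, and the starting level are illustrative assumptions, not values from any fitted model:

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions, not fitted values)
T = 250          # number of time steps (roughly one trading year)
sigma_w = 0.1    # std of the Gaussian process noise w_t
sigma_v = 0.5    # scale of the Student-t observation noise v_t
df = 3           # low df => heavy tails => frequent extreme moves

# State evolution: x_t = x_{t-1} + w_t (hidden trend as a random walk)
w = rng.normal(0.0, sigma_w, size=T)
x = 100 + np.cumsum(w)                     # start the hidden trend at 100

# Observation: z_t = x_t + v_t, with heavy-tailed Student-t noise
v = student_t.rvs(df, scale=sigma_v, size=T, random_state=rng)
z = x + v                                  # the noisy "prices" we observe
```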
Updating the particle filter:
The Particle Filter approximates the probability distribution of $x_t$ using a set of $N$ weighted particles:

$$\{\, x_t^{(i)},\; w_t^{(i)} \,\}_{i=1}^{N}$$
The algorithm follows these steps (a Python sketch follows the list):
Initialization: Start with $N$ particles distributed according to a prior belief about $x_0$.
Prediction–motion update: For each particle, update its state using the state evolution equation:

$$x_t^{(i)} = x_{t-1}^{(i)} + w_t^{(i)}, \quad w_t^{(i)} \sim \mathcal{N}(0, \sigma_w^2)$$
Note: For highly non‑stationary situations, the process‑noise variance $\sigma_w^2$ is adjusted dynamically.
Update–measurement update: The likelihood for each particle is computed using the Student’s t‑distribution:

$$p\big(z_t \mid x_t^{(i)}\big) = t_{df}\!\left(\frac{z_t - x_t^{(i)}}{\sigma_v}\right)$$

and the weights are updated as:

$$w_t^{(i)} \propto w_{t-1}^{(i)}\, p\big(z_t \mid x_t^{(i)}\big)$$

followed by normalization so that $\sum_{i=1}^{N} w_t^{(i)} = 1$.
Resampling: When the effective number of particles

$$N_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N} \big(w_t^{(i)}\big)^2}$$

falls below a threshold, resample the particles to focus on the most promising cookie jar locations.
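Putting the four steps together, here is a minimal Python sketch of the filter. The function name, the Gaussian prior centered on the first observation, and the resampling threshold of $N/2$ with systematic resampling are my own illustrative choices, not prescriptions from the math above:

```python
import numpy as np
from scipy.stats import t as student_t

def particle_filter(z, n_particles=1000, sigma_w=0.1, sigma_v=0.5,
                    df=3, resample_frac=0.5, seed=0):
    """Estimate the hidden trend behind noisy observations z."""
    rng = np.random.default_rng(seed)
    T = len(z)

    # Initialization: particles drawn from a prior belief about x_0
    # (here a Gaussian centered on the first observation -- an assumption)
    particles = rng.normal(z[0], sigma_v, size=n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    trend = np.empty(T)

    for step in range(T):
        # Prediction (motion update): x_t^(i) = x_{t-1}^(i) + w_t^(i)
        particles += rng.normal(0.0, sigma_w, size=n_particles)

        # Update (measurement update): Student-t likelihood of z_t
        likelihood = student_t.pdf(z[step], df, loc=particles, scale=sigma_v)
        weights *= likelihood
        weights /= weights.sum()

        # Resampling: systematic resampling when N_eff drops below threshold
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < resample_frac * n_particles:
            positions = (rng.random() + np.arange(n_particles)) / n_particles
            cumw = np.cumsum(weights)
            cumw[-1] = 1.0  # guard against floating-point round-off
            particles = particles[np.searchsorted(cumw, positions)]
            weights.fill(1.0 / n_particles)

        # Point estimate of the hidden trend: weighted particle mean
        trend[step] = np.sum(weights * particles)

    return trend
```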
This framework is flexible—it can accommodate time‑varying parameters to handle non‑stationarity, while the heavy‑tailed likelihood makes it robust against extreme market moves.
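For instance, continuing from the two sketches above (using the simulated prices `z`, the true hidden trend `x`, and the same noise parameters), you could check how closely the filter tracks the hidden state:

```python
# Continues from the simulation and particle_filter sketches above
trend = particle_filter(z, n_particles=2000, sigma_w=sigma_w,
                        sigma_v=sigma_v, df=df)
print(np.mean(np.abs(trend - x)))  # mean absolute error vs. the true trend
```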