Trading the Breaking
Alpha Lab

[WITH CODE] Model: Combining independent signals

Why signal correlations are sabotaging your returns

Quant Beckman
Mar 24, 2025

Table of contents:

  1. Introduction.

  2. When one signal is not enough.

  3. Noise reduction through combining independent signals.

  4. Analysis of signal independence.

    1. Independence.

    2. Expectation properties and joint distributions.

    3. Covariance as a measure of dependence.

    4. Partial correlations.

  5. Limits of variance reduction.


Introduction

Imagine driving through a city filled with intersections—each governed by a traffic light. These lights guide your way safely. Now, imagine if you had only one traffic light, and it suddenly failed. You’d likely get stuck or take a wrong turn! Trading in financial markets is quite similar. Relying on one signal to make trading decisions can be risky because that single indicator might be overwhelmed by noise or fail when market conditions change.

However, too many simultaneous and interconnected signals can also cause accidents. What happens if every traffic light at an intersection turns green at the same time? Your car would end up like an accordion!

When one signal is not enough

Normally, one signal is not enough, but more than three could complicate traffic. Suppose we have an asset, which we label as asset i. For this asset, there are M different signals, each capturing a unique piece of market information. These signals are denoted by:

\(s_{ij} \quad \text{for} \quad j = 1, 2, \dots, M.\)

Each \(s_{ij}\) might represent:

  • \(s_{i1}\): A momentum indicator,

  • \(s_{i2}\): An earnings surprise measure,

  • \(s_{i3}\): A volatility estimate,
    and so on.

For simplicity, we assume that these signals are already comparable—that is, they are pre-scaled to be on the same level so that we can directly average them without further normalization. Think of it as having all ingredients pre-measured for a recipe.
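
How the pre-scaling is done is not specified here; a common choice, used below purely for illustration with made-up numbers, is to z-score each signal across assets so that every signal has zero mean and unit variance:

import numpy as np

# Hypothetical raw signal matrix: rows = assets, columns = the M = 3 signals
raw_signals = np.array([
    [ 0.5, 120.0, 0.02],
    [-1.2, -40.0, 0.05],
    [ 0.8, 310.0, 0.01],
    [ 0.1,  75.0, 0.03],
])

# Z-score each signal (column) across assets so the signals become comparable
means = raw_signals.mean(axis=0)
stds = raw_signals.std(axis=0)
scaled_signals = (raw_signals - means) / stds

print("Scaled signals:\n", scaled_signals)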

Once we have M comparable signals for each asset i, the next step is to combine them into a single composite signal. We define this composite signal as:

\(\alpha_{i,\text{combined}} = \sum_{j=1}^{M} w_j \, s_{ij},\)

where \(w_j\) is the weight assigned to the jth signal. For simplicity, we choose equal weights:

\(w_j = \frac{1}{M} \quad \text{for every } j.\)

Thus, the composite signal becomes:

\(\alpha_{i,\text{combined}} = \frac{1}{M} \sum_{j=1}^{M} s_{ij}.\)

This averaging method is our key tool for reducing noise. Even if each individual signal is noisy, their average becomes more reliable.

Now, before we go any further, I'll give you a warning about that method—and this is where the research part comes in, perhaps for later:

  • The method assumes that every signal is equally reliable and informative.

  • Some signals may be more predictive or less noisy than others.

  • Using equal weights can lead to suboptimal results if stronger signals are diluted by weaker or less reliable ones.

  • The noise reduction benefit relies on the assumption that the errors in the individual signals are uncorrelated.

  • If many signals are highly correlated, the averaging process may not cancel out the noise effectively, limiting the improvement in signal quality.

  • For the averaging method to work as intended, all signals must be on the same scale.

  • A simple average is a linear combination, which may fail to capture nonlinear relationships between the individual signals and the asset’s behavior.

  • The equal-weight scheme is static. In changing market conditions, the predictive power of individual signals might vary over time.

  • A fixed average may not adjust quickly to such shifts, potentially resulting in outdated or less effective composite signals.

  • If each signal contains a systematic bias, averaging them will not cancel out this bias—instead, it might reinforce it.

Since some of you have told me that sometimes you find it difficult to follow the formulations, I will give it to you already implemented.

import numpy as np

# Number of signals
M = 3

# Simulated signal values for a given asset (example)
s_i = np.array([0.5, -1.2, 0.8])  # These are s_i1, s_i2, s_i3

# Define equal weights
weights = np.ones(M) / M

# Compute the composite signal as the weighted sum
alpha_i_combined = np.dot(weights, s_i)
print("Composite signal:", alpha_i_combined)

Let’s plot these signals:
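
As a minimal sketch, assuming each signal is just a common true value plus independent Gaussian noise (parameters chosen purely for illustration), the histograms can be generated like this:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)

M = 3          # number of signals
n = 10_000     # number of simulated observations
sigma = 1.0    # assumed noise level of each individual signal

# Each signal = common true component + independent Gaussian noise
true_value = 0.5
signals = true_value + sigma * np.random.randn(n, M)

# Composite signal: equal-weight average across the M signals
composite = signals.mean(axis=1)

# Individual signals in semi-transparent colors, composite in aqua
for j in range(M):
    plt.hist(signals[:, j], bins=60, alpha=0.3, label=f"signal {j + 1}")
plt.hist(composite, bins=60, alpha=0.8, color="aqua", label="composite")
plt.legend()
plt.title("Individual signals vs. equal-weight composite")
plt.show()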

Individual signals—shown in semi-transparent colors—have a wider spread due to their noise. The composite signal—in aqua—is the average of these signals. Since the noise tends to cancel out when averaging independent variables, the composite histogram is tighter, illustrating the noise reduction benefit—if you did things right 😬

If you're curious about this method, check out this introduction to signal voting:

Trading the Breaking: The science behind combining predictions

Overall, while averaging can be an effective noise reduction tool under certain conditions, careful consideration must be given to the nature of the signals—their reliability, correlations, and scales—and to the dynamics of the market environment. More sophisticated weighting methods—such as those that adjust for individual signal performance or correlation structures—might be needed to overcome these pitfalls.

Noise reduction through combining independent signals

Assume that each signal \(s_{ij}\) has a noise level quantified by its variance:

\(\operatorname{Var}(s_{ij}) = \sigma^2.\)

When the signals are independent, the variance of their sum—or average—follows a neat property. For our composite signal, we have:

\(\alpha_{i,\text{combined}} = \frac{1}{M}\sum_{j=1}^{M} s_{ij}.\)

Using the linearity of expectation and the additivity of variances for independent variables, the variance of the composite signal is:

\(\operatorname{Var}\left(\alpha_{i,\text{combined}}\right) = \operatorname{Var}\left(\frac{1}{M}\sum_{j=1}^{M} s_{ij}\right) = \frac{1}{M^2}\sum_{j=1}^{M} \operatorname{Var}(s_{ij}) = \frac{M\,\sigma^2}{M^2} = \frac{\sigma^2}{M}.\)

Thus, the noise in the composite signal is reduced by a factor of M. Even if each individual signal is noisy, their average is less noisy.
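
A quick Monte Carlo check of this result, assuming i.i.d. Gaussian noise for each signal:

import numpy as np

np.random.seed(0)

M = 3
sigma = 1.0
n = 100_000  # number of simulated draws

# M independent signals, each with variance sigma^2
signals = sigma * np.random.randn(n, M)

# Equal-weight composite signal
composite = signals.mean(axis=1)

print("Empirical variance of a single signal:", signals[:, 0].var())
print("Empirical variance of the composite:  ", composite.var())
print("Theoretical sigma^2 / M:              ", sigma**2 / M)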

For those who appreciate a more formal mathematical treatment, consider the signals for asset i arranged in a vector:

\(\mathbf{s}_i = \begin{pmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iM} \end{pmatrix}.\)

If the signals are independent, their covariance matrix is diagonal:

\(\Sigma_i = \begin{pmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{pmatrix}.\)

Let the weight vector be:

\(\mathbf{w} = \left(\frac{1}{M}, \frac{1}{M}, \dots, \frac{1}{M}\right)^T.\)

Then the variance of the composite signal can be written as:

\(\operatorname{Var}(\alpha_{i,\text{combined}}) = \mathbf{w}^T \Sigma_i \, \mathbf{w}.\)

Since

\(\mathbf{w}^T \Sigma_i \, \mathbf{w} = \sigma^2 \mathbf{w}^T \mathbf{w} = \sigma^2 \left(\frac{1}{M^2}\cdot M\right) = \frac{\sigma^2}{M},\)

this matrix formulation confirms our previous derivation.

import numpy as np

# Parameters
M = 3
sigma = 1.0

# Covariance matrix for independent signals (diagonal matrix)
Sigma = np.eye(M) * sigma**2

# Weight vector (equal weights)
w = np.ones(M) / M

# Compute variance of the composite signal using matrix multiplication
composite_variance = np.dot(w.T, np.dot(Sigma, w))
print("Composite signal variance (matrix formulation):", composite_variance)

The independence of the signals ensures that their covariances—off-diagonal elements—are zero, allowing for straightforward noise reduction.
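
To see why those zero off-diagonal elements matter, here is a small sketch that recomputes the quadratic form w^T Σ w when the signals share a common pairwise correlation ρ (a value chosen only for illustration):

import numpy as np

M = 3
sigma = 1.0
rho = 0.7  # assumed common pairwise correlation between the signals

# Covariance matrix with sigma^2 on the diagonal and rho * sigma^2 off it
Sigma_corr = sigma**2 * ((1 - rho) * np.eye(M) + rho * np.ones((M, M)))

# Equal weights
w = np.ones(M) / M

composite_variance = w @ Sigma_corr @ w
print("Composite variance with correlation:", composite_variance)
print("Composite variance if independent:  ", sigma**2 / M)

# With a common correlation the result equals sigma^2 / M * (1 + (M - 1) * rho),
# so the noise reduction shrinks as rho grows.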

Analysis of signal independence

Understanding why independence matters and how it affects the noise reduction process is crucial to appreciating the power of multiple signals.

So let’s begin by defining independence:

Two random variables X and Y are said to be independent if their joint probability distribution factorizes into the product of their individual distributions:

\(P(X = x, Y = y) = P(X = x) \cdot P(Y = y) \quad \text{for all } x, y.\)

In the context of our trading signals, this means that knowing the value of one signal does not provide any information about the other. Independence implies that the signals' noises are uncorrelated; they do not move together in a systematic way.

The mathematical definition of independence is powerful because it leads to several useful properties:

  • Expectation factorization: If X and Y are independent, then:

    \(E[XY] = E[X] \cdot E[Y].\)
  • Variance of the sum: For independent random variables, the variance of their sum is the sum of their variances:

    \(\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y).\)

These properties are the foundation for our earlier derivation showing that the variance of the composite signal is σ²/M.
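
A short simulation, purely illustrative and using independent Gaussian draws, that checks both properties numerically:

import numpy as np

np.random.seed(1)
n = 1_000_000

# Two independent random variables
X = np.random.randn(n)        # mean 0, variance 1
Y = 2.0 + np.random.randn(n)  # mean 2, variance 1

# Expectation factorization: E[XY] should be close to E[X] * E[Y]
print("E[XY]     :", np.mean(X * Y))
print("E[X]*E[Y] :", np.mean(X) * np.mean(Y))

# Additivity of variances: Var(X + Y) should be close to Var(X) + Var(Y)
print("Var(X + Y)      :", np.var(X + Y))
print("Var(X) + Var(Y) :", np.var(X) + np.var(Y))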

Another important point here concerns expectation properties and joint distributions.
