RiskOps: Traditional sizing methods
Why "go big or go home" is a lie, and how to play the long game with surgical precision
Table of contents:
Introduction
Sizing based on fixed fractional
Sizing based on volatility targeting
Sizing based on Monte Carlo simulations
Sizing based on Omega ratio
Bonus: Theoretical foundations of conformal prediction in contract sizing
Introduction
Everyone knows the typical trader who is 99% sure that their algorithm will win... until the account burns. They bet everything on that certainty. Then the market slips through their fingers like sand (hello, unexpected news!) and, poof, their money is gone.
Confidence is like a shiny balloon: fun until it pops, leaving you with a bunch of deflated dreams. Risk management, on the other hand, is the string that keeps the balloon from drifting away.
In trading, if you use mediocre models, it's not enough to have a good prediction; you need to size your positions based on quantifiable risk. Now, if you drink from the Holy Grail, ahhh! Brother, equiponderate is God.
Let's start with the simplest risk rule: fixed fractional sizing. Think of it as the "fuel gauge" method, where your trading algorithm only burns a measured amount of capital on each trade, ensuring you never run out of fuel on your journey, no matter how strong the signal.
Sizing based on fixed fractional
Imagine your algorithm is like a sophisticated trading system that generates signals for when to enter a trade. These signals might be as tempting as an all-in recommendation from your favorite robo-advisor. However, even if the algorithm is screaming "this stock is going to skyrocket!", your system must still follow a disciplined approach to avoid catastrophic losses.
Instead of diving in headfirst and allocating all your capital based solely on a strong signal, you treat your available capital like the fuel in your algorithmβs trading engine. You only commit a fixed fraction of that fuel per trade. This is analogous to a well-designed trading system that never burns through its reserves by risking only a predetermined percentage of capital on any single trade.
Assume your total capital is C dollars, and you decide to risk a fixed percentage r_f on every trade. Then your risk per trade, in dollars, is:

$$\text{Risk}_{\$} = C \cdot r_f$$

Now, suppose you buy an asset at a price P_entry and set a stop loss at a relative distance δ (e.g., 10%). The potential loss per share is then:

$$\text{Loss per share} = P_{\text{entry}} \cdot \delta$$

If you buy N shares, the total loss if the stop is hit is:

$$\text{Total loss} = N \cdot P_{\text{entry}} \cdot \delta$$

To ensure that your loss does not exceed your predetermined risk dollars, set:

$$N \cdot P_{\text{entry}} \cdot \delta \le C \cdot r_f$$

which rearranges to give:

$$N = \frac{C \cdot r_f}{P_{\text{entry}} \cdot \delta}$$

This simple equation guarantees that you only risk a small, fixed percentage of your capital on each trade, regardless of your algorithm's level of confidence.
Let's implement it with a plot that shows how the number of shares changes as the stop-loss distance increases.
import numpy as np
import matplotlib.pyplot as plt
def fixed_fractional(capital, risk_percent=0.02, stop_loss=0.10, entry_price=100):
    """
    Calculate the number of shares to buy using fixed fractional sizing.

    Parameters:
        capital (float): Total capital available.
        risk_percent (float): Fraction of capital to risk per trade.
        stop_loss (float): Stop-loss distance as a fraction of the entry price.
        entry_price (float): Price at which the asset is bought.

    Returns:
        float: Number of shares to buy.
    """
    risk_dollars = capital * risk_percent            # dollars at risk on this trade
    return risk_dollars / (entry_price * stop_loss)  # N = C * r_f / (P_entry * delta)
# Example usage:
capital = 1000
shares = fixed_fractional(capital, risk_percent=0.02, stop_loss=0.10, entry_price=50)
print(f"Buy {shares:.2f} shares.")
# Plot: Effect of stop_loss on the number of shares
stop_loss_range = np.linspace(0.01, 0.20, 100)
N_values = [fixed_fractional(capital, risk_percent=0.02, stop_loss=sl, entry_price=50) for sl in stop_loss_range]
plt.figure(figsize=(8, 4))
plt.plot(stop_loss_range, N_values, label='Shares (N)', color='darkblue', linewidth=2)
plt.xlabel("Stop-Loss distance (Ξ΄)")
plt.ylabel("Number of shares (N)")
plt.title("Fixed fractional sizing: N vs. Stoploss distance")
plt.legend()
plt.grid(True)
plt.show()
This plot shows the number of shares vs. the stop-loss distance. Wider stops = fewer shares. It's like filling fewer cups if you pour lemonade into bigger mugs.
With fixed fractional sizing locking down basic risk, we now move on to a method that adapts to market mood swings: volatility targeting.
Sizing based on volatility targeting
When the market is calm, prices move gently; when it's wild, prices swing like needles. Volatility, denoted by σ, quantifies this unpredictability. Volatility targeting adjusts your position size based on market volatility to keep your risk constant.

Let's say your risk budget for a trade is R dollars. High σ = needles mode. To ensure that your exposure remains consistent, regardless of volatility, you set:

$$\text{Position size} = \frac{R}{\sigma}$$

For example, if your risk budget is $100 and the volatility is 5% (σ = 0.05), then your position size is:

$$\frac{\$100}{0.05} = \$2{,}000$$

If volatility doubles to 10% (σ = 0.10), the position size halves:

$$\frac{\$100}{0.10} = \$1{,}000$$
This inverse relationship ensures that when the market becomes more uncertain, you reduce your exposure, protecting your capital.
Let's get our hands on that code!
import numpy as np
import matplotlib.pyplot as plt
def volatility_target(risk_budget, volatility):
    """
    Compute the position size based on volatility targeting.

    Parameters:
        risk_budget (float): Dollar amount you are willing to risk.
        volatility (float): Standard deviation (σ) of returns.

    Returns:
        float: Position size in dollars.
    """
    return risk_budget / volatility  # position shrinks as volatility grows
# Example usage:
risk_budget = 100
vol_low = 0.05 # 5% volatility
vol_high = 0.10 # 10% volatility
pos_size_low = volatility_target(risk_budget, vol_low)
pos_size_high = volatility_target(risk_budget, vol_high)
print(f"Position size at 5% volatility: ${pos_size_low:.0f}")
print(f"Position size at 10% volatility: ${pos_size_high:.0f}")
# Plot: Position size vs. volatility
vol_range = np.linspace(0.01, 0.20, 100)
pos_sizes = [volatility_target(risk_budget, vol) for vol in vol_range]
plt.figure(figsize=(8, 4))
plt.plot(vol_range, pos_sizes, 'b-', label='Position size curve', linewidth=2)
# Superpose the specific points for vol_low and vol_high
plt.scatter([vol_low, vol_high], [pos_size_low, pos_size_high], color='red', s=100, label='Specific volatilities')
plt.xlabel("Volatility (Ο)")
plt.ylabel("Position size ($)")
plt.title("Volatility targeting: Position size vs. volatility")
plt.legend()
plt.grid(True)
plt.show()
Position size plunges as volatility spikes, like shrinking your surfboard when waves get huge:
With volatility captured mathematically, we now turn our attention to simulation-based methods (Monte Carlo simulations) that help us prepare for extreme market events.
Sizing based on Monte Carlo simulations
The markets can be unpredictable, sometimes behaving like a drunken robot with a PhD in uncertainty. Monte Carlo simulations allow us to model thousands, or even millions, of potential future scenarios using stochastic processes. A common model is geometric Brownian motion (GBM), defined by the stochastic differential equation:

$$dP_t = \mu P_t \, dt + \sigma P_t \, dW_t$$

where μ is the drift (expected return), σ is the volatility, and W_t is a Wiener process representing the random movement. When integrated, the solution is:

$$P_t = P_0 \exp\!\left[\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma W_t\right]$$

By simulating many paths P_t, we estimate the distribution of future prices and, importantly, the worst-case loss. If we choose a confidence level α (say, 95%), then we define L_worst as the loss at the (1 − α) percentile of simulated returns, i.e., the loss exceeded only (1 − α) of the time. Our position size multiplier is then:

$$\text{Multiplier} = \frac{R}{L_{\text{worst}}}$$

where R is the risk budget.
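Before layering on fat tails, here is a minimal sketch of the GBM recipe above: it simulates terminal prices with the closed-form solution, reads the worst-case loss off the (1 − α) percentile of simulated returns, and converts it into the multiplier R / L_worst. The function name and parameter values are illustrative assumptions, not a production setup.

import numpy as np

def gbm_worst_case_multiplier(risk_budget, p0=100.0, mu=0.05, sigma=0.20,
                              horizon=1.0, simulations=100_000, alpha=0.95, seed=42):
    """
    Sketch: worst-case loss and sizing multiplier under plain GBM.

    Simulates terminal prices P_T = P_0 * exp((mu - sigma^2 / 2) * T + sigma * W_T),
    with W_T ~ N(0, T), then takes the (1 - alpha) percentile of simulated returns
    as the worst-case scenario.
    """
    rng = np.random.default_rng(seed)
    w_t = rng.normal(0.0, np.sqrt(horizon), size=simulations)  # Wiener increment over the horizon
    p_t = p0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * w_t)
    returns = p_t / p0 - 1.0
    worst_return = np.percentile(returns, 100 * (1 - alpha))   # e.g., the 5th percentile return
    worst_loss = -risk_budget * worst_return                   # dollar loss on the risk budget
    return np.inf if worst_loss <= 0 else risk_budget / worst_loss

# Illustrative usage with assumed parameters:
print(f"GBM position size multiplier: {gbm_worst_case_multiplier(1000):.2f}x")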
While GBM assumes normally distributed returns, real markets often show skewness (asymmetry) and kurtosis (fat tails). These properties mean that extreme events happen more frequently than a normal distribution predicts.
Mathematical considerations [In depth]
Skewness:
Skewness measures the asymmetry of the return distribution. A negative skew indicates that the left tail (losses) is longer or fatter, whereas a positive skew indicates that the right tail (gains) is more pronounced. One way to account for this is to model returns using a skewed distribution, such as a skewed Student's t-distribution. The probability density function for such a distribution can be adjusted by a skew parameter λ along with the degrees of freedom ν.

Kurtosis:
Kurtosis describes how heavy the tails of the distribution are compared to a normal distribution. High kurtosis means that extreme outcomes (both gains and losses) are more likely. A Student's t-distribution, which has a parameter ν (degrees of freedom), naturally exhibits heavier tails when ν is low.
Using these adjustments, we can simulate returns that reflect the underlying dynamics more accurately. Libraries like SciPy provide functions to sample from the Studentβs t-distribution, letting us incorporate skew and kurtosis into our simulations.
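As a quick sanity check before the main simulation below (a sketch with arbitrary parameters, not part of the sizing code), you can draw samples from a Student's t-distribution and confirm with SciPy's moment estimators that a low ν fattens the tails relative to a normal distribution:

import numpy as np
from scipy.stats import t, norm, skew, kurtosis

n = 100_000
normal_samples = norm.rvs(size=n, random_state=42)
t_samples = t.rvs(df=6, size=n, random_state=42)  # nu = 6: finite but elevated kurtosis

for name, samples in [("Normal", normal_samples), ("Student's t (nu=6)", t_samples)]:
    # scipy's kurtosis() reports excess kurtosis: roughly 0 for a normal distribution.
    print(f"{name:20s} skew = {skew(samples):+.3f}, excess kurtosis = {kurtosis(samples):+.3f}")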
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t
def monte_carlo_position_size(capital, mu=0.01, sigma=0.05, nu=5, simulations=10000, confidence=0.95):
    """
    Estimate the position sizing multiplier using advanced Monte Carlo simulations
    that account for fat tails via the Student's t-distribution.

    Parameters:
        capital (float): Total capital (risk budget).
        mu (float): Expected return per period.
        sigma (float): Scale parameter, analogous to volatility.
        nu (float): Degrees of freedom for the t-distribution (lower nu implies fatter tails).
        simulations (int): Number of simulation runs.
        confidence (float): Confidence level (e.g., 0.95 for 95% survival).

    Returns:
        tuple: (position size multiplier, simulated losses array)
    """
    # Generate simulated returns from a standardized Student's t-distribution.
    t_samples = t.rvs(df=nu, size=simulations)
    # Adjust samples to have the desired mean (mu) and scale (sigma).
    returns = mu + sigma * t_samples
    # Calculate potential losses (loss = -capital * return).
    losses = -capital * returns
    # Worst-case loss: the loss exceeded only (1 - confidence) of the time, i.e. the
    # `confidence` percentile of losses (= the (1 - confidence) percentile of returns).
    worst_loss = np.percentile(losses, 100 * confidence)
    multiplier = np.inf if worst_loss <= 0 else capital / worst_loss
    return multiplier, losses
# Example usage:
capital = 1000
multiplier_adv, losses_adv = monte_carlo_position_size(capital, mu=0.01, sigma=0.05, nu=5, simulations=10000, confidence=0.95)
print(f"Monte Carlo position size multiplier: {multiplier_adv:.2f}x")
# Plot: Histogram of simulated losses with tail indicator
worst_loss_adv = np.percentile(losses_adv, 100 * 0.95)  # 95th percentile of losses (worst case)
plt.figure(figsize=(8, 4))
plt.hist(losses_adv, bins=50, alpha=0.7, color='lightcoral', label='Simulated losses (t-dist)')
plt.axvline(x=worst_loss_adv, color='blue', linestyle='--', label=f'95th percentile loss: ${worst_loss_adv:.2f}')
plt.xlabel("Loss ($)")
plt.ylabel("Frequency")
plt.title("Monte Carlo simulation of losses (with fat tails)")
plt.legend()
plt.grid(True)
plt.show()
Basically, losses are computed as the negative product of capital and return. The worst-case loss L_worst is the 95th percentile of these simulated losses (equivalently, the 5th percentile of returns), and the printed multiplier is simply the risk budget divided by that worst-case loss: the fatter the downside tail, the smaller the position the multiplier allows.
Why did the quant upgrade his simulation model? Because betting on normalcy is like bringing an umbrella to a hurricane: it just isn't enough! 🤣
With our simulation-based approach complete, we now turn to the Omega Ratio, a metric that synthesizes the entire risk-reward profile into one neat number.
Sizing based on Omega ratio
The Omega Ratio evaluates performance by considering the full distribution of returns. Instead of simply calculating an average return, it weighs the total probability of gains against that of losses. In simple terms, think of it as comparing your "A+" test scores (gains) to your "F" test scores (losses). If the ratio is greater than 1, you're generally winning; if it's less than 1, the losses are eating up your gains.
A simplified version, which the code below uses, compares the average gain to the average loss:

$$\Omega \approx \frac{\text{average gain}}{\lvert \text{average loss} \rvert}$$

though the full definition involves integrating over the cumulative distribution function (CDF) of returns:

$$\Omega(\theta) = \frac{\int_{\theta}^{\infty} \bigl(1 - F(r)\bigr)\, dr}{\int_{-\infty}^{\theta} F(r)\, dr}$$

where F(r) is the CDF of returns and θ is the target return threshold (often set to 0).

By integrating over all gains and losses, the Omega Ratio captures the effects of extreme outcomes (tail events). It provides a robust measure of whether the strategy's rewards compensate for its risks. Even if your strategy produces several small gains, a few large losses could drag the Omega Ratio below 1. Conversely, a high Omega Ratio indicates that, overall, gains substantially outweigh losses.
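Here is a minimal sketch of that full definition evaluated on an empirical sample: the two integrals reduce to the expected gain above the threshold θ divided by the expected shortfall below it (θ = 0 is an assumption here). The simplified average-gain versus average-loss version follows right after.

import numpy as np

def omega_ratio_full(returns, threshold=0.0):
    """
    Omega ratio from the full definition: the integral of (1 - F) above the threshold
    divided by the integral of F below it, which for an empirical sample equals
    mean(max(r - threshold, 0)) / mean(max(threshold - r, 0)).
    """
    r = np.asarray(returns, dtype=float)
    upside = np.mean(np.maximum(r - threshold, 0.0))    # expected gain beyond the threshold
    downside = np.mean(np.maximum(threshold - r, 0.0))  # expected shortfall below the threshold
    return np.inf if downside == 0 else upside / downside

# Illustrative usage on the same sample returns used below:
print(f"Full-definition Omega at threshold 0: {omega_ratio_full([0.1, -0.05, 0.15, -0.1, 0.2, -0.08, 0.05, -0.02]):.2f}")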
import numpy as np
import matplotlib.pyplot as plt

def omega_ratio(returns):
    """
    Compute the (simplified) Omega Ratio for a given set of returns.

    Parameters:
        returns (list or array): Returns from the trading strategy.

    Returns:
        float: The Omega Ratio.
    """
    gains = [r for r in returns if r > 0]
    losses = [r for r in returns if r < 0]
    avg_gain = np.mean(gains) if gains else 0
    avg_loss = -np.mean(losses) if losses else 0  # Convert losses to a positive value
    if avg_loss == 0:
        return np.inf
    return avg_gain / avg_loss
# Example usage:
sample_returns = [0.1, -0.05, 0.15, -0.1, 0.2, -0.08, 0.05, -0.02]
omega = omega_ratio(sample_returns)
print(f"Omega ratio (Ξ©): {omega:.2f} β {'Bet more!' if omega > 1 else 'Pull back.'}")
# Plot: Scatter plot of returns with average gain and loss lines.
plt.figure(figsize=(8, 4))
plt.scatter(range(len(sample_returns)), sample_returns, color='purple', label='Returns')
plt.axhline(np.mean([r for r in sample_returns if r > 0]), color='green', linestyle='--', linewidth=2, label='Average gain')
plt.axhline(np.mean([r for r in sample_returns if r < 0]), color='red', linestyle='--', linewidth=2, label='Average loss')
plt.xlabel("Trade number")
plt.ylabel("Return")
plt.title("Visualization of returns and Omega ratio components")
plt.legend()
plt.grid(True)
plt.show()
Two horizontal dashed lines represent the average gain and average loss. The ratio of these averages gives you the Omega Ratio, a quick, visual way to gauge whether your strategy's rewards outweigh its risks.
One last word: I personally think that if your systems are performing as they should, the best thing to do is to equal weight (a minimal sketch of what that means follows the two rules below).
However, if your systems are based on spurious relationships, then use any of the methods I have mentioned, specifically the one in the paper:
Certainty → Equiponderate.
Uncertainty → Calibrated contract sizing.
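And for completeness, a minimal sketch of what "equiponderate" means in practice, assuming a list of hypothetical systems and a single shared capital pool:

def equal_weight_allocation(capital, system_names):
    """Split capital evenly across systems: the 'equiponderate' rule."""
    weight = 1.0 / len(system_names)
    return {name: capital * weight for name in system_names}

# Hypothetical systems, purely for illustration:
print(equal_weight_allocation(10_000, ["trend", "mean_reversion", "carry"]))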
Okay fellows, until next time: may your trades be sharp and decisive, your algorithms run with seamless efficiency, and your edge stay ever elusive to the masses!
PS: Would you like me to dedicate a day each week, or every two weeks, to developing readings in the field of quantitative research?