If you are into algotrading, that is, trading with the help of algorithms, you have probably heard of strategy over-fitting, also called over-optimization.
An over-fitted strategy is one that was created (or whose parameters were tuned) to perform perfectly on specific historical data, but that won't work the same way in real trading. It is a common issue when we develop a new strategy (or adjust an existing one) with machine learning and genetic algorithms. We get a really nice-looking strategy, but when we put it on the real market, it performs poorly and we suddenly find ourselves at a loss.
Fortunately, there is a solution to this problem: robustness tests.
Robustness tests are a series of simulations run on a strategy to ensure that even with changed market conditions, missing trades, or changed parameters, the strategy will still be profitable (at least to some degree). The goal is to simulate real market conditions, where spread or slippage can be slightly higher, and verify that the strategy does not fail. Other tests check whether our strategy has a real edge and is not just exploiting an extreme (for example, the single best possible parameter setup, which fails as soon as we change anything).
I develop new strategies with the software Strategy Quant, which gives me a lot of different robustness tests at my disposal. As I don't want an over-fitted strategy that looks perfect on historical data but struggles in real trading, I always run these tests:
0. Testing on out-of-sample (OOS) data: backtest the strategy on data it has never seen and check whether the results are similar to the backtests from strategy development. This test runs automatically during strategy creation.
1. Higher backtest precision: the strategy is re-tested on high-precision data from Dukascopy.
2. What-if simulation: exclude the 5% of trades with the biggest/smallest profit and check that performance holds.
3. Backtests on additional markets: the strategy is tested on the EURUSD market with profitable results, which supports its robustness.
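The what-if exclusion in step 2 can be sketched in a few lines. This is not Strategy Quant's implementation, just a minimal illustration: `exclude_extreme_trades` is a hypothetical helper that trims the best and worst 5% of trades (by profit) from a trade list and returns the remaining total, so you can compare it with the original equity.

```python
def exclude_extreme_trades(profits, fraction=0.05):
    """Drop the top and bottom `fraction` of trades by profit and
    return the total profit of what remains."""
    n_cut = max(1, int(len(profits) * fraction))  # at least one trade per tail
    ordered = sorted(profits)
    trimmed = ordered[n_cut:-n_cut]
    return sum(trimmed)

# Example: 20 trades with profits 1..20; trimming 5% removes the
# single best and single worst trade.
profits = list(range(1, 21))
print(exclude_extreme_trades(profits))  # total without trades 1 and 20
```

If the strategy's edge disappears once a handful of outlier trades are removed, the backtest result was likely driven by luck rather than a repeatable pattern.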
Monte Carlo trade manipulation:
4. Randomize trade order with the Exact method.
5. Randomize trade order with the Resample method: the strategy performs well even with a mixed-up trade sequence.
6. Randomly skip trades: tests realistic trading behavior, where some trades are not executed (for example, because of a malfunctioning VPS).
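The skip-trades test in step 6 is easy to reproduce outside any specific tool. A minimal sketch, assuming you have the strategy's historical trade profits as a list: each Monte Carlo run drops every trade with some probability and records the resulting total profit, giving a distribution of 200 alternative equity outcomes.

```python
import random

def monte_carlo_skip(profits, skip_prob=0.10, runs=200, seed=42):
    """Simulate randomly skipped trades: each run drops every trade
    with probability `skip_prob` and records the total profit."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    totals = []
    for _ in range(runs):
        total = sum(p for p in profits if rng.random() >= skip_prob)
        totals.append(total)
    return totals

# Example: 100 identical trades of $10; with 10% skipped, each run
# should land somewhere below the original $1000 total.
totals = monte_carlo_skip([10.0] * 100)
print(min(totals), max(totals))
```

The randomize-order tests (steps 4 and 5) work the same way, except the runs shuffle or resample the trade sequence instead of dropping trades; order matters for drawdown statistics even though it does not change the total profit.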
Monte Carlo Retest methods:
7. Randomize strategy parameters: changes the strategy's parameters and checks whether the strategy still performs well.
8. Randomize history data: a volatility test, checking that the strategy performs the same even in a highly volatile market.
9. Randomize slippage and spread: tests different market/broker conditions.
10. Randomize history data, slippage, and spread: a combination of the above; a very strict test that only a few strategies pass.
11. Optimization Profile and System Parameter Permutation: optimization tests ensuring that the strategy has a real edge and does not rely on extremes (for example, over-optimized parameters tuned to specific backtest data). Strategies with slightly different parameters should behave similarly, which indicates we are following a real edge.
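The slippage-and-spread test in step 9 can also be sketched simply. This is an illustrative assumption, not Strategy Quant's internals: each Monte Carlo run subtracts a random extra cost (in pips, converted via a hypothetical `pip_value`) from every trade, simulating a broker with worse execution than the backtest assumed.

```python
import random

def apply_random_costs(profits, pip_value=10.0, max_extra_pips=2.0,
                       runs=200, seed=7):
    """Re-run the trade list with a random extra cost (slippage plus
    spread, in pips) subtracted from every trade, `runs` times."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        extra = rng.uniform(0.0, max_extra_pips) * pip_value  # cost per trade
        totals.append(sum(p - extra for p in profits))
    return totals

# Example: 10 trades of $100 each; up to 2 extra pips at $10/pip can
# cost at most $20 per trade, so each run totals between $800 and $1000.
totals = apply_random_costs([100.0] * 10)
print(min(totals), max(totals))
```

A strategy whose profit collapses under an extra pip or two of costs is too fragile for live execution, where spreads widen around news events.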
How to read Monte Carlo analysis results?
It's quite easy. Monte Carlo always simulates a specific scenario; for example, it randomly skips trades with some probability, and it repeats this 200 times (producing 200 backtest results). It then shows the results that 95% of the backtests achieved. When you look at the table, you can see a highlighted row with the number 95%, followed by some results. That means 95% of the backtests achieved at least those results. The picture below shows that the original strategy has a profit of $35,432, while 95% of the backtests with skipped trades achieved a profit of at least $28,256. That is a really great result, as it signals a robust strategy that stays profitable even if 10% of all trades are not taken. Similar simulations can be run in various combinations, and the results are read the same way.
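The 95% row described above is just a percentile over the simulated totals. A minimal sketch of how such a floor could be computed from any list of Monte Carlo run results (names are illustrative, not from any particular tool):

```python
def percentile_floor(totals, pct=95):
    """Return the profit level that at least `pct`% of Monte Carlo
    runs reached, i.e. the (100 - pct)th percentile of the results."""
    ordered = sorted(totals)
    idx = int(len(ordered) * (100 - pct) / 100)
    return ordered[idx]

# Example: 200 runs with totals 1..200; 95% of runs achieved at least
# the value sitting 5% of the way up the sorted list.
runs = list(range(1, 201))
print(percentile_floor(runs))  # -> 11
```

Reading the number this way makes the table intuitive: the lower the 95% floor drops relative to the original backtest profit, the more sensitive the strategy is to the simulated disturbance.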