Machine Learning vs Quantitative Finance: What's the Difference?

Author

Jin Won Choi

Category

Machine Learning

Date

Jan. 22, 2021


Suppose I told you that computers are nothing but fancy calculators. Would you believe me? Your gut reaction would probably be “no”. Most people don’t use computers to crunch numbers these days; they use them to watch shows, write emails, attend online meetings, and so forth.

Yet at a fundamental level, the above statement is indeed true. When someone plays a video game using a computer, the fancy graphics displayed are “merely” the result of an incredible number of calculations that would have made rocket scientists in the 60s green with envy. Although a computer largely consists of different calculation engines, it manages to produce much more than math; the whole becomes greater than the sum of its parts.

But in its early years, the computer looked exactly like a calculator. A programmer would use punched cards to specify the input data (e.g. the numbers 6 and 4), and the computer would perform some operation (e.g. addition) to produce the output data (e.g. 10).

The transition from mere ‘calculators’ to all-purpose ‘computers’ came about as engineers divided the workings of a computer into components and dedicated calculation engines to each one. They introduced keyboards, along with dedicated programs to convert keyboard signals into characters. They added monitors, along with new processors dedicated to converting numbers into pixels on screens.

The computer became what it is today by applying the philosophy that adding calculation engines makes every component better, which in turn makes the whole system better.

When people ask what machine learning is, some practitioners give a blithe answer: “Machine learning is nothing more than statistics.” At a fundamental level, this statement is true. But machine learning is also much more than that. It’s a philosophy: the belief that we can improve any decision making process by viewing it through a statistical lens. A financial machine learning practitioner is someone who examines every stage of the investment management process and improves it through better statistical modelling.

Some readers may wonder how this is different from traditional quantitative analysis. Don’t such quants also apply statistical models to investing? Indeed they do. The difference is that such quants tend to use simpler statistical methods, and use them more sparingly.

It’s perhaps best to illustrate the differences between the two approaches using an example. The following investment methodology is taken from the Morningstar Canada Target Momentum TR CAD index home page. One can invest in this index today by buying the WXM ETF.

Morningstar Canada Target Momentum TR CAD Methodology

This is a classic example of a traditional quantitative investment strategy. The methodology uses 6 factors, ranging from trailing return on equity to the 12-month price change, to score each security. Each factor was chosen because statistical analysis revealed it to be a good predictor of future performance. The weights assigned to each factor may also have been motivated by statistical analysis; for example, the 12-month price change may have had more predictive power than the 3, 6 or 9-month price changes.
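To make the contrast concrete, here is a minimal sketch of a fixed-weight scoring scheme in the same spirit. The factor names, values and weights below are illustrative assumptions on my part, not the index's actual parameters.

```python
import pandas as pd

# Hypothetical factor exposures for a handful of securities (illustrative only).
factors = pd.DataFrame({
    "trailing_roe":     [0.18, 0.05, 0.12],
    "price_change_12m": [0.25, -0.10, 0.08],
    "price_change_3m":  [0.06, 0.02, -0.01],
}, index=["ABC", "DEF", "GHI"])

# Fixed weights, chosen up front and applied identically to every security.
weights = {"trailing_roe": 0.4, "price_change_12m": 0.4, "price_change_3m": 0.2}

# Rank each factor cross-sectionally, then combine the ranks using the fixed weights.
ranks = factors.rank(pct=True)
score = sum(w * ranks[f] for f, w in weights.items())
print(score.sort_values(ascending=False))
```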

However, some rules don't appear to have been motivated by statistical analysis. For example, the decision to limit the number of securities per GICS sector to 5 smacks of an intuitive judgement call. I’m not saying it’s the wrong call, but rather that it doesn’t appear to be based on statistical analysis.

Now let me show you how we, as financial machine learning practitioners, would approach the construction of this methodology. To keep things simple, let’s assume that we keep the same 6 underlying factors.

First of all, we would apply a method that reduces the six factors into a smaller set of orthogonal components. ‘Orthogonal’ components are those that move independently of each other, like two baristas at a coffee shop working on different orders. As it is, many of the six factors move in tandem; for example, if the 3 month price change is positive, the 9 month price change would likely be positive as well. Transforming these factors into orthogonal components would help with statistical modelling down the road.
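As a hedged illustration, principal component analysis (PCA) is one common way to perform this kind of orthogonalization; the sketch below uses simulated data, and the setup is a placeholder rather than the exact method or inputs we use.

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulate a matrix of correlated factor values: rows are securities, columns are
# factors that tend to move together (e.g. 3-, 6-, 9- and 12-month price changes).
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
factor_matrix = np.hstack([base + 0.3 * rng.normal(size=(500, 1)) for _ in range(4)])

# Standardize each factor, then project onto orthogonal components.
standardized = (factor_matrix - factor_matrix.mean(axis=0)) / factor_matrix.std(axis=0)
pca = PCA(n_components=2)
components = pca.fit_transform(standardized)

# The resulting columns are uncorrelated with each other, unlike the raw factors.
print(np.corrcoef(components, rowvar=False).round(3))
print(pca.explained_variance_ratio_.round(3))
```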

We would also approach the weighting scheme differently. Whereas Morningstar’s method assigns fixed weights to each factor, we would use weights that vary depending on the situation and through time. For example, if the trailing return on equity is in the top 10% among its peers, perhaps our algorithm would put more weight on the 3-month price change, increasing its weight from 10% to 30%. Conversely, if the trailing return on equity is in the bottom 10%, perhaps our algorithm would eliminate the weighting of the 3-month price change altogether. Such fluid and flexible weighting schemes are only possible with sophisticated statistical models, which is the domain of machine learning.
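Here is a toy sketch of that state-dependent weighting. The hard-coded thresholds and weights below are purely illustrative; in practice, a fitted statistical model would learn these conditional relationships from data rather than relying on hand-written rules.

```python
def factor_weights(roe_percentile: float) -> dict:
    """Return factor weights conditioned on where trailing ROE sits among peers.

    In a real system these conditional weights would come from a trained model
    (e.g. a tree ensemble), not from hand-written rules like the ones below.
    """
    base = {"trailing_roe": 0.40, "price_change_12m": 0.50, "price_change_3m": 0.10}
    if roe_percentile >= 0.90:
        # Top decile of ROE: lean harder on short-term momentum.
        base["price_change_3m"] = 0.30
        base["price_change_12m"] = 0.30
    elif roe_percentile <= 0.10:
        # Bottom decile of ROE: drop short-term momentum entirely.
        base["price_change_3m"] = 0.00
        base["price_change_12m"] = 0.60
    return base

print(factor_weights(0.95))
print(factor_weights(0.05))
```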

Lastly, instead of capping the number of securities per GICS sector, we would use a portfolio optimizer that relies on statistical models to decrease the portfolio’s future volatility, regardless of GICS sector membership.
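As one hedged example, a long-only minimum-variance optimizer over an estimated covariance matrix is a common way to pursue this goal; the covariance numbers and constraints below are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical estimated covariance matrix for four securities (illustrative only).
cov = np.array([
    [0.040, 0.010, 0.008, 0.005],
    [0.010, 0.050, 0.012, 0.006],
    [0.008, 0.012, 0.030, 0.004],
    [0.005, 0.006, 0.004, 0.060],
])
n = cov.shape[0]

def portfolio_variance(w: np.ndarray) -> float:
    # Quadratic form w' * Cov * w gives the portfolio's predicted variance.
    return float(w @ cov @ w)

# Long-only, fully-invested minimum-variance portfolio.
result = minimize(
    portfolio_variance,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print(result.x.round(3))
```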

I hope this example shows how different the machine learning approach is from the traditional quantitative approach. Although both approaches utilize statistics, the machine learning approach leans on statistical tools much more heavily.

We’re big believers in the machine learning approach. We believe that it allows us to optimize every aspect of decision making, which in turn leads to better results. However, there are some definite downsides to adopting this approach.

The first concern is that it’s easy to screw up with machine learning. Critics of machine learning often say that they tried machine learning in the past, but didn’t get any meaningful results. In my experience, however, that’s often the result of misapplication.

Many people have the misconception that using machine learning means applying large statistical models that have been proven to work in other domains, such as text or image analysis, to financial data. Unfortunately, such models have been finely tuned to work in their own domains, and their domains only. This is why a model that analyzes text looks very different from one that analyzes images, and it therefore doesn’t make sense to apply either model to financial data. Instead, a practitioner must understand the principles that make machine learning models work, and apply those principles to financial data in order to get the desired results. Unfortunately, that ability to divine principles and apply them in different forms is very rare.

The other downside of machine learning is that models become impossible to fully understand. For example, I previously mentioned a technique that reduces a set of factors into orthogonal components. Such a technique will inevitably turn intuitively understandable data (e.g. the 3-month price change) into sets of numbers that don’t have any easy interpretation. Although there are tools to help one understand the inner workings of machine learning models, they only give a bird’s eye view of each model, and necessarily leave out many of the smaller details that could have meaningful impacts on the model’s performance.

But if one can overcome the pitfalls of machine learning and become comfortable with its methods, the potential benefits are huge. Firms such as Renaissance Technologies and D. E. Shaw have generated huge alpha by employing machine learning, catapulting their AUMs to hundreds of billions of dollars.

Some people think that such firms’ success comes down to a secret sauce which, once discovered, could lead to similar success for other firms. I beg to differ. Rather, I believe these firms achieved success by applying the machine learning philosophy to every aspect of their investment process. Optimizing any single process by itself likely wouldn’t have made a huge difference to their overall performance. But just as adding calculation engines everywhere completely transformed the usefulness of the computer, introducing the machine learning approach everywhere can completely transform the performance of an investment strategy.
