Interview with Jacques Joubert of Hudson and Thames, the creators of mlfinlab

Author

Steve Young

Category

Machine Learning

Date

April 20, 2020

Image Credit:  REDPIXEL.PL  / Shutterstock.com

Topics discussed in order of the interview:

Question 1: Jacques Joubert background

Question 2: Hudson and Thames philosophy and approach to investment management, and how it led to their use of Machine Learning

Question 3: Jacques' observations on how funds are adopting the use of Machine Learning based on his experiences. Everyone says they're using it, but are they really?

Question 4: Some of the hesitations around using Machine Learning in finance, and the skills mismatch between professionals with Computer Science expertise and those with Finance expertise

Question 5: Event based modelling approach vs. the aggregation of price data

Question 6: Discussion about some open source tools and techniques that Hudson and Thames has implemented

  • Overview of feature importance algorithms, and research by backtesting vs. research by feature importance
  • Portfolio optimization techniques using unsupervised learning
  • Probability of overfitting: the deflated Sharpe ratio and probabilistic Sharpe ratio

Question 7: Problems with using Machine Learning in finance

  • Techniques to address various model assumptions

Question 8: Ways of dealing with overfitting

Question 9: More on feature importance techniques

  • Backtesting strategies and why to avoid research by backtesting

Question 10: Exciting areas of research Jacques is currently looking at

Question 11: Jacques' resource suggestions for keeping up with the cutting edge in finance

SY: Hi Jacques. Why don’t we start with you telling us a little bit about yourself and the ventures that you are participating in?

JJ: My name is Jacques Joubert. I’m a South African who recently moved to London, and now I’m working as a systematic trader. I’d worked for several South African hedge funds and did a short stint at a machine learning consultancy in South Africa before moving to London.

The big body of work that the public can see is the Hudson and Thames project and the different open source libraries that we have there. The primary library of interest to people is the Machine Learning Financial Laboratory, usually shortened to mlfinlab; it’s a package full of implementations of machine learning techniques used in finance.

The majority of those techniques are from Dr. Marcos Lopez de Prado, but we’ve also started drawing from The Journal of Financial Data Science and the Journal of Portfolio Management. There are a couple of implementations of tools that we saw there that we thought would be very useful. I run that on the side whilst working as a trader, and that’s what I’m busy doing right now.

SY: What’s the philosophy underlying Hudson and Thames’ approach to investment management, and how did that lead to the use of machine learning?

JJ: I think that this is the interesting part of it. Hudson and Thames is really a place for people to come together and share ideas, and that’s what we wanted it to be from the start. Hudson and Thames itself does not have any clients. The only money we get comes from sponsorship, from people using our package who want to send a few dollars our way to continue development of the packages.

Hudson and Thames’ philosophy is that we want to build cool stuff with people who love working with us. It’s been a lot more about getting my friends involved and us building stuff up together. We share expenses, so we’ll buy data together and get general subscriptions. It’s all about this camaraderie and building up tools that other people can use, and they are actually tools that we use at work every single day. That was the main idea behind Hudson and Thames.

Then, within the group, different people have started their own consultancies. Alex has started a company called Machine Factor Technologies that deals with a lot of these concepts and does consulting work. I was doing consulting work here in London before becoming a trader. I always wanted Hudson and Thames to remain a neutral space where we can build stuff together. Of course, we support anybody in our group who wants to go and start their own thing.

How that led to machine learning in finance, I came from a machine learning background. Interestingly enough, I studied accounting as an undergrad and moved on to quantitative investing as my first job out of university, in a quantitative equity portfolio management hedge fund. They would be doing momentum factor investing. I got into it there, went on to a Masters program in Financial Engineering, and, from there, I started building up more of these implementations as I could get my hands on them.

How do we end up with machine learning in finance? I’ve had a career path that has gone from hedge fund to machine learning hedge fund to machine learning consultancy, because I was mostly interested in machine learning applied to finance. At the time, it wasn’t really a field that had been explored much. There wasn’t much known about how to apply machine learning to finance, a lot of it was econometrics, and there were a lot of questions and concerns about machine learning. For us, it was more a case of machine learning first, then how to apply it to finance second. That’s what led to the use of machine learning for us at Hudson and Thames.

SY: That’s really neat. In your experience, how would you say the penetration of machine learning is within finance? Are a lot of funds already using it or are they still in the investigation phase?

JJ: It’s so funny because, in South Africa, you think everybody else is doing all these super cool things. And, in South Africa, I’d say, maybe three funds are exploring the idea of machine learning, of which I worked at one of them. Actually, we moved to London because I couldn’t find another job in my country that I was interested in. So, I got here to London, and I started interviewing with a number of places, hedge funds and a number of other firms and places like that. I had done my Masters focused on machine learning and finance, and I had all these techniques and ideas that I wanted to use. I’d get to the interview process, and I’d get to talk to some of the people from these companies, and I very quickly realized that none of these people were involved in machine learning.

At some companies, the big idea for machine learning was that we must first figure out model interpretability before we can even use machine learning in finance, which I felt was putting the cart before the horse. You don’t even know if these techniques work yet, but you’re a lot more worried about how you’re going to explain them to the client. First show that the technique works, then you can worry about explaining it to the client. It helps nothing to figure out model interpretability if the technique isn’t a value-add in the first place. Some companies are super-focused on that.

Other companies were interested in using machine learning, but they had zero machine learning infrastructure or people there who knew about it. What I experienced was that people in finance treat machine learning like a saying I heard at a machine learning school hosted in South Africa last year.

The saying was, “Machine learning is like teenage sex. Everyone talks about it; no one really knows how to do it. Everyone thinks everyone else is doing it, so, everyone claims that they are doing it, when, actually, nobody is doing it.” That’s exactly how I felt about machine learning in finance. I got to London, and I just started going for interviews and realized that all these people say they’re doing it because they think that everybody else is doing it, but the truth is most people here are not doing it.

What I thought was absolutely unique about my current role, and what really had me excited, was that the people around me come a lot more from engineering backgrounds than from finance backgrounds. They’re solving these problems from a math perspective, not from an econometrics and linear regression point of view. They’ll try plugging neural nets onto anything they think it makes sense for, and they have a far more open mind about applying these techniques. At other companies I actually found a reluctance to use machine learning. So, to answer your question about the uptake: everyone says they’re doing it, but on closer observation you realize ninety percent of people are not doing it.

SY: What do you think is behind some of that hesitation around using machine learning in finance?

JJ: I think a big part of it is this skills mismatch. Machine learning is not brand new; it’s an offshoot of statistics that found its way first and foremost into the computer science departments at universities. The students being taught machine learning at university usually come from a computer science background, and those people are not primarily from a finance background.

The problem is that your fund managers and the guys who have been running these things for the last thirty years don’t have machine learning backgrounds. They can see it’s a new technique and they’re hiring these graduates, and then these graduates have a bit of a skill mismatch. They may be skilled at machine learning, but they do not have a good understanding of finance.

I find that the problem is that your senior people at this stage do not have a good machine learning background, whereas it’s more likely the junior people coming in have that background, but they are inexperienced, or are still finding their way. That, for me, is a big one. The second one is that it’s only now, with the Journal of Financial Data Science and new research coming out, that machine learning is starting to get more of a hold in finance.

When I started in 2015, I don’t think even TensorFlow was out yet. I remember when the fund I was working for moved from R to Python because TensorFlow had just come out. There were not a lot of articles or papers on how to apply machine learning to finance or if it worked. For a long time, there was this uncertainty about, “Is it even a value-add, is it something we should go into?” Now, in 2020, we look around and we can see a lot of exciting things.

For example, the book about Jim Simons, The Man Who Solved the Market, has just come out, and, in the book, they talk about how they use machine learning in finance to solve this problem, and they’re, in many respects, the most successful hedge fund in terms of return on investment by far. They talk about using it there. Then, we started to see AQR come out with a couple of really good papers online about applying machine learning to the cross-section of returns, and Marcos Lopez de Prado’s book came out in 2018. Those things really only happened recently. They haven’t been around for a long time.

For many people at these funds, the non-machine-learning funds, so taking the D.E. Shaw and Two Sigma type of funds that do use machine learning out of the equation, I think the hesitation is that a lot of these people come from econometrics backgrounds. They’re looking for things like significant p-values, and it’s hard to say whether a p-value is significant if you’re using a neural net.

SY: Right. So, is there a bit of a mental hurdle to shift the paradigm from coming up with a hypothesis first and then coming up with test statistics to ensure significance, and now we’re moving more towards a data-driven approach first, where the ML model can discover patterns and signals on its own? Is that a mental hurdle that people need to overcome?

JJ: I think it’s definitely part of it. One of the things that I see, I can tell you this is the super-typical thing when someone says “I want to do machine learning in finance,” their idea is “Well, okay, I will just get as much data as I can.” Usually they’ll start on price data, so they’ll just take open/high/low/close end-of-day data, and they’ll say, “Okay, let me strap on a machine learning model to forecast the next day’s returns.” That’s their idea for applying machine learning in forecasting. I have never seen that work in practice. I’ve worked at funds where we’ve tried to do that for a couple of years. I’ve read lots of papers and I’ve just not seen any success with that. I think the reason why that doesn’t work is because financial time series data does have a random walk component, and I do think that if no information enters the system, then prices will follow some kind of stochastic process. It may look like it’s trending, even though it has no drift component, but it has a random walk component.
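To make the random-walk point concrete, here is a small, self-contained simulation (purely illustrative, not from the interview): a price series with zero drift can still look convincingly like a trend.

```python
import numpy as np

# Purely illustrative: a driftless random walk can still produce long
# stretches that look like trends.
rng = np.random.default_rng(42)
log_returns = rng.normal(loc=0.0, scale=0.01, size=1_000)   # zero drift
prices = 100 * np.exp(np.cumsum(log_returns))

# Naive "trend" measure: correlation between price and time.
time_index = np.arange(len(prices))
trend_corr = np.corrcoef(time_index, prices)[0, 1]
print(f"Correlation of price with time: {trend_corr:.2f}")  # often far from zero
```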

I think that a better way of modelling, and it’s the way that I’m approaching these problems now, is to rather focus on some kind of specific event, instead of trying to forecast every single day’s direction as either up or down or what the magnitude of that move will be, because not all days have information.

If you think about it from an accounting point of view, how we do valuation on a company, we’ll base that on, for example, the dividend discount model or future cash flows, or we’ll look at things like earnings. The problem is that earnings only come out, maybe, once a quarter when the company publishes financial statements, or maybe just twice a year. That’s not a lot of data to work with, but the difference is that that’s when you know information enters the system.

You’re looking at a security, the financial statements come out, earnings are better than expected, and you see the share price move to a new equilibrium. It does so over time; it’s not instantaneous. The price converges to this new equilibrium through some kind of stochastic process and eventually gets there, and that, for me, is a much better thing to model. That’s just an example with earnings, but it could be something else.

De Prado uses an example where he says you are looking at financial time series; a structural break occurs; you’ve measured this event; you know it occurred; it’s a trigger. Now the question is, “Does the price move up or down over the next few days after this type of event?” You’re looking for things like, “Can I find proxies for information entering the system?” If we say that markets are efficient, the idea there is that they are efficient because information enters and prices move to the new price equilibrium.

There are different forms of market efficiency, there’s weak form, semi-strong form, and strong form, but the idea is that we have to wait for information to enter, and we can then place bets on the direction and magnitude of the price moving from where it is now to its new equilibrium. If you are just forecasting every single day, you’re just going to get a whole bunch of random data. I’ve just never seen that work.
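For readers curious what sampling events, rather than every bar, can look like in code, below is a minimal sketch of a symmetric CUSUM filter, one common event trigger in this setting. It is a plain pandas illustration with synthetic prices and an arbitrary threshold, not the mlfinlab implementation.

```python
import numpy as np
import pandas as pd

def cusum_events(prices: pd.Series, threshold: float) -> pd.DatetimeIndex:
    """Symmetric CUSUM filter: flag an event whenever the cumulative
    log-return since the last event exceeds +/- threshold."""
    events = []
    s_pos, s_neg = 0.0, 0.0
    log_returns = np.log(prices).diff().dropna()
    for timestamp, ret in log_returns.items():
        s_pos = max(0.0, s_pos + ret)
        s_neg = min(0.0, s_neg + ret)
        if s_pos > threshold:
            s_pos = 0.0
            events.append(timestamp)
        elif s_neg < -threshold:
            s_neg = 0.0
            events.append(timestamp)
    return pd.DatetimeIndex(events)

# Synthetic close prices; in practice you would pass a real price series.
rng = np.random.default_rng(4)
idx = pd.date_range("2020-01-01", periods=500, freq="B")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), index=idx)

event_times = cusum_events(prices, threshold=0.05)
print(f"{len(event_times)} events sampled out of {len(prices)} bars")
```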

SY: That’s really interesting, and we’ll definitely come back to that topic where a little bit of subject matter expertise actually goes quite a long way and just throwing every variable you can think of into the model might not necessarily work.

But you also mentioned that you use some of the tools developed at Hudson and Thames in your own work. Could you highlight some of the machine learning tools and techniques that Hudson and Thames has implemented and made open source? And how can practitioners use these tools to improve their processes?

JJ: There are a couple that I’ve used that I think are really important.

The first one for me is measures of feature importance. Given a machine learning model, you’ve got a whole bunch of features. Which features are actually adding value to your model? Because the trick is, you can’t just throw the kitchen sink at it. If you add too many variables, you’re just going to confuse your model. That’s true even if you use regularization. I know because I’ve tested this.

For me, by far the most important are the feature importance algorithms we’ve implemented. We’ve implemented three from de Prado: MDI, MDA, and Single Feature Importance. We’ve also implemented a paper from The Journal of Financial Data Science that describes The Model Fingerprint algorithm. You give the function your model and the features you used to train it, and it tells you which features your model is exploiting through linear relationships, which through nonlinear relationships, and which through pairwise interactions. That’s super useful because it tells you where your model is getting most of its information from.
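For a feel of how this works in practice, here is a minimal sketch using plain scikit-learn rather than the mlfinlab API: the forest’s impurity-based importances stand in for MDI, and permutation importance on held-out data stands in for MDA. The synthetic data and feature names are purely illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic example: two informative features buried among noise features.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(2_000, 10)),
                 columns=[f"feature_{i}" for i in range(10)])
y = (X["feature_0"] + 0.5 * X["feature_1"]
     + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# MDI-style importance: impurity-based, computed in-sample by the forest itself.
mdi = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)

# MDA-style importance: permutation importance, measured on held-out data.
perm = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
mda = pd.Series(perm.importances_mean, index=X.columns).sort_values(ascending=False)

print("In-sample (MDI-like):\n", mdi.head(), "\n")
print("Out-of-sample (MDA-like):\n", mda.head())
```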

De Prado has a saying that backtesting is not a research tool; feature importance is. That’s because research by backtesting leads to overfitting. Research by feature importance leads you to better features that leads to better model performance. A good example of this is, let’s say you are building a model to filter out false positives. You’ve got this underlying model that says, “These are what my bets are now.” My idea is that my secondary model is going to filter out bets that we get wrong.

Now, the question is, what kind of features are important there? You may give it distribution-related features, things like what is the mean, standard deviation, maybe the skewness, the kurtosis, maybe you also give it the volatility of the assets you’re trading. You give it, maybe, the last few predictions your model made. You give it all these things, then you have a look at the feature importance and, for me, what always comes up is it says, “Hey! Volatility is an extremely important feature!” Then you know what the important features are.

Then you can ask, why is volatility important? Well, maybe when there’s a lot of volatility, there’s probably more model uncertainty. You can ask things like, what other features can I add that will allow me to exploit this inefficiency better? You can do a deep dive. This helps direct you to better features. I find that immensely useful.

Another thing that we’ve implemented which I think is worth noting: there are a couple of new portfolio optimization techniques that use unsupervised learning and have been shown to outperform mean-variance portfolios out-of-sample. Those are hierarchical risk parity, by Lopez de Prado, and the hierarchical clustering asset allocation algorithm by Thomas Raffinot. There’s a whole bunch of different portfolio optimization techniques. People are interested in that.
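As a rough illustration of the machinery these methods share, here is a minimal sketch of the step common to hierarchical risk parity and hierarchical clustering asset allocation: turn the correlation matrix into a distance matrix and cluster the assets hierarchically. It uses synthetic returns and scipy, not the mlfinlab implementations.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Synthetic returns for a handful of assets; in practice, use real return series.
rng = np.random.default_rng(1)
returns = pd.DataFrame(rng.normal(size=(500, 6)),
                       columns=["A", "B", "C", "D", "E", "F"])
returns["B"] += 0.8 * returns["A"]          # make A and B co-move
returns["D"] += 0.8 * returns["C"]          # make C and D co-move

corr = returns.corr()
# Correlation-based distance metric: low distance for highly correlated assets.
dist = np.sqrt(0.5 * (1 - corr))
# Hierarchical clustering on the condensed distance matrix.
clusters = linkage(squareform(dist.values, checks=False), method="single")
labels = fcluster(clusters, t=3, criterion="maxclust")
print(dict(zip(corr.columns, labels)))  # correlated assets should share a cluster
```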

I think the other thing that’s really important is that we started implementing backtest statistics which highlight what your probability of overfitting is. We have both the deflated Sharpe ratio and the probabilistic Sharpe ratio, and we’re busy implementing a couple of papers which generate a kind of tally sheet that tells you how likely it is that you’ve overfitted. So many people know that research by backtesting leads to overfitting, yet they continue to do it. We thought it was quite important to add these implementations so people have a risk-adjusted Sharpe ratio, where they’d know, for example, “I thought I had a Sharpe ratio of 6 but actually it’s more likely to be around 2.5.” I would say those three things are really, really important. There’s a lot of cool stuff in the package though.
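For the curious, here is a minimal sketch of the probabilistic Sharpe ratio, written from Bailey and Lopez de Prado’s published formula rather than taken from the mlfinlab code; the synthetic daily returns are purely illustrative.

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def probabilistic_sharpe_ratio(returns: np.ndarray, benchmark_sr: float = 0.0) -> float:
    """Probability that the true Sharpe ratio exceeds `benchmark_sr`,
    given the observed (non-annualised) returns, following Bailey and
    Lopez de Prado's probabilistic Sharpe ratio."""
    n = len(returns)
    sr_hat = returns.mean() / returns.std(ddof=1)
    gamma3 = skew(returns)
    gamma4 = kurtosis(returns, fisher=False)  # non-excess kurtosis
    denom = np.sqrt(1 - gamma3 * sr_hat + (gamma4 - 1) / 4 * sr_hat**2)
    return norm.cdf((sr_hat - benchmark_sr) * np.sqrt(n - 1) / denom)

# Example: daily strategy returns (synthetic here) tested against SR* = 0.
daily_returns = np.random.default_rng(7).normal(0.0005, 0.01, 750)
print(f"PSR(SR* = 0): {probabilistic_sharpe_ratio(daily_returns):.2%}")
```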

SY: Those all sound super useful to anyone who’s interested in applying machine learning to finance. And, a lot of lessons learned—as you mentioned. So, what are some of the differences or challenges that practitioners should be aware of when applying traditional machine learning techniques to finance, and how do we address them?

JJ: I think that for anybody who’s interested in that topic, de Prado has famously written a paper titled “The 10 Reasons Most Machine Learning Funds Fail”. In it, he describes the problems with using machine learning in finance, but the particular one that comes to mind for me is that when we’re doing machine learning in finance, we must remember what our model assumptions are.

For example, in linear regression your model assumptions are things like your error terms are normally distributed, there’s no auto-correlation, and that your observations are IID, that they are generated using the same data generating process and that each observation is independent. But, often when we move to financial machine learning, we ignore those things, and then we often get confused as to why our model is not performing as well.

A really big one here for me is the IID assumption. In finance, the biggest problem is that our data is not stationary: the data-generating process underlying the data is changing through time. It’s not the same, so just using plug-and-play algorithms that work in computer vision doesn’t work in finance, because this data-generating process is changing.

There are a couple of ways to address that. One technique we could use is an online learning algorithm, where we update the model parameters as we move through time so that the model can adapt as the market changes. That’s one way to do it, although it has a problem: you’ll be trading and making money, and then, all of a sudden, you’ll start losing money, you’ll see a spike in your model’s error term, and you can say a structural break has occurred. That means you’ve actually moved to a different distribution, so you’ve got this problem of non-stationarity. Now, if I’m using an online model, I need to stop trading and let the model train for a couple of observations before I allow it to be active again. Then the hope is that it will have adjusted and will make money.
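As a rough sketch of the online-learning idea, the snippet below updates a scikit-learn SGDClassifier incrementally with partial_fit as new observations arrive; the data, batch size, and features are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

# Minimal online-learning sketch: the model is updated incrementally as new
# observations arrive, so its parameters can drift with the data-generating process.
rng = np.random.default_rng(3)
n_obs, n_features = 2_000, 5
X = rng.normal(size=(n_obs, n_features))
y = (X[:, 0] + rng.normal(scale=0.5, size=n_obs) > 0).astype(int)

scaler = StandardScaler()
model = SGDClassifier(random_state=0)

batch_size = 50
for start in range(0, n_obs, batch_size):
    X_batch = X[start:start + batch_size]
    y_batch = y[start:start + batch_size]
    X_batch = scaler.partial_fit(X_batch).transform(X_batch)
    model.partial_fit(X_batch, y_batch, classes=[0, 1])
    # In live trading one would monitor the rolling error here and pause
    # trading when it spikes (a possible structural break), as described above.
```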

The second approach, which I quite like, and which de Prado also mentions in his textbook Advances in Financial Machine Learning, is meta-labelling, where you have a secondary model which determines when your primary model is not going to work. That works quite nicely.

I always give this example, and really I’m not a fan of technical analysis, but it’s just a really good example. Let’s imagine you’re doing some kind of trend-following strategy using moving averages. Those strategies perform well when markets are trending and moving in one direction, and they perform really poorly when the market is going sideways and has a lot of volatility. What the secondary model in meta-labelling does is basically look at what the primary model says it wants to do. Say it wants to go long. What is the market state at this given time? The secondary model knows that the primary model performs well in markets that have a lot of direction and are trending, but it also knows that the algorithm will not perform well in environments where there’s a lot of volatility and serial correlation breaks down. What it can do is filter out those bad trades by switching off the primary model when the market is going through those regimes. It’s a nice way of combating this non-stationarity problem.
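Here is a minimal, self-contained sketch of that meta-labelling workflow on synthetic data: a moving-average primary model supplies the side of each bet, the meta-label records whether the bet would have paid off, and a secondary classifier learns when to let a bet through. The features, horizons, and thresholds are illustrative choices, not the book’s exact recipe.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic prices (illustrative only).
rng = np.random.default_rng(5)
idx = pd.date_range("2018-01-01", periods=1_500, freq="B")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(idx)))), index=idx)

# 1) Primary model: simple moving-average crossover gives the side of the bet.
fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
side = np.sign(fast - slow).fillna(0)            # +1 long, -1 short, 0 no bet

# 2) Meta-labels: did the primary model's bet make money over the next 5 days?
forward_return = prices.pct_change(5).shift(-5)
pnl = (side * forward_return).dropna()
meta_label = (pnl > 0).astype(int).rename("label")

# 3) Secondary model: predict when the primary model is right, using
#    market-state features such as recent volatility (illustrative).
features = pd.DataFrame({
    "volatility": prices.pct_change().rolling(20).std(),
    "side": side,
}).dropna()
aligned = features.join(meta_label).dropna()

split = int(len(aligned) * 0.7)
train, test = aligned.iloc[:split], aligned.iloc[split:]
meta_model = RandomForestClassifier(n_estimators=200, random_state=0)
meta_model.fit(train[["volatility", "side"]], train["label"])

# 4) Only take primary-model bets the secondary model has confidence in.
p_correct = meta_model.predict_proba(test[["volatility", "side"]])[:, 1]
take_trade = p_correct > 0.55
print(f"Trades kept: {take_trade.sum()} of {len(take_trade)}")
```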

There are certain market states that certain algorithms work under, and if we can just switch them on and off when those states are active, that’ll also be effective. That’s the one thing. The other problem that I really see is, in finance our data is not independent. It’s not IID. We’ve discussed the non-stationary portion of that, and on the other side of that is independence.

When you’re looking at a financial time series, especially if you’re forecasting tomorrow based on the previous window’s features, and then forecasting the day after that, and especially when you have overlapping windows in your labels, your data is definitely no longer independent. You then have to address a whole bunch of things about the way you do cross-validation on your data.

De Prado has come out with purged and embargoed cross-validation techniques. Using walk-forward analysis will not work because that technique will break down and lead to overfitting. It’s the subtle way you get data leakage if you don’t adjust for it. Those are, I would say, the two major problems with machine learning in finance.
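To show the purging idea in code, here is a simplified sketch of a purged and embargoed k-fold split; it is an illustration of the concept, not the mlfinlab PurgedKFold implementation, and the label horizons and embargo size are arbitrary.

```python
import numpy as np
import pandas as pd

def purged_kfold_indices(label_end_times: pd.Series, n_splits: int = 5,
                         embargo_pct: float = 0.01):
    """Yield (train_idx, test_idx) pairs where training samples whose label
    window overlaps the test window (or falls in the embargo right after it)
    are purged. `label_end_times` maps each observation's start time to the
    time its label stops depending on future data."""
    times = label_end_times
    n = len(times)
    embargo = int(n * embargo_pct)
    test_folds = np.array_split(np.arange(n), n_splits)
    for test_idx in test_folds:
        test_start = times.index[test_idx[0]]
        test_end = times.iloc[test_idx].max()
        train_mask = np.ones(n, dtype=bool)
        # Purge: drop any sample whose label interval overlaps the test interval.
        overlaps = (np.asarray(times.index <= test_end)
                    & np.asarray(times >= test_start))
        train_mask[overlaps] = False
        # Embargo: also drop samples that start right after the test fold.
        embargo_stop = min(test_idx[-1] + 1 + embargo, n)
        train_mask[test_idx[-1] + 1:embargo_stop] = False
        yield np.where(train_mask)[0], test_idx

# Example: daily observations whose labels each span the next 5 business days.
idx = pd.date_range("2021-01-01", periods=200, freq="B")
label_end_times = pd.Series(idx + pd.offsets.BDay(5), index=idx)
for train_idx, test_idx in purged_kfold_indices(label_end_times, n_splits=4):
    print(f"train: {len(train_idx):3d}  test: {len(test_idx):3d}")
```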

One of the things that I really wanted to point out, because I think this is so true, is something I heard at QuantCon in 2018, from de Prado I believe. He said that he finds it ironic that, in finance, on the one hand we have people who are pro efficient market hypothesis and say it’s impossible to outperform the market, and then, on the other hand, we have all the active investment managers who lean heavily on econometrics and believe that linear regression is a model sufficient for extracting millions of dollars’ worth of alpha. Those are two radical positions. We think that something as basic as linear regression will be able to extract millions of dollars’ worth of alpha. That, for me, is just too much. I think the key thing about using machine learning in finance, what makes it attractive, is that we are just upgrading our skill set to include models that can exploit non-linear relationships. That’s it. That’s the value of adding machine learning.

SY: That’s very interesting. You mentioned before how it’s a common temptation to just throw the kitchen sink at a machine learning model, and let it sort it all out by itself.

A concern that I’ve heard about using machine learning, especially in finance, since it has a relatively low signal to noise ratio, is that it’s prone to data mining or overfitting or finding spurious relationships. Can the risk of these things happening be mitigated?

JJ: There are multiple ways to stop overfitting, and they’re well known in machine learning, things like cross validation, k-Fold cross validation, splitting your data up between training, validation and test sets, and not burning through your test data. In essence, you are only supposed to use it once to test if your model really works out-of-sample.

We have a whole array of techniques to deal with overfitting. We have regularization: you could use Lasso or Ridge or maybe even an elastic net, or L1 and L2 penalties if you are using neural nets. You can drop bars, you can add some Gaussian noise to your data. You can look at things like the bias-variance tradeoff. We can very easily see in the learning curves when our model is starting to overfit, because on the training set it keeps getting better and better while on the validation set the performance starts to get worse. We can very easily see where these elbow points are, and we can stop the model overfitting there. Even then you may still get some overfitting, so you still need to check that your model is robust.
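As a small illustration of reading overfitting off a curve, the sketch below sweeps model complexity with scikit-learn’s validation_curve on synthetic data and prints training versus validation scores; the gap that opens up as depth grows is the overfitting he describes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

# Synthetic classification data; the point where validation score stops
# improving (or worsens) while training score keeps rising is where
# overfitting sets in.
X, y = make_classification(n_samples=2_000, n_features=20, n_informative=5,
                           random_state=0)
depths = [2, 4, 6, 8, 12, 16, None]
train_scores, val_scores = validation_curve(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, param_name="max_depth", param_range=depths, cv=5)

for depth, tr, va in zip(depths, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"max_depth={depth}: train={tr:.3f}  validation={va:.3f}")
```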

A good way to do that is to check whether your model also works across assets. A really good example: say you are building a trading strategy to trade TTF gas, which is European gas, and you see that your model has a Sharpe ratio of 6 and is doing really well. You should be able to apply the same algorithm to NBP gas, which is UK gas. It’s very close to Europe; it’s still natural gas; it’s pretty much the same asset with only a geographic difference. Maybe there are a couple of small differences, but the two are supposed to be very close together: highly correlated, slightly different paths, essentially the same asset in different places. If you apply the algorithm there and it does poorly, then you know you’ve overfitted.

You want to see some kind of robustness in your model, using different or similar assets. Then you have all these additional techniques, for example the deflated Sharpe ratio, the probabilistic Sharpe ratio, and the probability of backtest overfitting. These are all techniques you can use to determine how much you need to dampen your Sharpe ratio to get a realistic idea of it.

There’s a really good paper in The Journal of Portfolio Management by Campbell R. Harvey called “Backtesting”. I think he won an award in The Journal of Portfolio Management, the Jacobs Levy Award, for 2015 or 2016. The paper highlights, “In industry, we all know that you are supposed to use a 50% haircut on any Sharpe ratio that you’ve been running a backtest on.” I’ve got to be honest, I’d never heard of the 50% haircut; that was news to me. But that immediately tells you that if you’re building a trading strategy and you’re doing research by backtesting, you should really apply that 50% haircut to your Sharpe ratio. The paper goes on to say, we’re going to create a technique that you can apply to actually figure out what your haircut Sharpe ratio should be, and that’s what we’re planning to implement in the package. That’ll be out soon and people can use it.

There’s a lot of tools to avoid backtest overfitting. I feel like if you do these techniques, you’ll be very aware when you are overfitting. If you’re using cross validation and it’s not working on your test set, you’re probably overfitting. If you’re doing research by backtesting, you’re probably going to overfit. If you’re not taking account of the number of trials, not adjusting your Sharpe ratio based on the number of trials you’re running, you’re probably overfitting. All of these techniques also apply to linear regression. De Prado speaks out quite heavily about this, about how many of these econometric investment strategies are just false strategies, they’re just false positives.

SY: Okay! So, you talked about feature importance techniques before. Does employing those techniques allow someone to throw in thousands of features, pass them through feature importance, and have it sort out which ones are truly important to pass on to the model?

Is that a valid approach, or is that something that should be avoided as well?

JJ: I think it depends. You first need to think about the algorithm that you are using. I think the only algorithm that comes to mind where people think they can just throw everything at it is the neural net, because their argument will be, “Yes, but you can strap on L1 or L2 regularization, it will adjust your weights for you, so it won’t use the features that aren’t adding value.” To be honest, I have not found that to be true. In fact, I’ve found the opposite to be true. I’ve found that if you give the model too many features, even if you give it the answer, if you include your y label in the feature set, it may not find it. That’s just because you’ve added so much noise.

What I recommend people do is, first, get an idea of what features you think will be good. In essence, I think the curse of dimensionality is very real, and if you don’t have a lot of rows of information, you can’t add thousands of columns worth of data; you can’t have thousands of features for only a hundred rows of observations. That’s just going to cause so many problems. That is my first observation: make sure that you have enough rows of data, enough observations, to justify the number of features that you are using.

In finance, if you are using end-of-day data, you usually don’t have enough observations to justify the large feature set. Then what I find is that there are a number of algorithms that you can use for feature importance. A very common model that people use is a Random Forest. They’ll fill it up with data, and Random Forest is cool because you don’t really have to scale and centre your data before you pass it through to the algorithm, and you can use the MDI feature importance, which stands for Mean Decrease Impurity and comes from tree-based ensemble models like a Random Forest. Using that, you can see very clearly which features are adding value, but now you need to remember that this MDI algorithm is doing everything in-sample, so it’s looking at which of your features add a lot of value in-sample.

The MDA algorithm, which is Mean Decrease Accuracy, that is an out-of-sample test, so in your out-of-sample data, which features add the most value? There are problems with this as well. For example, if your features are multicollinear then the problem is that the MDI and MDA techniques won’t work properly because they won’t be able to sort which feature is adding the value.

There’s another algorithm called Single Feature Importance, which tests each feature in isolation. You can use these three together. There’s also a new technique that just came out, which we plan to code up, called Clustered Feature Importance. It basically uses unsupervised learning, on top of those, to figure out which features add the most value in the face of multicollinearity. Definitely use feature importance to guide your research. I’ve found, and my colleagues have found the same thing, that when you use feature importance to identify which features are redundant and not adding to your model, and you take those features out, it almost always increases the performance of your models. It’s very important not to just throw rubbish at your model. If it’s not adding value, don’t add it.

SY: Okay. Your other point about backtesting is really important. There are a lot of strategies, including quant-based, that appear to have really good backtest results, but perform poorly when implemented, or even just when backtesting on a different set of dates.

How can we increase the confidence that we’ve actually found a good, viable strategy?

JJ: Ideally, you want to have gone through this modelling process without doing research through backtesting. What I mean by that is, for example, a person will develop a strategy, they’ll look at the PnL, they’ll see it has this massive drawdown in 2009, so they add a bunch of filters and a couple of other things, and then they get that drawdown to stop. The problem is you’re backtest overfitting, because you’re trying to reduce a drawdown that happened in that specific period. You’ll find that on a different asset it probably won’t work.

The first idea is: don’t do research by backtesting. Stop pushing the backtest button and adjusting your strategy so that it has a lower drawdown. A better approach is to ask, “What anomaly am I trying to exploit?” Based on that, maybe use feature importance to find out, “What are the best features I could engineer and give to my model to exploit this specific market inefficiency?” Hopefully, from that you’ll find features which are able to provide value and you can come up with some economic rationale. At that point you have to be careful of storytelling bias, but if you can find good features with a really solid economic rationale, so that you can justify and understand why this thing is happening, you can go even further and improve your model like that.

You can do certain robustness checks. You can do things like making sure the same strategy works on an asset that’s very similar in nature. You can check if it works across markets. A really good example of a super robust strategy is actually momentum. I’m talking about Jegadeesh and Titman’s original momentum. AQR has written about the momentum factor being everywhere.

The idea of momentum investing, if you are doing it from a factor investing point of view, is that you, for example, go long the ten assets that have gone up the most and short the bottom ten that have gone down the most, with certain adjustments depending on the technique you are using. That’s the idea: stocks that have been going up are going to continue to go up for some period into the future. If you test that, you’ll see that the strategy works across asset classes and across different assets themselves. That is a really good example of a robust strategy. You know your strategy is not robust, or is probably overfit, when you’ve done research by backtesting and you haven’t adjusted for the number of trials that you’ve run.
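As an illustration of the strategy he describes, here is a minimal cross-sectional momentum sketch on synthetic monthly returns: rank assets by trailing return, go long the top group and short the bottom group. The lookback, decile cut-offs, and data are illustrative choices, not the original Jegadeesh and Titman specification.

```python
import numpy as np
import pandas as pd

# Synthetic monthly returns for 50 assets (illustrative only).
rng = np.random.default_rng(11)
idx = pd.date_range("2015-01-31", periods=96, freq="M")
assets = [f"asset_{i}" for i in range(50)]
monthly_returns = pd.DataFrame(rng.normal(0.005, 0.05, (len(idx), len(assets))),
                               index=idx, columns=assets)

# Momentum signal: trailing 12-month cumulative return, lagged one month so
# only information available at the start of month t is used.
signal = (1 + monthly_returns).rolling(12).apply(np.prod, raw=True) - 1
signal = signal.shift(1)

portfolio_returns = []
for date in idx[13:]:
    ranks = signal.loc[date].rank(pct=True)
    long_leg = monthly_returns.loc[date][ranks >= 0.9].mean()   # top decile
    short_leg = monthly_returns.loc[date][ranks <= 0.1].mean()  # bottom decile
    portfolio_returns.append(long_leg - short_leg)

portfolio_returns = pd.Series(portfolio_returns, index=idx[13:])
sharpe = portfolio_returns.mean() / portfolio_returns.std() * np.sqrt(12)
print(f"Annualised Sharpe on random data: {sharpe:.2f}")  # should hover near zero
```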

The advice from de Prado and Campbell Harvey and those guys is that you absolutely have to keep track of the number of trials you are running, and you have to keep the PnL curves that you’ve generated and store them in a database, because you can use them to compute error-adjusted Sharpe ratios and PnL.

Typically, if you are doing multiple hypothesis testing, you’ll do a family-wise error correction, so the deflated Sharpe ratio and similar algorithms. It’s all about, if you’ve kept track of the number of trials you’re running and you’ve kept the PnL curves, “How can I adjust the Sharpe ratio that I think I have so that it’s a more meaningful representation of what it actually is?” You can follow those techniques, but I think the best advice is really just to avoid doing research by backtesting. That just leads to so many problems.
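To make the “adjust for the number of trials” point concrete, here is a small sketch in the spirit of Bailey and Lopez de Prado’s deflated Sharpe ratio: estimate the Sharpe ratio you would expect from the best of N unskilled trials and use it as the benchmark in a probabilistic Sharpe ratio (as sketched earlier). It is written from the published formula, not taken from mlfinlab.

```python
import numpy as np
from scipy.stats import norm

def expected_max_sharpe(trial_sharpes: np.ndarray) -> float:
    """Sharpe ratio you would expect from the single best of N unskilled
    trials, given the cross-trial variance of Sharpe estimates. The deflated
    Sharpe ratio uses this as the benchmark SR*."""
    n_trials = len(trial_sharpes)
    var_sr = np.var(trial_sharpes, ddof=1)
    gamma = 0.5772156649  # Euler-Mascheroni constant
    return np.sqrt(var_sr) * ((1 - gamma) * norm.ppf(1 - 1 / n_trials)
                              + gamma * norm.ppf(1 - 1 / (n_trials * np.e)))

# Example: the Sharpe ratio of every backtest trial you ran (keep them all!).
trial_sharpes = np.random.default_rng(2).normal(0.0, 1.0, 100)
benchmark = expected_max_sharpe(trial_sharpes)
print(f"Best observed SR: {trial_sharpes.max():.2f}")
print(f"Expected max SR under no skill: {benchmark:.2f}")
# Feeding `benchmark` as SR* into the probabilistic Sharpe ratio from earlier
# gives a deflated Sharpe ratio that accounts for the number of trials.
```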

SY: Yes. That’s definitely valuable advice. What are some exciting areas of research that you are currently looking at?

JJ: I’m actually very interested in some of the new portfolio optimization algorithms and, Enjine, your company, actually came out with one of the algorithms that I’m interested in, which is the Monte Carlo Optimized Search Portfolio Optimizer. I’m very interested in having a look at that.

Also, theory-implied correlation matrices, using those for portfolio optimization. It seems to me that a lot of our users are using the portfolio optimization functionality that we have, so I’m particularly interested in that. I also really want to get Clustered Feature Importance in, because I use feature importance all the time. It’s really valuable to my research; it’s made one of the biggest differences for me. I really want a technique that can handle multicollinearity in my feature set.

What else is exciting that’s coming out? Trend scanning labels. De Prado is publishing his new book, Machine Learning for Asset Managers, and in there he’s going to have a whole bunch of new techniques, one of them being trend scanning labels. It’s a way of labelling your data so you can figure out whether your asset is in a trend or not. I think that kind of labelling technique would work really well for those strategies I spoke about earlier, when I said the problem is that people are trying to forecast direction every single day. The difference is that you’d use this labelling technique and basically just try to forecast, “Am I still in a trend or not?” because then I would know whether I should hold my position or get out. We’re really looking forward to that.
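For a sense of what trend-scanning labels involve, here is a simplified sketch: for each bar, fit a linear trend over several forward windows, keep the window whose slope has the largest absolute t-value, and label by its sign. It is an illustration of the idea rather than de Prado’s exact implementation, and the window spans are arbitrary.

```python
import numpy as np
import pandas as pd

def t_value_of_slope(prices: np.ndarray) -> float:
    """t-statistic of the slope from an OLS fit of price on time."""
    x = np.arange(len(prices))
    x_mat = np.column_stack([np.ones_like(x), x])
    beta, residuals, _, _ = np.linalg.lstsq(x_mat, prices, rcond=None)
    dof = len(prices) - 2
    sigma2 = residuals[0] / dof
    var_slope = sigma2 / np.sum((x - x.mean()) ** 2)
    return beta[1] / np.sqrt(var_slope)

def trend_scanning_labels(prices: pd.Series, spans=(5, 10, 20)) -> pd.DataFrame:
    """For each bar, scan several forward windows, keep the one whose fitted
    slope has the largest |t-value|, and label by the sign of that t-value."""
    rows = []
    for i in range(len(prices) - max(spans)):
        t_values = {span: t_value_of_slope(prices.values[i:i + span]) for span in spans}
        best_span = max(t_values, key=lambda s: abs(t_values[s]))
        rows.append({"t_value": t_values[best_span],
                     "span": best_span,
                     "label": int(np.sign(t_values[best_span]))})
    return pd.DataFrame(rows, index=prices.index[:len(rows)])

# Example on synthetic prices.
idx = pd.date_range("2022-01-01", periods=300, freq="B")
prices = pd.Series(np.cumsum(np.random.default_rng(9).normal(0.05, 1.0, 300)) + 100,
                   index=idx)
labels = trend_scanning_labels(prices)
print(labels["label"].value_counts())
```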

What else has really got my attention lately? A lot of people are very excited about using reinforcement learning in finance. Specifically, I’ve seen a lot of work done in options pricing. JP Morgan has done a lot of good work with that. There’s a professor here in the UK who has done a lot of good work on this. Reinforcement learning for options pricing is really a big thing. Igor Halperin, with Matthew Dixon and Paul Bilokon, has written a book which is coming out this year as well, and Igor has done a lot of work on reinforcement learning for options pricing. I know that’s something a lot of people are quite excited about.

SY: Those are all really interesting ideas which I’m actually going to be looking forward to as well, and I’m looking forward to your implementations of a bunch of them. That’s all the questions that I have. Is there anything that you’d like to add?

JJ: No, that’s mostly what we’re doing and what I think is interesting. Maybe a useful thing to add is that if people are thinking about what resources to look out for, or how to get a really good idea of what the cutting edge in finance is, I must say, from my point of view, there is a journal called The Journal of Financial Data Science. I really love that journal. I think it’s fantastic.

It’s got a whole bunch of really cool ideas in it. I’m looking at the page now: the second article here is “Deep Reinforcement Learning for Trading”. There’s “Machine Learning in Asset Management”, “Portfolio Construction”, and “Modular Machine Learning for Model Validation”. There are articles on using geo-location data to do stat-arb between Nike and Adidas. There are some super cool topics in here. This is actually where we get a lot of our inspiration for implementations. If someone is looking to get more involved in the literature in this field, I’d say The Journal of Financial Data Science is fantastic.

SY: I’ll definitely be checking that out as well. Well, thank you for your time and answering my questions.

JJ: Thanks to you. This was a lot of fun. It’s actually the first time we’ve been interviewed, so I’m very excited about it. I’m interested to see how your open source package goes.

Twitter: https://twitter.com/hudson_thames

LinkedIn: https://www.linkedin.com/company/hudson-thames-quantiative-research/
