Interpreting Machine Learning Models Part 1: Accumulated Local Effects

Author

Peter White

Category

Machine Learning

Date

Nov. 25, 2019

Image Credit: nd3000 / Shutterstock.com

As the use of machine learning models rises, so too does the need to understand and interpret how they work. A lot of the resistance to machine learning is due to two factors:

  1. They're seen as 'black box' systems whose inner workings are completely opaque
  2. People secretly think they will take over the world and enslave humanity

Now there isn't much we can do about number 2, so I wouldn't worry about it. Besides, an AI government couldn't be much worse than what we've already got. I, for one, welcome our new robot overlords. And what better way to prepare for the takeover than to learn how to interpret and understand machine learning systems.

This series of articles aims to show you the best and most helpful techniques that exist for examining machine learning models, so that you are better able to understand and interpret your results. Some of these methods are cutting edge, and undoubtedly there are more being developed as you're reading this. But the goal of them all is the same: opening up the black box.

Part 1: Accumulated Local Effects

One of the beautiful things about Machine Learning is that you can create a model using all kinds of algorithms and all kinds of tuning parameters. The downside of this freedom is that models can be vastly different from one another and complicated in unexpected ways. As such, one of the pushes in interpretability is to create model-agnostic tools: tools that allow us to understand these models regardless of how they were created.

We don't even need to know how the model was formed to figure out what's going on. It seems counterintuitive, but think of it like baking a cake. You don't need to understand the chemical reactions that go on inside the oven to understand making cakes. If the cake is too sweet, you added too much sugar. If it's burnt, either the oven is too hot or you had too much wine again, fell asleep, and left it in too long. You can figure out what went wrong with your cake without knowing the math and science behind it. Model-agnostic tools work the same way. Our predictions are our cakes, and by studying them and our ingredients (inputs), we're able to understand our model.

We start our series with one such tool: Accumulated Local Effects (ALE) plots. The high-level concept is pretty straightforward: we want to determine the effect that each individual input, isolated from all others, has on our output. So if we're creating a model to predict how many joggers we see on a trail in a day, and we include factors like temperature, wind speed, precipitation, time of year, and so on, an ALE plot would show us how much the temperature alone affects the prediction, regardless of the other factors.

ALE plots do so by isolating the change in prediction caused by a change in a single feature. As the name implies, they do this by defining localized areas (windows) across the feature's range. For each window, we take all data samples whose value for that feature falls within it, and vary that value while holding all of each sample's other feature values constant. We then calculate the difference in predictions between the start and end of each window.

A sufficiently small window allows us to create a reasonably accurate estimate of the change over that interval. Then, by accumulating all of the local areas, we get a full picture of our input's effect on the output. So if we wanted to know what effect a 20 degree Celsius day has on our runners, we would take the samples with temperatures near 20 degrees, compute each one's prediction at 21 degrees, and subtract its prediction at 19 degrees. By averaging these changes in prediction, we can determine the effect of the feature for that window. We then repeat the process across the data and accumulate.
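To make this concrete, here's a minimal from-scratch sketch of a first-order ALE estimate. The `model` and `X` here are hypothetical stand-ins, not any particular library's API: it assumes a fitted regressor with a scikit-learn style `predict` method and a 2-D NumPy array of samples.

```python
import numpy as np

def ale_1d(model, X, feature, n_bins=20):
    """Sketch of a first-order ALE estimate for one feature."""
    x = X[:, feature]
    # Window edges from quantiles, so each window holds roughly the
    # same number of samples.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    effects = np.zeros(n_bins)
    for k in range(n_bins):
        lo, hi = edges[k], edges[k + 1]
        if k == n_bins - 1:
            in_bin = (x >= lo) & (x <= hi)  # include the top edge in the last window
        else:
            in_bin = (x >= lo) & (x < hi)
        if not in_bin.any():
            continue  # empty window (possible with repeated values)
        X_lo = X[in_bin].copy()
        X_hi = X[in_bin].copy()
        X_lo[:, feature] = lo  # move only this feature to the window's ends,
        X_hi[:, feature] = hi  # holding every other feature value constant
        # Average the change in prediction across the window.
        effects[k] = np.mean(model.predict(X_hi) - model.predict(X_lo))
    # Accumulate the local effects, then centre the curve so it reads as a
    # deviation from the average effect (a simplified centring step).
    ale = np.concatenate(([0.0], np.cumsum(effects)))
    return edges, ale - ale.mean()
```

Plotting `ale` against `edges` then gives the ALE curve for that feature.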

By localizing the measurements with windows, we avoid including practically unlikely or impossible situations. For instance, in our model to predict the number of joggers in a day, temperature and time of year are likely highly correlated. By limiting our window, an ALE plot ensures we're not factoring in a scenario where the time of year is winter and the temperature is 35 degrees Celsius. The model's prediction for such a situation shouldn't count, because it is certainly outside of any data set we will be using. ALE plots avoid such situations and give us much more accurate results.

Calculating the difference across our window, as opposed to the average (which some other techniques do), allows ALE plots to measure the change in the expected prediction rather than the prediction itself. This helps us isolate the single feature: by holding the others constant, adjusting just the one feature, and taking the difference, the features we aren't interested in cancel out, leaving just the effect of the desired feature. Combining these two techniques gives us an unbiased view of the feature's effect on our prediction.
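To see why the unwanted terms cancel, consider a toy model that is additive in its features (a deliberately simplified assumption; real models are messier):

```python
# A toy, purely illustrative model: f(x1, x2) = g(x1) + h(x2).
def f(x1, x2):
    return x1 ** 2 + 3 * x2  # g(x1) = x1**2, h(x2) = 3 * x2

x2 = 7.0  # the "other" feature, held constant within the window
# Differencing across the window [1.9, 2.1] cancels h(x2) exactly:
print(f(2.1, x2) - f(1.9, x2))  # prints ~0.8 = g(2.1) - g(1.9); the 3 * x2 terms vanish
```

Real models aren't perfectly additive, of course, which is exactly why ALE averages these differences over the actual samples that fall in each window.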

As you can imagine, as the number of features rises, the math to compute ALE plots gets a bit arduous. Luckily, there is at least one Python package that can help. Though it is still a work in progress, it's already a wonderful window into your model. We've used it to create the graphs below.
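For illustration, here's roughly how one such package, the open-source alibi library, exposes a first-order ALE explainer. This is just one available option with made-up data, not necessarily the package or model behind the graphs here, and the API may differ between versions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from alibi.explainers import ALE, plot_ale

# Made-up jogger-style data: temperature, wind speed, precipitation.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 5 * np.tanh(X[:, 0]) - 2 * X[:, 1] + rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = ALE(model.predict,
                feature_names=["temperature", "wind_speed", "precipitation"])
explanation = explainer.explain(X)
plot_ale(explanation, features=[0])  # ALE curve for temperature
```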

For instance, take the sample ALE plot we created below. It's based on a sample test model, not a real model we'd use in production.

We can clearly see the complicated relationship the feature has with the output. It's non-linear and difficult to describe with a mathematical formula, but quite understandable as a graph. The y-axis shows the expected change in the predicted stock price, and the x-axis shows the change in Accounts Payable. As the accounts payable increases, we see a rather stark increase in the predicted stock price. It's almost linear, but slightly more complicated than that. On the other side, a flat or decreasing accounts payable has relatively little effect. Advances in software libraries allow us to create graphs like this for all of our inputs with relative ease, giving us a powerful tool for understanding the relationship of our features to our prediction.

This alone is an incredible insight into the workings of the model, without any bias toward the method of creation. All models are treated equally: we're simply graphing a complex relationship in an easy, readable way. But ALE plots pack some additional power when you start to analyze pairs of features instead of individual features on their own. A second-order ALE plot can show you the combined effect of two features. We'll discuss these in the next part.
