Alexei Chernobrovov, Consultant on Analytics and Data Monetization

Interpretability: the SHAP method in Data Science

This article discusses what Explainable Artificial Intelligence (AI) is, why transparent interpretation of ML models is so important, and what tools are available to describe and visualize machine learning results in a comprehensible way. There are many off-the-shelf tools for interpreting and visualizing ML models, packaged as dedicated libraries. For example, Yellowbrick is an extension of scikit-learn; ELI5 is a visual library for debugging models and explaining predictions (compatible with scikit-learn, XGBoost, and Keras); MLxtend serves for data visualization and for comparing models and their constituent classifiers; LIME is a package for interpreting predictions [1]; and SHAP is discussed below.

Black-box challenges, or why interpreting ML models is so important

The analytics agency Gartner included Explainable AI in its top 10 Data & Analytics trends for 2020. This set of capabilities, which describes an ML model together with its strengths and weaknesses, is expected to help predict model behavior more accurately and identify potential errors. This improves the transparency and reliability of AI decisions and results, reducing regulatory and reputational risk [2].

Indeed, a lack of understanding of how results are obtained with Data Science tools, such as neural networks or other machine learning methods, is a serious obstacle to their mass adoption in business and everyday life. People usually fear what they do not understand and tend to avoid it. Thus, the aura of mystery around AI in the minds of consumers slows down the practical development of the field. That is why the transparency of ML modeling processes and the comprehensibility of their results matter for every Data Scientist. This task is described by the term "interpretability," which refers to the extent to which one can understand why an ML algorithm made a particular decision. This is how AI turns from "complicated magic" or a "black box" into something explainable and predictable, and hence not frightening, becoming an everyday phenomenon. In addition, there are several other key factors that make the interpretation of ML models particularly relevant today [3].

  • The legal aspect, when a decision made by an ML model has caused certain consequences. For example, a self-driving car caused an accident, a plane crashed because of the autopilot's actions, or an algorithm misdiagnosed a patient and prescribed the wrong medication. This also includes actions not directly related to human casualties or harm to health, such as refusing a loan. At the same time, Article 13 of the GDPR stipulates that every data subject has the right to explanation, that is, to be informed why an automated system that uses his or her personal data has made a particular decision.
  • A question of trust and involvement, when customers and end-users of ML models are more willing to use a tool that they understand at least on a basic level. For example, if customers understand how the quality of the input data affects the accuracy of the results, they will be more willing to enter the necessary information.
  • Model testing and improvement are only possible if people understand what factors affect ML algorithm performance, allowing them to identify potential problems and areas for optimization.

The interpretability of an ML model increases its potential value. In reality, however, realizing any added value incurs additional costs. In particular, it increases the working time of the Data Scientist, who must demonstrate the potential of the research to its users. Therefore, one should not chase maximum interpretability of the results without considering the context. For example, if the cost of a prediction error is low (e.g., recommending a movie to a user), it may not be worth the enormous effort to make the model more interpretable. In that case, one can limit oneself to the standard model quality validation procedure [3].

What is SHAP and what does Game Theory have to do with it?

The complexity of real ML modeling is that some predictors influence the result more than others. Such a dependence can be revealed at the cross-validation stage, but that does not give an exact answer to the question of each feature's contribution to the obtained result. Similar problems, however, are studied in game theory, a branch of applied mathematics devoted to the study of strategic interactions. The key concept of this mathematical apparatus is a game: a process involving two or more parties contending for their interests. Each side has its own goal and uses its own strategy, which can lead to winning or losing depending on the behavior of the other players. The point of game theory is to choose optimal strategies, taking into account perceptions of the other participants, their resources, and their possible actions. The mathematicians John von Neumann, Oskar Morgenstern, and John Forbes Nash Jr. made great contributions to the development and popularization of these ideas in the 1940s and 1950s. Since the mid-1980s the mathematical apparatus of game theory has been actively used in economics, management, politics, sociology, and psychology [4]. Game theory, as a developed mathematical field, includes many categories, such as cooperative games, in which groups of players can combine their efforts, forming coalitions to achieve the best result. In turn, the optimal distribution of the winnings among the players can be determined by means of the Shapley vector. It represents the allocation in which each player's payoff equals his average contribution to the overall welfare under a certain mechanism of coalition formation [5]:

$$\varphi_i(v) = \sum_{K \ni i} \frac{(k-1)!\,(n-k)!}{n!}\,\bigl(v(K) - v(K \setminus \{i\})\bigr),$$

where n is the number of players and k is the number of members of the coalition K.
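To make the formula concrete, here is a minimal sketch that computes exact Shapley values for a toy three-player cooperative game. The characteristic function (the coalition worths) is invented purely for illustration and does not come from the article.

from itertools import combinations
from math import factorial

players = {1, 2, 3}

# Hypothetical characteristic function: the worth v(K) of each coalition K
worth = {
    frozenset(): 0,
    frozenset({1}): 10, frozenset({2}): 20, frozenset({3}): 30,
    frozenset({1, 2}): 40, frozenset({1, 3}): 50, frozenset({2, 3}): 60,
    frozenset({1, 2, 3}): 90,
}

def v(coalition):
    return worth[frozenset(coalition)]

def shapley_value(i):
    """Weighted average marginal contribution of player i over all coalitions containing i."""
    n = len(players)
    total = 0.0
    for size in range(n):  # size of the coalition without player i: 0 .. n-1
        for subset in combinations(players - {i}, size):
            K = set(subset) | {i}
            k = len(K)
            weight = factorial(k - 1) * factorial(n - k) / factorial(n)
            total += weight * (v(K) - v(K - {i}))
    return total

print({i: shapley_value(i) for i in players})  # {1: 20.0, 2: 30.0, 3: 40.0}; sums to v({1,2,3}) = 90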

Applying the above provisions of game theory to the interpretation of ML models, the following conclusions can be drawn [6]:

  • The result of supervised learning (the prediction on a given example) is the game;
  • The payoff is the difference between the result obtained on the given example and the expected result over all available examples;
  • The players' contributions to the game are the effects of the individual feature values on the payoff, that is, on the result.

When calculating the Shapley vector, it is necessary to form coalitions from a limited set of features. However, not every ML model allows a feature to be simply removed without retraining the model from scratch. Therefore, to form coalitions one does not usually remove the "superfluous" features but replaces them with random values from a "background" dataset. The averaged result of the model with random values of a feature is equivalent to the result of a model in which this feature is absent altogether [6].
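As an illustration of this background-replacement trick, the following sketch approximates a model's prediction "without" a given feature by averaging over random background values. The names model, x, and X_background are hypothetical placeholders: any fitted model with a predict method, a single example as a 1-D array, and a 2-D background dataset.

import numpy as np

def prediction_without_feature(model, x, feature_idx, X_background, n_samples=100, seed=0):
    """Approximate the prediction for x as if feature `feature_idx` were absent,
    by averaging predictions with that feature replaced by random background values."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, len(X_background), size=n_samples)
    X_mod = np.tile(x, (n_samples, 1))                        # n_samples copies of the example
    X_mod[:, feature_idx] = X_background[rows, feature_idx]   # overwrite the feature of interest
    return model.predict(X_mod).mean()                        # average over the background values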

The practical implementation of this approach is the SHAP (SHapley Additive exPlanations) library, which supports tree ensemble models built with XGBoost, LightGBM, CatBoost, scikit-learn, and PySpark. Like any Python library, SHAP is easy to install: just run "pip install shap" at the command line [6].

To be fair, it is worth noting that the values produced by the library are not exact Shapley vectors but an approximation of them. At the same time, for tree ensembles such as gradient tree boosting and for neural networks, the library uses additional information about the model structure to perform the computation in reasonable time [6]. Deep Learning, currently so popular, is also supported in the form of Deep SHAP, an algorithm for high-speed approximation of SHAP values on deep models. In this case, a distribution of background samples is used instead of a single reference value, and the Shapley equations are applied to linearize components such as max, softmax, etc. TensorFlow models and Keras models using the TensorFlow backend are supported, and there is also preliminary support for PyTorch [7].
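As a rough illustration of how Deep SHAP is typically invoked, the sketch below builds a shap.DeepExplainer from a sample of background data. The names model, X_train, and X_test are placeholders for a compiled Keras model and its feature arrays, not code from the article.

import numpy as np
import shap

# Use a random sample of the training data as the background distribution
background = X_train[np.random.choice(len(X_train), 100, replace=False)]

explainer = shap.DeepExplainer(model, background)   # Deep SHAP explainer
shap_values = explainer.shap_values(X_test[:10])    # approximate SHAP values for 10 test examples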

Speaking of interpreting the results of ML modeling, it is worth noting the rich data visualization functionality of the SHAP library. In particular, it supports the main types of plots most commonly used in Data Science: histograms, line plots, scatter plots, etc. This allows the results of ML modeling to be visualized and explained clearly to business users. Data visualization also helps Data Scientists themselves assess the adequacy of a model [8].
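For instance, besides the summary plot used in the example below, the library offers bar and dependence plots. The snippet assumes shap_values is an array of already computed SHAP values and df the corresponding feature DataFrame; the feature name "age" is purely illustrative.

import shap

shap.summary_plot(shap_values, df, plot_type="bar")   # bar chart of mean |SHAP| per feature
shap.dependence_plot("age", shap_values, df)          # scatter plot of a feature value vs. its SHAP value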

Practical applications of the SHAP library

Let's consider several real cases of SHAP application. For example, to analyze employee attrition in a company with an xgboost model, the following Python code plots the importance of the features used in the model (Fig. 1) [8]:

import shap

shap_test = shap.TreeExplainer(best_model).shap_values(df)
shap.summary_plot(shap_test, df, max_display=25, auto_size_plot=True)

Figure 1. SHAP plot of feature importance for employee attrition [8]

(The plot ranks features such as the size and timing of pay raises, e-mail activity at different times of day, annual grades, age, tenure in the position, job area, and employee position.)

The obtained graph is interpreted as follows [8]:

  • Points to the left of the central vertical line push the prediction toward the negative class (0), and points to the right toward the positive class (1), in terms of the confusion matrix of the predictive ML model;
  • The thickness of the line is directly proportional to the number of observations at that point;
  • The redder a point, the greater the value of the feature for that observation.

 

Thus, we can draw interesting conclusions from this graph and check their plausibility [8]:

  • The smaller an employee's pay raise, the more likely he or she is to quit;
  • In some regions the attrition is significantly higher;
  • The younger the employee, the higher the probability of quitting.

 

Based on such findings, we can draw a portrait of the quitting employee: he or she is quite young, single, has been in the same position for a long time without a salary increase, does not receive high annual grades, and communicates little with colleagues.

Similarly, we can identify the most significant factors of personal bankruptcy and predict the probability of this event for an individual client. Let's consider a SHAP chart from the financial scoring domain (Fig. 2) [3].

Figure 2. SHAP plot of feature importance for the probability of bankruptcy [3]

(The features shown include previous loans, interest payable, age, NBCP requests, days since the last payment, years of work experience, and income.)

The graph in Figure 2 is interpreted as follows [3]:

  • Each customer is marked as a dot;
  • Blue indicates clients with a low value of the corresponding variable, and red indicates clients with a high value;
  • The horizontal axis shows the effect of each variable on the predicted probability of default for an individual client.

 

This chart allows a quick assessment of whether the modeling results are in line with expectations and common sense. In the example under consideration, a low number of previous loans increases the predicted probability of bankruptcy for most clients, while long work experience and high income lead to a lower probability of default. Notably, SHAP not only reveals trends common to the entire sample but also explains the ML modeling results for each particular case.

 

In particular, the bankruptcy prediction for a particular customer can be explained with a separate chart that decomposes the forecast into the contributions of individual variables. The predicted probability of default for this client is 19%, which is quite high; low income and an open loan are the most important variables. In Figure 3, the features that increase the target variable are shown in red, and the features that decrease it are shown in blue. Such a diagram is useful for a bank employee or loan officer when assessing how much to trust the lending decision formed by the ML model [3].

Figure 3. Degree of influence of different predictors on the target variable for a particular case [3]
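A chart of this kind can typically be produced with the library's force plot. The sketch below assumes explainer, shap_values, and df come from a TreeExplainer as in the earlier attrition example, and client_idx (here simply 0) picks the client being explained.

import shap

shap.initjs()                                    # needed once to render force plots in a notebook
client_idx = 0
shap.force_plot(explainer.expected_value,        # the average (base) prediction
                shap_values[client_idx, :],      # this client's SHAP values
                df.iloc[client_idx, :])          # this client's feature values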

 

SHAP can also be used in web analytics, for example, to identify which factors have the greatest impact on on-site conversion. Moreover, its scatter plots show the significant variables more clearly than typical Yandex.Metrica or Google Analytics reports (Fig. 4) [6].

Figure 4. SHAP scatter plot of the importance of conversion factors, taking into account the geographical distribution of the online store's visitors [6]

 

Alternatives, advantages, and disadvantages of Shapley vectors

It is worth noting that Shapley vectors are not the only method for interpreting ML models and showing the effect of individual variables on the outcome. In practice, Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE) plots, and LIME (Local Interpretable Model-agnostic Explanations) are also widely used [3]. However, LIME, for example, does not guarantee a fair distribution of the prediction among the predictors. Moreover, the SHAP value is the only such method that provides a complete explanation. This is especially important when the law requires explainability, such as the GDPR right to explanation mentioned at the beginning of this article.

Also, Shapley vectors permit contrastive explanations, allowing a prediction to be compared with a subset of the data or a single case instead of the average of the entire sample. This feature distinguishes the SHAP method from LIME, which offers a local interpretation of the linear behavior of the ML model. Finally, a key advantage of SHAP is its solid mathematical basis, with axioms and proofs [9].

The flip side of all these merits is the following set of disadvantages [9]:

  • Long computation time, addressed in practice by approximations. The exact calculation of the Shapley value is computationally expensive because there are 2^k possible coalitions of feature values, and the "absence" of a feature must be simulated by drawing random instances, which increases the variance of the Shapley value estimate. The exponential number of coalitions is handled by sampling coalitions and limiting the number of iterations to M. Decreasing M reduces computation time but increases the variance of the Shapley value. There is no good rule of thumb for choosing M large enough to estimate the Shapley value accurately yet small enough to complete the computation in reasonable time (see the sampling sketch after this list).
  • The Shapley value can be misinterpreted. The estimated SHAP value should be interpreted as the contribution of an individual variable to the difference between the actual prediction and the average prediction, given the current set of feature values.
  • For sparse explanations (a small initial sample), it is better to use the local LIME method rather than the generalized SHAP approach.
  • SHAP returns a single value per feature, not a predictive model like LIME does. Therefore, it should not be used to make claims about changes in predictions when inputs change, such as "If I earned 300 more euros a year, my credit rating would increase by 5 points."
  • Calculating SHAP values for new data requires not only access to the prediction function but also data with which to replace the variable of interest with values from randomly selected instances. This can be avoided only by creating data instances that look like real data but are not part of the training sample.
  • Like other permutation-based interpretation methods, SHAP suffers from including unrealistic data points when features are correlated. To simulate that a feature value is absent from a coalition, it is replaced with values sampled independently of the other features. This is fine as long as the features are independent of each other. If the predictors are interdependent, we can either combine them so that they receive one common Shapley value, or adjust the data sampling procedure to account for feature dependence.
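For reference, the sampling idea from the first bullet can be sketched as follows: the Shapley value of one feature is estimated from M random feature orderings. The names model, x, and X_background are again hypothetical placeholders, as in the earlier background-replacement sketch.

import numpy as np

def sampled_shapley(model, x, feature_idx, X_background, M=200, seed=0):
    """Monte Carlo estimate of one feature's Shapley value from M random orderings."""
    rng = np.random.default_rng(seed)
    n_features = len(x)
    contributions = np.empty(M)
    for m in range(M):
        order = rng.permutation(n_features)                   # random ordering of the features
        pos = int(np.where(order == feature_idx)[0][0])
        z = X_background[rng.integers(len(X_background))]     # random background instance
        with_f = z.copy()
        with_f[order[:pos + 1]] = x[order[:pos + 1]]          # features up to and including ours come from x
        without_f = z.copy()
        without_f[order[:pos]] = x[order[:pos]]               # same, but our feature keeps its background value
        contributions[m] = (model.predict(with_f.reshape(1, -1))[0]
                            - model.predict(without_f.reshape(1, -1))[0])
    return contributions.mean()  # larger M lowers the variance but takes longer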

 

Summary

In conclusion, it is worth noting that, for all its capabilities in analyzing and visualizing ML models, SHAP is not a silver bullet that can be applied always and everywhere without regard to circumstances. In particular, SHAP values do not identify causality, which is better established experimentally or by analogy [10]. This corresponds to the maxim "after this does not mean because of this": when interpreting machine learning results with the SHAP library, it is worth remembering that it only shows the degree of influence of individual predictors on the target variable and does not explain other aspects of Machine Learning, such as choosing the model architecture or tuning neural network parameters. That is why SHAP is a great tool for a competent Data Scientist at the final stage of the work: it can partially automate some tasks without reducing the complexity of the job as a whole.

Sources

 
