Our purpose is to show how to use and interpret the Shapley values, plots, and other output produced by the SHAP package.
Currently, some plots in the notebook 'shap_tutorial.ipynb' may not render properly on GitHub's website. As an alternative, you can download the notebook and view it locally, or download and view its HTML version 'shap_tutorial.html'.
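For orientation, here is a minimal sketch of the kind of workflow the notebook walks through, assuming the shap, xgboost, and scikit-learn packages are installed. The dataset and model below are illustrative stand-ins, not necessarily those used in the tutorial.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Illustrative data and model, not the tutorial's own: a small regression
# dataset and a gradient-boosted tree ensemble.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot shows each feature's contribution to individual
# predictions, one dot per (sample, feature) pair.
shap.summary_plot(shap_values, X)
```

The notebook interprets these plots in detail; the sketch above is only meant to show where the Shapley values come from.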
To learn more about Shapley values, the SHAP package, and how these are used to help us interpret our machine learning models, please refer to these resources:
- A Unified Approach to Interpreting Model Predictions. Scott Lundberg, Su-In Lee.
- Consistent feature attribution for tree ensembles. Scott M. Lundberg, Su-In Lee.
- Consistent Individualized Feature Attribution for Tree Ensembles. Scott M. Lundberg, Gabriel G. Erion, Su-In Lee.
- A game theoretic approach to explain the output of any machine learning model (the SHAP package's GitHub repository).
- Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 5.9 Shapley Values. Christoph Molnar, December 17, 2019.
- Interpretable Machine Learning with XGBoost. Scott Lundberg, April 17, 2018.
- Explain Your Model with the SHAP Values. Dataman, September 14, 2019.
Christoph Molnar's book and Tim Miller's paper can provide further insight into the challenges and promise of machine learning interpretability:
- Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Christoph Molnar, December 17, 2019.
- Explanation in Artificial Intelligence: Insights from the Social Sciences. Tim Miller.
For my own blog post describing how machine learning interpretability can be used in healthcare, please see:
- Interpretability and the promise of healthcare AI. Andrew Fairless, Ph.D., Principal Data Scientist, January 23, 2020.