Explainable AI for Science and Medicine
Microsoft Research

Published on May 21, 2019

Understanding why a machine learning model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. Here I will present a unified approach to explaining the output of any machine learning model. It connects game theory with local explanations, uniting many previous methods. I will then focus specifically on tree-based models, such as random forests and gradient boosted trees, for which we have developed the first polynomial-time algorithm to exactly compute classic attribution values from game theory. Based on these methods, we have created a new set of tools for understanding both global model structure and individual model predictions. These methods were motivated by specific problems we faced in medical machine learning, and they significantly improve doctor decision support during anesthesia. However, these explainable machine learning methods are not specific to medicine and are now used by researchers across many domains. The associated open source software (http://github.com/slundberg/shap) supports many modern machine learning frameworks and is widely used in industry (including at Microsoft).
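As a concrete illustration of the workflow the abstract describes, here is a minimal sketch using the shap package with a gradient boosted tree model. The XGBoost classifier and the synthetic dataset are assumptions chosen for the example, not taken from the talk; TreeExplainer is the package's implementation of the exact tree attribution algorithm mentioned above.

```python
# Minimal sketch of the SHAP workflow for tree-based models.
# Assumptions for illustration: XGBoost as the model, synthetic data.
import shap
import xgboost
from sklearn.datasets import make_classification

# Hypothetical toy dataset; any tabular data would do.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Train a gradient boosted tree model (one of the model classes named above).
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# Exact Shapley value attributions for every feature of every prediction,
# computed with the polynomial-time algorithm for trees.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the model's structure overall.
shap.summary_plot(shap_values, X)

# Local view: how each feature pushed one individual prediction
# above or below the model's expected output.
shap.force_plot(explainer.expected_value, shap_values[0, :], X[0, :],
                matplotlib=True)
```

The two plot calls mirror the global/local split in the abstract: summary_plot aggregates attributions across the whole dataset, while force_plot explains a single prediction.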

See more at https://www.microsoft.com/en-us/resea...
