
Interpretability vs. Explainability in Machine Learning

Abstract: With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. Interpretable machine learning models, by contrast, come with their own explanations, which are faithful to what the model actually computes. In this talk, I will discuss some of the reasons that black boxes with explanations can go wrong, whereas inherently interpretable models would not have these same problems. I will give an example of where an explanation of a black box model went wrong: namely, I will discuss ProPublica's analysis of the COMPAS model used in the criminal justice system. ProPublica's explanation of the black box model COMPAS was flawed because it relied on wrong assumptions to identify the race variable as being important. Luckily, in recidivism prediction applications, black box models are not needed, because inherently interpretable models exist that are just as accurate as COMPAS. I will also give examples of interpretable models in healthcare. One of these models, the 2HELPS2B score, is actually used in intensive care units in hospitals; most machine learning models cannot be used when the stakes are so high. Finally, I will discuss two long-term projects my lab is working on: optimal sparse decision trees and interpretable neural networks.
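To make the "interpretable model" idea concrete: risk scores of the kind discussed in the talk (such as 2HELPS2B, or those produced by the Learning Optimized Risk Scores method) are short tables of small integer point values that a clinician can sum by hand, with the total mapped to a probability. The sketch below illustrates only the general form; the feature names, point values, and intercept are hypothetical placeholders, not the actual 2HELPS2B scoring table.

```python
import math

# Hypothetical scoring table: each binary feature contributes a small
# integer number of points, so a person can compute the score by hand.
POINTS = {
    "feature_a": 1,
    "feature_b": 1,
    "feature_c": 2,
}
INTERCEPT = -3  # hypothetical offset term

def risk(patient):
    """Sum the points for the features present in `patient`,
    then map the total score to a probability via a logistic link."""
    score = INTERCEPT + sum(pts for feat, pts in POINTS.items()
                            if patient.get(feat))
    return 1.0 / (1.0 + math.exp(-score))

# A patient with features a and c scores -3 + 1 + 2 = 0,
# which the logistic link maps to a probability of 0.5.
print(round(risk({"feature_a": True, "feature_c": True}), 2))
```

The model is its own explanation: the prediction is fully determined by a handful of visible integer weights, so there is nothing hidden to approximate post hoc.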

References:

- Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 2019. https://rdcu.be/bBCPd

- The Age of Secrecy and Unfairness in Recidivism Prediction. Harvard Data Science Review, 2020. https://hdsr.mitpress.mit.edu/pub/7z10o269

- Deep Learning for Interpretable Image Recognition. NeurIPS spotlight, 2019. https://arxiv.org/abs/1806.10574

- Optimal Sparse Decision Trees. NeurIPS spotlight, 2019. https://arxiv.org/abs/1904.12847

- Learning Optimized Risk Scores. Journal of Machine Learning Research, 2019. http://jmlr.org/papers/v20/18-615.html

Bio: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets & Quants in 2015, and was named by Business Insider as one of the 12 most impressive professors at MIT in 2015. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and the Institute of Mathematical Statistics. She is a Thomas Langford Lecturer at Duke University for 2019-2020.

Video "Interpretability vs. Explainability in Machine Learning" from the Stochastic Programming Society channel
Uploaded: June 15, 2020, 11:53:19
Duration: 01:14:26