Manipulating and Measuring Model Interpretability
Forough Poursabzi, Researcher, Microsoft Research
Presented at MLconf 2018
Abstract: Machine learning is increasingly used to make decisions that affect people’s lives in critical domains like criminal justice, fair lending, and medicine. While most of the research in machine learning focuses on improving the performance of models on held-out datasets, this is seldom enough to convince end-users that these models are trustworthy and reliable in the wild. To address this problem, a new line of research has emerged that focuses on developing interpretable machine learning methods and helping end-users make informed decisions. Despite the growing body of work in developing interpretable models, there is still no consensus on the definition and quantification of interpretability. In this talk, I will argue that to understand interpretability, we need to bring humans in the loop and run human-subject experiments. I approach the problem of interpretability from an interdisciplinary perspective which builds on decades of research in psychology, cognitive science, and social science to understand human behavior and trust. I will talk about a set of controlled user experiments, where we manipulated various design factors in models that are commonly thought to make them more or less interpretable and measured their influence on users’ behavior. Our findings emphasize the importance of studying how models are presented to people and empirically verifying that interpretable models achieve their intended effects on end-users.
See Forough's presentation slides on our SlideShare page here: https://www.slideshare.net/SessionsEvents/manipulating-and-measuring-model-interpretability