Explainable AI for Science and Medicine
Understanding why a machine learning model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. Here I will present a unified approach to explaining the output of any machine learning model. It connects game theory with local explanations, uniting many previous methods. I will then focus specifically on tree-based models, such as random forests and gradient boosted trees, for which we have developed the first polynomial-time algorithm to exactly compute classic attribution values from game theory. Based on these methods we have created a new set of tools for understanding both global model structure and individual model predictions. These methods were motivated by specific problems we faced in medical machine learning, and they significantly improve doctor decision support during anesthesia. However, these explainable machine learning methods are not specific to medicine and are now used by researchers across many domains. The associated open source software (http://github.com/slundberg/shap) supports many modern machine learning frameworks and is very widely used in industry (including at Microsoft).
See more at https://www.microsoft.com/en-us/research/video/explainable-ai-for-science-and-medicine/
A video from the Microsoft Research channel.