[XAI] Explainable AI in Retail | AISC
Speaker(s): Andrey Sharapov
Facilitator(s): Ali El-Sharif
Find the recording, slides, and more info at https://ai.science/e/xai-explainable-ai-in-retail--P5GX9My5lrZufnFvL9h6
Motivation / Abstract
Andrey will review approaches and tools for explaining ML models along with a retail use case.
What was discussed?
1) Some of these explainability methods, like LIME, are statistical methods that suffer from instability, giving different explanations for the same prediction. Have you encountered this issue in your practice, and if so, how did you address it?
2) You showed us a number of tools. Do you typically run them all, or just a few? Which three tools have you found most useful?
3) If you run multiple explainer tools and get different or contradictory explanations, what is your typical next step?
4) Have you had cases in which an explanation prompted you to go back and make changes to the model? If so, is there an example you could share?
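To make question 1 concrete, here is a minimal, self-contained sketch (plain Python, not the actual `lime` library) of the sampling-based local surrogate idea behind LIME: perturb the instance, weight samples by proximity, and fit a weighted linear model whose coefficients serve as the explanation. The black-box model and all function names here are illustrative assumptions. Because the perturbations are random, two runs with different seeds produce different coefficients for the same prediction, which is exactly the instability the question raises.

```python
import math
import random


def black_box(x1, x2):
    # Toy nonlinear "model" standing in for a trained classifier:
    # logistic of an interaction term.
    return 1.0 / (1.0 + math.exp(-(x1 * x2 + x1 - x2)))


def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x


def lime_like(x0, seed, n_samples=500, width=1.0):
    # LIME-style local surrogate: sample around x0, weight by proximity,
    # fit a weighted linear regression, return the two feature coefficients.
    rng = random.Random(seed)
    XtWX = [[0.0] * 3 for _ in range(3)]  # weighted normal equations
    XtWy = [0.0] * 3
    for _ in range(n_samples):
        p = (x0[0] + rng.gauss(0, width), x0[1] + rng.gauss(0, width))
        row = (1.0, p[0], p[1])  # intercept + two features
        d2 = (p[0] - x0[0]) ** 2 + (p[1] - x0[1]) ** 2
        w = math.exp(-d2 / width ** 2)  # proximity kernel
        y = black_box(*p)
        for i in range(3):
            XtWy[i] += w * row[i] * y
            for j in range(3):
                XtWX[i][j] += w * row[i] * row[j]
    beta = solve3(XtWX, XtWy)
    return (beta[1], beta[2])  # local attribution for each feature


# Same instance, different random seeds -> different explanations.
print(lime_like((0.5, -0.5), seed=0))
print(lime_like((0.5, -0.5), seed=1))
```

A given seed is reproducible, but across seeds the coefficients drift; common mitigations discussed in the literature include increasing the sample count, averaging explanations over many runs, or checking the surrogate's local fit quality before trusting it.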
------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details
Video: [XAI] Explainable AI in Retail | AISC, from the channel ML Explained - Aggregate Intellect - AISC
Video information
June 24, 2020, 8:01:49
Duration: 00:57:06