#047 Interpretable Machine Learning - Christoph Molnar
Christoph Molnar is one of the leading figures in interpretable machine learning (IML). In 2018 he released the first version of his influential online book, Interpretable Machine Learning. Interpretability is often a deciding factor when a machine learning (ML) model is used in a product, in a decision process, or in research. Interpretability methods can be used to discover knowledge, to debug or justify a model and its predictions, to control and improve the model, to reason about potential bias, and to increase the social acceptance of models. But interpretability methods can also be quite esoteric: they add an extra layer of complexity and potential pitfalls, and they require expert knowledge to understand. Is it even possible to understand complex models, or even humans for that matter, in any meaningful way?
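For listeners new to the area, here is a minimal sketch of one model-agnostic interpretability method covered in the book, permutation feature importance: shuffle one feature column and measure how much the model's error grows. The toy linear "black box" and data below are illustrative assumptions, not from the episode.

```python
import random

# Toy "black-box" model: prediction depends strongly on x0, weakly on x1.
def model(row):
    x0, x1 = row
    return 3.0 * x0 + 0.1 * x1

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, seed=0):
    """Increase in MSE after shuffling one feature column in place."""
    rng = random.Random(seed)
    baseline = mse(rows, targets)
    col = [r[feature] for r in rows]
    rng.shuffle(col)  # break the feature's association with the target
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, col):
        r[feature] = v
    return mse(permuted, targets) - baseline

rows = [(i / 10.0, (9 - i) / 10.0) for i in range(10)]
targets = [model(r) for r in rows]  # model fits this data exactly, so baseline MSE is 0

imp0 = permutation_importance(rows, targets, 0)
imp1 = permutation_importance(rows, targets, 1)
print(f"importance x0={imp0:.4f}, x1={imp1:.4f}")
```

Because the model leans almost entirely on x0, shuffling x0 degrades the error far more than shuffling x1; the method needs only predictions, never the model's internals, which is what "model-agnostic" means in this context.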
Introduction to IML [00:00:00]
Show Kickoff [00:13:28]
What makes a good explanation? [00:15:51]
Quantification of how good an explanation is [00:19:59]
Knowledge of the pitfalls of IML [00:22:14]
Are linear models even interpretable? [00:24:26]
Complex Math models to explain Complex Math models? [00:27:04]
Saliency maps are glorified edge detectors [00:28:35]
Challenge on IML -- feature dependence [00:36:46]
Don't leap to using a complex model! Surrogate models can be too dumb [00:40:52]
On airplane pilots. Seeking to understand vs testing [00:44:09]
IML could help us make better models or lead a better life [00:51:53]
Lack of statistical rigor and quantification of uncertainty [00:55:35]
On Causality [01:01:09]
Broadening out the discussion to the process or institutional level [01:08:53]
No focus on fairness / ethics? [01:11:44]
Is it possible to condition ML model training on IML metrics? [01:15:27]
Where is IML going? Some of the esoterica of the IML methods [01:18:35]
You can't compress information without common knowledge, the latter becomes the bottleneck [01:23:25]
IML methods used non-interactively? Making IML an engineering discipline [01:31:10]
Tim Postscript -- on the lack of effective corporate operating models for IML, security, engineering and ethics [01:36:34]
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (Molnar et al 2020)
https://arxiv.org/abs/2010.09337
Model-agnostic Feature Importance and Effects with Dependent Features -- A Conditional Subgroup Approach (Molnar et al 2020)
https://arxiv.org/abs/2006.04628
Explanation in Artificial Intelligence: Insights from the Social Sciences (Tim Miller 2018)
https://arxiv.org/pdf/1706.07269.pdf
Pitfalls to Avoid when Interpreting Machine Learning Models (Molnar et al 2020)
https://arxiv.org/abs/2007.04131
Seven Myths in Machine Learning Research (Chang 2019)
Myth 7: Saliency maps are robust ways to interpret neural networks
https://arxiv.org/pdf/1902.06789.pdf
Sanity Checks for Saliency Maps (Adebayo 2020)
https://arxiv.org/pdf/1810.03292.pdf
Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.
https://christophm.github.io/interpretable-ml-book/
Christoph Molnar:
https://www.linkedin.com/in/christoph-molnar-63777189/
https://machine-master.blogspot.com/
https://twitter.com/ChristophMolnar
Please show your appreciation and buy Christoph's book here:
https://www.lulu.com/shop/christoph-molnar/interpretable-machine-learning/paperback/product-24449081.html?page=1&pageSize=4
Panel:
Connor Tann https://www.linkedin.com/in/connor-tann-a92906a1/
Dr. Tim Scarfe
Dr. Keith Duggar
Pod Version:
https://anchor.fm/machinelearningstreettalk/episodes/047-Interpretable-Machine-Learning---Christoph-Molnar-eshafn
Video #047 Interpretable Machine Learning - Christoph Molnar, from the Machine Learning Street Talk channel
Video information
Published: March 14, 2021, 17:57:55
Duration: 01:40:22