Weiwei Pan: What Are Useful Uncertainties in Deep Learning and How Do We Get Them? | IACS Seminar
Presented by Weiwei Pan, Harvard University
Talk Description: While deep learning has demonstrable success on many tasks, the point estimates provided by standard deep models can lead to overfitting and provide no uncertainty quantification for predictions. However, when models are applied to critical domains such as autonomous driving, precision health care, or criminal justice, reliable measurements of a model's predictive uncertainty may be as crucial as the correctness of its predictions. At the same time, increasing attention in the recent literature is being paid to separating sources of predictive uncertainty, with the goal of distinguishing types of uncertainty that are reducible through additional data collection from those that represent stochasticity inherent in the data generation process. In this talk, Dr. Pan will examine a number of deep (Bayesian) models that promise to capture complex forms of predictive uncertainty. She will also examine metrics commonly used to evaluate such uncertainties. Her aim is to highlight strengths and limitations of the models as well as the metrics; she will discuss potential ways to improve both in meaningful ways for downstream tasks.
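The separation the description refers to is often formalized via the law of total variance: averaging an ensemble's predictive distributions splits total predictive variance into an aleatoric part (mean of the members' noise variances, irreducible) and an epistemic part (variance of the members' means, reducible with more data). The sketch below is not from the talk; it is a minimal toy illustration with made-up numbers, assuming each ensemble member predicts a Gaussian for the same input.

```python
# Toy illustration: decomposing predictive uncertainty for an ensemble
# whose members each predict a Gaussian N(mu_m, sigma_m^2) at one input.
import numpy as np

rng = np.random.default_rng(0)

M = 5
mus = rng.normal(loc=1.0, scale=0.3, size=M)  # per-member predictive means
sigmas = np.full(M, 0.5)                      # per-member noise std devs

# Law of total variance for the ensemble mixture:
#   total = E_m[sigma_m^2]  (aleatoric: inherent data noise)
#         + Var_m[mu_m]     (epistemic: disagreement among members)
aleatoric = np.mean(sigmas**2)
epistemic = np.var(mus)
total = aleatoric + epistemic

print(f"aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}  total={total:.3f}")
```

Under this decomposition, collecting more training data tends to shrink the epistemic term (members agree more), while the aleatoric term reflects noise that no amount of data removes.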
Speaker Bio: Weiwei Pan is a Research Associate and Lecturer on Computational Science at the Institute for Applied Computational Science at Harvard University. You can read more about Weiwei and her research interests here: https://iacs.seas.harvard.edu/people/weiwei-pan.
For more information about the IACS seminar series, please visit our website at https://iacs.seas.harvard.edu/iacs-seminar-series.
Video: Weiwei Pan: What Are Useful Uncertainties in Deep Learning and How Do We Get Them? | IACS Seminar, from the Harvard Institute for Applied Computational Science channel.
Video information: published September 22, 2020, 4:01:01. Duration: 01:11:59.