Yee Whye Teh: On Bayesian Deep Learning and Deep Bayesian Learning (NIPS 2017 Keynote)
Breiman Lecture by Yee Whye Teh on Bayesian Deep Learning and Deep Bayesian Learning.
Abstract:
Probabilistic and Bayesian reasoning is one of the principal theoretical pillars of our understanding of machine learning. Over the last two decades, it has inspired a whole range of successful machine learning methods and influenced the thinking of many researchers in the community. On the other hand, in the last few years the rise of deep learning has completely transformed the field and led to a string of phenomenal, era-defining successes. In this talk I will explore the interface between these two perspectives on machine learning, and through a number of projects I have been involved in, explore questions like: How can probabilistic thinking help us understand deep learning methods or lead us to interesting new methods? Conversely, how can deep learning technologies help us develop advanced probabilistic methods?
Bio:
I am a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford, and a Research Scientist at DeepMind. I am also an Alan Turing Institute Fellow and a European Research Council Consolidator Fellow. I obtained my Ph.D. at the University of Toronto (working with Geoffrey Hinton), and did postdoctoral work at the University of California at Berkeley (with Michael Jordan) and the National University of Singapore (as a Lee Kuan Yew Postdoctoral Fellow). I was a Lecturer and then a Reader at the Gatsby Computational Neuroscience Unit, UCL, and a tutorial fellow at University College Oxford, prior to my current appointment. I am interested in the statistical and computational foundations of intelligence, and work on scalable machine learning, probabilistic models, Bayesian nonparametrics, and deep learning. I was programme co-chair of ICML 2017 and AISTATS 2010.
Video: Yee Whye Teh: On Bayesian Deep Learning and Deep Bayesian Learning (NIPS 2017 Keynote), from the channel Steven Van Vaerenbergh