Emtiyaz Khan - Bayesian Principles for Machine Learning
Abstract:
Humans and animals have a natural ability to autonomously learn and quickly adapt to their surroundings. How can we design machines that do the same? In this talk, I will present Bayesian principles to bridge this gap between humans and machines. I will show that a wide variety of machine-learning algorithms are instances of a single learning rule derived from Bayesian principles. The rule reveals a dual perspective that yields new mechanisms for knowledge transfer in learning machines. My hope is to convince the audience that Bayesian principles are indispensable for an AI that learns as efficiently as we do.
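As a rough illustration of the "single learning rule" claim (this sketch is not part of the original abstract): in the Khan and Rue preprint listed below, the Bayesian learning rule performs natural-gradient descent on the natural parameters of an exponential-family posterior, and restricting the posterior to a Gaussian with fixed covariance recovers plain (stochastic) gradient descent. The snippet below assumes that special case; the toy data, `loss_grad`, and all parameter values are illustrative.

```python
import numpy as np

# A minimal sketch of the Bayesian learning rule (Khan & Rue, preprint),
# specialized to a Gaussian posterior q = N(m, s2 * I) with FIXED variance s2.
# In this special case, the natural-gradient update on the posterior mean
# collapses to an SGD-like step on the expected loss -- one instance of the
# "many algorithms are one rule" claim in the abstract.

rng = np.random.default_rng(0)

def loss_grad(theta, X, y):
    """Gradient of the least-squares loss 0.5 * ||X @ theta - y||^2 (illustrative)."""
    return X.T @ (X @ theta - y)

# Toy linear-regression data (illustrative).
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + 0.1 * rng.normal(size=100)

m = np.zeros(3)   # posterior mean: the only free parameter in this special case
s2 = 1e-2         # fixed posterior variance
rho = 1e-3        # learning rate

for _ in range(2000):
    # E_q[grad loss] approximated by a single Monte Carlo sample from q.
    theta = m + np.sqrt(s2) * rng.normal(size=3)
    g = loss_grad(theta, X, y)
    # With fixed covariance, the Bayesian learning rule's mean update
    # reduces to: m <- m - rho * E_q[grad loss].
    m = m - rho * g

print("estimated mean:", m)  # should approach true_theta
```

Richer posterior choices and further approximations within the same rule recover other familiar algorithms (e.g., Newton-style and Adam-like updates), which is the sense in which many algorithms are instances of one rule.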
Bio:
Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. He is also an external professor at the Okinawa Institute of Science and Technology (OIST). Previously, he was a postdoc and then a scientist at École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine-learning courses and received a teaching award. He finished his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti’s research is to understand the principles of learning from data and to use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as derive new ones, to solve real-world problems.
References for the first part:
- The Bayesian Learning Rule, (Preprint) M.E. Khan, H. Rue.
- Practical Deep Learning with Bayesian Principles, (NeurIPS 2019) K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R.E. Turner, R. Yokota, M.E. Khan.
- Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models, (AISTATS 2017) M.E. Khan, W. Lin.
References for the second part:
- Knowledge-Adaptation Priors, (Preprint) M.E. Khan, S. Swaroop.
- Continual Deep Learning by Functional Regularisation of Memorable Past, (NeurIPS 2020) P. Pan*, S. Swaroop*, A. Immer, R. Eschenhagen, R.E. Turner, M.E. Khan.
- Approximate Inference Turns Deep Networks into Gaussian Processes, (NeurIPS 2019) M.E. Khan, A. Immer, E. Abedi, M. Korzepa.
Video: Emtiyaz Khan - Bayesian Principles for Machine Learning, from the Secondmind channel.