Vincent Adam - Sparse methods for Markovian GPs
Abstract:
Gaussian processes (GPs) provide rich priors for time-series models. Markovian GPs with one-dimensional inputs have an equivalent representation as stochastic differential equations (SDEs), whose structure allows the derivation of fast (approximate) inference algorithms. Their computational complexity typically scales linearly with the number of data points, O(N), but the computations are inherently sequential. Using inducing states of this SDE to support a sparse GP approximation to the posterior process leads to further computational savings by making the O(N) computation parallelizable. I will present various approximate inference algorithms based on this sparse approximation, including Laplace, expectation propagation, and variational inference, and I will discuss their performance guarantees and comparative advantages.
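To make the SDE/state-space view concrete, here is a minimal sketch (not code from the talk; the kernel choice, parameter names, and observation model are illustrative assumptions). It writes a Matérn-1/2 GP prior as a linear SDE, the Ornstein-Uhlenbeck process, and runs a Kalman filter over noisy observations, illustrating the sequential O(N) inference the abstract refers to:

```python
import numpy as np

# Minimal sketch (illustrative, not code from the talk): the Matérn-1/2
# kernel k(tau) = sigma2 * exp(-|tau| / ell) is the stationary covariance
# of the Ornstein-Uhlenbeck SDE
#     dx(t) = -(1/ell) x(t) dt + sqrt(2 * sigma2 / ell) dB(t),
# so GP regression reduces to Kalman filtering: one O(N) sequential pass.

def kalman_filter_matern12(t, y, ell=1.0, sigma2=1.0, noise_var=0.1):
    """Filtering marginals of a Matérn-1/2 GP under Gaussian noise."""
    m, P = 0.0, sigma2                     # stationary prior mean / variance
    means, variances = [], []
    t_prev = t[0]
    for tk, yk in zip(t, y):
        # Predict: exact discretisation of the OU SDE over the time gap.
        A = np.exp(-(tk - t_prev) / ell)   # state transition
        Q = sigma2 * (1.0 - A ** 2)        # accumulated process noise
        m, P = A * m, A ** 2 * P + Q
        # Update: conjugate Gaussian observation y_k = x(t_k) + noise.
        S = P + noise_var                  # innovation variance
        K = P / S                          # Kalman gain
        m, P = m + K * (yk - m), (1.0 - K) * P
        means.append(m)
        variances.append(P)
        t_prev = tk
    return np.array(means), np.array(variances)

# Usage: one linear pass over 1000 noisy samples of a latent function.
t = np.linspace(0.0, 10.0, 1000)
y = np.sin(t) + 0.3 * np.random.randn(t.size)
mu, var = kalman_filter_matern12(t, y)
```

A backward smoothing pass would give the full posterior marginals; the sparse inducing-state approximations discussed in the talk target exactly this sequential O(N) structure and replace it with a parallelizable computation.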
Speaker: Vincent Adam is a Senior Machine Learning Researcher at Secondmind.ai and a postdoctoral researcher at Aalto University, Finland. More information can be found on his personal website: https://vincentadam87.github.io/
This talk was given at Secondmind Labs as part of our (virtual) research seminar. The seminar is where we exchange ideas with guest speakers, keeping you up to date with the latest developments and inspiring research topics. Occasionally, Secondmind researchers present their own work as well. You can find a complete list of speakers at https://www.secondmind.ai/labs/seminars/. Learn more about Secondmind Labs at https://www.secondmind.ai/labs/