
Arthur Gretton - Generalized Energy-Based Models

Abstract: I will introduce Generalized Energy-Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high-dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator"). In particular, while the energy function is analogous to the GAN critic function, it is not discarded after training. GEBMs are trained by alternating between learning the energy and the base. We show that both training stages are well-defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in this space that produce better-quality samples. Empirically, the GEBM samples on image-generation tasks are of much better quality than those from the learned generator alone, indicating that all else being equal, the GEBM will outperform a GAN of the same complexity. GEBMs also return state-of-the-art performance on density modelling tasks when using base measures with an explicit form.
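The abstract compresses the whole pipeline into a few sentences, so a rough sketch may help fix ideas. Below is a minimal, illustrative PyTorch sketch of the structure described above, not the talk's actual method: the toy networks, the sample-based generalized_likelihood estimate, the loss signs, and the unadjusted Langevin sampler gebm_sample are all simplifying assumptions, and the regularization and other details from the GEBM paper are omitted.

    # Minimal GEBM sketch (PyTorch). All names and architectures are illustrative.
    import math
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    latent_dim, data_dim, batch = 16, 64, 128

    # Base distribution: an implicit model pushing Gaussian latents through a network.
    generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    # Energy function: refines the probability mass on the support learned by the base.
    energy = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    e_opt = torch.optim.Adam(energy.parameters(), lr=1e-4)

    def generalized_likelihood(x_data, x_base):
        # Sample-based stand-in for the generalized likelihood: mean energy on
        # data minus a log-partition estimate over base samples.
        log_z = torch.logsumexp(energy(x_base).squeeze(-1), dim=0) - math.log(x_base.shape[0])
        return energy(x_data).mean() - log_z

    for step in range(200):
        x_data = torch.randn(batch, data_dim)  # toy "data"; replace with a real loader
        # Energy step: maximize the generalized likelihood with the base held fixed.
        x_base = generator(torch.randn(batch, latent_dim)).detach()
        e_loss = -generalized_likelihood(x_data, x_base)
        e_opt.zero_grad(); e_loss.backward(); e_opt.step()
        # Base step: the energy-based loss provides gradients for the generator.
        x_base = generator(torch.randn(batch, latent_dim))
        g_loss = generalized_likelihood(x_data, x_base)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    def gebm_sample(n, n_mcmc=100, step_size=1e-2):
        # Draw GEBM samples: run Langevin MCMC on the latent posterior
        # p(z) ∝ exp(E(g(z))) N(z; 0, I), then push the latents through the base.
        z = torch.randn(n, latent_dim, requires_grad=True)
        for _ in range(n_mcmc):
            log_p = energy(generator(z)).sum() - 0.5 * (z ** 2).sum()
            grad, = torch.autograd.grad(log_p, z)
            with torch.no_grad():
                z = z + 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(z)
            z.requires_grad_(True)
        return generator(z).detach()

    samples = gebm_sample(8)

Note how the sketch reflects the abstract's two points: the energy is kept after training (it shapes the latent posterior), and GEBM samples come from the same generator as a GAN's, but with latents refined by MCMC rather than drawn directly from the prior.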

Speaker: Arthur Gretton is a Professor at the Gatsby Computational Neuroscience Unit and Director of the Centre for Computational Statistics and Machine Learning at University College London. His personal website can be found at http://www.gatsby.ucl.ac.uk/~gretton/.

This talk was given at Secondmind Labs, as part of our (virtual) research seminar. Our research seminar is where we exchange ideas with guest speakers, keeping you up to date with the latest developments and inspiring research topics. Occasionally, Secondmind researchers present their own work as well. You can find a complete list of speakers at https://www.secondmind.ai/labs/seminars/. Learn more about Secondmind Labs at https://www.secondmind.ai/labs/.

Video "Arthur Gretton - Generalized Energy-Based Models" from the Secondmind channel.
Video information
Published: 16 April 2021, 18:12:14
Duration: 01:06:42
Other videos from the channel
Sebastian Farquhar - Unbiased Active Learning and Testing
Emtiyaz Khan - Bayesian Principles for Machine Learning
World Summit AI Roundtable - Making Sense of Data (Part One)
Roberto Calandra - Bayesian optimization for robotics
Ítalo Gomes Gonçalves - Variational Gaussian processes for spatial modeling: the geoML project
Antonio Del Rio Chanona - Multi-Fidelity Bayesian Optimization in Chemical Engineering
François-Xavier Briol - Bayesian Estimation of Integrals: A Multi-task Approach
Peter Stone - Efficient Robot Skill Learning
Luigi Nardi - Harnessing new information in Bayesian optimization
Andrew G. Wilson - How do we build models that learn and generalize?
M. E. Taylor - Reinforcement Learning in the Real-world: How to "cheat" and still feel good about it
Arno Solin - Stationary Activations for Uncertainty Calibration in Deep Learning
Aryan Deshwal - Bayesian Optimization over Combinatorial Structures
Vincent Adam - Sparse methods for markovian GPs
François Bachoc - Sequential construction and dimension reduction of GP under inequality constraints
World Summit AI Roundtable - Making Sense of Data (Part Two)
Pablo Moreno-Muñoz - Model Recycling with Gaussian Processes
Mojmír Mutný - Optimal Experiment Design in Markov Chains
Christopher Nemeth - Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates
José Miguel Hernández-Lobato - Probabilistic Methods for Increased Robustness in Machine Learning
Frank Hutter - Towards Deep Learning 2.0: Going to the Meta-Level