
Scalable Bayesian Inference in Low-Dimensional Subspaces

Slides: https://bayesgroup.github.io/bmml_sem/2019/Kirichenko%26Izmailov_Output.pdf

Pavel Izmailov and Polina Kirichenko, New York University

Bayesian methods can provide full predictive distributions and well-calibrated uncertainties in modern deep learning. However, scaling Bayesian inference techniques to deep neural networks (DNNs) is challenging due to the high dimensionality of the parameter space. In this talk, we will discuss two recent papers on scalable Bayesian inference that share the same high-level idea: performing approximate inference in low-dimensional subspaces of the DNN parameter space.
In Subspace Inference for Bayesian Deep Learning [1], we propose to exploit the geometry of DNN training objectives to construct low-dimensional subspaces that contain diverse sets of models. In these subspaces, we are able to apply a wide range of advanced approximate inference methods, such as elliptical slice sampling and variational inference, that struggle in the full parameter space. We show that Bayesian model averaging over the induced posterior in these subspaces leads to strong performance in terms of accuracy and uncertainty quantification on regression and image classification tasks.
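To make the subspace idea concrete, here is a minimal sketch (not the authors' code): it builds a low-dimensional subspace around a pretrained solution, runs elliptical slice sampling on the subspace coordinates under a Gaussian prior, and averages the resulting predictions. A toy linear model stands in for the DNN, and the subspace basis is random here purely for illustration; the names w_hat, P, and ess_step are assumptions made for this example, and the paper itself constructs subspaces from the training trajectory (e.g., a PCA subspace of SGD iterates).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and "network": a linear model standing in for a DNN (assumption,
# chosen only to keep the sketch self-contained and runnable).
D, N, K = 20, 100, 2                           # full dim, data points, subspace dim
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
y = X @ w_true + 0.1 * rng.normal(size=N)

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # pretrained solution (SGD stand-in)
P, _ = np.linalg.qr(rng.normal(size=(D, K)))   # orthonormal basis of a random subspace

def log_lik(z, noise=0.1):
    """Log-likelihood of the data at full weights w = w_hat + P z."""
    w = w_hat + P @ z
    resid = y - X @ w
    return -0.5 * np.sum(resid ** 2) / noise ** 2

def ess_step(z, log_lik, prior_scale=1.0):
    """One elliptical slice sampling update under a N(0, prior_scale^2 I) prior on z."""
    nu = prior_scale * rng.normal(size=z.shape)
    log_y = log_lik(z) + np.log(rng.uniform())
    theta = rng.uniform(0.0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        z_new = z * np.cos(theta) + nu * np.sin(theta)
        if log_lik(z_new) > log_y:
            return z_new
        lo, hi = (theta, hi) if theta < 0 else (lo, theta)  # shrink the bracket
        theta = rng.uniform(lo, hi)

# Sample subspace coordinates and form a Bayesian model average of predictions.
z, samples = np.zeros(K), []
for _ in range(500):
    z = ess_step(z, log_lik)
    samples.append(z)
preds = np.stack([X @ (w_hat + P @ z) for z in samples[100:]])  # drop burn-in
bma_mean, bma_std = preds.mean(0), preds.std(0)                 # predictive mean / uncertainty
```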
In Projected BNNs [2], the authors propose a variational inference framework for Bayesian neural networks that (1) encodes complex distributions in high-dimensional parameter space with representations in a low-dimensional latent space, and (2) performs inference efficiently on the low-dimensional representations.
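As a rough illustration of the latent-weight idea (again, not the authors' implementation), the sketch below learns a linear decoder from a low-dimensional latent code to the weights of a tiny MLP, together with a Gaussian variational posterior over the code, by maximizing a reparameterized ELBO; predictions are then averaged over latent samples. The toy model, dimensions, and all names are assumptions made for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data; a one-hidden-layer MLP stands in for the BNN (assumption).
N, D_in, H = 128, 4, 16
X = torch.randn(N, D_in)
y = torch.sin(X.sum(dim=1, keepdim=True)) + 0.1 * torch.randn(N, 1)

def mlp_forward(x, w):
    """Unpack a flat weight vector w into the MLP and evaluate it on x."""
    i = 0
    W1 = w[i:i + D_in * H].view(H, D_in); i += D_in * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H].view(1, H); i += H
    b2 = w[i:i + 1]
    return torch.tanh(x @ W1.T + b1) @ W2.T + b2

n_weights = D_in * H + H + H + 1
latent_dim = 5

# Decoder from the low-dimensional latent code z to the full weight vector,
# plus a Gaussian variational posterior q(z) = N(mu, diag(sigma^2)).
decoder = nn.Linear(latent_dim, n_weights)
mu = nn.Parameter(torch.zeros(latent_dim))
log_sigma = nn.Parameter(torch.zeros(latent_dim))
opt = torch.optim.Adam(list(decoder.parameters()) + [mu, log_sigma], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    # Reparameterized sample of the latent code, decoded into network weights.
    z = mu + log_sigma.exp() * torch.randn(latent_dim)
    w = decoder(z)
    # Gaussian likelihood and KL(q(z) || N(0, I)); the ELBO is their difference.
    log_lik = -0.5 * ((mlp_forward(X, w) - y) ** 2).sum() / 0.1 ** 2
    kl = 0.5 * (mu ** 2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()
    (-(log_lik - kl)).backward()
    opt.step()

# Predict by averaging over latent samples (model averaging in the latent space).
with torch.no_grad():
    preds = torch.stack([
        mlp_forward(X, decoder(mu + log_sigma.exp() * torch.randn(latent_dim)))
        for _ in range(50)
    ])
    pred_mean, pred_std = preds.mean(0), preds.std(0)
```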

Video: Scalable Bayesian Inference in Low-Dimensional Subspaces, from the BayesGroup.ru channel.