
Sparse Bayesian Variational Learning with Matrix Normal Distributions

Slides: https://bayesgroup.github.io/bmml_sem/2019/Rudnev_Sparse%20Variational%20Learning%20with%20Matrix%20Normal%20Distributions.pdf

The application of variational Bayesian methods to neural networks has been limited by the choice of the posterior approximation family. One could use a simple family such as a normal distribution with independent variables, but this yields a poor approximation and causes optimization issues. In this paper we propose the Matrix Normal (MN) distribution as the variational approximation family. While being more flexible, this family supports efficient reparameterization and Riemannian optimization procedures. We apply this family to Bayesian neural network sparsification through Automatic Relevance Determination (Kharitonov et al., 2018). We show that the MN family outperforms simpler fully-factorized Gaussians, especially in the case of group sparsification, while remaining as computationally efficient as the latter. We also analyze the application of the MN distribution for inference in the Variational Auto-Encoder model.
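The efficient reparameterization mentioned in the abstract can be illustrated with a minimal sketch (not the authors' code): a matrix normal sample W ~ MN(M, U, V) can be written as W = M + A E B^T, where E has i.i.d. standard normal entries and A, B are factors (e.g. Cholesky) with U = A A^T, V = B B^T. The function name and signature below are illustrative assumptions.

```python
import numpy as np

def sample_matrix_normal(M, A, B, E=None, rng=None):
    """Draw one reparameterized sample from MN(M, U, V),
    where U = A @ A.T (row covariance) and V = B @ B.T (column covariance).

    W = M + A @ E @ B.T with E ~ N(0, I), so the sample is a
    differentiable function of the variational parameters M, A, B.
    """
    n, p = M.shape
    if E is None:
        rng = np.random.default_rng() if rng is None else rng
        E = rng.standard_normal((n, p))  # standard normal noise matrix
    return M + A @ E @ B.T
```

This is equivalent to sampling vec(W) ~ N(vec(M), V ⊗ U), but it avoids ever forming the np × np Kronecker covariance, which is what makes the MN family as cheap as a fully-factorized Gaussian for matrix-shaped weights.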

Video "Sparse Bayesian Variational Learning with Matrix Normal Distributions" from the BayesGroup.ru channel.