Sparse Bayesian Variational Learning with Matrix Normal Distributions
Slides: https://bayesgroup.github.io/bmml_sem/2019/Rudnev_Sparse%20Variational%20Learning%20with%20Matrix%20Normal%20Distributions.pdf
The application of variational Bayesian methods to neural networks has been limited by the choice of the posterior approximation family. One could use a simple family, such as a normal distribution with independent components, but that results in a poor approximation and causes optimization issues. In the paper we propose to use the Matrix Normal (MN) distribution as the variational approximation family. While being more flexible, this family still supports efficient reparameterization and Riemannian optimization procedures. We apply this family to the sparsification of Bayesian neural networks through Automatic Relevance Determination (Kharitonov et al., 2018). We show that the MN family outperforms the simpler fully-factorized Gaussian family, especially in the case of group sparsification, while remaining just as computationally efficient. We also analyze the application of the MN distribution to inference in the Variational Auto-Encoder model.
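The talk itself is not transcribed here, but the efficient reparameterization the abstract alludes to is easy to make concrete. A minimal sketch (an illustration, not code from the paper, assuming PyTorch): a sample W ~ MN(M, U, V) can be written as W = M + A E B^T, where E has iid standard normal entries and A, B are factors (e.g., Cholesky) of the row covariance U = A A^T and the column covariance V = B B^T. Gradients then flow to M, A, and B, while the parameter count stays at O(n^2 + p^2) instead of the O(n^2 p^2) of a full Gaussian over an n x p weight matrix.

```python
import torch

def sample_matrix_normal(M, A, B):
    """Reparameterized sample W ~ MN(M, A @ A.T, B @ B.T).

    M: (n, p) mean matrix; A: (n, n) row-covariance factor;
    B: (p, p) column-covariance factor. All names are illustrative.
    """
    E = torch.randn_like(M)   # E_ij ~ N(0, 1), iid
    return M + A @ E @ B.T    # vec(A E B^T) has covariance (B B^T) ⊗ (A A^T)

# Hypothetical usage: a differentiable weight sample for a 256 -> 128 layer.
n, p = 128, 256
M = torch.zeros(n, p, requires_grad=True)
A = torch.eye(n, requires_grad=True)  # identity factors as a starting point
B = torch.eye(p, requires_grad=True)
W = sample_matrix_normal(M, A, B)     # shape (n, p); gradients reach M, A, B
```

Sampling costs just two matrix products, which is why the abstract can claim the same computational efficiency as a fully-factorized Gaussian.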
Video "Sparse Bayesian Variational Learning with Matrix Normal Distributions" from the BayesGroup.ru channel.
Video information
Published: April 12, 2019, 23:58:43
Duration: 00:36:58
Other videos from the channel
- Scalable Bayesian Inference in Low-Dimensional Subspaces
- Autoformer and Autoregressive Denoising Diffusion Models for Time Series Forecasting [in Russian]
- Stochastic computational graphs: optimization and applications in NLP, Maksim Kretov
- [DeepBayes2018]: Day 2, lecture 4. Discrete latent variables
- Tensor Train Decomposition for Fast Learning in Large Scale Gaussian Process Models, Dmitry Kropotov
- Random Matrices: Theory and Applications
- Neural Program Synthesis, part 2 [in Russian]
- Hyperbolic Deep Learning [in Russian]
- Tensor Programs, part 2 [in Russian]
- Discovering Faster Matrix Multiplication Algorithms with Reinforcement Learning [in Russian]
- Learning Differential Equations that are easy to solve [in Russian]
- SketchBoost: fast boosting for multiclass/multilabel classification and multitask regression
- [DeepBayes2018]: Day 3, Practical session 5. Distributional reinforcement learning
- On Power Laws in Deep Ensembles [in Russian]
- [DeepBayes2018]: Day 2, practical session 5. Variational autoencoders
- [DeepBayes2019]: Day 5, Sponsor talk
- Mathematical Models of the Genetic Architecture in Complex Human Disorders
- Controlling GANs Latent Space [in Russian]
- [DeepBayes2019]: Day 2, practical session 2. Variational autoencoders
- Predicting Oil Movement in a Development System using Deep Latent Dynamics Models