Learning Restricted Boltzmann Machines
Ankur Moitra (MIT)
https://simons.berkeley.edu/talks/tbd-394
Algorithmic Aspects of Causal Inference
Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables, which is the focus of our work.
In particular, we study Restricted Boltzmann Machines (RBMs), a popular model with wide-ranging applications. We give a simple greedy algorithm for learning ferromagnetic RBMs. Our analysis is based on tools from mathematical physics that were developed to show the concavity of magnetization. Conversely, we show that even for a constant number of latent variables with constant degree, without ferromagneticity the problem is as hard as sparse parity with noise. This hardness result is based on a sharp and surprising characterization of the representational power of bounded-degree RBMs.
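To make the model concrete: an RBM is a bipartite undirected graphical model over visible and latent (hidden) spin variables, and the ferromagnetic case restricts all interactions to be non-negative. The sketch below is not the talk's algorithm; it is a minimal, hypothetical illustration of the RBM energy function and block Gibbs sampling over ±1 spins, with made-up dimensions and random weights.

```python
import numpy as np

def rbm_energy(v, h, W, b, c):
    """Energy of a visible/hidden configuration:
    E(v, h) = -v^T W h - b^T v - c^T h."""
    return -(v @ W @ h + b @ v + c @ h)

def gibbs_step(v, W, b, c, rng):
    """One block-Gibbs update: sample hidden units given visible, then
    visible given hidden. The bipartite structure makes each block
    conditionally independent, so it can be sampled in one shot."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # For +/-1 spins: P(h_j = +1 | v) = sigmoid(2 * (W^T v + c)_j)
    h = np.where(rng.random(c.size) < sigmoid(2 * (W.T @ v + c)), 1.0, -1.0)
    v = np.where(rng.random(b.size) < sigmoid(2 * (W @ h + b)), 1.0, -1.0)
    return v, h

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 2  # toy sizes, chosen for illustration only

# Ferromagnetic case: all pairwise interactions are non-negative.
W = rng.random((n_vis, n_hid)) * 0.5
b = np.zeros(n_vis)
c = np.zeros(n_hid)

v = np.where(rng.random(n_vis) < 0.5, 1.0, -1.0)
for _ in range(100):
    v, h = gibbs_step(v, W, b, c, rng)
print(rbm_energy(v, h, W, b, c))
```

With non-negative weights and zero biases, the fully aligned configuration (all spins +1, or all -1) minimizes the energy, which is the intuition behind ferromagneticity.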
Based on joint work with Guy Bresler and Frederic Koehler.
Video: Learning Restricted Boltzmann Machines, from the Simons Institute channel