From Compression to Convection: A Latent Variable Perspective
Abstract: Latent variable models have been an integral part of probabilistic machine learning, ranging from simple mixture models to variational autoencoders to powerful diffusion probabilistic models at the center of recent media attention. Perhaps less well-appreciated is the intimate connection between latent variable models and compression, and the potential of these models for advancing natural science. I will begin by showcasing connections between variational methods and the theory and practice of neural data compression, ranging from constructing learnable codecs to assessing the fundamental compressibility of real-world data, such as images and particle physics data. I will then connect this lossy compression perspective to climate science problems, which often involve distribution shifts between unlabeled datasets, such as simulation data from different models or data simulated under different assumptions (e.g., global average temperatures). I will show that a combination of non-linear dimensionality reduction and vector quantization can assess the magnitude of these shifts and enable intercomparisons of different climate simulations. Additionally, when combined with physical model assumptions, this approach can provide insights into the implications of global warming on extreme precipitation.
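The abstract's recipe of combining dimensionality reduction with vector quantization to measure shifts between unlabeled datasets can be illustrated with a minimal sketch. This is not the speaker's method: PCA stands in for the non-linear dimensionality reduction, k-means for the learned codebook, and the Jensen-Shannon distance between codeword-usage histograms for the shift measure; the synthetic "reference" and "shifted" datasets are placeholders for, e.g., outputs of two climate simulations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)

# Two synthetic stand-ins for unlabeled datasets from different simulations:
# a reference dataset and a version with a mild mean/variance shift.
ref = rng.normal(loc=0.0, scale=1.0, size=(2000, 50))
shifted = rng.normal(loc=0.5, scale=1.2, size=(2000, 50))

# Step 1: dimensionality reduction, fit on the reference data only.
pca = PCA(n_components=8).fit(ref)
z_ref, z_shift = pca.transform(ref), pca.transform(shifted)

# Step 2: vector quantization -- learn a codebook on the reference latents.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(z_ref)

def code_histogram(z, quantizer, k=32):
    """Empirical distribution over codebook (cluster) assignments."""
    counts = np.bincount(quantizer.predict(z), minlength=k)
    return counts / counts.sum()

p = code_histogram(z_ref, kmeans)
q = code_histogram(z_shift, kmeans)

# Step 3: quantify the shift as a divergence between code-usage histograms.
shift_score = jensenshannon(p, q)
print(f"Jensen-Shannon distance between code distributions: {shift_score:.3f}")
```

Under this sketch, two datasets drawn from the same distribution yield a score near zero, while a genuine shift inflates the divergence, giving a single scalar for intercomparing simulations.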
Bio: Stephan Mandt is an Associate Professor of Computer Science and Statistics at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and Head of the statistical machine learning group at Disney Research in Pittsburgh and Los Angeles. He held previous postdoctoral positions at Columbia University and Princeton University. Stephan holds a Ph.D. in Theoretical Physics from the University of Cologne, where he received the German National Merit Scholarship. He is furthermore a recipient of the NSF CAREER Award, the UCI ICS Mid-Career Excellence in Research Award, and the German Research Foundation's Mercator Fellowship; a Kavli Fellow of the U.S. National Academy of Sciences; a member of the ELLIS Society; and a former visiting researcher at Google Brain. His research is currently supported by NSF, DARPA, IARPA, DOE, Disney, Intel, and Qualcomm. Stephan is an Action Editor of the Journal of Machine Learning Research and Transactions on Machine Learning Research and regularly serves as an Area Chair for NeurIPS, ICML, AAAI, and ICLR. He currently serves as Program Chair for AISTATS 2024.
Video "From Compression to Convection: A Latent Variable Perspective" from the Allen Institute for AI channel.