
From Compression to Convection: A Latent Variable Perspective

Abstract: Latent variable models have been an integral part of probabilistic machine learning, ranging from simple mixture models to variational autoencoders to powerful diffusion probabilistic models at the center of recent media attention. Perhaps less well-appreciated is the intimate connection between latent variable models and compression, and the potential of these models for advancing natural science. I will begin by showcasing connections between variational methods and the theory and practice of neural data compression, ranging from constructing learnable codecs to assessing the fundamental compressibility of real-world data, such as images and particle physics data. I will then connect this lossy compression perspective to climate science problems, which often involve distribution shifts between unlabeled datasets, such as simulation data from different models or data simulated under different assumptions (e.g., global average temperatures). I will show that a combination of non-linear dimensionality reduction and vector quantization can assess the magnitude of these shifts and enable intercomparisons of different climate simulations. Additionally, when combined with physical model assumptions, this approach can provide insights into the implications of global warming on extreme precipitation.
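As a rough illustration of the idea sketched in the abstract (compressing two unlabeled datasets into a shared latent space, vector-quantizing the latent codes, and comparing the resulting code histograms to gauge distribution shift), here is a minimal Python sketch. It stands in KernelPCA for the non-linear dimensionality reduction and a K-means codebook for the vector quantizer; the synthetic "simulation" arrays, the codebook size, and the Jensen-Shannon comparison are all illustrative assumptions, not the method presented in the talk.

```python
# Minimal sketch (NOT the speaker's actual pipeline): quantify the shift
# between two unlabeled datasets by (1) non-linear dimensionality reduction,
# (2) vector quantization with a shared codebook, (3) comparing code usage.
import numpy as np
from sklearn.decomposition import KernelPCA          # non-linear dimensionality reduction
from sklearn.cluster import KMeans                   # vector quantization (codebook)
from scipy.spatial.distance import jensenshannon     # divergence between code histograms

rng = np.random.default_rng(0)

# Stand-ins for, e.g., two climate simulations run under different assumptions.
sim_a = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))
sim_b = rng.normal(loc=0.4, scale=1.2, size=(1000, 16))   # shifted, rescaled copy

# 1) Non-linear dimensionality reduction fit on the pooled data.
reducer = KernelPCA(n_components=4, kernel="rbf", gamma=0.1)
latents = reducer.fit_transform(np.vstack([sim_a, sim_b]))
z_a, z_b = latents[: len(sim_a)], latents[len(sim_a):]

# 2) Vector quantization: a shared K-means codebook over both latent sets.
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(latents)
codes_a = codebook.predict(z_a)
codes_b = codebook.predict(z_b)

# 3) Compare code-usage histograms; a larger divergence indicates a larger shift.
hist_a = np.bincount(codes_a, minlength=32) / len(codes_a)
hist_b = np.bincount(codes_b, minlength=32) / len(codes_b)
print(f"Jensen-Shannon distance between simulations: {jensenshannon(hist_a, hist_b):.3f}")
```

In this toy setup the shared codebook acts as a common discrete summary of both datasets, so differences in how often each code is used give a simple, comparable measure of how far the two simulations' distributions have drifted apart.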

Bio: Stephan Mandt is an Associate Professor of Computer Science and Statistics at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and Head of the statistical machine learning group at Disney Research in Pittsburgh and Los Angeles. He held previous postdoctoral positions at Columbia University and Princeton University. Stephan holds a Ph.D. in Theoretical Physics from the University of Cologne, where he received the German National Merit Scholarship. He is furthermore a recipient of the NSF CAREER Award, the UCI ICS Mid-Career Excellence in Research Award, and the German Research Foundation's Mercator Fellowship; a Kavli Fellow of the U.S. National Academy of Sciences; a member of the ELLIS Society; and a former visiting researcher at Google Brain. His research is currently supported by NSF, DARPA, IARPA, DOE, Disney, Intel, and Qualcomm. Stephan is an Action Editor of the Journal of Machine Learning Research and Transactions on Machine Learning Research and regularly serves as an Area Chair for NeurIPS, ICML, AAAI, and ICLR. He currently serves as Program Chair for AISTATS 2024.

Video "From Compression to Convection: A Latent Variable Perspective" from the Allen Institute for AI channel
Video information
Published: August 31, 2023, 3:34:45
Duration: 00:56:52