Animation: Variational Autoencoder

A variational autoencoder (VAE) is a type of neural network that learns to compress (encode) data into a latent space in such a way that new data can later be generated by randomly sampling from that space.
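
To make this concrete, here is a minimal sketch (not the code used in the video) of the two VAE ingredients in Keras: an encoder that outputs the mean and log-variance of the latent distribution, a reparameterized sample z = mu + sigma * eps, and a decoder. Only the two-dimensional latent space is taken from the animation; the layer sizes, the input length n_pixels, and the classic add_loss construction are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 2    # 2D latent space, matching the animation's scatter plots
n_pixels = 100    # assumed number of grid points per 1D wave packet

# Encoder: maps an input curve to the mean and log-variance of q(z|x)
enc_in = tf.keras.Input(shape=(n_pixels,))
h = layers.Dense(64, activation="relu")(enc_in)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
def sample_z(args):
    mu, log_var = args
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

z = layers.Lambda(sample_z)([z_mean, z_log_var])

# Decoder: maps a 2D latent point back to a full curve
dec_in = tf.keras.Input(shape=(latent_dim,))
h_dec = layers.Dense(64, activation="relu")(dec_in)
dec_out = layers.Dense(n_pixels)(h_dec)
decoder = tf.keras.Model(dec_in, dec_out)

out = decoder(z)
vae = tf.keras.Model(enc_in, out)

# VAE loss = reconstruction error + KL divergence of q(z|x) from the N(0,1) prior
recon = tf.reduce_sum(tf.square(enc_in - out), axis=-1)
kl = -0.5 * tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
vae.add_loss(tf.reduce_mean(recon + kl))
vae.compile(optimizer="adam")
```

Sampling via the noise variable eps, rather than drawing directly from N(mu, sigma), is what lets gradients flow through z_mean and z_log_var during training.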

Here we show the training of a variational autoencoder on a data set of wave packets placed at random locations and with random amplitudes. Upper left: test set, with the VAE decoder's reconstructions shown in orange; upper right: latent-space distribution, color-coded by the true location (left) or amplitude (right) of each data point; lower right: latent-space distributions for a few samples from the test set (in each case, the Gaussian spread in latent space is also indicated); lower left: loss evolution.
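
Continuing the sketch above, training data of this kind can be generated along the following lines; the Gaussian envelope, packet width, wavenumber, and parameter ranges are assumptions, since the video does not specify them.

```python
import numpy as np

n_samples = 10000
x_grid = np.linspace(-1.0, 1.0, n_pixels)

# Each sample: a wave packet with random location and random amplitude
loc = np.random.uniform(-0.5, 0.5, size=(n_samples, 1))   # random location
amp = np.random.uniform(0.5, 1.0, size=(n_samples, 1))    # random amplitude
width, k = 0.1, 40.0                                      # assumed envelope width and wavenumber
data = amp * np.exp(-(x_grid - loc) ** 2 / (2 * width**2)) * np.cos(k * (x_grid - loc))

# Training; the running loss corresponds to the lower-left panel of the animation
vae.fit(data, epochs=30, batch_size=128)

# After training, new wave packets are obtained by decoding random latent points
z_samples = np.random.normal(size=(5, latent_dim))
generated = decoder.predict(z_samples)
```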

2021 by Florian Marquardt. For the full explanation, watch https://www.youtube.com/watch?v=bSta4s439-I .

This animation is part of the online lecture series "Advanced Machine Learning for Physics, Science, and Artificial Scientific Discovery". See the website https://pad.gwdg.de/s/2021_AdvancedMachineLearningForScience# and the channel with the full lecture videos: https://www.youtube.com/playlist?list=PLemsnf33Vij4-kv-JTjDthaGUYUnQbbws.

Video information
Published: December 18, 2021, 18:49:10
Duration: 00:00:50

Other videos from the channel:
Lecture 4: Loss functions. Overfitting. Dropout. Adaptive Gradient Descent. Convolutional networks.
Lecture 26: Active Learning for Network Training: Uncertainty Sampling and other approaches.
Lecture 23: Reinforcement Learning - Policy Gradient and Q-Learning.
Lecture 14: Boltzmann Machines (General Theory).
Lecture 19: Graph Neural Networks. Attention Mechanisms (Basics).
Lecture 10: Inductive Bias. Fisher Information. Information Geometry.
Moderne Physik: "Auf der Jagd nach kosmischen Teilchen." (Prof. Anna Nelles)
Lecture 21: Transformers (and examples). Implicit Layers.
Lecture 12: Mutual Information. Learning Probability Distributions. Normalizing Flows.
Talk: Discovering feedback strategies for open quantum systems via deep reinforcement learning
Machine Learning for Physicists (Lecture 3): Training networks, Keras, Image recognition
Lecture 16: Variational Autoencoder. Generative Adversarial Networks.
Lecture 11: Natural Gradient. Kullback-Leibler Divergence. Mutual Information.
Lecture 15: Restricted Boltzmann Machines. Conditional Sampling. Variational Autoencoder.
Machine Learning for Physicists (Lecture 5): Principal Component Analysis, t-SNE, Adam etc., ...
Lecture 25: Reinforcement Learning: Continuous actions. Model-based. Monte Carlo Tree Search.
Lecture 7: Contractive Autoencoder. Shannon's Information Theory: Compression and Information.
Lecture 27: Bayesian Optimal Experimental Design. Active Learning: Gaussian Processes and Networks.
Lecture 22: Implicit Layers. Hamiltonian and Lagrangian Networks. Reinforcement Learning Overview.
Lecture 20: Attention. Differentiable Neural Computer. Transformers.