
Lecture 13: Invertible Neural Networks. Convolutional and Conditional Invertible Networks.

Lecture Series "Advanced Machine Learning for Physics, Science, and Artificial Scientific Discovery". Normalizing Flows: Invertible Neural Networks (cont'd), learning from samples, convolutional INNs, conditional INNs, learning from an explicitly known distribution

Lecture series 2021/22 by Florian Marquardt. See the course website: https://pad.gwdg.de/s/2021_AdvancedMachineLearningForScience
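The invertible networks covered here are typically built from affine coupling layers (RealNVP-style), which are invertible by construction and have a cheap Jacobian determinant. As a minimal illustration — not the lecture's own code, with all names, shapes, and the linear "networks" chosen purely for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters for the scale and translation functions s(.) and t(.).
# Real coupling layers use neural networks here; single linear maps suffice
# to demonstrate invertibility.
W_s = rng.normal(size=(2, 2)) * 0.1
W_t = rng.normal(size=(2, 2)) * 0.1

def coupling_forward(x):
    """Affine coupling: split x = (x1, x2), map x2 -> x2 * exp(s(x1)) + t(x1)."""
    x1, x2 = x[:, :2], x[:, 2:]
    s, t = x1 @ W_s, x1 @ W_t
    y2 = x2 * np.exp(s) + t
    # The Jacobian is triangular, so log|det J| is just the sum of s.
    log_det = s.sum(axis=1)
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y):
    """Exact inverse: y2 -> (y2 - t(y1)) * exp(-s(y1)); y1 passes through."""
    y1, y2 = y[:, :2], y[:, 2:]
    s, t = y1 @ W_s, y1 @ W_t
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=1)

x = rng.normal(size=(5, 4))
y, log_det = coupling_forward(x)
x_rec = coupling_inverse(y)
print(np.allclose(x, x_rec))  # inverse recovers the input exactly
```

Because only half of the variables are transformed per layer, layers are stacked with alternating splits; the per-layer `log_det` terms add up to give the change-of-variables likelihood used when training a normalizing flow.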

Video "Lecture 13: Invertible Neural Networks. Convolutional and Conditional Invertible Networks." from the Florian Marquardt channel
Video information
Published: December 2, 2021, 11:37:01
Duration: 01:31:55
Other videos from this channel
Lecture 4: Loss functions. Overfitting. Dropout. Adaptive Gradient Descent. Convolutional networks.
Lecture 26: Active Learning for Network Training: Uncertainty Sampling and other approaches.
Animation: Variational Autoencoder
Lecture 23: Reinforcement Learning - Policy Gradient and Q-Learning.
Lecture 14: Boltzmann Machines (General Theory).
Lecture 19: Graph Neural Networks. Attention Mechanisms (Basics).
Lecture 10: Inductive Bias. Fisher Information. Information Geometry.
Moderne Physik: "Auf der Jagd nach kosmischen Teilchen." (Prof. Anna Nelles)
Lecture 21: Transformers (and examples). Implicit Layers.
Lecture 12: Mutual Information. Learning Probability Distributions. Normalizing Flows.
Talk: Discovering feedback strategies for open quantum systems via deep reinforcement learning
Machine Learning for Physicists (Lecture 3): Training networks, Keras, Image recognition
Lecture 16: Variational Autoencoder. Generative Adversarial Networks.
Lecture 11: Natural Gradient. Kullback-Leibler Divergence. Mutual Information.
Lecture 15: Restricted Boltzmann Machines. Conditional Sampling. Variational Autoencoder.
Machine Learning for Physicists (Lecture 5): Principal Component Analysis, t-SNE, Adam etc., ...
Lecture 25: Reinforcement Learning: Continuous actions. Model-based. Monte Carlo Tree Search.
Lecture 7: Contractive Autoencoder. Shannon's Information Theory: Compression and Information.
Lecture 27: Bayesian Optimal Experimental Design. Active Learning: Gaussian Processes and Networks.
Lecture 22: Implicit Layers. Hamiltonian and Lagrangian Networks. Reinforcement Learning Overview.
Lecture 20: Attention. Differentiable Neural Computer. Transformers.