Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)
Generative adversarial networks (GANs) are a recently introduced class of generative models, designed to produce realistic samples. This tutorial is intended to be accessible to an audience with no prior experience with GANs, and should prepare the audience to make original research contributions applying GANs or improving the core GAN algorithms. GANs are universal approximators of probability distributions. Such models generally have an intractable log-likelihood gradient, and require approximations such as Markov chain Monte Carlo or variational lower bounds to make learning feasible. GANs avoid both of these classes of approximations. The learning process consists of a game between two adversaries: a generator network that attempts to produce realistic samples, and a discriminator network that attempts to identify whether samples originated from the training data or from the generative model. At the Nash equilibrium of this game, the generator network reproduces the data distribution exactly, and the discriminator network cannot distinguish model samples from training data. Both networks can be trained using stochastic gradient descent, with exact gradients computed by backpropagation.
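The game described above can be summarized by the minimax value function from the original GAN formulation, V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], which the discriminator maximizes and the generator minimizes. Below is a minimal Monte Carlo sketch of this value function; the function and variable names (`gan_value`, `d_real`, `d_fake`) are illustrative choices, not from the tutorial.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of the GAN value function
    V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))].

    d_real: discriminator probabilities on real samples (D wants these -> 1)
    d_fake: discriminator probabilities on generated samples (D wants these -> 0)
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the Nash equilibrium the generator matches the data distribution,
# so the best the discriminator can do is output 1/2 everywhere,
# giving the value log(1/2) + log(1/2) = -log 4.
equilibrium = gan_value([0.5, 0.5], [0.5, 0.5])
```

A discriminator that separates real from fake samples better than chance (e.g. `gan_value([0.9], [0.1])`) attains a value above the equilibrium level of −log 4, which is what drives the generator's updates.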
Topics include:
- An introduction to the basics of GANs.
- A review of work applying GANs to large image generation.
- Extending the GAN framework to approximate maximum likelihood, rather than minimizing the Jensen-Shannon divergence.
- Improved model architectures that yield better learning in GANs.
- Semi-supervised learning with GANs.
- Research frontiers, including guaranteeing convergence of the GAN game.
- Other applications of adversarial learning, such as domain adaptation and privacy.
Video "Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)" from the channel Steven Van Vaerenbergh.