
Lecture 13: Attention

Lecture 13 introduces attention as a mechanism for deep networks to dynamically pay attention to different parts of their inputs. We see how recurrent networks can be augmented with attention, adding interpretability to their predictions, and how neural attention can be seen as a coarse approximation of the saccades made by biological eyes. We see how early mechanisms for attention in recurrent networks can be generalized to yield attention and self-attention layers which can be used as standalone primitives in neural networks. We see how self-attention gives us a novel way to process sequences, leading to Transformer networks which use attention as their primary computational building block. We see how Transformers gracefully scale to very large data sets and model sizes, and have rapidly improved the state of the art in text generation.
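To make the standalone self-attention layer concrete, here is a minimal NumPy sketch of scaled dot-product self-attention. The weight matrices and variable names are illustrative assumptions for this sketch, not taken from the lecture slides; a real layer would learn the projections and typically use multiple heads.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence.

    x: (seq_len, d_in) input vectors.
    w_q, w_k, w_v: (d_in, d) query/key/value projections
    (illustrative names, assumed for this sketch).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # project inputs to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (seq_len, seq_len) scaled similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys, rows sum to 1
    return weights @ v                            # attention-weighted combination of values

# Toy usage: 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one output vector per input token
```

Note that without positional encodings this layer is permutation-equivariant: reordering the input tokens simply reorders the outputs, which is why Transformers add positional information separately.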

Note: I had some technical difficulty with the slides during this lecture. The slides were corrected after the lecture, and the difficulties were mostly smoothed out in editing; however, there are a few places where what I'm saying doesn't exactly match up with the content on the slides.

Slides: http://myumi.ch/qgvlw
_________________________________________________________________________________________________

Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification and object detection. Recent developments in neural network approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of neural-network based deep learning methods for computer vision. During this course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. We will cover learning algorithms, neural network architectures, and practical engineering tricks for training and fine-tuning networks for visual recognition tasks.

Course Website: http://myumi.ch/Bo9Ng

Instructor: Justin Johnson http://myumi.ch/QA8Pg

Video "Lecture 13: Attention" from the Michigan Online channel
Video information
August 10, 2020, 19:03:54
Duration: 01:11:53