
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (Paper Explained)

#ai #attention #transformer #deeplearning

Transformers are famous for two things: Their superior performance and their insane requirements of compute and memory. This paper reformulates the attention mechanism in terms of kernel functions and obtains a linear formulation, which reduces these requirements. Surprisingly, this formulation also surfaces an interesting connection between autoregressive transformers and RNNs.
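
To make this concrete, here is a minimal sketch of non-causal linear attention (my own illustration, not the authors' fast-transformers code): a positive feature map replaces the softmax kernel, and re-associating the matrix product avoids ever building the N×N attention matrix.

```python
import torch
import torch.nn.functional as F

def feature_map(x):
    # phi(x) = elu(x) + 1: the simple positive feature map used in the paper
    return F.elu(x) + 1

def linear_attention(Q, K, V, eps=1e-6):
    """Non-causal linear attention, O(N) in the sequence length N.

    Q, K: (batch, N, d_k), V: (batch, N, d_v).
    Softmax attention materializes an N x N matrix via softmax(Q K^T) V.
    Replacing exp(q . k) with phi(q) . phi(k) lets associativity kick in:
        (phi(Q) phi(K)^T) V = phi(Q) (phi(K)^T V),
    so only d_k x d_v matrices are ever formed.
    """
    Q, K = feature_map(Q), feature_map(K)
    KV = torch.einsum("bnd,bnv->bdv", K, V)  # sum_j phi(k_j) v_j^T
    Z = 1.0 / (torch.einsum("bnd,bd->bn", Q, K.sum(dim=1)) + eps)  # per-query normalizer
    return torch.einsum("bnd,bdv,bn->bnv", Q, KV, Z)

# Usage with random data, shapes only:
Q = torch.randn(2, 1024, 64)
K = torch.randn(2, 1024, 64)
V = torch.randn(2, 1024, 64)
out = linear_attention(Q, K, V)  # (2, 1024, 64)
```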

OUTLINE:
0:00 - Intro & Overview
1:35 - Softmax Attention & Transformers
8:40 - Quadratic Complexity of Softmax Attention
9:40 - Generalized Attention Mechanism
13:45 - Kernels
20:40 - Linear Attention
25:20 - Experiments
28:30 - Intuition on Linear Attention
33:55 - Connecting Autoregressive Transformers and RNNs
41:30 - Caveats with the RNN connection
46:00 - More Results & Conclusion

Paper: https://arxiv.org/abs/2006.16236
Website: https://linear-transformers.com/
Code: https://github.com/idiap/fast-transformers

My Video on Attention: https://youtu.be/iDulhoQ2pro
My Video on BERT: https://youtu.be/-9evrZnBorM

Abstract:
Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from O(N^2) to O(N), where N is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.
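
The "transformers are RNNs" part of the title comes from the causal case: the same computation can be carried out as a recurrence over positions. A rough illustrative sketch (not the paper's optimized implementation):

```python
import torch
import torch.nn.functional as F

def feature_map(x):
    return F.elu(x) + 1  # same positive feature map as above

def recurrent_linear_attention(Q, K, V, eps=1e-6):
    """Causal linear attention computed one step at a time.

    Q, K: (N, d_k), V: (N, d_v). At step i only phi(q_i), the running state
    S = sum_{j<=i} phi(k_j) v_j^T and the normalizer z = sum_{j<=i} phi(k_j)
    are needed, i.e. an RNN with a matrix-valued hidden state.
    """
    N, d_k = Q.shape
    d_v = V.shape[1]
    S = torch.zeros(d_k, d_v)  # recurrent state
    z = torch.zeros(d_k)       # recurrent normalizer
    outputs = []
    for i in range(N):
        q, k, v = feature_map(Q[i]), feature_map(K[i]), V[i]
        S = S + torch.outer(k, v)  # accumulate phi(k_i) v_i^T
        z = z + k                  # accumulate phi(k_i)
        outputs.append((q @ S) / (q @ z + eps))
    return torch.stack(outputs)    # (N, d_v)
```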

Authors: Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher

Video information
Published: July 4, 2020, 17:39:13
Duration: 00:48:06