Big Bird: Transformers for Longer Sequences (Paper Explained)
#ai #nlp #attention
The quadratic resource requirements of the attention mechanism are the main roadblock in scaling up transformers to long sequences. This paper replaces the full quadratic attention mechanism with a combination of random attention, window attention, and global attention. Not only does this allow the processing of longer sequences, translating to state-of-the-art experimental results, but the paper also shows that BigBird comes with theoretical guarantees of universal approximation and Turing completeness.
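As a rough illustration of the sparse pattern described above, here is a minimal Python sketch (not the authors' implementation; the window size, number of global tokens, and number of random connections are made-up values, not the paper's configuration) that builds a BigBird-style boolean attention mask by combining sliding-window, global, and random attention:

import numpy as np

def bigbird_attention_mask(seq_len, window=3, num_global=2, num_random=3, seed=0):
    # True at (i, j) means query token i may attend to key token j.
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Sliding-window attention: each token sees `window` neighbours on each side.
    for i in range(seq_len):
        mask[i, max(0, i - window):min(seq_len, i + window + 1)] = True
    # Global attention: the first `num_global` tokens (think CLS) attend to
    # everything and are attended to by everything.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    # Random attention: each token additionally sees `num_random` random keys.
    for i in range(seq_len):
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True
    return mask

m = bigbird_attention_mask(16)
print(f"{m.sum()} of {m.size} entries attended")  # sparse, versus full 16 x 16 attention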
OUTLINE:
0:00 - Intro & Overview
1:50 - Quadratic Memory in Full Attention
4:55 - Architecture Overview
6:35 - Random Attention
10:10 - Window Attention
13:45 - Global Attention
15:40 - Architecture Summary
17:10 - Theoretical Result
22:00 - Experimental Parameters
25:35 - Structured Block Computations
29:30 - Recap
31:50 - Experimental Results
34:05 - Conclusion
Paper: https://arxiv.org/abs/2007.14062
My Video on Attention: https://youtu.be/iDulhoQ2pro
My Video on BERT: https://youtu.be/-9evrZnBorM
My Video on Longformer: https://youtu.be/_8KNb5iqblE
... and its memory requirements: https://youtu.be/gJR28onlqzs
Abstract:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
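To make the abstract's quadratic-versus-linear claim concrete, here is a back-of-the-envelope calculation (illustrative numbers, not the paper's exact accounting) comparing the memory of the attention-score matrix under full attention with a sparse pattern where each query attends to a fixed number of positions:

n_heads, bytes_per_entry = 12, 4           # BERT-base-like head count, float32 scores
k = 3 * 64 + 3 * 64 + 2 * 64               # assumed fan-out: window + random + global positions per query
for n in (512, 4096):                      # typical BERT length vs. an ~8x longer context
    full = n_heads * n * n * bytes_per_entry      # O(n^2) score matrix
    sparse = n_heads * n * k * bytes_per_entry    # O(n) with fixed fan-out k
    print(f"n={n}: full ~ {full / 2**20:.0f} MiB, sparse ~ {sparse / 2**20:.0f} MiB per layer")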
Authors: Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n