How do Vision Transformers work? – Paper explained | multi-head self-attention & convolutions
It turns out that multi-head self-attention and convolutions are complementary. So, what makes multi-head self-attention different from convolutions? How and why do Vision Transformers work? In this video, we find out by explaining the paper “How Do Vision Transformers Work?” by Park & Kim (ICLR 2022).
SPONSOR: Weights & Biases 👉 https://wandb.me/ai-coffee-break
⏩ Vision Transformers explained playlist: https://youtube.com/playlist?list=PLpZBeKTZRGPMddKHcsJAOIghV8MwzwQV6
📺 ViT: An Image is Worth 16x16 Words: https://youtu.be/DVoHvmww2lQ
📺 Swin Transformer: https://youtu.be/SndHALawoag
📺 ConvNext: https://youtu.be/QqejV0LNDHA
📺 DeiT: https://youtu.be/-FbV2KgRM8A
📺 Adversarial attacks: https://youtu.be/YyTyWGUUhmo
❓Check out our daily #MachineLearning Quiz Questions: ►
https://www.youtube.com/c/AICoffeeBreak/community
Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
Don Rosenthal, Dres. Trost GbR, banana.dev -- Kyle Morris, Joel Ang
Paper 📜:
Park, Namuk, and Songkuk Kim. "How Do Vision Transformers Work?" In International Conference on Learning Representations, 2022. https://openreview.net/forum?id=D78Go4hVcxO
🔗 Official implementation: https://github.com/xxxnell/how-do-vits-work
Outline:
00:00 Transformers vs ConvNets
01:04 Sponsor: Weights & Biases
02:21 Convolutions explained in a nutshell
03:35 Multi-Head Self-Attention explained
06:46 Why we thought that MSA is cool
09:56 Paper insights
15:26 MSA vs. Convs (more insight)
16:07 Low-pass filters (MSA) and high-pass filters (Convs)
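The last outline point, that multi-head self-attention acts like a low-pass filter while convolutions act like high-pass filters, can be illustrated with a toy NumPy sketch. This is not code from the paper: the distance-based attention scores and the Laplacian-like kernel are illustrative assumptions, chosen only to show the smoothing-vs-sharpening contrast.

```python
import numpy as np

# Hypothetical 1-D "feature map": a low-frequency ramp plus high-frequency noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64) + 0.2 * rng.standard_normal(64)

# Self-attention reduces here to a softmax-weighted average over positions
# (scores chosen to decay with distance) -- a spatial smoothing, i.e. low-pass.
idx = np.arange(64)
scores = -np.abs(idx[:, None] - idx[None, :]) / 8.0
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
x_attn = attn @ x

# A difference (Laplacian-like) convolution kernel emphasises local changes,
# i.e. it passes high frequencies.
kernel = np.array([-1.0, 2.0, -1.0])
x_conv = np.convolve(x, kernel, mode="same")

# Attention output fluctuates less than the input; the conv output fluctuates more.
print(np.std(np.diff(x_attn)) < np.std(np.diff(x)))  # smoothing (low-pass)
print(np.std(np.diff(x_conv)) > np.std(np.diff(x)))  # sharpening (high-pass)
```

Under these assumptions the attention output varies less from position to position than the input, while the convolution output varies more, mirroring the paper's low-pass/high-pass framing.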
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, pay us a coffee to help with our Coffee Bean production! ☕
Patreon: https://www.patreon.com/AICoffeeBreak
Ko-fi: https://ko-fi.com/aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔗 Links:
AICoffeeBreakQuiz: https://www.youtube.com/c/AICoffeeBreak/community
Twitter: https://twitter.com/AICoffeeBreak
Reddit: https://www.reddit.com/r/AICoffeeBreak/
YouTube: https://www.youtube.com/AICoffeeBreak
#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research
Music 🎵 : Bella Bella Beat by Nana Kwabena
Video "How do Vision Transformers work? – Paper explained | multi-head self-attention & convolutions" from the channel AI Coffee Break with Letitia
Published: February 23, 2022, 18:47:26 · Duration: 00:19:15