ConvNeXt: A ConvNet for the 2020s | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️
https://www.patreon.com/theaiepiphany
👨👩👧👦 Join our Discord community 👨👩👧👦
https://discord.gg/peBrCpheKE
In this video I cover the recently published "A ConvNet for the 2020s" paper. The authors show that ConvNets are still in the game: by adopting modern design ideas and training procedures, they match or outperform vision transformers even in big-data regimes, without any attention layers.
The convolutional prior continues to stand the test of time in computer vision.
Note: I also partially cover the Swin transformer paper in case you missed out on that one. :)
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✅ Paper: https://arxiv.org/abs/2201.03545
✅ GitHub: https://github.com/facebookresearch/ConvNeXt
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 Intro - convergence of transformers and CNNs
05:05 Main diagram explained
07:40 Main diagram corrections
10:10 Swin transformer recap
20:20 Modernizing ResNets
24:10 Diving deeper: stage ratio
27:20 Diving deeper: misc (inverted bottleneck, depthwise conv...)
34:45 Results (classification, object detection, segmentation)
37:35 RIP DanNet
38:40 Summary and outro
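The timetable above walks through the paper's main design steps (depthwise conv, inverted bottleneck, LayerNorm, GELU). As a rough illustration, here is a minimal NumPy sketch of a single ConvNeXt block following the structure described in the paper (7x7 depthwise conv → LayerNorm → 1x1 expand by 4x → GELU → 1x1 reduce → residual); all weight names and shapes here are my own illustrative choices, not the official implementation:

```python
import numpy as np

def convnext_block(x, dw_kernel, w1, b1, w2, b2, gamma, beta, eps=1e-6):
    """Hypothetical sketch of one ConvNeXt block, NHWC layout.
    x: (H, W, C) feature map; dw_kernel: (k, k, C) depthwise filters (k=7 in the paper);
    w1/b1 expand channels 4x, w2/b2 project back; gamma/beta are LayerNorm affine params."""
    H, W, C = x.shape
    k = dw_kernel.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))  # same-padding for the depthwise conv
    # Depthwise conv: each channel is filtered independently with its own kxk kernel
    y = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            y[i, j] = np.sum(xp[i:i + k, j:j + k] * dw_kernel, axis=(0, 1))
    # LayerNorm over the channel dimension (ConvNeXt swaps BatchNorm for LayerNorm)
    mu = y.mean(-1, keepdims=True)
    var = y.var(-1, keepdims=True)
    y = (y - mu) / np.sqrt(var + eps) * gamma + beta
    # Inverted bottleneck: a 1x1 conv is a per-pixel matmul; expand C -> 4C
    y = y @ w1 + b1
    # GELU activation (tanh approximation)
    y = 0.5 * y * (1 + np.tanh(np.sqrt(2 / np.pi) * (y + 0.044715 * y ** 3)))
    # Project back 4C -> C, then add the residual connection
    y = y @ w2 + b2
    return x + y
```

The block preserves the spatial and channel dimensions, so it can be stacked freely within a stage; the real model also adds a layer-scale parameter and stochastic depth, omitted here for brevity.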
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany - https://www.patreon.com/theaiepiphany
One-time donation - https://www.paypal.com/paypalme/theaiepiphany
Huge thank you to these AI Epiphany patrons:
Eli Mahler
Petar Veličković
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/
🐦 Twitter - https://twitter.com/gordic_aleksa
👨👩👧👦 Discord - https://discord.gg/peBrCpheKE
📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/
📚 Medium - https://gordicaleksa.medium.com/
💻 GitHub - https://github.com/gordicaleksa
📢 AI Newsletter - https://aiepiphany.substack.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#convnext #visiontransformers #computervision
Video "ConvNeXt: A ConvNet for the 2020s | Paper Explained" from the channel Aleksa Gordić - The AI Epiphany
Video info
Published: January 26, 2022, 19:33:32
Duration: 00:40:08