Week 10 – Lecture: Self-supervised learning (SSL) in computer vision (CV)
Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Ishan Misra
Week 10: http://bit.ly/pDL-en-10
0:00:00 – Week 10 – Lecture
LECTURE Part A: http://bit.ly/pDL-en-10-1
In this section, we examine the motivation behind Self-Supervised Learning (SSL), define what it is, and see some of its applications in NLP and computer vision. We see how pretext tasks aid SSL, with example pretext tasks on images, videos, and videos with sound. Finally, we build an intuition for the representations that pretext tasks learn.
0:01:15 – Challenges of supervised learning and how self-supervised learning differs from supervised and unsupervised learning, with examples in NLP and relative positions for vision
0:12:39 – Examples of pretext tasks in images, videos and videos with sound
0:40:26 – Understanding what the "pretext" task learns
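As a concrete illustration of the relative-position pretext task mentioned above, the sketch below samples a center patch and one of its eight neighbors from an image; the network's self-supervised target is the neighbor's relative position. This is a minimal sketch, not the lecture's exact setup: the patch size, gap, and sampling scheme are illustrative assumptions.

```python
import numpy as np

def relative_position_pair(image, patch=32, gap=8, rng=None):
    """Sample a (center, neighbor) patch pair plus the neighbor's
    relative-position label (0..7) -- the free supervisory signal
    used by the relative-position pretext task. `patch` and `gap`
    are illustrative choices, not the lecture's exact values."""
    rng = rng or np.random.default_rng()
    # The 8 neighbor offsets around the center cell, row-major, skipping (0, 0).
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    step = patch + gap  # distance between adjacent cell origins
    h, w = image.shape[:2]
    # Pick a center cell origin that keeps every neighbor cell in bounds.
    cy = int(rng.integers(step, h - step - patch + 1))
    cx = int(rng.integers(step, w - step - patch + 1))
    label = int(rng.integers(8))
    dy, dx = offsets[label]
    center = image[cy:cy + patch, cx:cx + patch]
    neighbor = image[cy + dy * step:cy + dy * step + patch,
                     cx + dx * step:cx + dx * step + patch]
    return center, neighbor, label
```

A classifier fed both patches and trained to predict `label` must learn about object layout to succeed, which is why the learned features transfer.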
LECTURE Part B: http://bit.ly/pDL-en-10-2
In this section, we discuss the shortcomings of pretext tasks, define the characteristics of a good pretrained feature, and show how clustering and contrastive learning can achieve them. We then learn about ClusterFit, its steps, and its performance. We further dive into PIRL, a simple framework for contrastive learning, discussing how it works and how it is evaluated in different contexts.
1:01:50 – Generalization of pretext task and ClusterFit
1:19:08 – Basic idea of PIRL
1:38:09 – Evaluating PIRL on different tasks and questions
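The contrastive idea behind PIRL can be sketched as a loss that pulls an image embedding toward the embedding of its own pretext-transformed (e.g. jigsaw-shuffled) view and pushes it away from other images. The version below is a simplified sketch: it uses in-batch negatives for self-containedness, whereas PIRL itself draws negatives from a memory bank.

```python
import numpy as np

def contrastive_loss(z_img, z_aug, temperature=0.07):
    """NCE-style contrastive loss in the spirit of PIRL: row i of
    `z_img` should match row i of `z_aug` (its transformed view),
    with the other rows of the batch serving as negatives.
    Simplified sketch -- PIRL uses a memory bank of negatives."""
    # L2-normalize so dot products are cosine similarities.
    z_img = z_img / np.linalg.norm(z_img, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    logits = z_img @ z_aug.T / temperature       # (N, N) similarities
    # Cross-entropy with the diagonal (matching pair) as the target class.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss makes an image and its transformed view map to nearby points in embedding space while staying far from other images, which is the invariance property PIRL is built around.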
Video: Week 10 – Lecture: Self-supervised learning (SSL) in computer vision (CV), from the channel of Alfredo Canziani