
Rethinking Pre-training and Self-Training

**ERRATA**: At 9:31 I refer to large scale jittering as "color jittering"; large scale jittering is not an operation specifically on colors.
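For context, large scale jittering is a geometric augmentation: the image (and its boxes/masks) is resized by a random factor and then padded or cropped back to a fixed training resolution. Below is a minimal sketch of the image side of that operation, assuming the commonly used scale range of roughly [0.1, 2.0]; the function name and defaults are illustrative, not the paper's code:

```python
# Illustrative sketch of large scale jittering on a single image.
# Geometry only: randomly rescale, then zero-pad and random-crop back to the target size.
import numpy as np
from PIL import Image

def large_scale_jitter(img, target=(640, 640), scale_range=(0.1, 2.0)):
    scale = np.random.uniform(*scale_range)
    new_w = max(int(img.width * scale), 1)
    new_h = max(int(img.height * scale), 1)
    resized = img.resize((new_w, new_h), Image.BILINEAR)

    # Zero-pad to at least the target size, then take a random crop of the target size.
    canvas = Image.new(resized.mode, (max(new_w, target[0]), max(new_h, target[1])))
    canvas.paste(resized, (0, 0))
    x0 = np.random.randint(0, canvas.width - target[0] + 1)
    y0 = np.random.randint(0, canvas.height - target[1] + 1)
    return canvas.crop((x0, y0, x0 + target[0], y0 + target[1]))
```

In a detection pipeline the same scale and crop offsets would also be applied to the bounding boxes and masks.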

This video explores an interesting paper from researchers at Google AI. They show that self-training outperforms supervised or self-supervised (SimCLR) pre-training. The video explains what self-training is and how all these methods attempt to utilize extra data (labeled or not) for better performance on downstream tasks.
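To make the idea concrete, here is a minimal sketch of the pseudo-labeling loop at the heart of self-training, with a scikit-learn classifier standing in for the teacher and student; the paper uses detection/segmentation models and strong data augmentation, so the model choice, the `threshold` value, and the function name are illustrative assumptions, not the authors' recipe:

```python
# Illustrative pseudo-labeling loop: a teacher labels unlabeled data, and a student
# retrains on the union of human labels and confident pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(x_labeled, y_labeled, x_unlabeled, threshold=0.9):
    # 1. Train a teacher on the human-labeled data.
    teacher = LogisticRegression(max_iter=1000).fit(x_labeled, y_labeled)

    # 2. Pseudo-label the unlabeled examples the teacher is confident about.
    probs = teacher.predict_proba(x_unlabeled)
    confident = probs.max(axis=1) >= threshold
    pseudo_x = x_unlabeled[confident]
    pseudo_y = teacher.classes_[probs[confident].argmax(axis=1)]

    # 3. Train a student on labeled + pseudo-labeled data; in the paper the two
    #    loss terms are weighted, and strong augmentation (e.g., jittering) is applied.
    x_all = np.concatenate([x_labeled, pseudo_x])
    y_all = np.concatenate([y_labeled, pseudo_y])
    return LogisticRegression(max_iter=1000).fit(x_all, y_all)
```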

Thanks for watching! Please Subscribe!

Paper Links:
Rethinking Pre-training and Self-training: https://arxiv.org/pdf/2006.06882.pdf
OpenImages Dataset: https://storage.googleapis.com/openimages/web/index.html
RetinaNet: https://arxiv.org/pdf/1708.02002.pdf
Rethinking ImageNet Pre-training: https://arxiv.org/pdf/1811.08883.pdf
Image Classification State-of-the-Art: https://paperswithcode.com/sota/image-classification-on-imagenet
Self-Training with Noisy Student: https://arxiv.org/pdf/1911.04252.pdf
Rotation Self-Supervised Learning: https://arxiv.org/pdf/1803.07728.pdf
POET: https://arxiv.org/pdf/1901.01753.pdf
ImageGPT: https://openai.com/blog/image-gpt/

Video "Rethinking Pre-training and Self-Training" from the Connor Shorten channel
Video information: June 19, 2020 · Duration: 00:17:53