
Episode 2: PyTorch Dropout, Batch size and interactive debugging

This is the second video in our PyTorch Lightning MasterClass, taking you from basic PyTorch to all the latest AI best practices with PyTorch Lightning.

In the previous video, we created a PyTorch classification model from scratch and set up training on GPUs: https://youtu.be/OMDn66kM9Qc

In this video we go over using dropout layers to avoid overfitting, interactive debugging of your models, and choosing the optimal batch size.
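As a quick illustration of the dropout idea covered in the episode, here is a minimal sketch (the model and its layer sizes are hypothetical, not the exact network built in the video): `nn.Dropout` randomly zeroes activations during training to reduce overfitting, and automatically becomes a no-op when the model is switched to eval mode.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Illustrative classifier with a dropout layer between two linear layers."""

    def __init__(self, p: float = 0.5):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 64)
        self.dropout = nn.Dropout(p)  # zeroes each activation with probability p in train mode
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.dropout(x)  # identity when model.eval() is set
        return self.fc2(x)

model = TinyClassifier()
x = torch.randn(4, 28 * 28)

model.train()           # dropout active: two forward passes can differ
train_out = model(x)

model.eval()            # dropout disabled: forward passes are deterministic
eval_out = model(x)
```

Note that calling `model.eval()` before validation or inference is what disables dropout; forgetting it is a common source of noisy evaluation metrics.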

Check out our next video to find out how to train your model the Lightning way: write less boilerplate, scale more quickly: https://youtu.be/DbESHcCoWbM

Alfredo Canziani is a Computer Science professor at NYU (check out his deep learning class: https://www.youtube.com/playlist?list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq).
William Falcon is an AI Ph.D. researcher at NYU, and creator and founder of PyTorch Lightning.

Chapters:
00:00 Introduction
00:24 Prevent overfitting with Dropout
08:06 Interactive neural network debugging
28:45 Choosing batch size

Thanks for watching!

Video "Episode 2: PyTorch Dropout, Batch size and interactive debugging" from the PyTorch Lightning channel.
Video information: September 5, 2020, 0:44:04