Week 14 – Practicum: Overfitting and regularization, and Bayesian neural nets

Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Alfredo Canziani
Week 14: http://bit.ly/pDL-en-14

0:00:00 – Week 14 – Practicum

PRACTICUM: http://bit.ly/pDL-en-14-3
When training highly parametrized models such as deep neural networks, there is a risk of overfitting the training data, which increases generalization error. To reduce overfitting, we can introduce regularization into training, discouraging certain solutions and limiting the extent to which the model fits noise.
0:01:41 – Overfitting and regularization
0:18:11 – Model regularization (L2, L1, dropout, batch norm, and data augmentation)
0:49:30 – Visualizing regularization and overfitting, Bayesian neural networks
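As an illustration of the L2 regularization mentioned above, here is a minimal sketch (not from the lecture; all data and names are hypothetical) using ridge regression, where the L2 penalty has a closed-form solution and its weight-shrinking effect is easy to verify:

```python
import numpy as np

# Hypothetical toy regression problem
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=20)

def ridge_weights(X, y, lam):
    """Solve the L2-regularized least-squares problem
    min_w ||Xw - y||^2 + lam * ||w||^2 in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unreg = ridge_weights(X, y, lam=0.0)   # plain least squares
w_reg = ridge_weights(X, y, lam=10.0)    # L2-regularized

# The penalty shrinks the weight norm, discouraging large weights
print(np.linalg.norm(w_reg) < np.linalg.norm(w_unreg))
```

The same idea carries over to deep networks, where the L2 penalty is typically applied through a weight-decay term in the optimizer rather than in closed form.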

Video "Week 14 – Practicum: Overfitting and regularization, and Bayesian neural nets" from the Alfredo Canziani channel
Video information
Published: 25 September 2020, 11:01:14
Duration: 01:11:28