
Hadi Ghauch: Large-scale training for deep neural networks

This talk will complement some of the lectures in the course by combining large-scale learning and deep neural networks (DNNs). We will start by discussing some challenges in optimizing DNNs, namely the complex loss surface, ill-conditioning, etc. We will then review some state-of-the-art training methods for DNNs, such as backpropagation (review), stochastic gradient descent (review), and adaptive learning-rate methods: RMSProp, AdaGrad, and Adam.
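The adaptive learning-rate methods mentioned above all rescale each parameter's update by running statistics of its gradients. As an illustrative sketch (not material from the talk), here is a minimal NumPy implementation of the Adam update rule, using its commonly cited default hyperparameters, applied to a toy quadratic objective:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: exponential moving averages of the gradient (m) and its
    square (v), bias-corrected, then a per-parameter scaled update."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta = np.array([5.0])
m, v = np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

Plain SGD would use `theta -= lr * grad` directly; RMSProp keeps only the second-moment average `v`. After the loop, `theta` has converged close to the minimizer at 0.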

This talk was part of the Workshop on Fundamentals of Machine Learning Over Networks (MLoNs) and the KTH course EP3260, Fundamentals of MLoNs.

Course website:
https://sites.google.com/view/mlons/course-materials

Workshop website:
https://sites.google.com/view/mlon2019/home

Video "Hadi Ghauch: Large-scale training for deep neural networks" from the MLRG KTH channel.
Video information:
Published: April 7, 2019, 17:40:26
Duration: 01:01:07