
Neural Net Learning as Functional Kernel Gradient Descent (ft. Arthur Jacot)

In function space, with the right kernel to compare functions, neural net training can be regarded as a simple gradient descent, which is even convex for some common loss functions, as discussed by Arthur Jacot, PhD candidate in mathematics at EPFL.
https://people.epfl.ch/arthur.jacot

Check Arthur's 2018 NeurIPS paper on the neural tangent kernel
https://www.youtube.com/watch?v=raT2ECrvbag
https://arxiv.org/abs/1806.07572
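
The paper shows that, in the infinite-width limit, training follows kernel gradient descent in function space with the neural tangent kernel Θ(x, x') = ⟨∂f(x;θ)/∂θ, ∂f(x';θ)/∂θ⟩; for the squared loss the function evolves as df_t/dt = -Θ (f_t - y). The snippet below is only a minimal illustrative sketch (not code from the paper or the video) that computes the empirical NTK of a tiny MLP with JAX; the architecture, widths, and 1/√n scaling are assumptions chosen for brevity.

import jax
import jax.numpy as jnp

def init_params(key, widths=(1, 64, 64, 1)):
    # Standard-normal weights; the 1/sqrt(fan_in) scaling is applied in the forward pass
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append(jax.random.normal(sub, (n_in, n_out)))
    return params

def mlp(params, x):
    # Small ReLU network with a scalar output
    h = x
    for i, w in enumerate(params):
        h = h @ w / jnp.sqrt(w.shape[0])
        if i < len(params) - 1:
            h = jax.nn.relu(h)
    return h.squeeze()

def empirical_ntk(params, x1, x2):
    # Jacobians of the scalar output with respect to every parameter tensor
    j1 = jax.jacobian(mlp)(params, x1)
    j2 = jax.jacobian(mlp)(params, x2)
    # Inner product over all parameters gives Θ(x1, x2)
    return sum(jnp.vdot(a, b) for a, b in zip(j1, j2))

key = jax.random.PRNGKey(0)
params = init_params(key)
x1 = jnp.array([[0.3]])
x2 = jnp.array([[0.7]])
print(empirical_ntk(params, x1, x2))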

Video "Neural Net Learning as Functional Kernel Gradient Descent (ft. Arthur Jacot)" from the channel ZettaBytes, EPFL
Video information
Published: April 10, 2019, 20:00:00
Duration: 00:03:26