
Activation Functions - EXPLAINED!

We start with the whats, whys, and hows, then delve into the details (the math) with examples.

REFERENCES
[1] Amazing discussion on the "dying relu problem": https://www.quora.com/What-is-the-dying-ReLU-problem-in-neural-networks
[2] Saturating functions that "squeeze" inputs: https://stats.stackexchange.com/questions/174295/what-does-the-term-saturating-nonlinearities-mean
[3] Plot math functions beautifully with desmos: https://www.desmos.com/
[4] The paper on Exponential Linear units (ELU): https://arxiv.org/abs/1511.07289
[5] Relatively new activation function (swish): https://arxiv.org/pdf/1710.05941v1.pdf
[6] Image of activation functions taken from Pawan Jain's blog: https://towardsdatascience.com/complete-guide-of-activation-functions-34076e95d044
[7] Why bias in Neural Networks? https://stackoverflow.com/questions/7175099/why-the-bias-is-necessary-in-ann-should-we-have-separate-bias-for-each-layer
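As a quick companion to the activation functions the references above discuss, here is a minimal NumPy sketch of ReLU, ELU [4], and Swish [5]. The formulas follow the cited papers; the parameter defaults (`alpha=1.0`, `beta=1.0`) are the commonly used values, not anything specific to the video.

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x). Negative inputs are zeroed, so their gradient
    # is zero too -- the cause of the "dying ReLU" problem [1].
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # ELU [4]: identity for x > 0, alpha * (exp(x) - 1) for x <= 0.
    # Unlike ReLU it saturates smoothly to -alpha for large negative x.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def swish(x, beta=1.0):
    # Swish [5]: x * sigmoid(beta * x), written here in expanded form.
    return x / (1.0 + np.exp(-beta * x))

# Compare the three on a few inputs
xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print("relu :", relu(xs))
print("elu  :", elu(xs))
print("swish:", swish(xs))
```

Plugging these into a plotting tool (e.g. Desmos [3]) makes the saturation behavior [2] easy to see.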

Video "Activation Functions - EXPLAINED!" from the CodeEmporium channel
Video information
Published: March 19, 2020, 7:03:08
Duration: 00:10:05