Activation Functions - EXPLAINED!
We start with the whats, whys, and hows, then delve into the details (the math) with examples.
REFERENCES
[1] Amazing discussion on the "dying relu problem": https://www.quora.com/What-is-the-dying-ReLU-problem-in-neural-networks
[2] Saturating functions that "squeeze" inputs: https://stats.stackexchange.com/questions/174295/what-does-the-term-saturating-nonlinearities-mean
[3] Plot math functions beautifully with desmos: https://www.desmos.com/
[4] The paper on Exponential Linear units (ELU): https://arxiv.org/abs/1511.07289
[5] Relatively new activation function (swish): https://arxiv.org/pdf/1710.05941v1.pdf
[6] Image of activation functions used from Pawan Jain's blog: https://towardsdatascience.com/complete-guide-of-activation-functions-34076e95d044
[7] Why bias in Neural Networks? https://stackoverflow.com/questions/7175099/why-the-bias-is-necessary-in-ann-should-we-have-separate-bias-for-each-layer
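The activation functions discussed above (sigmoid, ReLU, ELU, swish) can be sketched as plain Python functions. This is a minimal illustration of the definitions, not production code; the `alpha` parameter for ELU follows the common default from the paper:

```python
import math

def sigmoid(x):
    # Saturating function: outputs are squeezed into (0, 1), so
    # gradients vanish for large |x| (see reference [2]).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Outputs zero for all x < 0; a unit stuck in that region
    # receives zero gradient -- the "dying ReLU" problem from [1].
    return max(0.0, x)

def elu(x, alpha=1.0):
    # ELU [4]: smooth exponential tail for negative inputs instead
    # of ReLU's hard zero, which keeps gradients flowing.
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def swish(x):
    # Swish [5]: x * sigmoid(x), a smooth, non-monotonic function.
    return x * sigmoid(x)
```

For example, `relu(-3.0)` returns `0.0` (the dying-ReLU region), while `elu(-3.0)` returns a small negative value, keeping the unit responsive.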
Video "Activation Functions - EXPLAINED!" from the CodeEmporium channel.