
AdaMax Optimization from Scratch in Python

AdaMax builds upon the well-known Adam optimizer, but swaps out the L2 norm for an L∞ (infinity) norm in the gradient scaling factor. Let's explore the definition and an implementation!
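
For reference, here is the AdaMax update rule as given in the Adam paper linked under References (notation may differ slightly from the one used in the video):

```latex
m_t      = \beta_1 m_{t-1} + (1 - \beta_1) g_t   % first moment, same as Adam
u_t      = \max(\beta_2 u_{t-1}, |g_t|)          % L_\infty norm replaces Adam's L2 second moment
\theta_t = \theta_{t-1} - \frac{\alpha}{1 - \beta_1^t} \cdot \frac{m_t}{u_t}
```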

Code can be found over here: https://github.com/yacineMahdid/artificial-intelligence-and-machine-learning
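
As a rough sketch of what the implementation looks like (illustrative only; variable names and defaults here are assumptions, so check the repo above for the actual code):

```python
import numpy as np

def adamax_update(theta, grad, m, u, t, alpha=0.002, beta1=0.9, beta2=0.999):
    """One AdaMax step: Adam's first moment plus an infinity-norm scaling factor."""
    m = beta1 * m + (1 - beta1) * grad        # exponentially decayed first moment (same as Adam)
    u = np.maximum(beta2 * u, np.abs(grad))   # exponentially weighted infinity norm of past gradients
    theta = theta - (alpha / (1 - beta1 ** t)) * m / u  # bias-correct m, scale by u
    return theta, m, u

# Toy usage: minimize f(x) = x^2 (gradient 2x), starting from x = 5.
theta = np.array([5.0])
m, u = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 501):
    theta, m, u = adamax_update(theta, 2 * theta, m, u, t, alpha=0.1)
print(theta)  # ends near 0 (within the step size)
```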

## Credit
Check out this cool blog post if you want to learn more about stochastic gradient descent based optimization: https://ruder.io/optimizing-gradient-descent/index.html#adamax

Music is from the YouTube library!

## Table of Contents
- Introduction: 0:00
- Formula: 1:25
- Python Implementation: 3:50
- Conclusion: 9:43

## Reference
- Adam: A Method for Stochastic Optimization: https://arxiv.org/pdf/1412.6980.pdf
----
Join the Discord for general discussion: https://discord.gg/QpkxRbQBpf

----
Follow Me Online Here:

Twitter: https://twitter.com/CodeThisCodeTh1
GitHub: https://github.com/yacineMahdid
LinkedIn: https://www.linkedin.com/in/yacine-mahdid-809425163/
Instagram: https://www.instagram.com/yacine_mahdid/
___

Have a great week! 👋
