Competition Winning Learning Rates
Leslie Smith, Senior Research Scientist, US Naval Research Laboratory
Presented at MLconf 2018
Abstract: It is well known that the learning rate is the most important hyper-parameter to tune when training deep neural networks. Surprisingly, training with dynamic learning rates can yield an order-of-magnitude speedup in training time. This talk will discuss my path from static learning rates to dynamic cyclical learning rates and, finally, to fast training with very large learning rates (I named this technique “super-convergence”). In particular, I will show that very large learning rates are a preferred method for regularizing training because they provide the twin benefits of training speed and good generalization. The super-convergence method was integrated into the fast.ai library, and the fast.ai team used it to win the DAWNBench and Kaggle iMaterialist challenges.
See Leslie's presentation slides on our slideshare page here: https://www.slideshare.net/SessionsEvents/competition-winning-learning-rates
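As a concrete illustration of the two schedules the abstract refers to, here is a minimal PyTorch sketch (my own, not code from the talk or the exact fast.ai implementation): CyclicLR gives the triangular cyclical learning rate, and OneCycleLR gives the single large-peak cycle behind super-convergence. The model, data, and hyper-parameter values are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import CyclicLR, OneCycleLR

model = nn.Linear(10, 2)  # placeholder model for illustration
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Cyclical learning rates: the LR sweeps linearly between base_lr and
# max_lr every step_size_up batches (triangular policy).
sched = CyclicLR(opt, base_lr=1e-4, max_lr=1e-1, step_size_up=500)

# Super-convergence uses a single cycle ("1cycle") with a very large peak
# LR that then anneals far below the starting rate, while momentum is
# cycled inversely. Swap it in for the scheduler above to try it:
# sched = OneCycleLR(opt, max_lr=3.0, epochs=5, steps_per_epoch=100)

for _ in range(100):  # dummy training loop on random data
    opt.zero_grad()
    loss = model(torch.randn(32, 10)).pow(2).mean()
    loss.backward()
    opt.step()
    sched.step()  # both schedulers are stepped once per batch
```

In practice the peak learning rate is typically chosen with an LR range test (a short run in which the rate is swept upward until the loss diverges) rather than the illustrative values above.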