Tips N Tricks #6: How to train multiple deep neural networks on TPUs simultaneously

In this video, I will show you how you can train multiple neural networks on TPUs simultaneously. You can use this trick to train multiple folds of a dataset really quickly and avoid all the hyperparameter optimization hassle that is usually associated with TPUs. I do not cover how TPUs work here.

Please note: you need to use "xm.optimizer_step(optimizer, barrier=True)" in the train_fn. This is not mentioned in the video.
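For reference, here is a minimal sketch of the trick, not the exact kernel code (see the Kaggle link below for the full version): each fold gets its own TPU core through xm.xla_device(), the folds run in parallel threads, and barrier=True in xm.optimizer_step forces each training step to execute. The toy model, random data, fold count, and the train_fn signature are illustrative placeholders; it assumes a TPU runtime with torch_xla installed.

import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from joblib import Parallel, delayed

def train_fn(fold, epochs=1):
    # one XLA device per fold; device ordinals are 1-indexed on a TPU
    device = xm.xla_device(n=fold + 1)
    model = nn.Linear(10, 1).to(device)  # toy model for illustration
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        x = torch.randn(32, 10).to(device)  # stand-in for this fold's batch
        y = torch.randn(32, 1).to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        # barrier=True is required because no ParallelLoader is used here;
        # it makes the lazy XLA graph execute at every step
        xm.optimizer_step(optimizer, barrier=True)
    return fold, loss.item()

# one fold per core; a TPU v3-8 exposes 8 cores
results = Parallel(n_jobs=8, backend="threading")(
    delayed(train_fn)(f) for f in range(8)
)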

You can see the full code here: https://www.kaggle.com/abhishek/super-duper-fast-pytorch-tpu-kernel

If you want to present something on my live show, fill out the form here: http://bit.ly/AbhishekTalks

#TPU #Tricks #DataScience

Follow me on:
Twitter: https://twitter.com/abhi1thakur
LinkedIn: https://www.linkedin.com/in/abhi1thakur/
Kaggle: https://kaggle.com/abhishek

Channel: Abhishek Thakur
Published: April 17, 2020, 21:22:27
Duration: 00:15:24