
Distributed TensorFlow training (Google I/O '18)

To efficiently train machine learning models, you will often need to scale your training to multiple GPUs, or even multiple machines. TensorFlow now offers rich functionality to achieve this with just a few lines of code. Join this session to learn how to set this up.
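The "few lines of code" refers to the Distribution Strategy API. As a minimal sketch (the API has moved since I/O '18, from `tf.contrib.distribute` to today's `tf.distribute`; the model here is a placeholder, not one from the talk), synchronous in-graph replication across local GPUs looks like this:

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU
# (or falls back to a single CPU/GPU device) and aggregates
# gradients with all-reduce each step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the scope are mirrored on all replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# model.fit(...) then runs each training step on all replicas.
```

The rest of the training loop (input pipeline, `fit`/`train`) is unchanged, which is the point of the API.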

Rate this session by signing in on the I/O website here → https://goo.gl/sBZMEm

Distribution Strategy API:
https://goo.gl/F9vXqQ
https://goo.gl/Zq2xvJ

ResNet50 Model Garden example with MirroredStrategy API:
https://goo.gl/3UWhj8

Performance Guides:
https://goo.gl/doqGE7
https://goo.gl/NCnrCn

Commands to set up a GCE instance and run distributed training:
https://goo.gl/xzwN4C

Multi-machine distributed training with train_and_evaluate:
https://goo.gl/kyikAC
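In the Estimator-based setup shown in the talk, each machine learns its role in the cluster from the TF_CONFIG environment variable before `tf.estimator.train_and_evaluate` is called. A sketch of that variable (hostnames, ports, and cluster shape below are placeholder assumptions, not values from the talk):

```python
import json
import os

# TF_CONFIG describes the whole cluster plus this machine's role.
# Every machine gets the same "cluster" dict but its own "task".
tf_config = {
    "cluster": {
        "chief": ["host0.example.com:2222"],
        "worker": ["host1.example.com:2222", "host2.example.com:2222"],
        "ps": ["host3.example.com:2222"],
    },
    # This particular process runs as worker number 0.
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)

# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# would then read TF_CONFIG and start this process in its role.
```

The same script is launched on every machine; only the `"task"` entry differs per machine.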

Watch more TensorFlow sessions from I/O '18 here → https://goo.gl/GaAnBR
See all the sessions from Google I/O '18 here → https://goo.gl/q1Tr8x

Subscribe to the TensorFlow channel → https://goo.gl/ht3WGe

#io18 · Event: Google I/O 2018 · Product: TensorFlow · Speakers: Priya Gupta, Anjali Sridhar

Video: Distributed TensorFlow training (Google I/O '18), from the TensorFlow channel
Video information
Published: May 12, 2018, 8:34:41
Duration: 00:35:29