Taming Latency Variability and Scaling Deep Learning, by Jeff Dean, Google, 20131016

I'll describe a collection of techniques and practices for lowering response times (especially in the tail of the latency distribution) in large distributed systems whose components run on shared clusters of machines. I'll highlight some recent work on using large-scale distributed systems for training deep neural networks. I'll discuss how we can utilize both model-level parallelism and data-level parallelism in order to train large models on large datasets more quickly. I'll also highlight how we have applied this work to a variety of problems in domains such as speech recognition, object recognition, and language modeling.
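The abstract only names the tail-latency techniques, but one well-known example from this line of work is the hedged request: send the request to one replica, and if no answer arrives within a short deadline, send a backup copy to a second replica and use whichever response returns first. Below is a minimal Python sketch of that idea; the simulated replicas, timings, and function names are illustrative assumptions, not code from the talk.

import concurrent.futures
import random
import time

def query_replica(replica_id):
    # Simulated backend call: most replies are fast, but roughly one in
    # ten lands in the slow tail (a stand-in for a real RPC).
    delay = 0.5 if random.random() < 0.1 else 0.01
    time.sleep(delay)
    return "result from replica %d" % replica_id

def hedged_request(hedge_after=0.05):
    # Send the request to a primary replica; if it has not answered within
    # hedge_after seconds, issue a backup request to a second replica and
    # return whichever response arrives first.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    try:
        futures = [pool.submit(query_replica, 0)]
        done, _ = concurrent.futures.wait(futures, timeout=hedge_after)
        if not done:  # the primary is in the tail: hedge to a backup
            futures.append(pool.submit(query_replica, 1))
            done, _ = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return done.pop().result()
    finally:
        # Do not wait for the straggler; it finishes in the background.
        pool.shutdown(wait=False)

if __name__ == "__main__":
    for _ in range(5):
        print(hedged_request())

The backup request caps the client-visible latency at roughly the hedge deadline plus one fast response, at the cost of a small amount of extra load.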
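On the training side, the data-level parallelism the abstract mentions can be pictured as several worker replicas processing different shards of the data while asynchronously exchanging weights and gradients with a shared parameter store. The sketch below is a toy, single-process illustration of that pattern under assumed names (ParameterServer, worker) and a toy linear least-squares model; it is not the system described in the talk.

import threading
import numpy as np

class ParameterServer:
    # Holds the shared weights and applies gradient updates pushed by
    # worker replicas (a toy stand-in for a real parameter server).
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr
        self.lock = threading.Lock()

    def pull(self):
        with self.lock:
            return self.w.copy()

    def push(self, grad):
        with self.lock:
            self.w -= self.lr * grad

def worker(ps, X, y, steps):
    # One data-parallel replica: trains on its own shard, pulling fresh
    # weights and pushing gradients asynchronously.
    for _ in range(steps):
        w = ps.pull()
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient on this shard
        ps.push(grad)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    X = rng.normal(size=(1000, 2))
    y = X @ true_w
    ps = ParameterServer(dim=2)
    shards = np.array_split(np.arange(1000), 4)  # one data shard per worker
    threads = [threading.Thread(target=worker, args=(ps, X[s], y[s], 200))
               for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("learned weights:", ps.pull())  # approaches [2, -1]

Model-level parallelism, by contrast, splits a single model's computation across machines; the two can be combined, with each data-parallel model replica itself spread over several workers.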

Video of "Taming Latency Variability and Scaling Deep Learning, by Jeff Dean, Google, 20131016" from the San Francisco Bay ACM channel.
Video information
Uploaded: November 10, 2013, 20:37:08
Duration: 01:13:34