
Do statistical models understand the world?

Speaker: Dr. Ian Goodfellow, Google

Machine learning algorithms have reached human-level performance on a variety of benchmark tasks. This raises the question of whether these algorithms have also reached human-level 'understanding' of these tasks. By designing inputs specifically to confuse machine learning algorithms, we show that statistical models ranging from logistic regression to deep convolutional networks fail in predictable ways when presented with statistically unusual inputs. Fixing these specific failures allows deep models to attain unprecedented levels of accuracy, but the philosophical question of what it means to understand a task and how to build a machine that does so remains open.
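The abstract does not name the attack technique, but a well-known method from this line of work is the fast gradient sign method: perturb each input feature by a small step in the direction that increases the model's loss. The sketch below applies it to a simple logistic-regression model with synthetic weights and inputs (all values here are illustrative, not from the talk):

```python
import numpy as np

# Sketch of an adversarial perturbation against logistic regression,
# in the spirit of the fast gradient sign method. The model weights
# and the input are synthetic, chosen only for illustration.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed logistic-regression model: P(y=1 | x) = sigmoid(w.x + b)
w = rng.normal(size=100)
b = 0.0

# A clean input the model classifies confidently as class 1
x = 0.1 * np.sign(w)                  # aligned with w, so w.x > 0
p_clean = sigmoid(w @ x + b)

# For true label y = 1, the gradient of the log loss w.r.t. x is
# (p - 1) * w. Stepping epsilon in the direction of its sign nudges
# every feature slightly against the model's evidence.
epsilon = 0.25
x_adv = x + epsilon * np.sign((p_clean - 1.0) * w)
p_adv = sigmoid(w @ x_adv + b)

print(f"confidence on clean input:       {p_clean:.4f}")
print(f"confidence on perturbed input:   {p_adv:.4f}")
```

Although no single feature moves by more than epsilon, the per-feature nudges add up across all 100 dimensions, so the model's confidence in the correct class collapses; this accumulation of many small, aligned changes is exactly the "predictable failure" on statistically unusual inputs that the abstract describes.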

Ian Goodfellow is a research scientist at Google. He earned a PhD in machine learning from Université de Montréal in 2014. His PhD advisors were Yoshua Bengio and Aaron Courville. His studies were funded by the Google PhD Fellowship in Deep Learning. During his PhD studies, he wrote Pylearn2, the open source deep learning research library, and introduced a variety of new deep learning algorithms. Previously, he obtained a BSc and MSc in computer science from Stanford University, where he was one of the earliest members of Andrew Ng's deep learning research group.

Recorded at Big Techday 8 (http://www.bigtechday.com) of TNG Technology Consulting (http://www.tngtech.com) on June 12th, 2015 in Munich, Germany.

Video "Do statistical models understand the world?" from the TNG Technology Consulting GmbH channel.
Video information
Uploaded: August 28, 2015, 1:47:09
Duration: 00:53:32