Strategies for Active Machine Learning
Robert Nowak
Professor, University of Wisconsin-Madison
Abstract
The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains and often using huge amounts of human-labeled training data. Machines can recognize objects in images and translate text, but they must be trained with more images and text than a person could see in a lifetime. The computational complexity of training has been offset by recent technological advances, but the cost of training data is measured in terms of the human effort required to label it. People are not getting faster or cheaper, so generating labeled training datasets has become a major bottleneck in ML pipelines.
Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This talk explores the development of active ML theory and methods over the past decade, including a new approach applicable to kernel methods and neural networks, which views the learning problem through the lens of representer theorems. This perspective highlights the effect of adding a new training example on the functional representation, leading to a new criterion for actively selecting examples.
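The core idea of adaptively selecting the most informative examples can be illustrated with a generic criterion such as uncertainty sampling; note that this is a common baseline strategy, not the representer-theorem-based criterion the talk introduces. A minimal sketch, assuming a classifier that outputs class probabilities over an unlabeled pool:

```python
import numpy as np

def uncertainty_sampling(probs, k=1):
    """Pick the k unlabeled examples whose predicted class
    probabilities are closest to uniform (highest entropy),
    i.e. the examples the current model is least sure about."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Indices of the k highest-entropy examples, most uncertain first
    return np.argsort(entropy)[-k:][::-1]

# Toy pool of 4 unlabeled examples scored by a binary classifier
probs = np.array([[0.99, 0.01],
                  [0.55, 0.45],   # nearly uniform: most informative
                  [0.90, 0.10],
                  [0.70, 0.30]])
print(uncertainty_sampling(probs, k=2))  # [1 3]
```

In a full active-learning loop, the selected examples would be sent to a human annotator, added to the training set, and the model retrained before the next round of selection.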
Bio
Robert Nowak holds the Nosbusch Professorship in Engineering at the University of Wisconsin-Madison, where his research focuses on signal processing, machine learning, optimization, and statistics.
Video: Strategies for Active Machine Learning, from the Stanford Research Talks channel.