Pieter Abbeel: Deep Learning for Robotics (NIPS 2017 Keynote)
Abstract:
Computer scientists are increasingly concerned about the many ways that machine learning can reproduce and reinforce forms of bias. When ML systems are incorporated into core social institutions, like healthcare, criminal justice and education, issues of bias and discrimination can be extremely serious. But what can be done about it? Part of the trouble with bias in machine learning in high-stakes decision making is that it can be the result of one or many factors: the training data, the model, the system goals, and whether the system works less well for some populations, among several others. Given the difficulty of understanding how a machine learning system produced a particular result, bias is often discovered after a system has been producing unfair results in the wild. But there is another problem as well: the definition of bias changes significantly depending on your discipline, and there are exciting approaches from other fields that have not yet been included by computer science. This talk will look at the recent literature on bias in machine learning, consider how we can incorporate approaches from the social sciences, and offer new strategies to address bias.
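One of the factors the abstract names is a system that "works less well for some populations." A minimal sketch of how such a gap can be made visible is a per-group accuracy check; the function name, labels, and data below are hypothetical illustrations, not from the talk.

```python
# Minimal sketch: surface differential model performance across
# population groups by computing accuracy per group. All names and
# data here are hypothetical, for illustration only.

def per_group_accuracy(y_true, y_pred, groups):
    """Return accuracy computed separately for each group label."""
    stats = {}
    for g in set(groups):
        # Collect (true, predicted) pairs belonging to this group.
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        correct = sum(1 for t, p in pairs if t == p)
        stats[g] = correct / len(pairs)
    return stats

# Hypothetical binary labels and predictions for two groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(per_group_accuracy(y_true, y_pred, groups))
# Group A is correct on 3 of 4 examples, group B on 2 of 4 —
# a performance gap worth investigating before deployment.
```

A gap like this is only one symptom; as the abstract notes, bias can also enter through the training data, the model, or the system goals, so a per-group metric is a starting diagnostic rather than a complete audit.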
Bio:
Pieter Abbeel (Associate Professor at UC Berkeley, Research Scientist at OpenAI, Co-Founder of Gradescope) works in machine learning and robotics. In particular, his research is on making robots learn from people (apprenticeship learning) and on making robots learn through their own trial and error (reinforcement learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, and organizing laundry. His group has pioneered deep reinforcement learning for robotics, including learning visuomotor skills and simulated locomotion. He has won various awards, including best paper awards at ICML, NIPS, and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the DARPA Young Faculty Award (DARPA-YFA), the National Science Foundation Faculty Early Career Development Program Award (NSF-CAREER), the Presidential Early Career Award for Scientists and Engineers (PECASE), the CRA-E Undergraduate Research Faculty Mentoring Award, the MIT TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award.
Video: Pieter Abbeel: Deep Learning for Robotics (NIPS 2017 Keynote), from the channel Steven Van Vaerenbergh
Video information
December 18, 2017, 14:07:12
Duration: 00:51:09