EI Seminar - Jeannette Bohg - Scaling Robot Learning for Long-Horizon Manipulation Tasks
Title: Scaling Robot Learning for Long-Horizon Manipulation Tasks with Language, Logic, and YouTube
Abstract: My long-term research goal is to enable real robots to manipulate any kind of object so that they can perform many different tasks in a wide variety of application scenarios, such as homes, hospitals, warehouses, or factories. Many of these tasks require long-horizon reasoning and the sequencing of skills to achieve a goal state. While learning approaches promise generalization beyond what the robot has seen during training, they require large-scale data collection, which is a challenge when operating on real robots and especially for long-horizon tasks. In this talk, I will present our work on enabling long-horizon reasoning on real robots for a variety of long-horizon tasks that can be solved by sequencing composable skill primitives. We approach this problem from several angles: (i) using large-scale, language-annotated video datasets as a cheap data source for skill learning; (ii) sequencing these learned skill primitives to resolve the geometric dependencies prevalent in long-horizon tasks; and (iii) learning grounded predicates, thereby enabling closed-loop, symbolic task planning.
Bio: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's degree in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, so that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.
Video: EI Seminar - Jeannette Bohg - Scaling Robot Learning for Long-Horizon Manipulation Tasks, from the MIT Embodied Intelligence channel
Other videos from the channel
![MIT EI Seminar - Laura Schulz - Curiouser and curiouser: why we make problems for ourselves](https://i.ytimg.com/vi/1l0u5gctDP4/default.jpg)
![EI Seminar - Graham Neubig - Learning to Explain and Explaining to Learn](https://i.ytimg.com/vi/CtcP5bvODzY/default.jpg)
![EI Seminar - Martin Riedmiller - Learning Controllers - From Engineering to AGI](https://i.ytimg.com/vi/Pno8xsrgWA4/default.jpg)
![EI Seminar Livestream - Max Tegmark](https://i.ytimg.com/vi/aDaOuBP-jN4/default.jpg)
![EI Seminar - Recent papers in Embodied Intelligence](https://i.ytimg.com/vi/wcVejqmb1mQ/default.jpg)
![EI Seminar - Beomjoon Kim - Making Robots See and Manipulate](https://i.ytimg.com/vi/GZ-oiwOeRc8/default.jpg)
![EI Seminar - Marco Pavone - Building Trust in AI for Autonomous Vehicles](https://i.ytimg.com/vi/HjOt-4k6haI/default.jpg)
![EI Seminar - Jacob Andreas - Good Old-fashioned LLMs (or, Autoformalizing the World)](https://i.ytimg.com/vi/_TrKARhF5cI/default.jpg)
![EI Seminar - Grey Yang - Tuning GPT-3 on a Single GPU via Zero-Shot Hyperparameter Transfer](https://i.ytimg.com/vi/xbCibcC9Ud0/default.jpg)
![EI Seminar - Maurice Fallon - Multi-Sensor Robot Navigation and Subterranean Exploration](https://i.ytimg.com/vi/4D4TbI1gGIg/default.jpg)
![EI Seminar - Chad Jenkins - Semantic Robot Programming... and Maybe Making the World a Better Place](https://i.ytimg.com/vi/UaTq6ojGuYo/default.jpg)
![EI Seminar - Joydeep Biswas](https://i.ytimg.com/vi/0vPNN0J8M44/default.jpg)
![MIT EI Seminar - Lerrel Pinto - Diverse data and efficient algorithms for robot learning](https://i.ytimg.com/vi/tRcwyC-ivMQ/default.jpg)
![EI Seminar - Yuan Gong - Audio Large Language Models: From Sound Perception to Understanding](https://i.ytimg.com/vi/uqsW2eK-Rms/default.jpg)
![Lawson Wong - High-Level Guidance for Generalizable Reinforcement Learning](https://i.ytimg.com/vi/8KGbtpkMBZc/default.jpg)
![EI Seminar - Monroe Kennedy - Collaborative Robotics: From Dexterity to Teammate Prediction](https://i.ytimg.com/vi/ii8ZNXaZ0hg/default.jpg)
![EI Seminar - Rob Fergus - Data Augmentation for Image-Based Reinforcement Learning](https://i.ytimg.com/vi/Ny2CpgPrtB8/default.jpg)
![EI Seminar - Jacob Steinhardt - Large Language Models as Statisticians](https://i.ytimg.com/vi/1m_fCzB__Oo/default.jpg)
![EI Seminar - Oriol Vinyals - The Deep Learning Toolbox: from AlphaFold to AlphaCode](https://i.ytimg.com/vi/dOlbnrsQy_I/default.jpg)
![Daniel Wolpert - Computational principles underlying the learning of sensorimotor repertoires](https://i.ytimg.com/vi/wp3c1E6oCTM/default.jpg)