
Learning Robust Control Policies for End-to-End Driving in Simulation | RA-L/ICRA 2020

This talk was streamed as part of a presentation at the IEEE International Conference on Robotics and Automation (ICRA) for the paper entitled:
Learning Robust Control Policies for End-to-End Autonomous Driving From Data-Driven Simulation.
Amini, A., Gilitschenski, I., Phillips, J., Moseyko, J., Banerjee, R., Karaman, S., & Rus, D. (2020). IEEE Robotics and Automation Letters, 5(2), 1143-1150.

For more details on this work, to read the paper, or to access the code, please visit: http://www.mit.edu/~amini/vista/

Abstract:
In this work, we present a data-driven simulation and training engine capable of learning end-to-end autonomous vehicle control policies using only sparse rewards. By leveraging real, human-collected trajectories through an environment, we render novel training data that allows virtual agents to drive along a continuum of new local trajectories consistent with the road appearance and semantics, each with a different view of the scene. We demonstrate the ability of policies learned within our simulator to generalize to and navigate in previously unseen real-world roads, without access to any human control labels during training. Our results validate the learned policy onboard a full-scale autonomous vehicle, including in previously un-encountered scenarios, such as new roads and novel, complex, near-crash situations. Our methods are scalable, leverage reinforcement learning, and apply broadly to situations requiring effective perception and robust operation in the physical world.
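To make the "sparse rewards" idea concrete, below is a minimal, illustrative sketch of the kind of policy-gradient training loop the abstract describes: the agent receives no per-step supervision or human control labels, and the only learning signal is how long it drives before leaving the lane. The `ToyLaneEnv` class and all names here are hypothetical stand-ins, not the paper's VISTA simulator or the authors' actual implementation.

```python
import numpy as np

# Hypothetical stand-in for a data-driven driving simulator:
# reset() -> observation, step(steering) -> (observation, done).
# No dense reward is given; the episode simply ends on lane departure.
class ToyLaneEnv:
    def __init__(self, horizon=200):
        self.horizon = horizon

    def reset(self):
        self.offset, self.t = 0.0, 0
        return np.array([self.offset])

    def step(self, steer):
        # Lateral offset drifts with noise; steering can correct it.
        self.offset += 0.1 * steer + np.random.normal(0.0, 0.02)
        self.t += 1
        done = abs(self.offset) > 1.0 or self.t >= self.horizon
        return np.array([self.offset]), done


def policy_mean(obs, w):
    return float(np.tanh(w @ obs))  # deterministic mean steering command


def run_episode(env, w, sigma=0.2):
    obs, done = env.reset(), False
    grad_sum, steps = np.zeros_like(w), 0
    while not done:
        mean = policy_mean(obs, w)
        action = mean + sigma * np.random.randn()  # Gaussian exploration
        # d/dw log N(action | mean, sigma^2), with mean = tanh(w . obs)
        grad_sum += (action - mean) / sigma**2 * (1.0 - mean**2) * obs
        obs, done = env.step(action)
        steps += 1
    # Sparse return, available only at termination: distance driven
    # before lane departure (here, number of steps survived).
    return steps, grad_sum


# REINFORCE with a moving baseline: longer crash-free episodes are reinforced.
env, w, baseline = ToyLaneEnv(), np.zeros(1), 0.0
for it in range(500):
    ret, grad = run_episode(env, w)
    baseline = 0.9 * baseline + 0.1 * ret
    w += 1e-3 * (ret - baseline) * grad
```

In this toy setup the agent should learn to steer against its lateral offset purely from the episode-length signal; the paper applies the same principle with rendered camera views from real driving data and a learned end-to-end vision policy.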

Video: Learning Robust Control Policies for End-to-End Driving in Simulation | RA-L/ICRA 2020, from the Alexander Amini channel.
Video information
Published: June 10, 2020
Duration: 00:09:59