Autonomous robot car control demonstration at CES 2016
See the explanation video at https://twitter.com/Toyota/status/685657676961873920
We demonstrate how cars can learn to drive by themselves. In this demo, each car uses only raw sensor data (a simulated lidar: 32 distances with their angles) to control its steering and speed via deep reinforcement learning. The cars do not know their positions; they must understand their environment from raw sensor data alone and decide on optimal control. (A QR code on each car is used to simulate the lidar readings.) We give no driving rules beforehand: all of these driving techniques are learned by the cars from their own experience.
A red car is manually controlled by a human. The other cars try to avoid collisions.
The learned models are shared among the cars in real time.
This is joint work with Toyota and NTT.
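The control scheme described above (raw lidar in, steering and speed out, chosen by a learned value function) can be sketched as follows. This is a minimal illustrative sketch only: the network sizes, the discretized action set, and all names here are assumptions for exposition, not the demo's actual model.

```python
import numpy as np

# Toy Q-network: maps 32 simulated lidar distances to Q-values over
# discretized (steering, speed) actions, with epsilon-greedy selection.
N_BEAMS = 32                        # assumed number of lidar beams
STEERINGS = [-0.5, 0.0, 0.5]        # illustrative steering commands (rad)
SPEEDS = [0.0, 0.5, 1.0]            # illustrative normalized speed levels
ACTIONS = [(s, v) for s in STEERINGS for v in SPEEDS]

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_BEAMS, 64))       # hidden-layer weights
W2 = rng.normal(0.0, 0.1, (64, len(ACTIONS)))  # output-layer weights

def q_values(lidar):
    """Forward pass: lidar distances -> one Q-value per action."""
    hidden = np.maximum(0.0, lidar @ W1)  # ReLU hidden layer
    return hidden @ W2

def select_action(lidar, epsilon=0.1):
    """Epsilon-greedy choice of a (steering, speed) pair."""
    if rng.random() < epsilon:
        return ACTIONS[rng.integers(len(ACTIONS))]  # explore
    return ACTIONS[int(np.argmax(q_values(lidar)))]  # exploit

steering, speed = select_action(np.ones(N_BEAMS))
```

In a full agent the weights would be trained from collision/progress rewards; sharing `W1` and `W2` among all cars would correspond to the real-time model sharing mentioned above.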
Video "Autonomous robot car control demonstration in CES2016" from the Preferred Networks channel