Transporter Networks: Rearranging the Visual World for Robotic Manipulation
Learn more: https://transporternets.github.io/
Abstract: Robotic manipulation can be formulated as inducing a sequence of spatial displacements: where the space being moved can encompass an object, part of an object, or end effector. In this work, we propose the Transporter Network, a simple model architecture that rearranges deep features to infer spatial displacements from visual input -- which can parameterize robot actions. It makes no assumptions of objectness (e.g. canonical poses, models, or keypoints), it exploits spatial symmetries, and is orders of magnitude more sample efficient than our benchmarked alternatives in learning vision-based manipulation tasks: from stacking a pyramid of blocks, to assembling kits with unseen objects; from manipulating deformable ropes, to pushing piles of small objects with closed-loop feedback. Our method can represent complex multi-modal policy distributions and generalizes to multi-step sequential tasks, as well as 6DoF pick-and-place. Experiments on 10 simulated tasks show that it learns faster and generalizes better than a variety of end-to-end baselines, including policies that use ground-truth object poses. We validate our methods with hardware in the real world.
Narration: Laura Graesser
Channel: Andy Zeng
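For readers skimming the abstract: the core "transport" operation it describes, rearranging deep features to infer spatial displacements, reduces to a cross-correlation. Deep features cropped around the chosen pick location are convolved over deep features of the full scene, so every output pixel scores a candidate place pose. Below is a minimal sketch of that idea only, not the authors' released model: the function names (`encode`, `place_scores`), the random 1x1-conv encoder, the crop size, and the input channels are illustrative assumptions, and the paper's rotation loop (scoring a discretized set of crop orientations to cover SE(2) placements) is omitted.

```python
# Minimal sketch (PyTorch) of the "transport" operation, under the
# assumptions stated above -- not the authors' released implementation.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def encode(obs):
    # Stand-in fully convolutional encoder: a fixed random 1x1 conv.
    # The paper uses deep FCNs over a top-down orthographic RGB-D image.
    weight = torch.randn(8, obs.shape[1], 1, 1)
    return F.conv2d(obs, weight)  # [1, 8, H, W]

def place_scores(obs, pick_uv, crop=16):
    # Deep features of the full scene and of a patch centered on the pick.
    feats = encode(obs)  # [1, C, H, W]
    u, v = pick_uv
    kernel = feats[:, :, u - crop // 2 : u + crop // 2,
                         v - crop // 2 : v + crop // 2]  # [1, C, crop, crop]
    # "Transport": cross-correlate the pick-centered feature crop over the
    # scene features; each output pixel scores placing the picked content
    # at that location. (The paper repeats this over a set of crop
    # rotations to also score place orientations.)
    return F.conv2d(feats, kernel, padding=crop // 2)  # [1, 1, H+1, W+1]

obs = torch.randn(1, 4, 160, 160)  # e.g. RGB + height channels (assumed)
scores = place_scores(obs, pick_uv=(80, 80))
best = scores.flatten().argmax()  # index of the best-scoring place pixel
```

Picking itself is handled by a separate attention module that outputs a heatmap over pixels; the argmax of that heatmap would supply `pick_uv` above.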