Compliant humanoid robot COMAN learns to walk efficiently
The compliant humanoid robot COMAN learns to walk with two different gaits: one with a fixed center-of-mass height, and one with varying height. The varying-height center-of-mass trajectory was learned by reinforcement learning in order to minimize the electric energy consumption during walking. The optimized gait achieves an 18% reduction in energy consumption in the sagittal plane, thanks to the robot's passive compliance: the springs in its knees and ankles store and release energy efficiently. In addition, the varying-height walking looks more natural and smooth than conventional fixed-height walking.
This research was presented at the International Conference on Intelligent Robots and Systems (IROS 2011), held September 25-30, 2011 in San Francisco, California.
Video credits:
--------------------------
Dr. Petar Kormushev
http://kormushev.com
Dr. Barkan Ugurlu
Dr. Nikos Tsagarakis
Affiliation:
-------------------------
Department of Advanced Robotics
Italian Institute of Technology
Publication:
---------------------------------
Kormushev, P., Ugurlu, B., Calinon, S., Tsagarakis, N., and Caldwell, D.G., "Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization", In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS-2011), San Francisco, 2011.
http://kormushev.com/research/publications/
Paper title:
--------------------------
Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization
Authors:
---------------------------------
Petar Kormushev, Barkan Ugurlu, Sylvain Calinon, Nikolaos G. Tsagarakis, Darwin G. Caldwell
Paper abstract:
--------------------------
We present a learning-based approach for minimizing the electric energy consumption during walking of a passively-compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory which uses efficiently the robot's passive compliance. To do this, we propose a reinforcement learning method which evolves the policy parameterization dynamically during the learning process and thus manages to find better policies faster than by using fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN where it achieves significant energy reduction.
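The evolving-parameterization idea from the abstract can be sketched numerically: represent the center-of-mass height over the gait phase with radial basis functions, run episodic policy search on the weights, and periodically re-project the current trajectory onto a richer basis so the search continues in a larger policy space. Everything below is an illustrative stand-in, not the paper's implementation: the target profile, the elite-averaging update (the paper uses an EM-based update), and all constants are assumptions, and on the real robot the cost would be measured electric energy rather than distance to a known profile.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase variable over one step cycle and a hypothetical "optimal" CoM-height
# profile (meters).  All numbers here are illustrative only.
t = np.linspace(0.0, 1.0, 200)
target = 0.5 + 0.02 * np.sin(4 * np.pi * t)

def features(n_basis):
    """Gaussian RBF features over the gait phase."""
    centers = np.linspace(0.0, 1.0, n_basis)
    width = 1.0 / n_basis
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def cost(phi, w):
    """Surrogate energy cost of the trajectory phi @ w."""
    return float(np.mean((phi @ w - target) ** 2))

def improve(phi, w, iters=80, pop=20, sigma=0.05, decay=0.97, elites=5):
    """Episodic policy search: perturb the weights, keep the best rollouts.
    A simple elite-averaging update, standing in for the paper's
    EM-based reinforcement learning update."""
    for _ in range(iters):
        eps = sigma * rng.standard_normal((pop, w.size))
        costs = np.array([cost(phi, w + e) for e in eps])
        best = np.argsort(costs)[:elites]
        w = w + eps[best].mean(axis=0)
        sigma *= decay          # anneal exploration noise
    return w

def evolve_parameterization(schedule=(2, 4, 8)):
    """Evolve the parameterization during learning: at each stage,
    re-project the current trajectory onto a richer RBF basis by
    least squares, then continue the search in the larger space."""
    phi = features(schedule[0])
    w = np.full(schedule[0], 0.5)          # crude flat-height initialization
    for n in schedule:
        phi_new = features(n)
        w = np.linalg.lstsq(phi_new, phi @ w, rcond=None)[0]
        phi = phi_new
        w = improve(phi, w)
    return cost(phi, w)

initial = cost(features(2), np.full(2, 0.5))
final = evolve_parameterization()
print(f"initial cost {initial:.2e} -> learned cost {final:.2e}")
```

Starting the search with a coarse basis and refining it mid-learning is what lets the method explore cheaply early on and still express fine trajectory detail later, which is the intuition behind finding better policies faster than a fixed parameterization.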
Other videos:
-------------------------------------
http://kormushev.com/research/videos/