DFM: Deep Fourier Mimic for Expressive Dance Motion Learning
Authors: Ryo Watanabe, Chenhao Li, Marco Hutter
Project page: https://sony.github.io/DFM/
Paper: https://arxiv.org/abs/2502.10980
Abstract: As entertainment robots gain popularity, the demand for natural and expressive motion, particularly in dancing, continues to rise. Traditionally, dancing motions have been manually designed by artists, a process that is both labor-intensive and restricted to simple motion playback, lacking the flexibility to incorporate additional tasks such as locomotion or gaze control during dancing. To overcome these challenges, we introduce Deep Fourier Mimic (DFM), a novel method that combines advanced motion representation with Reinforcement Learning (RL) to enable smooth transitions between motions while concurrently managing auxiliary tasks during dance sequences. While previous frequency-domain motion representations have successfully encoded dance motions into latent parameters, they often impose overly rigid periodic assumptions at the local level, reducing tracking accuracy and motion expressiveness, both of which are critical for entertainment robots. By relaxing these locally periodic constraints, our approach not only enhances tracking precision but also facilitates smooth transitions between different motions. Furthermore, the learned RL policy supports simultaneous base activities, such as locomotion and gaze control, allowing entertainment robots to engage more dynamically and interactively with users rather than merely replaying static, pre-designed dance routines.
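The abstract's central technical point, that a single global set of periodic parameters per channel enforces rigid periodicity while time-varying parameters allow smooth transitions, can be illustrated with a toy reconstruction. The Python sketch below is illustrative only and not the authors' implementation; the function names and the sinusoidal parameterization (amplitude, frequency, phase, and offset per channel, in the style of periodic-autoencoder-like frequency-domain representations) are assumptions made for this example. See the paper for DFM's actual formulation.

    # Minimal sketch (not the authors' code): contrasts a strictly periodic
    # frequency-domain motion parameterization with a relaxed, time-varying one.
    import numpy as np

    def reconstruct_periodic(t, amp, freq, phase, offset):
        """Strictly periodic channel: q(t) = a * sin(2*pi*(f*t + phi)) + b.
        A single fixed (a, f, phi, b) tuple per channel forces global periodicity."""
        return amp * np.sin(2.0 * np.pi * (freq * t + phase)) + offset

    def reconstruct_relaxed(t, amp_t, freq_t, phase0, offset_t):
        """Relaxed channel: amplitude, frequency, and offset may vary per frame.
        The phase is accumulated as the integral of the instantaneous frequency,
        so the signal stays continuous when the latent parameters change."""
        dt = np.gradient(t)
        phase_t = phase0 + np.cumsum(freq_t * dt)  # integrate frequency over time
        return amp_t * np.sin(2.0 * np.pi * phase_t) + offset_t

    t = np.linspace(0.0, 4.0, 400)  # 4 s sampled at 100 Hz
    # Rigid parameterization: one global (a, f, phi, b) for the whole sequence.
    q_rigid = reconstruct_periodic(t, amp=0.5, freq=1.0, phase=0.0, offset=0.1)
    # Relaxed parameterization: ramp the frequency from 1 Hz to 2 Hz mid-sequence,
    # e.g. when blending from one dance motion into another.
    freq_t = np.interp(t, [0.0, 1.5, 2.5, 4.0], [1.0, 1.0, 2.0, 2.0])
    q_free = reconstruct_relaxed(t, amp_t=0.5, freq_t=freq_t, phase0=0.0, offset_t=0.1)

In the rigid version, changing (freq, phase) partway through a sequence produces a discontinuity in q(t); integrating a time-varying frequency instead keeps the reconstructed signal continuous, which is the property that relaxing locally periodic constraints exploits to enable smooth transitions between motions.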
Video: DFM: Deep Fourier Mimic for Expressive Dance Motion Learning, from the Robotic Systems Lab: Legged Robotics at ETH Zürich channel
Video information
Uploaded: February 21, 2025, 17:47:46
Duration: 00:02:59