[SIGGRAPH 2019] Learning Character-Agnostic Motion for Motion Retargeting in 2D
Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or.
Learning Character-Agnostic Motion for Motion Retargeting in 2D, ACM Transactions on Graphics (SIGGRAPH 2019)
Webpage: https://motionretargeting2d.github.io/
Abstract:
Analyzing human motion is a challenging task with a wide variety of applications in computer vision and in graphics. One such application, of particular importance in computer animation, is the retargeting of motion from one performer to another. While humans move in three dimensions, the vast majority of human motions are captured using video, requiring 2D-to-3D pose and camera recovery, before existing retargeting approaches may be applied. In this paper, we present a new method for retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters. In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view. Our key idea is to train a deep neural network to decompose temporal sequences of 2D poses into three components: motion, skeleton, and camera view-angle. Having extracted such a representation, we are able to re-combine motion with novel skeletons and camera views, and decode a retargeted temporal sequence, which we compare to a ground truth from a synthetic dataset. We demonstrate that our framework can be used to robustly extract human motion from videos, bypassing 3D reconstruction, and outperforming existing retargeting methods, when applied to videos in-the-wild. It also enables additional applications, such as performance cloning, video-driven cartoons, and motion retrieval.
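The key idea in the abstract — encode a 2D pose sequence into separate motion, skeleton, and view-angle latents, then recombine motion from one performer with the skeleton and view of another — can be sketched as follows. This is a purely illustrative toy (linear maps and crude temporal pooling stand in for the paper's temporal convolutional encoders/decoder; all names and latent sizes are assumptions, not the authors' code), just to show the decompose-and-recombine interface:

```python
import numpy as np

rng = np.random.default_rng(0)
J, T = 15, 64               # joints per pose, frames per sequence (assumed)
D_m, D_s, D_v = 128, 64, 8  # motion / skeleton / view latent sizes (assumed)

# Toy linear "encoders" and "decoder" standing in for the real networks.
W_m = rng.standard_normal((D_m, J * 2))
W_s = rng.standard_normal((D_s, J * 2))
W_v = rng.standard_normal((D_v, J * 2))
W_dec = rng.standard_normal((J * 2, D_m + D_s + D_v))

def encode(seq):
    """seq: (T, J, 2) 2D joint positions -> (motion, skeleton, view) codes."""
    x = seq.reshape(T, -1).mean(axis=0)  # crude temporal pooling for the sketch
    return W_m @ x, W_s @ x, W_v @ x

def decode(motion, skeleton, view):
    """Recombine latent codes into a (T, J, 2) pose sequence."""
    z = np.concatenate([motion, skeleton, view])
    return np.tile(W_dec @ z, (T, 1)).reshape(T, J, 2)

# Retargeting: motion latent from performer A, skeleton latent from
# performer B, view latent from A (any mix is possible).
seq_a = rng.standard_normal((T, J, 2))
seq_b = rng.standard_normal((T, J, 2))
m_a, _, v_a = encode(seq_a)
_, s_b, _ = encode(seq_b)
retargeted = decode(m_a, s_b, v_a)
```

In the actual method the motion code is a time-varying sequence and the networks are trained so that swapping skeleton or view codes yields a plausible retargeted sequence, supervised against ground truth from a synthetic dataset; the sketch only mirrors the shape of that interface.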
Video: "[SIGGRAPH 2019] Learning Character-Agnostic Motion for Motion Retargeting in 2D", from the channel kfir aberman.
Other videos on this channel:
MyStyle: A Personalized Generative Prior
Deep Video-Based Performance Cloning
[SIGGRAPH 2020 Fast-Forward] Unpaired Motion Style Transfer from Video to Animation
[SIGGRAPH 2018] Neural Best-Buddies: Sparse Cross-Domain Correspondence
[SIGGRAPH 2020 Fast-Forward] Skeleton-Aware Networks for Deep Motion Retargeting
[SIGGRAPH 2020] Skeleton-Aware Networks for Deep Motion Retargeting
[SIGGRAPH 2021] Learning Skeletal Articulations with Neural Blend Shapes
[SIGGRAPH 2020] Unpaired Motion Style Transfer from Video to Animation
MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency [ToG 2020]