
Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion

ACM Transactions on Graphics (Proc. SIGGRAPH 2017)

http://research.nvidia.com/publication/2017-07_Audio-Driven-Facial-Animation

Tero Karras (NVIDIA)
Timo Aila (NVIDIA)
Samuli Laine (NVIDIA)
Antti Herva (Remedy Entertainment)
Jaakko Lehtinen (NVIDIA and Aalto University)

We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.
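To make the idea concrete, here is a minimal PyTorch sketch of such a network, not the authors' exact architecture: a convolutional stack summarizes a short window of audio features, a latent emotion vector is concatenated to that summary, and fully connected layers emit per-vertex 3D positions. All layer counts, sizes, and names below are illustrative assumptions.

import torch
import torch.nn as nn

class AudioToFace(nn.Module):
    def __init__(self, n_audio_features=32, emotion_dim=16, n_vertices=5000):
        super().__init__()
        self.n_vertices = n_vertices
        # Convolutions over the time axis summarize the audio window.
        self.audio_net = nn.Sequential(
            nn.Conv1d(n_audio_features, 72, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(72, 108, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(108, 162, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the remaining time steps
        )
        # Fully connected layers map audio summary + emotion code to vertices.
        self.output_net = nn.Sequential(
            nn.Linear(162 + emotion_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * n_vertices),  # (x, y, z) per vertex
        )

    def forward(self, audio_window, emotion_code):
        # audio_window: (batch, n_audio_features, n_frames)
        # emotion_code: (batch, emotion_dim), the latent that disambiguates
        # expression variation the audio alone cannot explain
        h = self.audio_net(audio_window).squeeze(-1)
        h = torch.cat([h, emotion_code], dim=1)
        return self.output_net(h).view(-1, self.n_vertices, 3)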

We train our network with 3-5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence.
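Building on the sketch above, the following hypothetical training loop illustrates the joint learning: one latent emotion vector per training clip is optimized together with the network weights, so at inference time any learned (or interpolated) vector can serve as the emotion control. The dataset, loss, and hyperparameters are assumptions, not the authors' setup.

import torch
import torch.nn as nn

model = AudioToFace()  # the sketch network defined above
n_clips = 100          # illustrative number of training clips
emotion_codes = nn.Parameter(torch.zeros(n_clips, 16))  # one latent per clip
optimizer = torch.optim.Adam(
    list(model.parameters()) + [emotion_codes], lr=1e-4)

# Hypothetical stand-in data: (audio window, target vertices, clip index).
dataset = [(torch.randn(1, 32, 64), torch.randn(1, 5000, 3), i % n_clips)
           for i in range(8)]

for audio, target_vertices, clip_idx in dataset:
    pred = model(audio, emotion_codes[clip_idx].unsqueeze(0))
    loss = ((pred - target_vertices) ** 2).mean()  # simple per-vertex L2 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: new audio plus any learned emotion code drives the face puppet.
with torch.no_grad():
    vertices = model(torch.randn(1, 32, 64), emotion_codes[3].unsqueeze(0))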

Video: Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion, from the channel Tero Karras FI
Video information
Uploaded: November 1, 2017, 12:58:10
Duration: 00:05:37