ACM MM 2023

SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces

Ziqiao Peng1, Yihao Luo2,3, Yue Shi3, Hao Xu3,4, Xiangyu Zhu5
Hongyan Liu6, Jun He1, Zhaoxin Fan1,3
1Renmin University of China, 2Imperial College London, 3Psyche AI Inc., 4The Hong Kong University of Science and Technology, 5State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA, 6Tsinghua University

SelfTalk utilizes a cross-modal network system to generate coherent and visually comprehensible 3D talking faces by reducing the domain gap between different modalities.

Abstract

Speech-driven 3D face animation is an important technique whose applications extend to various multimedia fields. Previous research has generated promising, realistic lip movements and facial expressions from audio signals. However, traditional regression models driven solely by data face several essential problems, such as difficulty in obtaining precise labels and domain gaps between different modalities, which lead to unsatisfactory results lacking precision and coherence.

To enhance the visual accuracy of generated lip movements while reducing the dependence on labeled data, we propose SelfTalk, a novel framework that introduces self-supervision into a cross-modal network system to learn 3D talking faces. The framework constructs a network system consisting of three modules: a facial animator, a speech recognizer, and a lip-reading interpreter. The core of SelfTalk is a commutative training diagram that facilitates the exchange of compatible features among audio, text, and lip shape, enabling our models to learn the intricate connections between these factors. The proposed framework leverages the knowledge learned by the lip-reading interpreter to generate more plausible lip shapes. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. We recommend watching the supplementary video.



Proposed Method



Overview of the proposed SelfTalk. We construct a commutative training diagram consisting of three modules: a facial animator, a speech recognizer, and a lip-reading interpreter. Specifically, given an input audio signal, the facial animator module generates the corresponding facial animation, which constitutes the core component of our framework. The speech recognizer produces the corresponding text and uses it as a ground-truth label for supervision. Lastly, the lip-reading interpreter interprets the lip movements, produces a text distribution, and establishes a constraint against the label from the speech recognizer.
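To make the commutative training diagram concrete, below is a minimal, hypothetical PyTorch sketch of the idea: the lip-reading interpreter decodes a text distribution from the predicted facial animation, and that distribution is constrained to agree with the speech recognizer's output while the animation itself is supervised by ground-truth vertices. Module names, dimensions, and the choice of a KL-based consistency term are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of SelfTalk's commutative training idea (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FacialAnimator(nn.Module):
    """Maps per-frame audio features to per-frame face vertex offsets."""
    def __init__(self, audio_dim=768, vertex_dim=5023 * 3, hidden=512):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vertex_dim)

    def forward(self, audio_feats):           # (B, T, audio_dim)
        h, _ = self.encoder(audio_feats)
        return self.head(h)                   # (B, T, vertex_dim)

class LipReadingInterpreter(nn.Module):
    """Decodes a per-frame text (character) distribution from lip motion."""
    def __init__(self, vertex_dim=5023 * 3, hidden=512, vocab=32):
        super().__init__()
        self.encoder = nn.GRU(vertex_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, vertices):               # (B, T, vertex_dim)
        h, _ = self.encoder(vertices)
        return self.head(h)                    # (B, T, vocab) logits

def commutative_loss(pred_vertices, gt_vertices, lip_reader, asr_logits):
    """Vertex reconstruction loss plus a text-consistency term: the distribution
    read from the predicted lips should match the speech recognizer's output."""
    recon = F.mse_loss(pred_vertices, gt_vertices)
    lip_logits = lip_reader(pred_vertices)
    consistency = F.kl_div(
        F.log_softmax(lip_logits, dim=-1),
        F.softmax(asr_logits, dim=-1),
        reduction="batchmean",
    )
    return recon + consistency
```

In this sketch the speech recognizer's logits act as the supervision signal for the lip-reading branch, so no extra text labels are required; both the weighting of the two terms and the recognizer itself (e.g. a frozen pretrained ASR model) are left as assumptions.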

BibTeX


      @inproceedings{peng2023selftalk,
        title={SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces},
        author={Ziqiao Peng and Yihao Luo and Yue Shi and Hao Xu and Xiangyu Zhu and Hongyan Liu and Jun He and Zhaoxin Fan},
        booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
        pages={5292--5301},
        doi={10.1145/3581783.3611734},
        year={2023}
      }