SyncTalk++: High-Fidelity and Efficient Synchronized Talking Heads Synthesis Using Gaussian Splatting

Ziqiao Peng1, Wentao Hu2, Junyuan Ma3, Xiangyu Zhu3, Xiaomei Zhang3, Hao Zhao4, Hui Tian2,
Jun He1, Hongyan Liu4*, Zhaoxin Fan5*
1Renmin University of China, 2Beijing University of Posts and Telecommunications, 3Chinese Academy of Sciences, 4Tsinghua University, 5Beihang University

SyncTalk++ synthesizes synchronized talking head videos, employing Gaussian Splatting to maintain subject identity. It generates synchronized lip movements, facial expressions, and stable head poses, and restores hair details to produce high-resolution videos.

Abstract

Achieving high synchronization in the synthesis of realistic, speech-driven talking head videos presents a significant challenge. A lifelike talking head requires synchronized coordination of subject identity, lip movements, facial expressions, and head poses. The absence of these synchronizations is a fundamental flaw, leading to unrealistic results.

To address the critical issue of synchronization, identified as the “devil” in creating realistic talking heads, we introduce SyncTalk++. It features a Dynamic Portrait Renderer with Gaussian Splatting to consistently preserve subject identity, and a FaceSync Controller that aligns lip movements with speech while using a 3D facial blendshape model to reconstruct accurate facial expressions. To ensure natural head movements, we propose a Head-Sync Stabilizer, which optimizes head poses for greater stability. Additionally, SyncTalk++ enhances robustness to out-of-distribution (OOD) audio by incorporating an Expression Generator and a Torso Restorer, which generate speech-matched facial expressions and seamless torso regions. Our approach maintains consistency and continuity of visual details across frames and significantly improves rendering speed and quality, achieving up to 101 frames per second. Extensive experiments and user studies demonstrate that SyncTalk++ outperforms state-of-the-art methods in synchronization and realism.
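For a concrete picture of how these modules fit together, the minimal PyTorch sketch below wires up the stages named in the abstract. All class and attribute names (SyncTalkPipeline, face_sync, head_sync, renderer) are hypothetical placeholders for illustration, not the authors' released API.

# Minimal sketch (assumed names, not the authors' code) of the SyncTalk++
# inference flow: two synchronization modules feed a Gaussian-Splatting renderer.
import torch
import torch.nn as nn


class SyncTalkPipeline(nn.Module):
    """Hypothetical wrapper; the three submodules stand in for the paper's
    FaceSync Controller, Head-Sync Stabilizer, and Dynamic Portrait Renderer."""

    def __init__(self, face_sync: nn.Module, head_sync: nn.Module,
                 renderer: nn.Module):
        super().__init__()
        self.face_sync = face_sync  # audio -> lip + expression features
        self.head_sync = head_sync  # reference frames -> stabilized head poses
        self.renderer = renderer    # Gaussian-Splatting portrait renderer

    @torch.no_grad()
    def forward(self, audio_feats, ref_frames):
        lip_feat, expr_feat = self.face_sync(audio_feats)  # synced with speech
        head_pose = self.head_sync(ref_frames)             # stabilized poses
        # Deform and rasterize the 3D Gaussians, one frame per audio window.
        return self.renderer(lip_feat, expr_feat, head_pose)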



Proposed Method



Overview of SyncTalk++. Given a cropped reference video of a talking head and the corresponding speech, SyncTalk++ extracts the lip feature, expression feature, and head pose through two synchronization modules, the FaceSync Controller and the Head-Sync Stabilizer. Gaussian Splatting is then used to model and deform the head, producing a talking head video. For out-of-distribution audio, the Expression Generator and the Torso Restorer generate speech-matched facial expressions and repair artifacts at head-torso junctions.
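
As a rough illustration of the deformation step, the sketch below uses a small MLP to predict per-Gaussian offsets (position, rotation, scale) from a per-frame conditioning feature. The class name GaussianDeformer, the feature dimension, and the output layout are assumptions for illustration, not taken from the paper.

# Hypothetical sketch of conditioning the 3D Gaussians on the per-frame
# lip/expression feature; names and dimensions are assumptions.
import torch
import torch.nn as nn


class GaussianDeformer(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        # 10 outputs per Gaussian: 3 position + 4 rotation (quaternion) + 3 scale
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 10),
        )

    def forward(self, xyz, cond):
        # xyz: (N, 3) canonical Gaussian centers
        # cond: (feat_dim,) per-frame feature (e.g. lip + expression embedding)
        cond = cond.expand(xyz.shape[0], -1)          # broadcast to all Gaussians
        d = self.mlp(torch.cat([xyz, cond], dim=-1))  # (N, 10)
        d_xyz, d_rot, d_scale = d.split([3, 4, 3], dim=-1)
        return d_xyz, d_rot, d_scale                  # offsets in canonical space


# Usage sketch: predict deformations for 10k Gaussians in one frame.
deformer = GaussianDeformer()
d_xyz, d_rot, d_scale = deformer(torch.randn(10_000, 3), torch.randn(128))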

BibTeX


  @article{peng2024synctalk++,
    title={SyncTalk++: High-Fidelity and Efficient Synchronized Talking Heads Synthesis Using Gaussian Splatting}, 
    author={Ziqiao Peng and Wentao Hu and Junyuan Ma and Xiangyu Zhu and Xiaomei Zhang and Hao Zhao and Hui Tian and Jun He and Hongyan Liu and Zhaoxin Fan},
    year={2024}
  }