Achieving high synchronization in the synthesis of realistic,
speech-driven talking head videos presents a significant challenge.
A lifelike talking head requires synchronized coordination of subject identity,
lip movements, facial expressions, and head poses. The absence of this
synchronization is a fundamental flaw that leads to unrealistic results.
To address the critical issue of synchronization, identified as
the “devil” in creating realistic talking heads, we introduce SyncTalk++,
which features a Dynamic Portrait Renderer with Gaussian Splatting to ensure
consistent subject identity preservation and a FaceSync Controller that aligns
lip movements with speech while innovatively using a 3D facial blendshape model
to reconstruct accurate facial expressions. To ensure natural head movements,
we propose a Head-Sync Stabilizer, which optimizes head poses for greater stability.
Additionally, SyncTalk++ enhances robustness to out-of-distribution (OOD) audio by
incorporating an Expression Generator and a Torso Restorer, which generate speech-matched
facial expressions and seamless torso regions. Our approach maintains consistency
and continuity in visual details across frames and significantly improves rendering
speed and quality, achieving up to 101 frames per second. Extensive experiments and
user studies demonstrate that SyncTalk++ outperforms state-of-the-art methods in
synchronization and realism.
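For background, the Dynamic Portrait Renderer builds on 3D Gaussian Splatting, which represents the head as a set of anisotropic 3D Gaussians and alpha-composites their projections per pixel. The standard compositing rule from the 3DGS literature (not specific to SyncTalk++) is:

```latex
% Standard 3D Gaussian Splatting compositing rule: C is the pixel color,
% c_i and \alpha_i are the color and projected opacity of the i-th
% Gaussian in front-to-back depth order.
C = \sum_{i=1}^{N} c_i \, \alpha_i \prod_{j=1}^{i-1} \left( 1 - \alpha_j \right)
```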
Overview of SyncTalk++. Given a cropped reference video of a talking head and the corresponding speech,
SyncTalk++ can extract the Lip Feature, Expression Feature, and Head Pose through
two synchronization modules, the FaceSync Controller and the Head-Sync Stabilizer.
Then, Gaussian Splatting is used to model and deform the head, producing a talking
head video. For out-of-distribution (OOD) audio, the Expression Generator and Torso
Restorer can generate speech-matched facial expressions and repair artifacts at
head-torso junctions.
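To make the data flow above concrete, here is a minimal sketch of the inference pipeline: two synchronization modules produce the driving signals, and a Gaussian renderer turns them into frames. Every class, method, and tensor shape below is a hypothetical placeholder for illustration, not the released code's API.

```python
# Minimal sketch of the SyncTalk++ inference flow described in the overview;
# all class and method names here are hypothetical, not the actual API.
import numpy as np

class FaceSyncController:
    """Stand-in for the audio-visual sync module: maps speech to per-frame
    lip features and blendshape-based expression features."""
    def extract(self, audio: np.ndarray, n_frames: int):
        lip = np.zeros((n_frames, 32))    # placeholder lip features
        expr = np.zeros((n_frames, 52))   # placeholder blendshape weights
        return lip, expr

class HeadSyncStabilizer:
    """Stand-in for the head-pose module: estimates and smooths a rigid
    head pose for every frame of the reference video."""
    def extract(self, frames):
        return [np.eye(4) for _ in frames]  # placeholder 4x4 rigid poses

class GaussianHeadRenderer:
    """Stand-in for the Dynamic Portrait Renderer: deforms a 3D Gaussian
    head model with the driving features and rasterizes one frame."""
    def render(self, lip, expr, pose):
        return np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder image

def synthesize(frames, audio):
    controller = FaceSyncController()
    stabilizer = HeadSyncStabilizer()
    renderer = GaussianHeadRenderer()
    lip, expr = controller.extract(audio, len(frames))
    poses = stabilizer.extract(frames)
    # One rendered frame per reference frame, driven by the synced features.
    return [renderer.render(l, e, p) for l, e, p in zip(lip, expr, poses)]
```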
@article{peng2024synctalk++,
  title={SyncTalk++: High-Fidelity and Efficient Synchronized Talking Heads Synthesis Using Gaussian Splatting},
  author={Ziqiao Peng and Wentao Hu and Junyuan Ma and Xiangyu Zhu and Xiaomei Zhang and Hao Zhao and Hui Tian and Jun He and Hongyan Liu and Zhaoxin Fan},
  year={2024}
}