Ziqiao Peng

I am a third-year Ph.D. candidate at Renmin University of China, supervised by Prof. Jun He (Renmin University of China) and Prof. Hongyan Liu (Tsinghua University).

I am actively seeking internship opportunities that align with my research interests. If you know of any openings or have recommendations, I would greatly appreciate hearing from you.

My areas of focus include AI-generated content, talking head generation, and video generation.

Email  /  GitHub

Research
[arXiv 2025] OmniSync: Towards Universal Lip Synchronization via Diffusion Transformers
Ziqiao Peng, Jiwen Liu, Haoxian Zhang, Xiaoqiang Liu, Songlin Tang, Pengfei Wan, Di Zhang, Hongyan Liu, Jun He
Project / arXiv

We present OmniSync, a universal lip synchronization framework for diverse visual scenarios.

[CVPR 2025] DualTalk: Dual-Speaker Interaction for 3D Talking Head Conversations
Ziqiao Peng, Yanbo Fan, Haoyu Wu, Xuan Wang, Hongyan Liu, Jun He, Zhaoxin Fan
Project / arXiv

We propose a new task -- multi-round dual-speaker interaction for 3D talking head generation -- which requires models to handle and generate both speaking and listening behaviors in continuous conversation.

SyncTalk++: High-Fidelity and Efficient Synchronized Talking Heads Synthesis Using Gaussian Splatting
Ziqiao Peng, Wentao Hu, Junyuan Ma, Xiangyu Zhu, Xiaomei Zhang, Hao Zhao, Hui Tian, Jun He, Hongyan Liu, Zhaoxin Fan
Project / arXiv

We propose a 3D Gaussian Splatting (3DGS)-based method to synthesize realistic talking head videos with improved out-of-distribution (OOD) quality.

[CVPR 2024] SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis
Ziqiao Peng, Wentao Hu, Yue Shi, Xiangyu Zhu, Xiaomei Zhang, Hao Zhao, Jun He, Hongyan Liu, Zhaoxin Fan
Project / arXiv / Code

We propose a NeRF-based method to synthesize realistic talking head videos.

[ACM MM 2023] SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces
Ziqiao Peng, Yihao Luo, Yue Shi, Hao Xu, Xiangyu Zhu, Hongyan Liu, Jun He, Zhaoxin Fan
Project / arXiv / Code

We propose SelfTalk, a novel framework that uses a cross-modal network system to generate coherent and visually comprehensible 3D talking faces by reducing the domain gap between different modalities.

[ICCV 2023] EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
Ziqiao Peng, Haoyu Wu, Zhenbo Song, Hao Xu, Xiangyu Zhu, Hongyan Liu, Jun He, Zhaoxin Fan
Project / arXiv / Code

We propose an end-to-end neural network for speech-driven emotion-enhanced 3D facial animation.

Review Service
Conferences: CVPR, NeurIPS, ICCV, ECCV, ACM MM, ICME, Eurographics
Journals: IJCV, TIP, TMM, TOMM, IET Image Processing, IET Computer Vision

Last updated: June 2025
Web page design credit to Jon Barron