1. Multimodal Semantic Communication for Generative Audio-Driven Video Conferencing
- Authors
Haonan Tong, Haopeng Li, Hongyang Du, Zhaohui Yang, Changchuan Yin, and Dusit Niyato
- Subjects
Computer Science - Multimedia
- Abstract
This paper studies an efficient multimodal data communication scheme for video conferencing. In the considered system, a speaker gives a talk to the audience, with the talking-head video and audio being transmitted. Since the speaker does not frequently change posture, while high-fidelity transmission of the audio (speech and music) is required, the visual stream contains redundancy that can be removed by generating the video from the audio. To this end, the authors propose a wave-to-video (Wav2Vid) system, an efficient video transmission framework that reduces the transmitted data by generating the talking-head video from the audio. In particular, full-duration audio and short-duration video data are synchronously transmitted over a wireless channel, with neural networks (NNs) extracting and encoding the audio and video semantics. The receiver combines the decoded audio and video data and uses a generative adversarial network (GAN) based model to generate the lip-movement video of the speaker. Simulation results show that the proposed Wav2Vid system can reduce the amount of transmitted data by up to 83% while maintaining the perceptual quality of the generated conferencing video.
- Comment
Accepted by IEEE Wireless Communications Letters
- Published
2024
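
The core bandwidth argument in the abstract above is that full-duration audio plus only a short reference video clip is transmitted, while the remaining talking-head frames are generated at the receiver. The following minimal sketch illustrates that accounting only; the bitrates, segment length, and transmitted-video fraction are placeholder assumptions, not values from the paper, and the reported 83% figure is not reproduced here.

```python
# Illustrative bandwidth accounting for a Wav2Vid-style scheme.
# All numbers below are hypothetical assumptions for the sketch.

def transmitted_bits(duration_s: float,
                     audio_kbps: float,
                     video_kbps: float,
                     video_fraction: float) -> float:
    """Bits sent when full-duration audio but only a fraction of the
    video (a short reference clip) is transmitted; the rest of the
    talking-head video is generated at the receiver from the audio."""
    audio_bits = duration_s * audio_kbps * 1e3
    video_bits = duration_s * video_fraction * video_kbps * 1e3
    return audio_bits + video_bits

duration_s = 60.0          # hypothetical talk segment length (seconds)
audio_kbps = 64.0          # hypothetical audio-stream bitrate
video_kbps = 500.0         # hypothetical video-stream bitrate
short_clip_fraction = 0.1  # hypothetical share of video actually sent

baseline = transmitted_bits(duration_s, audio_kbps, video_kbps, 1.0)
wav2vid = transmitted_bits(duration_s, audio_kbps, video_kbps, short_clip_fraction)

reduction = 1.0 - wav2vid / baseline
print(f"Transmitted data reduced by {reduction:.0%} under these assumptions")
```

Under these placeholder numbers the saving comes out around 80%; the achievable reduction in practice depends on how much of the video can be regenerated from the audio without degrading perceptual quality.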