
ViDA-MAN: Visual Dialog with Digital Humans

Authors:
Shen, Tong
Zuo, Jiawei
Shi, Fan
Zhang, Jin
Jiang, Liqin
Chen, Meng
Zhang, Zhengchen
Zhang, Wei
He, Xiaodong
Mei, Tao
Publication Year:
2021

Abstract

We demonstrate ViDA-MAN, a digital-human agent for multi-modal interaction that offers real-time audio-visual responses to instant speech inquiries. Compared to traditional text- or voice-based systems, ViDA-MAN offers human-like interactions (e.g., vivid voice, natural facial expressions, and body gestures). Given a speech request, the demonstration is able to respond with high-quality video at sub-second latency. To deliver an immersive user experience, ViDA-MAN seamlessly integrates multi-modal techniques including Acoustic Speech Recognition (ASR), multi-turn dialog, Text To Speech (TTS), and talking-head video generation. Backed by a large knowledge base, ViDA-MAN is able to chat with users on a number of topics including chit-chat, weather, device control, news recommendations, and hotel booking, as well as answering questions over structured knowledge.
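The end-to-end flow described above (speech in, audio-visual response out) can be sketched as a simple pipeline. All function names below are illustrative stubs invented for this sketch, not the authors' actual API; each real stage would be backed by its own model or service.

```python
# Hypothetical sketch of the ViDA-MAN request flow from the abstract:
# ASR -> multi-turn dialog -> TTS -> talking-head video generation.
# Every function here is a placeholder stub, not the real implementation.

def asr(audio: bytes) -> str:
    """Acoustic Speech Recognition: transcribe the user's speech (stub)."""
    return "what is the weather today"

def dialog(text: str, history: list) -> str:
    """Multi-turn dialog: produce a reply given the transcript and history (stub)."""
    history.append(text)  # keep context for later turns
    return f"Response to: {text}"

def tts(text: str) -> bytes:
    """Text To Speech: synthesize audio for the reply (stub)."""
    return text.encode("utf-8")

def render_talking_head(audio: bytes) -> list:
    """Talking-head generation: render video frames synced to the audio (stub)."""
    return [f"frame_{i}" for i in range(3)]

def handle_request(audio: bytes, history: list) -> list:
    """End-to-end: chain the four stages into one audio-visual response."""
    text = asr(audio)
    reply = dialog(text, history)
    speech = tts(reply)
    return render_talking_head(speech)

history = []
frames = handle_request(b"...", history)
print(frames)
```

In the demonstrated system these stages run with sub-second total latency, which in practice implies streaming each stage's output into the next rather than the sequential calls shown here.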

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.13384
Document Type:
Working Paper