
Real-Time Multimodal Human–Avatar Interaction.

Authors :
Fu, Yun
Li, Renxiang
Huang, Thomas S.
Danielsen, Mike
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Apr 2008, Vol. 18 Issue 4, p467-477. 11p. 11 Diagrams, 1 Chart.
Publication Year :
2008

Abstract

This paper presents a novel real-time multimodal human-avatar interaction (RTM-HAI) framework with vision-based remote animation control (RAC). The framework is designed for both mobile and desktop avatar-based human-machine or human-human visual communications in real-world scenarios. Using 3-D components stored in the Java Mobile 3D Graphics (M3G) file format, the avatar models can be flexibly constructed and customized on the fly on any mobile device or system that supports the M3G standard. For the RAC head tracker, we propose a 2-D real-time face detection/tracking strategy based on an interactive loop, in which detection and tracking complement each other for efficient and reliable face localization that tolerates extreme user movement. With the face location robustly tracked, the RAC head tracker selects a main user and estimates the user's head rolling, tilting, yawing, scaling, horizontal, and vertical motion in order to generate avatar animation parameters. The animation parameters can be used either locally or remotely and can be transmitted through a socket over the network. In addition, the framework integrates audio-visual analysis and synthesis modules to realize multichannel and runtime animations, visual text-to-speech (TTS), and real-time viseme detection and rendering. The framework is an effective design for future realistic industrial products such as humanoid kiosks and human-to-human mobile communication. [ABSTRACT FROM AUTHOR]
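The complementary detection/tracking loop described in the abstract can be sketched in miniature. This is an illustrative reconstruction, not the authors' implementation: `detect_face` and `track_face` are hypothetical stand-ins (here operating on toy frame dictionaries rather than images), and the 20-pixel search window is an assumed parameter. The point is the control flow: track while the local tracker stays confident, and fall back to full-frame detection whenever tracking fails, e.g. under extreme user movement.

```python
# Toy sketch (assumed names and parameters) of a detection/tracking loop in
# which detection re-initializes the tracker whenever tracking is lost.

def detect_face(frame):
    """Stand-in full-frame detector: returns a bounding box or None."""
    return frame.get("face")  # frames are plain dicts in this toy example


def track_face(frame, prev_box):
    """Stand-in local tracker: succeeds only near the previous location."""
    box = frame.get("face")
    if box is None or prev_box is None:
        return None
    # Accept only a small displacement, mimicking a local search window
    # (the 20-pixel threshold is an illustrative assumption).
    if abs(box[0] - prev_box[0]) <= 20 and abs(box[1] - prev_box[1]) <= 20:
        return box
    return None


def localize_faces(frames):
    """Interactive loop: track while confident; on tracker failure
    (e.g. a large jump in position), fall back to detection."""
    prev_box = None
    results = []
    for frame in frames:
        box = track_face(frame, prev_box) if prev_box else None
        if box is None:  # tracker lost the face -> re-detect
            box = detect_face(frame)
        prev_box = box
        results.append(box)
    return results
```

In a real system the tracker runs on cheap local features between periodic or failure-triggered detector calls, which is what makes the combination both efficient and robust.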

Details

Language :
English
ISSN :
1051-8215
Volume :
18
Issue :
4
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
32867084
Full Text :
https://doi.org/10.1109/TCSVT.2008.918441