1. Multimodal human attention detection for reading from facial expression, eye gaze, and mouse dynamics
- Author
- Hong Va Leong, Grace Ngai, Jiajia Li, and Stephen C. F. Chan
- Subjects
- Affective computing, Multimodal interaction, Human–computer interaction, Facial expression, Eye tracking, Reading (process), Distraction, User experience design, Multimedia, Computer science
- Abstract
Affective computing has recently become an important area in human-computer interaction research. Techniques have been developed to enable computers to understand human affects or emotions, in order to predict human intention more precisely and provide better service to users to enhance the user experience. In this paper, we investigate the detection of human attention level as a useful form of human affect, which could be influential in intelligent e-learning applications. We adopt ubiquitous hardware available in most computer systems, namely, the webcam and the mouse. Information from multiple input modalities is fused together for effective human attention detection. We invite human subjects to carry out experiments in reading articles while being subjected to different kinds of distraction, so as to induce different levels of attention. Machine-learning techniques are applied to identify useful features for recognizing human attention level by building user-independent models. Our results indicate a performance improvement with multimodal inputs from webcam and mouse over that of a single device. We believe that our work has revealed an interesting affective computing direction with potential applications in e-learning.
- Published
- 2016
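
The abstract describes feature-level fusion of webcam-derived signals (facial expression, eye gaze) and mouse dynamics, evaluated with user-independent models, but the record carries no code. The following is a minimal sketch of that general pipeline, assuming scikit-learn; the feature groups, dimensions, random-forest classifier, and leave-one-subject-out evaluation are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch only: feature names, dimensions, model choice, and
# data are assumptions, not the paper's actual implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_windows = 300        # hypothetical time windows of reading activity
n_face_feats = 12      # e.g. facial-expression descriptors from the webcam
n_gaze_feats = 6       # e.g. fixation duration, saccade rate
n_mouse_feats = 8      # e.g. cursor speed, idle time, click rate

# Placeholder features; a real system would extract these from the
# webcam video stream and the mouse event log.
X_face = rng.normal(size=(n_windows, n_face_feats))
X_gaze = rng.normal(size=(n_windows, n_gaze_feats))
X_mouse = rng.normal(size=(n_windows, n_mouse_feats))

# Feature-level fusion: concatenate per-window feature vectors from
# all modalities into a single input vector.
X_fused = np.hstack([X_face, X_gaze, X_mouse])

y = rng.integers(0, 2, size=n_windows)           # attention-level labels
subjects = rng.integers(0, 10, size=n_windows)   # subject ID per window

# User-independent evaluation: each fold holds out all windows belonging
# to one subject, so the model is never tested on a subject it saw in training.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_fused, y, groups=subjects,
                         cv=LeaveOneGroupOut())
print(f"mean leave-one-subject-out accuracy: {scores.mean():.3f}")
```

Comparing this fused model against classifiers trained on each modality's feature block alone would reproduce, in spirit, the single-device versus multimodal comparison the abstract reports.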