
Exploring the relationship between children's facial emotion processing characteristics and speech communication ability using deep learning on eye tracking and speech performance measures.

Authors :
Yang, Jingwen
Chen, Zelin
Qiu, Guoxin
Li, Xiangyu
Li, Caixia
Yang, Kexin
Chen, Zhuanggui
Gao, Leyan
Lu, Shuo
Source :
Computer Speech & Language. Nov 2022, Vol. 76.
Publication Year :
2022

Abstract

• Facial emotion recognition (FER) is associated with multiple speech communication disorders (SCD).
• Based on a language evaluation and an eye-tracking experiment, strong and detailed correlations were found between different dimensions of speech communication ability and various eye-movement patterns.
• A machine-learning-based SCD prediction model was designed to screen for SCD (accuracy as high as 88.9%).
• A group of FER gazing patterns was found to be highly sensitive to the likelihood of SCD in children.

Efficient facial emotion recognition (FER) plays a significant role in successful human communication and is closely associated with multiple speech communication disorders (SCD) in children. Despite this relevance, little is known about how speech communication abilities (SCA) and FER are correlated, or about the mechanism underlying that relationship. To address this, we monitored the eye movements of 115 children while they watched human faces showing different emotions, and designed a machine-learning-based SCD prediction model to explore the underlying patterns of eye movement during the FER task as well as their correlation with SCA. Strong and detailed correlations were found between different dimensions of SCA and various eye-movement features. A group of FER gazing patterns was found to be highly sensitive to the likelihood of SCD in children. The SCD prediction model reached an accuracy as high as 88.9%, offering a possible technique for rapid SCD screening in children. [ABSTRACT FROM AUTHOR]
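The abstract describes a machine-learning model that predicts SCD risk from eye-movement features recorded during an FER task, but the record does not specify the model family or feature set. As a minimal sketch of that screening setup, the following trains a simple logistic-regression classifier on fully synthetic data; the feature names, label rule, and cohort labels are illustrative assumptions, not the authors' published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 115 children (the cohort size in the abstract),
# four illustrative eye-movement features (e.g. fixation time on the eye
# region, fixation time on the mouth, saccade count, first-fixation latency).
# The paper's actual features and labels are not given in this record.
n = 115
X = rng.normal(size=(n, 4))
# Illustrative binary label: 1 = screened positive for SCD.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic-regression screener trained by batch gradient descent on log-loss.
Xb = np.hstack([X, np.ones((n, 1))])  # append a bias column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted probability of SCD
    w -= 0.1 * Xb.T @ (p - y) / n       # gradient step

pred = (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(float)
train_acc = (pred == y).mean()
print(f"training accuracy: {train_acc:.3f}")
```

In practice such a screener would be evaluated with held-out or cross-validated data rather than training accuracy, which is how a figure like the reported 88.9% would be obtained.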

Details

Language :
English
ISSN :
0885-2308
Volume :
76
Database :
Academic Search Index
Journal :
Computer Speech & Language
Publication Type :
Academic Journal
Accession number :
157301345
Full Text :
https://doi.org/10.1016/j.csl.2022.101389