Audiovisual speech perception: Moving beyond McGurk
- Authors
Kristin J. Van Engen, Mitchell S. Sommers, Jonathan E. Peelle, and Avanti Dey
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Speech recognition, Perception, Audiovisual speech, Special Issue on Reconsidering Classic Ideas in Speech Communication, Psychology
- Abstract
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of active debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based solely on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences in susceptibility are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we argue that McGurk tasks are ill-suited for studying the kind of multisensory speech perception that occurs in real life: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility on McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and stories with congruent auditory and visual speech cues.
- Published
- 2022