The interaction of acoustic and linguistic grouping cues in auditory object formation
- Authors
Thomas D. Carrell and Kathy Shapley
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Context effect, Computer science, Active listening, Intelligibility (communication), Linguistics, Sentence
- Abstract
One of the earliest explanations for good speech intelligibility in poor listening situations was context [Miller et al., J. Exp. Psychol. 41 (1951)]. Context presumably allows listeners to group and predict speech appropriately and is known as a top‐down listening strategy. Amplitude comodulation is another mechanism that has been shown to improve sentence intelligibility. Amplitude comodulation provides acoustic grouping information without changing the linguistic content of the desired signal [Carrell and Opie, Percept. Psychophys. 52 (1992); Hu and Wang, Proceedings of ICASSP‐02 (2002)] and is considered a bottom‐up process. The present experiment investigated how amplitude comodulation and semantic information combine to improve speech intelligibility. Sentences with high‐ and low‐predictability word sequences [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84 (1988)] were constructed in two different formats: time‐varying sinusoidal sentences (TVS) and reduced‐channel sentences (RC). These stimuli were chosen because they minimally represent the traditionally defined speech cues and therefore emphasize the importance of high‐level context effects and low‐level acoustic grouping cues. Results indicated that semantic information did not influence intelligibility levels of TVS and RC sentences. In addition, amplitude comodulation aided listeners’ intelligibility scores in the TVS condition but hindered listeners’ intelligibility scores in the RC condition.
- Published
- 2005