Search Results
12 results for "McGettigan, Carolyn"
2. Familiarity and task context shape the use of acoustic information in voice identity perception
- Author
- Lavan, Nadine, Kreitewolf, Jens, Obleser, Jonas, and McGettigan, Carolyn
- Published
- 2021
3. The effects of high variability training on voice identity learning
- Author
- Lavan, Nadine, Knight, Sarah, Hazan, Valerie, and McGettigan, Carolyn
- Published
- 2019
4. Impoverished encoding of speaker identity in spontaneous laughter.
- Author
- Lavan, Nadine, Short, Bethanie, Wilding, Amy, and McGettigan, Carolyn
- Subjects
- HUMAN voice, LAUGHTER, HUMAN sounds, EMOTIONS, COMMUNICATION methodology
- Abstract
Our ability to perceive person identity from other human voices has been described as prodigious. However, emerging evidence points to limitations in this skill. In this study, we investigated the recent and striking finding that identity perception from spontaneous laughter - a frequently occurring and important social signal in human vocal communication - is significantly impaired relative to identity perception from volitional (acted) laughter. We report the findings of an experiment in which listeners made speaker discrimination judgements from pairs of volitional and spontaneous laughter samples. The experimental design employed a range of different conditions, designed to disentangle the effects of laughter production mode versus perceptual features on the extraction of speaker identity. We find that the major driving factor of reduced accuracy for spontaneous laughter is not its perceived emotional quality but rather its distinct production mode, which is phylogenetically homologous with other primates. These results suggest that identity-related information is less successfully encoded in spontaneously produced (laughter) vocalisations. We therefore propose that claims for a limitless human capacity to process identity-related information from voices may be linked to the evolution of volitional vocal control and the emergence of articulate speech. [ABSTRACT FROM AUTHOR]
- Published
- 2018
5. You talkin’ to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech.
- Author
- McGettigan, Carolyn, Jasmin, Kyle, Eisner, Frank, Agnew, Zarinah K., Josephs, Oliver J., Calder, Andrew J., Jessop, Rosemary, Lawson, Rebecca P., Spielmann, Mona, and Scott, Sophie K.
- Subjects
- BRAIN imaging, COMMUNICATIVE action, SPEECH perception, INTELLIGIBILITY of speech, MAGNETIC resonance imaging of the brain, CEREBRAL dominance
- Abstract
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes’ responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin & Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze; further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task. [ABSTRACT FROM AUTHOR]
- Published
- 2017
6. Magnetic resonance imaging of the brain and vocal tract: Applications to the study of speech production and language learning.
- Author
- Carey, Daniel and McGettigan, Carolyn
- Subjects
- PSYCHOLINGUISTICS, MAGNETIC resonance imaging of the brain, VOCAL tract, PHYSIOLOGICAL adaptation, CHILD psychology, MOOD (Psychology)
- Abstract
The human vocal system is highly plastic, allowing for the flexible expression of language, mood and intentions. However, this plasticity is not stable throughout the life span, and it is well documented that adult learners encounter greater difficulty than children in acquiring the sounds of foreign languages. Researchers have used magnetic resonance imaging (MRI) to interrogate the neural substrates of vocal imitation and learning, and the correlates of individual differences in phonetic “talent”. In parallel, a growing body of work using MR technology to directly image the vocal tract in real time during speech has offered primarily descriptive accounts of phonetic variation within and across languages. In this paper, we review the contribution of neural MRI to our understanding of vocal learning, and give an overview of vocal tract imaging and its potential to inform the field. We propose methods by which our understanding of speech production and learning could be advanced through the combined measurement of articulation and brain activity using MRI: specifically, we describe a novel paradigm, developed in our laboratory, that uses both MRI techniques to map directly, for the first time, between neural, articulatory and acoustic data in the investigation of vocalisation. This non-invasive, multimodal imaging method could be used to track central and peripheral correlates of spoken language learning, and speech recovery in clinical settings, as well as provide insights into potential sites for targeted neural interventions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
7. The neural processing of masked speech.
- Author
- Scott, Sophie K. and McGettigan, Carolyn
- Subjects
- NERVE tissue proteins, PSYCHOACOUSTICS, NOISE, NEUROBIOLOGY, SPEECH perception, NEUROANATOMY
- Abstract
Spoken language is rarely heard in silence, and a great deal of interest in psychoacoustics has focused on the ways that the perception of speech is affected by properties of masking noise. In this review we first briefly outline the neuroanatomy of speech perception. We then summarise the neurobiological aspects of the perception of masked speech, and investigate this as a function of masker type, masker level and task. This article is part of a Special Issue entitled “Annual Reviews 2013”. [Copyright Elsevier]
- Published
- 2013
8. Cortical asymmetries in speech perception: what's wrong, what's right and what's left?
- Author
- McGettigan, Carolyn and Scott, Sophie K.
- Subjects
- SPEECH perception, TEMPORAL lobe, CEREBRAL cortex, NEUROLINGUISTICS, BRAIN function localization, NEUROPSYCHOLOGY
- Abstract
Over the past 30 years, hemispheric asymmetries in speech perception have been construed within a domain-general framework, according to which preferential processing of speech is due to left-lateralized, non-linguistic acoustic sensitivities. A prominent version of this argument holds that the left temporal lobe selectively processes rapid/temporal information in sound. Acoustically, this is a poor characterization of speech, and there has been little empirical support for a left-hemisphere selectivity for these cues. In sharp contrast, the right temporal lobe is demonstrably sensitive to specific acoustic properties. We suggest that acoustic accounts of speech sensitivities need to be informed by the nature of the speech signal, and that a simple domain-general vs. domain-specific dichotomy may be incorrect. [Copyright Elsevier]
- Published
- 2012
9. Speech comprehension aided by multiple modalities: Behavioural and neural interactions
- Author
- McGettigan, Carolyn, Faulkner, Andrew, Altarelli, Irene, Obleser, Jonas, Baverstock, Harriet, and Scott, Sophie K.
- Subjects
- COMPREHENSION, SPEECH perception, COGNITIVE ability, PERFORMANCE evaluation, AUDITORY perception, MAGNETIC resonance imaging of the brain, CEREBRAL cortex
- Abstract
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information.
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. [Copyright Elsevier]
- Published
- 2012
10. Voice Modulation: A Window into the Origins of Human Vocal Control?
- Author
- Pisanski, Katarzyna, Cartei, Valentina, McGettigan, Carolyn, Raine, Jordan, and Reby, David
- Subjects
- SPEECH, MODULATION coding, NEUROPHYSIOLOGY, PRIMATE physiology, MOTOR ability
- Abstract
An unresolved issue in comparative approaches to speech evolution is the apparent absence of an intermediate vocal communication system between human speech and the less flexible vocal repertoires of other primates. We argue that humans’ ability to modulate nonverbal vocal features evolutionarily linked to expression of body size and sex (fundamental and formant frequencies) provides a largely overlooked window into the nature of this intermediate system. Recent behavioral and neural evidence indicates that humans’ vocal control abilities, commonly assumed to subserve speech, extend to these nonverbal dimensions. This capacity appears in continuity with context-dependent frequency modulations recently identified in other mammals, including primates, and may represent a living relic of early vocal control abilities that led to articulated human speech. [ABSTRACT FROM AUTHOR]
- Published
- 2016
11. Developmental phonagnosia: A selective deficit of vocal identity recognition
- Author
- Garrido, Lúcia, Eisner, Frank, McGettigan, Carolyn, Stewart, Lauren, Sauter, Disa, Hanley, J. Richard, Schweinberger, Stefan R., Warren, Jason D., and Duchaine, Brad
- Subjects
- DEVELOPMENTAL disabilities, SPEECH perception, RECOGNITION (Psychology), MAGNETIC resonance imaging of the brain, AUDITORY perception, FACE perception testing
- Abstract
Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients, but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker's vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli. [Copyright Elsevier]
- Published
- 2009
12. Neural correlates of the affective properties of spontaneous and volitional laughter types.
- Author
- Lavan, Nadine, Rankin, Georgia, Lorking, Nicole, Scott, Sophie, and McGettigan, Carolyn
- Subjects
- PSYCHOLOGY of laughter, EMOTIONS, PREFRONTAL cortex, AUDITORY perception, STATISTICAL correlation
- Abstract
Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), where this region responded most strongly to laughs rated at the extremes of the authenticity scale.
While previous studies claimed a role for right STG in bipolar representation of emotional valence, we instead argue that it may in fact exhibit a relatively categorical response to emotional signals, whether positive or negative. [ABSTRACT FROM AUTHOR]
- Published
- 2017
Discovery Service for Jio Institute Digital Library