19 results on "Verena G. Skuk"
Search Results
2. Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users
- Author
-
Celina I. von Eiff, Verena G. Skuk, Romi Zäske, Christine Nussbaum, Sascha Frühholz, Ute Feuer, Orlando Guntinas-Lichius, and Stefan R. Schweinberger
- Subjects
Speech and Hearing, Cochlear Implants, Acoustic Stimulation, Otorhinolaryngology, Emotions, Auditory Perception, Quality of Life, Speech Perception, Humans, Cochlear Implantation, Music - Abstract
Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
- Published
- 2022
- Full Text
- View/download PDF
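The parameter-specific morphing logic described in the record above, varying one acoustic parameter along a fear-anger continuum while holding the others at a noninformative intermediate level, can be sketched in a few lines. The study used dedicated voice-morphing software on real recordings; the parameter dictionary, the function name `morph`, and all numeric values below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the TANDEM-STRAIGHT implementation) of a
# parameter-specific morph continuum between two emotional utterances.
# Each voice is reduced to a toy parameter dict; only the selected
# parameter follows the morph weight, the rest stay at the 50% level.
import numpy as np

def morph(fearful, angry, weight, selective=None):
    """Interpolate from fearful (weight=0) to angry (weight=1).

    If `selective` names a parameter (e.g. "F0"), only that parameter
    follows `weight`; all others are fixed at the ambiguous 0.5 mixture.
    """
    morphed = {}
    for key in fearful:
        w = weight if (selective is None or key == selective) else 0.5
        morphed[key] = (1 - w) * np.asarray(fearful[key]) + w * np.asarray(angry[key])
    return morphed

# Toy parameter sets (illustrative values only)
fearful = {"F0": [220.0, 230.0], "formants": [650.0, 1900.0], "duration": 0.45}
angry   = {"F0": [180.0, 170.0], "formants": [700.0, 1750.0], "duration": 0.55}

# A 7-step F0-only continuum: timbre and time stay at the ambiguous level
continuum = [morph(fearful, angry, w, selective="F0") for w in np.linspace(0, 1, 7)]
print(continuum[0]["F0"], continuum[-1]["F0"])
```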
3. Investigating the common set of acoustic parameters in sexual orientation groups: A voice averaging approach.
- Author
-
Sven Kachel, André Radtke, Verena G Skuk, Romi Zäske, Adrian P Simpson, and Melanie C Steffens
- Subjects
Medicine, Science - Abstract
While the perception of sexual orientation in voices often relies on stereotypes, it is unclear whether speech stereotypes and accurate perceptions of sexual orientation are each based on acoustic cues common to speakers of a given group. We ask whether the stereotypical belief that members of the same sexual orientation group share similar acoustic patterns is accurate to some degree. To address this issue, we are the first to use a novel voice morphing technique to create voice averages from voices that represent extremes of a given sexual orientation group in terms of either actual or perceived sexual orientation. Importantly, averaging preserves only those acoustic cues shared by the original speakers. 144 German listeners judged the sexual orientation of twelve natural-sounding sentence stimuli, each representing an average of five original utterances. Half of the averages were based on targets' self-ratings of sexual orientation: On a 7-point Kinsey-like scale, we selected targets who were most typical for a certain sexual orientation group according to their self-identifications. The other half were based on extreme ratings by others (i.e., on speech-related sexual-orientation stereotypes). Listeners judged sexual orientation from the voice averages with above-chance accuracy, suggesting 1) that the perception of actual and stereotypical sexual orientation, respectively, are based on acoustic cues shared by speakers of the same group, and 2) that the stereotypical belief that members of the same sexual orientation group share similar acoustic patterns is accurate to some degree. Mean fundamental frequency and other common acoustic parameters showed systematic variation depending on speaker gender and sexual orientation. Effects of sexual orientation were more pronounced for stereotypical voice averages than for those based on speakers' self-ratings, suggesting that sexual-orientation stereotypes exaggerate even those differences present in the most salient groups of speakers. Implications of our findings for stereotyping and discrimination are discussed.
- Published
- 2018
- Full Text
- View/download PDF
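As a rough illustration of the voice-averaging idea in the record above (an average preserves mainly the acoustic cues shared by the contributing speakers, because idiosyncratic deviations tend to cancel), the sketch below averages hypothetical F0 and formant tracks across five speakers. The actual stimuli were built with a voice morphing/averaging tool on time-aligned recordings; the function `average_voice`, the geometric-mean choice for F0, and all data are assumptions for illustration.

```python
# Illustrative sketch of voice averaging: parameter tracks from several
# speakers are assumed already time-aligned to a common frame grid and
# are averaged frame by frame.
import numpy as np

def average_voice(f0_tracks, formant_tracks):
    """Average F0 (Hz) and formant (Hz) trajectories across speakers.

    f0_tracks:      array of shape (n_speakers, n_frames)
    formant_tracks: array of shape (n_speakers, n_frames, n_formants)
    """
    f0_tracks = np.asarray(f0_tracks, dtype=float)
    formant_tracks = np.asarray(formant_tracks, dtype=float)
    # Geometric mean for F0, since pitch is perceived roughly on a log scale.
    avg_f0 = np.exp(np.mean(np.log(f0_tracks), axis=0))
    avg_formants = formant_tracks.mean(axis=0)
    return avg_f0, avg_formants

# Five hypothetical speakers, 4 frames, 3 formants (made-up numbers)
rng = np.random.default_rng(0)
f0 = 120 * np.exp(rng.normal(0, 0.05, size=(5, 4)))
formants = rng.normal([500, 1500, 2500], 50, size=(5, 4, 3))
avg_f0, avg_formants = average_voice(f0, formants)
print(avg_f0.round(1))
```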
4. Parameter-Specific Morphing Reveals Contributions of Timbre and Fundamental Frequency Cues to the Perception of Voice Gender and Age in Cochlear Implant Users
- Author
-
Orlando Guntinas-Lichius, Tobias Oberhoffner, Christian Dobel, Stefan R. Schweinberger, Verena G. Skuk, and Louisa Kirchen
- Subjects
Male, Auditory perception, Linguistics and Language, Audiology, Language and Linguistics, Age and gender, Speech and Hearing, Cochlear implant, Perception, Humans, Active listening, Fundamental frequency, Cochlear Implantation, Morphing, Cochlear Implants, Speech Perception, Voice, Female, Cues, Psychology, Timbre - Abstract
Purpose: Using naturalistic synthesized speech, we determined the relative importance of acoustic cues in voice gender and age perception in cochlear implant (CI) users. Method: We investigated 28 CI users' abilities to utilize fundamental frequency (F0) and timbre in perceiving voice gender (Experiment 1) and vocal age (Experiment 2). Parameter-specific voice morphing was used to selectively control acoustic cues (F0; time; timbre, i.e., formant frequencies, spectral-level information, and aperiodicity, as defined in TANDEM-STRAIGHT) in voice stimuli. Individual differences in CI users' performance were quantified via deviations from the mean performance of 19 normal-hearing (NH) listeners. Results: CI users' gender perception seemed exclusively based on F0, whereas NH listeners efficiently used timbre. For age perception, timbre was more informative than F0 for both groups, with minor contributions of temporal cues. While a few CI users performed comparably to NH listeners overall, others were at chance. Separate analyses confirmed that even high-performing CI users classified gender almost exclusively based on F0. While high performers could discriminate age in male and female voices, low performers were close to chance overall but used F0 as a misleading cue to age (classifying female voices as young and male voices as old). Satisfaction with CI generally correlated with performance in age perception. Conclusions: We confirmed that CI users' gender classification is mainly based on F0. However, high performers could make reasonable use of timbre cues in age perception. Overall, parameter-specific morphing can serve to objectively assess individual profiles of CI users' abilities to perceive nonverbal social-communicative vocal signals.
- Published
- 2020
- Full Text
- View/download PDF
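The record above quantifies individual CI users' performance as deviations from the mean of 19 normal-hearing listeners. A minimal sketch of one such metric is shown below, expressing each CI score as a z-score against the NH group; the z-scoring and all numbers are illustrative assumptions, not necessarily the exact measure used in the paper.

```python
# Hedged sketch: express individual CI performance relative to a
# normal-hearing (NH) reference group as a z-score against the NH mean
# and standard deviation. Data are made up for illustration.
import numpy as np

def deviation_from_nh(ci_scores, nh_scores):
    nh_scores = np.asarray(nh_scores, dtype=float)
    nh_mean, nh_sd = nh_scores.mean(), nh_scores.std(ddof=1)
    return (np.asarray(ci_scores, dtype=float) - nh_mean) / nh_sd

# Made-up proportion-correct scores for a gender classification task
nh = [0.95, 0.93, 0.97, 0.90, 0.96]
ci = [0.88, 0.55, 0.72]
print(deviation_from_nh(ci, nh).round(2))   # negative = below the NH mean
```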
5. Vocal emotion adaptation aftereffects within and across speaker genders: Roles of timbre and fundamental frequency
- Author
-
Celina I. von Eiff, Verena G. Skuk, Christine Nussbaum, and Stefan R. Schweinberger
- Subjects
Fundamental frequency, Psychology, Adaptation, Timbre, Cognitive psychology - Abstract
Although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional adaptors and ambiguous adaptors were used as reference conditions. Adaptors were either of the same (Experiment 1) or opposite speaker gender (Experiment 2) as target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptors were much larger than following F0 adaptors and were only marginally smaller than those following full adaptors. In Experiment 2, adaptation aftereffects appeared massively and proportionally reduced, with differences between morph types being no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender-congruency between adaptors and targets. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.
- Published
- 2021
- Full Text
- View/download PDF
6. Adaptation aftereffects in vocal emotion perception elicited by expressive faces and voices.
- Author
-
Verena G. Skuk and Stefan R. Schweinberger
- Subjects
Medicine, Science - Abstract
The perception of emotions is often suggested to be multimodal in nature, and bimodal as compared to unimodal (auditory or visual) presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalizations, when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations have not been demonstrated yet. In three experiments, we investigated the influence of emotional voice as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed on an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions, in that test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female participants. Our results (1) support the idea of contrastive processing of emotions, (2) show for the first time crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.
- Published
- 2013
- Full Text
- View/download PDF
7. The Role of Stimulus Type and Social Signal for Voice Perception in Cochlear Implant Users: Response to the Letter by Meister et al
- Author
-
Louisa Kirchen, Christine Nussbaum, Romi Zäske, Orlando Guntinas-Lichius, Stefan R. Schweinberger, Tobias Oberhoffner, Christian Dobel, Celina Isabelle von Eiff, and Verena G. Skuk
- Subjects
Linguistics and Language, Speech perception, Stimulus (physiology), Cochlear Implantation, Language and Linguistics, Speech and Hearing, Nonverbal communication, Cochlear Implants, Interactive effects, Cochlear implant, Perception, Speech Perception, Voice, Humans, Cues, Cochlear implantation, Psychology, Timbre, Cognitive psychology - Abstract
Purpose In their letter, Meister et al. (2020) appropriately point to a potential influence of stimulus type, arguing cochlear implant (CI) users may have the ability to use timbre cues only for complex stimuli such as sentences but not for brief stimuli such as vowel–consonant–vowel or single words. While we cannot exclude this possibility on the basis of Skuk et al. (2020) alone, we hold that there is a strong need to consider type of social signal (e.g., gender, age, emotion, speaker identity) to assess the profile of preserved and impaired aspects of voice processing in CI users. We discuss directions for further research to systematically consider interactive effects of stimulus type and social signal. In our view, this is crucial to understand and enhance nonverbal vocal perception skills that are relevant to successful communication with a CI.
- Published
- 2020
8. Vocal emotion adaptation aftereffects within and across speaker genders: Roles of timbre and fundamental frequency
- Author
-
Christine Nussbaum, Celina I. von Eiff, Verena G. Skuk, and Stefan R. Schweinberger
- Subjects
Male, Linguistics and Language, Cognitive Neuroscience, Emotions, Visual Perception, Voice, Developmental and Educational Psychology, Humans, Female, Experimental and Cognitive Psychology, Cues, Adaptation, Physiological, Language and Linguistics - Abstract
While the human perceptual system constantly adapts to the environment, some of the underlying mechanisms are still poorly understood. For instance, although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional voices and ambiguous voices were used as reference conditions. All adaptor stimuli were either of the same (Experiment 1) or opposite speaker gender (Experiment 2) as subsequently presented target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptation were much larger than following F0 adaptation and were only marginally smaller than those following full adaptation. In Experiment 2, adaptation aftereffects appeared massively and proportionally reduced, with differences between morph types being no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender-correspondence between adaptor and target stimuli. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.
- Published
- 2022
- Full Text
- View/download PDF
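A contrastive adaptation aftereffect of the kind reported in the record above is commonly summarized as the shift in classification of ambiguous targets after adapting to one emotion versus the other. The sketch below computes such a difference score from per-participant response proportions; the function name and all data are hypothetical, and this is not the authors' analysis script.

```python
# Illustrative computation of an adaptation aftereffect: the contrastive
# shift in how ambiguous target voices are classified after adapting to
# angry versus fearful adaptors.
import numpy as np

def aftereffect(p_fearful_after_angry, p_fearful_after_fear):
    """Aftereffect = p('fearful' | angry adaptor) - p('fearful' | fearful adaptor).

    A positive value indicates the expected contrastive shift: targets sound
    more fearful after angry adaptors than after fearful adaptors.
    """
    return np.mean(p_fearful_after_angry) - np.mean(p_fearful_after_fear)

# Made-up per-participant response proportions for one adaptor type (e.g. timbre)
after_angry = [0.71, 0.65, 0.80, 0.68]
after_fear  = [0.42, 0.38, 0.51, 0.45]
print(round(aftereffect(after_angry, after_fear), 3))
```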
9. The Jena Speaker Set (JESS)-A database of voice stimuli from unfamiliar young and old adult speakers
- Author
-
Jessika Golle, Romi Zäske, Stefan R. Schweinberger, and Verena G. Skuk
- Subjects
Attractiveness, Adult, Male, Adolescent, Experimental and Cognitive Psychology, Young Adult, Arts and Humanities (miscellaneous), Memory, Perception, Developmental and Educational Psychology, Personality, Humans, Speech, Set (psychology), Accent (sociolinguistics), General Psychology, Aged, Aged, 80 and over, Database, Reproducibility of Results, Speech corpus, Middle Aged, Inter-rater reliability, Speech Perception, Voice, Optimal distinctiveness theory, Female, Psychology (miscellaneous), Psychology - Abstract
Here we describe the Jena Speaker Set (JESS), a free database for unfamiliar adult voice stimuli, comprising voices from 61 young (18-25 years) and 59 old (60-81 years) female and male speakers uttering various sentences, syllables, read text, semi-spontaneous speech, and vowels. Listeners rated two voice samples (short sentences) per speaker for attractiveness, likeability, two measures of distinctiveness ("deviation"-based [DEV] and "voice in the crowd"-based [VITC]), regional accent, and age. Interrater reliability was high, with Cronbach's α between .82 and .99. Young voices were generally rated as more attractive than old voices, but particularly so when male listeners judged female voices. Moreover, young female voices were rated as more likeable than both young male and old female voices. Young voices were judged to be less distinctive than old voices according to the DEV measure, with no differences in the VITC measure. In age ratings, listeners almost perfectly discriminated young from old voices; additionally, young female voices were perceived as being younger than young male voices. Correlations between the rating dimensions above demonstrated (among other things) that DEV-based distinctiveness was strongly negatively correlated with rated attractiveness and likeability. By contrast, VITC-based distinctiveness was uncorrelated with rated attractiveness and likeability in young voices, although a moderate negative correlation was observed for old voices. Overall, the present results demonstrate systematic effects of vocal age and gender on impressions based on the voice and inform as to the selection of suitable voice stimuli for further research into voice perception, learning, and memory.
- Published
- 2019
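The JESS record above reports interrater reliability as Cronbach's α between .82 and .99. As a reminder of how that statistic is computed, the sketch below implements the standard formula on a made-up voices-by-listeners rating matrix (treating listeners as items); it is illustrative only and not the authors' analysis code.

```python
# Sketch of Cronbach's alpha for interrater reliability: rows are rated
# voices, columns are listeners ("items"). Data are invented.
import numpy as np

def cronbach_alpha(ratings):
    ratings = np.asarray(ratings, dtype=float)   # shape (n_voices, n_listeners)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance per listener
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed ratings
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

ratings = np.array([[4, 5, 4],
                    [2, 2, 3],
                    [5, 5, 5],
                    [3, 4, 3],
                    [1, 2, 1]])
print(round(cronbach_alpha(ratings), 2))
```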
10. Autistic Traits are Linked to Individual Differences in Familiar Voice Identification
- Author
-
Laura Broemer, Romina Palermo, Verena G. Skuk, and Stefan R. Schweinberger
- Subjects
Population, Speaker recognition, Social relation, Developmental psychology, Correlation, Autistic traits, Developmental and Educational Psychology, Autism, Active listening, Identification (biology), Psychology - Abstract
Autistic traits vary across the general population, and are linked with face recognition ability. Here we investigated potential links between autistic traits and voice recognition ability for personally familiar voices in a group of 30 listeners (15 female, 16–19 years) from the same local school. Autistic traits (particularly those related to communication and social interaction) were negatively correlated with voice recognition, such that more autistic traits were associated with fewer familiar voices identified and less ability to discriminate familiar from unfamiliar voices. In addition, our results suggest enhanced accessibility of personal semantic information in women compared to men. Overall, this study establishes a detailed pattern of relationships between voice identification performance and autistic traits in the general population.
- Published
- 2017
- Full Text
- View/download PDF
11. Voice Morphing
- Author
-
Verena G. Skuk and Hideki Kawahara
- Abstract
Voice morphing is a framework for generating a new sound that has mixed attributes of given voice examples. It provides a flexible tool for investigating perceptual attributes in voice communication, especially for quantifying paralinguistic and extralinguistic cues. Recent advances in parametric representation of speech sounds have made the morphing-based approach a practical alternative or complementary strategy to procedures using speech-production models. A stimulus continuum spanning two typical voice recordings with different perceptual attributes provides an external reference for quantifying subjective responses. Generalized morphing, which enables extrapolation as well as interpolation across arbitrarily many voices, provides a unique strategy for investigating speaker identity. Its ability to selectively manipulate any combination of fundamental parameters facilitates the development of technology to intervene in and mediate paralinguistic and extralinguistic communication channels.
- Published
- 2018
- Full Text
- View/download PDF
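Generalized morphing, as described in the chapter above, combines arbitrarily many voices and allows extrapolation beyond the set (e.g., caricaturing one speaker relative to an average). A minimal sketch of this weighted-combination idea follows; the abstract parameter vectors and the constraint that weights sum to 1 are simplifying assumptions for illustration, not the TANDEM-STRAIGHT formulation.

```python
# Minimal sketch of generalized morphing as a weighted combination of
# parameter vectors from several voices. Weights in [0, 1] summing to 1
# give interpolation; weights outside that range give extrapolation
# (e.g. caricaturing one voice away from the average).
import numpy as np

def generalized_morph(param_vectors, weights):
    params = np.asarray(param_vectors, dtype=float)   # (n_voices, n_params)
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights should sum to 1"
    return weights @ params

voices = [[120.0, 500.0, 1500.0],   # voice A: F0 and two formants (toy values)
          [210.0, 560.0, 1700.0],   # voice B
          [180.0, 530.0, 1650.0]]   # voice C

print(generalized_morph(voices, [1/3, 1/3, 1/3]))      # average of all three
print(generalized_morph(voices, [1.5, -0.25, -0.25]))  # caricature of voice A
```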
12. Role of timbre and fundamental frequency in voice gender adaptation
- Author
-
Verena G. Skuk, Stefan R. Schweinberger, and Lea M. Dammann
- Subjects
Adult, Male, Speech perception, Adolescent, Acoustics and Ultrasonics, Voice Quality, Transfer, Psychology, Acoustics, Normal Distribution, Adaptation (eye), Audiology, Loudness, Young Adult, Arts and Humanities (miscellaneous), Phonetics, Humans, Set (psychology), Sex Characteristics, Gender Identity, Speech processing, Speaker recognition, Acoustic Stimulation, Speech Perception, Female, Psychology, Timbre - Abstract
Prior adaptation to male (or female) voices causes androgynous voices to be perceived as more female (or male). Using a selective adaptation paradigm, the authors investigate the relative impact of vocal fold vibration rate (F0) and timbre (operationalized in this paper as the characteristics that differentiate two voices of the same F0 and loudness) on this basic voice gender aftereffect. TANDEM-STRAIGHT was used to morph between 10 pairs of male and female speakers uttering 2 different vowel-consonant-vowel sequences (20 continua). Adaptor stimuli had one parameter (either F0 or timbre) set at a clearly male or female level, while the other parameter was set at an androgynous level, as determined by an independent set of listeners. Compared to a control adaptation condition (in which both F0 and timbre were clearly male or female), aftereffects were clearly reduced in both F0 and timbre adaptation conditions. Critically, larger aftereffects were found after timbre adaptation (comprising androgynous F0) compared to F0 adaptation (comprising an androgynous timbre). Together these results suggest that timbre plays a larger role than F0 in voice gender adaptation. Finally, the authors found some evidence that individual differences among listeners in part reflect pre-experimental contact with male and female voices.
- Published
- 2015
- Full Text
- View/download PDF
13. Speaker perception
- Author
-
Adrian P. Simpson, Verena G. Skuk, Hideki Kawahara, Stefan R. Schweinberger, and Romi Zäske
- Subjects
Speaker diarisation ,General Neuroscience ,Perception ,media_common.quotation_subject ,Speech recognition ,General Medicine ,Speaker recognition ,Psychology ,General Psychology ,media_common - Published
- 2013
- Full Text
- View/download PDF
14. Autistic Traits are Linked to Individual Differences in Familiar Voice Identification
- Author
-
Verena G. Skuk, Romina Palermo, Laura Broemer, and Stefan R. Schweinberger
- Subjects
Adult, Male, Discrimination, Psychological, Communication, Auditory Perception, Individuality, Voice, Humans, Female, Recognition, Psychology, Autistic Disorder - Abstract
Autistic traits vary across the general population, and are linked with face recognition ability. Here we investigated potential links between autistic traits and voice recognition ability for personally familiar voices in a group of 30 listeners (15 female, 16-19 years) from the same local school. Autistic traits (particularly those related to communication and social interaction) were negatively correlated with voice recognition, such that more autistic traits were associated with fewer familiar voices identified and less ability to discriminate familiar from unfamiliar voices. In addition, our results suggest enhanced accessibility of personal semantic information in women compared to men. Overall, this study establishes a detailed pattern of relationships between voice identification performance and autistic traits in the general population.
- Published
- 2017
15. Gender differences in familiar voice identification
- Author
-
Verena G. Skuk and Stefan R. Schweinberger
- Subjects
Adult, Male, Adolescent, Voice Quality, Audiology, behavioral disciplines and activities, Speech Acoustics, Young Adult, Discrimination, Psychological, Sex Factors, Phonetics, Surveys and Questionnaires, Humans, Overall performance, Set (psychology), Analysis of Variance, Recognition, Psychology, Middle Aged, Speaker recognition, Sensory Systems, Identification (information), Acoustic Stimulation, Speech Perception, Female, Optimal distinctiveness theory, Syllable, Audiometry, Speech, Psychology, Social psychology, psychological phenomena and processes - Abstract
We investigated gender differences in the identification of personally familiar voices in a gender-balanced sample of 40 listeners. From various types of utterances, listeners had to identify by name 20 speakers (10 female) among a set of 70 possible classmates, who were all 12th grade pupils from the same local secondary school. Mean identification rates were 67% from sentences, and around 35% for an isolated /Hello/ or a VCV syllable. Even from non-verbal harrumphs, speakers were identified with an accuracy of 18%, i.e., well above chance level. Substantial individual differences were observed between listeners. Importantly, superior overall performance of female listeners was qualified by an interaction between voice gender and listener gender. Male listeners exhibited an own-gender bias (i.e., better identification for male than female voices), whereas female listeners identified voices of both genders at similar levels. Individual own-gender identification biases were correlated with differences in reported contact to a speaker's voice and voice distinctiveness. Overall, the present study establishes a number of factors that account for substantial individual differences in personal voice identification.
- Published
- 2013
- Full Text
- View/download PDF
16. Influences of fundamental frequency, formant frequencies, aperiodicity, and spectrum level on the perception of voice gender
- Author
-
Verena G. Skuk and Stefan R. Schweinberger
- Subjects
Auditory perception, Adult, Male, Linguistics and Language, Speech recognition, Acoustics, Models, Biological, Language and Linguistics, Speech Acoustics, Voice analysis, Speech and Hearing, Judgment, Young Adult, Phonetics, Perception, Humans, Mathematics, Sex Characteristics, Fundamental frequency, Formant, Spectral envelope, Speech Perception, Voice, Female, Syllable, Cues - Abstract
Purpose: To determine the relative importance of acoustic parameters (fundamental frequency [F0], formant frequencies [FFs], aperiodicity, and spectrum level [SL]) on voice gender perception, the authors used a novel parameter-morphing approach that, unlike spectral envelope shifting, allows the application of nonuniform scale factors to transform formants and more direct comparison of parameter impact. Method: In each of 2 experiments, 16 listeners with normal hearing (8 female, 8 male) classified voice gender for morphs between female and male speakers, using syllable tokens from 2 male–female speaker pairs. Morphs varied single acoustic parameters (Experiment 1) or selected combinations (Experiment 2), keeping residual parameters androgynous, as determined in a baseline experiment. Results: The strongest cue related to gender perception was F0, followed by FF and SL. Aperiodicity did not systematically influence gender perception. Morphing F0 and FF in conjunction produced convincing changes in perceived gender—changes that were equivalent to those for Full morphs interpolating all parameters. Despite the importance of F0, morphing FF and SL in combination produced effective changes in voice gender perception. Conclusions: The most important single parameters for gender perception are, in order, F0, FF, and SL. At the same time, F0 and vocal tract resonances have a comparable impact on voice gender perception. Supplemental Material: https://doi.org/10.23641/asha.6170438
- Published
- 2013
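One common way to compare the impact of single morphed parameters like those in the record above is to fit a psychometric (logistic) function to gender responses across the morph continuum and compare the fitted slopes. The sketch below does this with scipy; the response data and the slope-based comparison are illustrative assumptions, not necessarily the analysis reported in the paper.

```python
# Hedged sketch: fit a logistic psychometric function to the proportion of
# "male" responses across a female-to-male morph continuum, separately for
# two single-parameter morph types. A steeper slope suggests a stronger cue.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_level = np.linspace(0, 1, 7)                 # 0 = female, 1 = male endpoint
p_male_f0 = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])  # made-up data
p_male_sl = np.array([0.30, 0.38, 0.44, 0.50, 0.56, 0.62, 0.70])  # weaker cue

for label, data in [("F0", p_male_f0), ("spectrum level", p_male_sl)]:
    (x0, k), _ = curve_fit(logistic, morph_level, data, p0=[0.5, 5.0], maxfev=5000)
    print(f"{label}: inflection={x0:.2f}, slope={k:.1f}")
```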
17. Speaker perception
- Author
-
Stefan R. Schweinberger, Hideki Kawahara, Adrian P. Simpson, Verena G. Skuk, and Romi Zäske
- Abstract
While humans use their voice mainly for communicating information about the world, paralinguistic cues in the voice signal convey rich dynamic information about a speaker's arousal and emotional state, and extralinguistic cues reflect more stable speaker characteristics including identity, biological sex and social gender, socioeconomic or regional background, and age. Here we review the anatomical and physiological bases for individual differences in the human voice, before discussing how recent methodological progress in voice morphing and voice synthesis has promoted research on current theoretical issues, such as how voices are mentally represented in the human brain. Special attention is dedicated to the distinction between the recognition of familiar and unfamiliar speakers, in everyday situations or in the forensic context, and on the processes and representational changes that accompany the learning of new voices. We describe how specific impairments and individual differences in voice perception could relate to specific brain correlates. Finally, we consider that voices are produced by speakers who are often visible during communication, and review recent evidence that shows how speaker perception involves dynamic face-voice integration. The representation of para- and extralinguistic vocal information plays a major role in person perception and social communication, could be neuronally encoded in a prototype-referenced manner, and is subject to flexible adaptive recalibration as a result of specific perceptual experience. WIREs Cogn Sci 2014, 5:15-25. doi: 10.1002/wcs.1261
- Published
- 2013
18. Perceiving vocal age and gender: an adaptation approach
- Author
-
Jürgen M. Kaufmann, Verena G. Skuk, Romi Zäske, and Stefan R. Schweinberger
- Subjects
Adult, Male, Experimental and Cognitive Psychology, Adaptation (eye), Developmental psychology, Age and gender, Young Adult, Sex Factors, Arts and Humanities (miscellaneous), Perception, Adaptation, Psychological, Developmental and Educational Psychology, Interactive processing, Humans, Pitch Perception, Aged, Voice-onset time, Age Factors, Contrast (statistics), General Medicine, Middle Aged, Prolonged exposure, Speech Perception, Voice, Female, Analysis of variance, Psychology - Abstract
Aftereffects of adaptation have revealed both independent and interactive coding of facial signals including identity and expression or gender and age. By contrast, interactive processing of non-linguistic features in voices has rarely been investigated. Here we studied bidirectional cross-categorical aftereffects of adaptation to vocal age and gender. Prolonged exposure to young (~20yrs) or old (~70yrs) male or female voices biased perception of subsequent test voices away from the adapting age (Exp. 1) and the adapting gender (Exp. 2). Relative to gender-congruent adaptor-test pairings, vocal age aftereffects (VAAEs) were reduced but remained significant when voice gender changed between adaptation and test. This suggests that the VAAE relies on both gender-specific and gender-independent age representations for male and female voices. By contrast, voice gender aftereffects (VGAEs) were not modulated by age-congruency of adaptor and test voices (Exp. 2). Instead, young voice adaptors generally induced larger VGAEs than old voice adaptors. This suggests that young voices are particularly efficient gender adaptors, likely reflecting more pronounced sexual dimorphism in these voices. In sum, our findings demonstrate how high-level processing of vocal age and gender is partially intertwined.
- Published
- 2013
19. The GlobeFish and the GlobeMouse
- Author
-
Anke Huckauf, Verena G. Skuk, Bernd Froehlich, and Jan Hochstrate
- Subjects
Computer science, Computer graphics (images), Isotonic, Six degrees of freedom, Input device, Graphics, Simulation - Abstract
We introduce two new six-degree-of-freedom desktop input devices based on the key concept of combining forceless isotonic rotational input with force-requiring elastic translational input. The GlobeFish consists of a custom three-degree-of-freedom trackball which is elastically connected to a frame. The trackball is accessible from the top and bottom and can be moved slightly in all spatial directions by using force. The GlobeMouse device works in a similar way. Here the trackball is placed on top of a movable base, which requires the user to change grip on the device to switch between rotating the trackball and moving the base. Our devices are manipulated with the fingertips, allowing precise interaction with virtual objects. The elastic translation allows uniform input for all three axes and the isotonic trackball provides a natural mapping for rotations. Our user study revealed that the new devices perform significantly better in a docking task in comparison to the SpaceMouse, an integrated six-degree-of-freedom controller. Subjective data confirmed these results.
- Published
- 2006
- Full Text
- View/download PDF
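The GlobeFish/GlobeMouse record above combines forceless isotonic rotation with force-requiring elastic translation. A typical way to use such input is position control for the isotonic trackball and rate control for the elastic element; the sketch below shows that mapping for one update step. The gains, the rate-control choice, and the function `update_object` are assumptions for illustration, not taken from the paper.

```python
# Illustrative control mapping for a combined isotonic/elastic 6-DOF device:
# trackball rotation is applied directly (position control), while the
# elastic translational offset drives a velocity (rate control).
import numpy as np

def update_object(position, rotation_deg, ball_delta_deg, elastic_offset, dt,
                  gain_rate=0.5):
    """Advance an object's pose by one frame of device input.

    ball_delta_deg : incremental trackball rotation (deg per axis) -> position control
    elastic_offset : translational displacement of the elastic element -> rate control
    """
    rotation_deg = rotation_deg + np.asarray(ball_delta_deg, dtype=float)
    velocity = gain_rate * np.asarray(elastic_offset, dtype=float)
    position = position + velocity * dt
    return position, rotation_deg

pos, rot = np.zeros(3), np.zeros(3)
for _ in range(60):  # one second at 60 Hz with a constant small input
    pos, rot = update_object(pos, rot, ball_delta_deg=[0.0, 0.5, 0.0],
                             elastic_offset=[0.002, 0.0, 0.0], dt=1 / 60)
print(pos.round(4), rot.round(1))
```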