86,375 results for "Auditory Perception"
Search Results
2. Harmonies on the String: Exploring the Synergy of Music and STEM
- Author
- Christopher Dignam
- Abstract
The process of perceiving music involves the transference of environmental physics in the air to anatomical and physiological interpretations of resonance in the body and psychological perceptions in the brain. These processes and musical interpretations are the basis of physical and cognitive science, neurophysiology, psychoacoustics, and cultural psychology. The intersection of interdisciplinary and transdisciplinary curricular offerings forms the basis of STEM (Science, Technology, Engineering, and Mathematics). In this study, the researcher explores the synergy of music in STEM for formulating and affording authentic STEAM programmatic offerings for learners. The blending of the art of music within STEM provides opportunities for teachers and students to address and connect content through creative, innovative approaches for deeper, meaningful learning. Threading the art of music within STEM affords discovery-learning opportunities that facilitate both critical thinking and social-emotional learning skills development in students. This study provides perspective on developing curricular offerings for students that blend physical and cognitive science with the art of sound. The researcher provides authentic curricular exemplars regarding the synergy of music in STEM and concludes by offering recommendations for designing and implementing expressive curricular programmatic offerings for students from early childhood settings through higher education.
- Published
- 2024
3. Listener Perception of Appropriateness of L1 and L2 Refusals in English
- Author
- Maria Kostromitina and Yongzhi Miao
- Abstract
English has become an international language (EIL) as speakers around the world use it as a universal means of communication. Accordingly, scholars have investigated different aspects of EIL affecting communicative success. Speech scholars have been interested in speech constructs like accentedness, comprehensibility, and acceptability (e.g., Kang et al., 2023). On the other hand, pragmatic researchers have examined lexico-grammatical features of EIL that contribute to first language (L1) English listeners' perceptions of appropriateness in speech acts (e.g., Taguchi, 2006). However, little is known about: a) how appropriateness is perceived by users of EIL of diverse L1s and b) how those appropriateness perceptions are related to lexico-grammatical and phonological features. Therefore, the present study had 184 listeners (L1 = English, Spanish, Chinese, and Indian languages) evaluate 40 speech acts performed by 20 speakers (L1 English and Chinese, 50% each) in terms of appropriateness on a 9-point numerical scale. Results from linear mixed-effects regressions suggested that: a) listener L1 did not contribute to listener ratings and b) speakers' rhythm and lexico-grammatical features (i.e., use of different pragmatic strategies) significantly contributed to listener appropriateness ratings. The findings provide empirical evidence to support the phonology-pragmatics link in appropriateness perceptions and offer implications regarding the operationalization of English interactional appropriateness.
- Published
- 2024
4. Investigating EFL Students' Perspectives of the Influence of Podcasts on Enhancing Listening Proficiency
- Author
- Fatimah Ghazi Mohamm and Hanadi Abdulrahman Khadawardi
- Abstract
Listening is widely regarded as the predominant language proficiency utilized in virtually all forms of communication. However, its intricacies often engender feelings of complexity and, at times, provoke anxiety and frustration among both foreign and second-language learners. The enhancement of successful communication fundamentally hinges upon the precise comprehension of spoken messages. In this quantitative investigation, the present study delves into the perceptions of English as a Foreign Language (EFL) students concerning the utilization of podcasts as a tool to cultivate and bolster their listening proficiency. The study cohort comprised female university students enrolled in a preparatory year program. The examination of attitudes toward podcasts was conducted via a survey questionnaire. The findings unveiled that most participants derived enjoyment from utilizing podcasts, which in turn catalyzed their enthusiasm for English language acquisition. Additionally, they conceded that podcasts held promise in augmenting their linguistic abilities, with a primary focus on listening comprehension. These outcomes posit that podcasting serves as a medium with significant implications for students' learning trajectories, particularly regarding the acquisition of listening proficiencies.
- Published
- 2024
5. The Effect of Attention on Auditory Processing in Adults on the Autism Spectrum
- Author
- Jewel E. Crasta and Erica C. Jacoby
- Abstract
This study examined the effect of attention on auditory processing in autistic individuals. Electroencephalography data were recorded during two attention conditions (passive and active) from 24 autistic adults and 24 neurotypical controls, ages 17-30 years. The passive condition involved only listening to the clicks and the active condition involved a button press following single clicks in a modified paired-click paradigm. Participants completed the Adolescent/Adult Sensory Profile and the Social Responsiveness Scale 2. The autistic group showed delayed N1 latencies and reduced evoked and phase-locked gamma power compared to neurotypical peers across both clicks and conditions. Longer N1 latencies and reduced gamma synchronization predicted greater social and sensory symptoms. Directing attention to auditory stimuli may be associated with more typical neural auditory processing in autism.
- Published
- 2024
- Full Text
6. The Relationship between Autism and Pitch Perception Is Modulated by Cognitive Abilities
- Author
- Jia Hoong Ong, Chen Zhao, Alex Bacon, Florence Yik Nam Leung, Anamarija Veic, Li Wang, Cunmei Jiang, and Fang Liu
- Abstract
Previous studies reported mixed findings on autistic individuals' pitch perception relative to neurotypical (NT) individuals. We investigated whether this may be partly due to individual differences in cognitive abilities by comparing their performance on various pitch perception tasks on a large sample (n = 164) of autistic and NT children and adults. Our findings revealed that: (i) autistic individuals either showed similar or worse performance than NT individuals on the pitch tasks; (ii) cognitive abilities were associated with some pitch task performance; and (iii) cognitive abilities modulated the relationship between autism diagnosis and pitch perception on some tasks. Our findings highlight the importance of taking an individual differences approach to understand the strengths and weaknesses of pitch processing in autism.
- Published
- 2024
- Full Text
7. Do Early Musical Impairments Predict Later Reading Difficulties? A Longitudinal Study of Pre-Readers with and without Familial Risk for Dyslexia
- Author
- Manon Couvignou, Hugo Peyre, Franck Ramus, and Régine Kolinsky
- Abstract
The present longitudinal study investigated the hypothesis that early musical skills (as measured by melodic and rhythmic perception and memory) predict later literacy development via a mediating effect of phonology. We examined 130 French-speaking children, 31 of whom had a familial risk for developmental dyslexia (DD). Their abilities in the three domains were assessed longitudinally with a comprehensive battery of behavioral tests in kindergarten, first grade, and second grade. Using a structural equation modeling approach, we examined potential longitudinal effects from music to literacy via phonology. We then investigated how familial risk for DD may influence these relationships by testing whether atypical music processing is a risk factor for DD. Results showed that children with a familial risk for DD consistently underperformed children without familial risk in music, phonology, and literacy. A small effect of musical ability on literacy via phonology was observed, but may have been induced by differences in stability across domains over time. Furthermore, early musical skills did not add significant predictive power to later literacy difficulties beyond phonological skills and family risk status. These findings are consistent with the idea that certain key auditory skills are shared between music and speech processing, and between DD and congenital amusia. However, they do not support the notion that music perception and memory skills can serve as a reliable early marker of DD, nor as a valuable target for reading remediation.
- Published
- 2024
- Full Text
8. Comparison of Speech and Music Input in North American Infants' Home Environment over the First 2 Years of Life
- Author
- Lindsay Hippe, Victoria Hennessy, Naja Ferjan Ramirez, and T. Christina Zhao
- Abstract
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments, at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input and the gap widens as the infants get older. At every age point, infants were exposed to more music from an electronic device than an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4
- Published
- 2024
- Full Text
9. Recognitions of Image and Speech to Improve Learning Diagnosis on STEM Collaborative Activity for Precision Education
- Author
- Chia-Ju Lin, Wei-Sheng Wang, Hsin-Yu Lee, Yueh-Min Huang, and Ting-Ting Wu
- Abstract
The rise of precision education has encouraged teachers to use intelligent diagnostic systems to understand students' learning processes and provide immediate guidance to prevent students from giving up when facing learning difficulties. However, current research on precision education rarely employs multimodal learning analytics approaches to understand students' learning behaviors. Therefore, this study aims to investigate the impact of teacher interventions based on different modalities of learning analytics diagnostic systems on students' learning behaviors, learning performance, and motivation in STEM collaborative learning activities. We conducted a quasi-experiment with three groups: a control group without any learning analytics system assistance, experimental group 1 with a unimodal learning analytics approach based on image data, and experimental group 2 with a multimodal learning analytics approach based on both image and voice data. We collected students' image or voice data according to the experimental design and employed artificial intelligence techniques for facial expression recognition, eye gaze tracking, and speech recognition to identify students' learning behaviors. The results of this research indicate that teacher interventions, augmented by learning analytics systems, have a significant positive impact on student learning outcomes and motivation. In experimental group 2, the acquisition of multimodal data facilitated a more precise identification and addressing of student learning challenges. Relative to the control group, students in the experimental groups exhibited heightened self-efficacy and were more motivated learners. Moreover, students in experimental group 2 demonstrated a deeper level of engagement in collaborative processes and the behavior associated with constructing knowledge.
- Published
- 2024
- Full Text
10. Brief Report: Characterization of Sensory Over-Responsivity in a Broad Neurodevelopmental Concern Cohort Using the Sensory Processing Three Dimensions (SP3D) Assessment
- Author
- Maia C. Lazerwitz, Mikaela A. Rowe, Kaitlyn J. Trimarchi, Rafael D. Garcia, Robyn Chu, Mary C. Steele, Shalin Parekh, Jamie Wren-Jarvis, Ioanna Bourla, Ian Mark, Elysa J. Marco, and Pratik Mukherjee
- Abstract
Sensory Over-Responsivity (SOR) is an increasingly recognized challenge among children with neurodevelopmental concerns (NDC). To investigate, we characterized the incidence of auditory and tactile over-responsivity (AOR, TOR) among 82 children with NDC. We found that 70% of caregivers reported concern for their child's sensory reactions. Direct assessment further revealed that 54% of the NDC population expressed AOR, TOR, or both -- which persisted regardless of autism spectrum disorder (ASD) diagnosis. These findings support the high prevalence of SOR as well as its lack of specificity to ASD. Additionally, AOR is revealed to be over twice as prevalent as TOR. These conclusions present several avenues for further exploration, including deeper analysis of the neural mechanisms and genetic contributors to sensory processing challenges.
- Published
- 2024
- Full Text
11. Event Boundary Perception in Audio Described Films by People without Sight
- Author
- Roger Johansson, Tina Rastegar, Viveka Lyberg-Åhlander, and Jana Holsanova
- Abstract
Audio description (AD) plays a crucial role in making audiovisual media accessible to people with a visual impairment, enhancing their experience and understanding. This study employs an event segmentation task to examine how people without sight perceive and segment narrative events in films with AD, compared to sighted viewers without AD. Two AD versions were utilized, differing in the explicitness of conveyed event boundaries. Results reveal that the participants without sight generally perceived event boundaries similarly to their sighted peers, affirming AD's effectiveness in conveying event structures. However, when key event boundaries were more implicitly expressed, event boundary recognition diminished. Collectively, these findings offer valuable insights into event segmentation processes across sensory modalities. Additionally, they underscore the significance of how AD presents event boundaries, influencing the perception and interpretation of audiovisual media for people with a visual impairment and providing applied insights into event segmentation, multimodal processing, and audiovisual accessibility.
- Published
- 2024
- Full Text
12. Auditory Challenges and Listening Effort in School-Age Children with Autism: Insights from Pupillary Dynamics during Speech-in-Noise Perception
- Author
- Suyun Xu, Hua Zhang, Juan Fan, Xiaoming Jiang, Minyue Zhang, Jingjing Guan, Hongwei Ding, and Yang Zhang
- Abstract
Purpose: This study aimed to investigate challenges in speech-in-noise (SiN) processing faced by school-age children with autism spectrum conditions (ASCs) and their impact on listening effort. Method: Participants, including 23 Mandarin-speaking children with ASCs and 19 age-matched neurotypical (NT) peers, underwent sentence recognition tests in both quiet and noisy conditions, with a speech-shaped steady-state noise masker presented at 0-dB signal-to-noise ratio in the noisy condition. Recognition accuracy rates and task-evoked pupil responses were compared to assess behavioral performance and listening effort during auditory tasks. Results: No main effect of group was found on accuracy rates. Instead, significant effects emerged for autistic trait scores, listening conditions, and their interaction, indicating that higher trait scores were associated with poorer performance in noise. Pupillometric data revealed significantly larger and earlier peak dilations, along with more varied pupillary dynamics in the ASC group relative to the NT group, especially under noisy conditions. Importantly, the ASC group's peak dilation in quiet mirrored that of the NT group in noise. However, the ASC group consistently exhibited smaller mean dilations than the NT group. Conclusions: Pupillary responses suggest a different resource allocation pattern in ASCs--an initial sharper and larger dilation may signal an intense, narrowed resource allocation, likely linked to heightened arousal, engagement, and cognitive load, whereas a subsequent faster tail-off may indicate a greater decrease in resource availability and engagement, or a quicker release of arousal and cognitive load. The presence of noise further accentuates this pattern. This highlights the unique SiN processing challenges children with ASCs may face, underscoring the importance of a nuanced, individual-centric approach for interventions and support.
- Published
- 2024
- Full Text
13. Phonolexical Processing of Mandarin Segments and Tones by English Speakers at Different Mandarin Proficiency Levels
- Author
- Yen-Chen Hao
- Abstract
The current study examined the phonolexical processing of Mandarin segments and tones by English speakers at different Mandarin proficiency levels. Eleven English speakers naive to Mandarin, 15 intermediate and 9 advanced second language (L2) learners participated in a word-learning experiment. After learning the sound and meaning of 16 Mandarin disyllabic words, they judged the matching between sound and meaning pairs, with half of the pairs being complete matches while the other half contained segmental or tonal mismatches. The results showed that all three groups were more sensitive to segmental than tonal mismatches. The two learner groups outperformed the Naive group on segmental mismatches but not on tonal mismatches. However, their reaction times revealed that the learners but not the Naive group attended to tonal variations. The current findings suggest that increasing L2 experience has limited benefit on learners' phonolexical processing of L2 tones, probably due to their non-tonal native language background. Experience in a tonal L2 may enhance learners' attention to the tonal dimension but may not necessarily improve their accuracy.
- Published
- 2024
- Full Text
14. The Not-so-Slight Perceptual Consequences of Slight Hearing Loss in School-Age Children: A Scoping Review
- Author
- Chhayakanta Patro and Srikanta Kumar Mishra
- Abstract
Purpose: This study aimed to conduct a scoping review of research exploring the effects of slight hearing loss on auditory and speech perception in children. Method: A comprehensive search conducted in August 2023 identified a total of 402 potential articles sourced from eight prominent bibliographic databases. These articles were subjected to rigorous evaluation for inclusion criteria, specifically focusing on their reporting of speech or auditory perception using psychoacoustic tasks. The selected studies exclusively examined school-age children, encompassing those between 5 and 18 years of age. Following rigorous evaluation, 10 articles meeting these criteria were selected for inclusion in the review. Results: The analysis of included articles consistently shows that even slight hearing loss in school-age children significantly affects their speech and auditory perception. Notably, most of the included articles highlighted a common trend, demonstrating that perceptual deficits originating due to slight hearing loss in children are particularly observable under challenging experimental conditions and/or in cognitively demanding listening tasks. Recent evidence further underscores that the negative impacts of slight hearing loss in school-age children cannot be predicted by their pure-tone thresholds alone. However, there is limited evidence concerning the effect of slight hearing loss on the segregation of competing speech, which may be a better representation of listening in the classroom. Conclusion: This scoping review discusses the perceptual consequences of slight hearing loss in school-age children and provides insights into an array of methodological issues associated with studying perceptual skills in school-age children with slight hearing losses, offering guidance for future research endeavors.
- Published
- 2024
- Full Text
15. Acoustic and Semantic Processing of Auditory Scenes in Children with Autism Spectrum Disorders
- Author
- Breanne D. Yerkes, Christina M. Vanden Bosch der Nederlanden, Julie F. Beasley, Erin E. Hannon, and Joel S. Snyder
- Abstract
Purpose: Processing real-world sounds requires acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information. Methods: We used a change deafness task that required detection of speech and non-speech auditory objects being replaced and a speech-in-noise task using spoken sentences that must be comprehended in the presence of background speech to examine the extent to which 7-15 year old children with ASD (n = 27) rely on acoustic and semantic information, compared to age-matched (n = 27) and IQ-matched (n = 27) groups of typically developing (TD) children. Within a larger group of 7-15 year old TD children (n = 105) we correlated IQ, ASD symptoms, and the use of acoustic and semantic information. Results: Children with ASD performed worse overall at the change deafness task relative to the age-matched TD controls, but they did not differ from IQ-matched controls. All groups utilized acoustic and semantic information similarly and displayed an attentional bias towards changes that involved the human voice. Similarly, for the speech-in-noise task, age-matched--but not IQ-matched--TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information. Conclusion: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks.
- Published
- 2024
- Full Text
16. Using Chatbots to Support EFL Listening Decoding Skills in a Fully Online Environment
- Author
- Weijiao Huang, Chengyuan Jia, Khe Foon Hew, and Jia Guo
- Abstract
Aural decoding skill is an important contributor to successful EFL listening comprehension. This paper first described a preliminary study involving a 12-week undergraduate flipped decoding course, based on the flipped SEF-ARCS decoding model. Although the decoding model (N = 44) was significantly more effective in supporting students' decoding performance than a conventional decoding course (N = 36), two main challenges were reported: the teacher's excessive workload, and the high requirement for the individual teacher's decoding skills. To address these challenges, we developed a chatbot based on self-determination theory and social presence theory to serve as a 24/7 conversational agent, and adapted the flipped decoding course to a fully online chatbot-supported learning course to reduce the dependence on the teacher. Although results revealed that the chatbot-supported fully online group (N = 46) and the flipped group (N = 43) performed equally well on the decoding test, the chatbot-supported fully online approach was more effective in supporting students' behavioral and emotional engagement than the flipped learning approach. Students' perceptions of the chatbot-supported decoding activities were also explored. This study provides a useful pedagogical model involving the innovative use of a chatbot to develop undergraduate EFL aural decoding skills in a fully online environment.
- Published
- 2024
17. Effect of Age and Unaided Acoustic Hearing on Pediatric Cochlear Implant Users' Ability to Distinguish Yes/No Statements and Questions
- Author
- Emily Buss, Margaret E. Richter, Victoria N. Sweeney, Amanda G. Davis, Margaret T. Dillon, and Lisa R. Park
- Abstract
Purpose: The purpose of this study was to evaluate the ability to discriminate yes/no questions from statements in three groups of children--bilateral cochlear implant (CI) users, nontraditional CI users with aidable hearing preoperatively in the ear to be implanted, and controls with normal hearing. Half of the nontraditional CI users had sufficient postoperative acoustic hearing in the implanted ear to use electric-acoustic stimulation, and half used a CI alone. Method: Participants heard recorded sentences that were produced either as yes/no questions or as statements by three male and three female talkers. Three raters scored each participant response as either a question or a statement. Bilateral CI users (n = 40, 4-12 years old) and normal-hearing controls (n = 10, 4-12 years old) were tested binaurally in the free field. Nontraditional CI recipients (n = 22, 6-17 years old) were tested with direct audio input to the study ear. Results: For the bilateral CI users, performance was predicted by age but not by 125-Hz acoustic thresholds; just under half (n = 17) of the participants in this group had measurable 125-Hz thresholds in their better ear. For nontraditional CI recipients, better performance was predicted by lower 125-Hz acoustic thresholds in the test ear, and there was no association with participant age. Performance approached that of the normal-hearing controls for some participants in each group. Conclusions: Results suggest that 125-Hz acoustic hearing supports discrimination of yes/no questions and statements in pediatric CI users. Bilateral CI users with little or no acoustic hearing at 125 Hz develop the ability to perform this task, but that ability emerges later than for children with better acoustic hearing. These results underscore the importance of preserving acoustic hearing for pediatric CI users when possible.
- Published
- 2024
- Full Text
18. How Hearing Loss and Cochlear Implantation Affect Verbal Working Memory: Evidence from Adolescents
- Author
- Susan Nittrouer
- Abstract
Purpose: Verbal working memory is poorer for children with hearing loss than for peers with normal hearing (NH), even with cochlear implantation and early intervention. Poor verbal working memory can affect academic performance, especially in higher grades, making this deficit a significant problem. This study examined the stability of verbal working memory across middle childhood, tested working memory in adolescents with NH or cochlear implants (CIs), explored whether signal enhancement can improve verbal working memory, and tested two hypotheses proposed to explain the poor verbal working memory of children with hearing loss: (a) Diminished auditory experience directly affects executive functions, including working memory; (b) degraded auditory inputs inhibit children's abilities to recover the phonological structure needed for encoding verbal material into storage. Design: Fourteen-year-olds served as subjects: 55 with NH; 52 with CIs. Immediate serial recall tasks were used to assess working memory. Stimuli consisted of nonverbal, spatial stimuli and four kinds of verbal, acoustic stimuli: nonrhyming and rhyming words, and nonrhyming words with two kinds of signal enhancement: audiovisual and indexical. Analyses examined (a) stability of verbal working memory across middle childhood, (b) differences in verbal and nonverbal working memory, (c) effects of signal enhancement on recall, (d) phonological processing abilities, and (e) source of the diminished verbal working memory in adolescents with cochlear implants. Results: Verbal working memory remained stable across middle childhood. Adolescents across groups performed similarly for nonverbal stimuli, but those with CIs displayed poorer recall accuracy for verbal stimuli; signal enhancement did not improve recall. Poor phonological sensitivity largely accounted for the group effect. Conclusions: The central executive for working memory is not affected by hearing loss or cochlear implantation. Instead, the phonological deficit faced by adolescents with CIs degrades the representation in storage, and augmenting the signal does not help.
- Published
- 2024
- Full Text
19. No Differences in Auditory Steady-State Responses in Children with Autism Spectrum Disorder and Typically Developing Children
- Author
- Seppo P. Ahlfors, Steven Graham, Hari Bharadwaj, Fahimeh Mamashli, Sheraz Khan, Robert M. Joseph, Ainsley Losh, Stephanie Pawlyszyn, Nicole M. McGuiggan, Mark Vangel, Matti S. Hämäläinen, and Tal Kenet
- Abstract
Auditory steady-state response (ASSR) has been studied as a potential biomarker for abnormal auditory sensory processing in autism spectrum disorder (ASD), with mixed results. Motivated by prior somatosensory findings of group differences in inter-trial coherence (ITC) between ASD and typically developing (TD) individuals at twice the steady-state stimulation frequency, we examined ASSR at 25 and 50 as well as 43 and 86 Hz in response to 25-Hz and 43-Hz auditory stimuli, respectively, using magnetoencephalography. Data were recorded from 22 ASD and 31 TD children, ages 6-17 years. ITC measures showed prominent ASSRs at the stimulation and double frequencies, without significant group differences. These results do not support ASSR as a robust ASD biomarker of abnormal auditory processing in ASD. Furthermore, the previously observed atypical double-frequency somatosensory response in ASD did not generalize to the auditory modality. Thus, the hypothesis about modality-independent abnormal local connectivity in ASD was not supported.
- Published
- 2024
- Full Text
20. Developmental Effects in the 'Vocale Rapide dans le Bruit' Speech-in-Noise Identification Test: Reference Performances of Normal-Hearing Children and Adolescents
- Author
- Lionel Fontan and Jeanne Desreumaux
- Abstract
Purpose: The main objective of this study was to assess the existence of developmental effects on the performance of the Vocale Rapide dans le Bruit (VRB) speech-in-noise (SIN) identification test that was recently developed for the French language and to collect reference scores for children and adolescents. Method: Seventy-two native French speakers, aged 10-20 years, participated in the study. Each participant listened and repeated four lists of eight sentences, each containing three key words to be scored. The sentences were presented in free field at different signal-to-noise ratios (SNRs) using a four-talker babble noise. The SNR yielding 50% of correct repetitions of key words (SNR[subscript 50]) was recorded for each list. Results: A strong relationship between age and SNR[subscript 50] was found, better performance occurring with increasing age (average drop in SNR[subscript 50] per year: 0.34 dB). Large differences (Cohen's d [greater than or equal to] 1.2) were observed between the SNR[subscript 50] achieved by 10- to 13-year-old participants and those of adults. For participants aged 14-15 years, the difference fell just above the 5% level of significance. No effects of hearing thresholds or level of education were observed. Conclusions: The study confirms the existence of developmental effects on SIN identification performance as measured using the VRB test and provides reference data for taking into account these effects during clinical practice. Explanations as to why age effects perdure during adolescence are discussed.
- Published
- 2024
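The SNR[subscript 50] reported above is the signal-to-noise ratio at which 50% of key words are repeated correctly, typically read off a fitted psychometric function. A minimal sketch of that estimation step, assuming NumPy; the logistic form, grid-search ranges, and toy data are our assumptions for illustration, not the VRB test's actual fitting procedure:

```python
import numpy as np

def logistic(snr, midpoint, slope):
    """Psychometric function: probability of a correct key-word
    repetition as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

def estimate_snr50(snrs, prop_correct):
    """Least-squares grid search over midpoint and slope; the fitted
    midpoint is, by construction, the SNR yielding 50% correct (SNR50)."""
    best_err, best_mid = np.inf, None
    for midpoint in np.arange(-15.0, 5.0, 0.1):
        for slope in np.arange(0.1, 2.0, 0.1):
            err = np.sum((logistic(snrs, midpoint, slope) - prop_correct) ** 2)
            if err < best_err:
                best_err, best_mid = err, midpoint
    return best_mid

# Toy data: proportion of key words correct at five SNRs.
snrs = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])
prop = np.array([0.10, 0.25, 0.55, 0.85, 0.95])
print(estimate_snr50(snrs, prop))  # fitted SNR50 in dB
```

A lower (more negative) SNR50 means better performance, which is why the abstract reports performance improving as an average yearly drop in SNR[subscript 50].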
21. Evaluating Speaker-Listener Cognitive Effort in Speech Communication through Brain-to-Brain Synchrony: A Pilot Functional Near-Infrared Spectroscopy Investigation
- Author
-
Geoff D. Green II, Ewa Jacewicz, Hendrik Santosa, Lian J. Arzbecker, and Robert A. Fox
- Abstract
Purpose: We explore a new approach to the study of cognitive effort involved in listening to speech by measuring the brain activity of a listener in relation to the brain activity of a speaker. We hypothesize that the strength of this brain-to-brain synchrony (coupling) reflects the magnitude of cognitive effort involved in verbal communication and includes both listening effort and speaking effort. We investigate whether interbrain synchrony is greater in native-to-native versus native-to-nonnative communication using functional near-infrared spectroscopy (fNIRS). Method: Two speakers participated: a native speaker of American English and a native speaker of Korean who spoke English as a second language. Each speaker was fitted with the fNIRS cap and told short stories. The native English speaker provided the English narratives, and the Korean speaker provided both the nonnative (accented) English and Korean narratives. In separate sessions, fNIRS data were obtained from seven English monolingual participants ages 20-24 years who listened to each speaker's stories. After listening to each story in native and nonnative English, they retold the content, and their transcripts and audio recordings were analyzed for comprehension and discourse fluency, measured by the number of hesitations and the articulation rate. No story retellings were obtained for the narratives in Korean (an incomprehensible language for the English listeners). Utilizing an fNIRS technique termed sequential scanning, we quantified the brain-to-brain synchronization in each speaker-listener dyad. Results: For native-to-native dyads, multiple brain regions associated with various linguistic and executive functions were activated. Coupling was weaker for native-to-nonnative dyads, and only the brain regions associated with higher order cognitive processes and functions were synchronized.
All listeners understood the content of all stories, but they hesitated significantly more when retelling stories told in accented English. The nonnative speaker hesitated significantly more often than the native speaker and had a significantly slower articulation rate. There was no brain-to-brain coupling during listening to Korean, indicating a break in communication when listeners failed to comprehend the speaker. Conclusions: We found that effortful speech processing decreased interbrain synchrony and delayed comprehension processes. The obtained brain-based and behavioral patterns are consistent with our proposal that cognitive effort in verbal communication pertains to both the listener and the speaker and that brain-to-brain synchrony can be an indicator of differences in their cumulative communicative effort.
- Published
- 2024
22. The Effect of Collective Sight-Singing before Melodic Dictation: A Pilot Study
- Author
-
Caroline Caregnato, Ronaldo da Silva, Cristiane Hatsue Vital Otutumi, and Luciano Jeyson Santos da Rocha
- Abstract
Sight-singing and musical dictation are considered complementary activities by different Ear Training pedagogues but, surprisingly, studies conducted with participants working individually have not found benefits of singing for dictation taking. This pilot study aims to observe the effect of collective sight-singing, performed before melodic dictation, on dictation results. We carried out an experimental study involving 54 students from three universities, who were tested in situations emulating Ear Training classes. The experimental group performed a collective sight-singing before the dictation, while the control group remained silent during the activity. Statistical analyses demonstrated that the experimental group performed significantly better on dictation than the control group, in contrast to previous research, which did not observe contributions of sight-singing to dictation taking. We believe that collective sight-singing promotes cooperation between students, leading to better reading performance than individual activities and thus improving dictation results. Although our pilot study involved a small number of participants and future research is needed to expand on it, it points to the potential benefits that collective activities could bring to the often-individualized instruction in Ear Training classes.
- Published
- 2024
23. Techniques and Resources for Teaching and Learning Bird Sounds
- Author
-
Caitlin Beebe and W. Douglas Robinson
- Abstract
The sounds of birds form the outdoor playlist of our lives. Birds appeal to the public, in part because of the wide variety of interesting sounds they make. This popularity has led to a long history of amateur participation in ornithology, which has recently produced rapid increases in freely available online databases with hundreds of thousands of bird sounds recorded by birdwatchers. These databases provide unique opportunities for teachers to guide students through processes to learn to identify bird species by their sounds. The techniques we summarize here include combining the auditory components of recognizing different types of sounds birds make with visual components of reading sonograms, widely available visual representations of sounds.
- Published
- 2024
24. Nasal/Oral Vowel Perception in French-Speaking Children with Cochlear Implants and Children with Typical Hearing
- Author
-
Sophie Fagniart, Véronique Delvaux, Bernard Harmegnies, Anne Huberlant, Kathy Huet, Myriam Piccaluga, Isabelle Watterman, and Brigitte Charlier
- Abstract
Purpose: The present study investigates the perception of vowel nasality in French-speaking children with cochlear implants (CIs; CI group) and children with typical hearing (TH; TH group) aged 4-12 years. By investigating the vocalic nasality feature in French, the study aims to document more broadly the effects of the acoustic limitations of CIs in processing segments characterized by acoustic cues that require optimal spectral resolution. The impact on performance of various factors related to children's characteristics, such as chronological/auditory age, age of implantation, and exposure to cued speech, was studied, and the acoustic characteristics of the stimuli in the perceptual tasks were also investigated. Method: Identification and discrimination tasks involving French nasal and oral vowels were administered to two groups of children: 13 children with CIs (CI group) and 25 children with TH (TH group) divided into three age groups (4-6 years, 7-9 years, and 10-12 years). French nasal vowels were paired with their oral phonological counterparts (phonological pairing) as well as with the closest oral vowels in terms of phonetic proximity (phonetic pairing). Post hoc acoustic analyses of the stimuli were linked to performance in perception. Results: The results indicate an effect of auditory status on performance in the two tasks, with the CI group performing at a lower level than the TH group. However, the scores of the children in the CI group are well above chance level, exceeding 80%. The most common errors in identification were substitutions between nasal vowels and phonetically close oral vowels as well as confusions between the phoneme /u/ and other oral vowels. Phonetic pairs showed lower discrimination performance in the CI group, with great variability in the results.
Age effects were observed only in TH children for nasal vowel identification, whereas in children with CIs, a positive impact of cued speech practice and early implantation was found. Differential links between performance and acoustic characteristics were found within our groups, suggesting that in children with CIs, selective use of certain acoustic features, presumed to be better transmitted by the implant, leads to better perceptual performance. Conclusions: The study's results reveal specific challenges in children with CIs when processing segments characterized by fine spectral resolution cues. However, the CI children in our study appear to effectively compensate for these difficulties by utilizing various acoustic cues assumed to be well transmitted by the implant, such as cues related to the temporal resolution of stimuli.
- Published
- 2024
25. Investigating Perception to Production Transfer in Children with Cochlear Implants: A High Variability Phonetic Training Study
- Author
-
Hao Zhang, Xuequn Dai, Wen Ma, Hongwei Ding, and Yang Zhang
- Abstract
Purpose: This study builds upon an established effective training method to investigate the advantages of high variability phonetic identification training for enhancing lexical tone perception and production in Mandarin-speaking pediatric cochlear implant (CI) recipients, who typically face ongoing challenges in these areas. Method: Thirty-two Mandarin-speaking children with CIs were quasirandomly assigned to the training group (TG) and the control group (CG). The 16 TG participants received five sessions of high variability phonetic training (HVPT) within a period of 3 weeks. The CG participants did not receive the training. Perception and production of Mandarin tones were assessed before (pretest) and immediately after (posttest) the completion of HVPT via a lexical tone recognition task and a picture naming task. Both groups participated in identical pretests and posttests with the same time frame between the two test sessions. Results: The TG showed significant improvement from pretest to posttest in identifying Mandarin tones for both trained and untrained speech stimuli. Moreover, perceptual learning through HVPT significantly facilitated trainees' production of T1 and T2 as rated by a cohort of 10 Mandarin-speaking adults with normal hearing, which was corroborated by acoustic analyses revealing improved fundamental frequency (F0) medians for T1 and T2 production and enlarged F0 movement for T2 production. In contrast, TG children's production of T3 and T4 showed nonsignificant changes across the two test sessions. Meanwhile, the CG did not exhibit significant changes in either perception or production. Conclusions: The results suggest a limited and inconsistent transfer of perceptual learning to lexical tone production in children with CIs, which challenges the notion of a robust transfer and highlights the complexity of the interaction between perceptual training and production outcomes.
Further research on individual differences with a longitudinal design is needed to optimize the training protocol or tailor interventions to better meet the diverse needs of learners.
- Published
- 2024
26. Mandarin-Speaking Amusics' Online Recognition of Tone and Intonation
- Author
-
Lirong Tang, Yangxiaoxue Xu, Shiting Yang, Xiangyun Meng, Boqi Du, Chen Sun, Li Liu, Qi Dong, and Yun Nan
- Abstract
Purpose: Congenital amusia is a neurogenetic disorder of musical pitch processing. Its linguistic consequences have been examined separately for speech intonation and lexical tones. However, in a tonal language such as Chinese, the processing of intonation and lexical tones interacts during online speech perception. Whether and how the musical pitch disorder might affect linguistic pitch processing during online speech perception remains unknown. Method: We investigated this question with intonation (question vs. statement) and lexical tone (rising Tone 2 vs. falling Tone 4) identification tasks using the same set of sentences, comparing behavioral and event-related potential measurements between Mandarin-speaking amusics and matched controls. We specifically focused on the amusics without behavioral lexical tone deficits (the majority, i.e., pure amusics). Results: Results showed that, despite normal performance when tested on lexical tones in isolated words, pure amusics demonstrated inferior recognition relative to controls during sentence tone and intonation identification. Compared to controls, pure amusics had larger N400 amplitudes for question stimuli during the tone task and smaller P600 amplitudes in the intonation task. Conclusion: These data indicate that the musical pitch disorder affects both tone and intonation processing during sentence processing, even for pure amusics, whose lexical tone processing was intact when tested with words.
- Published
- 2024
27. Effects of Deep-Brain Stimulation on Speech: Perceptual and Acoustic Data
- Author
-
Yunjung Kim, Austin Thompson, and Ignatius S. B. Nip
- Abstract
Purpose: This study examined speech changes induced by deep-brain stimulation (DBS) in speakers with Parkinson's disease (PD) using a set of auditory-perceptual and acoustic measures. Method: Speech recordings from nine speakers with PD and DBS were compared between DBS-On and DBS-Off conditions using auditory-perceptual and acoustic analyses. Auditory-perceptual ratings included voice quality, articulation precision, prosody, speech intelligibility, and listening effort, obtained from 44 listeners. Acoustic measures were made for voicing proportion, second formant frequency slope, vowel dispersion, articulation rate, and range of fundamental frequency and intensity. Results: No significant changes were found between DBS-On and DBS-Off for the five perceptual ratings. Four of the six acoustic measures revealed significant differences between the two conditions. While articulation rate and acoustic vowel dispersion increased, voicing proportion and intensity range decreased from the DBS-Off to the DBS-On condition. However, a visual examination of the data indicated that the statistical significance was mostly driven by a small number of participants, while the majority did not show a consistent pattern of such changes. Conclusions: Our data, in general, indicate that no-to-minimal changes in speech production ensued from DBS. The findings are discussed with a focus on the large interspeaker variability in PD in terms of speech characteristics and the potential effects of DBS on speech.
- Published
- 2024
28. Speech Sound Categories Affect Lexical Competition: Implications for Analytic Auditory Training
- Author
-
Kristi Hendrickson, Katlyn Bay, Philip Combiths, Meaghan Foody, and Elizabeth Walker
- Abstract
Objectives: We provide a novel application of psycholinguistic theories and methods to the field of auditory training to provide preliminary data regarding which minimal pair contrasts are more difficult for listeners with typical hearing to distinguish in real time. Design: Using eye-tracking, participants heard a word and selected the corresponding image from a display of four: the target word, two unrelated words, and a word from one of four contrast categories (i.e., voiced-initial [e.g., "peach-beach"], voiced-final [e.g., "back-bag"], manner-initial [e.g., "talk-sock"], and manner-final [e.g., "bat-bass"]). Fixations were monitored to measure how strongly words compete for recognition depending on the contrast type (voicing, manner) and location (word-initial or word-final). Results: Manner contrasts competed more for recognition than voicing contrasts, and contrasts in word-final position were harder to distinguish than those in word-initial position. Conclusion: These results are an important initial step toward creating an evidence-based hierarchy for auditory training for individuals who use cochlear implants.
- Published
- 2024
29. Syllable Position Effects in the Perception of L2 Portuguese /l/ and /[voiced alveolar tap or flap]/ by L1-Mandarin Learners
- Author
-
Chao Zhou and Anabela Rato
- Abstract
This study reports syllable position effects on second language (L2) Portuguese speech perception, revealing that L2 segmental learning may be prone to an influence from the suprasegmental level. The results show that first language (L1) Mandarin learners had diminished performance on the discrimination between the target Portuguese liquids (/l/ and /[voiced alveolar tap or flap]/) and their position-dependent deviant productions, suggesting that the cause of their perceptual confusability differs across syllable positions. Another syllabic position effect was attested in the acquisition order (/l/[subscript onset] > /l/[subscript coda], /[voiced alveolar tap or flap]/[subscript coda] > /[voiced alveolar tap or flap]/[subscript onset]), demonstrating that an L2 sound is not mastered equally in all positions. Furthermore, we also observed that an increase in L2 experience affected only the perceptual identification accuracy of [l], but not of [[voiced alveolar tap or flap]]. This seems to suggest that L2 experience may exert different degrees of impact, depending on the L2 segments. Both theoretical and methodological implications of these results are discussed.
- Published
- 2024
30. The Relationship between Perception and Production of Illusory Vowels in a Second Language
- Author
-
Song Yi Kim and Jeong-Im Han
- Abstract
Korean learners of English are known to repair consonant clusters, which are not allowed in their native language, with an epenthetic vowel [close central unrounded vowel]. The purpose of the present study is to examine whether the perception-production link of such an illusory vowel in a second language (L2) holds only within and not across processing levels, as proposed in a previous study regarding L2 segments. We assessed the perception and production of English onset clusters by Korean learners and native English speakers at the prelexical (AX discrimination and pseudoword read-aloud tasks) and lexical (lexical decision and picture-naming tasks) levels, using the same participants and stimuli across the tasks. Results showed that accuracy in not producing an epenthetic vowel between the two consonants of the onset cluster was not significantly associated with accurate perception of the cluster either within or across processing levels. The results suggest that production and perception accuracy in L2 phonotactics are independent to a certain extent.
- Published
- 2024
31. On the Effects of Task Focus and Processing Level on the Perception-Production Link in Second-Language Speech Learning
- Author
-
Miquel Llompart
- Abstract
This study presents a reanalysis of existing data to investigate whether a relationship between perception and production abilities regarding a challenging second-language (L2) phonological contrast is observable (a) when both modalities must rely on accessing stored lexical representations and (b) when there is an asymmetry in task focus between perception and production. In the original studies, German learners of English were tested on their mastery of the English /[open-mid front unrounded vowel]/-/ae/ contrast in an auditory lexical decision task with phonological substitutions, a word-reading task, and a segmentally focused imitation task. Results showed that accurate nonword rejection in the lexical decision task was predicted by the Euclidean distance between the two vowels in word reading but not in imitation. These results extend previous findings to lexical perception and production, highlight the influence of task focus on the degree of coupling between the two modalities, and may have important implications for pronunciation training methods.
- Published
- 2024
32. Auditory Category Learning in Children with Dyslexia
- Author
-
Casey L. Roark, Vishal Thakkar, Bharath Chandrasekaran, and Tracy M. Centanni
- Abstract
Purpose: Developmental dyslexia is proposed to involve selective procedural memory deficits with intact declarative memory. Recent research in the domain of category learning has demonstrated that adults with dyslexia have selective deficits in Information-Integration (II) category learning, which is proposed to rely on procedural learning mechanisms, but unaffected Rule-Based (RB) category learning, which is proposed to rely on declarative, hypothesis-testing mechanisms. Importantly, learning mechanisms also change across development, with distinct developmental trajectories in both procedural and declarative learning mechanisms. It is unclear how dyslexia in childhood should influence auditory category learning, a critical skill for speech perception and reading development. Method: We examined auditory category learning performance and strategies in 7- to 12-year-old children with dyslexia (n = 25; nine females, 16 males) and typically developing controls (n = 25; 13 females, 12 males). Participants learned nonspeech auditory categories of spectrotemporal ripples that could be optimally learned either with RB selective attention to the temporal modulation dimension or with procedural integration of information across spectral and temporal dimensions. We statistically compared performance using mixed-model analyses of variance and identified strategies using decision-bound computational models. Results: We found that children with dyslexia have an apparent selective RB category learning deficit, rather than the selective II learning deficit observed in prior work in adults with dyslexia. Conclusion: These results suggest that the important skill of auditory category learning is impacted in children with dyslexia and that, throughout development, individuals with dyslexia may develop compensatory strategies that preserve declarative learning while difficulties in procedural learning emerge.
- Published
- 2024
33. Amplitude Modulation Perception and Cortical Evoked Potentials in Children with Listening Difficulties and Their Typically Developing Peers
- Author
-
Lauren Petley, Chelsea Blankenship, Lisa L. Hunter, Hannah J. Stewart, Li Lin, and David R. Moore
- Abstract
Purpose: Amplitude modulations (AMs) are important for speech intelligibility, and deficits in speech intelligibility are a leading source of impairment in childhood listening difficulties (LiD). The present study aimed to explore the relationships between AM perception and speech-in-noise (SiN) comprehension in children and to determine whether deficits in AM processing contribute to childhood LiD. Evoked responses were used to parse the neural origins of AM processing. Method: Forty-one children with LiD and 44 typically developing children, ages 8-16 years, participated in the study. Behavioral AM depth thresholds were measured at 4 and 40 Hz. SiN tasks included the Listening in Spatialized Noise-Sentences Test (LiSN-S) and a coordinate response measure (CRM)-based task. Evoked responses were obtained during an AM change detection task using alternations between 4 and 40 Hz, including the N1 of the acoustic change complex, auditory steady-state response (ASSR), P300, and a late positive response (late potential [LP]). Maturational effects were explored via age correlations. Results: Age correlated with 4-Hz AM thresholds, CRM separated talker scores, and N1 amplitude. Age-normed LiSN-S scores obtained without spatial or talker cues correlated with age-corrected 4-Hz AM thresholds and area under the LP curve. CRM separated talker scores correlated with AM thresholds and area under the LP curve. Most behavioral measures of AM perception correlated with the signal-to-noise ratio and phase coherence of the 40-Hz ASSR. AM change response time also correlated with area under the LP curve. Children with LiD exhibited deficits with respect to 4-Hz thresholds, AM change accuracy, and area under the LP curve. Conclusions: The observed relationships between AM perception and SiN performance extend the evidence that modulation perception is important for understanding SiN in childhood.
In line with this finding, children with LiD demonstrated poorer performance on some measures of AM perception, but their evoked responses implicated a primarily cognitive deficit.
- Published
- 2024
34. Dalcroze Method and Rhythm in Music Education in Turkey
- Author
-
Apaydin, Özkan
- Abstract
The Swiss composer, academic, and music educator Émile Jaques-Dalcroze brought a new perspective to education through distinctive methods, especially for developing children's sense of rhythm and improvisation skills, known in the related literature as the Dalcroze method. In this study, the role and functional dimensions of the Dalcroze method and of rhythm, which is envisaged in music lesson curricula and accepted as the skeleton of music, were investigated in Turkey. For this purpose, the survey (scanning) method was used and both national and international sources were examined. The study also addresses the basic principles of the Dalcroze method and the processes through which it was formed and disseminated. The results reveal that the philosophy and main principles of the Dalcroze method, implemented since the 1920s, amount to an approach that puts the student at the center. The method gives children the chance to learn by experience rather than through an oppressive, compelling, or purely talent-based approach to music education, and it supports social development, self-confidence, and creativity alongside musical development. In addition, it was found that in Turkey, with the transition to constructivist education since 2005, there has been an increase in research on and applications of student-centered educational approaches; however, the Dalcroze method is not yet sufficiently widespread.
- Published
- 2023
35. Musical Hearing and the Acquisition of Foreign-Language Intonation
- Author
-
Jekiel, Mateusz and Malarski, Kamil
- Abstract
The present study seeks to determine whether superior musical hearing is correlated with successful production of second language (L2) intonation patterns. Fifty Polish speakers of English at the university level were recorded before and after an extensive two-semester accent training course in English. Participants were asked to read aloud a series of short dialogues containing different intonation patterns, to complete two musical hearing tests measuring tone deafness and melody discrimination, and to fill out a survey regarding musical experience. We visually analyzed and assessed participants' intonation by comparing their F[subscript 0] contours with the model provided by their accent training teachers following ToBI (Tones and Break Indices) guidelines and compared the results with the musical hearing test scores and the survey responses. The results suggest that more accurate pitch perception can be related to more correct production of L2 intonation patterns, as participants with a superior musical ear produced more native-like speech contours after training, similar to those of their teachers. After dividing participants into four categories based on their musical hearing test scores and musical experience, we also observed that some students with better musical hearing test scores were able to produce more correct L2 intonation patterns. However, students with poor musical hearing test scores and no musical background also improved, suggesting that the acquisition of L2 intonation in a formal classroom setting can be successful regardless of one's musical hearing skills.
- Published
- 2023
36. Opportunity to Provide Augmented Reality Media for the Intervention of Communication, Perception, Sound, and Rhythm for Deaf Learners Based on Cultural Context
- Author
-
Subagya, Arsy Anggrellanggi, Erma Kumala Sari, and Priyono
- Abstract
The development of communication, perception, sound, and rhythm (DCPSR) is a learning subject that provides stimulation and intervention for the appreciation of sound, whether produced intentionally or unintentionally, so that the residual hearing and the sense of vibration possessed by students with hearing impairment can be used as well as possible. This study aims to describe the current conditions for the implementation of DCPSR in special schools and the need for Augmented Reality (AR)-based DCPSR media. Data were collected by distributing questionnaires through a Google form to an accidental sample of 131 special education teachers in Indonesia and through a focus group discussion (FGD). Instrument validity was established through content validity, and reliability through interrater reliability. The data were analyzed using the descriptive-qualitative method. The results showed that 18.32% of teachers did not understand the concept of DCPSR, while 72.51% of teachers thought that DCPSR needed to be taught. Schools still use conventional media (54.2%), and 99.24% of teachers feel the need for innovative DCPSR media in the form of AR-based media, especially for communication (oral motor) material (34.45%) and for sound and rhythm perception (detection) material (46.56%). The analysis shows that teachers of deaf students need AR-based DCPSR media that are easy to operate and access, such as smartphones with additional facilities for audio, images, captions, and cues. The FGD results showed that development of these AR-based media should prioritize sound discrimination material on the essential sounds around students.
- Published
- 2023
37. Iranian EFL Teachers' Oral/Aural Skills Language Assessment Literacy: Instrument Development and Validation
- Author
-
Kobra Tavassoli and Zahra Sorat
- Abstract
Despite widespread studies on language assessment literacy (LAL), there are still many unexplored areas of LAL (Gan & Lam, 2022). One of these is identifying the various aspects of LAL regarding different language skills and scrutinizing English as a foreign language (EFL) teachers' engagement with these aspects. Accordingly, this study attempted to (a) explore Iranian EFL teachers' perceptions, preferences, and difficulties regarding oral/aural skills LAL and (b) develop a scale to measure these teachers' oral/aural skills LAL. The study was carried out in two phases. First, semi-structured interviews were conducted with 10 Iranian EFL teachers to identify their perceptions, preferences, and difficulties regarding oral/aural skills LAL. Second, the researchers developed a questionnaire based on a review of the literature on assessing oral/aural skills and the results of the interviews. The questionnaire was reviewed by experts, revised accordingly, and administered to 150 Iranian EFL teachers selected through convenience sampling. The reliability of the questionnaire and its construct validity were then checked. The results of the two phases of the study were compatible. The outcomes showed that almost all teachers expressed dissatisfaction with their oral/aural skills LAL and were eager to participate in assessment training courses. Furthermore, it was found that, due to their lack of knowledge about oral/aural skills assessment, traditional assessment techniques were widely used by Iranian EFL teachers.
- Published
- 2023
38. The Power of the Voice in Facilitating and Maintaining Online Presence in the Era of Zoom and Teams
- Author
-
Cribb, Michael
- Abstract
With the lockdowns of the COVID-19 pandemic and the increasing popularity of videoconferencing software such as Zoom, the move to online and/or hybrid teaching has never been more rapid. With this change, however, maintaining presence in the classroom has become a great challenge simply because of the nature of online teaching. Presence is a teaching quality that enables the teacher to "own the room" and create an atmosphere of focus and inspiration. With the loss of face-to-face contact and the diminution of body language that online teaching entails, the teacher has to rely more and more on their voice to hold presence in the class. While the voice has always been an important tool in the teacher's expressive armoury, it takes on a more central role in online teaching and can be the only element that connects teachers to students. Yet many teachers still front classes where voice audio quality is severely restricted, due in part to poor microphone choice and setup on their part. In this article I discuss the notion of presence in online classrooms with regard to voice and show how teachers can maintain and manipulate this feature in order to retain appeal for students.
- Published
- 2023
39. Learning L2 Pronunciation with Google Translate
- Author
-
Khademi, Hamidreza and Cardoso, Walcir
- Abstract
This article, based on Khademi's (2021) Master's thesis, examines the use of Google Translate (GT) and its speech capabilities, Text-to-Speech Synthesis (TTS) and Automatic Speech Recognition (ASR), in helping L2 learners acquire the pronunciation of English past -ed allomorphy (/t/, /d/, /id/) in a semi-autonomous context, considering three levels of pronunciation development: phonological awareness, perception, and production. Our pre/posttest results indicate significant improvements in the participants' awareness and perception of the English past -ed, but no improvements in production (except for /id/). These findings corroborate our hypothesis that GT's speech capabilities can be used as pedagogical tools to help learners acquire the target pronunciation feature. [For the complete volume, "Intelligent CALL, Granular Systems and Learner Data: Short Papers from EUROCALL 2022 (30th, Reykjavik, Iceland, August 17-19, 2022)," see ED624779.]
- Published
- 2022
40. Using an Online High-Variability Phonetic Training Program to Develop L2 Learners' Perception of English Fricatives
- Author
-
Iino, Atsushi and Wistner, Brian
- Abstract
This study investigated the degree to which Japanese learners of English accurately perceive English fricatives over time and the extent to which fricatives were misidentified. To train and measure perception skills, an online high-variability phonetic training program was used in an English as a Foreign Language (EFL) class in Japan for eight weeks. The results indicated that learners' perception of some of the fricatives improved over time, while others remained difficult to distinguish from other fricatives. Implications for EFL pronunciation instruction are considered. [For the complete volume, "Intelligent CALL, Granular Systems and Learner Data: Short Papers from EUROCALL 2022 (30th, Reykjavik, Iceland, August 17-19, 2022)," see ED624779.]
- Published
- 2022
41. Accent Difference Makes No Difference to Phoneme Acquisition
- Author
-
Jones, Marc and Blume, Carolyn
- Abstract
ELT materials tend to use prestige variety speakers as models, an underlying assumption being that this is needed in order to acquire the phonology necessary to parse English speech (Rose & Galloway, 2019). Global Englishes Language Teaching (GELT) (Galloway & Rose, 2018) offers the potential for movement away from such 'native speaker' ideologies, but lacks empirical evidence. In this study, the use of GELT input was investigated in comparison with prestige varieties of English. Sixteen first-year L1 Japanese university students in an English Medium Instruction programme participated in a self-paced listening study via a learning management system (LMS). All participants were tested on their perception of the English vowels /æ/, /ʌ/, /ɜ/, and /ɔ/. After this pretest, they were separated into two groups: using edited TED talks, the experimental group (G) (N=8) watched videos of Global English varieties, and the control group (P) (N=8) watched videos of prestige English varieties. Both groups showed losses, i.e., immediate posttest scores were mainly lower than pretest scores on vowel identification. Scores were predicted by the variation in the interval between lessons and posttest, but not by the varieties of English used. This supports the view that GELT is as valid a language teaching approach as using prestige varieties.
- Published
- 2022
42. Auto-Scoring of Student Speech: Proprietary vs. Open-Source Solutions
- Author
-
Daniels, Paul
- Abstract
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's speech recognition engine to transcribe speech into text which is then automatically scored using a phoneme-based algorithm. "SAM" is designed as a custom quiz type for "Moodle," a widely adopted open-source course management system. The second auto-scoring system, "EnglishCentral," is a popular proprietary language learning solution which utilizes a trained intelligibility model to automatically score speech. Results of this study indicated a positive correlation between the speaking scores generated by both systems, meaning students who scored higher on the "SAM" speaking tasks also tended to score higher on the "EnglishCentral" speaking tasks and vice versa. In addition to comparing the scores generated from these two systems against each other, students' computer-scored speaking scores were compared to human-generated scores from small-group face-to-face speaking tasks. The results indicated that students who received higher scores with the online computer-graded speaking tasks tended to score higher on the human-graded small-group speaking tasks and vice versa.
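The score comparisons described above come down to correlating paired score vectors. As a rough illustration only (the score values and function below are invented for demonstration, not data or code from the study), a Pearson correlation between machine-scored and human-scored speaking results can be computed like this:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-student speaking scores (0-100); not data from the study.
auto_scores = [72, 65, 88, 54, 91, 60, 78, 83]     # computer-graded
human_scores = [70, 60, 85, 58, 95, 55, 80, 80]    # human-graded

r = pearson(auto_scores, human_scores)
```

A value of r near +1 would reflect the pattern reported here: students scoring higher on one measure tend to score higher on the other.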
- Published
- 2022
43. Recognition of Emotional Prosody in Mandarin-Speaking Children: Effects of Age, Noise, and Working Memory
- Author
-
Chen Kuang, Xiaoxiang Chen, and Fei Chen
- Abstract
Age, babble noise, and working memory have been found to affect the recognition of emotional prosody in non-tonal languages, yet little is known about exactly how they influence tone-language-speaking children's recognition of emotional prosody. Drawing on the tectonic theory of Stroop effects and the Ease of Language Understanding (ELU) model, this study aimed to explore the effects of age, babble noise, and working memory on Mandarin-speaking children's understanding of emotional prosody. Sixty Mandarin-speaking children aged three to eight years and 20 Mandarin-speaking adults participated in this study. They were asked to recognize the happy or sad prosody of short sentences with different semantics (negative, neutral, or positive) produced by a male speaker. The results revealed that prosody-semantics congruity played a bigger role in children than in adults for accurate recognition of emotional prosody in quiet, but a less important role in children than in adults in noise. Furthermore, the effect of working memory on children's recognition accuracy was trivial regardless of listening condition, whereas for adults it was very prominent in babble noise. The findings partially supported the tectonic theory of Stroop effects, which highlights the perceptual enhancement generated by cross-channel congruity, and the ELU model, which underlines the importance of working memory in speech processing in noise. These results suggest that the development of emotional prosody recognition is a complex process influenced by the interplay among age, background noise, and working memory.
- Published
- 2024
44. Development and Initial Validation of the 'Musical Discrimination and Styles Task': Measuring Children and Adolescent Music Aptitude and Achievement
- Author
-
Dawn R. Mitchell White
- Abstract
This dissertation had two purposes: (1) to create and document the development of a new music achievement test entitled the Musical Discrimination and Styles Task (MDAST), and (2) to describe the strength of the evidence supporting the validity and reliability of this new developmentally appropriate music aptitude and achievement instrument. I created a theoretical framework based on (1) Jean Piaget's cognitive theory of child development, (2) David Hargreaves's phase model of artistic development, and (3) Edwin Gordon's theoretical models of music discrimination, audiation, and music achievement. The MDAST was designed to assess students' abilities to evaluate pitch and rhythmic discriminations and to compare musical contours (all three commonly used in musical assessment), as well as composers and styles (a new addition, but one based on empirical evidence). The items were developed with the assistance of an expert panel. Following pilot testing of the created pool of items, the MDAST was reduced to 15 items organized into five subtests. Items were scored 0 (incorrect) or 1 (correct). The following research questions guided this research: (1) What is the strength of the evidence supporting the validity of the Musical Discrimination and Styles Task (MDAST)? (a) Content validity evidence as provided by a panel of experts? (b) Internal structure validity evidence as provided by exploratory factor analysis? (c) Relations to other variables as provided by examining the relationships between grade level and the subtests? (2) What is the strength of the evidence supporting the reliability of the MDAST? (a) Internal consistency reliability as provided by the Kuder-Richardson Formula 20 (Cronbach's alpha)? Three hundred sixty-two (n=362) students from a community charter school in the southeastern United States took the 15-item test in Qualtrics from September 13, 2022, to October 13, 2022.
Confirmatory factor analysis tested the five-factor measurement model for the MDAST. First, the researcher calculated descriptive statistics: item difficulties by total group and by grade level [Table 4] and a subtest correlation matrix [Table 5]. Content validity was established through peer expert reviewers who took the test, with successful items defined at 80% agreement [Table 6]. The researcher then used confirmatory factor analysis to evaluate the five-factor model underlying the MDAST for internal structure validity, and assessed reliability using the Kuder-Richardson Formula 20 on both the total test and the subtests [Table 7]. The overall test reliability for all participants was α = 0.681. Subtest reliabilities were as follows: (1) Pitch α = 0.449, (2) Rhythm α = 0.398, (3) Contours α = 0.118, (4) Composers α = 0.346, and (5) Styles α = 0.056. Implications for future research, as well as for current practice, are discussed. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
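For reference, the Kuder-Richardson Formula 20 used above is KR-20 = (k/(k-1))·(1 − Σ p_i q_i / σ²), where k is the number of items, p_i the proportion of examinees answering item i correctly, q_i = 1 − p_i, and σ² the variance of examinees' total scores. A minimal sketch with invented 0/1 response data (not the MDAST data):

```python
def kr20(scores):
    """Kuder-Richardson Formula 20 for dichotomously scored (0/1) items.

    scores: list of rows, one per examinee; each row holds 0/1 item scores.
    """
    n = len(scores)            # number of examinees
    k = len(scores[0])         # number of items
    # Proportion correct (p) per item; q = 1 - p.
    p = [sum(row[i] for row in scores) / n for i in range(k)]
    pq_sum = sum(pi * (1 - pi) for pi in p)
    # Sample variance of examinees' total scores.
    totals = [sum(row) for row in scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / (n - 1)
    return (k / (k - 1)) * (1 - pq_sum / var_t)

# Invented responses: 4 examinees x 3 items (illustration only, not MDAST data).
responses = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
reliability = kr20(responses)
```

Values near 1 indicate high internal consistency; the low subtest values reported above (e.g. Styles α = 0.056) indicate that those items hang together poorly.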
- Published
- 2024
45. Perception of Regional Spoken Arabic by Native Speakers
- Author
-
Amal A. Alotaibi
- Abstract
This dissertation examines native speakers' word recognition of, differentiation between, and social attitudes toward varieties of Arabic. Arabic is a particularly interesting test case because of its unique regional variation, and the available literature lacks data on how Arabic speakers perceive different accents, particularly in connection with speakers' sociological and regional backgrounds. The main purpose of this study is therefore to discover how native speakers perceive various Arabic speech varieties and to test the accent familiarity effect, determining the effects of dialectal variation and language experience on speech perception: specifically, whether the availability of information about the speaker's accent in the speech signal influences the recognition of spoken words and sentences. To do this, I examine how two groups of native Arabic speakers, Najdi Arabic (NA) and Saudi Southern Arabic (SA), perceive and adapt to three different regional accents: NA and SA ('own' or 'nearby' accents) and Egyptian Arabic (EA) (a 'distant' accent). I conducted three perception studies to explore NA and SA speakers' processing of regional Arabic varieties. In the first experiment, I examined participants' ability to recognize speech stimuli in their 'own', a 'nearby', or a 'distant' variety. NA and SA participants were asked to make a lexical decision ('word' or 'nonword') on target items placed at the end of sentences spoken by an NA speaker, an SA speaker, and an EA speaker. Results show that participants were good at distinguishing 'words' from 'nonwords', with an accuracy level of 93.3%. Moreover, 'nonword' trials had slightly slower reaction times than 'word' trials, especially for the 'distant' accent, which is less familiar to them. Similarly, SA participants' performance in 'nonword' trials showed slower reaction times compared with that of NA participants.
This demonstrates how regional accents can affect word recognition and that responding to a 'distant' variety requires more time and effort from listeners. In the second experiment, I examined participants' ability to distinguish between the different regional accents. Another set of NA and SA participants performed a discrimination task in which they were asked to determine whether two different talkers were from the 'same' region or 'different' regions. Results from this study show that all participants had relatively similar reaction times. In terms of trial types, responses in 'different' trials had faster reaction times, particularly those involving the 'distant' dialect (where EA was one of the two audio samples). In the third experiment, I examined participants' attitudes, social representations, and social judgments toward the same regional accents: NA, SA, and EA. A new group of NA and SA participants was asked to rate nine audio samples, spoken by three NA speakers, three SA speakers, and three EA speakers, on social and personal traits, including accentedness, on a 6-point rating scale. Results from this social judgment task reveal that participants from both groups were lenient with speakers of their 'own' variety, especially in accentedness ratings. The statistical analyses also reveal significant main effects of participant accent and talker accent across multiple characteristics. Taken together, the findings of these three studies shed light on the effects of familiarity with one's 'own' Arabic variety (familiar accent), a 'nearby' variety (less familiar accent), and a 'distant' variety (unfamiliar accent) on accent perception and recognition.
In particular, the present research provides a better understanding of how native Arabic speakers handle the linguistic variation they encounter in speech in their daily lives: how they recognize regionally accented words and nonwords, discriminate between regional accents, and express their own social views and accent ratings toward accents that are 'own', 'nearby', or 'distant' to them. It thereby contributes to our comprehension of how accent perception works in general and of how listeners evaluate the sociological background of regionally accented talkers. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2024
46. Audiovisual Spoken Word Processing in Typical-Hearing and Cochlear Implant-Using Children: An ERP Investigation
- Author
-
Elizabeth Pierotti
- Abstract
The process of spoken word recognition is influenced by both bottom-up sensory information and top-down cognitive information. These cues are used to process the phonological and semantic representations of speech. Several studies have used EEG/ERPs to study the neural mechanisms of children's spoken word recognition, but less is known about the role of visual speech information (facial and lip cues) in this process. It is also unclear whether populations with different early sensory experiences (e.g. deaf children who receive cochlear implants; CIs) show the same pattern of neural responses during audiovisual (AV) spoken word recognition. Here we investigate ERP components corresponding to typical-hearing (TH) and CI-using school-age children's sensory, phonological, and semantic neural responses during a picture-audiovisual word matching task. Children (TH n = 22; CI n = 13; ages 8-13 years) were asked to match picture primes and AV video targets of speakers naming the pictures. ERPs were time-locked to the onset of the target's meaningful visual and auditory speech information. The results suggest that while CI and TH children may not differ in their sensory (visual P1, auditory N1) or semantic (N400, late N400) responses, there may be differences in the intermediary components associated with either phonological or strategic processing. Specifically, we find an N280 response in the CI group and a P300 component in the TH group. Subjects' ERPs are correlated with their age, hearing experience, task performance, and language measures. We interpret these findings in light of the unique strategies that may be employed by these two groups of children based on their utilization of different speech cues or task-level predictions.
These findings better inform our understanding of the neural bases of AV speech processing in children, specifically where differences may emerge between groups of children with differential sensory experiences; the results have implications for improving spoken language access for children with cochlear implants. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2024
47. A Computer-Assisted Consecutive Interpreting Workflow: Training and Evaluation
- Author
-
Sijia Chen and Jan-Louis Kruger
- Abstract
Following a preliminary study that examined the potential effectiveness of a computer-assisted consecutive interpreting (CACI) mode, this paper presents a further trial of the CACI workflow. The workflow involves respeaking using speech recognition (SR) in phase I and production assisted by the SR text and its machine translation (MT) output in phase II. This study introduces a training framework for CACI that encompasses respeaking, sight translation, and post-editing. Additionally, it seeks to evaluate the CACI workflow with a group of trained students. Comparative analyses were conducted between conventional consecutive interpreting (CI) and CACI. Most of the findings from the preliminary study were successfully replicated in this study. The investigation revealed that CACI outperformed conventional CI in overall interpreting quality and accuracy in both directions of interpreting. Moreover, CACI exhibited higher fluency and lower cognitive load compared to conventional CI, albeit only in the L1-L2 direction. The quality of respeaking was found to be positively correlated with interpreting quality in both directions, underscoring the critical role of respeaking within the CACI workflow.
- Published
- 2024
48. Understanding the Opportunities for Introducing Multimodal Tactile Graphics in Classrooms
- Author
-
Hrishikesh V. Rao
- Abstract
Tactile graphics convert visual information into touchable patterns and braille, providing an essential solution for accessible graphics for individuals with visual impairments. Pioneering research advocates for the potential of audio-augmented tactile graphics, which convey complex information through auditory and tactile modalities rather than touch alone. Despite recognizing their advantages, there is a significant lack of knowledge on how educators can effectively produce and implement these tools to achieve specific educational goals. Addressing this gap is crucial for the widespread adoption of audio-tactile graphics in education. This thesis presents three studies that investigate different aspects of this gap, focusing on the integration of audio-tactile graphics within educational contexts that traditionally rely solely on tactile graphics: Study 1 evaluates workplace factors that impact transcription workflows at educational institutions and transcription agencies, including tool selection for creating tactile graphics. It uncovers the socio-technical dynamics that influence the transcription process, such as the interaction between transcription teams and external stakeholders like classroom teachers and students, as well as internal organizational structures. The study reveals that while common tools are used across settings, distinct workflows and structural differences greatly affect the transcription approach. Insights from this study are crucial for developing tactile graphics and authoring tools that cater to the specific needs of different transcription contexts. Study 2 narrows the focus to the educational setting, investigating how educators can translate visual images into audio-tactile graphics using their existing resources. A co-design workshop engages educators in simulating an improvised transcription process, resulting in the creation of T3 graphics. 
These graphics support educational goals such as facilitating tactile exploration and promoting independent reading in novice learners. Discussions highlight how the adoption of T3 graphics could significantly redefine job roles, extend the transcription process, require new training, and introduce complementary teaching methods. These findings indicate the potential of audio-tactile graphics to enhance educational outcomes and outline key areas for their practical implementation in schools. Study 3 assesses the real-world application of the T3 tablet in educational settings, examining the pedagogical strategies that educators employ with audio-tactile graphics in the classroom. The study observes that the proposed workflow for T3 graphics integrates seamlessly with existing methods of creating traditional tactile graphics. Over a period of six weeks, educators demonstrate how the T3 supports diverse educational tasks (such as exams, classroom instruction, and homework), supporting both traditional and novel teaching methodologies. Notably, the T3 proves to be an effective tool for promoting independent reading among beginners and for creating intricate, interactive tactile graphics. These findings suggest that audio-tactile devices like the T3 could play a crucial role in educational strategies, transforming teaching if educators had access to audio-tactile graphics as part of their instructional toolkit. The thesis makes the following core contributions: (1) A workflow for creating audio-supported tactile graphics that aligns with current transcription practices for traditional tactile graphics in schools. (2) Design recommendations for audio-tactile graphics to help visually impaired students navigate information with greater independence and proficiency.
(3) Innovative teaching methodologies derived from educators' experiences with audio-tactile tools in the classroom, and orientation & mobility (O&M) exercises that meet the educational outcomes set by teachers of visually impaired students. These contributions collectively enhance our understanding of the role of audio-tactile graphics in educational settings and provide a foundation for their further development and long-term adoption in educational contexts. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2024
49. Inferring Dynamics of Sociolinguistic Variation in Speech Perception
- Author
-
Aini Li
- Abstract
This dissertation examines whether and how psycholinguistic priming and social knowledge are integrated in the identification of sociolinguistic variants. Using the English variable (ING), the alternation between -ing and -in' (e.g. thinking vs. thinkin'), as a testing ground, this dissertation probes whether and how individuals utilize constraints of different types when they perceive variation in real time. Through six perception experiments, I combine existing experimental paradigms in a novel way to probe how listeners make inferences about the identity of sociolinguistic variants under circumstances of uncertainty. Listeners hear synthesized stimuli in which there is ambiguity between the two sociolinguistic variants, -ing and -in', and are placed in situations that require them to resolve this ambiguity through categorization. In Chapter 3, I demonstrate the effectiveness of the methods I use to introduce uncertainty at the word level and the sentence level. I show in Chapter 4 that phonological variant identification in perception is subject to psycholinguistic priming. All else being equal, hearing a clear -ing makes listeners more likely to choose -ing again, given an ambiguous target for categorization. This phonological variant priming effect, however, decays rapidly over time, after only one monosyllabic word, suggesting that phonological variant priming is activation-based. In Chapter 5, I further investigate whether phonological variant priming is sensitive to social expectations. Results show that psycholinguistic priming and talker accent both come into play when listeners categorize ambiguous variants, and crucially, they interact by way of prime variant relative frequency, suggesting that social and linguistic unexpectedness jointly modulate priming. In Chapter 6, I establish that listeners possess and make use of dynamic social factors such as stylistic covariation during phonological variant identification.
Additionally, target whole-word frequency can be revealing of the perceptual consequences of different types of s-conditioning. Finally, Chapter 7 discusses the implications of these empirical results in the context of the typology of conditioning on variation in individuals. Overall, this dissertation establishes that psycholinguistic, social, and linguistic factors all play a role when listeners perceive variation. However, different factors and processes are not integrated in the same way, suggesting that individuals have sophisticated knowledge of how variation is conditioned but that how this knowledge is used is context-dependent. By combining the framework of variationist sociolinguistics with the methods and theories of psycholinguistics, the results of my dissertation shed light on how sociolinguistic variation is processed in real-time language use. This ultimately has the potential to develop a better understanding of the structure and systematicity of language at the community level. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2024
50. Perceptual Adaptation to Foreign Accents by Second Language Learners
- Author
-
Hitoshi Nishizawa
- Abstract
Many studies evidence the flexibility of speech perception in the first language (L1), which allows rapid adaptation to unfamiliar foreign accents. Influential studies by Bradlow and Bent (2008) and a follow-up study by Baese-Berk et al. (2013) found that increased variability, as a function of the number of talkers and accents, facilitated the generalization of adaptation across talkers and accents by L1 listeners. However, very few studies have examined second language (L2) learners as listeners (Baese-Berk, 2018), so little is known about perceptual flexibility in L2. Critically, no study has directly examined the effect of increased variability on adaptation to foreign accents by L2 listeners. My goal with this study is to address this research gap by closely following these studies with L2 listeners: I examine whether variability facilitates adaptation to unfamiliar foreign accents by L2 listeners. To do this, I recruited 280 Japanese learners of English for a two-day experiment that consisted of a pre-test, treatment, and post-test. For the pre-test and post-test, I used a Mandarin talker and a Vietnamese talker. The participants were randomly assigned to seven groups that received different treatments: an identical-talker group, single-medium group, single-high group, single-low group, multi-talker group, multi-accent group, and control group. The identical-talker group had the same Mandarin talker as the tests. The rest of the groups had a different talker from the tests. The single-medium group featured one Mandarin talker with a level of intelligibility similar to the test talkers, while the single-high group had a high-intelligibility Mandarin talker, and the single-low group a low-intelligibility Mandarin talker. The multi-talker group had five Mandarin talkers. The multi-accent group had five L2 accents that included the Mandarin accent but not the Vietnamese accent.
The talkers in the multi-talker and multi-accent groups featured intelligibility levels similar to the single-medium talker. The control group had five American L1 talkers. In the pre-test, treatment, and post-test, the L2 listeners transcribed short sentences. Accuracy of word recognition was used as a measure of adaptation. The results showed no statistically significant improvements in any of the groups. Numerically, however, the control group and the single-high group improved more than the others for the Mandarin talker, while the single-medium and multi-talker groups improved more than the others for the Vietnamese talker. The small improvements suggest possible differences between L1 and L2 in the mechanism of adaptation to foreign accents, which may be modulated by listeners' proficiency. I discuss how speech perception theories and hypotheses developed in studies of L1 listeners, as well as L2 phoneme acquisition theories, may or may not explain the results. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 1-800-521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
- Published
- 2024