11,949 results for "auditory system"
Search Results
2. Neurodegeneration after repeated noise trauma in the mouse lower auditory pathway
- Author
-
Gröschel, Moritz, Manchev, Tanyo, Fröhlich, Felix, Jansen, Sebastian, Ernst, Arne, and Basta, Dietmar
- Published
- 2024
- Full Text
- View/download PDF
3. Ptf1a expression is necessary for correct targeting of spiral ganglion neurons within the cochlear nuclei
- Author
-
Elliott, Karen L., Iskusnykh, Igor Y., Chizhikov, Victor V., and Fritzsch, Bernd
- Published
- 2023
- Full Text
- View/download PDF
4. Hearing impairment amongst people with Osteogenesis Imperfecta in Germany.
- Author
-
Felicio-Briegel, A., Müller, J., Pollotzek, M., Neuling, M., Polterauer, D., Gantner, S., Simon, J., Briegel, I., and Simon, F.
- Subjects
- *
OSTEOGENESIS imperfecta , *HEARING disorders , *AUDITORY pathways , *AUDIOMETRY , *GERMANS - Abstract
Introduction: Hearing impairment affects a relevant percentage of individuals with Osteogenesis Imperfecta (OI). In the current literature, the reported percentage of affected individuals varies widely: 32% to 58% of patients with mild OI and 21% to 27% of patients with moderate to severe OI. Little is known about the German population with OI. Method: The goal of this study was to determine how many patients with OI who attended the annual meeting of the German Association for Osteogenesis Imperfecta in 2023 had a hearing impairment. In this prospective, cross-sectional study, each included individual underwent ear microscopy, audiometry, stapedius reflex testing, tympanometry, and OAE measurement. Furthermore, each patient was asked a set of questions concerning their medical history. Results: Of the included patients, 33% had hearing impairment. A significant difference was found for the mean air–bone gap (ABG) as well as for the hearing threshold of the right ears: between OI types III and IV (p = 0.0127) for the mean ABG, and between OI types I and IV (p = 0.0138) as well as III and IV (p = 0.0281) for the hearing threshold. Spearman's rank correlation showed a high correlation between age and hearing threshold. Of the patients between 40 and 50 years old, 56% had hearing loss. Conclusion: Hearing loss in individuals with OI remains a relevant problem, especially age-related loss in OI type I. Audiometry should be performed at least when individuals experience subjective hearing loss. The implementation of screening starting at age 40 should be discussed and studied. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
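The air–bone gap and the age–threshold correlation reported in the abstract above are standard audiometric quantities; a minimal sketch in Python, using made-up numbers rather than the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Air-bone gap (ABG): air-conduction threshold minus bone-conduction
# threshold, in dB HL. A large ABG indicates a conductive component.
air_db = np.array([35, 40, 50, 55])   # hypothetical air-conduction thresholds
bone_db = np.array([10, 15, 20, 20])  # hypothetical bone-conduction thresholds
abg = air_db - bone_db                # per-frequency air-bone gaps

# Spearman's rank correlation between age and hearing threshold,
# as in the study (values here are invented for illustration).
age = np.array([25, 34, 42, 48, 55, 63])
threshold_db = np.array([10, 12, 20, 28, 35, 45])
rho, p = spearmanr(age, threshold_db)

print(list(abg))
print(round(rho, 2))  # 1.0 here, since the toy data are strictly monotonic
```

Spearman's rho is rank-based, so it captures the monotonic age-related worsening the authors report without assuming a linear dose-response.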
5. Optimization of the Operant Silent Gap-in-Noise Detection Paradigm in Humans.
- Author
-
Negri, Louis, Oliver, Patrick, Mitchell, Rebecca, Sinha, Lavanya, Kearney, Jacob, Saad, Dominic, Nodal, Fernando R., and Bajo, Victoria M.
- Subjects
- *
ACOUSTIC reflex , *OPERANT behavior , *STARTLE reaction , *NEURAL inhibition , *YOUNG adults - Abstract
Background: In the auditory domain, temporal resolution is the ability to respond to rapid changes in the envelope of a sound over time. Silent gap-in-noise detection tests assess temporal resolution. Whether temporal resolution is impaired in tinnitus, and whether those tests are useful for identifying the condition, is still debated. We revisited these questions by assessing the silent gap-in-noise detection performance of human participants. Methods: Participants were seventy-one young adults with normal hearing, separated into preliminary, tinnitus, and matched-control groups. The preliminary group (n = 18) was used to optimize the silent gap-in-noise detection two-alternative forced-choice paradigm by examining the effects of the position and the salience of the gap. Temporal resolution was then tested in a case-control observational study of tinnitus (n = 20) and matched-control (n = 33) groups using the optimized silent gap-in-noise behavioral paradigm. These two groups were also tested using silent gap prepulse inhibition of the auditory startle reflex (GPIAS) and auditory brainstem responses (ABRs). Results: In the preliminary group, reducing the predictability and salience of the silent gap increased detection thresholds and reduced gap detection sensitivity (the slope of the psychometric function). In the case-control study, tinnitus participants had higher gap detection thresholds than controls for narrowband noise stimuli centred at 2 and 8 kHz, with no differences in GPIAS or ABRs. In addition, ABR data showed latency differences across tinnitus subgroups stratified by severity. Conclusions: Operant silent gap-in-noise detection is impaired in tinnitus when the paradigm is optimized to reduce the predictability and salience of the silent gap and to avoid ceiling effects. Our behavioral paradigm can distinguish tinnitus and control groups, suggesting that temporal resolution is impaired in tinnitus.
However, in young adults with normal hearing, the paradigm is unable to objectively identify tinnitus at the individual level. The GPIAS paradigm was unable to differentiate the tinnitus and control groups, suggesting that operant, as opposed to reflexive, silent gap-in-noise detection is a more sensitive measure for objectively identifying tinnitus. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
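The threshold and slope quantities from the psychometric function mentioned in the abstract above can be estimated with a simple curve fit. A sketch assuming a logistic form (the paper's exact fitting procedure may differ), with invented data:

```python
import numpy as np
from scipy.optimize import curve_fit

# 2AFC psychometric function for gap detection: proportion correct rises
# from chance (0.5) to 1.0 as the silent gap lengthens. The threshold is
# the gap duration at the curve's midpoint; sensitivity is the slope there.
def psychometric(gap_ms, threshold, slope):
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (gap_ms - threshold)))

# Hypothetical detection data (not from the study).
gaps = np.array([1, 2, 4, 8, 16, 32], dtype=float)          # gap durations, ms
p_correct = np.array([0.50, 0.55, 0.65, 0.85, 0.95, 1.00])  # proportion correct

(threshold, slope), _ = curve_fit(psychometric, gaps, p_correct, p0=[6.0, 0.5])
print(f"threshold ~ {threshold:.1f} ms, slope ~ {slope:.2f}")
```

A shallower fitted slope corresponds to the reduced gap detection sensitivity the authors observed when the gap was made less predictable and less salient.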
6. Effects of Radio Frequency Electromagnetic Fields on the Nervous System. In vivo Experiments (Review)
- Author
-
Natalia I. Khorseva and Pavel E. Grigoriev
- Subjects
radio frequency electromagnetic field, wi-fi, 5g, parts of the brain, hippocampus, auditory system, morpho-histological changes in vivo, young animals, Sports medicine, RC1200-1245, Biology (General), QH301-705.5 - Abstract
This article is a continuation of a review whose first part analysed work on the effects of radio frequency electromagnetic fields (RF EMF) on the central nervous system (CNS) in vitro: changes in the action potential, in cell and myelin sheath morphology, and in the permeability of the blood–brain barrier (using cultures of nerve cells only). The first part also presented various approaches to studying the effects of RF EMF and pointed out the difficulties of systematizing the experimental data. The present article dwells on the morpho-histological changes in CNS structures under RF EMF exposure in young animals, since this allows an indirect assessment of the possible negative consequences of RF EMF exposure for children and adolescents, the cohort most vulnerable to any environmental factors. Morphological and histological changes in CNS structures (cerebral cortex, brainstem, cerebellum, auditory system, etc.) as well as changes in electroencephalographic parameters were analysed. The body of data on morphological and histological changes in the hippocampus was considered separately. In addition, the paper presents an analysis of changes in the biometric parameters of experimental animals under chronic exposure to RF EMF and of its effect on cell viability (including nerve cell apoptosis and autophagy). A reliable corpus of modern experimental studies demonstrating the seriousness of the effects of electromagnetic fields on the nervous system of children and adolescents is thus important in the context of ever-increasing electromagnetic pollution, primarily from the electromagnetic fields produced by cellular networks.
- Published
- 2024
- Full Text
- View/download PDF
7. Effects of the two-pore potassium channel subunit Task5 on neuronal function and signal processing in the auditory brainstem.
- Author
-
Saber, Mahshid Helia, Kaiser, Michaela, Rüttiger, Lukas, and Körber, Christoph
- Subjects
AUDITORY pathways, AUDITORY perception, ION channels, COCHLEAR nucleus, ACTION potentials, POTASSIUM channels - Abstract
Processing of auditory signals critically depends on the neuron's ability to fire brief, precisely timed action potentials (APs) at high frequencies and high fidelity for prolonged times. This requires the expression of specialized sets of ion channels to quickly repolarize neurons, prevent aberrant AP firing and tightly regulate neuronal excitability. Although critically important, the regulation of neuronal excitability has received little attention in the auditory system. Neuronal excitability is determined to a large extent by the resting membrane potential (RMP), which in turn depends on the kind and number of ion channels open at rest, mostly potassium channels. A large part of this resting potassium conductance is carried by two-pore potassium channels (K2P channels). Among the K2P channels, the subunit Task5 is expressed almost exclusively in the auditory brainstem, suggesting a specialized role in auditory processing. However, since it failed to form functional ion channels in heterologous expression systems, it was classified "non-functional" for a long time and its role in the auditory system remained elusive. Here, we generated Task5 knock-out (KO) mice. The loss of Task5 resulted in changes in neuronal excitability in bushy cells of the ventral cochlear nucleus (VCN) and principal neurons of the medial nucleus of the trapezoid body (MNTB). Moreover, auditory brainstem responses (ABRs) to loud sounds were altered in Task5-KO mice. Thus, our study provides evidence that Task5 is indeed a functional K2P subunit and contributes to sound processing in the auditory brainstem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. The Critical Thing about the Ear’s Sensory Hair Cells.
- Author
-
Hudspeth, A. J. and Martin, Pascal
- Subjects
- *
HAIR cells , *EAR , *EXTERNAL ear , *INNER ear , *SOUND energy , *VESTIBULAR apparatus , *COCHLEA - Abstract
The capabilities of the human ear are remarkable. We can normally detect acoustic stimuli down to a threshold sound-pressure level of 0 dB (decibels) at the entrance to the external ear, which elicits eardrum vibrations in the picometer range. From this threshold up to the onset of pain, 120 dB, our ears can encompass sounds that differ in power by a trillionfold. The comprehension of speech and enjoyment of music result from our ability to distinguish between tones that differ in frequency by only 0.2%. All these capabilities vanish upon damage to the ear’s receptors, the mechanoreceptive sensory hair cells. Each cochlea, the auditory organ of the inner ear, contains some 16,000 such cells that are frequency-tuned between ∼20 Hz (cycles per second) and 20,000 Hz. Remarkably enough, hair cells do not simply capture sound energy: they can also exhibit an active process whereby sound signals are amplified, tuned, and scaled. This article describes the active process in detail and offers evidence that its striking features emerge from the operation of hair cells on the brink of an oscillatory instability—one example of the critical phenomena that are widespread in physics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
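The decibel figures in the abstract above (0 dB threshold to 120 dB pain onset spanning a trillionfold power range, and a 0.2% frequency difference limen) are plain power-ratio arithmetic, sketched here:

```python
# Sound power ratio implied by a level difference in dB: ratio = 10**(dB/10).
def power_ratio(level_db: float) -> float:
    return 10 ** (level_db / 10)

# 120 dB of dynamic range corresponds to a 10**12 (trillionfold) power ratio.
print(power_ratio(120))

# A 0.2% frequency difference limen at 1000 Hz means tones only 2 Hz apart
# can be distinguished.
print(1000 * 0.002)  # 2.0 Hz
```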
9. Evaluation of the Auditory System in Autistic Children Using Evoked Otoacoustic Emissions and a Contralateral Suppression Test.
- Author
-
Aslan, Eda, Guzel, Isil, Caypinar, Basak, Samanci, Baver, and Oysu, Cagatay
- Subjects
- *
BRAIN stem physiology , *HEARING disorder diagnosis , *COCHLEA physiology , *AUDITORY perception testing , *AUTISM , *DIAGNOSIS , *OTOACOUSTIC emissions , *DESCRIPTIVE statistics , *ASPERGER'S syndrome , *HEARING , *CHILDREN - Abstract
In this study, we evaluated the cochlea, medial olivocochlear system, and brainstem function in autistic children using evoked otoacoustic emissions (OAEs) and a noninvasive contralateral suppression (CLS) test. In total, we included 21 autistic children with normal hearing (study group) and 11 healthy children (control group). Transient-evoked OAEs (TEOAEs) and CLS of TEOAEs were evaluated in the left and right ears of all patients. In a silent room, spontaneous, transient-evoked, and distortion-product OAEs were evaluated (ILO292 analyzer). The mean age of the study and control groups was 9.1 years (range: 6-13 and 6-12 years, respectively). For the study group, there was no statistically significant difference between the OAE and CLS values of the right ear (P >.05). However, for the left ears, OAE values were statistically significantly higher than the CLS values (P <.05). In the control group, the OAE values of both ears were statistically significantly higher than the CLS values (P <.05). In autistic children with normal hearing, the medial olivocochlear system functions more effectively in the right ear than in the left ear. This asymmetry between the ears is likely responsible for the peripheral auditory lateralization and the independence of auditory function between the left and right ears. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. GluN2C/D‐containing NMDA receptors enhance temporal summation and increase sound‐evoked and spontaneous firing in the inferior colliculus.
- Author
-
Drotos, Audrey C., Zarb, Rachel L., Booth, Victoria, and Roberts, Michael T.
- Subjects
- *
AUDITORY neurons , *METHYL aspartate receptors , *INFERIOR colliculus , *VASOACTIVE intestinal peptide , *AUDITORY pathways - Abstract
Key points Along the ascending auditory pathway, there is a broad shift from temporal coding, which is common in the lower auditory brainstem, to rate coding, which predominates in auditory cortex. This temporal-to-rate transition is particularly prominent in the inferior colliculus (IC), the midbrain hub of the auditory system, but the mechanisms that govern how individual IC neurons integrate information across time remain largely unknown. Here, we report the widespread expression of Glun2c and Glun2d mRNA in IC neurons. GluN2C/D-containing NMDA receptors are relatively insensitive to voltage-dependent Mg2+ blockade, and thus can conduct current at resting membrane potential. Using in situ hybridization and pharmacology, we show that vasoactive intestinal peptide neurons in the IC express GluN2D-containing NMDA receptors that are activatable by commissural inputs from the contralateral IC. In addition, GluN2C/D-containing receptors have much slower kinetics than other NMDA receptors, and we found that GluN2D-containing receptors facilitate temporal summation of synaptic inputs in vasoactive intestinal peptide neurons. In a model neuron, we show that a GluN2C/D-like conductance interacts with the passive membrane properties of the neuron to alter temporal and rate coding of stimulus trains. Consistent with this, we show in vivo that blocking GluN2C/D-containing receptors decreases both the spontaneous firing rate and the overall firing rate elicited by amplitude-modulated sounds in many IC neurons. These results suggest that GluN2C/D-containing NMDA receptors influence rate coding for auditory stimuli in the IC by facilitating the temporal integration of synaptic inputs. NMDA receptors are critical components of most glutamatergic circuits in the brain, and the diversity of NMDA receptor subtypes yields receptors with a variety of functions. We found that many neurons in the auditory midbrain express GluN2C and/or GluN2D NMDA receptor subunits, which are less sensitive to Mg2+ blockade than the more commonly expressed GluN2A/B subunits. We show that GluN2C/D-containing receptors conducted current at resting membrane potential and enhanced temporal summation of synaptic inputs. In a model, we show that GluN2C/D-containing receptors provide additive gain for input-output functions driven by trains of synaptic inputs. In line with this, we found that blocking GluN2C/D-containing NMDA receptors in vivo decreased both spontaneous firing rates and firing evoked by amplitude-modulated sounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Auditory System Injury on the Battlefield—Solutions for Point-of-Injury and Prolonged Casualty Care: A DOTmLPF-P Analysis.
- Author
-
Merkley, John Andrew, Lovelace, Suheily, Boudin-George, Amy, and Gates, Kathy
- Subjects
- *
AUDITORY pathways , *MEDICAL personnel , *ACOUSTIC trauma , *NOISE-induced deafness , *BATTLE casualties , *AUDIOLOGISTS - Abstract
Introduction "Good hearing" (DoDI 6030.03 6.5&6.6) is a combat multiplier, critical to service members' lethality and survivability on the battlefield. Exposure to an explosive blast or high-intensity continuous noise is common in operational settings with the potential to compromise both hearing and vestibular health and jeopardize safety and high-level mission performance. The Joint Trauma System Acoustic Trauma Clinical Practice Guideline was published in 2018, providing recommendations for the assessment and treatment of aural blast injuries and acoustic trauma in the forward deployed environment. Combat care capabilities responsive to current threat environments emphasize prolonged casualty care. Despite recommendations, auditory system health has not been assessed routinely or in its entirety on the battlefield. This is due primarily to the large footprint of an audiometric booth and to the heavy logistical burden of providing high-quality, comprehensive auditory system (including vestibular) examinations in the combat environment. Materials and Methods The Defense Health Agency Hearing Center of Excellence has completed a Doctrine, Organization, Training, Materiel, Leadership & Education, Personnel, Facilities, and Policy (DOTmLPF-P) analysis of battlefield auditory system assessment and treatment, using 67 existing DoD documents and artifacts related to operational medicine. Results Our analysis found that acoustic trauma is generally not addressed in any of the DOTmLPF-P domains. We recommend that auditory system assessment and treatment be incorporated across the continuum of care on the battlefield. This should be addressed through Prolonged Field Care and Tactical Combat Casualty Care guidance and in all Tactical Combat Casualty Care training programs. Equipment sets should be modified to include boothless technology and associated materiel for auditory system assessment. 
Policy and Doctrine changes would be required to mandate and support the implementation of these services. Uniformed audiologists should be added to the organizational structure at role 3 or higher to provide direct patient care; consult with other health care providers and commanders; develop and support enforcement of noise hazard guidelines; track hearing readiness; and, when necessary, provide specialized hearing protection devices that can compensate for hearing loss. Conclusions These recommendations aim to help the DoD bring about necessary assessments and interventions for acoustic trauma so that service members can have better hearing outcomes and maintain critical auditory system function on the battlefield. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Cognitive Functions and Subjective Hearing in Cochlear Implant Users.
- Author
-
Zhang, Fawen, McGuire, Kelli, Skeeters, Madeline, Barbara, Matthew, Chang, Pamara F., Zhang, Nanhua, Xiang, Jing, and Huang, Bin
- Subjects
- *
COGNITIVE testing , *AUDIOMETRY , *COCHLEAR implants , *COGNITIVE ability , *PROSTHETICS - Abstract
Background and Objectives: A cochlear implant (CI) is an effective prosthetic device used to treat severe-to-profound hearing loss. The present study examined cognitive function in CI users by employing a web-based cognitive testing platform, i.e., BrainCheck, and explored the correlation between cognitive function and subjective evaluation of hearing. Subjects and Methods: Forty-two CI users (mean age: 58.90 years) were surveyed in the subjective evaluation of hearing, and 20/42 participated in the BrainCheck cognitive tests (immediate recognition, Trail Making A, Trail Making B, Stroop, digit symbol substitution, and delayed recognition). As controls for cognitive function, young normal-hearing (YNH, mean age: 23.83 years) and older normal-hearing (ONH, mean age: 52.67 years) listener groups were subjected to BrainCheck testing. Results: CI users exhibited poorer cognitive function than the normal-hearing groups in all tasks except for immediate and delayed recognition. The highest percentage of CI users who had "possible" and "likely" cognitive impairment, based on BrainCheck scores (range: 0-200), was observed in tests assessing executive function. The composite cognitive score across domains tended to be related to subjective hearing (p = 0.07). Conclusions: The findings of the current study suggest that CI users had a higher likelihood of cognitive impairment in the executive function domain than in lower-level domains. BrainCheck online cognitive testing affords a convenient and effective tool to self-evaluate cognitive function in CI users. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Auditory cortical regions show resting-state functional connectivity with the default mode-like network in echolocating bats.
- Author
-
Washington, Stuart D., Shattuck, Kyle, Steckel, Jan, Peremans, Herbert, Jonckers, Elisabeth, Hinz, Rukun, Venneman, Tom, Van den Berg, Monica, Van Ruijssevelt, Lisbeth, Verellen, Thomas, Pritchett, Dominique L., Scholliers, Jan, Sayuan Liang, Wang, Paul C., Verhoye, Marleen, Esser, Karl-Heinz, Van der Linden, Annemie, and Keliris, Georgios A.
- Subjects
- *
FUNCTIONAL connectivity , *INDEPENDENT component analysis , *BATS , *FUNCTIONAL magnetic resonance imaging , *SOCIAL perception - Abstract
Echolocating bats are among the most social and vocal of all mammals. These animals are ideal subjects for functional MRI (fMRI) studies of auditory social communication given their relatively hypertrophic limbic and auditory neural structures and their reduced ability to hear MRI gradient noise. Yet, no resting-state networks relevant to social cognition (e.g., default mode-like networks or DMLNs) have been identified in bats since there are few, if any, fMRI studies in the chiropteran order. Here, we acquired fMRI data at 7 Tesla from nine lightly anesthetized pale spear-nosed bats (Phyllostomus discolor). We applied independent components analysis (ICA) to reveal resting-state networks and measured neural activity elicited by noise ripples (on: 10 ms; off: 10 ms) that span this species' ultrasonic hearing range (20 to 130 kHz). Resting-state networks pervaded auditory, parietal, and occipital cortices, along with the hippocampus, cerebellum, basal ganglia, and auditory brainstem. Two midline networks formed an apparent DMLN. Additionally, we found four predominantly auditory/parietal cortical networks, of which two were left-lateralized and two right-lateralized. Regions within four auditory/parietal cortical networks are known to respond to social calls. Along with the auditory brainstem, regions within these four cortical networks responded to ultrasonic noise ripples. Iterative analyses revealed consistent, significant functional connectivity between the left, but not right, auditory/parietal cortical networks and DMLN nodes, especially the anterior-most cingulate cortex. Thus, a resting-state network implicated in social cognition displays more distributed functional connectivity across left, relative to right, hemispheric cortical substrates of audition and communication in this highly social and vocal species. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
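The independent components analysis used in the bat fMRI study above recovers spatially or temporally independent networks from mixed signals. A toy sketch of the same idea with scikit-learn's FastICA (illustrative synthetic signals, not the study's pipeline):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy analogue of resting-state ICA: unmix two independent latent sources
# ("networks") from linearly mixed observed time series.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two latent signals
mixing = np.array([[1.0, 0.5], [0.4, 1.0]])
observed = sources @ mixing.T                           # what the scanner "sees"

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)                 # estimated components
print(recovered.shape)  # (2000, 2): one time course per component
```

In real fMRI data the "observed" matrix is voxels-by-time, and each recovered component pairs a spatial map with a time course; the study identified networks by their spatial layout (auditory/parietal cortices, midline DMLN nodes).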
14. Autoimmune Inner Ear Disease
- Author
-
Baysal, Elif, Selimoglu, Erol, Hagr, Abdulrahman, Cingi, Cemal, Series Editor, Kalcioglu, Mahmut Tayyar, editor, Bayar Muluk, Nuray, editor, and Jenkins, Herman Arthur, editor
- Published
- 2024
- Full Text
- View/download PDF
15. Neuroanatomy for Neurobionic Hearing Restoration
- Author
-
Samii, Amir, Giordano, Mario, Kanaan, Imad N., editor, and Beneš, Vladimír, editor
- Published
- 2024
- Full Text
- View/download PDF
16. Safety Pharmacology and Tinnitus
- Author
-
Szczepek, Agnieszka J., Hock, Franz J., Section editor, Gralinski, Michael R., Section editor, Hock, Franz J., editor, and Pugsley, Michael K., editor
- Published
- 2024
- Full Text
- View/download PDF
17. Central Gain Model for Tinnitus: A Review on Noise-Induced Plasticity or When Less at the Periphery Is More in the Center
- Author
-
Parameshwarappa, Vinay, Norena, Arnaud J., Schlee, Winfried, editor, Langguth, Berthold, editor, De Ridder, Dirk, editor, Vanneste, Sven, editor, Kleinjung, Tobias, editor, and Møller, Aage R., editor
- Published
- 2024
- Full Text
- View/download PDF
18. Effects of the two-pore potassium channel subunit Task5 on neuronal function and signal processing in the auditory brainstem
- Author
-
Mahshid Helia Saber, Michaela Kaiser, Lukas Rüttiger, and Christoph Körber
- Subjects
cochlear nucleus, MNTB, ABR, auditory system, bushy cells, stellate cells, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Processing of auditory signals critically depends on the neuron’s ability to fire brief, precisely timed action potentials (APs) at high frequencies and high fidelity for prolonged times. This requires the expression of specialized sets of ion channels to quickly repolarize neurons, prevent aberrant AP firing and tightly regulate neuronal excitability. Although critically important, the regulation of neuronal excitability has received little attention in the auditory system. Neuronal excitability is determined to a large extent by the resting membrane potential (RMP), which in turn depends on the kind and number of ion channels open at rest, mostly potassium channels. A large part of this resting potassium conductance is carried by two-pore potassium channels (K2P channels). Among the K2P channels, the subunit Task5 is expressed almost exclusively in the auditory brainstem, suggesting a specialized role in auditory processing. However, since it failed to form functional ion channels in heterologous expression systems, it was classified “non-functional” for a long time and its role in the auditory system remained elusive. Here, we generated Task5 knock-out (KO) mice. The loss of Task5 resulted in changes in neuronal excitability in bushy cells of the ventral cochlear nucleus (VCN) and principal neurons of the medial nucleus of the trapezoid body (MNTB). Moreover, auditory brainstem responses (ABRs) to loud sounds were altered in Task5-KO mice. Thus, our study provides evidence that Task5 is indeed a functional K2P subunit and contributes to sound processing in the auditory brainstem.
- Published
- 2024
- Full Text
- View/download PDF
19. Preattentive mechanisms of change detection in early auditory cortex: A 7 Tesla fMRI study
- Author
-
Szycik, G.R., Stadler, J., Brechmann, A., and Münte, T.F.
- Published
- 2013
- Full Text
- View/download PDF
20. Exploring the Differences Between an Immature and a Mature Human Auditory System Through Auditory Late Responses in Quiet and in Noise.
- Author
-
Duquette-Laplante, Fauve, Jutras, Benoît, Néron, Noémie, Fortin, Sandra, and Koravand, Amineh
- Subjects
- *
AUDITORY pathways , *NOISE , *SIGNAL-to-noise ratio , *WHITE noise , *VERBAL learning , *INFORMATION storage & retrieval systems - Abstract
• Children had longer latencies than adults in all listening conditions for P1 and N1. • There were fewer identifiable wave peak components in children than in adults. • /da/ was the stimulus most resistant to noise, especially in white noise. • P1, P2 and N2 amplitudes were reduced, and latencies were elongated in noise. • N1 amplitude was significantly more negative in the noise conditions than in quiet for /da/. Children are disadvantaged compared to adults when they perceive speech in a noisy environment. Noise reduces their ability to extract and understand auditory information. Auditory-Evoked Late Responses (ALRs) offer insight into how the auditory system can process information in noise. This study investigated how noise, signal-to-noise ratio (SNR), and stimulus type affect ALRs in children and adults. Fifteen normal-hearing participants from each group were studied under various conditions. The findings revealed that both groups experienced delayed latencies and reduced amplitudes in noise, but that children had fewer identifiable waves than adults. Babble noise had a significant impact on both groups, limiting the analysis to one condition: the /da/ stimulus at +10 dB SNR for the P1 wave. P1 amplitude was greater in quiet for children compared to adults, with no stimulus effect. Children generally exhibited longer latencies. N1 latency was longer in noise, with larger amplitudes in white noise compared to quiet for both groups. P2 latency was shorter with the verbal stimulus in quiet, with larger amplitudes in children than adults. N2 latency was shorter in quiet, with no amplitude differences between the groups. Overall, noise prolonged latencies and reduced amplitudes. Different noise types had varying impacts, with the eight-talker babble noise causing more disruption. Children's auditory systems responded similarly to adults' but may be more susceptible to noise.
This research emphasizes the need to understand noise's impact on children's auditory development, given their exposure to noisy environments, requiring further exploration of noise parameters in children. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Population coding of time-varying sounds in the nonlemniscal inferior colliculus.
- Author
-
Shi, Kaiwen, Quass, Gunnar L., Rogalla, Meike M., Ford, Alexander N., Czarny, Jordyn E., and Apostolides, Pierre F.
- Subjects
- *
INFERIOR colliculus , *AUDITORY cortex , *AMPLITUDE modulation , *THALAMIC nuclei , *SPEECH perception , *MODULATION (Music theory) , *NEURONS - Abstract
The inferior colliculus (IC) of the midbrain is important for complex sound processing, such as discriminating conspecific vocalizations and human speech. The IC's nonlemniscal, dorsal "shell" region is likely important for this process, as neurons in these layers project to higher-order thalamic nuclei that subsequently funnel acoustic signals to the amygdala and nonprimary auditory cortices, forebrain circuits important for vocalization coding in a variety of mammals, including humans. However, the extent to which shell IC neurons transmit acoustic features necessary to discern vocalizations is less clear, owing to the technical difficulty of recording from neurons in the IC's superficial layers via traditional approaches. Here, we use two-photon Ca2+ imaging in mice of either sex to test how shell IC neuron populations encode the rate and depth of amplitude modulation, important sound cues for speech perception. Most shell IC neurons were broadly tuned, with a low neurometric discrimination of amplitude modulation rate; only a subset was highly selective to specific modulation rates. Nevertheless, a neural network classifier trained on fluorescence data from shell IC neuron populations accurately classified amplitude modulation rate, and decoding accuracy was only marginally reduced when highly tuned neurons were omitted from training data. Rather, classifier accuracy increased monotonically with the modulation depth of the training data, such that classifiers trained on full-depth modulated sounds had median decoding errors of ∼0.2 octaves. Thus, shell IC neurons may transmit time-varying signals via a population code, with perhaps limited reliance on the discriminative capacity of any individual neuron. NEW & NOTEWORTHY: The IC's shell layers originate a "nonlemniscal" pathway important for perceiving vocalization sounds.
However, prior studies suggest that individual shell IC neurons are broadly tuned and have high response thresholds, implying a limited reliability of efferent signals. Using Ca2+ imaging, we show that amplitude modulation is accurately represented in the population activity of shell IC neurons. Thus, downstream targets can read out sounds' temporal envelopes from distributed rate codes transmitted by populations of broadly tuned neurons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
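The stimuli described in the abstract above are sinusoidally amplitude-modulated (SAM) sounds parameterized by modulation rate and depth. As a minimal sketch (the carrier, rate, and depth values are illustrative, not the study's actual stimulus set):

```python
import numpy as np

def sam_tone(carrier_hz, mod_rate_hz, mod_depth, dur_s=1.0, fs=48000):
    """Sinusoidally amplitude-modulated (SAM) tone.

    mod_depth in [0, 1]: 1.0 is full-depth modulation,
    0.0 is an unmodulated pure tone.
    """
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

# A 1 s, full-depth 16 Hz SAM tone on an 8 kHz carrier
y = sam_tone(8000.0, 16.0, 1.0)
```

Varying `mod_rate_hz` on an octave grid yields the kind of rate continuum over which decoding error in octaves can be measured.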
22. Measuring absorbed energy in the human auditory system using finite element models: A comparison with experimental results.
- Author
-
Castro-Egler, Cristina, Garcia-Gonzalez, Antonio, Aguilera, Jose A., Cerezo, Pablo M., Lopez-Crespo, Pablo, and González-Herrera, Antonio
- Subjects
- *
AUDITORY pathways , *FINITE element method , *EAR canal , *SEMICIRCULAR canals , *TYMPANIC membrane - Abstract
BACKGROUND: There are different ways to analyze energy absorbance (EA) in the human auditory system. In previous research, we developed a complete finite element model (FEM) of the human auditory system. OBJECTIVE: In the current work, the external auditory canal (EAC), middle ear, and inner ear (spiral cochlea, vestibule, and semi-circular canals) were modelled based on human temporal bone histological sections. METHODS: Multiple acoustic, structure, and fluid-coupled analyses were conducted using the FEM to perform harmonic analyses in the 0.1–10 kHz range. Once the FEM had been validated with published experimental data, its numerical results were used to calculate the EA or energy reflected (ER) by the tympanic membrane. This EA is also measured in clinical audiology tests, where it is used as a diagnostic parameter. RESULTS: A mathematical approach was developed to calculate the EA and ER, with numerical and experimental results showing adequate correlation up to 1 kHz. Another published FEM had adapted its boundary conditions to replicate experimental results. Here, we recalculated those numerical results by applying the natural boundary conditions of human hearing and found that the results agreed almost entirely with our FEM. CONCLUSION: This boundary problem is frequent and problematic in experimental hearing test protocols: the more invasive they are, the more the results are affected. One of the main objectives of using FEMs is to explore how the experimental test conditions influence the results. Further work will still be required to uncover the relationship between middle ear structures and EA to clarify how to best use FEMs. Moreover, the FEM boundary conditions must be more representative in future work to ensure their adequate interpretation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
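The EA/ER quantities in the record above follow the standard wideband acoustic immittance formulation: with a complex load impedance Z and the canal's characteristic impedance Z0, the pressure reflection coefficient is R = (Z − Z0)/(Z + Z0), the energy reflected is ER = |R|², and EA = 1 − ER. A minimal sketch of that relation (this is the textbook definition, not necessarily the authors' exact mathematical approach; the impedance values are illustrative):

```python
import numpy as np

def energy_absorbance(z_load, z0):
    """Energy absorbance from a complex load impedance.

    R = (Z - Z0) / (Z + Z0) is the pressure reflection coefficient;
    EA = 1 - |R|**2 is the fraction of incident energy absorbed,
    and ER = |R|**2 is the fraction reflected.
    """
    r = (z_load - z0) / (z_load + z0)
    return 1.0 - np.abs(r) ** 2

# A load perfectly matched to the canal impedance absorbs everything
ea_matched = energy_absorbance(400.0 + 0j, 400.0)
```

A very stiff (high-impedance) termination instead reflects nearly all incident energy, which is why EA curves are sensitive to middle-ear mechanics.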
23. The Many Unknowns of Partial Sensory Disconnection during Sleep: A Review of the Literature.
- Author
-
Cirelli, Chiara and Tononi, Giulio
- Subjects
RAPID eye movement sleep ,AUDITORY pathways ,OLFACTORY cortex ,PAIN ,THRESHOLD (Perception) - Abstract
When we are asleep, we lose the ability to promptly respond to external stimuli, and yet we spend many hours every day in this inherently risky behavioral state. This simple fact strongly suggests that sleep must serve essential functions that rely on the brain going offline, on a daily basis, and for long periods of time. If these functions did not require partial sensory disconnection, it would be difficult to explain why they are not performed during waking. Paradoxically, despite its central role in defining sleep and what sleep does, sensory disconnection during sleep remains a mystery. We have a limited understanding of how it is implemented along the sensory pathways; we do not know whether the same mechanisms apply to all sensory modalities, nor do we know to what extent these mechanisms are shared between non-rapid eye movement (NREM) sleep and REM sleep. The main goal of this contribution is to review some knowns and unknowns about sensory disconnection during sleep as a first step to fill this gap. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Harnessing the Power of Artificial Intelligence in Otolaryngology and the Communication Sciences
- Author
-
Wilson, Blake S, Tucci, Debara L, Moses, David A, Chang, Edward F, Young, Nancy M, Zeng, Fan-Gang, Lesica, Nicholas A, Bur, Andrés M, Kavookjian, Hannah, Mussatto, Caroline, Penn, Joseph, Goodwin, Sara, Kraft, Shannon, Wang, Guanghui, Cohen, Jonathan M, Ginsburg, Geoffrey S, Dawson, Geraldine, and Francis, Howard W
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,Artificial Intelligence ,Communication ,Humans ,Otolaryngology ,Machine learning ,Artificial intelligence ,Deep learning ,Human communication ,Hearing ,Speech production ,Speech perception ,Auditory prostheses ,Auditory system ,Hearing aids ,Hearing loss ,Cochlear implants ,Neural prostheses ,Neuroprostheses ,Brain-computer interfaces ,Laryngeal pathology ,Thyroid pathology ,Clinical Sciences ,Otorhinolaryngology ,Biological psychology - Abstract
Use of artificial intelligence (AI) is a burgeoning field in otolaryngology and the communication sciences. A virtual symposium on the topic was convened from Duke University on October 26, 2020, and was attended by more than 170 participants worldwide. This review presents summaries of all but one of the talks presented during the symposium; recordings of all the talks, along with the discussions for the talks, are available at https://www.youtube.com/watch?v=ktfewrXvEFg and https://www.youtube.com/watch?v=-gQ5qX2v3rg . Each of the summaries is about 2500 words in length and each summary includes two figures. This level of detail far exceeds the brief summaries presented in traditional reviews and thus provides a more-informed glimpse into the power and diversity of current AI applications in otolaryngology and the communication sciences and how to harness that power for future applications.
- Published
- 2022
25. Attention and Sequence Modeling for Match-Mismatch Classification of Speech Stimulus and EEG Response
- Author
-
Marvin Borsdorf, Siqi Cai, Saurav Pahuja, Dashanka De Silva, Haizhou Li, and Tanja Schultz
- Subjects
Auditory system ,EEG decoding ,match-mismatch classification ,speech envelope ,speech stimulus ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
For the development of neuro-steered hearing aids, it is important to study the relationship between a speech stimulus and the elicited EEG response of a human listener. The recent Auditory EEG Decoding Challenge 2023 (Signal Processing Grand Challenge, IEEE International Conference on Acoustics, Speech and Signal Processing) dealt with this relationship in the context of a match-mismatch classification task. The challenge's task was to find the speech stimulus that elicited a specific EEG response from two given speech stimuli. Participating in the challenge, we adopted the challenge's baseline model and explored an attention encoder to replace the spatial convolution in the EEG processing pipeline, as well as additional sequence modeling methods based on RNN, LSTM, and GRU to preprocess the speech stimuli. We compared speech envelopes and mel-spectrograms as two different types of input speech stimulus and evaluated our models on a test set as well as held-out stories and held-out subjects benchmark sets. In this work, we show that mel-spectrograms generally offer better results. Replacing the spatial convolution with an attention encoder helps to capture better spatial and temporal information in the EEG response. Additionally, the sequence modeling methods can further enhance performance when mel-spectrograms are used. Consequently, both lead to higher performance on the test set and the held-out stories benchmark set. Our best model outperforms the baseline by 1.91% on the test set and 1.35% on the total ranking score. We ranked second in the challenge.
- Published
- 2024
- Full Text
- View/download PDF
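One of the two stimulus representations compared above, the speech envelope, is conventionally obtained as the magnitude of the analytic signal. A minimal numpy-only sketch (an FFT-based Hilbert transform; the challenge pipeline's actual envelope extraction may differ, and the test signal here is synthetic):

```python
import numpy as np

def analytic_envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert).

    Zeroes the negative frequencies and doubles the positive ones,
    so the magnitude of the inverse transform is the envelope.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2.0 * np.pi * 300.0 * t)
env_true = 1.0 + 0.8 * np.sin(2.0 * np.pi * 4.0 * t)  # slow 4 Hz envelope
env = analytic_envelope(env_true * carrier)
```

For a real match-mismatch pipeline the envelope would additionally be low-pass filtered and downsampled to the EEG rate, since EEG tracks only the slow modulations.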
26. Molecular mechanisms for activity-dependent control of neuronal excitability in the central auditory pathway
- Author
-
Bondarenko, Kseniia
- Subjects
molecular mechanisms ,central auditory pathway ,Kv3 ,neurons ,Potassium channels ,auditory system ,Kv3.1 ,Kv3.3 ,auditory brainstem ,MNTB neurons ,Medial Nuclei of Trapezoid Body ,neuronal activity ,neuroscience ,hearing system ,superior olivary complex ,cognition ,nerve cells ,neuronal firing ,Auditory neuroscience ,Thesis - Abstract
Voltage-gated potassium channels of the Kv3 subfamily mediate fast repolarisation of action potentials, allowing neurons to fire at high frequencies. High-frequency firing is especially important in the auditory system, where fast and precise information transmission is crucial for the flawless perception of sound. Previous experiments from the laboratory have shown that only two (Kv3.1 and Kv3.3) out of four Kv3 subunits are expressed in the Medial Nucleus of the Trapezoid Body (MNTB) in the auditory brainstem. Tetraethylammonium (TEA) is known to block all Kv3 channels, but a key unknown is whether Kv3.1 or Kv3.3 subunits confer any unique properties on the Kv3 channels formed from these subunits. Since there are no subunit-specific antagonists, my project aimed to combine transgenic manipulation with light-activated pharmacology to investigate the roles of these subunits. This technology involved tethering a light-activated TEA moiety to the pore vestibule of Kv3.3 or Kv3.1, where the respective subunit had been mutated to possess a highly reactive cysteine substitution at the appropriate site. The hypothesis was that by blocking one specific subunit in a Kv3 channel, we could selectively suppress those Kv3 channels mediated by either Kv3.1 or Kv3.3 subunits, which should reveal any subunit-specific physiological functions. This project investigated the action of light-activated pharmacology based on the photochromic tethered ligand MAQ. Electrophysiology was performed ex vivo on brainstem slices from CRISPR/Cas9-edited mice with the single amino acid substitution (for MAQ anchoring) in either the Kv3.1 (E380C in mouse kcnc1) or Kv3.3 (N483C in mouse kcnc3) subunit. MAQ, tethered to either Kv3.1E380C or Kv3.3N483C, introduced a reversible light-activated block of the Kv3 channel pore, studied in the principal MNTB neurons of transgenic lines. However, the portion of light-controlled potassium current was too small to answer scientific questions (8.6% of total potassium current at +40 mV). Therefore, much of the experimental work revolved around testing the materials and assumptions of the original hypothesis. Several conditions were tested to determine the reason for the partial light-induced block by MAQ. The quality of the MAQ was verified using NMR analysis of the synthesized compound. Homology modelling of the Kv3.3N483C channel with docked MAQ ligand confirmed its binding ability when tethered to the channel pore. Tests with the non-specific photochromic ligand AAQ had shown that the azobenzene moiety, a key part of both AAQ and MAQ, successfully undergoes a conformational change when switched between 380 nm and 500 nm using the current optical setup, and thus the set light intensity is sufficient to convert the trans-MAQ molecule to its cis form. I also confirmed that the experimental conditions provide the maximum achievable block and that the quality of the brain slice preparation did not undermine the effect achieved using the photo-activated pharmacology. However, the immunofluorescent studies showed that large portions of edited Kv3.1 were retained in the cytosol in the Kv3.1E380C mouse, while edited Kv3.3 and non-edited Kv3.1 in the Kv3.3N483C mouse were almost absent, suggesting that Kv3.3 subunits are essential to achieve trafficking of Kv3 channels to the presynaptic terminal. Together, these alterations in Kv3 channel expression in CRISPR/Cas9-edited strains caused reduced photo-controlled portions of Kv3.1-specific and Kv3.3-specific potassium currents. In addition, MAQ showed strong neuronal toxicity that was not previously reported.
Further immunofluorescent work revealed for the first time that the Kv3.3 subunit is present in the axon initial segments and the nodes of Ranvier. I also deployed the expansion microscopy technique to bypass the resolution limit of confocal microscopes, combined with superresolution post-acquisition algorithms, to establish the roles of the Kv3.1 and Kv3.3 subunits through their location in the subcellular structures of MNTB neurons. I found that Kv3.1 is mostly present in somatic Kv3 channels (post-synaptic MNTB cell membrane), while Kv3.3 is present in both somatic and presynaptic Kv3s. My findings support the hypothesis that Kv3.3 has a presynaptic function in regulating action potential duration at the synapse in addition to its postsynaptic role in regulating somatic action potentials. It also supported the electrophysiological evidence obtained previously in the lab.
- Published
- 2022
- Full Text
- View/download PDF
27. Real-Time Multirate Multiband Amplification for Hearing Aids
- Author
-
Sokolova, Alice, Sengupta, Dhiman, Hunt, Martin, Gupta, Rajesh, Aksanli, Baris, Harris, Fredric, and Garudadri, Harinath
- Subjects
Information and Computing Sciences ,Computer Vision and Multimedia Computation ,Assistive Technology ,Bioengineering ,Ear ,Hearing aids ,digital signal processing ,auditory system ,channelization ,wearable computers ,speech processing ,open source hardware ,real-time systems ,embedded software ,research initiatives ,Engineering ,Technology ,Information and computing sciences - Abstract
Hearing loss is a common problem affecting the quality of life for thousands of people. However, many individuals with hearing loss are dissatisfied with the quality of modern hearing aids. Amplification is the main method of compensating for hearing loss in modern hearing aids. One common amplification technique is dynamic range compression, which maps audio signals onto a person's hearing range using an amplification curve. However, due to the frequency dependent nature of the human cochlea, compression is often performed independently in different frequency bands. This paper presents a real-time multirate multiband amplification system for hearing aids, which includes a multirate channelizer for separating an audio signal into eleven standard audiometric frequency bands, and an automatic gain control system for accurate control of the steady state and dynamic behavior of audio compression as specified by ANSI standards. The spectral channelizer offers high frequency resolution with low latency of 5.4 ms and about 14× improvement in complexity over a baseline design. Our automatic gain control includes a closed-form solution for satisfying any designated attack and release times for any desired compression parameters. The increased frequency resolution and precise gain adjustment allow our system to more accurately fulfill audiometric hearing aid prescriptions.
- Published
- 2022
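The amplification system in the record above rests on two textbook building blocks: a static compression curve mapping input level to gain, and a one-pole level smoother whose coefficient is set in closed form from the desired attack or release time. A minimal sketch of both (generic WDRC formulas with illustrative parameters, not the paper's ANSI-calibrated eleven-band design):

```python
import math

def compressor_gain_db(level_db, threshold_db=45.0, ratio=3.0):
    """Static input-output rule of a dynamic range compressor:
    unity gain below threshold, output slope 1/ratio above it."""
    if level_db <= threshold_db:
        return 0.0
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

def smoothing_coeff(time_const_s, fs):
    """Closed-form one-pole coefficient: the smoothed level estimate
    covers ~63% of a step change within one time constant, so attack
    and release behavior follow directly from the chosen constants."""
    return math.exp(-1.0 / (time_const_s * fs))

# A 75 dB input against a 45 dB threshold at 3:1 gets 20 dB of gain
# reduction, leaving the output 10 dB (not 30 dB) above threshold.
gain = compressor_gain_db(75.0)
alpha = smoothing_coeff(0.005, 48000)  # 5 ms attack at 48 kHz
```

In a multiband system this pair is instantiated once per audiometric band, with band-specific thresholds and ratios taken from the hearing aid prescription.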
28. Metabonomics and Transcriptomics Analyses Reveal the Development Process of the Auditory System in the Embryonic Development Period of the Small Yellow Croaker under Background Noise.
- Author
-
Jiang, Qinghua, Liang, Xiao, Ye, Ting, Zhang, Yu, and Lou, Bao
- Subjects
- *
LARIMICHTHYS , *AUDITORY pathways , *INNER ear , *EMBRYOLOGY , *AUDITORY perception , *OTOLITHS , *NOISE pollution - Abstract
Underwater noise pollution has become a potential threat to aquatic animals in the natural environment. The main causes of such pollution are frequent human activities creating underwater environmental noise, including commercial shipping, offshore energy platforms, scientific exploration activities, etc. However, in aquaculture environments, underwater noise pollution has also become an unavoidable problem due to background noise created by aquaculture equipment. Some research has shown that certain fish show adaptability to noise over a period of time. This could be due to fish's special auditory organ, i.e., their "inner ear"; meanwhile, otoliths and sensory hair cells are the important components of the inner ear and are also essential for the function of the auditory system. Recently, research in respect of underwater noise pollution has mainly focused on adult fish, and there is a lack of the research on the effects of underwater noise pollution on the development process of the auditory system in the embryonic development period. Thus, in this study, we collected embryo–larval samples of the small yellow croaker (Larimichthys polyactis) in four important stages of otic vesicle development through artificial breeding. Then, we used metabonomics and transcriptomics analyses to reveal the development process of the auditory system in the embryonic development period under background noise (indoor and underwater environment sound). Finally, we identified 4026 differentially expressed genes (DEGs) and 672 differential metabolites (DMs), including 37 DEGs associated with the auditory system, and many differences mainly existed in the neurula stage (20 h of post-fertilization/20 HPF). We also inferred the regulatory mode and process of some important DEGs (Dnmt1, CPS1, and endothelin-1) in the early development of the auditory system. In conclusion, we suggest that the auditory system development of L. 
polyactis begins at least in the neurula stage or earlier; the other three stages (tail bud stage, caudal fin fold stage, and heart pulsation stage, 28–35 HPF) mark the rapid development period. We speculate that the effect of underwater noise pollution on the embryo–larval stage probably begins even earlier. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Compensation in neuro-system related to age-related hearing loss.
- Author
-
Diao, Tongxiang, Ma, Xin, Fang, Xuan, Duan, Maoli, and Yu, Lisheng
- Subjects
- *
COCHLEAR implants , *HEARING aids , *NEURAL pathways , *NEUROPLASTICITY , *INTELLIGIBILITY of speech , *PRESBYCUSIS - Abstract
Age-related hearing loss (ARHL) is a major cause of chronic disability among the elderly. Individuals with ARHL not only have trouble hearing sounds, but also with speech perception. As the perception of auditory information relies on integration between widespread brain networks to interpret auditory stimuli, both auditory and extra-auditory systems, which mainly include the visual, motor, and attention systems, play an important role in compensating for ARHL. To better understand the compensatory mechanisms of ARHL and to inspire interventions that may alleviate it, we focus on the existing information on ARHL-related central compensation. The compensatory effects of hearing aids (HAs) and cochlear implants (CIs) on ARHL are also discussed. Studies have shown that ARHL can induce cochlear hair cell damage or loss and cochlear synaptopathy, which can induce central compensation, including compensation of auditory and extra-auditory neural networks. The use of HAs and CIs can improve bottom-up processing by enabling 'better' input to the auditory pathways and then to the cortex by enhancing the diminished auditory signal. The central compensation of ARHL and its possible correlation with HAs and CIs are current hotspots in the field and should be given focus in future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. ABR-Attention: An Attention-Based Model for Precisely Localizing Auditory Brainstem Response.
- Author
-
Ji, Junyu, Wang, Xin, Jing, Xiaobei, Zhu, Mingxing, Pan, Hongguang, Jia, Desheng, Zhao, Chunrui, Yong, Xu, Xu, Yangjie, Zhao, Guoru, Sun, Poly Z.H., Li, Guanglin, and Chen, Shixiong
- Subjects
DIAGNOSTIC imaging ,SOUND pressure ,DEEP learning ,AUDITORY pathways ,EVOKED potentials (Electrophysiology) - Abstract
Auditory Brainstem Response (ABR) is an evoked potential in the brainstem's neural centers in response to sound stimuli. Clinically, characteristic waves, especially Wave V latency, extracted from ABR can objectively indicate auditory loss and diagnose diseases. Several methods have been developed for the extraction of characteristic waves. To ensure effectiveness, most of these methods are time-consuming and rely on heavy clinician workloads. To reduce the workload of clinicians, automated extraction methods have been developed; however, these methods also have limitations. This study introduces a novel deep learning network for automatic extraction of Wave V latency, named ABR-Attention. The ABR-Attention model includes a self-attention module, a first- and second-derivative attention module, and a regressor module. Experiments are conducted on accuracy with 10-fold cross-validation, the effects of different sound pressure levels (SPLs), the effects of different error scales, and the effects of ablation. ABR-Attention shows efficacy in extracting the Wave V latency of ABR, with an overall accuracy of 96.76 ± 0.41% at an error scale of 0.1 ms, and provides a new solution for objective localization of ABR characteristic waves. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
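For contrast with the learned model above, the classical baseline it replaces is a hand-tuned peak search: Wave V is typically the largest local maximum a few milliseconds after stimulus onset. A minimal sketch (the 5-8 ms search window, sampling rate, and synthetic waveform are illustrative assumptions, not the paper's method):

```python
import numpy as np

def wave_v_latency_ms(abr, fs_hz, search_ms=(5.0, 8.0)):
    """Crude Wave V estimate: largest local maximum in a fixed window.

    A classical baseline only; ABR-Attention replaces this hand-tuned
    search with learned attention over the waveform.
    """
    lo = int(search_ms[0] * 1e-3 * fs_hz)
    hi = int(search_ms[1] * 1e-3 * fs_hz)
    seg = abr[lo:hi]
    # local maxima: samples strictly greater than both neighbours
    idx = np.where((seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:]))[0] + 1
    if idx.size == 0:
        return None
    best = idx[np.argmax(seg[idx])]
    return 1e3 * (lo + best) / fs_hz

fs = 20000
t = np.arange(int(0.010 * fs)) / fs               # 10 ms sweep
abr = np.exp(-((t - 0.0062) / 0.0004) ** 2)       # synthetic peak at 6.2 ms
lat = wave_v_latency_ms(abr, fs)
```

Such fixed-window searches degrade at low SPLs, where latency shifts and the peak flattens, which motivates the learned approach.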
31. Cochlear Implant Artifacts Removal in EEG-Based Objective Auditory Rehabilitation Assessment.
- Author
-
Zheng, Qi, Wu, Yubo, Zhu, Jianing, Cao, Leqiang, Bai, Yanru, and Ni, Guangjian
- Subjects
SUPPORT vector machines ,HILBERT-Huang transform ,INDEPENDENT component analysis ,NEUROPROSTHESES ,COCHLEAR implants ,AUDITORY pathways - Abstract
Cochlear implant (CI) is a neural prosthesis that can restore hearing for patients with severe to profound hearing loss. Observed variability in auditory rehabilitation outcomes following cochlear implantation may be due to cerebral reorganization. Electroencephalography (EEG), favored for its CI compatibility and non-invasiveness, has become a staple in clinical objective assessments of cerebral plasticity post-implantation. However, the electrical activity of CI distorts neural responses, and EEG susceptibility to these artifacts presents significant challenges in obtaining reliable neural responses. Despite the use of various artifact removal techniques in previous studies, the automatic identification and reduction of CI artifacts while minimizing information loss or damage remains a pressing issue in objectively assessing advanced auditory functions in CI recipients. To address this problem, we propose an approach that combines machine learning algorithms—specifically, Support Vector Machines (SVM)—along with Independent Component Analysis (ICA) and Ensemble Empirical Mode Decomposition (EEMD) to automatically detect and minimize electrical artifacts in EEG data. The innovation of this research is the automatic detection of CI artifacts using the temporal properties of EEG signals. By applying EEMD and ICA, we can process and remove the identified CI artifacts from the affected EEG channels, yielding a refined signal. Comparative analysis in the temporal, frequency, and spatial domains suggests that the corrected EEG recordings of CI recipients closely align with those of peers with normal hearing, signifying the restoration of reliable neural responses across the entire scalp while eliminating CI artifacts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
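The core of the pipeline in the record above is identifying a CI artifact waveform and subtracting its contribution from each EEG channel. As a much-simplified linear stand-in for the EEMD/ICA step (the artifact reference, mixing weights, and data here are synthetic; real CI artifacts are not available as a clean reference channel):

```python
import numpy as np

def regress_out(eeg, artifact_ref):
    """Remove an artifact waveform from each EEG channel by least squares.

    A linear stand-in for the ICA step: each channel's projection onto
    the (unit-normalized, mean-removed) artifact reference is estimated
    and subtracted, leaving the channel uncorrelated with the artifact.
    """
    ref = artifact_ref - artifact_ref.mean()
    ref = ref / np.linalg.norm(ref)
    weights = eeg @ ref                     # per-channel projection coefficients
    return eeg - np.outer(weights, ref)

rng = np.random.default_rng(0)
n = 2000
neural = rng.standard_normal((4, n)) * 0.5
artifact = np.sign(np.sin(np.linspace(0.0, 300.0 * np.pi, n)))  # CI-like pulse train
mixed = neural + np.outer([3.0, 1.5, 0.2, 2.0], artifact)
cleaned = regress_out(mixed, artifact)
```

ICA goes further by estimating the artifact waveform blindly from the data rather than assuming it is known, which is what makes automatic detection via temporal properties necessary.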
32. Objective Neurophysiological Indices for the Assessment of Chronic Tinnitus Based on EEG Microstate Parameters.
- Author
-
Wang, Yingying, Zeng, Peiying, Gu, Zhixiang, Liu, Hongyu, Han, Shuqing, Liu, Xinran, Huang, Xin, Shao, Liyang, and Tao, Yuan
- Subjects
NEURAL physiology ,AUDITORY pathways ,TINNITUS ,SYMPTOMS ,PROBABILITY theory - Abstract
Chronic tinnitus is highly prevalent but lacks precise diagnostic or effective therapeutic standards. Its onset and treatment mechanisms remain unclear, and there is a shortage of objective assessment methods. We aim to identify abnormal neural activity and reorganization in tinnitus patients and to reveal potential neurophysiological markers for objectively evaluating tinnitus by analyzing EEG microstates, comparing metrics under three resting states (OE, CE, and OECEm) between tinnitus sufferers and controls, and correlating them with tinnitus symptoms. This study reflected specific changes in the EEG microstates of tinnitus patients across multiple resting states, as well as inconsistent correlations with tinnitus symptoms. Microstate parameters were significantly different when patients were in OE and CE states. Specifically, the occurrence of Microstate A and the transition probabilities (TP) from other Microstates to A increased significantly, particularly in the CE state (32-37%, p ≤ 0.05); both correlated positively with tinnitus intensity. Nevertheless, under the OECEm state, increases were mainly observed in the duration, coverage, and occurrence of Microstate B (15-47%, p < 0.05), which negatively correlated with intensity (R < -0.513, p < 0.05). Additionally, TPs between Microstates C and D were significantly reduced and positively correlated with HDAS levels (R > 0.548, p < 0.05). Furthermore, parameters of Microstate D also correlated with THI grades (R < -0.576, p < 0.05). The findings of this study could offer compelling evidence for central neural reorganization associated with chronic tinnitus. EEG microstate parameters that correlate with tinnitus symptoms could serve as neurophysiological markers, contributing to future research on the objective assessment of tinnitus. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
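The microstate parameters compared in the record above (duration, coverage, occurrence, and transition probabilities) are all derived from a per-sample sequence of microstate labels. A minimal sketch of those definitions (the label sequence and sampling rate are synthetic; real pipelines first fit the microstate maps themselves via clustering):

```python
import numpy as np

def microstate_stats(labels, fs):
    """Duration, coverage, occurrence, and transition probabilities
    from a per-sample microstate label sequence."""
    labels = np.asarray(labels)
    states = np.unique(labels)
    # segment boundaries: runs of identical labels
    change = np.flatnonzero(labels[1:] != labels[:-1]) + 1
    starts = np.r_[0, change]
    ends = np.r_[change, len(labels)]
    run_labels = labels[starts]
    run_lens = ends - starts
    total_s = len(labels) / fs
    stats = {}
    for s in states:
        runs = run_lens[run_labels == s]
        stats[s] = {
            "duration_s": runs.mean() / fs,        # mean segment length
            "coverage": runs.sum() / len(labels),  # fraction of total time
            "occurrence_hz": len(runs) / total_s,  # segments per second
        }
    # transition probabilities between consecutive segments
    trans = {}
    for a, b in zip(run_labels[:-1], run_labels[1:]):
        trans[(a, b)] = trans.get((a, b), 0) + 1
    n_trans = max(len(run_labels) - 1, 1)
    trans = {k: v / n_trans for k, v in trans.items()}
    return stats, trans

labels = ["A"] * 10 + ["B"] * 30 + ["A"] * 10 + ["C"] * 50
stats, trans = microstate_stats(labels, fs=100)
```

These are the quantities that can then be correlated with symptom scales such as tinnitus intensity or THI grade.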
33. ASYMMETRIES IN COGNITION AND MEASURES OF INTELLIGENCE IN CHILDREN WITH HEARING LOSSES IN RIGHT OR LEFT EARS.
- Author
-
Gwizda, Grażyna, Marciniak, Aleksandra, Bakalczuk, Grzegorz, and Mielnik-Niedzielska, Grażyna
- Subjects
- *
ACADEMIC medical centers , *COGNITIVE processing speed , *CASE-control method , *COMPARATIVE studies , *T-test (Statistics) , *HEARING disorders , *INTELLECT , *AUDIOMETRY , *DESCRIPTIVE statistics , *COGNITIVE testing , *INTELLIGENCE tests , *CHILDREN - Abstract
Introduction: Hemispheric asymmetry of the central nervous system affects various features of the brain involved in cognitive ability. Functional asymmetry, such as different hearing ability in the left or right ear, will also affect cognitive processes. Material and methods: The aim of this study was to assess how intelligence measures and cognitive abilities in children and adolescents might have been affected by hearing deficits in the left or right ear. The study involved 208 children, 126 in an experimental group and 82 in a control group. In the experimental group, 26 children were diagnosed with right-sided hearing loss, 34 with left-sided hearing loss, and 66 with bilateral hearing loss; all children in this group had used hearing devices since diagnosis. We assessed hearing unilaterally and bilaterally and looked for asymmetries in terms of intelligence measures and visual and spatial functioning. Results: Children with bilateral hearing impairment had lower intelligence compared to those without impairment. Children with unilateral hearing impairment had a similar intelligence level to well-hearing children. Children with left-sided hearing impairment had higher intelligence compared to those with right-sided hearing impairment, and lower nonverbal intelligence compared to well-hearing children. Children with right-sided hearing impairment had lower verbal intelligence. Conclusions: Hearing impairment has an impact on various measures of intelligence, as well as on the organisation and performance of cognitive processes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Diversity matters — extending sound intensity coding by inner hair cells via heterogeneous synapses.
- Author
-
Moser, Tobias, Karagulyan, Nare, Neef, Jakob, and Jaime Tobón, Lina María
- Subjects
- *
HAIR cells , *SPIRAL ganglion , *AUDITORY pathways , *SYNAPSES , *SOUND pressure , *INNER ear - Abstract
Our sense of hearing enables the processing of stimuli that differ in sound pressure by more than six orders of magnitude. How to process a wide range of stimulus intensities with temporal precision is an enigmatic phenomenon of the auditory system. Downstream of dynamic range compression by active cochlear micromechanics, the inner hair cells (IHCs) cover the full intensity range of sound input. Yet, the firing rate in each of their postsynaptic spiral ganglion neurons (SGNs) encodes only a fraction of it. As a population, spiral ganglion neurons with their respective individual coding fractions cover the entire audible range. How such "dynamic range fractionation" arises is a topic of current research and the focus of this review. Here, we discuss mechanisms for generating the diverse functional properties of SGNs and formulate testable hypotheses. We postulate that an interplay of synaptic heterogeneity, molecularly distinct subtypes of SGNs, and efferent modulation serves the neural decomposition of sound information and thus contributes to a population code for sound intensity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Auditory brainstem responses in the nine-banded armadillo (Dasypus novemcinctus).
- Author
-
Moffitt, Thomas Brad, Atcherson, Samuel, and Padberg, Jeffrey
- Subjects
ARMADILLOS ,STIMULUS intensity ,ANIMAL experimentation ,AUDIOGRAM ,MARSUPIALS ,BRAIN stem - Abstract
The auditory brainstem response (ABR) to tone burst stimuli of thirteen frequencies ranging from 0.5 to 48 kHz was recorded in the nine-banded armadillo (Dasypus novemcinctus), the only extant member of the placental mammal superorder Xenarthra in North America. The armadillo ABR consisted of five main peaks that were visible within the first 10 ms when stimuli were presented at high intensities. The latency of peak I of the armadillo ABR increased as stimulus intensity decreased by an average of 20 μs/dB. Estimated frequency-specific thresholds identified by the ABR were used to construct an estimate of the armadillo audiogram describing the mean thresholds of the eight animals tested. The majority of animals tested (six out of eight) exhibited clear responses to stimuli from 0.5 to 38 kHz, and two animals exhibited responses to stimuli of 48 kHz. Across all cases, the lowest thresholds were observed for frequencies from 8 to 12 kHz. Overall, we observed that the armadillo estimated audiogram bears a similar pattern as those observed using ABR in members of other mammalian clades, including marsupials and later-derived placental mammals. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Editorial: Biomarkers of peripheral and central auditory system integrity and function
- Author
-
Stefan Weder, Christo William Bester, and Stephen O'Leary
- Subjects
auditory system ,diagnostic and therapeutic applications ,patient-centered approaches ,biomarkers ,technological advances ,Neurology. Diseases of the nervous system ,RC346-429 - Published
- 2024
- Full Text
- View/download PDF
37. Passive exposure to task-relevant stimuli enhances categorization learning
- Author
-
Christian Schmid, Muhammad Haziq, Melissa M Baese-Berk, James M Murray, and Santiago Jaramillo
- Subjects
learning ,auditory system ,neural networks ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
Learning to perform a perceptual decision task is generally achieved through sessions of effortful practice with feedback. Here, we investigated how passive exposure to task-relevant stimuli, which is relatively effortless and does not require feedback, influences active learning. First, we trained mice in a sound-categorization task with various schedules combining passive exposure and active training. Mice that received passive exposure exhibited faster learning, regardless of whether this exposure occurred entirely before active training or was interleaved between active sessions. We next trained neural-network models with different architectures and learning rules to perform the task. Networks that use the statistical properties of stimuli to enhance separability of the data via unsupervised learning during passive exposure provided the best account of the behavioral observations. We further found that, during interleaved schedules, there is an increased alignment between weight updates from passive exposure and active training, such that a few interleaved sessions can be as effective as schedules with long periods of passive exposure before active training, consistent with our behavioral observations. These results provide key insights for the design of efficient training schedules that combine active learning and passive exposure in both natural and artificial systems.
- Published
- 2024
- Full Text
- View/download PDF
38. Auditory brainstem responses in the nine-banded armadillo (Dasypus novemcinctus)
- Author
-
Thomas Brad Moffitt, Samuel Atcherson, and Jeffrey Padberg
- Subjects
Auditory brainstem response ,Evolutionary neurobiology ,Xenarthra ,Auditory system ,Comparative neurobiology ,Medicine ,Biology (General) ,QH301-705.5 - Abstract
The auditory brainstem response (ABR) to tone-burst stimuli of thirteen frequencies ranging from 0.5 to 48 kHz was recorded in the nine-banded armadillo (Dasypus novemcinctus), the only extant member of the placental mammal superorder Xenarthra in North America. The armadillo ABR consisted of five main peaks that were visible within the first 10 ms when stimuli were presented at high intensities. The latency of peak I of the armadillo ABR increased by an average of 20 μs/dB as stimulus intensity decreased. Estimated frequency-specific thresholds identified by the ABR were used to construct an estimate of the armadillo audiogram describing the mean thresholds of the eight animals tested. The majority of animals tested (six out of eight) exhibited clear responses to stimuli from 0.5 to 38 kHz, and two animals exhibited responses to stimuli of 48 kHz. Across all cases, the lowest thresholds were observed for frequencies from 8 to 12 kHz. Overall, we observed that the estimated armadillo audiogram bears a pattern similar to those observed using ABR in members of other mammalian clades, including marsupials and later-derived placental mammals.
- Published
- 2023
- Full Text
- View/download PDF
39. Auditory and Vestibular Findings in Brazilian Adults Affected by COVID-19: An Exploratory Study.
- Author
-
Arruda de Souza Alcarás, Patrícia, Alves Corazza, Maria Cristina, Vianna, Larissa, Miranda de Araujo, Cristiano, Alves Corazza, Luíza, Zeigelboim, Bianca Simone, and Moreira de Lacerda, Adriana Bender
- Subjects
- *
BRAZILIANS , *AUDITORY evoked response , *ACOUSTIC reflex , *COVID-19 , *SARS-CoV-2 , *AUDITORY neuropathy , *WORD deafness - Abstract
Introduction: The aim of the study was to describe auditory and vestibular findings in Brazilian adults after COVID-19 in a municipality on the outskirts of São Paulo state. Methods: This was a cross-sectional, exploratory study comprising sixteen participants infected by the SARS-CoV-2 virus, confirmed through RT-PCR detection, aged 20 to 55 years. Subjects underwent anamnesis and vestibular and auditory testing. Fisher's exact test was used to evaluate medication use, chemical and physical exposure, and occupational risk, and the McNemar test was used to compare auditory and vestibular symptoms pre- and post-COVID-19. Results: Most patients were women (75%) and had been exposed to the virus over 90 days before testing (50%). 18.8% used hydroxychloroquine, 68.8% used ivermectin, and 87.5% used azithromycin to treat COVID-19. Auditory complaints were reported by 31.2% and vestibular complaints by 18.7%. There was no statistical difference before and after the disease. Other reported symptomatology included hair loss, pain, fatigue, memory loss, difficulty concentrating, and headache. Auditory findings were relevant in the contralateral acoustic reflex, the distortion-product otoacoustic emissions, and the brainstem auditory evoked potential, indicating sensorineural involvement. 43.74% of patients had altered vectonystagmography. When comparing both ears, no statistical relevance was found; however, when results were crossed with medication use and exposures, there was statistical relevance in the amplitude of the V wave for medications and in the absolute latency of the V wave for exposure to physical agents. Discussion/Conclusion: This study demonstrated auditory findings of a sensorineural nature and evidence of a peripheral vestibulopathy. Given its cross-sectional design, the results cannot be extended to the general population; they may nonetheless inform future studies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Compression in the Auditory System.
- Author
-
Nechaev, D. I.
- Subjects
AUDITORY pathways ,BASILAR membrane ,ACOUSTIC nerve ,FREQUENCY tuning ,DYNAMICAL systems - Abstract
This review addresses the mechanism of compression in the mammalian auditory system. This mechanism provides high sensitivity over the wide dynamic range of the system and also sharpens the frequency tuning. The review discusses three main ways to detect compression: direct recording of vibrations of the basilar membrane, recording from the auditory nerve, and psychoacoustic studies. The review ends with a brief discussion of the morphofunctional basis of compression in the auditory cochlea. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. EFFECTS OF RENAL REPLACEMENT THERAPY ON THE AUDITORY SYSTEM: A SCOPING REVIEW.
- Author
-
Alessandra Caldas, Érica, Andréia Caldas, Patricia, Claudia Gonçalves, Maria, Rademaker Burke, Patrick, Gomes Bittencourt, Aline, and Salgado Filho, Natalino
- Published
- 2023
- Full Text
- View/download PDF
42. Treating the Brain With Focused Ultrasound.
- Author
-
Bates, Mary
- Subjects
DIAGNOSTIC ultrasonic imaging ,ULTRASONIC imaging ,HIGH-intensity focused ultrasound ,SOUND waves ,AUDIO frequency ,AUDITORY pathways - Abstract
Focused ultrasound is an early-stage, noninvasive therapy with the potential to treat a range of medical conditions. Like diagnostic ultrasound, it uses sound waves above the range of human hearing. But its purpose is to interact with tissues in the body, rather than just produce images of them. In focused ultrasound, multiple intersecting beams of high-frequency sound are aimed to converge on specific targets deep within the body. There, the ultrasound energy can act in multiple ways to either modify or destroy tissue. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Joint Attention in Hearing Parent–Deaf Child and Hearing Parent–Hearing Child Dyads
- Author
-
Bortfeld, Heather and Oghalai, John S
- Subjects
Information and Computing Sciences ,Control Engineering ,Mechatronics and Robotics ,Engineering ,Machine Learning ,Assistive Technology ,Behavioral and Social Science ,Rehabilitation ,Prevention ,Pediatric ,Bioengineering ,Basic Behavioral and Social Science ,Clinical Research ,Ear ,Auditory system ,Pediatrics ,Cochlear implants ,Assistive technology ,Face ,Gesture recognition ,Visualization ,Interaction ,joint attention ,language learning ,multimodal cue integration ,Control engineering ,mechatronics and robotics ,Machine learning - Abstract
Here we characterize establishment of joint attention in hearing parent-deaf child dyads and hearing parent-hearing child dyads. Deaf children were candidates for cochlear implantation who had not yet been implanted and who had no exposure to formal manual communication (e.g., American Sign Language). Because many parents whose deaf children go through early cochlear implant surgery do not themselves know a visual language, these dyads do not share a formal communication system based in a common sensory modality prior to the child's implantation. Joint attention episodes were identified during free play between hearing parents and their hearing children (N = 4) and hearing parents and their deaf children (N = 4). Attentional episode types included successful parent-initiated joint attention, unsuccessful parent-initiated joint attention, passive attention, successful child-initiated joint attention, and unsuccessful child-initiated joint attention. Group differences emerged in both successful and unsuccessful parent-initiated attempts at joint attention, parent passive attention, and successful child-initiated attempts at joint attention based on proportion of time spent in each. These findings highlight joint attention as an indicator of early communicative efficacy in parent-child interaction for different child populations. We discuss the active role parents and children play in communication, regardless of their hearing status.
- Published
- 2020
44. Mechanisms underlying auditory processing deficits in Fragile X syndrome
- Author
-
McCullagh, Elizabeth A, Rotschafer, Sarah E, Auerbach, Benjamin D, Klug, Achim, Kaczmarek, Leonard K, Cramer, Karina S, Kulesza, Randy J, Razak, Khaleel A, Lovelace, Jonathan W, Lu, Yong, Koch, Ursula, and Wang, Yuan
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,Mental Health ,Pediatric ,Rare Diseases ,Fragile X Syndrome ,Intellectual and Developmental Disabilities (IDD) ,Brain Disorders ,Neurosciences ,Autism ,2.1 Biological and endogenous factors ,Aetiology ,Neurological ,Mental health ,Animals ,Auditory Perception ,Autism Spectrum Disorder ,Humans ,Models ,Biological ,auditory system ,autism spectrum disorders ,circuit development ,Fragile X syndrome ,hyperacusis ,synaptic transmission ,Biochemistry and Cell Biology ,Physiology ,Medical Physiology ,Biochemistry & Molecular Biology ,Biochemistry and cell biology ,Medical physiology - Abstract
Autism spectrum disorders (ASD) are strongly associated with auditory hypersensitivity or hyperacusis (difficulty tolerating sounds). Fragile X syndrome (FXS), the most common monogenetic cause of ASD, has emerged as a powerful gateway for exploring underlying mechanisms of hyperacusis and auditory dysfunction in ASD. This review discusses examples of disruption of the auditory pathways in FXS at molecular, synaptic, and circuit levels in animal models as well as in FXS individuals. These examples highlight the involvement of multiple mechanisms, from aberrant synaptic development and ion channel deregulation of auditory brainstem circuits, to impaired neuronal plasticity and network hyperexcitability in the auditory cortex. Though a relatively new area of research, recent discoveries have increased interest in auditory dysfunction and mechanisms underlying hyperacusis in this disorder. This rapidly growing body of data has yielded novel research directions addressing critical questions regarding the timing and possible outcomes of human therapies for auditory dysfunction in ASD.
- Published
- 2020
45. Neuron Activation in Response to Auditory Stimulation
- Author
-
Llarena, Gabriela
- Subjects
Molecular biology ,Neurosciences ,auditory system ,fos/TRAP2 ,superior colliculus - Abstract
Gaining genetic access to neurons allows us to study the structure and function of the mammalian auditory system. Audition captures sound stimuli detected from the environment and sends this information to the brain through the auditory pathway. The information gathered by the auditory system is integrated with information gathered from the other sensory modalities: vision, olfaction, touch, and gustation; this allows us to modify our behavior accordingly and is crucial for survival. Methods to identify neuron populations based on their response properties are limited, as most genetic manipulation experiments identify neurons based on their anatomy or genetic composition. Neuroscientists have developed a novel tool that genetically targets different neuron populations based on their functional criteria. The fos/TRAP2 method, or "Targeted Recombination in Active Populations" (TRAP), genetically targets populations of auditory-responsive neurons when exposed to an auditory stimulus (Guenthner, Casey J, et al., 2013). Using a transgenic mouse line of Fos2A-iCreERT2, or TRAP2, knock-in mice, I designed an auditory stimulation experiment that utilizes a cre-inducible system in cells that express fos in the presence of 4-Hydroxytamoxifen to identify and locate the auditory-responsive neuron populations. I found that mice exposed to an auditory stimulus expressed greater numbers of active cells in areas receiving auditory inputs, such as the primary auditory cortex (A1), superior colliculus (SC), and inferior colliculus (IC), than in mice that had their ears plugged; in the primary somatosensory cortex (S1), both experimental conditions showed similar numbers of active neurons.
These results demonstrate how the fos/TRAP2 method could serve as a tool to label neurons that respond to auditory stimuli; this method uses the effector gene R26tdTomato to fluorescently label cells, which can be analyzed using immunohistochemistry. The fos/TRAP2 method and analyses can provide insight into different auditory-responsive neuron populations; it can be used to determine which neuronal markers the cells express to identify their type, such as whether they are excitatory or inhibitory neurons, and the cells’ defining characteristics. Additionally, this method can be used to sort cells for RNA sequencing (RNA Seq) analysis or to induce expression of optogenetic tools used to assay auditory circuitry. Identifying the function of different auditory-responsive neuron populations can be used to further investigate the auditory pathway and sensory integration to better understand how mammals detect and process sound.
- Published
- 2024
46. Explicit-memory multiresolution adaptive framework for speech and music separation
- Author
-
Ashwin Bellur, Karan Thakkar, and Mounya Elhilali
- Subjects
Auditory system ,Speech enhancement ,Music separation ,Multi-scale redundant representations ,Temporal coherence ,Explicit memory ,Acoustics. Sound ,QC221-246 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The human auditory system employs a number of principles to facilitate the selection of perceptually separated streams from a complex sound mixture. The brain leverages multi-scale redundant representations of the input and uses memory (or priors) to guide the selection of a target sound from the input mixture. Moreover, feedback mechanisms refine the memory constructs, resulting in further improvement of selectivity of a particular sound object amidst dynamic backgrounds. The present study proposes a unified end-to-end computational framework that mimics these principles for sound source separation applied to both speech and music mixtures. While the problems of speech enhancement and music separation have often been tackled separately due to constraints and specificities of each signal domain, the current work posits that common principles for sound source separation are domain-agnostic. In the proposed scheme, parallel and hierarchical convolutional paths map input mixtures onto redundant but distributed higher-dimensional subspaces and utilize the concept of temporal coherence to gate the selection of embeddings belonging to a target stream abstracted in memory. These explicit memories are further refined through self-feedback from incoming observations in order to improve the system's selectivity when faced with unknown backgrounds. The model yields stable outcomes of source separation for both speech and music mixtures and demonstrates the benefits of explicit memory as a powerful representation of priors that guide information selection from complex inputs.
- Published
- 2023
- Full Text
- View/download PDF
47. A review of the auditory-gut-brain axis.
- Author
-
Graham, Amy S., Ben-Azu, Benneth, Tremblay, Marie-Ève, Torre III, Peter, Senekal, Marjanne, Laughton, Barbara, van der Kouwe, Andre, Jankiewicz, Marcin, Kaba, Mamadou, and Holmes, Martha J.
- Subjects
AUDITORY pathways ,ANIMAL models in research ,VAGUS nerve ,HEARING disorders ,CELLULAR signal transduction ,IRRITABLE colon - Abstract
Hearing loss places a substantial burden on medical resources across the world and impacts quality of life for those affected. Further, it can occur peripherally and/or centrally. With many possible causes of hearing loss, there is scope for investigating the underlying mechanisms involved. Various signaling pathways connecting gut microbes and the brain (the gut-brain axis) have been identified and well established in a variety of diseases and disorders. However, the role of these pathways in providing links to other parts of the body has not been explored in much depth. Therefore, the aim of this review is to explore potential underlying mechanisms that connect the auditory system to the gut-brain axis. Using select keywords in PubMed and additional hand-searching in Google Scholar, relevant studies were identified. In this review we summarize the key players in the auditory-gut-brain axis under four subheadings: anatomical, extracellular, immune, and dietary. Firstly, we identify important anatomical structures in the auditory-gut-brain axis, particularly highlighting a direct connection provided by the vagus nerve. Leading on from this, we discuss several extracellular signaling pathways which might connect the ear, gut, and brain. A link is established between inflammatory responses in the ear and gut microbiome-altering interventions, highlighting a contribution of the immune system. Finally, we discuss the contribution of diet to the auditory-gut-brain axis. Based on the reviewed literature, we propose numerous possible key players connecting the auditory system to the gut-brain axis. In the future, a more thorough investigation of these key players in animal models and human research may provide insight and assist in developing effective interventions for treating hearing loss. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. The Development of Speaking and Singing in Infants May Play a Role in Genomics and Dementia in Humans.
- Author
-
Yamoah, Ebenezer N., Pavlinkova, Gabriela, and Fritzsch, Bernd
- Subjects
- *
AUDITORY pathways , *AUDITORY perception , *INFANTS , *PERCEPTUAL disorders , *AUDITORY cortex , *DEMENTIA - Abstract
The development of the central auditory system, including the auditory cortex and other areas involved in processing sound, is shaped by genetic and environmental factors, enabling infants to learn how to speak. Before explaining hearing in humans, a short overview of auditory dysfunction is provided. Environmental factors such as exposure to sound and language can impact the development and function of auditory sound processing, including speech perception, singing, and language processing. Infants can hear before birth, and sound exposure sculpts the structure and functions of their developing auditory system. Exposing infants to singing and speaking can support their auditory and language development. In aging humans, the hippocampus and auditory nuclear centers are affected by neurodegenerative diseases such as Alzheimer's, resulting in memory and auditory processing difficulties. As the disease progresses, overt auditory nuclear center damage occurs, leading to problems in processing auditory information. In conclusion, combined memory and auditory processing difficulties significantly impact people's ability to communicate and engage with society. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Comparable Encoding, Comparable Perceptual Pattern: Acoustic and Electric Hearing.
- Author
-
Kong, Fanhui, Zhou, Huali, Mo, Yefei, Shi, Mingyue, Meng, Qinglin, and Zheng, Nengheng
- Subjects
COCHLEAR implants ,AUDITORY pathways ,SPEECH ,VOCODER ,ACOUSTIC models ,SPEECH perception ,INTELLIGIBILITY of speech - Abstract
Perception with electric neuroprostheses is sometimes expected to be simulated using properly designed physical stimuli. Here, we examined a new acoustic vocoder model for electric hearing with cochlear implants (CIs) and hypothesized that comparable speech encoding can lead to comparable perceptual patterns for CI and normal hearing (NH) listeners. Speech signals were encoded using FFT-based signal processing stages including band-pass filtering, temporal envelope extraction, maxima selection, and amplitude compression and quantization. These stages were specifically implemented in the same manner by an Advanced Combination Encoder (ACE) strategy in CI processors and Gaussian-enveloped Tones (GET) or Noise (GEN) vocoders for NH. Adaptive speech reception thresholds (SRTs) in noise were measured using four Mandarin sentence corpora. Initial consonant (11 monosyllables) and final vowel (20 monosyllables) recognition were also measured. Naïve NH listeners were tested using vocoded speech with the proposed GET/GEN vocoders as well as conventional vocoders (controls). Experienced CI listeners were tested using their daily-used processors. Results showed that: 1) there was a significant training effect on GET vocoded speech perception; 2) the GEN vocoded scores (SRTs with four corpora and consonant and vowel recognition scores) as well as the phoneme-level confusion pattern matched the CI scores better than controls. The findings suggest that the same signal encoding implementations may lead to similar perceptual patterns simultaneously in multiple perception tasks. This study highlights the importance of faithfully replicating all signal processing stages in the modeling of perceptual patterns in sensory neuroprostheses. This approach has the potential to enhance our understanding of CI perception and accelerate the engineering of prosthetic interventions. The GET/GEN MATLAB program is freely available at https://github.com/BetterCI/GETVocoder.
[ABSTRACT FROM AUTHOR]
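The encoding pipeline described in this abstract (band-pass filtering, temporal envelope extraction, maxima selection) can be illustrated with a minimal sketch. This is not the authors' ACE/GET implementation (available at the GitHub link above); all function and parameter names here are hypothetical, and the FFT-bin band split is a deliberate simplification of a real band-pass filter bank:

```python
import numpy as np

def vocoder_envelopes(signal, fs, n_bands=8, frame_len=128, n_maxima=4):
    """Sketch of FFT-based band splitting, per-band temporal envelope
    extraction, and n-of-m maxima selection (hypothetical parameters)."""
    n_frames = len(signal) // frame_len
    env = np.zeros((n_frames, n_bands))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Windowed magnitude spectrum stands in for band-pass filtering.
        spec = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        bands = np.array_split(spec, n_bands)
        # RMS magnitude per band approximates the temporal envelope.
        env[i] = [np.sqrt(np.mean(b ** 2)) for b in bands]
    # Maxima selection: keep only the n_maxima largest bands per frame.
    selected = np.zeros_like(env)
    for i in range(n_frames):
        top = np.argsort(env[i])[-n_maxima:]
        selected[i, top] = env[i, top]
    return selected
```

In a full vocoder, the selected envelopes would additionally be compressed and quantized, then used to modulate Gaussian-enveloped tones (GET) or noise bands (GEN) for resynthesis.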
- Published
- 2023
- Full Text
- View/download PDF
50. A Novel Earprint: Stimulus-Frequency Otoacoustic Emission for Biometric Recognition.
- Author
-
Liu, Yin, Jiang, Borui, Liu, Hongqing, Zhao, Yu, Xiong, Fen, and Zhou, Yi
- Abstract
Otoacoustic emission (OAE) biometrics are inherently robust to replay and falsification attacks. The widely studied transient-evoked OAE (TEOAE) is non-stationary and offers biometric value only in normal-hearing individuals, since it is more susceptible to hearing loss. To address these issues, this paper presents a novel yet promising OAE biometric modality: stimulus-frequency OAE (SFOAE). Unlike TEOAE, SFOAE is a highly stationary signal whose fine structures are idiosyncratic to an individual and relatively stable over time, making it easier to use as a biometric without additional complex feature extraction. Moreover, SFOAE is even present in ears with 50 dB HL hearing loss, making it applicable to hearing-impaired users. In this paper, SFOAE spectra in response to three stimulus levels are fused at the feature level to consolidate different information, followed by a linear discriminant analysis or a multi-kernel convolutional neural network to further reduce the intra-subject variability and increase the inter-subject variability. Tested on a large cohort of subjects with varying levels of deafness, the SFOAE-based biometric system yields an equal error rate of 0.541% and 1.364% in closed-set and open-set verification scenarios, respectively. In an identification mode, 99.43% and 97.37% accuracies are attained for closed-set and open-set protocols, respectively. In particular, we observe perfect performance in a population restricted to normal hearing in closed-set scenarios. The reason why the system performs well has been examined based on several comparative tests. Although there are implementation issues to be resolved before SFOAE can be applied in the field of biometrics, this paper preliminarily demonstrates the basis and excellent potential of SFOAE as a biometric. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF