10 results for "Jan-Willem A. Wasmann"
Search Results
2. Preliminary Evaluation of Automated Speech Recognition Apps for the Hearing Impaired and Deaf
- Author
-
Leontien Pragt, Peter van Hengel, Dagmar Grob, and Jan-Willem A. Wasmann
- Subjects
automated speech audiometry, automatic speech recognition (ASR), evaluation metric, hearing impairment, speech-to-text, voice-to-text technology, Medicine, Computer science
- Abstract
Objective: Automated speech recognition (ASR) systems have become increasingly sophisticated, accurate, and deployable on many digital devices, including smartphones. This pilot study examines the speech recognition performance of ASR apps using audiological speech tests. In addition, we compare ASR speech recognition performance to that of normal-hearing and hearing-impaired listeners and evaluate whether standard clinical audiological tests are a meaningful and quick measure of the performance of ASR apps.
Methods: Four apps were tested on a smartphone: AVA, Earfy, Live Transcribe, and Speechy. The Dutch audiological speech tests performed were speech audiometry in quiet (Dutch CNC test), the Digits-in-Noise (DIN) test with steady-state speech-shaped noise, and sentences in quiet and in long-term average speech-shaped spectrum noise (Plomp test). For comparison, the apps' ability to transcribe a spoken dialogue (Dutch and English) was tested.
Results: All apps scored at least 50% phonemes correct on the Dutch CNC test at a conversational speech intensity level (65 dB SPL) and achieved 90–100% phoneme recognition at higher intensity levels. On the DIN test, AVA and Live Transcribe had the lowest (best) signal-to-noise ratio, +8 dB. The lowest signal-to-noise ratio measured with the Plomp test was +8 to +9 dB, for Earfy (Android) and Live Transcribe (Android). Overall, the word error rate for the dialogue in English (19–34%) was lower (better) than for the Dutch dialogue (25–66%).
Conclusion: The performance of the apps was limited on audiological tests that provide little linguistic context or use low signal-to-noise ratios. On Dutch audiological speech tests in quiet, the ASR apps performed similarly to a person with a moderate hearing loss. In noise, the ASR apps performed more poorly than most profoundly deaf people using a hearing aid or cochlear implant. Adding new performance metrics, including the semantic difference as a function of SNR and reverberation time, could help to monitor and further improve ASR performance.
- Published
- 2022
- Full Text
- View/download PDF
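The abstract above reports word error rates for the transcribed dialogues. As a point of reference only (this is a minimal sketch of how WER is commonly computed via word-level edit distance, not code from the paper), the metric counts substitutions, insertions, and deletions relative to the reference transcript:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate = word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for the Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Example: two errors ("het" deleted, "huis" -> "thuis") over five reference words -> 0.4
print(word_error_rate("ik ga naar het huis", "ik ga naar thuis"))
```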
3. The Rise of AI Chatbots in Hearing Health Care
- Author
-
De Wet Swanepoel, Vinaya Manchaiah, and Jan-Willem A. Wasmann
- Subjects
Speech and Hearing
- Published
- 2023
- Full Text
- View/download PDF
4. Connected Cochlear Implant Care: Anywhere & Anytime
- Author
-
Jan-Willem A. Wasmann, Wendy Huinck, and Cris Lanting
- Abstract
Objectives: The stability of remote testing in cochlear implant (CI) care was studied by examining the influence of time of day, listener fatigue, and motivation on the outcomes of the aided thresholds test (ATT) and the digit triplets test (DTT) in CI recipients who self-tested at home on a smartphone or tablet.
Design: A single-center repeated-measures cohort study (n = 50 adult CI recipients). The ATT and DTT were administered at home ten times, with nine of these sessions planned within a period of eight days. Outcomes were modeled with linear mixed models as a function of time of day, momentary motivation, listeners' task-related fatigue, and chronotype (i.e., a person's preference for morning or evening due to the sleep-wake cycle). Additional factors included aided monosyllabic word recognition in quiet, daily-life fatigue, age, and CI experience.
Results: Out of 500 planned measurements, 407 ATTs and 476 DTTs were completed. ATT thresholds were stable across sessions. The DTT model explained 74% of the total variance observed; 58% of the total variance was explained by individual differences in participants' DTT performance. For each 10% increase in word recognition in quiet, the DTT speech reception threshold (SRT) improved by an average of 1.6 dB. The DTT SRT improved on average by 0.1 dB per repeated session and correlated with the number of successful DTTs per participant. There was no significant time-of-day effect on auditory performance in the at-home tests.
Conclusions: This study is one of the first to report on the validity and stability of remote assessments in CI recipients and identified relevant factors. CI recipients can self-test at any waking hour to monitor performance via smartphone or tablet. Motivation, task-related fatigue, and chronotype did not affect ATT or DTT outcomes in the studied cohort. Word recognition in quiet is a good predictor for deciding whether the DTT should be included in a remote test battery. At-home testing is reliable for cochlear implant recipients and offers an opportunity to provide care in a virtual hearing clinic setting.
- Published
- 2023
- Full Text
- View/download PDF
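The study above analyzes repeated DTT outcomes with linear mixed models. As an illustration of that general approach only (the column names srt, session, word_rec_quiet, time_of_day, and participant are hypothetical placeholders, and this is not the authors' code), a random-intercept model per participant could be fitted like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per completed DTT session per participant.
# Column names are hypothetical placeholders for illustration.
df = pd.read_csv("dtt_sessions.csv")

# Fixed effects: repeated session, word recognition in quiet, time of day;
# random intercept per participant captures individual differences.
model = smf.mixedlm(
    "srt ~ session + word_rec_quiet + time_of_day",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```

A random intercept per participant is what lets the model attribute a share of the total variance to stable individual differences, as reported in the abstract.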
5. Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age
- Author
-
David R. Moore, Emmanuel A. M. Mylanus, Wendy J. Huinck, Jan-Willem A. Wasmann, Cris P. Lanting, Paul J. Govaerts, Dennis L. Barbour, Jeroen W. M. van der Laak, and De Wet Swanepoel
- Subjects
Hearing loss, Artificial Intelligence, Health care, Digital transformation, Interoperability, Audiology, Speech and Hearing, Otorhinolaryngology
- Abstract
The global digital transformation enables computational audiology for advanced clinical applications that can reduce the global burden of hearing loss. In this article, we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve access, precision, and efficiency of hearing health care services. Also, we raise awareness of risks that must be addressed to enable a safe digital transformation in audiology. We envision a future where computational audiology is implemented via interoperable systems using shared data and where health care providers adopt expanded roles within a network of distributed expertise. This effort should take place in a health care system where privacy, responsibility of each stakeholder, and patients' safety and autonomy are all guarded by design.
- Published
- 2021
7. Performance and Potential of Machine Learning Audiometry
- Author
-
Jan-Willem A. Wasmann and Dennis L. Barbour
- Subjects
Speech and Hearing, Computer science, Audiometry, Audiology
- Published
- 2021
- Full Text
- View/download PDF
8. Automated and machine learning approaches in diagnostic hearing assessment: a scoping review
- Author
-
Leontien Pragt, Jan-Willem A. Wasmann, Robert Eikelboom, and De Wet Swanepoel
- Abstract
Hearing loss affects one in five people worldwide and is estimated to affect one in four by 2050. Treatment relies on accurate diagnosis of hearing loss, but this first step is out of reach for more than 80% of those affected. Increasingly automated digital approaches are being developed for self-administered hearing assessment without professionals' direct involvement. This scoping review provides an overview of automated approaches, based on 56 reports from 2012 until June 2021, adding to the 29 published prior to 2012. Twenty-seven automated approaches were identified. An increasing number of approaches report accuracy similar to that of manual hearing assessments. Machine learning approaches are more efficient, and personal digital devices make assessments more affordable and accessible. Validity can be enhanced by quality surveillance, including noise monitoring and detection of inconclusive results. Employed within identified limitations, automated hearing assessments can support task-shifting, self-care, and clinical care pathways.
- Published
- 2021
- Full Text
- View/download PDF
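The review above notes that machine learning approaches can make hearing assessment more efficient. As one illustration of the general idea (not a method taken from the review, and with synthetic data and a simplified threshold rule), a Gaussian process classifier can model the probability of detecting a tone as a function of frequency and level, from which an audiogram-like threshold can be read off:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Synthetic trials: (log2 frequency in kHz, level in dB HL) -> 1 = heard, 0 = not heard.
rng = np.random.default_rng(0)
freqs = rng.choice([0.5, 1.0, 2.0, 4.0, 8.0], size=60)
levels = rng.uniform(-10, 80, size=60)
true_threshold = 20 + 10 * np.log2(freqs)           # toy sloping hearing loss
responses = (levels > true_threshold).astype(int)    # deterministic toy listener

X = np.column_stack([np.log2(freqs), levels])
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[1.0, 10.0]))
gpc.fit(X, responses)

# Estimated threshold at 2 kHz: lowest level where P(heard) reaches 0.5.
test_levels = np.arange(-10, 81, 1.0)
grid = np.column_stack([np.full_like(test_levels, np.log2(2.0)), test_levels])
p_heard = gpc.predict_proba(grid)[:, 1]
print("Estimated 2 kHz threshold (dB HL):", test_levels[np.argmax(p_heard >= 0.5)])
```

In practice, efficiency comes from combining such a model with active learning, i.e., presenting the next tone where the model is most uncertain rather than on a fixed grid.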
9. Emerging Hearing Assessment Technologies for Patient Care
- Author
-
Jan-Willem A. Wasmann and Dennis L. Barbour
- Subjects
Speech and Hearing, Patient care
- Published
- 2021
- Full Text
- View/download PDF
10. Comment on 'Baha Skin Complications in the Pediatric Population: Systematic Review with Meta-Analysis'
- Author
-
Myrthe K. S. Hol, Maarten A. Vijverberg, Emmanuel A. M. Mylanus, Arjan J. Bosman, Coosje J. I. Caspers, Jan-Willem A. Wasmann, and Ivo J. Kruyt
- Subjects
Otorhinolaryngology, Meta-analysis, Neurology (clinical), Sensory Systems, Pediatric population
- Abstract
Item does not contain full text.
- Published
- 2019