129 results on '"Andrew J. Oxenham"'
Search Results
2. Envelope-following responses to single and double amplitude modulation: No correlate of modulation masking
- Author
-
Magdalena Wojtczak, PuiYii Goh, and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Abstract
Magneto- and electroencephalographic (M/EEG) measures of neural responses to speech and other natural sounds provide a noninvasive window into the neural tracking of their temporal dynamics. However, the interpretation of these measures and their relation to perception remain unclear. In this study, amplitude modulation (AM) of a low-pass noise carrier (cutoff 4 kHz) was used to measure EEG envelope following responses (EFRs) for two AM rates presented simultaneously or in isolation. The spectral separations between the two AM rates were selected to produce varying degrees of perceptual modulation masking. The AM rates used spanned the range from 8 to 203 Hz, reflecting cortical and subcortical EFR generators. For double-rate AM, the EFRs had two distinct spectral peaks, corresponding to the component AM rates. In all conditions, the peak EFR amplitudes for two simultaneous AM rates did not differ significantly from those presented singly, even when the two rates produced significant perceptual masking. The results show that EEG measures of neural responses synchronized to temporal envelope fluctuations fail to reflect the important perceptual phenomenon of modulation masking. [Work supported by NIH R01 DC012262 (Oxenham) and R01 DC015987 (Wojtczak).]
- Published
- 2023
- Full Text
- View/download PDF
3. Graduate programs related to acoustics at the University of Minnesota
- Author
-
Kristi Oeding, Kelly L. Whiteford, Peggy Nelson, Hubert H. Lim, Mark A. Bee, and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Abstract
The University of Minnesota (UMN) has graduate programs that span the areas of Animal Bioacoustics, Psychological and Physiological Acoustics, and Speech Communication. Degrees are offered in Psychology (PhD), Speech-Language-Hearing Sciences (MA in speech-language pathology, AuD, and PhD in speech-language-hearing sciences), Biomedical Engineering (MS and PhD), Ecology, Evolution, and Behavior (PhD), and Neuroscience (PhD). Faculty across departments have a shared interest in understanding how the ear and brain work together to process sound and in developing new technologies and approaches for treating hearing disorders. Located on campus is the Center for Applied and Translational Sensory Science (CATSS), which provides opportunities for interdisciplinary collaborations across departments and industry to understand and address sensory impairments. Within CATSS is the Multi-Sensory Perception Lab, which houses shared equipment, including eye trackers and electroencephalography. The Center for Magnetic Resonance Research houses several ultrahigh field magnets, while the Center for Neural Engineering and affiliated faculty labs also house multiple neuromodulation and neurorecording devices to interact with and monitor neural activity in humans and animals.
- Published
- 2022
- Full Text
- View/download PDF
4. The role of pitch and harmonic cancellation when listening to speech in harmonic background sounds
- Author
-
Daniel R. Guest and Andrew J. Oxenham
- Subjects
Adult, Male, Adolescent, Acoustics and Ultrasonics, Speech recognition, Young Adult, Tone (musical instrument), Hearing, Arts and Humanities (miscellaneous), Octave, Humans, Mathematics, Speech Reception Threshold Test, Speech Intelligibility, Fundamental frequency, Filter (signal processing), Psychological and Physiological Acoustics, Amplitude, Harmonics, Auditory Perception, Speech Perception, Harmonic, Female, Noise, Perceptual Masking - Abstract
Fundamental frequency differences (ΔF0) between competing talkers aid in the perceptual segregation of the talkers (ΔF0 benefit), but the underlying mechanisms remain incompletely understood. A model of ΔF0 benefit based on harmonic cancellation proposes that a masker's periodicity can be used to cancel (i.e., filter out) its neural representation. Earlier work suggested that an octave ΔF0 provided little benefit, an effect predicted by harmonic cancellation due to the shared periodicity of masker and target. Alternatively, this effect can be explained by spectral overlap between the harmonic components of the target and masker. To assess these competing explanations, speech intelligibility of a monotonized target talker, masked by a speech-shaped harmonic complex tone, was measured as a function of ΔF0, masker spectrum (all harmonics or odd harmonics only), and masker temporal envelope (amplitude modulated or unmodulated). Removal of the masker's even harmonics when the target was one octave above the masker improved speech reception thresholds by about 5 dB. Because this manipulation eliminated spectral overlap between target and masker components but preserved shared periodicity, the finding is consistent with the explanation for the lack of ΔF0 benefit at the octave based on spectral overlap, but not with the explanation based on harmonic cancellation.
- Published
- 2019
- Full Text
- View/download PDF
5. Pitch discrimination with mixtures of three concurrent harmonic complexes
- Author
-
Jackson Graves and Andrew J. Oxenham
- Subjects
Adult, Male, Acoustics and Ultrasonics, Acoustics, Semitone, Pitch Discrimination, Musical acoustics, Tone (musical instrument), Arts and Humanities (miscellaneous), Band-pass filter, Humans, Mixing (physics), Physics, Psychological and Physiological Acoustics, Acoustic Stimulation, Harmonics, Harmonic, Female, Psychoacoustics - Abstract
In natural listening contexts, especially in music, it is common to hear three or more simultaneous pitches, but few empirical or theoretical studies have addressed how this is achieved. Place and pattern-recognition theories of pitch require at least some harmonics to be spectrally resolved for pitch to be extracted, but it is unclear how often such conditions exist when multiple complex tones are presented together. In three behavioral experiments, mixtures of three concurrent complexes were filtered into a single bandpass spectral region, and the relationship between the fundamental frequencies and spectral region was varied in order to manipulate the extent to which harmonics were resolved either before or after mixing. In experiment 1, listeners discriminated major from minor triads (a difference of 1 semitone in one note of the triad). In experiments 2 and 3, listeners compared the pitch of a probe tone with that of a subsequent target, embedded within two other tones. All three experiments demonstrated above-chance performance, even in conditions where the combinations of harmonic components were unlikely to be resolved after mixing, suggesting that fully resolved harmonics may not be necessary to extract the pitch from multiple simultaneous complexes.
- Published
- 2019
- Full Text
- View/download PDF
6. Role of perceptual integration in pitch discrimination at high frequencies
- Author
-
Anahita H. Mehta and Andrew J. Oxenham
- Subjects
Pulmonary and Respiratory Medicine, Pediatrics, Perinatology and Child Health - Abstract
At very high frequencies, fundamental-frequency difference limens (F0DLs) for five-component harmonic complex tones can be better than predicted by optimal integration of information, assuming performance is limited by noise at the peripheral level, but are in line with predictions based on more central sources of noise. This study investigates whether there is a minimum number of harmonic components needed for such super-optimal integration effects and whether harmonic range or inharmonicity affects this super-optimal integration. Results show super-optimal integration, even with two harmonic components and for most combinations of consecutive harmonic, but not inharmonic, components.
- Published
- 2022
- Full Text
- View/download PDF
7. Profile analysis and ripple discrimination at high frequencies
- Author
-
Daniel R. Guest and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Abstract
Profile analysis and spectrotemporal ripple discrimination are psychoacoustic tasks used to probe the auditory system’s spectral and intensity resolution. In the present experiment, we compared performance at low and high frequencies in four related psychoacoustic tasks: level discrimination, profile analysis, spectrotemporal ripple detection, and spectrotemporal ripple direction discrimination. The level discrimination and ripple detection tasks were designed so that cues from single auditory filters were sufficient for performing the tasks. The profile analysis and ripple direction discrimination tasks were designed to render cues from single auditory filters insufficient. Based on data from a group of ∼20 young normal-hearing listeners, we found that profile analysis was markedly worse at high frequencies than at low frequencies, even when accounting for differences in level discrimination between the two frequency regions. In contrast, no significant differences were observed between low and high frequencies for either the ripple detection or ripple direction discrimination tasks. We further analyzed our behavioral data using computational simulations of auditory-nerve and midbrain responses. This analysis suggested that the differences in performance at low and high frequencies cannot be explained at the level of the auditory periphery, but instead emerge at more central loci. [Work supported by NIH grants F31 DC019247 and R01 DC005216.]
- Published
- 2022
- Full Text
- View/download PDF
8. Measuring harmonic benefit in musicians and non-musicians in several tasks
- Author
-
Daniel R. Guest, Neha Rajappa, and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Abstract
Prior work has demonstrated that harmonic tones are easier to detect in noise and yield better F0 discrimination in noise than inharmonic tones. These effects, referred to as harmonic benefit, appear to be approximately the same size in musicians and non-musicians, despite musicians’ overall better pitch discrimination. The present study aimed to replicate these findings and extend them to include measurements of harmonic benefit in other tasks. Non-musicians and musicians were compared in four tasks: detection in noise, F0 discrimination, FM detection, and AM detection. The stimuli in each task were either harmonic or inharmonic complex tones and were presented in threshold-equalizing noise at a range of signal-to-noise ratios. We found that harmonic benefit for F0 discrimination was large and remained large even after accounting for differences in the detectability of harmonic and inharmonic tones. In contrast, harmonic benefit was small for FM and AM detection and could mostly be accounted for by differences in detectability. In contrast to prior studies, we found that musicians showed a larger harmonic benefit than non-musicians. These results provide insight into how musical training may specialize the auditory system for the processing of harmonicity and pitch. [Work supported by NIH grants F31 DC019247 and R01 DC005216.]
- Published
- 2022
- Full Text
- View/download PDF
9. Adaptation effects along a voice/non-voice continuum
- Author
-
Zi Gao and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Abstract
Adaptation, the selective decrease in neural responses following repeated stimulation, is likely important for perception in the face of variable sensory input. Previous researchers have observed contrastive adaptation effects involving speaker identity, gender, and vowels. The present study investigated whether such contrast effects can occur between voice and non-voice stimuli. A 10-step continuum between “voice” (/a/, /o/, or /u/ vowels) and “instrument” (bassoon, horn, or viola) was generated for each possible pair. In each trial, an adaptor, either voice or instrument, was played four times, followed by a test stimulus from along the appropriate continuum. When trials with voice and instrumental adaptors were grouped into separate blocks, strong contrastive adaptation effects were observed, with the test stimuli more likely to be identified as voice following instrumental adaptors and vice versa. Preliminary results comparing interleaved with blocked conditions and same-ear with different-ear conditions suggest longer-term build-up and persistence effects that may extend across both ears. [Work supported by NIH grant R01 DC012262.]
- Published
- 2022
- Full Text
- View/download PDF
10. Fundamental-frequency discrimination based on temporal-envelope cues: Effects of bandwidth and interference
- Author
-
Anahita H. Mehta and Andrew J. Oxenham
- Subjects
Male, Time Factors, Speech perception, Adolescent, Acoustics and Ultrasonics, Computer science, Minnesota, Speech recognition, Hospitals, University, Pitch Discrimination, Young Adult, Arts and Humanities (miscellaneous), Humans, Pitch Perception, Extramural, Bandwidth (signal processing), Auditory Threshold, Fundamental frequency, JASA Express Letters, Speech enhancement, Cochlear Implants, Acoustic Stimulation, Auditory Perception, Speech Perception, Female, Cues, Music - Abstract
Both music and speech perception rely on hearing out one pitch in the presence of others. Pitch discrimination of narrowband sounds based only on temporal-envelope cues is rendered nearly impossible by introducing interferers in both normal-hearing listeners and cochlear-implant (CI) users. This study tested whether performance improves in normal-hearing listeners if the target is presented over a broad spectral region. The results indicate that performance is still strongly affected by spectrally remote interferers, despite increases in bandwidth, suggesting that envelope-based pitch is unlikely to allow CI users to perceive pitch when multiple harmonic sounds are presented at once.
- Published
- 2018
- Full Text
- View/download PDF
11. Examining replicability of an otoacoustic measure of cochlear function during selective attention
- Author
-
Andrew J. Oxenham, Jordan A. Beim, and Magdalena Wojtczak
- Subjects
Adult, Male, Auditory Pathways, Visual perception, Acoustics and Ultrasonics, Otoacoustic Emissions, Spontaneous, Stimulus (physiology), Audiology, Bootstrap analysis, Cochlear function, Arts and Humanities (miscellaneous), Reflex, Attentional modulation, Humans, Attention, Selective attention, Cochlea, Reproducibility of Results, Middle Aged, Psychological and Physiological Acoustics, Acoustic Stimulation, Auditory Perception, Speech Perception, Visual Perception, Female, Psychology, Photic Stimulation - Abstract
Attention to a target stimulus within a complex scene often results in enhanced cortical representations of the target relative to the background. It remains unclear where along the auditory pathways attentional effects can first be measured. Anatomy suggests that attentional modulation could occur through corticofugal connections extending as far as the cochlea itself. Earlier attempts to investigate the effects of attention on human cochlear processing have revealed small and inconsistent effects. In this study, stimulus-frequency otoacoustic emissions were recorded from a total of 30 human participants as they performed tasks that required sustained selective attention to auditory or visual stimuli. In the first sample of 15 participants, emission magnitudes were significantly weaker when participants attended to the visual stimuli than when they attended to the auditory stimuli, by an average of 5.4 dB. However, no such effect was found in the second sample of 15 participants. When the data were pooled across samples, the average attentional effect was significant, but small (2.48 dB), with 12 of 30 listeners showing a significant effect, based on bootstrap analysis of the individual data. The results highlight the need for considering sources of individual differences and using large sample sizes in future investigations.
- Published
- 2018
- Full Text
- View/download PDF
12. Auditory enhancement under simultaneous masking in normal-hearing and hearing-impaired listeners
- Author
-
Heather A. Kreft, Magdalena Wojtczak, and Andrew J. Oxenham
- Subjects
Male, Auditory perception, Masking (art), Time Factors, Acoustics and Ultrasonics, Hearing loss, Hearing Loss, Sensorineural, Perceptual Masking, Audiology, Hearing, Arts and Humanities (miscellaneous), Adaptation, Psychological, Sensation, Humans, Sound pressure, Aged, Auditory Threshold, Psychological and Physiological Acoustics, Noise, Persons With Hearing Impairments, Acoustic Stimulation, Case-Control Studies, Auditory Perception, Audiometry, Pure-Tone, Female, Audiometry, Psychology - Abstract
Auditory enhancement, where a target sound within a masker is rendered more audible by the prior presentation of the masker alone, may play an important role in auditory perception under variable everyday acoustic conditions. Cochlear hearing loss may reduce enhancement effects, potentially contributing to the difficulties experienced by hearing-impaired (HI) individuals in noisy and reverberant environments. However, it remains unknown whether, and by how much, enhancement under simultaneous masking is reduced in HI listeners. Enhancement of a pure tone under simultaneous masking with a multi-tone masker was measured in HI listeners and age-matched normal-hearing (NH) listeners as a function of the spectral notch width of the masker, using stimuli at equal sensation levels as well as at equal sound pressure levels, with the stimuli presented in noise to the NH listeners to maintain equal sensation levels between listener groups. The results showed that HI listeners exhibited some enhancement in all conditions. However, even when conditions were made as comparable as possible, in terms of effective spectral notch width and presentation level, the enhancement effect in HI listeners under simultaneous masking was reduced relative to that observed in NH listeners.
- Published
- 2018
- Full Text
- View/download PDF
13. A review of the effect of musical training on neural and perceptual coding of speech
- Author
-
Andrew J. Oxenham, Angela Sim, and Kelly L. Whiteford
- Subjects
Open science, Acoustics and Ultrasonics, Frequency following response, Stimulus (psychology), Arts and Humanities (miscellaneous), Perception, Generalizability theory, Psychology, Association (psychology), Neural coding, Cognitive psychology - Abstract
Several studies have reported enhanced neural coding of stimulus periodicity (frequency following response; FFR) and/or shorter neural response latencies to speech sounds in musicians than in non-musicians. Such enhanced early encoding may underlie the reported musician advantage for perceiving speech in noise. If such perceptual benefits are confirmed, it is possible that musical training may be an effective intervention for offsetting the decline of this important skill with age. This presentation will review evidence for and against the musician advantage in the neural coding and perception of speech, and will highlight several reasons to be cautious of the generalizability of the musician advantage in the published literature, including (1) small sample sizes, (2) varying definitions of musician and non-musician, (3) dichotomous between-group comparisons (e.g., comparing professional musicians to non-musicians), (4) possible speech-corpus specific effects, and (5) the multitude of degrees of freedom in FFR analyses. Efforts towards large-sample replications that use open science techniques, including preregistered hypotheses, methods and analyses, as well as open source data, should bring clarity to the association between musical expertise and the neural coding and perception of speech in noise. [Work supported by NIH under Grant R01 DC005216 and NSF-BCS under Grant 1840818.]
- Published
- 2021
- Full Text
- View/download PDF
14. Dissociating sensitivity from bias in the mini profile of music perception skills
- Author
-
PuiYii Goh, Andrew J. Oxenham, Kara Stevens, and Kelly L. Whiteford
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Music perception, Sensitivity (control systems), Audiology, Psychology - Published
- 2021
- Full Text
- View/download PDF
15. Speech intelligibility is best predicted by intensity, not cochlea-scaled entropy
- Author
-
Heather A. Kreft, Jeffrey E. Boucher, and Andrew J. Oxenham
- Subjects
Adult, Male, Adolescent, Acoustics and Ultrasonics, Entropy, Loudness Perception, Acoustics, Speech recognition, Intelligibility (communication), Young Adult, Arts and Humanities (miscellaneous), Humans, Entropy (information theory), Mathematics, Relative intensity, Speech Intelligibility, Signal Processing, Computer-Assisted, JASA Express Letters, Cochlea, Amplitude, Auditory Perception, Female, Noise, Perceptual Masking, Algorithms - Abstract
Cochlea-scaled entropy (CSE) is a measure of spectro-temporal change that has been reported to predict the contribution of speech segments to overall intelligibility. This paper confirms that CSE is highly correlated with intensity, making it impossible to determine empirically whether it is CSE or simply intensity that determines speech importance. A more perceptually relevant version of CSE that uses dB-scaled differences, rather than differences in linear amplitude, failed to predict speech intelligibility. Overall, a parsimonious account of the available data is that the importance of speech segments to overall intelligibility is best predicted by their relative intensity, not by CSE.
- Published
- 2017
- Full Text
- View/download PDF
16. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues
- Author
-
Marion David, Nicolas Grimault, Mathieu Lavandier, and Andrew J. Oxenham
- Subjects
Adult, Male, Time Factors, Acoustics and Ultrasonics, Computer science, Acoustics, Speech recognition, Speech sounds, Stimulus (physiology), Hearing, Arts and Humanities (miscellaneous), Phonetics, Humans, Sound Localization, [SPI.ACOU]Engineering Sciences [physics]/Acoustics [physics.class-ph], Analysis of Variance, Middle Aged, Psychological and Physiological Acoustics, Stimulus Variability, Speech processing, [PHYS.MECA.ACOU]Physics [physics]/Mechanics [physics]/Acoustics [physics.class-ph], Noise, Acoustic Stimulation, Speech Perception, Spatial cues, Female, Cues, Perceptual Masking - Abstract
Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.
- Published
- 2017
- Full Text
- View/download PDF
17. Graduate programs at the University of Minnesota related to acoustics
- Author
-
Andrew J. Oxenham, Hubert H. Lim, Peggy B. Nelson, Mark A. Bee, and Kelly L. Whiteford
- Subjects
Medical education, Engineering, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Published
- 2020
- Full Text
- View/download PDF
18. Perception of melodies and triads at high frequencies
- Author
-
Andrew J. Oxenham and Daniel R. Guest
- Subjects
Melody, Range (music), Acoustics and Ultrasonics, Major and minor, Pitch perception, Audiology, Phase locking, Arts and Humanities (miscellaneous), Perception, Harmonic, Mathematics - Abstract
Accurate pitch perception is possible for harmonic complex tones with fundamental frequencies (F0s) in the musical range, even when the audible harmonics lie beyond the putative limits of neural phase locking (above about 6 kHz). However, it is unknown whether this basic pitch perception extends to the perception of multiple simultaneous pitches. To address this, we measured (1) melody discrimination with and without a complex-tone masker and (2) major-minor discrimination for triads and arpeggios composed of complex tones with low (∼280 Hz) or high (∼1400 Hz) F0s. The tones were filtered to ensure that in high-F0 conditions only harmonics beyond the limits of phase locking were audible. Melody perception was poorer for isolated high-F0 tones than for isolated low-F0 tones, although performance was above chance in both cases. Adding a complex-tone masker in the same spectral region degraded performance for low- and high-F0 tones. Listeners could discriminate major and minor triads and arpeggios for low-F0 tones. For high-F0 tones, some listeners could discriminate major and minor arpeggios but none could discriminate major and minor triads. These results will help elucidate whether different mechanisms underlie the perception of combinations of complex tones at low and high frequencies. [Work supported by NIH R01DC005216 and NSF NRT-UtB1734815.]
- Published
- 2020
- Full Text
- View/download PDF
19. Investigating the parameters of temporal integration in pitch
- Author
-
Anahita H. Mehta and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous) - Published
- 2020
- Full Text
- View/download PDF
20. Benefit of tonal context on relative pitch perception in musicians and non-musicians
- Author
-
Andrew J. Oxenham and Sara Miay Kim Madsen
- Subjects
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Perception, Context (language use), Psychology, Relative pitch, Cognitive psychology - Published
- 2020
- Full Text
- View/download PDF
21. Using individual differences to test the role of temporal and place cues in coding frequency modulation
- Author
-
Andrew J. Oxenham and Kelly L. Whiteford
- Subjects
Adult, Male, Auditory perception, Periodicity, Time Factors, Adolescent, Acoustics and Ultrasonics, Acoustics, Models, Neurological, Individuality, Differential Threshold, Interaural time difference, Audiology, Dichotic Listening Tests, Pitch Discrimination, Amplitude modulation, Young Adult, Arts and Humanities (miscellaneous), Humans, Auditory system, Psychoacoustics, Cochlear Nerve, Mathematics, Principal Component Analysis, Dichotic listening, Auditory Threshold, Psychological and Physiological Acoustics, Cochlea, Acoustic Stimulation, Auditory Perception, Female, Cues, Tonotopy, Perceptual Masking, Frequency modulation - Abstract
The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.
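The slow- and fast-rate FM stimuli at a 500-Hz carrier can be sketched as follows. This is a minimal illustration with an assumed sinusoidal modulator and arbitrary modulation depths, not the exact depths or levels used in the study:

```python
import numpy as np

def fm_tone(fc=500.0, fm_rate=1.0, beta=0.1, dur=1.0, fs=48000):
    """Sinusoidally frequency-modulated tone: excursion +/- beta*fc around fc."""
    t = np.arange(int(dur * fs)) / fs
    # Phase is the integral of instantaneous frequency fc*(1 + beta*cos(2*pi*fm_rate*t))
    phase = 2 * np.pi * fc * t + (beta * fc / fm_rate) * np.sin(2 * np.pi * fm_rate * t)
    return np.sin(phase)

def am_tone(fc=500.0, am_rate=1.0, m=0.5, dur=1.0, fs=48000):
    """Sinusoidally amplitude-modulated tone with modulation depth m."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + m * np.sin(2 * np.pi * am_rate * t)) * np.sin(2 * np.pi * fc * t)

slow_fm = fm_tone(fm_rate=1.0)   # slow rate: candidate for phase-locked (time) coding
fast_fm = fm_tone(fm_rate=20.0)  # fast rate: candidate for rate-place coding
```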
- Published
- 2015
- Full Text
- View/download PDF
22. Assessing the effects of temporal coherence on auditory stream formation through comodulation masking release
- Author
-
Simon Krogholt Christiansen and Andrew J. Oxenham
- Subjects
Psychological Acoustics [66] ,Adult ,Male ,Auditory perception ,Auditory stream ,Time Factors ,Adolescent ,Acoustics and Ultrasonics ,Acoustics ,Perceptual Masking ,Motion ,Young Adult ,Arts and Humanities (miscellaneous) ,Flanking maneuver ,Humans ,Psychoacoustics ,Narrowband noise ,Physics ,Auditory Threshold ,Time perception ,Noise ,Sound ,Acoustic Stimulation ,Time Perception ,Auditory Perception ,Audiometry, Pure-Tone ,Female ,Cues - Abstract
Recent studies of auditory streaming have suggested that repeated synchronous onsets and offsets over time, referred to as "temporal coherence," provide a strong grouping cue between acoustic components, even when they are spectrally remote. This study uses a measure of auditory stream formation, based on comodulation masking release (CMR), to assess the conditions under which a loss of temporal coherence across frequency can lead to auditory stream segregation. The measure relies on the assumption that the CMR, produced by flanking bands remote from the masker and target frequency, only occurs if the masking and flanking bands form part of the same perceptual stream. The masking and flanking bands consisted of sequences of narrowband noise bursts, and the temporal coherence between the masking and flanking bursts was manipulated in two ways: (a) By introducing a fixed temporal offset between the flanking and masking bands that varied from zero to 60 ms and (b) by presenting the flanking and masking bursts at different temporal rates, so that the asynchronies varied from burst to burst. The results showed reduced CMR in all conditions where the flanking and masking bands were temporally incoherent, in line with expectations of the temporal coherence hypothesis.
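The masker and flanking-band burst sequences with a fixed temporal offset (manipulation (a) in the abstract) can be sketched like this; the burst and period durations are illustrative assumptions, and the narrowband filtering of each burst is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def burst_sequence(n_bursts=10, burst_ms=25, period_ms=100, offset_ms=0, fs=48000):
    """Sequence of gated noise bursts; offset_ms delays every burst onset."""
    total = int((offset_ms + n_bursts * period_ms) * fs / 1000)
    x = np.zeros(total)
    burst_len = int(burst_ms * fs / 1000)
    for i in range(n_bursts):
        start = int((offset_ms + i * period_ms) * fs / 1000)
        x[start:start + burst_len] = rng.standard_normal(burst_len)
    return x

masker = burst_sequence()
flank_coherent = burst_sequence(offset_ms=0)  # synchronous with the masker bursts
flank_offset = burst_sequence(offset_ms=60)   # 60-ms offset: temporally incoherent
```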
- Published
- 2014
- Full Text
- View/download PDF
23. Acoustical Society of America Gold Medal 2014: Brian C. J. Moore
- Author
-
Robert P. Carlyon and Andrew J. Oxenham
- Subjects
geography ,geography.geographical_feature_category ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,media_common.quotation_subject ,Spring (hydrology) ,Art history ,Art ,Gold medal ,media_common - Abstract
The Gold Medal is presented in the spring to a member of the Society, without age limitation, for contributions to acoustics. The first Gold Medal was presented in 1954 on the occasion of the Society’s Twenty-Fifth Anniversary Celebration and biennially until 1981. It is now an annual award.
- Published
- 2014
- Full Text
- View/download PDF
24. Symmetric interactions and interference between pitch and timbre
- Author
-
Emily J. Allen and Andrew J. Oxenham
- Subjects
Adult ,Male ,Psychological Acoustics [66] ,Sound Spectrography ,Time Factors ,Spectral shape analysis ,Acoustics and Ultrasonics ,Acoustics ,Spectral centroid ,Young Adult ,Tone (musical instrument) ,Discrimination, Psychological ,Audiometry ,Arts and Humanities (miscellaneous) ,Humans ,Sensitivity (control systems) ,Pitch Perception ,Mathematics ,Analysis of Variance ,Auditory Threshold ,Fundamental frequency ,Middle Aged ,Acoustic Stimulation ,Pitch Discrimination ,Harmonic ,Female ,Timbre ,Music ,Psychoacoustics - Abstract
Variations in the spectral shape of harmonic tone complexes are perceived as timbre changes and can lead to poorer fundamental frequency (F0) or pitch discrimination. Less is known about the effects of F0 variations on spectral shape discrimination. The aims of the study were to determine whether the interactions between pitch and timbre are symmetric, and to test whether musical training affects listeners' ability to ignore variations in irrelevant perceptual dimensions. Difference limens (DLs) for F0 were measured with and without random, concurrent, variations in spectral centroid, and vice versa. Additionally, sensitivity was measured as the target parameter and the interfering parameter varied by the same amount, in terms of individual DLs. Results showed significant and similar interference between pitch (F0) and timbre (spectral centroid) dimensions, with upward spectral motion often confused for upward F0 motion, and vice versa. Musicians had better F0DLs than non-musicians on average, but similar spectral centroid DLs. Both groups showed similar interference effects, in terms of decreased sensitivity, in both dimensions. Results reveal symmetry in the interference effects between pitch and timbre, once differences in sensitivity between dimensions and subjects are controlled. Musical training does not reliably help to overcome these effects.
- Published
- 2014
- Full Text
- View/download PDF
25. Psychological and Physiological Acoustics: From sound to sensation
- Author
-
Andrew J. Oxenham
- Subjects
Sound (medical instrument) ,Auditory perception ,Acoustics and Ultrasonics ,media_common.quotation_subject ,Acoustics ,Cognition ,Profound hearing loss ,Presentation ,medicine.anatomical_structure ,Arts and Humanities (miscellaneous) ,Sensation ,otorhinolaryngologic diseases ,medicine ,Auditory system ,Inner ear ,Psychology ,media_common - Abstract
The area of psychological and physiological acoustics encompasses a wide and multidisciplinary range of topics. It is concerned with questions of what happens to sound once it enters the auditory system, and how sound is processed to facilitate communication and navigation. Topics include the biomechanics of the middle and inner ear; the neuroscience of the auditory nerve, brainstem, and cortex; and behavioral studies of auditory perception and cognition. This presentation will provide an overview of some of the many areas currently under investigation, ranging from basic questions about the neural representations of different sound features to clinical applications, such as the development and improvement of hearing aids, as well as cochlear, brainstem, and even midbrain implants that bypass the peripheral auditory system to provide some hearing to people with profound hearing loss. [Work supported by NIH Grant No. R01 DC012262.]
- Published
- 2019
- Full Text
- View/download PDF
26. Sensitivity to AM incoherence is affected by center frequency and modulation rate
- Author
-
Andrew J. Oxenham and Kelly L. Whiteford
- Subjects
Physics ,Amplitude modulation ,Out of phase ,Optics ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,business.industry ,Center frequency ,business ,Frequency modulation ,Phase locking - Abstract
Fine-grained sensitivity to frequency modulation (FM) at slow rates and low-frequency carriers is thought to be due to auditory-nerve phase locking (time code). Alternatively, a unitary code for FM at all rates and carrier frequencies could be based on cochlear conversion of FM to amplitude modulation (AM) (place code). One weakness of the place-coding theory is it cannot readily explain rate- and carrier-dependent trends in FM sensitivity. This study asked whether FM trends could potentially be explained by sensitivity to two AM envelopes that are out of phase (incoherent AM) at separate cochlear locations, thereby simulating the effects of FM. AM discrimination was assessed for two-component complexes centered at low (500 and 1500 Hz) and high (7000 Hz) frequencies, spaced 2/3 or 4/3 octaves apart, and modulated at slow (2 Hz) and fast (20 Hz) rates. Coherent and incoherent two-component AM detection was assessed for the same conditions. Preliminary results show that sensitivity to AM incoherence is best at low center frequencies and slow rates, consistent with trends traditionally found in FM detection that have been attributed to time coding. Findings suggest time coding may not be necessary to explain trends in FM sensitivity. [Work supported by NIH Grant R01DC005216.]
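A sketch of the coherent vs. incoherent two-component AM stimuli described above, with an assumed modulation depth and antiphase convention for incoherent AM; component levels and onset ramps are omitted:

```python
import numpy as np

def two_component_am(center=500.0, spacing_oct=2/3, am_rate=2.0, m=0.5,
                     coherent=True, dur=1.0, fs=48000):
    """Two carriers straddling `center`; incoherent AM puts envelopes in antiphase."""
    t = np.arange(int(dur * fs)) / fs
    f_lo = center * 2 ** (-spacing_oct / 2)
    f_hi = center * 2 ** (spacing_oct / 2)
    env_lo = 1 + m * np.sin(2 * np.pi * am_rate * t)
    env_hi = 1 + m * np.sin(2 * np.pi * am_rate * t + (0.0 if coherent else np.pi))
    return env_lo * np.sin(2 * np.pi * f_lo * t) + env_hi * np.sin(2 * np.pi * f_hi * t)

coherent = two_component_am(coherent=True)
incoherent = two_component_am(coherent=False)  # crude simulation of FM-induced AM
```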
- Published
- 2019
- Full Text
- View/download PDF
27. Neural correlates of auditory enhancement
- Author
-
Anahita H. Mehta and Andrew J. Oxenham
- Subjects
Physics ,Neural correlates of consciousness ,Contrast enhancement ,Acoustics and Ultrasonics ,medicine.diagnostic_test ,media_common.quotation_subject ,Electroencephalography ,Subjective constancy ,Tone (musical instrument) ,Arts and Humanities (miscellaneous) ,Salience (neuroscience) ,Perception ,Modulation (music) ,medicine ,Neuroscience ,media_common - Abstract
Auditory enhancement is the increase in salience of a target embedded in a simultaneous masker that occurs when a copy of the masker, termed the precursor, is presented first. The effect reflects the general principle of contrast enhancement and may help in the perceptual constancy of speech under varying acoustic conditions. The physiological mechanisms underlying auditory enhancement remain unknown. This study investigated EEG responses under conditions that elicited perceptual enhancement. The target tone was amplitude-modulated at two modulation frequencies to target cortical (~40 Hz) and subcortical (100–200 Hz) responses. Measurements were made in either passive conditions or under active tasks to examine the potential effects of attention on the neural correlates of enhancement. Robust effects of enhancement were observed at the cortical level, replicating our earlier findings. Preliminary data under passive conditions also suggest a trend towards increased neural response to the enhanced target tone at frequencies exceeding 200 Hz, suggesting a subcortical contribution. The results suggest that this paradigm can be used to tap into the neural correlates of auditory enhancement at both cortical and subcortical levels simultaneously and show the potential for tapping into attentional modulation of auditory enhancement. [Work supported by NIH grant R01DC012262.]
- Published
- 2019
- Full Text
- View/download PDF
28. Distinguishing peripheral and central contributions to speech context effects
- Author
-
Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Context effect ,Psychology ,Cognitive psychology ,Peripheral - Published
- 2019
- Full Text
- View/download PDF
29. Pitch perception of concurrent high-frequency complex tones
- Author
-
Andrew J. Oxenham and Daniel R. Guest
- Subjects
Tone (musical instrument) ,Range (music) ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Acoustics ,Perception ,media_common.quotation_subject ,Harmonics ,Harmonic ,Pitch perception ,Phase locking ,media_common ,Mathematics - Abstract
Accurate pitch perception is possible for harmonic complex tones with fundamental frequencies (F0s) in the musical range (e.g., 1.4 kHz) but with all harmonics beyond the putative limits of phase locking. However, it is unknown whether pitch perception in more complex scenarios, such as with concurrent complex tones, is possible using these stimuli. To address this question, we measured (1) F0 difference limens (F0DLs) and (2) target-to-masker ratios (TMRs) required to detect a fixed F0 difference in a mixture of complex tones with low F0s (∼280 Hz) or high F0s (∼1400 Hz). The target tones were filtered to ensure that in the high-F0 case, only harmonics beyond the limits of phase locking were present. Pitch perception was poorer for isolated high-F0 tones than for isolated low-F0 tones and adding a masker complex tone with a geometrically centered F0 impaired performance for both high-F0 and low-F0 tones. The TMRs required to achieve good performance in the presence of two complex tone maskers were higher for high-F0 tones than for low-F0 tones. The results should help determine whether different mechanisms underlie the perception of combinations of complex tones at low and high frequencies. [Work supported by Grants NIH R01 DC005216 and NSF NRT-UtB1734815.]
- Published
- 2019
- Full Text
- View/download PDF
30. Spectral and temporal models of human pitch perception with mixtures of three concurrent harmonic complexes
- Author
-
Jackson Graves and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics ,Computer science ,business.industry ,Template matching ,Autocorrelation ,Pattern recognition ,Context (language use) ,Tone (musical instrument) ,Arts and Humanities (miscellaneous) ,Band-pass filter ,Mixing (mathematics) ,Harmonics ,Harmonic ,Artificial intelligence ,business - Abstract
In music and other everyday situations, humans are often presented with three or more simultaneous pitches, each carried by a harmonic complex tone, but few empirical or theoretical studies have addressed how pitch is perceived in this context. In three behavioral experiments, mixtures of three concurrent complexes were filtered into a single bandpass spectral region, and the relationship between the fundamental frequencies and spectral region was varied in order to manipulate the extent to which harmonics were resolved either before or after mixing. Listeners were asked to discriminate major from minor chords (Experiment 1) or to compare the pitch of a probe tone to that of a target embedded in the mixture (Experiments 2 and 3). In all three experiments, listeners performed above chance even under conditions where traditional rate-place models would not predict individually resolved components. Human behavioral results were compared to predictions from two classes of pitch model: a rate-place model using harmonic template matching and a temporal model using summary autocorrelation. Predictions from a combined model, using both rate-place and temporal information, were more accurate than predictions from either model alone, suggesting that humans may integrate these two kinds of information. [Work supported by NIH grant R01DC005216.]
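As a toy illustration of the temporal (summary autocorrelation) class of model, and not the implementation used in the study, a single-channel autocorrelation can recover the F0 of a missing-fundamental complex from the lag of its largest peak:

```python
import numpy as np

def autocorrelation_f0(x, fs, f0_min=80.0, f0_max=500.0):
    """Estimate F0 from the largest autocorrelation peak in the allowed lag range."""
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # keep non-negative lags
    lag_lo, lag_hi = int(fs / f0_max), int(fs / f0_min)
    best_lag = lag_lo + np.argmax(ac[lag_lo:lag_hi])
    return fs / best_lag

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
# Harmonics 2-5 of a 200-Hz F0: the fundamental itself is absent
x = sum(np.sin(2 * np.pi * 200 * n * t) for n in range(2, 6))
f0_est = autocorrelation_f0(x, fs)  # ~200 Hz
```

A full summary autocorrelation model would first filter the sound through a bank of cochlear channels and sum the per-channel autocorrelations; the sketch collapses this to one channel.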
- Published
- 2019
- Full Text
- View/download PDF
31. Behavioral measures of cochlear compression and temporal resolution as predictors of speech masking release in hearing-impaired listeners
- Author
-
Peggy B. Nelson, Melanie J. Gregan, and Andrew J. Oxenham
- Subjects
Psychological Acoustics [66] ,Adult ,Masking (art) ,medicine.medical_specialty ,Time Factors ,Speech perception ,Acoustics and Ultrasonics ,Hearing Loss, Sensorineural ,Acoustics ,Perceptual Masking ,Audiology ,Severity of Illness Index ,Correlation ,Young Adult ,Arts and Humanities (miscellaneous) ,medicine ,Humans ,Aged ,Mathematics ,Auditory masking ,Auditory Threshold ,Middle Aged ,Compression (physics) ,Cochlea ,Noise ,Persons With Hearing Impairments ,Acoustic Stimulation ,Case-Control Studies ,Temporal resolution ,Speech Perception ,Audiometry, Pure-Tone ,Audiometry, Speech - Abstract
Hearing-impaired (HI) listeners often show less masking release (MR) than normal-hearing listeners when temporal fluctuations are imposed on a steady-state masker, even when accounting for overall audibility differences. This difference may be related to a loss of cochlear compression in HI listeners. Behavioral estimates of compression, using temporal masking curves (TMCs), were compared with MR for band-limited (500-4000 Hz) speech and pure tones in HI listeners and age-matched, noise-masked normal-hearing (NMNH) listeners. Compression and pure-tone MR estimates were made at 500, 1500, and 4000 Hz. The amount of MR was defined as the difference in performance between steady-state and 10-Hz square-wave-gated speech-shaped noise. In addition, temporal resolution was estimated from the slope of the off-frequency TMC. No significant relationship was found between estimated cochlear compression and MR for either speech or pure tones. NMNH listeners had significantly steeper off-frequency temporal masking recovery slopes than did HI listeners, and a small but significant correlation was observed between poorer temporal resolution and reduced MR for speech. The results suggest either that the effects of hearing impairment on MR are not determined primarily by changes in peripheral compression, or that the TMC does not provide a sufficiently reliable measure of cochlear compression.
- Published
- 2013
- Full Text
- View/download PDF
32. Effects of temporal stimulus properties on the perception of across-frequency asynchrony
- Author
-
Jordan A. Beim, Christophe Micheyl, Andrew J. Oxenham, and Magdalena Wojtczak
- Subjects
Psychological Acoustics [66] ,Periodicity ,medicine.medical_specialty ,Auditory Pathways ,Time Factors ,Acoustics and Ultrasonics ,media_common.quotation_subject ,Acoustics ,Monaural ,Audiology ,Stimulus (physiology) ,Models, Biological ,Coincidence ,Pitch Discrimination ,Judgment ,Discrimination, Psychological ,Audiometry ,Arts and Humanities (miscellaneous) ,Perception ,medicine ,Humans ,Psychoacoustics ,Mathematics ,media_common ,Analysis of Variance ,medicine.diagnostic_test ,Auditory Threshold ,Time perception ,Acoustic Stimulation ,Time Perception ,Cues - Abstract
The role of temporal stimulus parameters in the perception of across-frequency synchrony and asynchrony was investigated using pairs of 500-ms tones consisting of a 250-Hz tone and a tone with a higher frequency of 1, 2, 4, or 6 kHz. Subjective judgments suggested veridical perception of across-frequency synchrony but with greater sensitivity to changes in asynchrony for pairs in which the lower-frequency tone was leading than for pairs in which it was lagging. Consistent with the subjective judgments, thresholds for the detection of asynchrony measured in a three-alternative forced-choice task were lower when the signal interval contained a pair with the low-frequency tone leading than a pair with a high-frequency tone leading. A similar asymmetry was observed for asynchrony discrimination when the standard asynchrony was relatively small (≤20 ms) but not for larger standard asynchronies. Independent manipulation of onset and offset ramp durations indicated a dominant role of onsets in the perception of across-frequency asynchrony. A physiologically inspired model, involving broadly tuned monaural coincidence detectors that receive inputs from frequency-selective onset detectors, was able to accurately reproduce the asymmetric distributions of synchrony judgments. The model provides testable predictions for future physiological investigations of responses to broadband stimuli with across-frequency delays.
- Published
- 2013
- Full Text
- View/download PDF
33. Vowel enhancement effects in cochlear-implant users
- Author
-
Ningyuan Wang, Heather A. Kreft, and Andrew J. Oxenham
- Subjects
Adult ,Male ,medicine.medical_specialty ,Acoustics and Ultrasonics ,medicine.medical_treatment ,Acoustics ,Stimulation ,Deafness ,Audiology ,Pitch Discrimination ,Young Adult ,Arts and Humanities (miscellaneous) ,Phonetics ,Cochlear implant ,Vowel ,otorhinolaryngologic diseases ,medicine ,Humans ,Cochlear Nerve ,Cochlea ,Aged ,Aged, 80 and over ,business.industry ,Cochlear nerve ,Middle Aged ,Speech enhancement ,Cochlear Implants ,Formant ,Acoustic Stimulation ,Female ,sense organs ,Noise ,business - Abstract
Auditory enhancement of certain frequencies can occur through prior stimulation of surrounding frequency regions. The underlying neural mechanisms are unknown, but may involve stimulus-driven changes in cochlear gain via the medial olivocochlear complex (MOC) efferents. Cochlear implants (CIs) bypass the cochlea and stimulate the auditory nerve directly. If the MOC plays a critical role in enhancement then CI users should not exhibit this effect. Results using vowel stimuli, with and without preceding sounds designed to enhance formants, provided evidence of auditory enhancement in both normal-hearing listeners and CI users, suggesting that vowel enhancement is not mediated solely by cochlear effects.
- Published
- 2012
- Full Text
- View/download PDF
34. Comparing models of the combined-stimulation advantage for speech recognition
- Author
-
Andrew J. Oxenham and Christophe Micheyl
- Subjects
Psychological Acoustics [66] ,Speech perception ,Acoustics and Ultrasonics ,Computer science ,Speech recognition ,media_common.quotation_subject ,Stimulation ,Phonetics ,Speech processing ,Models, Biological ,Electric Stimulation ,Noise ,Cochlear Implants ,Acoustic Stimulation ,Arts and Humanities (miscellaneous) ,Perception ,Speech Perception ,Information source ,Humans ,Detection theory ,Cues ,media_common - Abstract
The “combined-stimulation advantage” refers to an improvement in speech recognition when cochlear-implant or vocoded stimulation is supplemented by low-frequency acoustic information. Previous studies have been interpreted as evidence for “super-additive” or “synergistic” effects in the combination of low-frequency and electric or vocoded speech information by human listeners. However, this conclusion was based on predictions of performance obtained using a suboptimal high-threshold model of information combination. The present study shows that a different model, based on Gaussian signal detection theory, can predict surprisingly large combined-stimulation advantages, even when performance with either information source alone is close to chance, without involving any synergistic interaction. A reanalysis of published data using this model reveals that previous results, which have been interpreted as evidence for super-additive effects in perception of combined speech stimuli, are actually consistent with a more parsimonious explanation, according to which the combined-stimulation advantage reflects an optimal combination of two independent sources of information. The present results do not rule out the possible existence of synergistic effects in combined stimulation; however, they emphasize the possibility that the combined-stimulation advantages observed in some studies can be explained simply by non-interactive combination of two information sources.
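Under equal-variance Gaussian signal detection theory, the "optimal combination of two independent sources" has a simple form: sensitivities add in quadrature, d'_comb = sqrt(d'_A^2 + d'_B^2). A minimal sketch with illustrative numbers (a 2AFC task is assumed; this is not the paper's full reanalysis):

```python
import math

def pc_2afc(d_prime):
    """Percent correct in 2AFC under equal-variance Gaussian SDT: Phi(d'/sqrt(2))."""
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))  # Phi(x) = (1 + erf(x/sqrt(2))) / 2

def d_prime_from_pc(pc):
    """Invert pc_2afc by bisection (avoids a scipy dependency)."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if pc_2afc(mid) < pc else (lo, mid)
    return (lo + hi) / 2.0

# Two independent cues, each supporting only 55% correct alone (close to chance)
d_each = d_prime_from_pc(0.55)
d_comb = math.sqrt(2.0) * d_each  # quadrature sum for two equal, independent cues
pc_comb = pc_2afc(d_comb)         # above either cue alone, with no synergy assumed
```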
- Published
- 2012
- Full Text
- View/download PDF
35. Does fundamental-frequency discrimination measure virtual pitch discrimination?
- Author
-
Andrew J. Oxenham, David M. Wrobleski, Christophe Micheyl, and Kristin Divis
- Subjects
Psychological Acoustics [66] ,Adult ,Male ,Brightness ,Auditory Pathways ,Signal Detection, Psychological ,Time Factors ,Acoustics and Ultrasonics ,Feedback, Psychological ,Acoustics ,Pitch perception ,Measure (mathematics) ,Pitch Discrimination ,Young Adult ,Arts and Humanities (miscellaneous) ,Humans ,Mathematics ,Auditory Threshold ,Fundamental frequency ,Virtual pitch ,Acoustic Stimulation ,Harmonics ,Harmonic ,Audiometry, Pure-Tone ,Female ,Cues ,Timbre ,Psychoacoustics - Abstract
Studies of pitch perception often involve measuring difference limens for complex tones (DLCs) that differ in fundamental frequency (F0). These measures are thought to reflect F0 discrimination and to provide an indirect measure of subjective pitch strength. However, in many situations discrimination may be based on cues other than the pitch or the F0, such as differences in the frequencies of individual components or timbre (brightness). Here, DLCs were measured for harmonic and inharmonic tones under various conditions, including a randomized or fixed lowest harmonic number, with and without feedback. The inharmonic tones were produced by shifting the frequencies of all harmonics upwards by 6.25%, 12.5%, or 25% of F0. It was hypothesized that, if DLCs reflect residue-pitch discrimination, these frequency-shifted tones, which produce a weaker and more ambiguous pitch, would yield larger DLCs than the harmonic tones. However, if DLCs reflect comparisons of component pitches, or timbre, they should not be systematically influenced by frequency shifting. The results showed larger DLCs and more scattered pitch matches for inharmonic than for harmonic complexes, confirming that the inharmonic tones produced a less consistent pitch than the harmonic tones, and consistent with the idea that DLCs reflect F0 pitch discrimination.
- Published
- 2010
- Full Text
- View/download PDF
36. Insights from individual differences: Uncovering the code for frequency modulation
- Author
-
Andrew J. Oxenham, Kelly L. Whiteford, and Heather A. Kreft
- Subjects
Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Computer science ,Code (cryptography) ,Algorithm ,Frequency modulation - Published
- 2018
- Full Text
- View/download PDF
37. The role of pitch and harmonic cancellation when listening to speech in background sounds
- Author
-
Daniel Guest and Andrew J. Oxenham
- Subjects
Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) - Published
- 2018
- Full Text
- View/download PDF
38. Effects of background noise level on behavioral estimates of basilar-membrane compression
- Author
-
Peggy B. Nelson, Melanie J. Gregan, and Andrew J. Oxenham
- Subjects
Adult ,Psychological Acoustics [66] ,Sound Spectrography ,Acoustics and Ultrasonics ,Loudness Perception ,Acoustics ,Ambient noise level ,Perceptual Masking ,Stimulus (physiology) ,behavioral disciplines and activities ,Background noise ,Young Adult ,Arts and Humanities (miscellaneous) ,otorhinolaryngologic diseases ,Humans ,Psychoacoustics ,Physics ,Auditory Threshold ,Basilar Membrane ,Basilar membrane ,Noise ,Acoustic Stimulation ,QUIET ,Auditory Perception ,Audiometry, Pure-Tone ,Female ,sense organs ,psychological phenomena and processes - Abstract
Hearing-impaired (HI) listeners often show poorer performance on psychoacoustic tasks than do normal-hearing (NH) listeners. Although some such deficits may reflect changes in suprathreshold sound processing, others may be due to stimulus audibility and the elevated absolute thresholds associated with hearing loss. Masking noise can be used to raise the thresholds of NH listeners to equal the thresholds in quiet of HI listeners. However, such noise may have other effects, including changing peripheral response characteristics, such as the compressive input-output function of the basilar membrane in the normal cochlea. This study estimated compression behaviorally across a range of background noise levels in NH listeners at a 4 kHz signal frequency, using a growth of forward masking paradigm. For signals 5 dB or more above threshold in noise, no significant effect of broadband noise level was found on estimates of compression. This finding suggests that broadband noise does not significantly alter the compressive response of the basilar membrane to sounds that are presented well above their threshold in the noise. Similarities between the performance of HI listeners and NH listeners in threshold-equalizing noise are therefore unlikely to be due to a linearization of basilar-membrane responses to suprathreshold stimuli in the NH listeners.
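One common way growth-of-forward-masking data of this kind are analyzed (a simplified sketch; exact procedures vary across studies) is to compare the slopes of masker-level-at-threshold versus signal-level functions for on- and off-frequency maskers. If the off-frequency masker is processed roughly linearly at the signal's place while the signal is compressed with exponent c, the slope ratio estimates c. The data below are hypothetical:

```python
import numpy as np

def compression_exponent(sig_levels, on_freq_masker, off_freq_masker):
    """Estimate compression exponent c from growth-of-forward-masking slopes.
    y = masker level at threshold, x = signal level.
    On-frequency: signal and masker compressed alike -> slope ~ 1.
    Off-frequency: masker ~linear at the signal place, signal compressed -> slope ~ c.
    """
    slope_on = np.polyfit(sig_levels, on_freq_masker, 1)[0]
    slope_off = np.polyfit(sig_levels, off_freq_masker, 1)[0]
    return slope_off / slope_on

# Hypothetical data: off-frequency growth of 0.25 dB/dB implies c ~ 0.25
sig = np.arange(40.0, 90.0, 10.0)  # signal levels, dB SPL
c = compression_exponent(sig, 1.0 * sig + 5.0, 0.25 * sig + 30.0)
```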
- Published
- 2010
- Full Text
- View/download PDF
39. Tracking eye and head movements in natural conversational settings: Effects of hearing loss and background noise level
- Author
-
Hao Lu, Martin F. McKinney, Tao Zhang, and Andrew J. Oxenham
- Subjects
medicine.medical_specialty ,Acoustics and Ultrasonics ,Hearing loss ,Head (linguistics) ,media_common.quotation_subject ,Ambient noise level ,Eye movement ,Audiology ,behavioral disciplines and activities ,Background noise ,03 medical and health sciences ,0302 clinical medicine ,Arts and Humanities (miscellaneous) ,otorhinolaryngologic diseases ,medicine ,Eye tracking ,Conversation ,Loudspeaker ,medicine.symptom ,030223 otorhinolaryngology ,Psychology ,psychological phenomena and processes ,030217 neurology & neurosurgery ,media_common - Abstract
Although beam-forming algorithms for hearing aids can produce gains in target-to-masker ratio, the wearer’s head will not always be facing the target talker, potentially limiting the value of beam-forming in real-world environments, unless eye movements are also accounted for. The aim of this study was to determine the extent to which the head direction and eye gaze track the position of the talker in natural conversational settings. Three groups of participants were recruited: younger listeners, older listeners with clinically normal hearing, and older listeners with mild-to-moderate hearing loss. The experimental set-up included one participant at a time in conversation with two confederates approximately equally spaced around a small round table. Different levels of background noise were introduced by playing background sounds via loudspeakers that surrounded the participants in the conversation. In general, head movements tended to undershoot the position of the current talker, but head and eye movements together generally predicted the current talker position well. Preliminary data revealed no strong effects of hearing loss or background noise level on the amount of time spent looking at the talker, although younger listeners tended to use their eyes, as opposed to head movements, more than the older listeners. [Work supported by Starkey Laboratories.]
- Published
- 2018
- Full Text
- View/download PDF
40. Using binaural beat sensitivity to explore mechanisms of bimodal temporal envelope beat sensitivity
- Author
-
Coral Dirks, Peggy B. Nelson, and Andrew J. Oxenham
- Subjects
Sound localization ,Speech perception ,Binaural beats ,Acoustics and Ultrasonics ,Computer science ,Speech recognition ,medicine.medical_treatment ,Stimulus (physiology) ,law.invention ,Basilar membrane ,Arts and Humanities (miscellaneous) ,law ,Cochlear implant ,otorhinolaryngologic diseases ,medicine ,Electrode array ,Tonotopy ,Binaural recording - Abstract
Current cochlear implant (CI) fitting strategies aim to maximize speech perception through the CI by allocating all spectral information across the electrode array without regard to the tonotopic placement of each electrode along the basilar membrane. For patients with considerable residual hearing in the non-implanted ear, this approach may not be optimal for binaural hearing. This study aims to explore fitting procedures in which CI maps better complement information from the acoustic ear by reducing the frequency mismatch between them. We investigate the mechanisms of binaural temporal-envelope beat sensitivity in normal-hearing listeners using bandpass filtered pulse trains, with parameters including stimulus level, filter bandwidth, filter slope, and spectral overlap. We find the minimum baseline interaural timing difference and spectral mismatch that normal-hearing listeners can tolerate while maintaining their ability to detect interaural timing differences. Initial results consistently demonstrate maximum sensitivity to binaural beats when the place of stimulation is matched across ears. The outcomes of this study will provide new information on binaural interactions in normal-hearing listeners and guide methodology for incoming single-sided-deafness patients as we adjust their CI maps in an effort to reduce the frequency mismatch. [Work supported by NIH grant F32DC016815-01.]
- Published
- 2018
- Full Text
- View/download PDF
41. Short- and long-term memory for pitch and non-pitch contours in congenital amusia
- Author
-
Andrew J. Oxenham, Barbara Tillmann, Agathe Pralus, Lesly Fornoni, Anne Caclin, and Jackson Graves
- Subjects
Melody ,medicine.medical_specialty ,Acoustics and Ultrasonics ,Long-term memory ,Pitch perception ,Amusia ,Audiology ,medicine.disease ,Degree (music) ,Loudness ,Arts and Humanities (miscellaneous) ,medicine ,Psychology ,Pitch contour - Abstract
Congenital amusia is a disorder characterized by deficits in music and pitch perception, but the degree to which the deficit is specific to pitch remains unclear. Amusia often results in difficulties discriminating and remembering melodies and melodic contours. Non-amusic listeners can perceive contours in dimensions other than pitch, such as loudness and brightness, but it is unclear whether amusic pitch contour deficits also extend to these other auditory dimensions. This question was addressed by testing the identification of ten familiar French melodies and the discrimination of changes in the contour of novel four-note melodies. Melodic contours were defined by pitch, brightness, or loudness. Amusic participants were impaired relative to matched controls in all dimensions, but showed some ability to extract contours in all three dimensions. In the novel contour discrimination task, amusic participants exhibited less impairment for loudness-based melodies than for pitch- or brightness-based melodies, ...
- Published
- 2018
- Full Text
- View/download PDF
42. Complex frequency modulation detection and discrimination at low and high frequencies
- Author
-
Kelly L. Whiteford and Andrew J. Oxenham
- Subjects
Physics ,Carrier signal ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Harmonics ,Acoustics ,Tonotopy ,Frequency modulation - Abstract
Whether frequency modulation (FM) is represented by place (tonotopic) or temporal (phase-locking) information in the peripheral system may depend on the carrier frequency (fc) and the modulation rate (fm), with only fcs below 4 kHz and fms below 10 Hz thought to involve temporal coding. This study tested the role of temporal coding in harmonic complexes by measuring FM detection and discrimination for two F0s (200 Hz and 1400 Hz), modulated at slow (1 Hz) and fast (20 Hz) rates, for tones with lower (2-5) or upper (6-9) harmonics embedded in threshold equalizing noise. Pure-tone FM detection was measured for fcs between 200 and 12000 Hz at the same fms. In detection tasks, participants selected which of two intervals contained FM. In discrimination tasks, participants selected which of three FM complex tones was incoherently modulated. Preliminary results suggest better FM detection for slow than fast rates, even when all tones are above 4 kHz. However, this effect was stronger at lower frequencies, where...
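The FM stimuli described above follow a standard construction. As a hedged sketch (the sample rate, carrier frequency, and frequency excursion below are illustrative values, not parameters from the study), a sinusoidal FM tone at the slow 1-Hz rate can be generated like this:

```python
import numpy as np

fs = 48000          # sample rate in Hz (illustrative, not from the study)
dur = 1.0           # duration in seconds
fc = 500.0          # carrier frequency (Hz)
fm = 1.0            # modulation rate (Hz): the "slow" condition
df = 10.0           # peak frequency excursion (Hz); hypothetical depth

t = np.arange(int(fs * dur)) / fs
beta = df / fm      # modulation index

# Sinusoidal FM: the phase is the running integral of the
# instantaneous frequency fc + df*cos(2*pi*fm*t).
x = np.sin(2.0 * np.pi * fc * t + beta * np.sin(2.0 * np.pi * fm * t))

# Instantaneous frequency implied by that phase:
inst_f = fc + df * np.cos(2.0 * np.pi * fm * t)
```

The modulation index beta = df/fm links the frequency excursion to the phase deviation; the same code with fm = 20.0 gives the fast-rate condition.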
- Published
- 2018
- Full Text
- View/download PDF
43. Sequential stream segregation based on spatial cues: Behavioral and neural measurements
- Author
-
Marion David and Andrew J. Oxenham
- Subjects
Fricative consonant ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,medicine.diagnostic_test ,Computer science ,Speech recognition ,Vowel ,medicine ,Spatial cues ,Repetition (music) ,Electroencephalography ,Stimulus (physiology) ,Task (project management) - Abstract
Differences in simulated spatial cues in the horizontal plane have been shown to enhance both voluntary and obligatory stream segregation of sounds with realistic spectro-temporal variations, such as sequences of syllables. In this experiment, listeners were presented with sequences of speech tokens, each consisting of a fricative consonant and a voiced vowel. The CV tokens were concatenated into interleaved sequences that alternated in simulated spatial positions. The interleaved sequences lasted 1 min. The listeners had to press a button each time they heard a repeated token. In the selective attention task, the listeners were asked to attend to only one of the two interleaved sequences; in the global attention task, the listeners had to perceive the interleaved sequences as a single stream to detect a repetition between the sequences. Simultaneous EEG measurements were made. The behavioral results confirmed that listeners were able to attend either selectively or globally, depending on the task requirements. The EEG waveforms differed between the two tasks, despite identical physical stimuli, reflecting the difference between global and selective attention. Both behavioral and EEG results reflected the effects of increasing spatial separation in enhancing selective attention and making global attention to the sequences more difficult.
- Published
- 2018
- Full Text
- View/download PDF
44. Hearing loss and the future of auditory implants
- Author
-
Andrew J. Oxenham
- Subjects
medicine.medical_specialty ,Acoustics and Ultrasonics ,Hearing loss ,media_common.quotation_subject ,Adult population ,Severe hearing loss ,Audiology ,Arts and Humanities (miscellaneous) ,Perception ,otorhinolaryngologic diseases ,medicine ,General health ,medicine.symptom ,Social isolation ,Cognitive decline ,Psychology ,Everyday life ,media_common - Abstract
Hearing loss is a major and growing health concern worldwide. According to the National Institute on Deafness and Communication Disorders (NIDCD), 17% of the adult population in the US (around 36 million people) report some form of hearing loss, with the proportion of affected individuals rising to nearly 50% among those aged 75 or older. Loss of hearing has been associated with increased social isolation, more rapid cognitive decline, and other more general health issues, although causal relationships have yet to be established. This tutorial will review the physiology of hearing loss, along with its perceptual consequences, as measured in the laboratory and experienced in everyday life. The focus of the tutorial will be on implantable technologies that have been used to alleviate severe hearing loss and deafness, with particular emphasis on cochlear implants. Although cochlear implants have enjoyed remarkable success over the past few decades, they do not restore normal hearing, and may be approaching their technological limits in terms of the benefits that patients can gain from them. The tutorial will end by exploring future directions of implantable and other technologies in the quest to restore and maintain hearing throughout the lifespan.
- Published
- 2018
- Full Text
- View/download PDF
45. Frequency difference limens as a function of fundamental frequency and harmonic number
- Author
-
Anahita H. Mehta and Andrew J. Oxenham
- Subjects
Physics ,Tone (musical instrument) ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Harmonics ,Acoustics ,Harmonic ,Phase (waves) ,Harmonic number ,Fundamental frequency ,Constant (mathematics) ,Noise (radio) - Abstract
Several studies have investigated the relation between the lowest harmonic present in a complex tone and fundamental frequency (F0) difference limens (F0DLs). It is generally assumed that F0DLs are smaller when lower harmonics are present and that the ability to discriminate small changes in F0 worsens as harmonic number increases. This worsening of performance has been attributed to a lack of peripherally resolved harmonics. This assumption was tested by measuring F0DLs for harmonic complexes where the lowest harmonic present in a twelve-harmonic complex tone varied from the 3rd to the 15th harmonic, with F0s varying from 30 Hz to 2000 Hz. The harmonics were presented in either sine or random phase and were embedded in threshold-equalizing noise. Aside from F0s between 100 and 400 Hz, performance did not follow the expected pattern of good performance with low-numbered (resolved) harmonics and poorer performance with high-numbered (unresolved) harmonics. At lower F0s, performance was relatively constant a...
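A minimal sketch of the stimulus family described above, an equal-amplitude harmonic complex whose lowest harmonic can be varied, might look as follows. The sample rate, duration, and amplitude normalization are illustrative assumptions, and the threshold-equalizing noise background is omitted:

```python
import numpy as np

def harmonic_complex(f0, n_low, n_high, fs=48000, dur=0.5,
                     phase="sine", seed=0):
    """Equal-amplitude harmonic complex containing harmonics
    n_low..n_high of f0, in sine or random phase."""
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(seed)
    x = np.zeros_like(t)
    for n in range(n_low, n_high + 1):
        ph = 0.0 if phase == "sine" else rng.uniform(0.0, 2.0 * np.pi)
        x += np.sin(2.0 * np.pi * n * f0 * t + ph)
    return x / (n_high - n_low + 1)   # scale so |x| <= 1

# Twelve harmonics whose lowest component is the 3rd harmonic
# of a 200-Hz F0 (one of the conditions described above):
x = harmonic_complex(200.0, 3, 14)
```

Setting n_low anywhere from 3 to 15 while keeping twelve components reproduces the manipulation described in the abstract.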
- Published
- 2018
- Full Text
- View/download PDF
46. Auditory enhancement and other context effects in normal, impaired, and electric hearing
- Author
-
Lei Feng, Heather A. Kreft, and Andrew J. Oxenham
- Subjects
Masking (art) ,Empirical data ,Speech discrimination ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Context effect ,Forward masking ,otorhinolaryngologic diseases ,Psychology ,Cognitive psychology - Abstract
Neal Viemeister’s contributions to our understanding of auditory enhancement stood out because they combined novel and intriguing empirical data with a testable theoretical framework to explain the results. Thirty-five years later, the resultant “adaptation of inhibition” hypothesis remains the default explanation for auditory enhancement effects, and it can be used to account for both basic auditory context effects and speech context effects. Recent work in our lab has focused on comparing speech and non-speech context effects in people with normal and impaired hearing, as well as cochlear implants, in an attempt to elucidate the underlying neural mechanisms and to work towards compensating for any loss of context effects in clinical populations via signal processing. This presentation will provide a survey of recent progress in this area with examples from simultaneous masking, forward masking, and speech discrimination experiments. [Work supported by NIH grant R01DC012262.]
- Published
- 2018
- Full Text
- View/download PDF
47. Modulation rate discrimination using half-wave rectified and sinusoidally amplitude modulated stimuli in cochlear-implant users
- Author
-
Heather A. Kreft, David A. Nelson, and Andrew J. Oxenham
- Subjects
Male ,Auditory perception ,Periodicity ,Acoustics and Ultrasonics ,medicine.medical_treatment ,Acoustics ,Stimulus (physiology) ,Amplitude modulation ,Discrimination, Psychological ,Arts and Humanities (miscellaneous) ,Cochlear implant ,otorhinolaryngologic diseases ,medicine ,Humans ,Inner ear ,Letters to the Editor ,Pitch Perception ,Cochlea ,Aged ,Physics ,Auditory Threshold ,Middle Aged ,Electric Stimulation ,Cochlear Implants ,medicine.anatomical_structure ,Amplitude ,Auditory Perception ,Female ,sense organs ,Binaural recording ,psychological phenomena and processes - Abstract
Detection and modulation rate discrimination were measured in cochlear-implant users for pulse-trains that were either sinusoidally amplitude modulated or were modulated with half-wave rectified sinusoids, which in acoustic hearing have been used to simulate the response to low-frequency temporal fine structure. In contrast to comparable results from acoustic hearing, modulation rate discrimination was not statistically different for the two stimulus types. The results suggest that, in contrast to binaural perception, pitch perception in cochlear-implant users does not benefit from using stimuli designed to more closely simulate the cochlear response to low-frequency pure tones.
- Published
- 2010
- Full Text
- View/download PDF
48. Can temporal fine structure represent the fundamental frequency of unresolved harmonics?
- Author
-
Andrew J. Oxenham, Christophe Micheyl, and Michael V. Keebler
- Subjects
Adult ,Psychological Acoustics [66] ,Physics ,Auditory perception ,Analysis of Variance ,Time Factors ,Adolescent ,Acoustics and Ultrasonics ,Acoustics ,Ambient noise level ,Perceptual Masking ,Natural frequency ,Fundamental frequency ,Background noise ,Young Adult ,Acoustic Stimulation ,Arts and Humanities (miscellaneous) ,Harmonics ,Auditory Perception ,Humans ,Psychoacoustics ,Pitch Perception - Abstract
At least two modes of pitch perception exist: in one, the fundamental frequency (F0) of harmonic complex tones is estimated using the temporal fine structure (TFS) of individual low-order resolved harmonics; in the other, F0 is derived from the temporal envelope of high-order unresolved harmonics that interact in the auditory periphery. Pitch is typically more accurate in the former than in the latter mode. Another possibility is that pitch can sometimes be coded via the TFS from unresolved harmonics. A recent study supporting this third possibility [Moore et al. (2006a). J. Acoust. Soc. Am. 119, 480–490] based its conclusion on a condition where phase interaction effects (implying unresolved harmonics) accompanied accurate F0 discrimination (implying TFS processing). The present study tests whether these results were influenced by audible distortion products. Experiment 1 replicated the original results, obtained using a low-level background noise. However, experiments 2–4 found no evidence for the use of TFS cues with unresolved harmonics when the background noise level was raised, or the stimulus level was lowered, to render distortion inaudible. Experiment 5 measured the presence and phase dependence of audible distortion products. The results provide no evidence that TFS cues are used to code the F0 of unresolved harmonics.
- Published
- 2009
- Full Text
- View/download PDF
49. Auditory stream formation affects comodulation masking release retroactively
- Author
-
Andrew J. Oxenham, Torsten Dau, and Stephan D. Ewert
- Subjects
Adult ,Psychological Acoustics [66] ,Masking (art) ,Auditory perception ,Physics ,Signal Detection, Psychological ,Sound Spectrography ,Acoustics and Ultrasonics ,Acoustics ,Perceptual Masking ,Octave (electronics) ,Signal ,Acoustic Stimulation ,Arts and Humanities (miscellaneous) ,Flanking maneuver ,Modulation (music) ,Auditory Perception ,Humans ,Psychoacoustics - Abstract
Many sounds in the environment have temporal envelope fluctuations that are correlated in different frequency regions. Comodulation masking release (CMR) illustrates how such coherent fluctuations can improve signal detection. This study assesses how perceptual grouping mechanisms affect CMR. Detection thresholds for a 1-kHz sinusoidal signal were measured in the presence of a narrowband (20-Hz-wide) on-frequency masker with or without four comodulated or independent flanking bands that were spaced apart by either 1/6 (narrow spacing) or 1 octave (wide spacing). As expected, CMR was observed for the narrow and wide comodulated flankers. However, in the wide (but not narrow) condition, this CMR was eliminated by adding a series of gated flanking bands after the signal. Control experiments showed that this effect was not due to long-term adaptation or general distraction. The results are interpreted in terms of the sequence of "postcursor" flanking bands forming a perceptual stream with the original flanking bands, resulting in perceptual segregation of the flanking bands from the masker. The results are consistent with the idea that modulation analysis occurs within, not across, auditory objects, and that across-frequency CMR only occurs if the on-frequency and flanking bands fall within the same auditory object or stream.
- Published
- 2009
- Full Text
- View/download PDF
50. Pitfalls in behavioral estimates of basilar-membrane compression in humans
- Author
-
Magdalena Wojtczak and Andrew J. Oxenham
- Subjects
Masking (art) ,Auditory masking ,Offset (computer science) ,Acoustics and Ultrasonics ,media_common.quotation_subject ,Acoustics ,Perceptual Masking ,behavioral disciplines and activities ,Signal ,Basilar membrane ,Arts and Humanities (miscellaneous) ,otorhinolaryngologic diseases ,Contrast (vision) ,sense organs ,Psychoacoustics ,psychological phenomena and processes ,media_common - Abstract
Psychoacoustic estimates of basilar-membrane compression often compare on- and off-frequency forward masking. Such estimates involve assuming that the recovery from forward masking for a given signal frequency is independent of masker frequency. To test this assumption, thresholds for a brief 4-kHz signal were measured as a function of masker-signal delay. Comparisons were made between on-frequency (4 kHz) and off-frequency (either 2.4 or 4.4 kHz) maskers, adjusted in level to produce the same amount of masking at a 0-ms delay between masker offset and signal onset. Consistent with the assumption, forward-masking recovery from a moderate-level (83 dB SPL) 2.4-kHz masker and a high-level (92 dB SPL) 4.4-kHz masker was the same as from the equivalent on-frequency maskers. In contrast, recovery from a high-level (92 dB SPL) 2.4-kHz forward masker was slower than from the equivalent on-frequency masker. The results were used to simulate temporal masking curves, taking into account the differences in on- and off-frequency masking recoveries at high levels. The predictions suggest that compression estimates assuming frequency-independent masking recovery may overestimate compression by as much as a factor of 2. The results suggest caution in interpreting forward-masking data in terms of basilar-membrane compression, particularly when high-level maskers are involved.
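The size of the reported bias can be illustrated with the slope-ratio arithmetic that underlies temporal-masking-curve (TMC) compression estimates. This is a generic sketch with made-up numbers, not the analysis or data from the study:

```python
# Hypothetical values for illustration only.
R = 0.10                  # assumed internal recovery rate (dB/ms)
c_true = 0.40             # assumed true compression exponent (dB/dB)

# The on-frequency masker is compressed at the signal place, so its
# level must grow by 1/c dB for every dB of internal recovery:
slope_on = R / c_true                      # 0.25 dB/ms

# Standard assumption: the off-frequency (linearly processed) masker
# recovers at the same internal rate R, giving a correct estimate.
slope_off_assumed = R
c_est = slope_off_assumed / slope_on       # recovers c_true

# Pitfall reported above: a high-level off-frequency masker that
# recovers more slowly yields a shallower off-frequency slope, which
# halves the estimated exponent.
slope_off_slow = 0.5 * R
c_est_biased = slope_off_slow / slope_on   # half of c_true
```

Because the estimated exponent is the ratio of off- to on-frequency slopes, halving the off-frequency slope halves the exponent, which corresponds to overestimating compression by a factor of 2.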
- Published
- 2009
- Full Text
- View/download PDF