64 results for "L. Rogers"
Search Results
2. Source levels of the underwater calls of a male leopard seal
- Author
- Tracey L. Rogers
- Subjects
- Male, Sound Spectrography, Time Factors, Acoustics and Ultrasonics, Seals, Earless, Bioacoustics, Acoustics, Leopard, Fishery, Motion, Sex Factors, Sound, Geography, Arts and Humanities (miscellaneous), Hydrurga leptonyx, Pressure, Animals, Vocalization, Animal, Underwater, Apex predator
- Abstract
Leopard seals (Hydrurga leptonyx) are top predators in the Antarctic ecosystem. They produce stereotyped calls as part of a stylized underwater vocal display. Understanding of their acoustic behavior is improved by identifying the amplitude of their calls. The amplitudes of five call types (n = 50) from a single male seal were measured as broadband source levels and ranged from 153 to 177 dB re 1 μPa at 1 m. Mean source levels differed between call types: the lower-frequency calls (L, D, and O) tended to have source levels 10 dB higher than the higher-frequency calls (H and M). Call-type source levels are important to take into account in passive acoustic studies of repertoire usage, because calls produced at greater amplitudes are likely to have larger acoustic ranges, especially when, as in leopard seals, these are also the calls with lower frequencies.
- Published
- 2014
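For readers unfamiliar with the units in the abstract above, a broadband source level in dB re 1 μPa at 1 m follows from the RMS acoustic pressure at 1 m via SL = 20 log10(p/p_ref). The sketch below is purely illustrative of that conversion, not the paper's measurement procedure; the example pressure value is hypothetical.

```python
import math

def source_level_db(p_rms_pa: float, ref_pa: float = 1e-6) -> float:
    """Source level in dB re 1 uPa from RMS pressure (Pa) referred to 1 m."""
    return 20.0 * math.log10(p_rms_pa / ref_pa)

# A call measured at ~89 Pa RMS at 1 m corresponds to ~159 dB re 1 uPa,
# within the 153-177 dB range reported for the five call types.
```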
3. Effects of cochlear-implant simulation on processing of vowel sequences by young normal-hearing listeners
- Author
- Gail S. Donaldson, Jenna Vallario, and Catherine L. Rogers
- Subjects
- Acoustics and Ultrasonics, Computer science, Speech recognition, Stimulus (physiology), Signal, Task (project management), Arts and Humanities (miscellaneous), Duration (music), Cochlear implant, Vowel, Psychophysics, Active listening, Syllable
- Abstract
To better understand the effects of listening environment on efficiency of phonetic processing, the present study examined the effects of signal degradation on phonetic processing of two-syllable sequences by normal-hearing listeners. Auditory temporal-order processing of American-English vowel sequences was compared across two listening conditions, each presented to a separate group of young, normal-hearing listeners: 1) unprocessed resynthesized stimuli and 2) stimuli that had been processed to simulate the signal produced by a 16-channel cochlear implant (CI). Using the methods of Fogerty, Humes and Kewley-Port [2010, J. Acoust. Soc. Am., 127, 2509-2520], 70-ms resynthesized versions of the syllables “pit, pet, put,” and “pot” were presented in a two-syllable temporal-order processing task. Task difficulty was increased by decreasing syllable-onset asynchrony (SOA), i.e., the duration between syllable onsets. SOA thresholds for accuracy of syllable-sequence identification were estimated using the method of constant stimuli on each of four 72-trial blocks. Data analyzed to date show a threshold difference of approximately 20 ms between the unprocessed and CI-processed listener groups, or a difference in threshold of a factor of two or greater. Results will be discussed with regard to implications for phonetic processing of speech in challenging listening environments and practical implications for CI users.
- Published
- 2018
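The threshold estimation described in the abstract above (method of constant stimuli over fixed SOA levels) can be sketched as a criterion crossing found by linear interpolation between tested levels. The data and the 75%-correct criterion below are hypothetical, not the study's; they only illustrate the procedure.

```python
def soa_threshold(soas_ms, pct_correct, criterion=75.0):
    """Estimate the SOA (ms) at which accuracy crosses `criterion` percent,
    by linear interpolation between the bracketing constant-stimulus levels.
    Assumes `soas_ms` is sorted ascending and accuracy rises with SOA."""
    for (s0, p0), (s1, p1) in zip(zip(soas_ms, pct_correct),
                                  zip(soas_ms[1:], pct_correct[1:])):
        if p0 <= criterion <= p1:
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the tested SOAs")

# Hypothetical block of constant-stimulus data (not from the study):
soas = [20, 40, 60, 80, 100]   # syllable-onset asynchronies, ms
pct = [50, 60, 70, 85, 95]     # percent-correct sequence identification
# soa_threshold(soas, pct) -> ~66.7 ms at the 75%-correct criterion
```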
4. Conversational and clear speech intelligibility of /bVd/ syllables produced by native and non-native English speakers
- Author
- Catherine L. Rogers, Teresa M. DeMasi, and Jean C. Krause
- Subjects
- Speech enhancement, Speech Acoustics, Comprehension, Native English, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Multilingualism, Phonation, Intelligibility (communication), Psychology, Manner of articulation, Linguistics
- Abstract
The ability of native and non-native speakers to enhance intelligibility of target vowels by speaking clearly was compared across three talker groups: monolingual English speakers and native Spanish speakers with either an earlier or a later age of immersion in an English-speaking environment. Talkers produced the target syllables “bead, bid, bayed, bed, bad” and “bod” in ‘conversational’ and clear speech styles. The stimuli were presented to native English-speaking listeners in multi-talker babble with signal-to-noise ratios of −8 dB for the monolingual and early learners and −4 dB for the later learners. The monolinguals and early learners of English showed a similar average clear speech benefit, and the early learners showed equal or greater intelligibility than monolinguals for most target vowels. The 4-dB difference in signal-to-noise ratio yielded approximately equal average intelligibility for the monolinguals and later learners. The average clear speech benefit was smallest for the later learners, and a significant clear speech decrement was obtained for the target syllable “bid.” These results suggest that later learners of English as a second language may be less able than monolinguals to accommodate listeners in noisy environments, due to a reduced ability to improve intelligibility by speaking more clearly.
- Published
- 2010
5. Flipping the phonetics classroom
- Author
- Catherine L. Rogers
- Subjects
- Class (computer programming), Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Computer science, Gauge (instrument), Mathematics education, Phonetics, Flipped classroom, Linguistics
- Abstract
The 21st century has seen a continued expansion of bandwidth, processor speed, and availability of multimedia recording tools for the average PC. Consequently, the flipped classroom model has gained popularity, in both K-12 and higher-level education. In a flipped classroom, the instructor typically records short lectures, to be viewed online by the students prior to class. Classroom time is then devoted to activities that might normally be assigned as homework, such as problem solving, examination of case studies, discussion or other interactive exercises. Advantages of the pre-recorded lecture include the opportunity for students to replay the lectures and the availability of captioning, while disadvantages of this format may include a lack of spontaneity or ability to stop and ask a question. The drawbacks may, however, be compensated for by the increased feedback opportunities for students, allowing teachers more opportunities to gauge gaps in student knowledge. The phonetics classroom seems particula...
- Published
- 2017
6. Vowel identification by cochlear implant users: Contributions of duration cues and dynamic spectral cues
- Author
- Soo Hee Oh, Catherine L. Rogers, Lindsay B. Johnson, and Gail S. Donaldson
- Subjects
- Adult, Male, Speech Communication, Speech perception, Time Factors, Acoustics and Ultrasonics, Acoustics, Audiology, Speech Acoustics, Limited access, Young Adult, Arts and Humanities (miscellaneous), Phonetics, Vowel, Cochlear implant, Humans, Aged, Aged, 80 and over, Spectral processing, Middle Aged, Identification (information), Cochlear Implants, Acoustic Stimulation, Duration (music), Case-Control Studies, Speech Perception, Female, Cues, Psychology
- Abstract
A recent study from our laboratory assessed vowel identification in cochlear implant (CI) users, using full /dVd/ syllables and partial (center- and edges-only) syllables with duration cues neutralized [Donaldson, Rogers, Cardenas, Russell, and Hanna (2013). J. Acoust. Soc. Am. 134, 3021–3028]. CI users' poorer performance for partial syllables as compared to full syllables, and for edges-only syllables as compared to center-only syllables, led to the hypotheses (1) that CI users may rely strongly on vowel duration cues; and (2) that CI users have more limited access to dynamic spectral cues than steady-state spectral cues. The present study tested those hypotheses. Ten CI users and ten young normal hearing (YNH) listeners heard full /dVd/ syllables and modified (center- and edges-only) syllables in which vowel duration cues were either preserved or eliminated. The presence of duration cues significantly improved vowel identification scores in four CI users, suggesting a strong reliance on duration cues. Duration effects were absent for the other CI users and the YNH listeners. On average, CI users and YNH listeners demonstrated similar performance for center-only stimuli and edges-only stimuli having the same total duration of vowel information. However, three CI users demonstrated significantly poorer performance for the edges-only stimuli, indicating apparent deficits of dynamic spectral processing.
- Published
- 2015
7. Temporal-order processing of American-English vowel sequences by native and non-native English-speaking listeners
- Author
- Bogyeong Cheon, Catherine L. Rogers, and Gail S. Donaldson
- Subjects
- Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Duration (music), Vowel, American English, Psychophysics, Syllable, Speech processing, Psychology, Variety (linguistics), Linguistics, Task (project management)
- Abstract
To understand the development of native-like proficiency in speech processing, we must consider the apparent ease with which native speakers process speech sounds under a variety of conditions. In the present study, auditory temporal-order processing of American-English vowel sequences was compared across three listener groups: monolingual English speakers and relatively early vs. later learners of English as a second language. Using the methods of Fogerty, Humes and Kewley-Port [2010, J. Acoust. Soc. Am., 127, 2509-2520], 70-ms resynthesized versions of the syllables “pit, pet, put,” and “pot” were presented in a two-syllable temporal-order processing task. Task difficulty was increased by decreasing syllable-onset asynchrony (SOA), i.e., the duration between syllable onsets. SOA thresholds for accuracy of syllable-sequence identification were estimated using the method of constant stimuli on each of four 72-trial blocks. Similar SOA thresholds were obtained for native English speakers and early learners...
- Published
- 2017
8. Examining the temporal structure and information entropy of leopard seal calling bouts
- Author
- John R. Buck and Tracey L. Rogers
- Subjects
- Acoustics and Ultrasonics, Nonparametric statistics, Leopard, Estimator, Markov model, Arts and Humanities (miscellaneous), Sliding window protocol, Subsequence, Statistics, Hydrurga leptonyx, Entropy (information theory), Mathematics
- Abstract
Leopard seals (Hydrurga leptonyx) produce sequences of stereotyped sounds, or bouts, during their breeding season. The seals share common sounds but combine them in individually distinctive sequences. This study examines the underlying structure of the calling bouts by estimating the information entropy of the sound sequences with three entropy estimators. The independent identically distributed (IID) model estimates entropy from the simple frequencies of each sound. The Markov model estimates entropy from the frequency of pairs of sounds. Finally, the nonparametric sliding window match length (SWML) estimator exploits a relationship between the information entropy and the average subsequence match length. A better model for a given sequence achieves a lower entropy estimate. This study analyzed the calling bouts of 35 leopard seals recorded during the 1992-1994 and 1997-1998 Antarctic field seasons. The decrease of entropy estimates between the IID and Markov models for all seals analyzed confirmed the p...
- Published
- 2016
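The first two estimators described in the abstract above can be sketched in a few lines: the IID estimate uses single-symbol frequencies, and the first-order Markov estimate uses the conditional entropy of each symbol given its predecessor. This is a minimal illustration on a toy sequence, not the study's implementation, and the SWML estimator is omitted for brevity.

```python
import math
from collections import Counter

def iid_entropy(seq):
    """IID estimate: entropy (bits/symbol) from single-symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def markov_entropy(seq):
    """First-order Markov estimate: conditional entropy of each symbol
    given the preceding one, weighted by the frequency of each pair."""
    pair_counts = Counter(zip(seq, seq[1:]))
    ctx_counts = Counter(seq[:-1])
    n_pairs = len(seq) - 1
    h = 0.0
    for (a, b), c in pair_counts.items():
        p_pair = c / n_pairs          # probability of the pair (a, b)
        p_cond = c / ctx_counts[a]    # probability of b given a
        h -= p_pair * math.log2(p_cond)
    return h

calls = list("ABABABABAB")  # perfectly alternating toy "bout"
# iid_entropy(calls) -> 1.0 bit; markov_entropy(calls) -> 0.0 bits:
# the Markov model captures sequential structure the IID model misses,
# which is why a lower estimate indicates a better model.
```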
9. Vowel identification by cochlear implant users: contributions of static and dynamic spectral cues
- Author
- Gail S. Donaldson, Catherine L. Rogers, Emily S. Cardenas, Benjamin A. Russell, and Nada H. Hanna
- Subjects
- Adult, Male, Sound Spectrography, Time Factors, Acoustics and Ultrasonics, Voice Quality, Prosthesis Design, Speech Acoustics, Young Adult, Arts and Humanities (miscellaneous), Humans, Correction of Hearing Impairment, Aged, Aged, 80 and over, Auditory Threshold, Recognition, Psychology, Middle Aged, Cochlear Implantation, Cochlear Implants, Persons With Hearing Impairments, Acoustic Stimulation, Case-Control Studies, Speech Perception, Audiometry, Pure-Tone, Female, Cues, Audiometry, Speech
- Abstract
Previous research has shown that normal hearing listeners can identify vowels in syllables on the basis of either quasi-static or dynamic spectral cues; however, it is not known how well cochlear implant (CI) users with current-generation devices can make use of these cues. The present study assessed vowel identification in adult CI users and a comparison group of young normal hearing (YNH) listeners. Stimuli were naturally spoken /dVd/ syllables and modified syllables that retained only quasi-static spectral cues from an 80-ms segment of the vowel center ("C80" stimuli) or dynamic spectral cues from two 20-ms segments of the vowel edges ("E20" stimuli). YNH listeners exhibited near-perfect performance for the unmodified (99.8%) and C80 (92.9%) stimuli and maintained good performance for the E20 stimuli (70.2%). CI users exhibited poorer average performance than YNH listeners for the Full stimuli (72.3%) and proportionally larger reductions in performance for the C80 stimuli (41.8%) and E20 stimuli (29.0%). Findings suggest that CI users have difficulty identifying vowels on the basis of spectral cues in the absence of duration cues, and have limited access to brief dynamic spectral cues. Error analyses suggest that CI users may rely strongly on vowel duration cues when those cues are available.
- Published
- 2013
10. A spark-generated bubble model with semi-empirical mass transport
- Author
- Jeffrey A. Cook, Robert L. Rogers, Austin M. Gleeson, and Randy M. Roberts
- Subjects
- Physics, Nonlinear acoustics, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Oscillation, Bubble, Acoustics, Spark (mathematics), Time evolution, Mechanics, Plasma, Underwater, Ambient pressure
- Abstract
This paper describes the time evolution of bubbles generated by underwater electrical discharges. The oscillations of these high-temperature vapor and plasma bubbles generate acoustic signatures similar to the signatures generated by air guns, underwater explosions, and combustible sources. A set of model equations is developed that allows the time evolution of the bubble generated by a spark discharge to be calculated numerically from a given power input. The acoustic signatures produced by the model were compared to previously recorded experimental data, and the model was found to agree over wide ranges of energy and ambient pressure on several characteristic values of the acoustic signatures. The bubble period in particular matched very well between model and experiment, indicating that the total energy losses predicted by the model over the oscillation of the bubble were approximately correct, although no reliable information was gained about the relative magnitudes of the individual energy loss mecha...
- Published
- 1997
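The bubble-period agreement noted in the abstract above can be put in context with the classical Rayleigh-Willis scaling for an underwater point-energy source: the first oscillation period grows with the cube root of the deposited energy and falls with ambient pressure. This is a textbook scaling law, not the paper's numerical model; the 1-kJ example energy is hypothetical.

```python
import math

def bubble_period(energy_j, depth_m, rho=1025.0, p_atm=101325.0, g=9.81):
    """Rayleigh-Willis estimate (SI units) of the first oscillation period
    (s) for a bubble of energy `energy_j` (J) at depth `depth_m` (m) in
    seawater; 1.14 is the standard empirical coefficient."""
    p_inf = p_atm + rho * g * depth_m  # total hydrostatic ambient pressure, Pa
    return 1.14 * math.sqrt(rho) * energy_j ** (1.0 / 3.0) / p_inf ** (5.0 / 6.0)

# A hypothetical 1-kJ spark near the surface yields a period of a few tens
# of milliseconds; increasing depth (ambient pressure) shortens the period.
```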
11. The energy partition of underwater sparks
- Author
- Robert L. Rogers, Randy M. Roberts, Jeffrey A. Cook, Austin M. Gleeson, and Thomas A. Griffy
- Subjects
- Physics, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Ionization, Acoustics, Bubble, Work (physics), Ab initio, Thermal power station, Energy transformation, Black-body radiation, Mechanics, Underwater
- Abstract
Underwater sparks have long been used by the geophysical prospecting community as a source of intense low-frequency sound. While bubble hydrodynamic models are well developed, the mechanism of transferring energy from the thermal power input, through the various energy conversion channels, to the work done by the bubble has not been adequately studied. In this work an ab initio model of the bubble dynamics, including blackbody ablation, ionization, dissociation, and radiative transport, is developed. This model is a first step in enumerating the important physical mechanisms within bubbles generated by underwater sparks. The predictions of the model are compared with experimental results. Experimental work is still needed to validate the model and to determine if and how the model parameters relate to actual physical parameters and measurable effects.
- Published
- 1996
12. Toneburst-evoked auditory brainstem response in a leopard seal, Hydrurga leptonyx
- Author
- Carolyn J. Hogg, Joy S. Tripovich, Tracey L. Rogers, and Suzanne C. Purdy
- Subjects
- Male, Auditory Pathways, Time Factors, Acoustics and Ultrasonics, Seals, Earless, Acoustics, Audiology, Arts and Humanities (miscellaneous), Audiometry, Hydrurga leptonyx, Otorhinolaryngologic Diseases, Evoked Potentials, Auditory, Brain Stem, Pressure, Reaction Time, Animals, Sound pressure, Age Factors, Leopard, Auditory Threshold, Electroencephalography, Auditory brainstem response, Acoustic Stimulation, Hearing range, Sense organs
- Abstract
Toneburst-evoked auditory brainstem responses (ABRs) were recorded in a captive subadult male leopard seal. Three frequencies from 1 to 4 kHz were tested at sound levels from 68 to 122 dB peak equivalent sound pressure level (peSPL). Results illustrate brainstem activity within the 1-4 kHz range, with better hearing sensitivity at 4 kHz. As is seen in human ABR, only wave V is reliably identified at the lower stimulus intensities. Wave V is present down to levels of 82 dB peSPL in the right ear and 92 dB peSPL in the left ear at 4 kHz. Further investigations testing a wider frequency range on seals of various sex and age classes are required to conclusively report on the hearing range and sensitivity in this species.
- Published
- 2011
13. Perception of silent-center syllables by native and non-native English speakers
- Author
- Alexandra S. Lopez and Catherine L. Rogers
- Subjects
- Adult, Male, Speech perception, Sound Spectrography, Time Factors, Acoustics and Ultrasonics, Multilingualism, Speech Perception, Audiology, Speech Acoustics, Native English, Arts and Humanities (miscellaneous), Phonetics, Vowel, Perception, Humans, Learning, Prosody, Middle Aged, Linguistics, Female, Cues, Psychology
- Abstract
The amount of acoustic information that native and non-native listeners need for syllable identification was investigated by comparing the performance of monolingual English speakers and native Spanish speakers with either an earlier or a later age of immersion in an English-speaking environment. Duration-preserved silent-center syllables retaining 10, 20, 30, or 40 ms of the consonant-vowel and vowel-consonant transitions were created for the target vowels /i, ɪ, eɪ, ɛ, æ/ and /ɑ/, spoken by two males in /bVb/ context. Duration-neutral syllables were created by editing the silent portion to equate the duration of all vowels. Listeners identified the syllables in a six-alternative forced-choice task. The earlier learners identified the whole-word and 40-ms duration-preserved syllables as accurately as the monolingual listeners, but identified the silent-center syllables significantly less accurately overall. Only the monolingual listener group identified syllables significantly more accurately in the duration-preserved than in the duration-neutral condition, suggesting that the non-native listeners were unable to recover from the syllable disruption sufficiently to access the duration cues in the silent-center syllables. This effect was most pronounced for the later learners, who also showed the most vowel confusions and the greatest decrease in performance from the whole-word to the 40-ms transition condition.
- Published
- 2008
14. Age-related differences in the acoustic characteristics of male leopard seals, Hydrurga leptonyx
- Author
- Tracey L. Rogers
- Subjects
- Male, Aging, Sound Spectrography, Time Factors, Acoustics and Ultrasonics, Bioacoustics, Seals, Earless, Acoustics, Zoology, Biology, Sexual Behavior, Animal, Arts and Humanities (miscellaneous), Age related, Hydrurga leptonyx, Otorhinolaryngologic Diseases, Seasonal breeder, Animals, Repertoire, Age Factors, Leopard, Reproducibility of Results, Social function, Echolocation, Vocal learning, Stereotyped Behavior, Vocalization, Animal
- Abstract
During the breeding season, the underwater vocalizations and calling rates of adult male leopard seals are highly stereotyped. In contrast, sub-adult males have more variable acoustic behavior. Although adult males produce only five stereotyped broadcast calls as part of their long-range underwater breeding displays, the sub-adults have a greater repertoire, including the adult-like broadcast calls as well as variants of these. Whether this extended repertoire has a social function is unknown, owing to the paucity of behavioral data for this species. The broadcast calls of the sub-adults are less stereotyped in their acoustic characteristics, and they have a more variable calling rate. These age-related differences have major implications for geographic variation studies, in which the acoustic behavior of different populations is compared, as well as for acoustic surveying studies, in which numbers of calls are used to indicate numbers of individuals present. Sampling regimes that unknowingly include recordings from sub-adult animals will artificially exaggerate differences between populations and numbers of calling animals. The acoustic behavior of sub-adult and adult male leopard seals was significantly different; although this study does not show evidence that these differences reflect vocal learning in the male leopard seal, it does suggest that contextual learning may be present.
- Published
- 2007
15. The effects of noise on learners of English as a second language
- Author
- Catherine L. Rogers
- Subjects
- Acoustics and Ultrasonics, Intelligibility (communication), Linguistics, Noise, Identification (information), Presentation, Arts and Humanities (miscellaneous), Second language, English as a second language, Vowel, Psychology, Function (engineering)
- Abstract
In the 65 years since the publication of The Effects of Noise on Man, as the world has grown ever noisier, it has also grown more diverse and more connected than ever before. Today, nearly a fifth of U.S. residents speak a language other than English at home. Consequently, exploring the effects of noise on people communicating in a second language is a crucial component of understanding the effects of noise on mankind in our increasingly pluralistic society. This presentation will provide an overview of some of the challenges faced by second-language learners as speakers and listeners, particularly focusing on the extraction of phonetic information from a noisy or impoverished signal. Results of ongoing research investigating vowel identification by native and non-native English-speaking listeners will be presented. Overall accuracy, the slope of the function relating signal-to-noise ratio and intelligibility, and effects of noise on listeners’ ability to benefit from phonetic enhancement strategies will be compared ac...
- Published
- 2015
16. Individual variation in the pup attraction call produced by female Australian fur seals during early lactation
- Author
- Rhondda Canfield, Tracey L. Rogers, John P. Y. Arnould, and Joy S. Tripovich
- Subjects
- Acoustics and Ultrasonics, Foraging, Zoology, Imprinting, Psychological, Arts and Humanities (miscellaneous), Discriminant function analysis, Lactation, Animals, Regression tree analysis, Fur Seals, Discriminant Analysis, Acoustics, Attraction, Animals, Suckling, Arctocephalus, Stereotypy (non-human), Variation (linguistics), Animals, Newborn, Tape Recording, Regression Analysis, Female, Vocalization, Animal
- Abstract
Otariid seals (fur seals and sea lions) are colonial breeders, with large numbers of females giving birth on land during a synchronous breeding period. Once pups are born, females alternate between feeding their young ashore and foraging at sea. Upon return, both mother and pup must relocate each other, and this is thought to be facilitated primarily by vocal recognition. Vocalizations of thirteen female Australian fur seals (Arctocephalus pusillus doriferus) were recorded during the breeding seasons of December 2000 and 2001, when pups ranged in age from newborn to one month. The pup attraction call (PAC) was examined to determine whether females produce individually distinct calls which could be used by pups as a basis for vocal recognition. Potential for individual coding, discriminant function analysis (DFA), and classification and regression tree analysis were used to determine which call features were important in separating individuals. Using the results from all three analyses, F0, MIN F, and DUR were considered important in separating individuals. In 76% of cases the PAC was classified to the correct caller using DFA, suggesting that there is sufficient stereotypy within individual calls, and sufficient variation between them, to enable vocal recognition by pups of this species.
- Published
- 2006
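The classification-to-caller idea in the abstract above can be sketched with a deliberately simplified stand-in for discriminant function analysis: leave-one-out assignment of each call to the caller with the nearest class centroid in feature space. The feature vectors and caller names below are hypothetical, chosen only to show the procedure.

```python
def loo_nearest_centroid_rate(samples):
    """Leave-one-out classification rate: each call is assigned to the
    caller whose class centroid (mean feature vector, computed with the
    held-out call excluded) is nearest in squared Euclidean distance.
    A simplification of DFA, not the paper's analysis."""
    correct = total = 0
    for caller, vecs in samples.items():
        for i, held_out in enumerate(vecs):
            best_caller, best_dist = None, float("inf")
            for other, other_vecs in samples.items():
                pool = [v for j, v in enumerate(other_vecs)
                        if other != caller or j != i]
                if not pool:
                    continue
                centroid = [sum(dim) / len(pool) for dim in zip(*pool)]
                dist = sum((a - b) ** 2 for a, b in zip(held_out, centroid))
                if dist < best_dist:
                    best_caller, best_dist = other, dist
            correct += (best_caller == caller)
            total += 1
    return correct / total

# Hypothetical feature vectors [F0 (Hz), MIN F (Hz), DUR (s)] per call:
calls = {
    "female_1": [[300.0, 200.0, 1.0], [310.0, 210.0, 1.1], [305.0, 205.0, 0.9]],
    "female_2": [[500.0, 400.0, 2.0], [510.0, 410.0, 2.1], [505.0, 405.0, 1.9]],
}
# loo_nearest_centroid_rate(calls) -> 1.0 for these well-separated callers
```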
17. Vowel intelligibility and the second-language learner
- Author
- Catherine L. Rogers
- Subjects
- Formant, Speech perception, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Second language, Basic research, Vowel, Vowel perception, Speaking style, Intelligibility (communication), Psychology, Linguistics, Cognitive psychology
- Abstract
Diane Kewley-Port’s work has contributed to our understanding of vowel perception and production in a wide variety of ways, from mapping the discriminability of vowel formants in conditions of minimal uncertainty to vowel processing in challenging conditions, such as increased presentation rate and noise. From the results of these studies, we have learned much about the limits of vowel perception for normal-hearing listeners and the robustness of vowels in speech perception. Continuously intertwined with this basic research has been its application to our understanding of vowel perception and vowel acoustics across various challenges, such as hearing impairment and second-language learning. Diane’s work on vowel perception and production by second-language learners and ongoing research stemming from her influence will be considered in light of several factors affecting communicative success and challenge for second-language learners. In particular, we will compare the influence of speaking style, noise, and syllable disruption on the intelligibility of vowels perceived and produced by native and non-native English-speaking listeners.
- Published
- 2014
18. Challenges for second-language learners in difficult acoustic environments
- Author
- Catherine L. Rogers
- Subjects
- Vocabulary, Speech perception, Acoustics and Ultrasonics, Internet privacy, Cognition, Presentation, Arts and Humanities (miscellaneous), Second language, Everyday tasks, Customer service, Second language learners, Psychology
- Abstract
Almost anyone who has lived in a foreign country for any length of time knows that even everyday tasks can become tiring and frustrating when one must accomplish them while navigating a seemingly endless maze of unfamiliar social customs, vocabulary, and speech that seems far removed from one’s language-laboratory experience. Add to these challenges noise, reverberation, and/or cognitive demand (e.g., learning calculus, or responding to multiple customer and co-worker demands), and even experienced learners may begin to question their proficiency. This presentation will provide an overview of the speech perception and production challenges faced by second-language learners in difficult acoustic environments that we may encounter every day, such as large lecture halls and retail or customer-service settings, to name a few. Past and current research investigating the effects of various environmental challenges on both relatively early and later learners of a second language will be considered, as well as strategies that may mitigate challenges for both speakers and listeners in some of these conditions.
- Published
- 2014
19. Graduate studies in acoustics, Speech and Hearing at the University of South Florida, Department of Communication Sciences and Disorders
- Author
- Catherine L. Rogers
- Subjects
- Speech perception, Acoustics and Ultrasonics, Hearing loss, Acoustics, Variety (cybernetics), Arts and Humanities (miscellaneous), Applied research, Quality (business), Professional association, Communication sciences, Auditory Physiology, Psychology
- Abstract
This poster will provide an overview of programs and opportunities for students who are interested in learning more about graduate studies in the Department of Communication Sciences and Disorders at the University of South Florida. Ours is a large and active department, offering students the opportunity to pursue either basic or applied research in a variety of areas. Current strengths of the research faculty in the technical areas of Speech Communication and Psychological and Physiological Acoustics include the following: second-language speech perception and production, aging, hearing loss and speech perception, auditory physiology, and voice acoustics and voice quality. Entrance requirements and opportunities for involvement in student research and professional organizations will also be described.
- Published
- 2014
20. An empirical fractal model for corona discharges in salt water
- Author
- J. C. Espinosa, H. M. Jones, Robert L. Rogers, and Austin M. Gleeson
- Subjects
- Fractal, Materials science, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Electric field, Salt water, Waveform, Mechanics, Plasma, Corona, Corona discharge, Voltage
- Abstract
A corona discharge in water and a concomitant acoustic pulse is produced when a high electric field is applied across a pair of electrodes. The corona structure consists of many branching plasma fingers. The acoustic signal is produced by the formation and collapse of a vapor bubble; however, the details of this process are unclear. Furthermore, the relation between the acoustic and fractal stages is unknown. Voltage and current waveforms, obtained previously, are accurately described by a fractal model. High‐speed photographs of discharges have been taken to further investigate the nature of corona discharges and elucidate the relationship between the two stages. From these experiments, an empirical model is under development that incorporates both the fractal and acoustic aspects of corona discharges. The most recent results of this effort will be presented. [Work supported by the Office of Naval Research.]
- Published
- 1996
21. Cylindrical bubble evolution and acoustic signature through the arc phase of an electrical discharge
- Author
-
David L. Fisher and Robert L. Rogers
- Subjects
Materials science ,Acoustics and Ultrasonics ,business.industry ,Bubble ,Mechanics ,Plasma ,Radius ,Spherical model ,Arc (geometry) ,Plasma arc welding ,Optics ,Arts and Humanities (miscellaneous) ,Physics::Plasma Physics ,Electrode ,Electric discharge ,business - Abstract
The arc phase of an electrical discharge in salt water is investigated using a 1‐D nonlinear fluid model in cylindrical (r) coordinates. For most electrode geometries, the arc phase is more accurately modeled using a cylindrical geometry than the usual, simpler‐to‐implement spherical model, because the cylindrical geometry is preferred until the bubble radius becomes comparable to the distance between the electrodes. Both the bubble's external region (water) and internal region (dissociated water and plasma) are discretized and modeled with nonlinear fluid equations. The model includes the energy flow from the capacitor, into the plasma arc through its resistivity, and then finally into the hydroacoustic pulse. Simulation results will be compared to experiments. The efficiency of various electrode configurations will also be investigated. [Work supported by the Office of Naval Research.]
- Published
- 1996
22. Perception of conversational and clear speech syllables by native and non-native English-speaking listeners
- Author
-
Catherine L. Rogers, Marissa Voors, and Jenna Luque
- Subjects
Native english ,Psychometric function ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,English as a second language ,Perception ,media_common.quotation_subject ,Psychology ,Linguistics ,media_common - Abstract
In a recent study, later, but not earlier, learners of English as a second language produced a smaller clear-speech benefit than native English-speaking talkers for vowels produced in six /bVd/ syllables (Rogers et al., 2010, JASA 123, 410–423). The present study compares perception of the same syllables by native and non-native English-speaking listeners. Conversational and clear-speech productions of the target syllables, “bead, bid, bayed, bed, bad,” and “bod,” were selected from three monolingual English speakers who had produced a significant clear-speech benefit in Rogers et al. (2010). The syllables were then mixed with noise at several signal-to-noise ratios (SNRs). Perception of these stimuli by three groups of listeners will be examined: (1) monolingual native English speakers, (2) ‘early’ learners of English as a second language, with an age of immersion (AOI) of 12 or earlier, and (3) later learners of English as a second language, with an AOI of 15 or later. Analyses of results of the six-alternative forced-choice task will focus on comparisons across listener groups, for the following measures: (1) estimates of clear-speech benefit at approximately 50% correct; (2) performance at a common SNR; and (3) estimates of the slope of the psychometric function. [Work supported by NIH.]
- Published
- 2014
23. Effects of vowel duration and increasing dynamic spectral information on identification of center-only and edges-only syllables by cochlear-implant users and young normal-hearing listeners
- Author
-
Gail S. Donaldson, Soo Hee Oh, Lindsay B. Johnson, and Catherine L. Rogers
- Subjects
Identification (information) ,medicine.medical_specialty ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Duration (music) ,Vowel ,Cochlear implant ,medicine.medical_treatment ,medicine ,Audiology ,Psychology ,behavioral disciplines and activities ,psychological phenomena and processes - Abstract
In a previous study, cochlear implant (CI) users’ vowel-identification performance was compared to that of young normal-hearing (YNH) listeners. Stimuli included full syllables and two duration-neutralized conditions: center-only and edges-only (silent-center). CI users performed more poorly than YNH listeners overall and showed proportionately larger decrements in performance for partial syllables. Error analyses suggested that at least some CI users rely more heavily on vowel-duration cues than YNH listeners. The present study was designed to test this hypothesis and to determine whether increasing duration of dynamic cues in the edges-only conditions would improve performance, particularly among poorer-performing CI users. Ten YNH listeners and ten adult CI users heard /dVd/ syllables recorded from three talkers. Full syllables were edited to create center-only and edges-only stimuli in which vowel duration cues were or were not preserved, plus edges-only stimuli with different durations of dynamic information. Performance of both groups improved in the duration-preserved condition for center-only, but not edges-only, stimuli. The center-only duration benefit was larger for the CI than for the YNH group. Increasing the duration of dynamic information in the silent-center stimuli improved vowel-identification performance for both groups. Individual differences among CI users and implications for listener-training programs will be discussed.
- Published
- 2013
24. Preliminary comparison of second-formant discrimination thresholds in cochlear implant users and young normal-hearing listeners
- Author
-
B. Russell, Catherine L. Rogers, Amanda J. Cooley, and Gail S. Donaldson
- Subjects
education.field_of_study ,medicine.medical_specialty ,Acoustics and Ultrasonics ,medicine.medical_treatment ,Population ,Limiting ,Audiology ,Formant ,Arts and Humanities (miscellaneous) ,Cochlear implant ,Vowel perception ,medicine ,education ,Psychology - Abstract
Formant discrimination thresholds (FDTs) may provide insights regarding factors limiting vowel perception by cochlear implant (CI) users, but have not been systematically studied in this population. In the present study, estimates of second-formant (F2) FDTs obtained in three CI users were compared to FDTs obtained from three young normal-hearing (YNH) listeners. Procedures and stimuli were modeled after Kewley-Port and Watson (1994, JASA 95, 485-96) but employed fewer trials and an expanded F2 frequency range. Stimuli were formant-synthesized versions of three target vowels. FDTs were estimated using an adaptive 3AFC task with feedback and based on six consecutive 80-trial stimulus blocks. FDTs for the three YNH listeners were comparable to previously reported FDTs (2.4% of reference frequency versus 1.5% in Kewley-Port and Watson). FDTs for two of the CI users were about 70% larger than the average for the YNH listeners. FDTs for the third CI user approached YNH average values in one frequency region but were enlarged in another region. Data for this CI user could not be explained by place-pitch thresholds (obtained in a previous study) and suggest that CI users' spectral acuity for complex stimuli may not be directly predictable from measures of spectral acuity for simple stimuli.
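As a worked example of the relative-threshold scale used above, the sketch below converts a hypothetical absolute F2 threshold in Hz into a percent-of-reference FDT. The function name and the numeric values are illustrative assumptions, not the study's data or code.

```python
def fdt_percent(delta_f_hz, reference_hz):
    """Express a formant discrimination threshold (a Weber fraction)
    as a percentage of the reference formant frequency."""
    return 100.0 * delta_f_hz / reference_hz

# Hypothetical illustration: a 48-Hz threshold at a 2000-Hz F2
# reference corresponds to a 2.4% FDT, the relative scale in which
# the thresholds above are reported.
print(fdt_percent(48.0, 2000.0))  # → 2.4
```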
- Published
- 2013
25. A moving target? Comparing within-talker variability in vowel production between native and non-native English speakers across two speech styles
- Author
-
Catherine L. Rogers, Amber Gordon, and Melitza Pizarro
- Subjects
Correlation ,Speech production ,Formant ,Native english ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,English as a second language ,Speech recognition ,Speaking style ,Vowel ,Intelligibility (communication) ,Psychology ,Linguistics - Abstract
Non-native English speakers may show greater variability in speech production than native talkers due to differences in their developing representations of second-language speech targets. Few studies have compared within-talker variability in speech production between native and non-native speakers. In the present study, vowels produced by four monolingual English speakers and four later learners of English as a second language (Spanish L1) were compared. Five repetitions of six target syllables (“bead, bid, bayed, bed, bad” and “bod”), produced in conversational and clear speech styles, were analyzed acoustically. Fundamental and formant frequencies were measured at 20, 50 and 80% of vowel duration. Standard deviations computed across the five repetitions of each vowel were compared across speaking styles and talker groups. Preliminary data analyses indicate greater within-talker variability for non-native than native talkers. Non-native talkers’ within-talker variability also increased from conversational to clear speech for most measures. For some native talkers, within-talker variability was smaller for vowels with near neighbors in the vowel space than for vowels with more spectrally distant neighbors. This correlation was stronger in clear speech for talkers who showed a significant clear-speech intelligibility benefit in production in a related study. Implications for theories of vowel production will be discussed.
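The within-talker variability measure described above (standard deviations across five repetitions at three time points) can be sketched as follows. The F2 values are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical F2 measurements (Hz) for one talker's five repetitions
# of one vowel, sampled at 20%, 50%, and 80% of vowel duration.
# Shape: (repetitions, time_points).
f2 = np.array([
    [1850.0, 1990.0, 1900.0],
    [1870.0, 2010.0, 1920.0],
    [1820.0, 1975.0, 1885.0],
    [1900.0, 2040.0, 1950.0],
    [1860.0, 2000.0, 1910.0],
])

# Within-talker variability: sample SD across the five repetitions,
# computed separately at each time point.
within_talker_sd = f2.std(axis=0, ddof=1)
print(within_talker_sd.round(1))
```

These per-time-point SDs could then be compared across speaking styles and talker groups, as in the study.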
- Published
- 2012
26. Contributions of static and dynamic spectral cues to vowel identification by cochlear implant users
- Author
-
B. Russell, Nada H. Hanna, Emily S. Cardenas, Catherine L. Rogers, and Gail S. Donaldson
- Subjects
medicine.medical_specialty ,Identification (information) ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Duration (music) ,Cochlear implant ,medicine.medical_treatment ,Vowel ,Acoustics ,Vowel perception ,medicine ,Audiology ,Psychology - Abstract
Relatively little is known about cochlear implant (CI) users' ability to make use of static versus dynamic spectral cues in vowel perception tasks. The present study measured vowel identification in CI users and young normal-hearing (YNH) listeners using naturally produced /dVd/ stimuli (deed, did, Dade, dead, dad, dud, and Dodd). Vowel identification was tested for (1) the unmodified syllables, (2) syllables modified to retain only 60 or 80 ms of the vowel center (center-only conditions), and (3) syllables modified to retain only 30 or 40 ms of the initial and final vowel transitions, with vowel duration neutralized (edges-only conditions). YNH listeners achieved high levels of performance for the unmodified stimuli (avg. 99.8%) and for the center-only stimuli (90.8%); their performance dropped to more moderate levels (68.1%) for the edges-only stimuli. CI users demonstrated moderate performance for the unmodified stimuli (avg. 72.0%) but demonstrated substantially poorer performance for both the center-only...
- Published
- 2011
27. Lexical-neighborhood and competing-task effects on word recognition and word recall by native and non-native listeners
- Author
-
Catherine L. Rogers, April M. Frenton, and Heather A. Hoefer
- Subjects
Spoken word ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Recall ,Speech recognition ,Word Recall ,QUIET ,Word recognition ,Active listening ,Psychology - Abstract
In spoken-word recognition, high-frequency words with few and less frequently occurring minimal-pair neighbors (lexically easy words) are recognized more accurately than low-frequency words with many and more frequently occurring neighbors (lexically hard words). This easy-hard word effect has been found to be larger for non-native listeners with a relatively late age of immersion in an English-speaking environment. Previous research found no effect of a competing digit-recall task on spoken-word recognition and no effect of listener group or listening environment on digit-recall accuracy. The present study compares word recognition by native English-speaking listeners and non-native listeners with either earlier (age 10 or earlier) or later (age 14 or later) ages of immersion in an English-speaking environment. Spoken word lists composed of equal numbers of lexically easy and hard target words were presented in an open-set word-identification task. Spoken words were presented in quiet and in moderate background noise and preceded by a list of zero, three, or six visually presented words, which listeners were asked to recall following the spoken word-recognition task (competing task). The size of the easy-hard word effect, spoken-word recognition accuracy, competing-task word-recall accuracy, and word-entry response time will be compared across listener groups and listening conditions.
- Published
- 2011
28. The vowel tango: Rethinking vowel‐inherent spectral change
- Author
-
Merete M. Glasbrenner, Catherine L. Rogers, Teresa M. DeMasi, and Michelle Bianchi
- Subjects
Masking (art) ,Speech production ,Acoustics and Ultrasonics ,Computer science ,Linguistics ,language.human_language ,Formant ,Variation (linguistics) ,Arts and Humanities (miscellaneous) ,Vowel ,Monophthong ,Mid vowel ,language ,Set (psychology) ,North American English - Abstract
Vowel‐inherent spectral change (VISC) refers to vowel‐intrinsic formant movement across a vowel steady state. VISC has been shown to (1) be consistent across talkers within a given dialect, (2) vary regularly across vowels within a dialect, (3) vary regularly across dialects, and (4) be necessary for peak vowel‐identification accuracy. Hence, VISC has become accepted as a phonetic feature of monophthong vowels of North American English. VISC is typically portrayed using averages across tokens and talkers, highlighting regularity but potentially masking individual differences. To understand vowel production by second‐language learners, we were particularly interested in such individual variation. In analyzing individual differences for neighboring target vowels, we found no single time point at which all sets of target vowel tokens were well distinguished from one another. However, looking across three time points, all native‐speaker vowel sets were well distinguished from each possible neighbor set at some time point. Thus, VISC can be seen as the steps in a sort of dance, as each vowel moves to avoid overlapping with another, ultimately causing overlap with another and then more movement. This perspective is compatible with models of efficient coding and with stochastic and/or exemplar‐based models of speech production and perception. [NIH‐NIDCD #1R03DC005561‐01A1.]
- Published
- 2011
29. Effects of age of immersion, task demand, and word type on word‐recognition response times by native and non‐native English‐speaking listeners
- Author
-
Astrid Zerla Doty, Catherine L. Rogers, and Judith B. Bryant
- Subjects
Word list ,Native english ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Recall ,Task demand ,QUIET ,Word recognition ,Active listening ,Word type ,Psychology ,Cognitive psychology - Abstract
Although listeners can adapt to many challenging listening conditions, often with little apparent effect on recognition accuracy, speed of processing may also affect an individual’s ability to cope with such challenges in everyday contexts. Native and non‐native listeners with either earlier (age 10 or earlier) or later (age 14 or later) ages of immersion in an English‐speaking environment heard six lists of 24 words, each composed of 12 lexically easy target words (high‐frequency words from sparse, low‐frequency phonological neighborhoods) and 12 lexically hard target words (low‐frequency words from dense, high‐frequency phonological neighborhoods) in an open‐set word‐identification task. Word lists were presented in quiet, in a moderate degree of background noise, and with or without a competing digit‐recall task. In the digit‐recall task, listeners saw three or six digits on the monitor prior to presentation of the word list and were asked to recall the digits at the end of the word‐recognition task. While there was no effect of the added digit‐recall task on word‐recognition accuracy, response times for correctly identified items were significantly longer for the digit‐recall condition, for the later learners of English only. Group and word type effects on response times will also be addressed.
- Published
- 2010
30. Effects of linguistic experience and speaking style on pitch alignment in focus syllables
- Author
-
Catherine L. Rogers and Jennifer Zasimovitch
- Subjects
Conversational speech ,Native english ,Phrase ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Speaking style ,First language ,Psychology ,Utterance ,Linguistics - Abstract
The location and realization of any pitch accents associated with utterance focus can vary across languages. The present study compared the relative locations of fundamental frequency (F0) maxima and minima in monosyllables produced by native English speakers and non‐native English speakers with Spanish as their first language. Ten monolingual English speakers, 15 early learners of English (age of immersion 12 years or earlier), and 10 later learners of English (age of immersion 15 years or later) produced target monosyllables in the carrier phrase “Say ______ again” in conversational and clear speech styles. The relative locations of F0 maxima and minima were computed for each target syllable. Results for the conversational speech tokens showed significant differences in peak (F0 maximum) alignment for the monolingual and early learner groups compared to the later learners and significant differences in alignment of F0 minima among all three talker groups. Early learner average values fell between those for the monolinguals and later learners in both cases. Ongoing analysis suggests a switch to more native‐like patterns for the early but not the later learners in the clear speech style, suggesting a nuanced awareness and control of second‐language prosodic features by the early learners. [NIH‐NIDCD #1R03DC005561‐01A1]
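The relative-location measure for F0 maxima and minima described above can be sketched as below; the F0 contour and frame grid are hypothetical, and real measurements would also need to handle unvoiced frames.

```python
import numpy as np

def relative_location(f0_track):
    """Relative location (0 = syllable onset, 1 = offset) of the
    F0 maximum and minimum within a syllable's F0 track."""
    f0 = np.asarray(f0_track, dtype=float)
    span = len(f0) - 1
    return np.argmax(f0) / span, np.argmin(f0) / span

# Hypothetical rising-falling contour sampled at 11 frames:
# the peak sits 40% of the way through the syllable.
track = [110, 120, 130, 140, 150, 145, 138, 130, 122, 115, 108]
peak_loc, min_loc = relative_location(track)
print(peak_loc, min_loc)
```

Group differences in peak alignment would then correspond to systematic differences in these proportions across talker groups.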
- Published
- 2010
31. Density estimation of leopard seals using a single stationary passive acoustic sensor
- Author
-
Tracey L. Rogers, Holger Klinck, Nadine E. Constantinou, and David K. Mellinger
- Subjects
Spatial density ,Acoustics and Ultrasonics ,biology ,Frequency band ,Acoustics ,Acoustic sensor ,Leopard ,Density estimation ,Recording system ,Arts and Humanities (miscellaneous) ,biology.animal ,Linear regression ,Environmental science ,Spectrogram - Abstract
The objective of this study is to estimate the spatial density of leopard seals using data recorded with a single stationary passive acoustic recording system in the Bransfield Strait, Antarctica, between 2005 and 2007. The most prominent vocalization of the leopard seal—the low double trill (LDT)—is used as a proxy for the presence of the species in the vicinity of the recording system. Because of the stereotypic nature and high frequency of occurrence of the LDT, a long‐term spectrogram approach can be applied to the data sets to reliably detect the presence of the target species. Energy levels in the target frequency band (200–400 Hz) as derived by the long‐term spectrogram analysis are related to the number of manually counted calls extracted for selected periods. A linear regression analysis showed that energy levels are highly correlated with the number of manually counted calls. The number of recorded calls per unit time is converted into number of vocalizing animals per unit time by applying published...
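The calibration step described above (relating band energy from a long-term spectrogram to manually counted calls by linear regression) can be sketched as follows. The data here are synthetic stand-ins generated to mimic a strong linear relation, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-period energy (dB) in the 200-400 Hz
# LDT band, and manual call counts for the same periods. A linear
# relation plus noise mimics the reported high correlation.
band_energy_db = rng.uniform(70.0, 100.0, size=50)
manual_counts = 3.0 * band_energy_db - 150.0 + rng.normal(0.0, 5.0, 50)

# Least-squares fit: counts predicted from band energy, and the
# correlation between the two measures.
slope, intercept = np.polyfit(band_energy_db, manual_counts, 1)
r = np.corrcoef(band_energy_db, manual_counts)[0, 1]
print(round(slope, 2), round(r, 3))
```

Once fitted, the regression lets call counts be estimated for periods where only the automated band-energy measure is available.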
- Published
- 2010
32. Are they really not there? Using passive acoustics to overcome false absences in the study of vocal species that are rare, secretive, or distributed at low densities
- Author
-
Douglas H. Cato, Carolyn J. Hogg, Tracey L. Rogers, and Michaela B. Ciaglia
- Subjects
Audience effect ,geography.geographical_feature_category ,Acoustics and Ultrasonics ,biology ,Range (biology) ,Leopard ,biology.organism_classification ,Arctic ice pack ,Marine species ,Fishery ,Relative index ,Geography ,Arts and Humanities (miscellaneous) ,Abundance (ecology) ,biology.animal ,Hydrurga leptonyx - Abstract
Estimating abundance and spatial use behavior can be challenging for marine species that are rarely sighted. This situation is exacerbated in the polar regions due to the peculiar logistical difficulties of working in the pack ice, which makes survey effort enormously expensive. Presented is a simple approach for modeling sounds per animal over a unit time as a relative index for species where there is information on the production of vocalizations (acoustic behavior including seasonal calling patterns, diurnal calling patterns, inter‐individual stereotypy, inter‐sexual stereotypy, audience effect, and predictable calling rate over a unit of time) and the detection range of those vocalizations (survey distance—theoretical estimates calculated with call intensities). We focus on an Antarctic pack‐ice seal, the leopard seal (Hydrurga leptonyx), for which estimating abundance from survey effort faces particular challenges. Our case study shows that with the advent of more sophisticated marine engineering coupled with effor...
- Published
- 2010
33. Determining the spatial distribution of an Antarctic top predator using passive acoustics
- Author
-
Shawn W. Laffan, David I. Warton, Nadine E. Constantinou, and Tracey L. Rogers
- Subjects
geography ,geography.geographical_feature_category ,Acoustics and Ultrasonics ,biology ,Leopard ,biology.organism_classification ,Spatial distribution ,Arctic ice pack ,Oceanography ,Arts and Humanities (miscellaneous) ,Abundance (ecology) ,biology.animal ,Hydrurga leptonyx ,Environmental science ,Ecosystem ,Relative species abundance ,Apex predator - Abstract
The leopard seal (Hydrurga leptonyx) is one of four species of ice seals in Antarctica with each species occupying a distinct position in the Antarctic sea‐ice ecosystem. Ice seals offer a potential source of information about ecosystem interactions and environmental variability integrated over a variety of spatial and temporal scales. During the austral spring and summer, leopard seals move within the pack ice to breed. Acoustic surveying is necessary to assess their distributions as male leopard seals vocalize underwater as part of their breeding display. During the 1999/2000 austral summer, the relative abundance of adult male leopard seals was determined using underwater passive acoustic point‐transect surveys. The abundance data were combined with environmental data in a geographical information system, and a model was developed to determine what factors of the environment are correlated with their abundance and distribution. The model with the best predictive power showed a trend of increased abunda...
- Published
- 2010
34. Vowel identification by younger and older listeners: Relative effectiveness of vowel edges and vowel centers
- Author
-
Elizabeth K. Talmage, Gail S. Donaldson, and Catherine L. Rogers
- Subjects
Adult ,Male ,Aging ,medicine.medical_specialty ,Signal Detection, Psychological ,Sound Spectrography ,Time Factors ,Speech perception ,Adolescent ,Acoustics and Ultrasonics ,Acoustics ,Audiology ,Speech Acoustics ,Young Adult ,Arts and Humanities (miscellaneous) ,Vowel ,medicine ,Humans ,Aged ,Mathematics ,medicine.diagnostic_test ,Pure tone ,Extramural ,Age Factors ,Auditory Threshold ,Middle Aged ,Jasa Express Letters ,Acoustic Stimulation ,Mid vowel ,Speech Perception ,Audiometry, Pure-Tone ,Cues ,Audiometry ,Syllable - Abstract
Young normal-hearing (YNH) and older normal-hearing (ONH) listeners identified vowels in naturally produced /bVb/ syllables and in modified syllables that consisted of variable portions of the vowel edges (silent-center [SC] stimuli) or vowel center (center-only [CO] stimuli). Listeners achieved high levels of performance for all but the shortest stimuli, indicating that they were able to access vowel cues throughout the syllable. ONH listeners performed similarly to YNH listeners for most stimuli, but performed more poorly for the shortest CO stimuli. SC and CO stimuli were equally effective in supporting vowel identification except when acoustic information was limited to 20 ms.
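The silent-center (SC) and center-only (CO) stimulus manipulations described above amount to zeroing out complementary portions of the vowel. A minimal sketch, assuming the vowel's onset and offset sample indices are already known (the function names and parameters are hypothetical, not the study's tools):

```python
import numpy as np

def center_only(syllable, vowel_on, vowel_off, keep_ms, fs):
    """CO stimulus: keep a keep_ms window around the vowel midpoint;
    replace the rest of the vowel with silence."""
    keep = int(round(keep_ms / 1000.0 * fs))
    mid = (vowel_on + vowel_off) // 2
    out = syllable.copy()
    out[vowel_on:max(vowel_on, mid - keep // 2)] = 0.0
    out[min(vowel_off, mid + keep // 2):vowel_off] = 0.0
    return out

def edges_only(syllable, vowel_on, vowel_off, edge_ms, fs):
    """SC stimulus: keep edge_ms of vowel at each edge and silence
    the vowel middle; consonant portions are untouched."""
    edge = int(round(edge_ms / 1000.0 * fs))
    out = syllable.copy()
    out[vowel_on + edge:vowel_off - edge] = 0.0
    return out
```

In practice such edits are usually applied with short amplitude ramps at the silence boundaries to avoid audible clicks, a detail omitted here for brevity.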
- Published
- 2010
35. Intelligibility of Spanish‐accented English words in noise
- Author
-
Catherine L. Rogers and Jonathan M. Dalby
- Subjects
Comprehension ,Speech perception ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,QUIET ,Speech recognition ,English proficiency ,Cognitive effort ,Intelligibility (communication) ,Psychology ,Connected speech ,Linguistics ,Speech in noise - Abstract
The intelligibility of Mandarin‐accented English sentences, even those spoken by highly proficient non‐native speakers, is degraded more than is native speech when presented to native listeners in noise [Rogers et al. (2004)]. Comprehension of accented speech may require more processing time than native speech even when presented in quiet [Munro and Derwing (1995)]. These effects are similar to effects found by Pisoni and his colleagues for synthetic, as compared to natural speech [Winters and Pisoni (2003)] and together suggest that the ability of native listeners to adapt relatively quickly and effectively to accented speech [Bradlow and Bent (2008); Clark and Garrett (2004)] may come at the expense of increased cognitive effort. The present study examines the effects of noise on the intelligibility of Mandarin‐accented isolated words from speakers representing a wide range of oral English proficiency based on connected‐speech measures. A subset of these words, those with the highest open‐set identification scores as rated by a jury of 10 native listeners, will be presented for identification to a second jury at four signal‐to‐noise ratios: quiet, +10, 0, and −5 dB. Results are compared to those found for connected speech from the same group of talkers. [Work supported by NIH‐NIDCD.]
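Mixing words with noise at a fixed signal-to-noise ratio, as in the presentation conditions above, is conventionally done by scaling the noise against the speech power. A minimal sketch under that standard definition (the demo signal is synthetic; this is not the study's stimulus-preparation code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio equals
    snr_db, then add it to the speech (equal-length arrays)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

# Demo with a synthetic "word" and Gaussian noise at the SNRs used
# in the study (+10, 0, and -5 dB; quiet needs no mixing).
rng = np.random.default_rng(0)
word = np.sin(2 * np.pi * 440 * np.arange(8000) / 16000.0)
for snr in (10.0, 0.0, -5.0):
    mixed = mix_at_snr(word, rng.normal(size=word.size), snr)
```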
- Published
- 2009
36. Spoken word recognition in quiet and noise by native and non‐native listeners: Effects of age of immersion and vocabulary size
- Author
-
Catherine L. Rogers, Judith B. Bryant, and Astrid Zerla Doty
- Subjects
Vocabulary ,Native english ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Spoken word recognition ,media_common.quotation_subject ,QUIET ,Perception ,Word recognition ,Psychology ,Linguistics ,media_common - Abstract
In spoken word recognition, high‐frequency words with few and less frequently occurring minimal‐pair “neighbors” (lexically easy words) are recognized more accurately than low‐frequency words with many and more frequently occurring neighbors (lexically hard words). Bradlow and Pisoni [J. Acoust. Soc. Am. 106, 2074–2085 (1999)] found a larger “easy‐hard” word effect for non‐native than native speakers of English. The present study extends this work by specifically comparing word recognition by non‐native listeners with either earlier (age 10 or earlier) or later (age 14 or later) ages of immersion in an English‐speaking environment to that of native English speakers. Listeners heard six lists of 24 words, each composed of 12 lexically easy and 12 lexically hard target words in an open‐set word‐identification task. Word lists were presented in quiet and in moderate background noise. A substantially larger easy‐hard word effect was obtained only for the later learners, but a measure of oral vocabulary size ...
- Published
- 2009
37. Vowel‐inherent spectral change and the second‐language learner
- Author
-
Merete M. Glasbrenner and Catherine L. Rogers
- Subjects
Acoustics and Ultrasonics ,media_common.quotation_subject ,First language ,American English ,Linguistics ,Native english ,Arts and Humanities (miscellaneous) ,Second language ,English as a second language ,Perception ,Vowel ,Psychology ,Relevant information ,media_common - Abstract
Because vowel inherent spectral change (VISC) is necessary for optimal identification of vowels by native English speakers, learners of English as a second language must acquire relevant information about VISC in order to achieve native-like levels of performance in both perception and production of vowels in English. This chapter reviews studies of both perception and production of VISC by learners of English as a second language, whose first language is Spanish, with either an earlier or later age of immersion in an English-speaking environment. In perception, later learners of English appeared to rely more heavily on duration cues than monolinguals and early learners and, in some cases, to be less able to use VISC to discriminate near neighbors in the vowel space. In production, acoustic analyses were performed for American English vowels produced by participants in each group. The data are examined in terms of the degree of separation achieved by each talker group across the course of the vowel, as represented by three time points (20%, 50%, and 80% of vowel duration). Additional analyses of productions by the most and least intelligible talkers in each group were used to explore individual talkers’ strategies for using VISC to distinguish neighbor vowels from one another.
- Published
- 2009
38. Identification of transition‐only and steady‐state vowels by young and older normal‐hearing listeners
- Author
-
Gail S. Donaldson, Elizabeth K. Talmage, and Catherine L. Rogers
- Subjects
medicine.medical_specialty ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Acoustics ,Vowel ,medicine ,Audiology ,Stimulus (physiology) ,Mathematics - Abstract
Normal‐hearing listeners can identify vowels on the basis of either dynamic or steady‐state (SS) cues. To determine which cues are more effective, vowel identification was measured for full, transition‐only (TN) and SS versions of naturally produced exemplars of the syllables “beeb, bib, babe, beb, bab, and bob.” TN stimuli retained 10, 20, 30, 40, 60, or 80 ms of the consonant‐vowel and vowel‐consonant transitions and were neutralized in overall duration. SS stimuli retained 10, 20, 30, 40, 60, or 80 ms of the vowel center but eliminated transitions. Young normal‐hearing (YNH) and older normal‐hearing (ONH) listeners were assessed. Performance declined as the duration of acoustic information decreased. On average, decrements were similar for TN and SS stimuli except at the shortest duration (TN10 and SS20), where average performance was poorer for the TN stimulus. However, relative performance for short‐duration TN and SS stimuli varied substantially across vowels. TN and SS stimuli produced similar perf...
- Published
- 2008
39. Clear speech effects for vowels produced by monolingual and bilingual talkers
- Author
-
Jean C. Krause, Catherine L. Rogers, and Teresa M. DeMasi
- Subjects
Conversational speech ,Native english ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Second language ,QUIET ,Perception ,media_common.quotation_subject ,Intelligibility (communication) ,Psychology ,Linguistics ,media_common - Abstract
The present study investigates the hypothesis that bilinguals may produce a smaller intelligibility benefit than monolinguals when asked to speak clearly. Three groups of talkers were recorded: 13 monolingual native English speakers, 22 early Spanish‐English bilinguals, with an age of onset of learning English (AOL) of 12 or earlier, and 14 later Spanish‐English bilinguals, with an AOL of 15 or later. Talkers produced the target words ‘‘bead, bid, bayed, bed, bad’’ and ‘‘bod’’ in both clear and conversational speech styles. Two repetitions of each target word were mixed with noise and presented to monolingual English‐speaking listeners in a six‐alternative forced‐choice task across two days of testing. Stimuli were also presented in quiet on two subsequent days of testing. In preliminary data from 13 listeners, the early bilinguals were slightly more intelligible in noise than the monolingual talkers, with both groups showing a similar degree of clear speech benefit. Later bilinguals were less intelligibl...
- Published
- 2007
40. Effects of clear speech on duration and fundamental frequency of vowels produced by monolingual and bilingual talkers
- Author
-
Michelle Bianchi, Catherine L. Rogers, Stefan Frisch, and Jean C. Krause
- Subjects
Conversational speech ,medicine.medical_specialty ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Acoustics ,Vowel ,Significant group ,medicine ,Audiology ,Psychology - Abstract
Prosodic characteristics of vowels produced by monolingual and bilingual talkers were investigated. Ten monolingual, 15 early Spanish‐English bilingual (age of onset of immersion of age 12 or earlier), and ten late Spanish‐English bilingual (age of onset of immersion of age 15 or later) talkers produced the target words ‘‘bead, bid, bayed, bed, bad,’’ and ‘‘bod’’ in conversational and clear speech styles. Vowel duration was computed, and F0 measurements were made at 20%, 50%, and 80% of the vowel duration. A significant group by style by vowel interaction showed that monolingual and early bilingual talkers enhanced inherent duration differences between target vowels by lengthening long vowels significantly more than short vowels in clear speech. The vowels of the late bilingual talkers, by contrast, became more alike in duration in clear than in conversational speech. The monolingual talkers showed a falling F0 pattern from 20% to 80% of the vowel duration in both styles; the late bilingual talkers showed a flat or rising F0 pattern in both styles; and the early bilingual talkers showed a flat or rising pattern in conversational speech, but a falling pattern in clear speech. [Work supported by NIH‐NIDCD No. 5R03DC005561.]
- Published
- 2007
41. Vowel space and formant dynamics of vowels produced by monolingual and bilingual talkers in conversational and clear speech styles
- Author
-
Jean C. Krause, Stefan Frisch, Michelle Bianchi, and Catherine L. Rogers
- Subjects
Formant, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Vowel, Medicine, Audiology, Psychology, Linguistics - Abstract
Formant frequency characteristics of vowels produced by monolingual and bilingual talkers were compared. Ten monolingual, 15 early Spanish‐English bilingual (age of onset of immersion of age 12 or earlier), and ten late Spanish‐English bilingual (age of onset of immersion of age 15 or later) talkers produced the target words ‘‘bead, bid, bayed, bed, bad,’’ and ‘‘bod’’ in conversational and clear speech styles. Measurements of F1 and F2 were made at 20%, 50%, and 80% of vowel duration. Significant group effects for F1 and F2 at 50% of vowel duration showed similar locations and distances between the vowels for the monolingual and early bilingual talkers, except that F2 values were significantly higher for the early bilingual than for the monolingual talkers for the vowels in the target words ‘‘bead, bid,’’ and ‘‘bed.’’ Smaller between‐vowel distances were found for both F1 and F2 for the late bilingual talkers, especially for the vowels in the target words ‘‘bead, bid, bayed,’’ and ‘‘bed.’’ Changes observed in clear speech were relatively modest for all three groups. Comparisons between formant dynamic vectors and angles across groups and style and between talkers showing large versus small degrees of clear speech intelligibility benefit will be presented. [Work supported by NIH‐NIDCD No. 5R03DC005561.]
- Published
- 2007
42. Factors which influence acoustic surveys of marine mammals
- Author
-
Douglas H. Cato, Tracey L. Rogers, and Michaela B. Ciaglia
- Subjects
Acoustics and Ultrasonics, Biology, Range (biology), Population, Leopard, Oceanography, Marine mammal, Geography, Arts and Humanities (miscellaneous), Abundance (ecology), Seasonal breeder, Education - Abstract
Traditionally, many marine mammal populations have been estimated by visual surveys. These count the animals that are available: either seals hauled out on the ice or whales at the water's surface. Corrections are then made to include the animals that were not seen because they were in the water (seals) or under it (whales). However, when the majority of the animals in a population are not available to a visual survey, this approach may be less effective. We therefore investigated whether acoustic surveys offered promise for estimating the distribution and abundance of Antarctic pack‐ice seals. Four acoustic surveys were conducted (October 1996 and 1997; December 1997 and 1999) between longitudes 60°E and 150°E. Surveys were bounded to the south by fast ice, shelf ice, or the Antarctic continent, and to the north by the edge of the pack ice. No crabeater seals were heard. Leopard and Ross seals were highly vociferous in December, coinciding with their breeding season. To predict the area surveyed, we modeled transmission loss and used measurements of received background levels. To estimate the number of seals calling, we modeled calling behavior. A preliminary estimate of 0.13 male leopard seals/km² was calculated, which is in the high‐density range described in the literature.
- Published
- 2005
43. Seasonal and diurnal calling patterns of Ross and leopard seals
- Author
-
Douglas H. Cato, Gayle A. Rowney, Michaela B. Ciaglia, and Tracey L. Rogers
- Subjects
Geography, Oceanography, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Biology, Satellite telemetry, Seasonal breeder, Leopard, Arctic ice pack, Geology - Abstract
The temporal calling patterns of two Antarctic pack‐ice seals, the leopard and Ross seal, were examined. This included the seasonal onset and decline of calling (coinciding with their breeding season) as well as diurnal changes. Understanding calling behavior has important implications for acoustic surveying, since it allows the number of calls to be related to an index of the number of animals present and used to estimate abundance. The monthly changes in diurnal calling and haul‐out patterns (measured via satellite telemetry) were compared. Underwater acoustic recordings were made between 14 October 2003 and 10 January 2004 off Mawson, East Antarctica (66° 44.243′S, 69° 48.748′E). Recordings were made using an Acoustic Recording Package (ARP, by Dr. John Hildebrand, Scripps Institution of Oceanography, La Jolla, CA), which is designed to sit on the seafloor and passively record acoustic signals. The package was deployed at a depth of 1320.7 m. The sampling rate was 500 Hz and the effective bandwidth from 1...
- Published
- 2005
44. Use of spectral change and duration cues in vowel identification by monolingual and bilingual listeners: Evidence from confusion matrices
- Author
-
Catherine L. Rogers, Teresa M. DeMasi, and Merete M. Glasbrenner
- Subjects
Acoustics and Ultrasonics, American English, Audiology, Linguistics, Identification (information), Formant, Arts and Humanities (miscellaneous), Duration (music), Vowel, Mid vowel, Medicine, Confusion, Mathematics - Abstract
The degree to which vowel duration and time‐varying spectral change cues are used in American English vowel identification was investigated for three groups of listeners: (1) monolingual American English listeners; (2) proficient Spanish‐English bilinguals; and (3) less proficient Spanish‐English bilinguals. Digital manipulation and high‐fidelity synthesis procedures (STRAIGHT) were used to create six versions of six target items (bead, bid, bayed, bed, bad, and bod). These consisted of the unaltered whole word and five versions of the isolated vowel: (1) natural vowel—unaltered; (2) synthetic vowel with no cue alteration; (3) synthetic vowel with neutral duration; (4) synthetic vowel with flattened formants; and (5) synthetic vowel with neutral duration and flattened formants. Isolated‐vowel stimuli were presented to listeners for identification in two 120‐trial blocks, followed by 48 whole‐word stimuli. Significant between‐group differences in percent‐correct performance and a significant group by liste...
- Published
- 2005
45. Proficient bilinguals require more information for vowel identification than monolinguals
- Author
-
Alexandra S. Lopez and Catherine L. Rogers
- Subjects
Identification (information), Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Duration (music), Vowel, American English, Psychology, Linguistics, Speech in noise - Abstract
Even proficient bilinguals have been shown to experience more difficulty understanding speech in noise than monolinguals. One potential explanation is that bilinguals require more information than monolinguals for phoneme identification. We tested this hypothesis by presenting gated, silent‐center vowels to two groups of listeners: (1) monolingual American English speakers and (2) proficient Spanish–English bilinguals, who spoke unaccented or mildly accented English. To create the stimuli, two American English speakers were recorded as they read the following items: ‘‘beeb, bibb, babe, bebb, babb,’’ and ‘‘bob.’’ Duration‐preserved silent‐center versions of three tokens of each item were created by retaining varying amounts of the CV and VC transitions (10, 20, 30, or 40 ms) and attenuating the remainder of the vowel center to silence. Duration‐neutral versions of the silent‐center tokens were created by lengthening or shortening the silent portion to match each token's vowel duration to the average for all the ...
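The silent-center manipulation this abstract describes can be sketched in a few lines of array processing. This is a minimal illustration under stated assumptions (a mono waveform with known vowel boundaries, hard zeroing rather than the graded attenuation the authors used); the function name and parameters are hypothetical, not the study's actual code.

```python
import numpy as np

def silent_center(signal, sr, vowel_start_s, vowel_end_s, keep_ms):
    """Return a duration-preserved 'silent-center' token: keep `keep_ms` of
    the CV transition at vowel onset and of the VC transition at vowel
    offset, and silence the remaining vowel center (total length unchanged).
    NOTE: a simplified sketch, not the published stimulus-creation procedure.
    """
    keep = int(sr * keep_ms / 1000)          # transition length in samples
    start = int(sr * vowel_start_s)          # vowel onset (samples)
    end = int(sr * vowel_end_s)              # vowel offset (samples)
    out = signal.copy()                      # leave the input untouched
    center_lo, center_hi = start + keep, end - keep
    if center_lo < center_hi:                # only if the vowel is long enough
        out[center_lo:center_hi] = 0.0       # attenuate center to silence
    return out
```

Because the output has the same number of samples as the input, vowel duration information is preserved even though the steady-state center is silenced; a duration-neutral variant would additionally lengthen or shorten the silent span.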
- Published
- 2004
46. Information entropy analysis of leopard seal vocalization bouts
- Author
-
Douglas H. Cato, John R. Buck, and Tracey L. Rogers
- Subjects
Independent and identically distributed random variables, Acoustics and Ultrasonics, Biology, Leopard, Markov model, Information theory, Sequential structure, Arts and Humanities (miscellaneous), Statistics, Hydrurga leptonyx, Entropy (information theory), Mathematics - Abstract
Leopard seals (Hydrurga leptonyx) are solitary pinnipeds who are vocally active during their brief breeding season. The seals produce vocal bouts consisting of a sequence of distinct sounds, with an average length of roughly ten sounds. The sequential structure of the bouts is thought to be individually distinctive. Bouts recorded from five leopard seals during 1992–1994 were analyzed using information theory. The first‐order Markov model entropy estimates were substantially smaller than the independent, identically distributed model entropy estimates for all five seals, indicative of constraints on the sequential structure of each seal’s bouts. Each bout in the data set was classified using maximum‐likelihood estimates from the first‐order Markov model for each seal. This technique correctly classified 85% of the bouts, comparable to results in Rogers and Cato [Behaviour (2002)]. The relative entropies between the Markov models were found to be infinite in 18/20 possible cross‐comparisons, indicating the...
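The entropy comparison in this abstract can be made concrete with a small sketch: estimate a zero-order (i.i.d.) entropy and a first-order Markov conditional entropy from symbol sequences, then classify a bout by maximum likelihood under each seal's transition model. This is an illustrative reconstruction under stated assumptions (call types coded as symbols, add-alpha smoothing), not the authors' analysis code; the function names and toy data are hypothetical.

```python
import math
from collections import Counter, defaultdict

def iid_entropy(bouts):
    """Zero-order entropy estimate (bits/symbol), treating calls as i.i.d."""
    counts = Counter(sym for bout in bouts for sym in bout)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def fit_markov(bouts, alphabet, alpha=0.5):
    """First-order transition probabilities with add-alpha smoothing."""
    pair_counts = defaultdict(Counter)
    for bout in bouts:
        for a, b in zip(bout, bout[1:]):
            pair_counts[a][b] += 1
    model = {}
    for a in alphabet:
        total = sum(pair_counts[a].values()) + alpha * len(alphabet)
        model[a] = {b: (pair_counts[a][b] + alpha) / total for b in alphabet}
    return model

def markov_entropy(bouts, model):
    """First-order estimate: mean -log2 p(next | current) over transitions."""
    logs = [math.log2(model[a][b])
            for bout in bouts for a, b in zip(bout, bout[1:])]
    return -sum(logs) / len(logs)

def classify(bout, models):
    """Assign a bout to the seal whose Markov model gives it highest likelihood."""
    def loglik(model):
        return sum(math.log2(model[a][b]) for a, b in zip(bout, bout[1:]))
    return max(models, key=lambda seal: loglik(models[seal]))

if __name__ == "__main__":
    # Toy data: two "seals" with different preferred call orderings.
    seal_a = [list("LDLDLDLDLD"), list("DLDLDLDLDL")]   # strict alternation
    seal_b = [list("LLDDLLDDLL"), list("DDLLDDLLDD")]   # doubled calls
    models = {"A": fit_markov(seal_a, ["L", "D"]),
              "B": fit_markov(seal_b, ["L", "D"])}
    print(iid_entropy(seal_a))                   # 1.0 bit: L and D equally common
    print(markov_entropy(seal_a, models["A"]))   # well below 1.0: ordering constrains calls
    print(classify(list("LDLDLDLD"), models))    # -> "A"
```

A Markov-model entropy far below the i.i.d. entropy, as reported for all five seals, indicates exactly this kind of constraint on call ordering within bouts.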
- Published
- 2004
47. Nonsense syllable perception by monolingual and bilingual English speakers
- Author
-
Edna E. Nyang, Maria R. Brea‐Spahn, Catherine L. Rogers, and Mei‐Wa Tam
- Subjects
Speech perception, Acoustics and Ultrasonics, Practice effect, Nonsense, Audiology, Linguistics, Speech in noise, Nonsense syllable, Arts and Humanities (miscellaneous), Task demand, Perception, Medicine, Overall performance, Psychology - Abstract
Understanding speech in demanding environments is essential for daily communication. Previous research has shown that even highly proficient bilinguals may experience greater difficulty than monolinguals in understanding speech in noise. In the present study we further address this issue by examining the effects of varying task demand, fatigue, and practice on speech perception by bilinguals. One group of monolingual English listeners and three groups of Spanish–English bilinguals (bilingual since childhood, since adolescence, or since adulthood) listened to nonsense syllables presented in noise and at increasing presentation rates. Listeners completed the two speech perception tasks on each of two days of testing. On one day the speech tasks were preceded, and on the other day followed, by approximately 40 minutes of testing on nonspeech auditory tasks. Monolingual and bilingual listeners’ overall performance, their performance across the two days of testing (practice effect), and their ...
- Published
- 2002
48. On the relationship between perception and production of American English vowels by native speakers of Japanese: A pair of case studies
- Author
-
Kanae Nishi and Catherine L. Rogers
- Subjects
Acoustics and Ultrasonics, Acoustics, Speech recognition, American English, Arts and Humanities (miscellaneous), Similarity (network science), Vowel, Perception, Mid vowel, Production (computer science), Multidimensional scaling, Psychology - Abstract
In a previous study, 15 American English (AE) vowels in /hVd/ words recorded by native speakers of Japanese (J) were presented in pairs to native speakers of AE. From these data, native‐perceived vowel spaces of J‐accented AE were obtained (Nishi, 2001). In the present study, two male native speakers of Japanese who had served as speakers in the previous study listened to the 15 AE vowels produced by two male native speakers of AE in a similarity rating task. Their perceptual data were analyzed using multidimensional scaling. J‐perceived vowel spaces of AE vowels produced by AE speakers were compared to AE‐perceived vowel spaces of J‐produced AE vowels. Vowels produced by both AE and J speakers were also subjected to acoustic analysis. The acoustic vowel spaces obtained were then compared to the AE and J perceptual vowel spaces. Results revealed considerable differences between the two J speakers in terms of their perception of AE vowels. These differences were found to be strongly correlated with the spe...
- Published
- 2002
49. The effects of training method on frequency discrimination for individual components of complex tonal patterns
- Author
-
Robert F. Port, Catherine L. Rogers, Gary R. Kidd, and Charles S. Watson
- Subjects
Tone (musical instrument), Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Acoustics, Frequency discrimination, Pattern recognition, Artificial intelligence, Training methods, Mathematics - Abstract
It has been assumed that subjects trained to detect increments in the frequency of all components of complex tonal patterns (broad focus) would be less accurate in detecting changes in a single target tone than subjects who have been trained to detect changes in only that component [e.g., Watson et al., J. Acoust. Soc. Am. 60, 1176–1186 (1976)]. In several experiments, using a number of 750‐ms ten‐tone patterns, subjects were trained using one of three methods: in the first two, an S/2AFC procedure was used to train subjects to detect frequency increments in a specific target tone (group one) or to detect frequency increments that could occur in any of the ten components (group two), and in the third, subjects were trained only to identify the individual patterns. Subjects trained using these methods were tested on their ability to detect changes in various components of the patterns, including the target tone for the first group. In all of these experiments, only very slight differences in performance wer...
- Published
- 1993
50. Effects of noise and proficiency level on intelligibility of Chinese‐accented English
- Author
-
Jonathan M. Dalby, Catherine L. Rogers, and Kanae Nishi
- Subjects
Acoustics and Ultrasonics, English proficiency, Intelligibility (communication), Audiology, Mandarin Chinese, Linguistics, Background noise, Native English, Arts and Humanities (miscellaneous), Medicine, Psychology - Abstract
It is known that native speech intelligibility is degraded in background noise. This study compares the effect of noise on the intelligibility of English sentences produced by native English speakers and two groups of native Mandarin speakers with different English proficiency levels. High‐proficiency Mandarin speakers spoke with detectable accents, but their speech was transcribed at about 95% of words correct in a previous study, in which no noise was added [C. Rogers and J. Dalby, J. Acoust. Soc. Am. 100, 2725 (1996)]. Low‐proficiency Mandarin speakers were transcribed at about 80% correct in the same study. Forty‐eight sentences spoken by six speakers (two native, two high proficiency, and two low proficiency) were transcribed by listeners under four conditions: with no added noise and mixed with multi‐talker babble at three signal‐to‐noise ratios (+10, 0, and −5 dB). Transcription accuracy was poor for all speakers in the noisiest condition, although substantially greater for native than for Mandarin...
- Published
- 2001