23 results for "Ruth S. Day"
Search Results
2. Comprehension of Prescription Drug Information: Overview of a Research Program.
- Author
- Ruth S. Day
- Published
- 2006
3. A Synthetic Character Application for Informed Consent.
- Author
- Robert C. Hubal, Curry I. Guinn, Diana C. Sparrow, Evelyn J. Studer, Ruth S. Day, and Wendy Visscher
- Published
- 2004
4. Informed consent procedures: An experimental test using a virtual character in a dialog systems training application.
- Author
- Robert C. Hubal and Ruth S. Day
- Published
- 2006
- Full Text
- View/download PDF
5. FDA Public Meeting Report on 'Drug Interactions With Hormonal Contraceptives: Public Health and Drug Development Implications'
- Author
- Diana L. Blithe, Vivek S. Purohit, Alison Edelman, Mohammad Ahsanul Akbar, Joachim Höchel, Erin Berry-Bibee, Ruth S. Day, Jim A. Turpin, Naomi K. Tepper, Myong-Jin Kim, David G. Strauss, Roxanne Jamshidi, Lei Zhang, Pamela E. Scott, Chongwoo Yu, Li Li, and Haiying Sun
- Subjects
- Drug Interactions, Drug Development, Contraceptive Agents, Female, Humans, United States Food and Drug Administration, Agency (sociology), Pharmacology, Pharmacology (medical), Pharmaceutical industry, Panel discussion, Public Health, Family medicine, United States
- Abstract
Potential drug interactions with hormonal contraceptives are an important public health concern. A public meeting on "Drug Interactions With Hormonal Contraceptives: Public Health and Drug Development Implications" was hosted by the United States Food and Drug Administration (FDA). The meeting gave the FDA an opportunity to seek expert input on the public health concerns associated with the use of hormonal contraceptives together with interacting drugs that might affect efficacy and safety, on pharmacokinetic/pharmacodynamic considerations in the design of drug interaction studies of hormonal contraceptives during drug development, and on approaches to translating drug interaction results into informative labeling and communication. The input received could be used to refine the FDA's thinking on the design and interpretation of hormonal contraceptive drug interaction studies and on the communication of drug interaction risk in labeling. The meeting benefited from strong and diverse participation from the Center for Drug Evaluation and Research at the FDA, the Centers for Disease Control and Prevention, the National Institutes of Health, the Swedish Medical Products Agency, the pharmaceutical industry, and representatives of academia. This report summarizes the key discussion based on the presentations and panel discussion.
- Published
- 2018
- Full Text
- View/download PDF
6. The perception of stop-liquid clusters in phonological fusion
- Author
- James E. Cutting and Ruth S. Day
- Subjects
- Auditory perception, Linguistics and Language, Fusion, Speech recognition, Language and Linguistics, Psycholinguistics, Speech and Hearing, Variation (linguistics), Stop consonant, Perception, Psychoacoustics, Syllable, Mathematics
- Abstract
Phonological fusion occurs when items such as PAY and LAY are presented separately to each ear and listeners report hearing PLAY. Input items that begin with a stop consonant (e.g., /p/) and a liquid (/l/ or /r/) fuse especially well. The present studies examined the effect of various factors on the frequency of fusion responses. Allophonic variation in the liquids (“trilled” versus plain) had no effect on fusion frequency. Phonemic similarity also had no effect; that is, when the input items differed in all phonemes (PAY and LED) they still fused. However, the phonemic order and location of clusters within a syllable did have a large effect: initial stop-liquid clusters fused readily (PAY/LAY yielded PLAY), while final liquid-stop clusters rarely fused (PEEL/PEED rarely yielded PEELED). Various fusion phenomena remained the same when tested in both identification and discrimination paradigms. Finally, fusion scores were not normally distributed over subjects; that is, some subjects fused on most or all trials, while others fused less frequently.
- Published
- 1975
- Full Text
- View/download PDF
7. Processing two dimensions of nonspeech stimuli: The auditory-phonetic distinction reconsidered
- Author
- Mark J. Blechner, Ruth S. Day, and James E. Cutting
- Subjects
- Behavioral Neuroscience, Arts and Humanities (miscellaneous), Experimental and Cognitive Psychology
- Published
- 1976
- Full Text
- View/download PDF
8. Failure of selective attention to phonetic segments in consonant-vowel syllables
- Author
- Charles C. Wood and Ruth S. Day
- Subjects
- Consonant, Speech recognition, Experimental and Cognitive Psychology, Sensory Systems, Linguistics, Variation (linguistics), Vowel, Stop consonant, Mid vowel, Consonant vowel, Selective attention, Psychology, General Psychology, Relative articulation
- Abstract
Subjects performed a two-choice speeded classification task that required selective attention to either the consonant or the vowel in synthetic consonant-vowel (CV) syllables. When required to attend selectively to the consonant, subjects could not ignore irrelevant variation in the vowel. Similarly, when required to attend selectively to the vowel, they could not ignore irrelevant variation in the consonant. These results suggest that information about an initial stop consonant and the following vowel is processed as an integral unit.
- Published
- 1975
- Full Text
- View/download PDF
9. Teaching from notes: Some cognitive consequences
- Author
- Ruth S. Day
- Subjects
- Cognition, Psychology, Education, Cognitive psychology
- Published
- 1980
- Full Text
- View/download PDF
10. Alternative Representations
- Author
- Ruth S. Day
- Published
- 1988
- Full Text
- View/download PDF
11. Verbal Fluency and the Language-Bound Effect
- Author
- Ruth S. Day
- Subjects
- Interpretation (logic), Similarity (psychology), Information processing, Verbal fluency test, Phonetics, Psychology, Psycholinguistics, Sentence, Word (group theory), Cognitive psychology
- Abstract
Individuals previously identified as language-bound (LB) and language-optional (LO) participated in a series of experiments designed to study verbal fluency. The two groups showed a striking similarity in the number of responses they produced for categories with constraints at various levels (word form, word content, sentence, interpretation). This similarity occurred for both written and oral modes of response, and over a wide range of time intervals. Other types of measures, however, suggested that the form(s) in which a given category can be represented affected the ease with which the two groups produced their responses. LBs had more difficulty with categories that lent themselves readily to a spatial representation, while LOs had more difficulty with a category based on phonetic constraints. The results were considered in terms of their implications for the LB phenomenon as well as general approaches to the study of verbal fluency.
- Published
- 1979
- Full Text
- View/download PDF
12. Auditory evoked potentials during speech perception
- Author
- Charles C. Wood, William R. Goff, and Ruth S. Day
- Subjects
- Auditory perception, Adult, Speech perception, Adolescent, Acoustics, Audiology, Electroencephalography, Stimulus (physiology), behavioral disciplines and activities, Lateralization of brain function, Functional Laterality, otorhinolaryngologic diseases, Reaction Time, Humans, Speech, Right hemisphere, Dominance, Cerebral, Electrodes, Evoked Potentials, Multidisciplinary, Brain, Psychology, Binaural recording, psychological phenomena and processes
- Abstract
Neural responses evoked by the same binaural speech signal were recorded from ten right-handed subjects during two auditory identification tasks. One task required analysis of acoustic parameters important for making a linguistic distinction, while the other task required analysis of an acoustic parameter which provides no linguistic information at the phoneme level. In the time interval between stimulus onset and the subjects' identification responses, evoked potentials from the two tasks were significantly different over the left hemisphere but identical over the right hemisphere. These results indicate that different neural events occur in the left hemisphere during analysis of linguistic versus nonlinguistic parameters of the same acoustic signal.
- Published
- 1971
13. Availability and associative symmetry
- Author
- Leonard M. Horowitz, Sandra A. Norman, and Ruth S. Day
- Subjects
- Cognitive model, Rational analysis, Maze learning, Cognition, Association, Memory, Humans, Symmetry (geometry), Psychology, General Psychology, Natural language, Associative property, Cognitive psychology
- Published
- 1966
14. Availability growth and latent verbal learning
- Author
- Margaret A. White, Leonard M. Horowitz, Leah L. Light, and Ruth S. Day
- Subjects
- Unconscious (Psychology), Consciousness, Experimental and Cognitive Psychology, Verbal Learning, Paired-Associate Learning, Gender Studies, Association, Arts and Humanities (miscellaneous), Memory, Humans, Psychology, Cognitive psychology, Language
- Published
- 1968
15. Differences between Language‐Bound and Stimulus‐Bound Subjects in Solving Word Search Puzzles
- Author
- Ruth S. Day
- Subjects
- Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Dichotic listening, Speech sounds, Word search, Stimulus (physiology), Linguistics, Mathematics, Cognitive psychology
- Abstract
Studies of dichotic fusion suggest that “language‐bound” (LB) subjects perceive speech sounds through the abstract linguistic structure of their language, while “stimulus‐bound” (SB) subjects can set aside linguistic rules and make accurate judgments about nonlinguistic events. In the present experiment, subjects of both types were asked to scan a matrix of letters in all directions in order to find words that exemplify a particular theme, e.g., musical instruments. SBs consistently found more words. Perhaps SBs simply have better spatial abilities, since the task requires scanning in eight directions. An alternative view is that the groups have comparable spatial abilities, but that LBs are preoccupied with linguistic operations: given a string of letters, they translate it into “phonetic sense” no matter what direction they happen to scan. For example, the highly pronounceable string TENIPS may obscure the fact that SPINET is spelled out in the reverse direction. Hence the two groups may differ in the relative amount of work performed by the two cerebral hemispheres: SBs are free to rely on right‐hemisphere (spatial) operations to conduct an efficient scan, while LBs are more restricted to left‐hemisphere (linguistic) operations and hence spend less time in effective scanning.
- Published
- 1974
- Full Text
- View/download PDF
16. Temporal Order Perception of a Reversible Phoneme Cluster
- Author
- Ruth S. Day
- Subjects
- Speech perception, Acoustics and Ultrasonics, Dichotic listening, Acoustics, Contrast (statistics), Audiology, Arts and Humanities (miscellaneous), Perception, Order (group theory), Syllable, Psychology
- Abstract
The synthetic syllable /taes/ was presented to one ear, while at the same time /taek/ was presented to the other ear. On some trials, both syllables began at the same time, while on others /taes/ led by 5, 10, 15,⋯, 100 msec, or /taek/ led by these same intervals. When asked to report “what they heard,” subjects often reported /taesk/ or /taeks/. Later, when asked to report “the last sound they heard,” subjects performed well on both /s/ and /k/. These results contrast with those of a previous study involving nonreversible clusters: when asked to report the first phoneme of the dichotic pair /baek/‐/laek/, subjects reported hearing /b/ first, independent of the lead conditions presented. A tentative model for temporal factors in speech perception is proposed. It incorporates considerations of the effects of both parallel and serial transmission of phonemic information. [This research was supported in part by a grant from NICHD to the Haskins Laboratories.]
- Published
- 1970
- Full Text
- View/download PDF
17. Digit Span Memory in Language‐Bound and Stimulus‐Bound Subjects
- Author
- Ruth S. Day
- Subjects
- Serial position effect, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Recall, Dichotic listening, Memory span, Stimulus (physiology), Audiology, Psychology
- Abstract
Dichotic tests involving phonological fusion yield bimodal individual differences. Given items of the general form BANKET/LANKET, some subjects report hearing BLANKET. They report such fusions no matter which item began first. When specifically asked to report the leading phoneme, they report /b/ even when /l/ led by a substantial interval. Since they are reporting the phonological order permitted in English rather than the actual physical events, they have been termed “language bound.” Other subjects, termed “stimulus bound,” do not fuse and are better able to determine which phoneme led. Language‐bound and stimulus‐bound subjects, as defined by the dichotic fusion tests, took some digit‐span memory tests. Nine digits were presented auditorily on each trial and subjects had to recall each item in the appropriate serial position. Stimulus‐bound subjects displayed significantly superior memory. In digit‐span experiments, performance is typically best at the beginning and end of the list, with a sizeable drop in percent correct for middle‐of‐the‐list items. Stimulus‐bound subjects showed only a slight decrease in recall for the middle items, whereas language‐bound subjects showed a very great decrease. Implications concerning short‐term memory capacity and stimulus encoding are discussed.
- Published
- 1973
- Full Text
- View/download PDF
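A note on the scoring described in item 17 above: recall is credited only when a digit is reported in its original serial position, and the serial-position curve is simply percent correct at each of the nine positions. The short sketch below illustrates that scoring; the function name and example trials are hypothetical and are not taken from the study.

# Illustrative sketch (assumption: strict serial-position scoring as described in item 17).
# A digit counts as correct only if it is recalled in the position where it was presented.

def serial_position_curve(trials):
    """Percent correct at each serial position across trials.
    Each trial is a (presented, recalled) pair of equal-length digit lists."""
    n_positions = len(trials[0][0])
    correct = [0] * n_positions
    for presented, recalled in trials:
        for i, digit in enumerate(presented):
            if i < len(recalled) and recalled[i] == digit:
                correct[i] += 1
    return [100.0 * c / len(trials) for c in correct]

# Two hypothetical 9-digit trials: recall is strong at the ends, weaker in the middle.
trials = [
    ([3, 1, 4, 1, 5, 9, 2, 6, 5], [3, 1, 4, 7, 8, 9, 2, 6, 5]),
    ([2, 7, 1, 8, 2, 8, 1, 8, 2], [2, 7, 1, 8, 5, 3, 1, 8, 2]),
]
print(serial_position_curve(trials))  # [100.0, 100.0, 100.0, 50.0, 0.0, 50.0, 100.0, 100.0, 100.0]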
18. A Parallel between Degree of Encodedness and the Ear Advantage: Evidence from a Temporal‐Order Judgment Task
- Author
- James M. Vigorito and Ruth S. Day
- Subjects
- Acoustics and Ultrasonics, Dichotic listening, Acoustics, Speech sounds, Stimulus (physiology), Audiology, behavioral disciplines and activities, Arts and Humanities (miscellaneous), Stop consonant, otorhinolaryngologic diseases, sense organs, Psychology
- Abstract
Some speech sounds are more highly encoded than others. In acoustic terms, this means that they undergo more restructuring as a function of neighboring phonemes. In psychological terms, it may mean that special processing is required to perceive them. Stop consonants appear to be the most highly encoded speech sounds, vowels the least encoded, with other sounds falling in the middle. Stops, liquids, and vowels served as target phonemes in tests of dichotic temporal‐order judgment (TOJ). A different syllable was presented to each ear with one leading by 50 msec, e.g., BAE(50)/GAE. Subjects reported which syllable began first. Ear difference scores were obtained by taking percent‐correct TOJ on trials where a given ear received the leading stimulus and subtracting percent‐correct TOJ on trials where the other ear led. Stop consonant pairs yielded a right‐ear score, liquids a reduced right‐ear score, and vowels a left‐ear score. A right‐ear advantage in dichotic listening is usually interpreted as reflecting...
- Published
- 1973
- Full Text
- View/download PDF
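The ear-difference score described in item 18 above amounts to percent-correct temporal-order judgment (TOJ) when one ear receives the leading stimulus minus percent-correct TOJ when the other ear leads. A minimal sketch of that computation follows; the trial fields and example data are illustrative assumptions, not material from the study.

# Illustrative sketch (assumption: ear-difference scoring as described in item 18).
# Each trial records which ear received the leading stimulus and whether the
# temporal-order judgment (TOJ) was correct.

def ear_difference_score(trials):
    """Percent-correct TOJ when the right ear leads minus percent-correct TOJ
    when the left ear leads; positive values indicate a right-ear advantage."""
    def percent_correct(ear):
        subset = [t for t in trials if t["leading_ear"] == ear]
        return 100.0 * sum(t["correct"] for t in subset) / len(subset)
    return percent_correct("right") - percent_correct("left")

# Four hypothetical trials with stop-consonant syllables such as BAE(50)/GAE.
trials = [
    {"leading_ear": "right", "correct": True},
    {"leading_ear": "right", "correct": True},
    {"leading_ear": "left", "correct": False},
    {"leading_ear": "left", "correct": True},
]
print(ear_difference_score(trials))  # 50.0, i.e., a right-ear advantage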
19. Dichotic Fusion along an Acoustic Continuum
- Author
- Ruth S. Day and James E. Cutting
- Subjects
- Fusion, Formant, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Dichotic listening, Audiology, Binaural recording, Mathematics
- Abstract
When stimuli such as banket and lanket are presented dichotically, phonemic fusions often occur: subjects report hearing blanket. Previous studies have shown that stop + /r/ and stop + /l/ items have different fusion properties. For example, /l/ was sometimes substituted for /r/ (but rarely vice versa): gocery/rocery → (yielded) glocery. The present experiment varied the liquid stimuli along an acoustic continuum involving the third formant transition. For example, one set varied from ray to lay. Each was paired dichotically with an initial stop stimulus, in this case pay. All inputs (pay, ray, lay) and possible fusions (pray, play) were acceptable English words. When asked to report “what they heard,” subjects gave many fusion responses. Of these, there was a preponderance of stop + /l/ fusions (88% vs 12%). They occurred even for pairs where the liquid item was reported as an /r/ during separate binaural identification trials. Thus, given that an item was identified as ray, the same subjects reported heari...
- Published
- 1972
- Full Text
- View/download PDF
20. Memory for Dichotic Pairs: Disruption of Ear Report Performance by the Speech‐Nonspeech Distinction
- Author
- Ruth S. Day, James E. Cutting, and James C. Bartlett
- Subjects
- Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Dichotic listening, Duplex perception, otorhinolaryngologic diseases, sense organs, Stimulus (physiology), Audiology, Psychology
- Abstract
When dichotic pairs are presented in rapid succession, the best strategy is usually to segregate the items by ear of arrival. If the stimuli fall into distinguishable classes such as letters and digits, then report by class is roughly comparable to that obtained by the ear method. In the present experiment, a stimulus class distinction markedly interfered with listeners' ability to use the ear report method. The classes were speech (/ba, da, ga/) and nonspeech (500‐, 700‐, and 1000‐Hz tones). A trial consisted of three successive pairs of speech to one ear and nonspeech to the other. In separate blocks of trials, subjects reported the order of arrival for a given stimulus class (speech or nonspeech) or for a given ear (left or right). Performance was excellent for both types of report as long as all the speech went to one ear and all the nonspeech went to the other ear. However, when speech and nonspeech switched between the ears during a trial, class report remained excellent but ear report dropped substantially. The status of stimuli as speech or nonspeech, then, is a fundamental distinction, one that can reduce the effectiveness of the normally useful ear report method in dichotic memory tests.
- Published
- 1973
- Full Text
- View/download PDF
21. Perceptual Competition between Speech and Nonspeech
- Author
- James E. Cutting and Ruth S. Day
- Subjects
- Acoustics and Ultrasonics, Duplex perception, Dichotic listening, Acoustics, Audiology, Stimulus (physiology), Arts and Humanities (miscellaneous), Perception, otorhinolaryngologic diseases, sense organs, Psychology
- Abstract
In contrast with previous dichotic listening experiments that delivered either two speech messages (speech/speech) or two nonspeech messages (nonspeech/nonspeech) on each trial, the present study used mixed trials: speech to one ear and non‐speech to the other ear (speech/nonspeech). The relative onset time of each dichotic pair was varied over a ±200‐msec range. Subjects were asked to report which stimulus began first on every trial. Processing Time: If, according to a speech mode hypothesis, there is a set of processors specialized for speech, then speech stimuli might well require more processing time than nonspeech stimuli. The data support this view: subjects accurately reported hearing a nonspeech stimulus first when it led by a small time interval, whereas they were unable to determine that a speech stimulus began first unless it led by a much greater interval. Ear Advantage: Other studies have found a right‐ear superiority for speech/speech presentations and a left‐ear superiority for nonspeech/nonspeech. Instead of finding a superiority of an ear for its “proper” type of stimulus, the current study obtained over‐all left‐ear performance that was more accurate than right‐ear performance. The study suggests that there are different mechanisms for the perception of speech and nonspeech.
- Published
- 1971
- Full Text
- View/download PDF
22. Separate Speech and Nonspeech Processing in Dichotic Listening
- Author
- James C. Bartlett and Ruth S. Day
- Subjects
- Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Duplex perception, Dichotic listening, Acoustics, otorhinolaryngologic diseases, Word error rate, Audiology, Stimulus (physiology), Psychology
- Abstract
Temporal order judgment (TOJ) in dichotic listening can be a difficult task. Previous experiments that used two speech stimuli on each trial (S/S) obtained sizable error rates when subjects were required to report which ear led (TOJ‐by‐ear). When subjects were required to identify the leading stimulus (TOJ‐by‐stimulus), the error rate increased substantially. Apparently, the two speech stimuli were competing for analysis by the same processor, and so were overloading it. The present experiment used the same TOJ tasks, but presented a speech and a nonspeech stimulus on each trial (S/NS). The error rate was comparable to that of S/S for TOJ‐by‐ear, but did not increase for TOJ‐by‐stimulus. This would be expected if the speech and nonspeech stimuli are being sent to different processors, each of which performs its analysis without interference from the other. The interpretation of the data given here is consistent with the results of standard identification experiments reported elsewhere: when asked to identify both stimuli on each dichotic trial, subjects made many errors on S/S, while performance was virtually error‐free on S/NS.
- Published
- 1972
- Full Text
- View/download PDF
23. Mutual Interference between Two Linguistic Dimensions of the Same Stimuli
- Author
- Charles C. Wood and Ruth S. Day
- Subjects
- Variation (linguistics), Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Dimension (vector space), Basis (linear algebra), Interference (wave propagation), Binaural recording, Linguistics, Mathematics
- Abstract
In a previous study subjects identified binaural stimuli that varied along both a linguistic and a nonlinguistic dimension. The linguistic dimension consisted of variation in stop consonants while the nonlinguistic dimension consisted of variation in fundamental frequency. There were four stimuli: /ba/-low, /ba/-high, /da/-low, /da/-high. Reaction times were obtained in a two-choice identification task when the target dimension was the only one that varied. When there was also irrelevant variation in the nontarget dimension, reaction times increased substantially for the linguistic dimension, but only slightly for the nonlinguistic dimension. Thus the nonlinguistic dimension interfered with the processing of the linguistic dimension more than vice versa. The present study employed the same paradigm, but used two linguistic dimensions: stop consonants and vowels. The stimuli were /ba, bae, da, dae/. Reaction times increased substantially for both dimensions when there was also irrelevant variation in the nontarget dimension. Thus both dimensions interfered with each other to the same extent. On the basis of the dimensions examined in this paradigm thus far, it appears that two linguistic dimensions yield a mutual interference effect, while a linguistic and a nonlinguistic dimension yield a unidirectional effect.
- Published
- 1972
- Full Text
- View/download PDF
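Items 8 and 23 above rest on the same reaction-time comparison: classification speed when only the target dimension varies versus when the nontarget dimension also varies irrelevantly. The sketch below illustrates that comparison; the condition labels and reaction times are hypothetical, not data from either study.

# Illustrative sketch (assumption: interference measured as the increase in mean
# reaction time when the nontarget dimension varies irrelevantly, per items 8 and 23).

from statistics import mean

def interference_effect(rts_control_ms, rts_irrelevant_ms):
    """Mean reaction time with irrelevant variation minus mean reaction time
    when only the target dimension varies (milliseconds)."""
    return mean(rts_irrelevant_ms) - mean(rts_control_ms)

# Hypothetical times for classifying the consonant (/b/ vs. /d/) in CV syllables.
rt_control = [420, 435, 410, 428]     # vowel held constant
rt_irrelevant = [468, 455, 472, 461]  # vowel varies irrelevantly
print(interference_effect(rt_control, rt_irrelevant))  # 40.75 ms of interference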