11 results for "Jiang, Yuhong"
Search Results
2. Prognostic and immunological role of acetaldehyde dehydrogenase 1B1 in human tumors: A pan-cancer analysis.
- Author
- Kuang, Yong, Feng, Jiahao, Jiang, Yuhong, Jin, Qianqian, Wang, Qi, Zhang, Changhua, and He, Yulong
- Published
- 2023
- Full Text
- View/download PDF
3. Tell me what you saw: The usefulness of verbal descriptions for others.
- Author
- Tan, Deborah H and Jiang, Yuhong V
- Subjects
- COLLECTIVE memory; VISUAL perception; EYEWITNESS identification; CRIMINAL investigation; VISUAL memory
- Abstract
Describing what one saw to another person is common in everyday experience, such as spatial navigation and crime investigations. Past studies have examined the effects of recounting on one's own memory, neglecting an important function of memory recall in social communication. Here we report surprisingly low utility of one's verbal descriptions for others, even when visual memory for the stimuli has high capacity. Participants described photographs of common objects they had seen to enable judges to identify the target object from a foil in the same basic-level category. When describing from perception, participants were able to provide useful descriptions, allowing judges to accurately identify the target objects 87% of the time. Judges' accuracy decreased to just 57% when participants provided descriptions from memory acquired minutes ago, and to near chance (51.8%) when the verbal descriptions were based on memory acquired 24 hours ago. Comparison of participants' own identification accuracy with judges' accuracy suggests the presence of a common source of errors. This finding suggests that recall and recognition of visual objects share common memory sources. In addition, the low utility of one's verbal descriptions constrains theories about the extension of one's memory to the external world and has implications for eyewitness identification and laws governing it. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
4. Judging social interaction in the Heider and Simmel movie.
- Author
- Rasmussen, Carly E and Jiang, Yuhong V
- Subjects
- SOCIAL interaction; AUTISM spectrum disorders; SOCIAL perception; YOUNG adults
- Abstract
Simple displays of moving shapes can give rise to percepts of animacy. These films elicit impoverished narratives in some individuals, such as those with an autism spectrum disorder. However, the verbal demand of producing a narrative limits the utility of this task. Non-verbal tasks have so far focused on detecting animate objects. Lacking from previous research is a task that relies less on verbal description yet demands more than animacy perception. Here we present data from a new social interaction judgement task. Healthy young adults viewed the Heider and Simmel movie and pressed one button whenever they perceived social interaction and another button when no social interaction was perceived. We measured the time points at which social judgement began, the fluctuation of the judgement in relation to stimulus kinematic properties, and the overall mean of social judgement. Participants with higher autism traits reported lower levels of social interaction. Reversing the film in time produced lower social interaction judgements, though the temporal profile was preserved. Our study suggests that both low-level motion characteristics and high-level understanding contribute to social interaction judgement. The finding may facilitate future research on other populations and stimulate computational vision work on factors that drive social judgements. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
5. How visual memory changes with intervening recall.
- Author
- Tan, Deborah H and Jiang, Yuhong V
- Subjects
- COLLECTIVE memory; VISUAL memory; MEMORY testing; CRIMINAL investigation; MEMORY
- Abstract
Being asked to recount a visual memory is common in educational settings, spatial navigation, and crime investigation. Previous studies show that recounting one's memory can benefit subsequent memory, but most of this work either used verbal materials or conflated category memory with memory for visual details. To test whether recounting may introduce visually-specific interference effects, we tested people's memory for photographs of objects, but introduced an intervening phase in which people described their memory. We separated memory for the specific exemplar from memory for the basic-level category. Contrary to recent findings on maps and colours, the intervening retrieval practice did not consistently strengthen exemplar memory of objects. Instead, recounting one's visual memory appeared to introduce interference that sometimes cancelled the benefit of increased retrieval effort. Delaying the final memory test by 24 hr increased the benefit of retrieval practice. These findings suggest that intervening retrieval has multiple effects on visual memory. Instead of being a snapshot, this memory constantly changes with retrieval practice and with time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
6. Investigating the role of response in spatial context learning.
- Author
- Makovski, Tal and Jiang, Yuhong V.
- Subjects
- CONTEXT effects (Psychology); ACTION theory (Psychology); VISUAL perception; VISUAL learning; TOUCH; EYE movements; MOTOR ability
- Abstract
Recent research has shown that simple motor actions, such as pointing or grasping, can modulate the way we perceive and attend to our visual environment. Here we examine the role of action in spatial context learning. Previous studies using keyboard responses have revealed that people are faster at locating a target on repeated visual search displays ('contextual cueing'). However, this learning appears to depend on the task and response requirements. In Experiment 1, participants searched for a T-target among L-distractors and responded either by pressing a key or by touching the screen. Comparable contextual cueing was found in both response modes. Moreover, learning transferred between keyboard and touch screen responses. Experiment 2 showed that learning occurred even for repeated displays that required no response, and this learning was as strong as learning for displays that required a response. Learning on no-response trials cannot be accounted for by oculomotor responses, as learning was observed when eye movements were discouraged (Experiment 3). We suggest that spatial context learning is abstracted from motor actions. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
7. Contextual cost: When a visual-search target is not where it should be.
- Author
- Makovski, Tal and Jiang, Yuhong V.
- Subjects
- CONTEXT effects (Psychology); PRIMING (Psychology); PAIRED associate learning; EXPERIMENTAL psychology; DISTRACTION
- Abstract
Visual search is often facilitated when the search display occasionally repeats, revealing a contextual-cueing effect. According to the associative-learning account, contextual cueing arises from associating the display configuration with the target location. However, recent findings emphasizing the importance of local context near the target have given rise to the possibility that low-level repetition priming may account for the contextual-cueing effect. This study distinguishes associative learning from local repetition priming by testing whether search is directed toward a target's expected location, even when the target is relocated. After participants searched for a T among Ls in displays that repeated 24 times, they completed a transfer session where the target was relocated locally to a previously blank location (Experiment 1) or to an adjacent distractor location (Experiment 2). Results revealed that contextual cueing decreased as the target appeared farther away from its expected location, ultimately resulting in a contextual cost when the target swapped locations with a local distractor. We conclude that target predictability is a key factor in contextual cueing. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
8. Attention dependency in implicit learning of repeated search context.
- Author
- Rausei, Valeria, Makovski, Tal, and Jiang, Yuhong V.
- Subjects
- PSYCHOLOGY of learning; MEMORY; IMPLICIT learning; AWARENESS; PSYCHOLOGY
- Abstract
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examine the degree of attentional dependency in implicit learning of repeated visual search context. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to the target. We found that the size of contextual cueing was comparable for repetition of the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that beyond a minimal amount, further increase in attentional dwell time does not contribute significantly to implicit learning of repeated search context. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
9. Inhibition Accompanies Reference-Frame Selection.
- Author
- Carlson-Radvansky, Laura A. and Jiang, Yuhong
- Subjects
- CELESTIAL reference systems; RESPONSE inhibition
- Abstract
Presents a study that analyzed reference-frame selection to determine the role of inhibition, using a negative-priming paradigm. Definition of spatial relations; Suggestion that reference-frame selection seems to be independent of object selection; Reference to the simultaneous activation of multiple reference frames; Results of the study.
- Published
- 1998
- Full Text
- View/download PDF
10. Surprise! An unexpected color singleton does not capture attention in visual search.
- Author
- Gibson, Bradley S. and Jiang, Yuhong
- Subjects
- VISION; SCIENTIFIC experimentation
- Abstract
Provides the findings of an experimental study in which a visual search task was used to investigate whether visual selective attention could be captured in a purely stimulus-driven fashion by an initial encounter with a color singleton. What the use of visual search for a color singleton revealed; Methodology used to conduct the experiments; Results of the experiments; What the findings suggest.
- Published
- 1998
- Full Text
- View/download PDF
11. Attitude controller design for the aerial trees-pruning robot based on nonsingular fast terminal sliding mode.
- Author
- Zhang, Qiuyan, Yang, Zhong, Wang, Shaohui, Jiang, Yuhong, Xu, Changliang, Xu, Hao, and Xu, Xiangrong
- Subjects
- SPACE robotics; ROBOTS; PROBLEM solving
- Abstract
In this article, the attitude control problem of a newly designed aerial trees-pruning robot is addressed. During the tree-cutting process, the aerial trees-pruning robot is disturbed by unknown external disturbances. At the same time, model uncertainties also affect the attitude controller. To overcome these problems, an attitude controller is designed with a nonsingular fast terminal sliding mode method. First, an extended state observer is designed to estimate the modeling uncertainties and unknown disturbances. Then, the extended state observer-based nonsingular fast terminal sliding mode controller makes the attitude tracking error converge to zero in finite time. Finally, a control allocation matrix switching strategy is proposed to handle the change in the aerial robot's model during the cutting process. Simulation and experimental results show that the extended state observer-based nonsingular fast terminal sliding mode controller designed in this article has good attitude control performance and can effectively overcome the modeling uncertainties and unknown disturbances. The attitude controller and control allocation matrix switching strategy ensure that the attitude angles of the aerial robot quickly track the reference signals. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
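The control pipeline the abstract of item 11 describes, an extended state observer (ESO) whose disturbance estimate feeds a nonsingular fast terminal sliding mode (NFTSM) law, can be sketched for a single second-order attitude channel. Everything below (the plant model, the sinusoidal disturbance, the gains, and the surface exponents) is an illustrative assumption, not the article's actual design, and the control allocation matrix switching is omitted:

```python
import math

def sgn(x):
    """Sign function used by the sliding surface."""
    return (x > 0) - (x < 0)

def simulate(T=8.0, dt=1e-3):
    # Assumed plant: one attitude channel, theta_ddot = u + d(t),
    # with d(t) an unknown lumped disturbance (e.g. cutting reaction torque).
    x1, x2 = 0.5, 0.0              # attitude error (rad) and rate (rad/s)
    z1, z2, z3 = x1, 0.0, 0.0      # linear ESO states; z3 estimates d(t);
                                   # z1 starts at the measured angle to avoid peaking
    # ESO gains from triple pole placement at -w0 (illustrative bandwidth).
    w0 = 20.0
    l1, l2, l3 = 3*w0, 3*w0**2, w0**3
    # NFTSM surface s = e + k1|e|^a sgn(e) + k2|e'|^b sgn(e'), with 1 < b < 2 < a,
    # so the equivalent control below never raises |e'| to a negative power.
    k1, k2, a, b = 1.0, 1.0, 2.5, 1.5
    ks, eta = 5.0, 0.5             # reaching-law gains
    for i in range(int(T / dt)):
        t = i * dt
        d = 0.3 * math.sin(2 * t)  # disturbance, unknown to the controller
        e, edot = x1, x2           # reference attitude is zero
        s = e + k1*abs(e)**a*sgn(e) + k2*abs(edot)**b*sgn(edot)
        # Equivalent control from s_dot = 0; exponent 2-b > 0 keeps it nonsingular.
        u_eq = -(abs(edot)**(2.0-b) / (k2*b)) * (1.0 + k1*a*abs(e)**(a-1)) * sgn(edot)
        # ESO-compensated NFTSM law; tanh is a smooth sign to limit chattering.
        u = u_eq - z3 - ks*s - eta*math.tanh(s / 0.01)
        # ESO update (forward Euler)
        err = x1 - z1
        z1 += dt * (z2 + l1*err)
        z2 += dt * (z3 + u + l2*err)
        z3 += dt * (l3*err)
        # Plant update (forward Euler)
        x1 += dt * x2
        x2 += dt * (u + d)
    return x1, x2, z3

theta, rate, dhat = simulate()
```

Under these assumed gains the attitude error settles near zero despite the sinusoidal disturbance, because z3 cancels most of d(t) while the sliding term handles the residual; this matches the abstract's claim in spirit only, not in detail.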