4,985 results for "Saliency"
Search Results
2. Trans-saccadic integration for object recognition peters out with pre-saccadic object eccentricity as target-directed saccades become more saliency-driven
- Author
- Liang, Junhao and Zhaoping, Li
- Published
- 2025
- Full Text
- View/download PDF
3. Multi-band image fusion via perceptual framework and multiscale texture saliency
- Author
- Liu, Zhihao, Jin, Weiqi, Sheng, Dian, and Li, Li
- Published
- 2025
- Full Text
- View/download PDF
4. Saliency Based Data Augmentation for Few-Shot Video Action Recognition
- Author
- Kong, Yongqiang, Wang, Yunhong, Li, Annan, Ide, Ichiro, editor, Kompatsiaris, Ioannis, editor, Xu, Changsheng, editor, Yanai, Keiji, editor, Chu, Wei-Ta, editor, Nitta, Naoko, editor, Riegler, Michael, editor, and Yamasaki, Toshihiko, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Visualizing and Generalizing Integrated Attributions
- Author
- Payne, Ethan, Patrick, David, Fernandez, Amanda S., Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
6. Explaining Model Parameters Using the Product Space
- Author
- Payne, Ethan, Patrick, David, Fernandez, Amanda S., Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Towards Explainable Deep Learning for Non-melanoma Skin Cancer Diagnosis
- Author
- Le Van, Anh, Verspoor, Karin, Kirk, Thomas Brett, Song, Andy, Gong, Mingming, editor, Song, Yiliao, editor, Koh, Yun Sing, editor, Xiang, Wei, editor, and Wang, Derui, editor
- Published
- 2025
- Full Text
- View/download PDF
8. Focus on Subtle Actions: Semantic and Saliency Knowledge Co-Propagation Method for Weakly-Supervised Temporal Action Localization
- Author
- Dang, Yuanjie, Shou, Haoyu, Chen, Peng, Gao, Nan, Huan, Ruohong, Zhang, Yilong, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
9. Cross-cultural differences in attention: An investigation through computational modelling
- Author
- Mavritsaki, Eirini, Chua, Stephanie, Allen, Harriet A., and Rentzelas, Panagiotis
- Published
- 2025
- Full Text
- View/download PDF
10. Saliency and Anomaly: Transition of Concepts from Natural Images to Side-Scan Sonar Images
- Author
- Kapetanović, Nadir, Mišković, Nikola, and Tahirović, Adnan
- Published
- 2020
- Full Text
- View/download PDF
11. Saliency Response in Superior Colliculus at the Future Saccade Goal Predicts Fixation Duration during Free Viewing of Dynamic Scenes.
- Author
- Heeman, Jessica, White, Brian J., Van der Stigchel, Stefan, Theeuwes, Jan, Itti, Laurent, and Munoz, Douglas P.
- Subjects
- *GAZE, *RHESUS monkeys, *EYE movements, *MESENCEPHALON, *NEURONS, *SUPERIOR colliculus
- Abstract
Eye movements in daily life occur in rapid succession and often without a predefined goal. Using a free viewing task, we examined how fixation duration prior to a saccade correlates to visual saliency and neuronal activity in the superior colliculus (SC) at the saccade goal. Rhesus monkeys (three male) watched videos of natural, dynamic, scenes while eye movements were tracked and, simultaneously, neurons were recorded in the superficial and intermediate layers of the superior colliculus (SCs and SCi, respectively), a midbrain structure closely associated with gaze, attention, and saliency coding. Saccades that were directed into the neuron's receptive field (RF) were extrapolated from the data. To interpret the complex visual input, saliency at the RF location was computed during the pre-saccadic fixation period using a computational saliency model. We analyzed if visual saliency and neural activity at the saccade goal predicted pre-saccadic fixation duration. We report three major findings: (1) Saliency at the saccade goal inversely correlated with fixation duration, with motion and edge information being the strongest predictors. (2) SC visual saliency responses in both SCs and SCi were inversely related to fixation duration. (3) SCs neurons, and not SCi neurons, showed higher activation for two consecutive short fixations, suggestive of concurrent saccade processing during free viewing. These results reveal a close correspondence between visual saliency, SC processing, and the timing of saccade initiation during free viewing and are discussed in relation to their implication for understanding saccade initiation during real-world gaze behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
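The headline analysis in the entry above is a correlation between model saliency at the saccade goal and the duration of the preceding fixation. A minimal sketch with synthetic stand-in data (the study's actual data and saliency model are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in data: per-saccade model saliency at the goal and fixation duration (ms)
saliency_at_goal = rng.random(500)
fixation_ms = 400 - 150 * saliency_at_goal + rng.normal(0, 40, size=500)

# The reported effect is an inverse correlation: higher goal saliency,
# shorter pre-saccadic fixation
rho, p = stats.spearmanr(saliency_at_goal, fixation_ms)
print(f"Spearman rho = {rho:.2f} (p = {p:.2g})")  # expected: negative rho
```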
12. Can vaccination intentions against COVID-19 be nudged?
- Author
- Kantorowicz-Reznichenko, Elena, Kantorowicz, Jaroslaw, and Wells, Liam
- Subjects
- *NUDGE theory, *RISK perception, *SOCIAL norms, *POLITICAL doctrines, *VACCINATION, *CONSPIRACY theories, *IDEOLOGY
- Abstract
Once vaccines against COVID-19 became available in many countries, a new challenge emerged – how to increase the number of people who vaccinate. Different policies are being considered and implemented, including behaviourally informed interventions (i.e., nudges). In this study, we experimentally examined two types of nudges on representative samples of two countries – descriptive social norms (Israel) and saliency of either the death experience from COVID-19 or its symptoms (UK). To increase the legitimacy of nudges, we also examined the effectiveness of transparent nudges, where the goal of the nudge and the reasons for its implementation (expected effectiveness) were disclosed. We did not find evidence that informing people that the vast majority of their fellow citizens intend to vaccinate enhanced vaccination intentions in Israel. We also did not find evidence that making the death experience from COVID-19, or its hard symptoms, salient enhanced vaccination intentions in the UK. Transparent nudges likewise did not change the results. We further provide evidence for the reasons why people choose not to vaccinate, and whether factors such as gender, belief in conspiracy theories, political ideology, and risk perception play a role in people's intentions to vaccinate or susceptibility to nudges. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
13. Positivity Bias and Cultural Differences in Acquiring Haihao in Chinese as a Second Language.
- Author
- Chen, Chun-Yin Doris and Lu, Pin-Yu Ruby
- Subjects
- CHINESE as a second language, CHINESE people, CULTURAL prejudices, NATIVE language, CHINESE language
- Abstract
This study examines how Chinese as a Second Language (CSL) learners acquire the Chinese stance marker haihao with a focus on type and saliency. A total of 56 participants took part in the research, including 28 English-speaking CSL learners and 28 native Chinese speakers. The study utilized two evaluation judgment tasks. Results showed that participants categorized haihao into two simplified groups, guided by the economy principle and a positivity bias. English-speaking learners, influenced by a stronger positivity bias, tended to select more positive options, while Chinese participants favored slightly negative ones. Saliency improved the accuracy of recognizing negative haihao among American learners and low positive haihao among Chinese participants, though it was less effective for ambiguous expressions. These findings highlight how cultural differences and language saliency impact the interpretation of stance markers, offering insights for improving CSL teaching strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Part segmentation method of point cloud considering optimal allocation and optimal mask based on deep learning.
- Author
- Chen, Xijiang, Sun, Xi, Zhao, Bufan, Tan, Dongmei, and Wu, Chong
- Subjects
- *POINT cloud, *DATA augmentation, *POINT set theory, *ALLOCATION (Accounting), *NEIGHBORHOODS
- Abstract
In order to enhance the generalization ability of the network and improve the precision of part segmentation, a point cloud part segmentation method that takes into account the optimal allocation and the optimal mask is proposed. Firstly, the optimal allocation between two point clouds is defined according to the earth mover's distance. Then the farthest point sampling is used to group the point cloud, and the saliency of each point in the group is calculated. Finally, a new mixed sample is generated by replacing a partial subset of one point cloud sample with a local neighborhood from another. The method was verified on the ShapeNet dataset, and the augmented data were fed to the PointNet, PointNet++ and DGCNN models; mIoU increased from 83.7%, 85.1% and 85.1% to 84.6%, 85.9% and 85.7%, respectively, effectively improving part segmentation. [ABSTRACT FROM AUTHOR] (A code sketch of the mixing step follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
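As referenced in the abstract above, the mixing step can be sketched roughly as follows. The per-point saliency here is a simple stand-in (distance to the centroid), not the paper's measure, and the group count is illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def farthest_point_sampling(points, k):
    """Indices of k well-spread points, used as group centers."""
    idx = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[-1]], axis=1))
    return np.array(idx)

def mix_point_clouds(a, b, n_groups=8):
    # (1) Optimal one-to-one allocation between the two clouds:
    # an EMD-style matching on pairwise Euclidean cost
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)
    b_aligned = b[col]                       # b reordered to match a point-for-point

    # (2) Group the points of `a` around farthest-point-sampled centers
    centers = a[farthest_point_sampling(a, n_groups)]
    group = np.linalg.norm(a[:, None, :] - centers[None, :, :], axis=-1).argmin(axis=1)

    # (3) Stand-in per-point saliency (distance to centroid); the paper
    # computes its own saliency measure at this step
    sal = np.linalg.norm(a - a.mean(axis=0), axis=1)
    group_sal = np.array([sal[group == g].mean() for g in range(n_groups)])

    # (4) Build the mixed sample: swap the least salient local
    # neighborhood of `a` for the matched subset of `b`
    mixed = a.copy()
    victim = int(group_sal.argmin())
    mixed[group == victim] = b_aligned[group == victim]
    return mixed

mixed = mix_point_clouds(np.random.rand(512, 3), np.random.rand(512, 3))
```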
15. SAL3D: a model for saliency prediction in 3D meshes.
- Author
- Martin, Daniel, Fandos, Andres, Masia, Belen, and Serrano, Ana
- Subjects
- *EYE tracking, *AUGMENTED reality, *VIRTUAL reality, *PREDICTION models, *ATTENTION
- Abstract
Advances in virtual and augmented reality have increased the demand for immersive and engaging 3D experiences. To create such experiences, it is crucial to understand visual attention in 3D environments, which is typically modeled by means of saliency maps. While attention in 2D images and traditional media has been widely studied, there is still much to explore in 3D settings. In this work, we propose a deep learning-based model for predicting saliency when viewing 3D objects, which is a first step toward understanding and predicting attention in 3D environments. Previous approaches rely solely on low-level geometric cues or unnatural conditions, however, our model is trained on a dataset of real viewing data that we have manually captured, which indeed reflects actual human viewing behavior. Our approach outperforms existing state-of-the-art methods and closely approximates the ground-truth data. Our results demonstrate the effectiveness of our approach in predicting attention in 3D objects, which can pave the way for creating more immersive and engaging 3D experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Video Salient Object Detection Via Multi-level Spatiotemporal Bidirectional Network Using Multi-scale Transfer Learning.
- Author
- Sharma, Gaurav, Singh, Maheep, and Berwal, Krishan
- Subjects
- *CONVOLUTIONAL neural networks, *LONG short-term memory, *VIDEOS
- Abstract
Video saliency prediction aims to resemble human visual attention by identifying the most relevant and significant elements in a video frame or sequence. This task becomes notably intricate in scenarios characterized by dynamic elements such as rapid motion, occlusions, blur, background variations, and nonrigid deformations. Therefore, the inherent complexity of human visual attention behavior during dynamic scenes necessitates the assessment of both temporal and spatial data. Existing video saliency frameworks often falter under such conditions, and relying solely on image saliency models neglects crucial temporal information in videos. This study presents a new Video Salient Object Detection via Multi-level Spatiotemporal Bidirectional Network using Multi-scale Transfer Learning (MSB-Net) to address the problem of identifying significant objects in videos. The proposed MSB-Net achieves notable results for a given sequence of frames by employing multi-scale transfer learning with an encoder and decoder approach to acquire knowledge and saliency map attributes spatially and temporally. The proposed MSB-Net model has bidirectional LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) components. The VGG16 (Visual Geometry Group) and VGG19 architectures extract multi-scale features from the input video frames. Evaluation of diverse datasets, namely DAVIS-T, SegTrack-V2, ViSal, VOS-T, and DAVSOD-T, demonstrates the model's effectiveness, outperforming other competitive models based on parameters such as MAE, F-measure, and S-measure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Depth Matters: Spatial Proximity-Based Gaze Cone Generation for Gaze Following in Wild.
- Author
- Liu, Feiyang, Li, Kun, Zhong, Zhun, Jia, Wei, Hu, Bin, Yang, Xun, Wang, Meng, and Guo, Dan
- Subjects
- DEPTH perception, GAZE, SOURCE code, INDIVIDUAL differences, PRIOR learning, DISTRACTION
- Abstract
Gaze following aims to predict where a person is looking in a scene. Existing methods tend to prioritize traditional 2D RGB visual cues or require burdensome prior knowledge and extra expensive datasets annotated in 3D coordinate systems to train specialized modules to enhance scene modeling. In this work, we introduce a novel framework deployed on a simple ResNet backbone, which exclusively uses image and depth maps to mimic human visual preferences and realize 3D-like depth perception. We first leverage depth maps to formulate spatial-based proximity information regarding the objects with the target person. This process sharpens the focus of the gaze cone on the specific region of interest pertaining to the target while diminishing the impact of surrounding distractions. To capture the diverse dependence of scene context on the saliency gaze cone, we then introduce a learnable grid-level regularized attention that anticipates coarse-grained regions of interest, thereby refining the mapping of the saliency feature to pixel-level heatmaps. This allows our model to better account for individual differences when predicting others' gaze locations. Finally, we employ the KL-divergence loss to supervise the grid-level regularized attention, which combines the gaze direction, heatmap regression, and in/out classification losses, providing comprehensive supervision for model optimization. Experimental results on two publicly available datasets demonstrate the comparable performance of our model with less help of modal information. Quantitative visualization results further validate the interpretability of our method. The source code will be available at https://github.com/VUT-HFUT/DepthMatters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Temporal and spatial analysis of event-related potentials in response to color saliency differences among various color vision types.
- Author
- Naoko Takahashi, Masataka Sawayama, Xu Chen, Yuki Motomura, Hiroshige Takeichi, Satoru Miyauchi, and Chihiro Hiramatsu
- Subjects
- COLOR vision, EVOKED potentials (Electrophysiology), COGNITION, STIMULUS & response (Psychology), SAMPLE size (Statistics)
- Abstract
Introduction: Human color vision exhibits significant diversity that cannot be fully explained by categorical classifications. Understanding how individuals with different color vision phenotypes perceive, recognize, and react to the same physical stimuli provides valuable insights into sensory characteristics. This study aimed to identify behavioral and neural differences between different color visions, primarily classified as typical trichromats and anomalous trichromats, in response to two chromatic stimuli, blue-green and red, during an attention demanding oddball task. Methods: We analyzed the P3 component of event-related potentials (ERPs), associated with attention, and conducted a broad spatiotemporal exploration of neural differences. Behavioral responses were also analyzed to complement neural data. Participants included typical trichromats (n = 13) and anomalous trichromats (n = 5), and the chromatic stimuli were presented in an oddball paradigm. Results: Typical trichromats exhibited faster potentiation from the occipital to parietal regions in response to the more salient red stimulus, particularly in the area overlapping with the P3 component. In contrast, anomalous trichromats revealed faster potentiation to the expected more salient blue-green stimulus in the occipital to parietal regions, with no other significant neural differences between stimuli. Comparisons between the color vision types showed no significant overall neural differences. Discussion: The large variability in red-green sensitivity among anomalous trichromats, along with neural variability not fully explained by this sensitivity, likely contributed to the absence of clear neural distinctions based on color saliency. While reaction times were influenced by red-green sensitivity, neural signals showed ambiguity regarding saliency differences. These findings suggest that factors beyond red-green sensitivity influenced neural activity related to color perception and cognition in minority color vision phenotypes. Further research with larger sample sizes is needed to more comprehensively explore these neural dynamics and their broader implications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Charter–manifesto congruence as a signal for issue salience: democratic innovations within political parties in Hungary.
- Author
- Kovarek, Daniel and Oross, Dániel
- Subjects
- *POLITICAL parties, *VOTERS, *DICTATORSHIP, *POLITICAL opposition
- Abstract
This article proposes a proxy for issue salience via studying congruence between parties' organizational structure and the policies they promote to their voters. This charter-manifesto congruence, i.e. institutionalizing the same democratic innovations at the intra-party level and advocating for their adoption at the national level, is particularly useful to demarcate party profiles in dominant party systems and electoral autocracies. To demonstrate how such congruence can be measured, we analyse the credibility of the commitments made by opposition parties in Hungary to a variety of democratic innovations. Drawing on novel survey data, we further substantiate our argument by showing high support for these innovations among party members and voters. The analysis identifies gender quotas as the most congruent innovation. Our longitudinal research design also reveals a recent breakthrough of e-democracy in party manifestos. We then discuss how preferences for various innovations are potentially shaped by membership size, party founders' political socialization or organizational learning. At the normative level, the analysis suggests that some forms of democratic innovations are better suited for intra-party organizations than for party manifestos. These implications might be relevant for those committed to promoting deliberation, gender equality and (re-)engagement of the youth in politics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Patterns of saliency and semantic features distinguish gaze of expert and novice viewers of surveillance footage.
- Author
- Peng, Yujia, Burling, Joseph M., Todorova, Greta K., Neary, Catherine, Pollick, Frank E., and Lu, Hongjing
- Subjects
- *CONVOLUTIONAL neural networks, *FEATURE extraction, *CLOSED-circuit television, *INFORMATION-seeking behavior, *EYE movements, *GAZE
- Abstract
When viewing the actions of others, we not only see patterns of body movements, but we also "see" the intentions and social relations of people. Experienced forensic examiners – Closed Circuit Television (CCTV) operators – have been shown to convey superior performance in identifying and predicting hostile intentions from surveillance footage than novices. However, it remains largely unknown what visual content CCTV operators actively attend to, and whether CCTV operators develop different strategies for active information seeking from what novices do. Here, we conducted computational analysis for the gaze-centered stimuli captured by experienced CCTV operators and novices' eye movements when viewing the same surveillance footage. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted by a deep convolutional neural network (DCNN), AlexNet, from gaze-centered regions. We found that the looking behavior of CCTV operators differs from novices by actively attending to visual contents with different patterns of saliency and semantic features. Expertise in selectively utilizing informative features at different levels of visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
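The feature-extraction step described in the entry above (a pretrained AlexNet applied to gaze-centered regions) can be sketched as follows, assuming PIL video frames and pixel gaze coordinates; the crop size and the fc7-style readout layer are illustrative choices, not the paper's stated settings:

```python
import torch
from torchvision.models import alexnet, AlexNet_Weights

weights = AlexNet_Weights.DEFAULT
model = alexnet(weights=weights).eval()
preprocess = weights.transforms()   # resize/normalize preset for AlexNet

def gaze_centered_features(frame, gaze_xy, half=112):
    """frame: PIL image; gaze_xy: (x, y) in pixels. Returns a 4096-d
    semantic embedding of the patch around the gaze point."""
    x, y = gaze_xy
    patch = frame.crop((x - half, y - half, x + half, y + half))  # pads out of bounds
    batch = preprocess(patch).unsqueeze(0)
    with torch.no_grad():
        z = model.avgpool(model.features(batch)).flatten(1)  # conv feature map
        z = model.classifier[:5](z)                          # stop at fc7-style layer
    return z.squeeze(0)
```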
22. Retinal eccentricity modulates saliency-driven but not relevance-driven visual selection.
- Author
- Donk, Mieke, van Heusden, Elle, and Olivers, Christian N. L.
- Subjects
- *VISUAL perception, *VISUAL fields, *EYE movements
- Abstract
Where we move our eyes during visual search is controlled by the relative saliency and relevance of stimuli in the visual field. However, the visual field is not homogeneous, as both sensory representations and attention change with eccentricity. Here we present an experiment investigating how eccentricity differences between competing stimuli affect saliency- and relevance-driven selection. Participants made a single eye movement to a predefined orientation singleton target that was simultaneously presented with an orientation singleton distractor in a background of multiple homogenously oriented other items. The target was either more or less salient than the distractor. Moreover, each of the two singletons could be presented at one of three different retinal eccentricities, such that both were presented at the same eccentricity, one eccentricity value apart, or two eccentricity values apart. The results showed that selection was initially determined by saliency, followed after about 300 ms by relevance. In addition, observers preferred to select the closer over the more distant singleton, and this central selection bias increased with increasing eccentricity difference. Importantly, it largely emerged within the same time window as the saliency effect, thereby resulting in a net reduction of the influence of saliency on the selection outcome. In contrast, the relevance effect remained unaffected by eccentricity. Together, these findings demonstrate that eccentricity is a major determinant of selection behavior, even to the extent that it modifies the relative contribution of saliency in determining where people move their eyes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Saliency-Guided Sparse Low-Rank Tensor Approximation for Unsupervised Anomaly Detection of Hyperspectral Remote Sensing Images.
- Author
- Du, ZhiGuo, Yang, Lian, and Tang, MingXuan
- Subjects
- *REMOTE sensing, *HYPERSPECTRAL imaging systems, *SPARSE matrices, *NATIONAL security
- Abstract
Hyperspectral anomaly detection can separate sparse anomalies from the low-rank background component under an unsupervised behavior due to sufficient spectral information. Therefore, hyperspectral image anomaly detection technology has great application potential and value in public security and national defense. Currently, most existing models attempt to detect anomalous targets with a sparsity prior, without further considering the visual saliency of the targets themselves. To tackle this issue, this paper proposes a saliency-guided sparse low-rank tensor approximation model, called SSLR, to detect anomalous targets from hyperspectral remote sensing images in an unsupervised manner. Specifically, we first explore the saliency information of each pixel for regularizing the sparse anomaly matrix. We then suggest a three-directional tensor nuclear norm to obtain a low-rank background to characterize the background component. We solve the SSLR optimization problem by an efficient alternating direction method of multipliers framework. Experiments conducted on benchmark hyperspectral datasets demonstrate that the proposed SSLR outperforms some state-of-the-art anomaly detection methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
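Read against the abstract above, SSLR is a weighted low-rank-plus-sparse tensor program. A plausible formalization, with notation that is my reconstruction rather than the paper's exact statement:

$$\min_{\mathcal{B},\,\mathcal{S}}\ \|\mathcal{B}\|_{\mathrm{TNN}} + \lambda\,\|\mathcal{W}\odot\mathcal{S}\|_{1} \quad \text{s.t.}\quad \mathcal{X}=\mathcal{B}+\mathcal{S},$$

where $\mathcal{X}$ is the hyperspectral cube, $\|\cdot\|_{\mathrm{TNN}}$ the three-directional tensor nuclear norm that keeps the background $\mathcal{B}$ low-rank, $\mathcal{S}$ the sparse anomaly tensor, and $\mathcal{W}$ a weight tensor derived from per-pixel saliency (salient pixels are penalized less, so anomalies survive the $\ell_1$ shrinkage). An ADMM solver would alternate singular-value thresholding for $\mathcal{B}$ and weighted soft-thresholding for $\mathcal{S}$ under an augmented-Lagrangian multiplier.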
24. Finding Patterns in the Location of Elements to Increase Readability.
- Author
- Wan, Richard L.
- Subjects
- COGNITIVE neuroscience, SYSTEMS software, GAME theory, GRAPH theory, READABILITY formulas
- Abstract
There are many times when visuals and text will often disrupt each other in a single space and, as a result, make both more difficult to understand. Many methods exist to counteract this, such as placing text and visuals in specific locations in the given space. We have utilized these certain "combinations" to a great extent. Still, when there are special cases where these methods are less effective, finding an optimal combination as a replacement is more difficult. To see other patterns and techniques to assist in locating salient elements, we can use game theory and graph theory, as well as a small amount of cognitive neuroscience, to determine what combinations allow words and visuals to complement each other rather than compete. The research done in this paper is exemplified by bento grids, a recent design trend popularized by Apple. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Top-Down, Bottom-Up Attentional Processing. The Traditional View
- Author
- Wasserman, Theodore and Wasserman, Lori Drucker
- Published
- 2024
- Full Text
- View/download PDF
26. Video Understanding Using 2D-CNNs on Salient Spatio-Temporal Slices
- Author
- Hu, Yaxin, Barth, Erhardt, Wand, Michael, editor, Malinovská, Kristína, editor, Schmidhuber, Jürgen, editor, and Tetko, Igor V., editor
- Published
- 2024
- Full Text
- View/download PDF
27. Ultrasound Image Segmentation via a Multi-scale Salient Network
- Author
- Alblwi, Abdalrahman, Barner, Kenneth E., Finkelstein, Joseph, editor, Moskovitch, Robert, editor, and Parimbelli, Enea, editor
- Published
- 2024
- Full Text
- View/download PDF
28. DSAL-GAN: Denoising Based Saliency Prediction with Generative Adversarial Networks
- Author
- Mukherjee, Prerana, Sharma, Manoj, Makwana, Megh, Singh, Ajay Pratap, Upadhyay, Avinash, Trivedi, Akkshita, Lall, Brejesh, Chaudhury, Santanu, Ghosh, Ashish, editor, King, Irwin, editor, Bhattacharyya, Malay, editor, Sankar Ray, Shubhra, editor, and K. Pal, Sankar, editor
- Published
- 2024
- Full Text
- View/download PDF
29. Evaluating the Faithfulness of Causality in Saliency-Based Explanations of Deep Learning Models for Temporal Colour Constancy
- Author
- Rizzo, Matteo, Conati, Cristina, Jang, Daesik, Hu, Hui, Longo, Luca, editor, Lapuschkin, Sebastian, editor, and Seifert, Christin, editor
- Published
- 2024
- Full Text
- View/download PDF
30. Twilight Zone as Philosophy 101
- Author
- Marinucci, Mimi, Kowalski, Dean A., editor, Lay, Chris, editor, S. Engels, Kimberly, editor, and Johnson, David Kyle, Editor-in-Chief
- Published
- 2024
- Full Text
- View/download PDF
31. Visual Mesh Quality Assessment Using Weighted Network Representation
- Author
- El Hassouni, Mohammed, Cherifi, Hocine, Cherifi, Hocine, editor, Rocha, Luis M., editor, Cherifi, Chantal, editor, and Donduran, Murat, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Non-uniform Sampling-Based Breast Cancer Classification
- Author
- Posso Murillo, Santiago, Skean, Oscar, Sanchez Giraldo, Luis G., Cao, Xiaohuan, editor, Xu, Xuanang, editor, Rekik, Islem, editor, Cui, Zhiming, editor, and Ouyang, Xi, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Deep multimodal predictome for studying mental disorders
- Author
- Rahaman, Abdur, Chen, Jiayu, Fu, Zening, Lewis, Noah, Iraji, Armin, Erp, Theo GM, and Calhoun, Vince D
- Subjects
- Biological Psychology, Psychology, Schizophrenia, Serious Mental Illness, Basic Behavioral and Social Science, Brain Disorders, Mental Health, Neurosciences, Genetics, Behavioral and Social Science, Mental health, Good Health and Well Being, Humans, Magnetic Resonance Imaging, Neuroimaging, Mental Disorders, Neural Networks, Computer, functional network connectivity, multimodal deep learning, resting-state functional and structural MRI, saliency, schizophrenia classification, single nucleotide polymorphism, Cognitive Sciences, Experimental Psychology, Biological psychology, Cognitive and computational psychology
- Abstract
Characterizing neuropsychiatric disorders is challenging due to heterogeneity in the population. We propose combining structural and functional neuroimaging and genomic data in a multimodal classification framework to leverage their complementary information. Our objectives are two-fold (i) to improve the classification of disorders and (ii) to introspect the concepts learned to explore underlying neural and biological mechanisms linked to mental disorders. Previous multimodal studies have focused on naïve neural networks, mostly perceptron, to learn modality-wise features and often assume equal contribution from each modality. Our focus is on the development of neural networks for feature learning and implementing an adaptive control unit for the fusion phase. Our mid fusion with attention model includes a multilayer feed-forward network, an autoencoder, a bi-directional long short-term memory unit with attention as the features extractor, and a linear attention module for controlling modality-specific influence. The proposed model acquired 92% (p
- Published
- 2023
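The "linear attention module for controlling modality-specific influence" described in the entry above can be pictured as a learned softmax weighting over modality embeddings. A toy sketch, with dimensions and names that are my assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class LinearAttentionFusion(nn.Module):
    """Score each modality embedding, then fuse by softmax-weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar relevance score per modality

    def forward(self, modality_embeddings):
        # modality_embeddings: (batch, n_modalities, dim)
        w = torch.softmax(self.score(modality_embeddings), dim=1)  # (B, M, 1)
        return (w * modality_embeddings).sum(dim=1)                # (B, dim)

# e.g. sMRI, fMRI-timecourse, and SNP encoders each emitting a 128-d embedding
fusion = LinearAttentionFusion(dim=128)
fused = fusion(torch.randn(4, 3, 128))  # -> (4, 128)
```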
34. Positivity Bias and Cultural Differences in Acquiring Haihao in Chinese as a Second Language
- Author
- Chun-Yin Doris Chen and Pin-Yu Ruby Lu
- Subjects
- polysemy, stance marker, positive bias, saliency, Chinese as a second language, Language and Literature
- Abstract
This study examines how Chinese as a Second Language (CSL) learners acquire the Chinese stance marker haihao with a focus on type and saliency. A total of 56 participants took part in the research, including 28 English-speaking CSL learners and 28 native Chinese speakers. The study utilized two evaluation judgment tasks. Results showed that participants categorized haihao into two simplified groups, guided by the economy principle and a positivity bias. English-speaking learners, influenced by a stronger positivity bias, tended to select more positive options, while Chinese participants favored slightly negative ones. Saliency improved the accuracy of recognizing negative haihao among American learners and low positive haihao among Chinese participants, though it was less effective for ambiguous expressions. These findings highlight how cultural differences and language saliency impact the interpretation of stance markers, offering insights for improving CSL teaching strategies.
- Published
- 2024
- Full Text
- View/download PDF
35. Subjectively salient faces differ from emotional faces: ERP evidence
- Author
- Żochowska, Anna and Nowicka, Anna
- Published
- 2024
- Full Text
- View/download PDF
36. Potsdam data set of eye movement on natural scenes (DAEMONS).
- Author
- Schwetlick, Lisa, Kümmerer, Matthias, Bethge, Matthias, and Engbert, Ralf
- Subjects
- EYE movements, GAZE, ARTIFICIAL neural networks, DATA modeling, COGNITIVE science, COMPUTER vision, HUMAN mechanics
- Abstract
The article "Potsdam data set of eye movement on natural scenes (DAEMONS)" emphasizes the significance of high-quality, openly accessible data sets for studying eye movement behavior. The authors have compiled and released a substantial data set of eye tracking data on 2,400 color photographs of natural scenes. This data set, called DAEMONS, includes annotations and serves as a benchmark for scan path modeling and spatial saliency prediction. The article provides comprehensive details about the stimulus material, image annotations, participants, and experimental setup. The study was conducted ethically, and the authors express gratitude to the individuals and organizations involved in the research. The dataset, along with the stimulus images and eye tracking experiment data, is available in online repositories. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
37. Pathways for Naturalistic Looking Behavior in Primate II. Superior Colliculus Integrates Parallel Top-down and Bottom-up Inputs.
- Author
- Veale, Richard and Takahashi, Mayu
- Subjects
- *SUPERIOR colliculus, *NEURAL pathways, *GAZE, *CEREBRAL cortex, *SPINAL cord, *BASAL ganglia
- Abstract
Highlights: Volitional signals for gaze control provided by multiple parallel pathways to the superior colliculus • Interaction of sensory and task-related signals within the multi-layered superior colliculus • Convergence of bottom-up (world statistics) and top-down (goal and task) signals in the SC for gaze control • Models of attention such as the saliency map and their physiological basis • Cerebral and subcortical inputs to control output signals for gaze in the SC.
Volitional signals for gaze control are provided by multiple parallel pathways converging on the midbrain superior colliculus (SC), whose deeper layers output to the brainstem gaze circuits. In the first of two papers (Takahashi and Veale, 2023), we described the properties of gaze behavior of several species under both laboratory and natural conditions, as well as the current understanding of the brainstem and spinal cord circuits implementing gaze control in primate. In this paper, we review the parallel pathways by which sensory and task information reaches SC and how these sensory and task signals interact within SC's multilayered structure. This includes both bottom-up (world statistics) signals mediated by sensory cortex, association cortex, and subcortical structures, as well as top-down (goal and task) influences which arrive via either direct excitatory pathways from cerebral cortex, or via indirect basal ganglia relays resulting in inhibition or dis-inhibition as appropriate for alternative behaviors. Models of attention such as saliency maps serve as convenient frameworks to organize our understanding of both the separate computations of each neural pathway, as well as the interaction between the multiple parallel pathways influencing gaze. While the spatial interactions between gaze's neural pathways are relatively well understood, the temporal interactions between and within pathways will be an important area of future study, requiring both improved technical methods for measurement and improvement of our understanding of how temporal dynamics results in the observed spatiotemporal allocation of gaze. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Saliency-Guided Point Cloud Compression for 3D Live Reconstruction.
- Author
- Ruiu, Pietro, Mascia, Lorenzo, and Grosso, Enrico
- Subjects
- POINT cloud, DATA transmission systems, VIRTUAL reality, REALITY television programs, DATA compression, USER experience, TELEROBOTICS, HEAD-mounted displays, USER-generated content
- Abstract
3D modeling and reconstruction are critical to creating immersive XR experiences, providing realistic virtual environments, objects, and interactions that increase user engagement and enable new forms of content manipulation. Today, 3D data can be easily captured using off-the-shelf, specialized headsets; very often, these tools provide real-time, albeit low-resolution, integration of continuously captured depth maps. This approach is generally suitable for basic AR and MR applications, where users can easily direct their attention to points of interest and benefit from a fully user-centric perspective. However, it proves to be less effective in more complex scenarios such as multi-user telepresence or telerobotics, where real-time transmission of local surroundings to remote users is essential. Two primary questions emerge: (i) what strategies are available for achieving real-time 3D reconstruction in such systems? and (ii) how can the effectiveness of real-time 3D reconstruction methods be assessed? This paper explores various approaches to the challenge of live 3D reconstruction from typical point cloud data. It first introduces some common data flow patterns that characterize virtual reality applications and shows that achieving high-speed data transmission and efficient data compression is critical to maintaining visual continuity and ensuring a satisfactory user experience. The paper thus introduces the concept of saliency-driven compression/reconstruction and compares it with alternative state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Region-based feature combination for robust salient object detection.
- Author
- Singh, Vivek Kumar, Kumar, Nitin, and Nand, Parma
- Abstract
The diversity of natural images in terms of visual features is useful in saliency detection. The complementary visual features jointly improve the performance of salient object detection. In this paper, we introduce a novel region-based feature combination approach that utilizes the diversity of visual features over image regions for robust salient object detection. The proposed approach works in four steps: (i) region formation, (ii) feature extraction, (iii) region-wise weight learning and (iv) region-based feature combination. Region formation is carried out using the simple linear iterative clustering (SLIC) algorithm. Then, the features are extracted using Boundary Connectivity (BC), Contrast Cluster (CC), and Minimum Directional Contrast (MDC) methods. These features are then used for learning weight vectors for each region. Our major contribution is in step four, where a novel dynamic weighted feature combination method is proposed. In this step, region-wise integration weights are obtained using a nature-inspired optimization algorithm called Constrained Particle Swarm Optimization (CPSO). The salient features are then combined region-wise with their dynamic relevance for the final saliency map. The proposed method is compared with eight state-of-the-art saliency detection methods on five publicly available saliency benchmark datasets, namely MSRA10K, DUT-OMRON, ECSSD, PASCAL, and SED2. The experimental results demonstrate that the proposed method performs better than state-of-the-art methods in terms of Precision, Recall, F-measure and Mean Absolute Error, while remaining comparable in terms of AUC and ROC curve. [ABSTRACT FROM AUTHOR] (A condensed sketch of steps (i) and (iv) follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
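As noted in the abstract above, steps (i) and (iv) can be condensed into a short sketch: SLIC regions plus a per-region weighted combination of precomputed feature maps (the BC, CC, MDC maps are assumed given, and the uniform weight matrix is a placeholder for what CPSO would learn):

```python
import numpy as np
from skimage.segmentation import slic

def region_weighted_fusion(image, feature_maps, weights=None, n_segments=200):
    """image: HxWx3 float array; feature_maps: list of K HxW maps
    (e.g. BC, CC, MDC); weights: (n_regions, K) per-region fusion weights."""
    labels = slic(image, n_segments=n_segments, start_label=0)
    regions = np.unique(labels)
    stack = np.stack(feature_maps, axis=-1)          # H x W x K
    if weights is None:                              # placeholder for CPSO output
        weights = np.full((len(regions), stack.shape[-1]), 1.0 / stack.shape[-1])
    out = np.zeros(image.shape[:2])
    for i, r in enumerate(regions):
        mask = labels == r
        out[mask] = stack[mask] @ weights[i]         # region-wise combination
    return out / (out.max() + 1e-12)                 # normalized saliency map

img = np.random.rand(120, 160, 3)
maps = [np.random.rand(120, 160) for _ in range(3)]  # stand-ins for BC, CC, MDC
saliency = region_weighted_fusion(img, maps)
```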
40. Detection of a temporal salient object benefits from visual stimulus‐specific adaptation in avian midbrain inhibitory nucleus.
- Author
- WANG, Jiangtao, RAO, Xiaoping, HUANG, Shuman, WANG, Zhizhong, NIU, Xiaoke, ZHU, Minjie, WANG, Songwei, and SHI, Li
- Subjects
- *VISUAL accommodation, *MESENCEPHALON, *VISUAL perception, *SELECTIVITY (Psychology), *PIGEONS
- Abstract
Food and predators are the most noteworthy objects for the basic survival of wild animals, and both are often deviant in both spatial and temporal domains and quickly attract an animal's attention. Although stimulus‐specific adaptation (SSA) is considered a potential neural basis of salient sound detection in the temporal domain, related research on visual SSA is limited and its relationship with temporal saliency is uncertain. The avian nucleus isthmi pars magnocellularis (Imc), which is central to midbrain selective attention network, is an ideal site to investigate the neural correlate of visual SSA and detection of a salient object in the time domain. Here, the constant order paradigm was applied to explore the visual SSA in the Imc of pigeons. The results showed that the firing rates of Imc neurons gradually decrease with repetitions of motion in the same direction, but recover when a motion in a deviant direction is presented, implying visual SSA to the direction of a moving object. Furthermore, enhanced response for an object moving in other directions that were not presented ever in the paradigm is also observed. To verify the neural mechanism underlying these phenomena, we introduced a neural computation model involving a recoverable synaptic change with a "center‐surround" pattern to reproduce the visual SSA and temporal saliency for the moving object. These results suggest that the Imc produces visual SSA to motion direction, allowing temporal salient object detection, which may facilitate the detection of the sudden appearance of a predator. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
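A toy model can qualitatively reproduce the adaptation pattern described in the entry above: direction-tuned channels whose synaptic efficacy depresses with use and recovers between trials. The Gaussian spread below is a simplification of the paper's "center-surround" synaptic pattern, and all parameters are illustrative:

```python
import numpy as np

dirs = np.arange(0, 360, 45)           # 8 motion-direction channels
eff = np.ones(len(dirs))               # recoverable synaptic efficacy per channel

def present(direction, depress=0.15, sigma=45.0, recover=0.05):
    eff[:] += recover * (1.0 - eff)                  # slow recovery toward baseline
    d = np.abs(dirs - direction)
    d = np.minimum(d, 360 - d)                       # circular distance (degrees)
    drive = np.exp(-d**2 / (2 * sigma**2))           # direction tuning of the stimulus
    response = float((eff * drive).sum())            # population response
    eff[:] -= depress * drive * eff                  # use-dependent depression (SSA)
    return response

first = present(0)
for _ in range(9):
    adapted = present(0)              # repeated standard: response declines
deviant = present(180)                # deviant direction: response recovers
print(f"first {first:.2f} -> adapted {adapted:.2f}, deviant {deviant:.2f}")
```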
41. Dense and uniform displays facilitate the detection of salient targets.
- Author
- Kerzel, Dirk and Constant, Martin
- Abstract
Increasing the density or uniformity of nontarget stimuli appears to increase the saliency of singleton stimuli. Consequently, search times should be shorter. Surprisingly, however, effects of density or uniformity on search times were not always observed in detection tasks. We re-examined this finding with stimuli having two features, color and shape. Half of the participants indicated the presence or absence of a color singleton, and the other half indicated the presence or absence of a shape singleton. Density was changed by increasing the number of stimuli from 4 to 10. We found that the effects of density were either limited to target-absent trials or to target-present trials, which may explain previous failures to observe these effects. When color was the target feature, we found shorter RTs to dense than sparse displays on target-absent trials, but no difference on target-present trials. When shape was the target feature, it was the opposite. Concerning the uniformity of the nontargets, we found shorter RTs with uniform than mixed displays and this difference was larger on target-absent than target-present trials. These results are mostly consistent with the Guided Search Model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Cosine modulated filter bank‐based architecture for extracting and fusing saliency features.
- Author
- Ali, Md. Yousuf, Jiang, Bin, Chowdhury, Oindrila, Harun-Ar-Rashid, Md., Hossain, M. Shamim, and AlMutib, Khalid
- Subjects
- *CONTENT-based image retrieval, *COMPUTER vision, *FILTER banks, *FEATURE extraction, *IMAGE segmentation
- Abstract
Many academics are interested in content-based image retrieval techniques like image segmentation. In computer vision, image segmentation is the most popular method for partitioning a digital image into different parts. We assigned the artificially intelligent algorithm to the image's critical areas by modeling human features in specific regions. In order to detect the object and identify the key parts in the 'RGB' photographs, we combined scenes based on a colour and depth map, or 'RGB-D', and used a cosine modulated filter bank (CMFB), which conducts cross-scale extraction of joint features from the images during feature extraction. The proposed 'CMFB' combines the discovered collaborative elements with the discovered supplementary data. The features in multi-scale images are combined using fusion blocks (FB) with the goal of producing additional features. Then, a saliency map calculation is made for the loss linked to the two blocks. The proposed 'CMFB' is tested on five data sets, and it is shown that it outperforms other conventional techniques. [ABSTRACT FROM AUTHOR] (The standard cosine-modulation formula follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
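For context, a cosine-modulated filter bank derives all $M$ analysis filters from a single $N$-tap lowpass prototype $p[n]$; the textbook pseudo-QMF modulation (not necessarily the paper's exact bank) is

$$h_k[n] = p[n]\,\cos\!\left(\frac{\pi}{M}\Big(k+\tfrac{1}{2}\Big)\Big(n-\tfrac{N-1}{2}\Big)+(-1)^{k}\frac{\pi}{4}\right),\qquad k=0,\dots,M-1,$$

where the $(-1)^k\pi/4$ phase makes aliasing from adjacent subbands cancel. Applied to image rows and columns, the subband responses give the multi-scale features from which saliency features can be pooled and fused.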
43. Inferring Attention Shifts for Salient Instance Ranking.
- Author
- Siris, Avishek, Jiao, Jianbo, Tam, Gary K. L., Xie, Xianghua, and Lau, Rynson W. H.
- Subjects
- *DEEP learning, *VISUAL perception, *PROCESS capability, *SELECTIVITY (Psychology), *ATTENTION
- Abstract
The human visual system has limited capacity in simultaneously processing multiple visual inputs. Consequently, humans rely on shifting their attention from one location to another. When viewing an image of complex scenes, psychology studies and behavioural observations show that humans prioritise and sequentially shift attention among multiple visual stimuli. In this paper, we propose to predict the saliency rank of multiple objects by inferring human attention shift. We first construct a new large-scale salient object ranking dataset, with the saliency rank of objects defined by the order that an observer attends to these objects via attention shift. We then propose a new deep learning-based model to leverage both bottom-up and top-down attention mechanisms for saliency rank prediction. Our model includes three novel modules: Spatial Mask Module (SMM), Selective Attention Module (SAM) and Salient Instance Edge Module (SIEM). SMM integrates bottom-up and semantic object properties to enhance contextual object features, from which SAM learns the dependencies between object features and image features for saliency reasoning. SIEM is designed to improve segmentation of salient objects, which helps further improve their rank predictions. Experimental results show that our proposed network achieves state-of-the-art performances on the salient object ranking task across multiple datasets. Code and data are available at https://github.com/SirisAvishek/Attention_Shift_Ranks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Investor attention and stock price efficiency: Evidence from quasi‐natural experiments in China.
- Author
- Li, Zhibing, Liu, Jie, Liu, Xiaoyu, and Wu, Chonglin
- Subjects
- MARKET sentiment, CAPITAL market, PRICES, AUDITING, PRICE increases, STOCKS (Finance)
- Abstract
We examine whether increasing investor attention affects stock price efficiency. To identify the causal effect, we employ daily repeated quasi-natural experiments in China where the investor attention difference is purely driven by a price rounding effect without information regarding stock fundamentals. Stocks tend to draw significantly more attention and show higher price efficiency after being exposed to the Winner List. We also find supporting evidence for two nonexclusive channels through which investor attention enhances stock price efficiency: increasing stock liquidity and stronger net inflows from large orders. The positive relationship between investor attention and price efficiency is more pronounced among stocks with lower institutional shareholdings, stocks without overseas or Big Four audit firms, and stocks without B- or H-shares. Our findings further shed light on the significant impact of saliency on the capital market. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Consumer Behavior: IN THE BLINK OF AN EYE: THE INFLUENCE OF QUICK VISUAL PROMPTS ON PRODUCT CHOICES ONLINE.
- Author
- Luca, Ruxandra M., Legoux, Renaud, Forster, Sophie, and Khammash, Marv
- Subjects
- INTERNET advertising, SELECTIVITY (Psychology), CONSUMER behavior, ADVERTISING effectiveness, INTERNET marketing
- Abstract
The article focuses on how brief visual prompts impact consumer product choices online. Topics include the effectiveness of visual prompts in influencing product selection; the effects of prompt location and color on consumer behavior; and the role of timing in visual prompt effectiveness. The study finds that even very brief visual cues, lasting 150 milliseconds, can significantly affect consumer decisions by enhancing the saliency of products.
- Published
- 2024
46. Target Perception and Behavioral Recognition Algorithms Based on Saliency and Feature Extraction
- Author
- Weichuan Ni, Bingtian Zhang, Jinting Zhang, Jiajun Zou, Zhiming Xu, and Zemin Qiu
- Subjects
- Target perception, behavioural recognition, saliency, feature extraction, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In response to the problems of target loss and insufficient accuracy in existing target perception and action recognition algorithms, this paper proposes a target perception and action recognition algorithm based on saliency and feature extraction. This algorithm uses saliency detection techniques to obtain salient regions in images or videos to focus attention on the target. At the same time, feature extraction techniques are combined to extract key nodes and inter-frame correlations from the target information. The experimental results of the measurement data show that this algorithm is superior to traditional detection methods in detecting target behaviors. In addition, it has successfully solved the problems of motion misalignment and jumping in pedestrian detection. Although the node localization of the algorithm needs further improvement, it has shown good application prospects in smart cities and intelligent surveillance. Future work will focus on improving the positioning accuracy of key nodes to enhance its adaptability to different environments and scenarios, providing better support for smart city and other applications.
- Published
- 2024
- Full Text
- View/download PDF
47. Medical Image Segmentation Using Combined Level Set and Saliency Analysis
- Author
- Aditi Joshi, Mohammed Saquib Khan, and Kwang Nam Choi
- Subjects
- Active contour, biomedical image processing, distance regularized, gradient-flow, image segmentation, saliency, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In the realm of computer vision, image segmentation has become a crucial task with widespread applications, particularly in medical imaging. Although there have been significant advancements in image segmentation methods, challenges persist in accurately delineating intricate structures within noisy and varied medical images. In this study, we have developed a novel segmentation model that combines the distance regularized level set evolution (DRLSE) model with a local gradient flow-based image (LGFI) and saliency maps. This innovative fusion addresses the limitations of existing methods and offers robust and precise solutions for medical image segmentation. We provide comprehensive mathematical formulations and demonstrate the effectiveness of the proposed model across diverse medical images. Through quantitative and qualitative analyses of the brain tumor segmentation (BraTS) 2019 dataset, we have demonstrated the superior accuracy, robustness, and computational efficiency of the proposed model in comparison with the state-of-the-art methods. This research marks a significant step toward enhancing medical image analysis, with potential applications in diagnostics and healthcare practices.
- Published
- 2024
- Full Text
- View/download PDF
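For context on the DRLSE building block named in the entry above: the standard distance-regularized level set energy (Li et al.'s formulation, which the proposed model extends with LGFI and saliency terms not reconstructed here) is

$$\mathcal{E}(\phi)=\mu\int_{\Omega}\mathcal{R}_p\big(|\nabla\phi|\big)\,d\mathbf{x}+\lambda\int_{\Omega} g\,\delta(\phi)\,|\nabla\phi|\,d\mathbf{x}+\alpha\int_{\Omega} g\,H(-\phi)\,d\mathbf{x},$$

where $\phi$ is the level set function, $\mathcal{R}_p$ a double-well potential keeping $|\nabla\phi|\approx 1$ (the "distance regularized" term), $g=1/\big(1+|\nabla(G_\sigma * I)|^2\big)$ an edge indicator, $\delta$ the Dirac delta, and $H$ the Heaviside function; $\lambda$ weights the edge-aligned contour-length term and $\alpha$ the area term driving contour expansion or shrinkage.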
48. Rapid Extraction of the Spatial Distribution of Physical Saliency and Semantic Informativeness from Natural Scenes in the Human Brain
- Author
- Kiat, John E, Hayes, Taylor R, Henderson, John M, and Luck, Steven J
- Subjects
- Biomedical and Clinical Sciences, Neurosciences, Eye Disease and Disorders of Vision, Clinical Research, Neurological, Adolescent, Adult, Attention, Brain, Brain Mapping, Evoked Potentials, Female, Humans, Male, Models, Neurological, Photic Stimulation, Semantics, Visual Perception, Young Adult, attention, EEG, ERP, meaning map, representational similarity analysis, saliency, Medical and Health Sciences, Psychology and Cognitive Sciences, Neurology & Neurosurgery
- Abstract
Physically salient objects are thought to attract attention in natural scenes. However, research has shown that meaning maps, which capture the spatial distribution of semantically informative scene features, trump physical saliency in predicting the pattern of eye moments in natural scene viewing. Meaning maps even predict the fastest eye movements, suggesting that the brain extracts the spatial distribution of potentially meaningful scene regions very rapidly. To test this hypothesis, we applied representational similarity analysis to ERP data. The ERPs were obtained from human participants (N = 32, male and female) who viewed a series of 50 different natural scenes while performing a modified 1-back task. For each scene, we obtained a physical saliency map from a computational model and a meaning map from crowd-sourced ratings. We then used representational similarity analysis to assess the extent to which the representational geometry of physical saliency maps and meaning maps can predict the representational geometry of the neural response (the ERP scalp distribution) at each moment in time following scene onset. We found that a link between physical saliency and the ERPs emerged first (∼78 ms after stimulus onset), with a link to semantic informativeness emerging soon afterward (∼87 ms after stimulus onset). These findings are in line with previous evidence indicating that saliency is computed rapidly, while also indicating that information related to the spatial distribution of semantically informative scene elements is computed shortly thereafter, early enough to potentially exert an influence on eye movements.SIGNIFICANCE STATEMENT Attention may be attracted by physically salient objects, such as flashing lights, but humans must also be able to direct their attention to meaningful parts of scenes. Understanding how we direct attention to meaningful scene regions will be important for developing treatments for disorders of attention and for designing roadways, cockpits, and computer user interfaces. Information about saliency appears to be extracted rapidly by the brain, but little is known about the mechanisms that determine the locations of meaningful information. To address this gap, we showed people photographs of real-world scenes and measured brain activity. We found that information related to the locations of meaningful scene elements was extracted rapidly, shortly after the emergence of saliency-related information.
- Published
- 2022
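The representational similarity analysis described in the entry above boils down to correlating model-based and neural dissimilarity matrices over the 50 scenes at each time point. A compact sketch with stand-in arrays (shapes and names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_scenes, n_chan, n_times = 50, 64, 300
sal_maps = np.random.rand(n_scenes, 32 * 32)      # flattened saliency-model maps
meaning_maps = np.random.rand(n_scenes, 32 * 32)  # flattened meaning maps
erp = np.random.rand(n_scenes, n_chan, n_times)   # scene-wise ERP topographies

rdm_sal = pdist(sal_maps, metric="correlation")         # condensed 50x50 RDMs
rdm_meaning = pdist(meaning_maps, metric="correlation")

rsa = np.empty((2, n_times))
for t in range(n_times):
    rdm_erp = pdist(erp[:, :, t], metric="correlation")  # neural RDM at time t
    rsa[0, t] = spearmanr(rdm_sal, rdm_erp)[0]
    rsa[1, t] = spearmanr(rdm_meaning, rdm_erp)[0]
# rsa[0] vs. rsa[1]: when saliency vs. meaning geometry predicts the scalp pattern
```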
49. Potsdam data set of eye movement on natural scenes (DAEMONS)
- Author
- Lisa Schwetlick, Matthias Kümmerer, Matthias Bethge, and Ralf Engbert
- Subjects
- eye movement, fixations, scan path, modeling, machine learning, saliency, Psychology, BF1-990
- Published
- 2024
- Full Text
- View/download PDF
50. Learning modifies attention during bumblebee visual search.
- Author
- Robert, Théo, Tarapata, Karolina, and Nityananda, Vivek
- Subjects
- POLLINATION, POLLINATORS, VISUAL perception, BUMBLEBEES, ATTENTION, BEES
- Abstract
The role of visual search during bee foraging is relatively understudied compared to the choices made by bees. As bees learn about rewards, we predicted that visual search would be modified to prioritise rewarding flowers. To test this, we ran an experiment testing how bee search differs in the initial and later part of training as they learn about flowers with either higher- or lower-quality rewards. We then ran an experiment to see how this prior training with reward influences their search on a subsequent task with different flowers. We used the time spent inspecting flowers as a measure of attention and found that learning increased attention to rewards and away from unrewarding flowers. Higher quality rewards led to decreased attention to non-flower regions, but lower quality rewards did not. Prior experience of lower rewards also led to more attention to higher rewards compared to unrewarding flowers and non-flower regions. Our results suggest that flowers would elicit differences in bee search behaviour depending on the sugar content of their nectar. They also demonstrate the utility of studying visual search and have important implications for understanding the pollination ecology of flowers with different qualities of reward. Significance statement: Studies investigating how foraging bees learn about reward typically focus on the choices made by the bees. How bees deploy attention and visual search during foraging is less well studied. We analysed flight videos to characterise visual search as bees learn which flowers are rewarding. We found that learning increases the focus of bees on flower regions. We also found that the quality of the reward a flower offers influences how much bees search in non-flower areas. This means that a flower with lower reward attracts less focussed foraging compared to one with a higher reward. Since flowers do differ in floral reward, this has important implications for how focussed pollinators will be on different flowers. Our approach of looking at search behaviour and attention thus advances our understanding of the cognitive ecology of pollination. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF