332 results for "Motion cues"
Search Results
2. Impact of motion cues, color, and luminance on depth perception in optical see-through AR displays.
- Author
-
Ashtiani, Omeed, Guo, Hung-Jui, and Prabhakaran, Balakrishnan
- Subjects
MOTION perception (Vision), DEPTH perception, COMPUTER-generated imagery, AUGMENTED reality, MOTION analysis, COLOR vision, LUMINOUS flux - Abstract
Introduction: Augmented Reality (AR) systems are systems in which users view and interact with virtual objects overlaying the real world. AR systems are used across a variety of disciplines, e.g., games, medicine, and education, to name a few. Optical See-Through (OST) AR displays allow users to perceive the real world directly by overlaying computer-generated imagery on it. While perception of depth and visibility of objects is a widely studied field, we wanted to observe how the color, luminance, and movement of an object interact with each other as well as with external luminance in OST AR devices. Little research has been done regarding the effect of virtual objects' parameters on depth perception, external lighting, and the effect of an object's mobility on this depth perception. Methods: We aim to perform an analysis of the effects of motion cues, color, and luminance on depth estimation of AR objects overlaying the real world with OST displays. We perform two experiments, differing in environmental lighting conditions (287 lux and 156 lux), and analyze the effects and differences on depth and speed perceptions. Results: We have found that while stationary objects follow previous research with regard to depth perception, motion and both object and environmental luminance play a role in this perception. Discussion: These results will be useful for developers to account for depth estimation issues that may arise in AR environments. Awareness of the different effects of speed and environmental illuminance on depth perception can be utilized in AR or MR applications where precision matters. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Social Cues of Safety Can Override Differences in Threat Level
- Author
-
Clara H. Ferreira, Mirjam Heinemans, Matheus Farias, Rui Gonçalves, and Marta A. Moita
- Subjects
defensive behavior, freezing, social buffering, Drosophila melanogaster, motion cues, safety in numbers, Evolution, QH359-425, Ecology, QH540-549.5 - Abstract
Animals in groups integrate social with directly gathered information about the environment to guide decisions regarding reproduction, foraging, and defence against predatory threats. In the context of predation, usage of social information has acute fitness benefits, aiding the detection of predators, the mounting of concerted defensive responses, or allowing the inference of safety, permitting other beneficial behaviors, such as foraging for food. We previously showed that Drosophila melanogaster exposed to an inescapable visual threat use freezing by surrounding flies as a cue of danger and movement resumption as a cue of safety. Moreover, group responses were primarily guided by the safety cues, resulting in a net social buffering effect, i.e., a graded decrease in freezing behavior with increasing group sizes, similar to other animals. Whether and how different threat levels affect the use of social cues to guide defense responses remains elusive. Here, we investigated this issue by exposing flies individually and in groups to two threat imminences using looms of different speeds. We showed that freezing responses are stronger to the faster looms regardless of social condition. However, social buffering was stronger for groups exposed to the fast looms, such that the increase in freezing caused by the higher threat was less prominent in flies tested in groups than those tested individually. Through artificial control of movement, we created groups composed of moving and freezing flies and by varying group composition, we titrated the motion cues that surrounding flies produce, which were held constant across threat levels. We found that the same level of safety motion cues had a bigger weight on the flies’ decisions when these were exposed to the higher threat, thus overriding differences in perceived threat levels. 
These findings shed light on the “safety in numbers” effect, revealing the modulation of the saliency of social safety cues across threat intensities, a possible mechanism to regulate costly defensive responses.
- Published
- 2022
- Full Text
- View/download PDF
5. Personality perception in human videos altered by motion transfer networks.
- Author
-
Yurtoğlu, Ayda, Sonlu, Sinan, Doğan, Yalım, and Güdükbay, Uğur
- Subjects
PERSONALITY, VIDEO excerpts, VIDEOCONFERENCING, NEUROTICISM, EXTRAVERSION, MOTION - Abstract
The successful portrayal of personality in digital characters improves communication and immersion. Current research focuses on expressing personality by modifying animations using heuristic rules or data-driven models. While studies suggest that motion style strongly influences apparent personality, the role of appearance can be similarly essential. This work analyzes the influence of movement and appearance on the perceived personality of short videos altered by motion transfer networks. We label the personalities in conference video clips with a user study to determine the samples that best represent the Five-Factor model's high, neutral, and low traits. We alter these videos using the Thin-Plate Spline Motion Model, utilizing the selected samples as the source and driving inputs. We follow five different cases to study the influence of motion and appearance on personality perception. Our comparative study reveals that motion and appearance influence different factors: motion strongly affects perceived extraversion, and appearance helps convey agreeableness and neuroticism.
• We analyze personality expression in videos generated by motion transfer networks.
• We examine the effect of motion and appearance on personality via user studies.
• Motion transfer can control perceived extraversion and openness.
• Different appearances can be used for expressing agreeableness and neuroticism.
• We observe that altering conscientiousness is difficult in short sequences.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A Dynamical Generative Model of Social Interactions
- Author
-
Alessandro Salatiello, Mohammad Hovaidi-Ardestani, and Martin A. Giese
- Subjects
social interactions, generative model, motion cues, social perception, social inference, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
The ability to make accurate social inferences allows humans to navigate and act in their social environment effortlessly. Converging evidence shows that motion is one of the most informative cues in shaping the perception of social interactions. However, the scarcity of parameterized generative models for the generation of highly-controlled stimuli has slowed down both the identification of the most critical motion features and the understanding of the computational mechanisms underlying their extraction and processing from rich visual inputs. In this work, we introduce a novel generative model for the automatic generation of an arbitrarily large number of videos of socially interacting agents for comprehensive studies of social perception. The proposed framework, validated with three psychophysical experiments, allows generating as many as 15 distinct interaction classes. The model builds on classical dynamical system models of biological navigation and is able to generate visual stimuli that are parametrically controlled and representative of a heterogeneous set of social interaction classes. The proposed method thus represents an important tool for experiments aimed at unveiling the computational mechanisms mediating the perception of social interactions. The ability to generate highly-controlled stimuli makes the model valuable not only to conduct behavioral and neuroimaging studies, but also to develop and validate neural models of social inference, and machine vision systems for the automatic recognition of social interactions. In fact, contrasting human and model responses to a heterogeneous set of highly-controlled stimuli can help to identify critical computational steps in the processing of social interaction stimuli.
- Published
- 2021
- Full Text
- View/download PDF
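The entry above builds its stimuli on classical dynamical-system models of biological navigation, in which an agent's heading continuously relaxes toward the direction of its goal. A minimal sketch of that idea follows; the function name, rate constant `k`, step size `dt`, and Euler integration are illustrative assumptions, not the paper's actual parameterization:

```python
import math

def heading_step(phi, pos, goal, dt=0.05, k=2.0):
    """One Euler step of a goal-attraction heading dynamic.

    The agent's heading phi (rad) relaxes toward the bearing of the goal
    at rate k. This is a hand-rolled stand-in for the richer navigation
    dynamics used to generate interaction classes in the paper.
    """
    psi = math.atan2(goal[1] - pos[1], goal[0] - pos[0])          # bearing of goal
    err = math.atan2(math.sin(psi - phi), math.cos(psi - phi))    # wrapped heading error
    return phi + k * err * dt

# An agent initially heading along +x turns toward a goal straight "north":
phi = 0.0
for _ in range(200):
    phi = heading_step(phi, (0.0, 0.0), (0.0, 1.0))
```

Composing a few such attraction (and repulsion) terms per agent is what lets a low-dimensional parameterization generate many distinct, controlled interaction classes.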
8. Multiple object tracking using feature fusion in hierarchical LSTMs
- Author
-
Ehtesham Hassan
- Subjects
image sequences, object tracking, object detection, image representation, video signal processing, image segmentation, image motion analysis, recurrent neural nets, learning (artificial intelligence), feature extraction, track modelling, motion coding scheme, relative position, motion representation, hierarchical LSTM structure, track association, multiple object tracking challenge datasets, feature fusion, hierarchical LSTM, intelligent video applications, recurrent neural networks, complex temporal dynamics, online tracking, active tracks, tracking-by-detection methodology, hierarchical long short term memory network structure, motion dynamics, motion cues, bounding boxes, object instance segments, Engineering (General). Civil engineering (General), TA1-2040 - Abstract
Multiple object tracking sets the foundation for many intelligent video applications. The authors present a novel tracking solution using the ability of recurrent neural networks to effectively model complex temporal dynamics between objects irrespective of appearance, pose, occlusion, and illumination. For online tracking, real-time and accurate association of objects with active tracks poses the major algorithmic challenge; re-entry of objects must also be correctly resolved. They follow a tracking-by-detection methodology using a hierarchical long short-term memory (LSTM) network structure, modelling the motion dynamics between objects by learning a fusion of appearance and motion cues. Existing works capture the object's perspective for tracking within the detected bounding boxes; the authors also incorporate object instance segments for track modelling by applying the maskRCNN detector. They present a novel motion coding scheme that anchors the LSTM structure to effectively model the motion and relative position between objects in a single representation scheme. The proposed motion representation and deep features representing objects' appearances are fused in an embedded space learned by the hierarchical LSTM structure for predicting the object-to-track association. The authors present experimental validation of the proposed approach on multiple object tracking challenge datasets and demonstrate that their solution naturally deals with major tracking challenges under all uncertainties.
- Published
- 2020
- Full Text
- View/download PDF
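As a rough illustration of the fuse-then-associate idea in the entry above, here is a minimal sketch in which the learned hierarchical-LSTM embedding and association prediction are replaced by hand-rolled normalization and greedy cosine-similarity matching; all function names and feature values are hypothetical, not the paper's implementation:

```python
import numpy as np

def fuse(appearance, motion, w=0.5):
    # L2-normalize each cue, then weight and concatenate: a simple
    # stand-in for the embedded fusion space the hierarchical LSTM
    # learns from data in the paper.
    a = appearance / np.linalg.norm(appearance, axis=-1, keepdims=True)
    m = motion / np.linalg.norm(motion, axis=-1, keepdims=True)
    return np.concatenate([w * a, (1 - w) * m], axis=-1)

def associate(track_feats, det_feats):
    # Greedy object-to-track association by similarity of fused features
    # (the paper predicts associations with the LSTM instead).
    sim = track_feats @ det_feats.T
    pairs, used = [], set()
    for t in np.argsort(-sim.max(axis=1)):      # most confident tracks first
        for d in np.argsort(-sim[t]):           # best available detection
            if d not in used:
                pairs.append((int(t), int(d)))
                used.add(d)
                break
    return pairs

# Two tracks and two detections with swapped cue vectors:
tracks = fuse(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[1.0, 0.0], [0.0, 1.0]]))
dets = fuse(np.array([[0.0, 1.0], [1.0, 0.0]]), np.array([[0.0, 1.0], [1.0, 0.0]]))
matches = associate(tracks, dets)
```

The point of the fused representation is that appearance can disambiguate when motion is uninformative (e.g., after occlusion or re-entry) and vice versa.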
10. Human Visual Perception
- Author
-
Rößing, Christoph and Terzis, Anestis, editor
- Published
- 2016
- Full Text
- View/download PDF
11. Surveillance Based Crowd Counting via Convolutional Neural Networks
- Author
-
Zhang, Damin, Li, Zhanming, Liu, Pengcheng, Zhang, Zhang, editor, and Huang, Kaiqi, editor
- Published
- 2016
- Full Text
- View/download PDF
12. VISUAL CUES: A WAY TO ENHANCE ACCURATE JUDGEMENTS OF TRAVEL SPEED IN DRIVER SIMULATORS
- Author
-
Söderström, Malin
- Abstract
Drivers in simulators tend to drive faster than in a real car. The study aimed to examine whether visual cues impact driver velocity in a simulator. This is important because of the tendency for users to drive faster in simulators than in authentic driving situations, which is thought to be caused by the lack of sufficient cues in the simulated environment to convey motion. The hypothesis was that the usage of visual cues would make simulated motion cues more realistic, assisting the driver in making accurate judgements of their driving speed. Accurate judgements would in turn result in less speeding in the driver simulator. The experiment was conducted in a driver simulator in collaboration with SAFE trafikskola. The experiment compared two conditions in which visual cues were more and less present. The data was complemented with a survey to gather additional information. The result from the t-test showed a significant effect on the measured velocity, whereas the two-way ANOVA yielded no such impact. The repeated measures ANOVA contributed significant results on the difference between the points of measure and gave no significant main effect between conditions. Together with the complementary survey, the conclusion was made that the usage of visual cues in a driver simulator can affect the velocity of the driver. The knowledge regarding visual cues in simulated environments could be used to improve driver simulators. Future research has the possibility to investigate motion cues from modalities other than vision to increase realism in driver simulators.
- Published
- 2023
13. Exploiting Contextual Motion Cues for Visual Object Tracking
- Author
-
Duffner, Stefan, Garcia, Christophe, Agapito, Lourdes, editor, Bronstein, Michael M., editor, and Rother, Carsten, editor
- Published
- 2015
- Full Text
- View/download PDF
14. EEG Frequency Tagging Reveals the Integration of Form and Motion Cues into the Perception of Group Movement
- Author
-
Lisa Quenon, Bruno Rossion, Goedele Van Belle, Guido Orgs, Patrick Haggard, Emiel Cracco, and Haeeun Lee
- Subjects
visual perception, steady-state responses, binding, biological motion perception, motion perception, movement, groups, social group, electroencephalography (EEG), face, body movement, inversion, recognition, synchrony, frequency tagging, motion cues, motor simulation, cues, photic stimulation, cognitive psychology, psychology - Abstract
The human brain has dedicated mechanisms for processing other people’s movements. Previous research has revealed how these mechanisms contribute to perceiving the movements of individuals but has left open how we perceive groups of people moving together. Across three experiments, we test whether movement perception depends on the spatiotemporal relationships among the movements of multiple agents. In Experiment 1, we combine EEG frequency tagging with apparent human motion and show that posture and movement perception can be dissociated at harmonically related frequencies of stimulus presentation. We then show that movement but not posture processing is enhanced when observing multiple agents move in synchrony. Movement processing was strongest for fluently moving synchronous groups (Experiment 2) and was perturbed by inversion (Experiment 3). Our findings suggest that processing group movement relies on binding body postures into movements and individual movements into groups. Enhanced perceptual processing of movement synchrony may form the basis for higher order social phenomena such as group alignment and its social consequences.
- Published
- 2021
- Full Text
- View/download PDF
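The frequency-tagging logic in the entry above, with posture and movement information dissociated at harmonically related frequencies of stimulus presentation, can be illustrated on synthetic data. The sampling rate, tagging frequencies, and amplitudes below are assumptions for the sketch, not the study's actual parameters:

```python
import numpy as np

fs = 500.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)      # 10 s of simulated signal
f_base, f_harm = 2.5, 5.0         # hypothetical tagging frequency and its harmonic
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * f_base * t)      # component at the base rate
       + 0.5 * np.sin(2 * np.pi * f_harm * t)    # component at the harmonic
       + 0.2 * rng.standard_normal(t.size))      # broadband noise

amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    # Amplitude in the frequency bin closest to f (bin width 0.1 Hz here).
    return amp[np.argmin(np.abs(freqs - f))]
```

Because the tagged components are periodic and the noise is broadband, the responses stand out sharply at exactly the tagged bins, which is what makes frequency tagging a sensitive measure of stimulus-locked processing.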
15. Motion-based countermeasure against photo and video spoofing attacks in face recognition.
- Author
-
Edmunds, Taiamiti and Caplier, Alice
- Subjects
HUMAN facial recognition software, FRAUD prevention, MOTION detectors, BIOMETRIC identification, TRACKING algorithms - Abstract
Facial biometric systems are vulnerable to fraudulent access attempts in which photographs or videos of a valid user are presented in front of the sensor, also known as “spoofing attacks”. Multiple protection measures have been proposed, but limited attention has been dedicated to exclusively motion-based countermeasures since the arrival of video and mask attacks. A novel motion-based countermeasure that exploits natural and unnatural motion cues is presented. The proposed method takes advantage of the Conditional Local Neural Fields (CLNF) face tracking algorithm to extract rigid and non-rigid face motions. Similarly to bag-of-words feature encoding, a vocabulary of motion sequences is constructed to derive discriminant mid-level motion features using the Fisher vector framework. Extensive experiments are conducted on the ReplayAttack-DB, CASIA-FASD and MSU-MFSD databases. Complementary experiments on rigid mask attacks from the 3DMAD public database are also conducted, and generalization issues are investigated via cross-database evaluation in particular. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
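The vocabulary-of-motion-sequences encoding mentioned above can be sketched with a plain bag-of-words histogram. The paper derives richer mid-level features with the Fisher vector framework over the same vocabulary idea; the hard assignment, two-word vocabulary, and descriptor values below are invented for illustration:

```python
import numpy as np

def bow_encode(descriptors, vocabulary):
    # Assign each windowed motion descriptor to its nearest vocabulary
    # word (hard assignment) and return the normalized histogram.
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Hypothetical two-word vocabulary: "nearly still" vs "large rigid motion".
vocab = np.array([[0.0, 0.0], [10.0, 10.0]])
clip = np.array([[0.1, 0.0], [9.0, 10.0], [10.0, 9.0], [0.0, 0.2]])
h = bow_encode(clip, vocab)
```

A classifier trained on such encodings can separate the motion statistics of live faces from the flatter, more rigid motion of replayed photos and videos.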
16. Great Expectations: On the Design of Predictive Motion Cues to Alleviate Carsickness
- Author
-
Diels, Cyriel, Bos, Jelte, and Krömker, Heidi
- Subjects
motion sickness, philosophy of design, automation, motion cues, personalization, stimulus modality, risk analysis (engineering), interface design - Abstract
Motion sickness has gained renewed interest in the context of developments in vehicle automation, in which we are witnessing a transition from a driver-centric to a passenger-centric design philosophy. As a corollary, motion sickness can be expected to become considerably more prevalent, which creates a hurdle to the successful introduction of vehicle automation and its ultimate socio-economic and environmental benefits. Here we review early proof-of-concept studies into the beneficial effects of providing passengers with predictive motion cues as an elegant and effective method to reduce motion sickness in future vehicles. Future design parameters are discussed to fine-tune such cues not only for optimum effectiveness but, importantly, also for acceptance, including sensory modality, timing, information detailing, and personalization.
- Published
- 2021
- Full Text
- View/download PDF
17. Inexperienced preys know when to flee or to freeze in front of a threat.
- Author
-
Hébert, Marie, Versace, Elisabetta, and Vallortigara, Giorgio
- Subjects
STARTLE reaction, ANIMAL young, OPEN-ended questions, CHICKS - Abstract
Using appropriate antipredatory responses is crucial for survival. While slowing down reduces the chances of being detected by distant predators, fleeing is advantageous in front of an approaching predator. Whether appropriate responses depend on experience with moving objects is still an open question. To clarify whether adopting appropriate fleeing or freezing responses requires previous experience, we investigated the responses of chicks naive to movement. When exposed to moving cues mimicking an approaching predator (a rapidly expanding, looming stimulus), chicks displayed a fast escape response. In contrast, when presented with a distal threat (a small stimulus sweeping overhead) they decreased their speed, a maneuver useful to avoid detection. The fast expansion of the stimulus toward the subject, rather than its size per se or change in luminance, triggered the escape response. These results show that young animals, in the absence of previous experience, can use motion cues to select the appropriate responses to different threats. The adaptive needs of young prey are thus matched by spontaneous defensive mechanisms that do not require learning. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
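The looming cue that drives the escape response above has a well-known geometric reading: the ratio of an object's angular size to its expansion rate approximates time to contact, without requiring the animal to know absolute size or distance. A minimal sketch, with illustrative numbers (not data from the study):

```python
import math

def optical_tau(radius, distance, speed):
    """Time to collision estimated from looming geometry alone.

    An object of radius R at distance d approaching at speed v subtends
    theta = 2*atan(R/d) and expands at theta_dot = 2*R*v / (d**2 + R**2);
    their ratio ("tau") approximates d/v.
    """
    theta = 2 * math.atan(radius / distance)
    theta_dot = 2 * radius * speed / (distance ** 2 + radius ** 2)
    return theta / theta_dot

# A 5 cm object, 1 m away, approaching at 0.5 m/s: tau is close to d/v = 2 s.
tau = optical_tau(0.05, 1.0, 0.5)
```

This is consistent with the finding that expansion rate, rather than size or luminance change, triggers escape: the expansion signal alone carries the imminence of the threat.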
18. A study on deep learning spatiotemporal models and feature extraction techniques for video understanding
- Author
-
Subramanya Kuppa, M. Suresha, and D. S. Raghukumar
- Subjects
computer science, deep learning, feature extraction, machine learning, convolutional neural network, motion cues, score fusion, artificial intelligence, sequence learning, semantic information, information systems - Abstract
Video understanding requires abundant semantic information. Substantial progress has been made on deep learning models in the image, text, and audio domains, and notable efforts have been recently dedicated to the design of deep networks in the video domain. We discuss the state-of-the-art convolutional neural network (CNN) and its pipelines for the exploration of video features, various fusion strategies, and their performances; we also discuss the limitations of CNN for long-term motion cues and the use of sequential learning models such as long short-term memory to overcome these limitations. In addition, we address various multi-model approaches for extracting important cues and score fusion techniques from hybrid deep learning frameworks. Then, we highlight future plans in this domain, recent trends, and substantial challenges for video understanding. This survey’s objectives are to study the plethora of approaches that have been developed for solving video understanding problems, to comprehensively study spatiotemporal cues, to explore the various models that are available for solving these problems and to identify the most promising approaches.
- Published
- 2020
- Full Text
- View/download PDF
19. The functional significance of mantis peering behaviour
- Author
-
Karl Kral
- Subjects
mantids, Mantis religiosa, Polyspilota, Tenodera aridifolia sinensis, compound eye, spatial vision, binocular cues, motion cues, range estimation, habitat structure, Zoology, QL1-991 - Abstract
The aim of this review is to explain the functional significance of mantis peering behaviour from an entomological perspective. First, the morphological and optical features of the mantis compound eye that are important for spatial vision are described. The possibility that praying mantises use binocular retinal disparity (stereopsis) and other alternative visual cues for determining distance in prey capture is discussed. The primary focus of the review is the importance of peering movements for estimating the distance to stationary objects. Here the following aspects are examined: (1) direct evidence, via object manipulation experiments, of absolute distance estimation with the aid of self-induced retinal image motion; (2) the mechanism of absolute distance estimation (with the interaction of visual and proprioceptive information); (3) the range of absolute and relative distance estimation; (4) the influence of target object features on distance estimation; and (5) the relationship between peering behaviour and habitat structures, based on the results of studies on three species of mantis.
- Published
- 2012
- Full Text
- View/download PDF
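Distance estimation from self-induced retinal image motion, point (1) in the review above, follows from simple motion-parallax geometry; the sketch below uses invented numbers, not measurements from the review:

```python
def distance_from_peering(translation_speed, retinal_angular_velocity):
    # During a lateral peering sweep at speed v (m/s), a stationary target
    # at distance d drifts across the retina at w = v / d (rad/s), so the
    # absolute distance is recoverable as d = v / w. The insect supplies v
    # proprioceptively and w visually.
    return translation_speed / retinal_angular_velocity

# A 2 cm/s head sweep producing 0.4 rad/s of image drift puts the target at 5 cm.
d = distance_from_peering(0.02, 0.4)
```

This is why the interaction of visual and proprioceptive information matters: without knowledge of its own translation speed, the mantis could recover only relative, not absolute, distance.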
20. Effects of unlimited angular motion cue and cue discrepancy on simulator sickness.
- Author
-
Kim, Jiwon, Lee, Seong-Min, Son, Hungsun, and Park, Taezoon
- Subjects
SIMULATOR sickness, MOTION sickness, VISUAL perception, FLIGHT training, FLIGHT simulators - Abstract
• Unlimited angular motion cues via a spherical motion platform reduce motion sickness.
• Cue conflict is directly measured by the mismatch between visual and motion cues.
• Cue mismatches in pitch angle and yaw velocity correlate with motion sickness.
Simulator sickness is a crucial concern undermining several benefits of simulator training, such as a realistic environment, low costs, and safe practice of emergencies. This study investigated the effects of unbounded angular motions and visual-vestibular cue discrepancies on simulator sickness for flight simulator training. Human subject experiments with 36 participants demonstrated that simulator sickness, measured by questionnaires and physiological signals, was significantly decreased by offering both motion and visual cues rather than visual signals alone (p < 0.05). Specifically, nausea (without motion = 54.59, with motion = 31.27; p = 0.036) and disorientation scores (without motion = 81.20, with motion = 44.08; p = 0.028) significantly decreased when both motion and visual signals were present. Furthermore, the experimental results showed a significant correlation between simulator sickness and visual-vestibular cue mismatches, particularly for the angular velocity along the z-axis (r = 0.110, p = 0.04). The pitch angle discrepancy (r = 0.156, p = 0.004) between the visual and motion cues was significantly correlated with the sickness severity, unlike the roll angle disparity (r = −0.009, p = 0.871). The results from this study can be explored for flight training operations using motion simulators to minimize or eliminate simulator sickness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
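The mismatch-sickness relationships reported above are plain Pearson correlation coefficients, and computing one from scratch is straightforward. The per-trial values below are made up for illustration and do not come from the study:

```python
import statistics

def pearson_r(x, y):
    # Pearson correlation between a per-trial cue-mismatch measure and a
    # sickness score, computed from the centered cross- and self-products.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical pitch-angle mismatch (deg) and sickness score per trial:
pitch_mismatch = [0.5, 1.2, 0.8, 2.0, 1.5, 0.3]
sickness_score = [10, 25, 18, 40, 30, 8]
r = pearson_r(pitch_mismatch, sickness_score)
```

Directly measuring the visual-vestibular mismatch per trial, rather than inferring conflict indirectly, is what allows this kind of correlation to be computed at all.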
21. Vision-Based Vehicle and Pedestrian Tracking of Intersection Videos.
- Author
-
Shirazi, Mohammad Shokrolah and Morris, Brendan Tran
- Subjects
- *
TRAFFIC safety , *VEHICLES , *PEDESTRIANS , *OPTICAL flow , *FALSE alarms , *TRAFFIC signs & signals , *SAFETY - Abstract
Vehicle and pedestrian tracking is a key component of vision-based safety analysis, which can use the motion and appearance cues of road users. Appearance-based detectors generate false alarms and run more slowly, so motion-based detectors are preferred; however, motion-based detectors perform poorly when pedestrians or vehicles stop at traffic signals. In this paper, a tracking system is proposed that tracks waiting and moving road users by fusing motion and appearance cues at the detection level. The enhanced optical flow tracker handles the partial occlusion problem, and it cooperates with the detection module to provide long-term tracks of vehicles and pedestrians. The system evaluation shows 13% and 43% improvements in the tracking of vehicles and pedestrians, respectively, and heat maps illustrate the benefits of the proposed system through a visual depiction of intersection usage. [ABSTRACT FROM AUTHOR]
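The detection-level fusion the abstract describes can be sketched in a few lines. This is a minimal illustration only; the box format, IoU threshold, and function names are assumptions, not the authors' implementation:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_detections(motion_boxes, appearance_boxes, thr=0.3):
    """Trust motion-based detections, then add appearance-only boxes so that
    waiting road users (invisible to motion cues) are still detected."""
    fused = list(motion_boxes)
    for a in appearance_boxes:
        if all(iou(a, m) < thr for m in motion_boxes):
            fused.append(a)  # stationary road user found by appearance only
    return fused
```

A moving road user detected by both cues contributes a single box, while a pedestrian waiting at a signal survives via the appearance detector, which is the failure mode of motion-only detection the abstract highlights.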
- Published
- 2016
- Full Text
- View/download PDF
22. Moving speeches: Dominance, trustworthiness and competence in body motion.
- Author
-
Koppensteiner, Markus, Stephan, Pia, and Jäschke, Johannes Paul Michael
- Subjects
- *
POLITICIANS -- Psychology , *SOCIAL skills , *POSTURE , *IMPRESSION formation (Psychology) , *SOCIAL perception - Abstract
People read dominance, trustworthiness and competence into the faces of politicians, but do they also perceive such social qualities in other nonverbal cues? We transferred the body movements of politicians giving a speech onto animated stick-figures and presented these stimuli to participants in a rating experiment. Analyses revealed single body postures of maximal expansiveness to be strong predictors of perceived dominance. Also, stick-figures producing expansive movements, as well as a great number of movements throughout the encoded sequences, were judged high on dominance and low on trustworthiness. In a second step, we divided our sample into speakers from the opposition parties and speakers who were part of the government, as well as into male and female speakers. Male speakers from the opposition were rated higher on dominance but lower on trustworthiness than speakers from all other groups. In conclusion, people use simple cues to make equally simple social categorizations. Moreover, the party status of male politicians seems to become visible in their body motion. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
23. Should I Stop or Should I Cross?: Interactions between vulnerable road users and automated vehicles
- Author
-
Nuñez Velasco, J.P. (author)
- Abstract
This dissertation aims to understand the behavior of pedestrians and cyclists when interacting with automated vehicles (AVs). The role of AVs’ characteristics such as their physical appearance, whether a driver is present, the presence of external communication interfaces, and factors pertaining to the behavior of the vehicle were investigated using virtual reality road crossing experiments. In addition, psychological factors that could be affected by the presence of AVs were included. TRAIL Thesis Series no. T2021/15, the Netherlands Research School TRAIL, Transport and Planning
- Published
- 2021
24. Multi-Fusion Sensors for Action Recognition based on Discriminative Motion Cues and Random Forest
- Author
-
Sadaf Hafeez, Ahmad Jalal, and Shaharyar Kamal
- Subjects
Fusion ,Discriminative model ,business.industry ,Computer science ,Action recognition ,Pattern recognition ,Artificial intelligence ,business ,Motion cues ,Random forest
- Published
- 2021
- Full Text
- View/download PDF
25. Auditory pitch glides influence time-to-contact judgements of visual stimuli
- Author
-
Steven L. Prime and Carly King
- Subjects
Male ,Visual perception ,Acoustics ,Motion Perception ,Constant speed ,Time to contact ,050105 experimental psychology ,Auditory pitch ,Judgment ,Young Adult ,03 medical and health sciences ,0302 clinical medicine ,otorhinolaryngologic diseases ,Humans ,0501 psychology and cognitive sciences ,cardiovascular diseases ,Pitch Perception ,Mathematics ,Crossmodal ,Pure tone ,General Neuroscience ,05 social sciences ,humanities ,Motion cues ,Acoustic Stimulation ,Auditory Perception ,Female ,Photic Stimulation ,psychological phenomena and processes ,030217 neurology & neurosurgery - Abstract
A common experimental task used to study the accuracy of estimating when a moving object arrives at a designated location is the time-to-contact (TTC) task. Previous studies have shown evidence that sound motion cues influence TTC estimates of a moving visual object. However, the extent to which sound can influence the TTC of visual targets remains unclear. Some studies on the crossmodal correspondence between pitch and speed suggest that descending pitch sounds are associated with faster speeds than ascending pitch sounds, due to an internal model of gravity. Other studies have shown the opposite pitch-speed mapping (i.e., ascending pitch associated with faster speeds) and no influence of gravity heuristics. Here, we explored whether auditory pitch glides, continuous pure tones either ascending or descending in pitch, influence TTC estimates of a vertically moving visual target, and whether any observed effects are consistent with a gravity-centered or gravity-unrelated pitch-speed mapping. Subjects estimated when a disc moving either upward or downward at a constant speed reached a visual landmark after the disc disappeared behind an occluder, under three conditions: with an accompanying ascending pitch glide, with a descending pitch glide, or with no sound. Overall, subjects underestimated TTC with ascending pitch glides and overestimated TTC with descending pitch glides, compared to the no-sound condition. These biases in TTC were consistent in both disc motion directions. These results suggest that subjects adopted a gravity-unrelated pitch-speed mapping in which ascending pitch is associated with faster speeds and descending pitch with slower speeds.
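Formally, the correct response in this paradigm is a constant-speed extrapolation: the distance remaining to the landmark at occlusion, divided by the target's speed. A minimal sketch (variable names are hypothetical, not from the study):

```python
def time_to_contact(occlusion_pos, landmark_pos, speed):
    """Time until a target moving at constant speed reaches the landmark
    after disappearing at occlusion_pos (same spatial units; speed > 0)."""
    return abs(landmark_pos - occlusion_pos) / speed
```

Underestimating TTC (as subjects did with ascending pitch glides) corresponds to responding before this value has elapsed; overestimating it (descending glides) corresponds to responding after.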
- Published
- 2019
- Full Text
- View/download PDF
26. Depth Map Estimation Using Defocus and Motion Cues
- Author
-
Sumana Gupta, Ajeet Singh Yadav, Himanshu Kumar, and K. S. Venkatesh
- Subjects
Computer science ,business.industry ,Motion blur ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Stereo display ,Motion (physics) ,Motion cues ,Depth map ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Depth perception ,business ,Reliability (statistics) ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Significant recent developments in 3D display technology have focused on techniques for converting 2D media into 3D. The depth map is an integral part of 2D-to-3D conversion. Combining multiple depth cues yields a more accurate depth map, since errors caused by one depth cue, or its absence, are compensated by the other cues. In this paper, we present a novel framework for generating a more accurate depth map for video using defocus and motion cues. Moving objects in the scene are a source of errors in both defocus- and motion-based depth map estimation. The proposed method rectifies these errors in the depth map by integrating defocus blur and motion cues. In addition, it corrects errors in other parts of the depth map caused by inaccurate estimation of defocus blur and motion. Since the proposed integration approach relies on the characteristics of the point spread functions of defocus and motion blur, along with their relations to camera parameters, it is more accurate and reliable.
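A simplified view of such cue integration is a per-pixel, confidence-weighted average of the two depth estimates. The weighting scheme below is an illustrative assumption, not the paper's PSF-based method:

```python
def fuse_depth(d_defocus, d_motion, w_defocus, w_motion):
    """Weighted per-pixel combination of defocus- and motion-based depth.
    All inputs are equally sized 2-D lists; weights are confidences in [0, 1]."""
    fused = []
    for r in range(len(d_defocus)):
        row = []
        for c in range(len(d_defocus[r])):
            wd, wm = w_defocus[r][c], w_motion[r][c]
            if wd + wm == 0:
                row.append(None)  # neither cue available: depth unknown here
            else:
                # confidence-weighted average; a zero-confidence cue drops out
                row.append((wd * d_defocus[r][c] + wm * d_motion[r][c]) / (wd + wm))
        fused.append(row)
    return fused
```

Where one cue is absent (weight zero), the other cue fills in, which is the compensation property the abstract refers to.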
- Published
- 2019
- Full Text
- View/download PDF
27. Visual perception of speed in drivers with ADHD
- Author
-
Birgitta Thorslund and Björn Lidestam
- Subjects
050210 logistics & transportation ,Student population ,medicine.medical_specialty ,Visual perception ,Sociology and Political Science ,05 social sciences ,Geography, Planning and Development ,Driving simulator ,Audiology ,Management, Monitoring, Policy and Law ,Environmental Science (miscellaneous) ,medicine.disease ,behavioral disciplines and activities ,Motion cues ,Visual motion ,mental disorders ,0502 economics and business ,Speed perception ,medicine ,Attention deficit hyperactivity disorder ,0501 psychology and cognitive sciences ,Motion perception ,Psychology ,050107 human factors - Abstract
Effects of ADHD on driving speed were studied in a driving simulator with only visual motion cues, by comparing drivers with ADHD diagnosis (n = 36) to drivers from a normal student population (n = 28). Their task was to repeatedly accelerate to own preferred speed for a total of 26 trials (2 baseline, 24 experimental trials). Field of view (1, 3, 5, and 7 monitors) and virtual road markings (on, off) were manipulated. These eight experimental conditions were presented three times each (replicates). Overall mean speed did not differ between groups, but the ADHD group was less affected by the extra motion cues. Also, whereas the control group lowered their speed between replicates, the ADHD group did not. The combined results suggest that for ADHD drivers, speed perception is more of a rule-based skill and more based on attention, whereas the normal student population perceives speed more effortlessly.
- Published
- 2019
- Full Text
- View/download PDF
28. Practises to identify and prevent adverse aircraft-and-rotorcraft-pilot couplings—A ground simulator perspective.
- Author
-
Pavel, Marilena D., Jump, Michael, Masarati, Pierangelo, Zaichik, Larisa, Dang-Vu, Binh, Smaili, Hafid, Quaranta, Giuseppe, Stroosma, Olaf, Yilmaz, Deniz, Johnes, Michael, Gennaretti, Massimmo, and Ionita, Achim
- Subjects
- *
AIR pilots , *ROTORCRAFT , *FLIGHT simulators , *VISUAL culture , *AIRPLANE design - Abstract
The aviation community relies heavily on flight simulators as a fundamental tool for research, pilot training and development of any new aircraft design. The goal of the present paper is to provide a review on how effective ground simulation is as an assessment tool for unmasking adverse Aircraft-and-Rotorcraft Pilot Couplings (APC/RPC). Although it is generally believed that simulators are not reliable in revealing the existence of A/RPC tendencies, the paper demonstrates that a proper selection of high-gain tasks combined with appropriate motion and visual cueing can reveal negative features of a particular aircraft that may lead to A/RPC. The paper discusses new methods for real-time A/RPC detection that can be used as a tool for unmasking adverse A/RPC. Although flight simulators will not achieve the level of reality of in-flight testing, exposing A/RPC tendencies in the simulator may be the only convenient safe place to evaluate the wide range of conditions that could produce hazardous A/RPC events. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
29. A Directional Congruency Effect of Amplified Dilated Time Perception Induced by Looming Stimuli With Implied Motion Cues
- Author
-
Euisun Kim, Joohee Seo, and Sung-Ho Kim
- Subjects
05 social sciences ,Dynamics (mechanics) ,Motion Perception ,Experimental and Cognitive Psychology ,Context (language use) ,Time perception ,Affect (psychology) ,050105 experimental psychology ,Sensory Systems ,Motion (physics) ,Motion cues ,03 medical and health sciences ,Motion ,0302 clinical medicine ,Looming ,Time Perception ,Auditory Perception ,Humans ,0501 psychology and cognitive sciences ,Cues ,Psychology ,Neuroscience ,030217 neurology & neurosurgery ,Photic Stimulation - Abstract
The perception of time is not veridical; rather, it is susceptible to environmental context, such as the intrinsic dynamics of moving stimuli. The direction of motion has been reported to affect time perception such that movement of objects toward an observer (i.e., looming stimuli) is perceived as longer in duration than movement of objects away from the observer (i.e., receding stimuli). In the current study we investigated whether this looming/receding temporal asymmetry can be modulated by the direction of movement implied by static image cues. Participants were presented with images of a running person, rendered from either the front or the back (i.e., representing movement toward or away from the observer). In Experiment 1, the size of the images was constant. In Experiment 2, the image sizes varied (increasing: looming; decreasing: receding). In both experiments, participants performed a temporal bisection task by judging the duration of the image presentation as “short” or “long”. In Experiment 1, we found no influence of implied-motion direction on participants’ duration perception. In Experiment 2, however, participants overestimated the duration of the looming image, as compared to the receding image, in relation to real motion. This finding replicated previous findings of the looming/receding asymmetry using naturalistic human-character stimuli. Further, in Experiment 2 we observed a directional congruency effect between real and implied motion: stimuli were perceived as lasting longer when the directions of real and implied motion were congruent than when they were incongruent. Thus, looming (versus receding) movement, a perceptually salient stimulus, elicits differential temporal processing, and higher-order motion processing integrates signals of real and implied motion in time perception.
- Published
- 2021
30. Gaze following emergence relies on both perceptual cues and social awareness
- Author
-
Maleen Thiele, Kim Astor, and Gustaf Gredebäck
- Subjects
Psykologi (exklusive tillämpad psykologi) ,media_common.quotation_subject ,Experimental and Cognitive Psychology ,Social cue ,Gaze ,Motion cues ,Motion (physics) ,Psychology (excluding Applied Psychology) ,Perception ,Developmental and Educational Psychology ,Perceptual narrowing ,Social consciousness ,Psychology ,media_common ,Cognitive psychology - Abstract
Decades of research have emphasized the significance of gaze following in early development. Yet the developmental origin of this ability has remained poorly understood. We tested the claims made by two prominent theoretical perspectives to answer whether infants’ gaze-following response is based on perceptual cues (motion of the head) or social cues (gaze direction). We found that 12-month-olds (N = 30) are able to inhibit motion cues and exclusively follow the direction of others’ gaze. Six-month-olds (N = 29) and 4-month-olds (N = 30) can follow gaze, with a sensitivity to both perceptual and social cues. These results align with the perceptual narrowing hypothesis of gaze-following emergence, suggesting that social and perceptual cueing are non-exclusive paths to early developing gaze following.
- Published
- 2021
31. Emotion Recognition Performance in Children with Callous Unemotional Traits is Modulated by Co-occurring Autistic Traits
- Author
-
Rachael Bedford, Matthew Bluett-Duncan, Helen Sharp, Jonathan Hill, Nicola Wright, Tim J. Smith, Andrew Pickles, Gizelle Anzures, and Virginia Carter Leno
- Subjects
Conduct Disorder ,Male ,Callous unemotional ,05 social sciences ,Emotions ,Cognition ,Anger ,Child health ,Motion cues ,Developmental psychology ,03 medical and health sciences ,Clinical Psychology ,0302 clinical medicine ,Autistic traits ,Co occurring ,Developmental and Educational Psychology ,Humans ,0501 psychology and cognitive sciences ,Emotion recognition ,Autistic Disorder ,Cues ,Psychology ,Association (psychology) ,030217 neurology & neurosurgery ,050104 developmental & child psychology - Abstract
© 2020 The Author(s). Published with license by Taylor & Francis Group, LLC. Objective: Atypical emotion recognition (ER) is characteristic of children with high callous unemotional (CU) traits. The current study aims to 1) replicate studies showing ER difficulties for static faces in relation to high CU-traits; 2) test whether ER difficulties remain when more naturalistic dynamic stimuli are used; 3) test whether ER performance for dynamic stimuli is moderated by eye-gaze direction and 4) assess the impact of co-occurring autistic traits on the association between CU and ER. Methods: Participants were 292 (152 male) 7-year-olds from the Wirral Child Health and Development Study (WCHADS). Children completed a static and dynamic ER eye-tracking task, and accuracy, reaction time and attention to the eyes were recorded. Results: Higher parent-reported CU-traits were significantly associated with reduced ER for static expressions, with lower accuracy for angry and happy faces. No association was found for dynamic expressions. However, parent-reported autistic traits were associated with ER difficulties for both static and dynamic expressions, and after controlling for autistic traits, the association between CU-traits and ER for static expressions became non-significant. CU-traits and looking to the eyes were not associated in either paradigm. Conclusion: The finding that CU-traits and ER are associated for static but not naturalistic dynamic expressions may be because motion cues in the dynamic stimuli draw attention to emotion-relevant features such as eyes and mouth. Further, results suggest that ER difficulties in CU-traits may be due, in part, to co-occurring autistic traits. Future developmental studies are required to tease apart pathways toward the apparently overlapping cognitive phenotype.
- Published
- 2021
- Full Text
- View/download PDF
32. Should I Stop or Should I Cross? Interactions between vulnerable road users and automated vehicles
- Author
-
Nuñez Velasco, J.P., Hagenzieker, M.P., van Arem, B., Farah, H., and Delft University of Technology
- Subjects
Motion cues ,Road crossing behavior ,Automated vehicles (AVs) ,AVs' characteristics ,vulnerable road users (VRUs) ,Psychological factors - Abstract
This dissertation aims to understand the behavior of pedestrians and cyclists when interacting with automated vehicles (AVs). The role of AVs’ characteristics such as their physical appearance, whether a driver is present, the presence of external communication interfaces, and factors pertaining to the behavior of the vehicle were investigated using virtual reality road crossing experiments. In addition, psychological factors that could be affected by the presence of AVs were included.
- Published
- 2021
- Full Text
- View/download PDF
33. Dogs fail to recognize a human pointing gesture in two-dimensional depictions of motion cues
- Author
-
Carla Jade Eatherington, Lieta Marinelli, Paolo Mongillo, and Miina Lõoke
- Subjects
media_common.quotation_subject ,Motion Perception ,Living entity ,Silhouette ,Behavioral Neuroscience ,Motion ,Dogs ,Perception ,Animals ,Humans ,media_common ,Gestures ,Video ,General Medicine ,Canis familiaris ,Biological motion, Canis familiaris, Pointing, Point light display, Recognition, Video ,Biological motion ,Human motion ,Motion cues ,Pointing ,Recognition ,Biological motion perception ,Point light display ,Animal Science and Zoology ,Cues ,Psychology ,Cognitive psychology ,Gesture - Abstract
Few studies have investigated biological motion perception in dogs and it remains unknown whether dogs recognise the biological identity of two-dimensional animations of human motion cues. To test this, we assessed the dogs’ (N = 32) responses to point-light displays of a human performing a pointing gesture towards one of two pots. At the start of the experiment the demonstrator was a real-life person, but over the course of the test dogs were presented with two-dimensional figurative representations of pointing gestures in which visual information was progressively removed until only the isolated motion cues remained. Dogs’ accuracy was above chance level only with real-life and black-and-white videos, but not with the silhouette or the point-light figure. Dogs’ accuracy during these conditions was significantly lower than in the real-life condition. This result could not be explained by trial order since dogs’ performance was still not higher than chance when only the point-light figure condition was presented after the initial demonstration. The results imply that dogs are unable to recognise humans in two-dimensional depictions of human motion cues only. In spite of extensive exposure to human movement, dogs need more perceptual cues to detect equivalence between human two-dimensional animations and the represented living entity.
- Published
- 2021
34. Head turning is an effective cue for gaze following: Evidence from newly sighted individuals, school children and adults.
- Author
-
Rubio-Fernandez, Paula, Shukla, Vishakha, Bhatia, Vrinda, Ben-Ami, Shlomit, and Sinha, Pawan
- Subjects
- *
SCHOOL children , *GAZE , *ADULTS , *NEW words , *NEURODIVERSITY , *THEORY of mind - Abstract
In referential communication, gaze is often interpreted as a social cue that facilitates comprehension and enables word learning. Here we investigated the degree to which head turning facilitates gaze following. We presented participants with static pictures of a man looking at a target object in a first and third block of trials (pre- and post-intervention), while they saw short videos of the same man turning towards the target in the second block of trials (intervention). In Experiment 1, newly sighted individuals (treated for congenital cataracts; N = 8) benefited from the motion cues, both when comparing their initial performance with static gaze cues to their performance with dynamic head turning, and their performance with static cues before and after the videos. In Experiment 2, neurotypical school children (ages 5–10 years; N = 90) and adults (N = 30) also revealed improved performance with motion cues, although most participants had started to follow the static gaze cues before they saw the videos. Our results confirm that head turning is an effective social cue when interpreting new words, offering new insights for a pathways approach to development. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Minimal videos: Trade-off between spatial and temporal information in human and machine vision
- Author
-
Guy Ben-Yosef, Gabriel Kreiman, and Shimon Ullman
- Subjects
Linguistics and Language ,Machine vision ,Cognitive Neuroscience ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Experimental and Cognitive Psychology ,Space (commercial competition) ,050105 experimental psychology ,Language and Linguistics ,Article ,Reduction (complexity) ,03 medical and health sciences ,0302 clinical medicine ,Developmental and Educational Psychology ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,Temporal information ,Vision, Ocular ,Computational model ,business.industry ,Interpretation (philosophy) ,05 social sciences ,Recognition, Psychology ,Replicate ,Motion cues ,Artificial intelligence ,business ,Psychology ,030217 neurology & neurosurgery - Abstract
Objects and their parts can be visually recognized from purely spatial or purely temporal information but the mechanisms integrating space and time are poorly understood. Here we show that visual recognition of objects and actions can be achieved by efficiently combining spatial and motion cues in configurations where each source on its own is insufficient for recognition. This analysis is obtained by identifying minimal videos: these are short and tiny video clips in which objects, parts, and actions can be reliably recognized, but any reduction in either space or time makes them unrecognizable. Human recognition in minimal videos is invariably accompanied by full interpretation of the internal components of the video. State-of-the-art deep convolutional networks for dynamic recognition cannot replicate human behavior in these configurations. The gap between human and machine vision demonstrated here is due to critical mechanisms for full spatiotemporal interpretation that are lacking in current computational models.
- Published
- 2020
36. Statistical Features-Based Violence Detection in Surveillance Videos
- Author
-
S. Chandrakala, S. Roshan, L. K. P. Vignesh, K. Deepak, and G. Srivathsan
- Subjects
business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Motion cues ,ComputingMethodologies_PATTERNRECOGNITION ,Histogram of oriented gradients ,Discriminative model ,Feature (computer vision) ,Feature descriptor ,Benchmark (computing) ,Artificial intelligence ,Violence detection ,business ,Representation (mathematics) - Abstract
Research on detecting anomalous human behavior in crowded scenes has attracted much attention due to its direct applicability to a large number of real-world security applications. In this work, we propose a novel statistical feature descriptor to detect violent human activities in real-world surveillance videos. Standard spatiotemporal feature descriptors are used to extract motion cues from videos. Finally, a discriminative SVM classifier is used to classify the violent/non-violent scenes present in the videos with the help of the feature representation formed from the proposed statistical descriptor. The efficiency of the proposed approach is tested on the crowd violence and hockey fight benchmark datasets.
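The core idea of a statistical descriptor is to summarize a variable-length sequence of motion measurements as a fixed-length vector that a classifier (an SVM in this work) can consume. The particular statistics below are an illustrative assumption, not the paper's exact feature set:

```python
import math

def statistical_descriptor(motion_magnitudes):
    """Map a variable-length sequence of per-frame motion magnitudes to a
    fixed-length feature vector: (mean, population std dev, max, min)."""
    n = len(motion_magnitudes)
    mean = sum(motion_magnitudes) / n
    var = sum((m - mean) ** 2 for m in motion_magnitudes) / n
    return (mean, math.sqrt(var), max(motion_magnitudes), min(motion_magnitudes))
```

Violent scenes tend to produce larger and more erratic motion magnitudes, so both the mean and the spread of this vector carry discriminative signal for the downstream classifier.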
- Published
- 2020
- Full Text
- View/download PDF
37. Perceptual Coupling Based on Depth and Motion Cues in Stereovision-Impaired Subjects
- Author
-
Laurens A. M. H. Kirkels, Reinder Dorman, Richard J. A. van Wezel, TechMed Centre, and Biomedical Signals and Systems
- Subjects
Adult ,Male ,bistability ,Computer science ,depth ,media_common.quotation_subject ,Object (grammar) ,Biophysics ,Motion Perception ,Experimental and Cognitive Psychology ,Stereoscopy ,perception ,050105 experimental psychology ,Motion (physics) ,law.invention ,Perceptual Disorders ,03 medical and health sciences ,Young Adult ,0302 clinical medicine ,Short Reports ,Artificial Intelligence ,law ,Perception ,Humans ,0501 psychology and cognitive sciences ,Computer vision ,media_common ,three-dimensional perception ,binocular vision ,Coupling ,Depth Perception ,business.industry ,05 social sciences ,Middle Aged ,stereopsis ,Sensory Systems ,Motion cues ,Ophthalmology ,Stereopsis ,Pattern Recognition, Visual ,Female ,Artificial intelligence ,business ,Binocular vision ,030217 neurology & neurosurgery - Abstract
When an object is partially occluded, the different parts of the object have to be perceptually coupled. Cues that can be used for perceptual coupling are, for instance, depth ordering and visual motion information. In subjects with impaired stereovision, the brain is less able to use stereoscopic depth cues, making them more reliant on other cues. Therefore, our hypothesis is that stereovision-impaired subjects have stronger motion coupling than stereoscopic subjects. We compared perceptual coupling in 8 stereoscopic and 10 stereovision-impaired subjects, using random moving dot patterns that defined an ambiguous rotating cylinder and a coaxially presented nonambiguous half cylinder. Our results show that, whereas stereoscopic subjects exhibit significant coupling in the far plane, stereovision-impaired subjects show no coupling and under our conditions also no stronger motion coupling than stereoscopic subjects.
- Published
- 2020
- Full Text
- View/download PDF
38. Emergency braking at intersections: A motion-base motorcycle simulator study
- Author
-
Francesco Celiberti, Joost C. F. de Winter, Riender Happee, Marco Grottoli, Marjan Hagenzieker, Yves Lemmens, and N. Kovacsova
- Subjects
Computer science ,Deceleration ,Accidents, Traffic ,Base (geometry) ,Physical Therapy, Sports Therapy and Rehabilitation ,Human Factors and Ergonomics ,Collision ,Motorcyclist safety ,Motion cues ,Motion (physics) ,Motion ,Rider performance ,Motorcycles ,Humans ,Computer Simulation ,Emergencies ,Safety, Risk, Reliability and Quality ,Engineering (miscellaneous) ,Motorcycle-car interaction ,Psychomotor Performance ,Simulation ,Intersection (aeronautics) ,Perception-action ,Hazard - Abstract
Powered two-wheeler riders are frequently involved in crashes at intersections because an approaching car driver fails to give right of way. This simulator study aimed to investigate how riders perform an emergency braking maneuver in response to an oncoming car and, second, whether longitudinal motion cues provided by a motion platform influence riders' braking performance. Twelve riders approached a four-way intersection at the same time as an oncoming car. We manipulated the car's direction of travel, speed profile, and its indicator light. The results showed that the more dangerous the situation (safe, near-miss, impending-crash), the more likely riders were to initiate braking. Although riders braked in the majority of trials when the car crossed their path, they were often unsuccessful in avoiding a collision with the car. No statistically significant differences were found in riders' initiation of braking and braking style between the motion and no-motion simulator configurations.
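As a back-of-envelope companion to the braking scenarios above (not taken from the study), the constant deceleration needed to stop within a given distance follows from the kinematic relation v² = 2ad:

```python
def required_deceleration(speed_mps, distance_m):
    """Constant deceleration (m/s^2) needed to stop from speed_mps (m/s)
    within distance_m (m): a = v^2 / (2 * d)."""
    return speed_mps ** 2 / (2.0 * distance_m)
```

For example, a rider at 20 m/s with 40 m to the conflict point needs a sustained 5 m/s², which helps explain why late braking initiation in the impending-crash trials often failed to avoid the collision.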
- Published
- 2020
39. Effect of Motion Cues on Simulator Sickness in a Flight Simulator
- Author
-
Taezoon Park, Jiwon Kim, and Jihong Hwang
- Subjects
medicine.medical_specialty ,Nausea ,Sensory system ,medicine.disease ,030226 pharmacology & pharmacy ,Flight simulator ,Motion cues ,Motion (physics) ,03 medical and health sciences ,0302 clinical medicine ,Physical medicine and rehabilitation ,Motion sickness ,medicine ,Simulator sickness ,medicine.symptom ,Psychology ,Sensory cue ,030217 neurology & neurosurgery - Abstract
The objective of this study is to investigate the effect of sensory conflict on the occurrence and severity of simulator sickness in a flight simulator. According to sensory conflict theory, providing motion cues that match the visual cues is expected to reduce the discrepancy between the sensory inputs and thus reduce simulator sickness. We tested the effect of motion cues through a human subject experiment with a spherical-type motion platform. After completing a pre-experiment questionnaire including the Motion Sickness Susceptibility Questionnaire (MSSQ) and Immersive Tendency Questionnaire (ITQ), two groups of participants conducted a 40-minute flight simulation session with or without motion cues. In the simulation session, participants were asked to fly through gates sequentially arranged along a figure-eight route. The Simulator Sickness Questionnaire (SSQ) was filled out after the exposure to compare the groups with and without motion cues. Physiological data, including electrodermal activity, heart rate, blood volume pressure, and wrist temperature, were also collected to find the relationship with perceived simulator sickness. The results showed that simulator sickness and disorientation were significantly lower in the motion-based group. Nausea and oculomotor scores were also marginally lower when motion cues were given. This study supports sensory conflict theory: providing a proper motion cue corresponding to the visual flow could be considered to prevent simulator sickness.
- Published
- 2020
- Full Text
- View/download PDF
40. Using Driver Control Models to Understand and Evaluate Behavioral Validity of Driving Simulators
- Author
-
Erwin R. Boer, A. Hamish Jamson, Richard Romano, Gustav Markkula, Luigi Pariota, Alex Bean, Markkula, Gm, Romano, R, Jamson, Ah, Pariota, L, Bean, A, and Boer, Er
- Subjects
0209 industrial biotechnology ,Vehicle positioning ,Computer Networks and Communications ,Computer science ,Human Factors and Ergonomics ,02 engineering and technology ,Analytical model ,Man-machine system ,Vehicle ,Data modeling ,020901 industrial engineering & automation ,Artificial Intelligence ,0502 economics and business ,simulator validation ,Training ,Torque ,Human performance modeling ,Simulation ,Taxonomy ,Control models ,050210 logistics & transportation ,Data model ,05 social sciences ,Driving simulator ,Human Factors and Ergonomic ,Vehicle driving ,Motion cues ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Signal Processing ,Task analysis ,Task analysi - Abstract
For a driving simulator to be a valid tool for research, vehicle development, or driver training, it is crucial that it elicits similar driver behavior as the corresponding real vehicle. To assess such behavioral validity, the use of quantitative driver models has been suggested but not previously reported. Here, a task-general conceptual driver model is proposed, along with a taxonomy defining levels of behavioral validity. Based on these theoretical concepts, it is argued that driver models without explicit representations of sensory or neuromuscular dynamics should be sufficient for a model-based assessment of driving simulators in most contexts. As a task-specific example, two parsimonious driver steering models of this nature are developed and tested on a dataset of real and simulated driving in near-limit, low-friction circumstances, indicating a clear preference for one model over the other. By means of closed-loop simulations, it is demonstrated that the parameters of this preferred model can generally be accurately estimated from unperturbed driver steering data, using a simple, open-loop fitting method, as long as the vehicle positioning data are reliable. Some recurring patterns between the two studied tasks are noted in how the model’s parameters, fitted to human steering, are affected by the presence or absence of steering torques and motion cues in the simulator.
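The open-loop fitting of steering-model parameters described above can be sketched as a toy example. The model form below (steering rate as a linear function of lane-position and heading errors), the variable names, and the gains are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
import numpy as np

def fit_steering_gains(lane_error, heading_error, steering_angle, dt=0.01):
    """Open-loop least-squares fit of a parsimonious steering model.

    Illustrative model (an assumption, not the paper's):
        d(delta)/dt = k_y * lane_error + k_psi * heading_error
    Gains are estimated directly from recorded, unperturbed driver
    steering data, with no closed-loop simulation required.
    """
    # Differentiate the recorded steering angle to get the steering rate.
    steering_rate = np.gradient(steering_angle, dt)
    # Stack the perceptual error signals as regressors and solve by
    # ordinary least squares.
    X = np.column_stack([lane_error, heading_error])
    gains, *_ = np.linalg.lstsq(X, steering_rate, rcond=None)
    return gains  # [k_y, k_psi]
```

As the abstract notes, a fit like this is only as good as the vehicle positioning data feeding the error signals; noisy positioning corrupts the regressors directly.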
- Published
- 2018
- Full Text
- View/download PDF
41. Motion cues tune social influence in shoaling fish
- Author
-
Christa M. Woodley, Bertrand H. Lemasson, Tammy L. Threadgill, David J. Smith, Shea Qarqish, and Colby Tanner
- Subjects
0301 basic medicine ,Majority rule ,Movement ,Sensation ,lcsh:Medicine ,Relative strength ,Stimulus (physiology) ,Article ,03 medical and health sciences ,Motion ,Animals ,Social information ,Social Behavior ,lcsh:Science ,Zebrafish ,Social influence ,Multidisciplinary ,Behavior, Animal ,lcsh:R ,Shoaling and schooling ,Motion cues ,030104 developmental biology ,Optomotor response ,lcsh:Q ,Cues ,Photic Stimulation ,Cognitive psychology - Abstract
Social interactions have important consequences for individual fitness. Collective actions, however, are notoriously context-dependent and identifying how animals rapidly weigh the actions of others despite environmental uncertainty remains a fundamental challenge in biology. By exposing zebrafish (Danio rerio) to virtual fish silhouettes in a maze we isolated how the relative strength of a visual feature guides individual directional decisions and, subsequently, tunes social influence. We varied the relative speed and coherency with which a portion of silhouettes adopted a direction (leader/distractor ratio) and established that solitary zebrafish display a robust optomotor response to follow leader silhouettes that moved much faster than their distractors, regardless of stimulus coherency. Although recruitment time decreased as a power law of zebrafish group size, individual decision times retained a speed-accuracy trade-off, suggesting a benefit to smaller group sizes in collective decision-making. Directional accuracy improved regardless of group size in the presence of the faster moving leader silhouettes, but without these stimuli zebrafish directional decisions followed a democratic majority rule. Our results show that a large difference in movement speeds can guide directional decisions within groups, thereby providing individuals with a rapid and adaptive means of evaluating social information in the face of uncertainty.
- Published
- 2018
- Full Text
- View/download PDF
42. A genetic algorithm–based nonlinear scaling method for optimal motion cueing algorithm in driving simulator
- Author
-
Chee Peng Lim, Arash Mohammadi, Shady Mohamed, Houshyar Asadi, Lakshmanan Shanmugam, and Saeid Nahavandi
- Subjects
0209 industrial biotechnology ,Nonlinear scaling ,Computer science ,Mechanical Engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Driving simulator ,Angular velocity ,02 engineering and technology ,Motion cues ,Motion (physics) ,Washout filter ,020901 industrial engineering & automation ,Control and Systems Engineering ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Linear acceleration ,020201 artificial intelligence & image processing ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
A motion cueing algorithm plays an important role in generating motion cues in driving simulators. The motion cueing algorithm is used to transform the linear acceleration and angular velocity of a vehicle into the translational and rotational motions of a simulator within its physical limitation through washout filters. Indeed, scaling and limiting should be used along with the washout filters to decrease the amplitude of the translational and rotational motion signals uniformly across all frequencies through the motion cueing algorithm. This is to decrease the effects of the workspace limitations in the simulator motion reproduction and improve the realism of movement sensation. A nonlinear scaling method based on the genetic algorithm for the motion cueing algorithm is developed in this study. The aim is to accurately produce motions with a high degree of fidelity and use the platform more efficiently without violating its physical limitations. To successfully achieve this aim, a third-order polynomial scaling method based on the genetic algorithm is formulated, tuned, and implemented for the linear quadratic regulator–based optimal motion cueing algorithm. A number of factors, which include the sensation error between the real and simulator drivers, the simulator’s physical limitations, and the sensation signal shape-following criteria, are considered in optimizing the proposed nonlinear scaling method. The results show that the proposed method is able not only to overcome problems pertaining to trial-and-error selection of nonlinear scaling parameters and inefficient usage of the platform workspace, but also to reduce the sensation error between the simulator and real drivers, while satisfying the constraints imposed by the platform boundaries.
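The idea of GA-tuned polynomial scaling can be sketched as a minimal toy example: an odd third-order polynomial scales the acceleration signal, and a simple elitist genetic algorithm searches for coefficients that trade off sensation error against a workspace-style limit. The cost weights, population settings, and function names here are all illustrative assumptions, not the authors' implementation (which couples the scaling to an LQR-based washout filter and a vestibular sensation model):

```python
import numpy as np

def scale_signal(accel, coeffs):
    """Third-order polynomial scaling; only odd terms, so the sign
    of the input signal is preserved."""
    c1, c3 = coeffs
    return c1 * accel + c3 * accel**3

def ga_tune_scaling(accel, limit, pop_size=30, generations=100, seed=0):
    """Toy elitist GA: minimize sensation error (deviation from the
    reference acceleration) plus a heavy penalty for exceeding a
    workspace-like limit. Weights are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Each individual is a coefficient pair (c1, c3).
    pop = rng.uniform([0.2, -0.05], [1.0, 0.05], size=(pop_size, 2))

    def cost(c):
        scaled = scale_signal(accel, c)
        err = np.mean((scaled - accel) ** 2)                      # sensation error
        excess = np.clip(np.abs(scaled) - limit, 0.0, None)       # limit violation
        return err + 100.0 * np.sum(excess ** 2)

    for _ in range(generations):
        costs = np.array([cost(c) for c in pop])
        elite = pop[np.argsort(costs)[: pop_size // 2]]           # keep best half
        children = elite + rng.normal(0, 0.02, size=elite.shape)  # mutate
        pop = np.vstack([elite, children])
    return pop[np.argmin([cost(c) for c in pop])]
```

Compared with trial-and-error tuning, the point of the search is that the limit penalty and the sensation-error term are optimized jointly, so the scaling uses as much of the workspace as the limit allows.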
- Published
- 2018
- Full Text
- View/download PDF
43. Inexperienced preys know when to flee or to freeze in front of a threat
- Author
-
Elisabetta Versace, Marie Hébert, and Giorgio Vallortigara
- Subjects
Evolution ,threat detection ,Escape response ,Stimulus (physiology) ,defense strategies ,03 medical and health sciences ,0302 clinical medicine ,Looming ,Escape Reaction ,Animals ,Learning ,Vision, Ocular ,030304 developmental biology ,Naive animals ,0303 health sciences ,Multidisciplinary ,Antipredatory behaviors ,Defense strategies ,Motion cues ,Threat detection ,Behavior, Animal ,motion cues ,naive animals ,Biological Sciences ,antipredatory behaviors ,Psychology ,Chickens ,030217 neurology & neurosurgery ,Cognitive psychology - Abstract
Using appropriate antipredatory responses is crucial for survival. While slowing down reduces the chances of being detected by distant predators, fleeing is advantageous when facing an approaching predator. Whether appropriate responses depend on experience with moving objects is still an open question. To clarify whether adopting appropriate fleeing or freezing responses requires previous experience, we investigated the responses of chicks naive to movement. When exposed to moving cues mimicking an approaching predator (a rapidly expanding, looming stimulus), chicks displayed a fast escape response. In contrast, when presented with a distal threat (a small stimulus sweeping overhead), they decreased their speed, a maneuver useful for avoiding detection. The fast expansion of the stimulus toward the subject, rather than its size per se or its change in luminance, triggered the escape response. These results show that young animals, in the absence of previous experience, can use motion cues to select the appropriate responses to different threats. The adaptive needs of young prey are thus matched by spontaneous defensive mechanisms that do not require learning.
- Published
- 2019
44. Will pedestrians cross the road before an automated vehicle? The effect of drivers’ attentiveness and presence on pedestrians’ road crossing behavior
- Author
-
Marjan Hagenzieker, Albert Solernou, Natasha Merat, J Uttley, Yee Mun Lee, Juan Pablo Nuñez Velasco, Haneen Farah, and Bart van Arem
- Subjects
Road crossing ,Applied psychology ,Driver attentiveness ,Environment controlled ,Transportation ,Safety margin ,Driver presence ,Pedestrian ,Management Science and Operations Research ,Virtual reality ,Motion cues ,Risk perception ,Fixed time ,Automotive Engineering ,Risk taking ,Psychology ,Transportation and communications ,Automated vehicles ,Vulnerable road users ,HE1-9990 ,Civil and Structural Engineering - Abstract
The impact of automated vehicles (AVs) on pedestrians’ crossing behavior has been the topic of some recent studies, but findings are still scarce and inconclusive. The aim of this study is to determine whether a driver’s presence and apparent attentiveness in a vehicle influence pedestrians’ crossing behavior, perceived behavioral control, and perceived risk, in a controlled environment, using a head-mounted display in an immersive virtual reality study. Twenty participants took part in a road-crossing experiment. The VR environment consisted of a single-lane one-way road with car traffic approaching from the right-hand side of the participant at 30 km/h. Participants were asked to cross the road if they felt safe to do so. The effects of three driver conditions on pedestrians’ crossing behavior were studied: attentive driver, distracted driver, and no driver present. Two vehicles were employed with a fixed time gap (3.5 s and 5.5 s) between them to study the effects of time gaps on pedestrians’ crossing behavior. The manipulated vehicle yielded to the pedestrians in half of the trials, stopping completely before reaching the pedestrian’s position. The crossing decision, time to initiate the crossing, crossing duration, and safety margin were measured. The main findings show that the vehicle’s motion cues (i.e., the gap between the vehicles and the yielding behavior of the vehicle) were the most important factors affecting pedestrians’ crossing behavior. Therefore, future research should focus more on investigating how AVs should behave while interacting with pedestrians. The distracted-driver condition led to a shorter crossing initiation time, but the effect was small. The no-driver condition led to a smaller safety margin. Findings also showed that perceived behavioral control was higher and perceived risk was significantly lower when the driver appeared attentive.
Given that drivers will be allowed to perform other tasks while AVs are operating in the future, whether explicit communication will be needed in this situation should be further investigated.
- Published
- 2021
- Full Text
- View/download PDF
45. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome.
- Author
-
Virji-Babul, Naznin, Watt, Kimberley, Nathoo, Farouk, and Johnson, Peter
- Subjects
- *
AFFECT (Psychology) , *ANALYSIS of variance , *ANGER , *CLINICAL trials , *CONFIDENCE intervals , *EPIDEMIOLOGY , *FACIAL expression , *FEAR , *GRIEF , *HAPPINESS , *PHOTOGRAPHY , *STATISTICAL sampling , *SCALES (Weighing instruments) , *VIDEO recording , *LOGISTIC regression analysis , *DATA analysis , *DOWN syndrome , *BODY movement , *REPEATED measures design , *DATA analysis software , *DESCRIPTIVE statistics , *ADULTS - Abstract
Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA) viewed photographs and video clips of facial expressions of happy, sad, mad, and scared. The odds of accurate identification of facial expressions were 2.7 times greater for video clips compared with photographs. The odds of accurate identification of expressions of mad and scared were greater for video clips compared with photographs. The odds of accurate identification of expressions of mad and sad were greater for adults but did not differ between adults with DS and children. Adults with DS demonstrated the lowest accuracy for recognition of scared. These results support the importance of motion cues in evaluating the social skills of individuals with DS. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
46. The functional significance of mantis peering behaviour.
- Author
-
Kral, Karl
- Subjects
- *
MANTIS (Genus) , *INSECT behavior , *HABITATS , *PREDATION , *ENTOMOLOGY - Abstract
The aim of this review is to explain the functional significance of mantis peering behaviour from an entomological perspective. First, the morphological and optical features of the mantis compound eye that are important for spatial vision are described. The possibility that praying mantises use binocular retinal disparity (stereopsis) and other alternative visual cues to determine distance in prey capture is discussed. The primary focus of the review is the importance of peering movements for estimating the distance to stationary objects. Here the following aspects are examined: (1) direct evidence, via object manipulation experiments, of absolute distance estimation with the aid of self-induced retinal image motion; (2) the mechanism of absolute distance estimation (with the interaction of visual and proprioceptive information); (3) the range of absolute and relative distance estimation; (4) the influence of target object features on distance estimation; and (5) the relationship between peering behaviour and habitat structures, based on results of studies on three species of mantis. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
47. The unique role of parietal cortex in action observation: Functional organization for communicative and manipulative actions
- Author
-
Burcu A. Urgen, Guy Orban, and Ürgen, Burcu A.
- Subjects
Male ,TOOL USE ,GESTURES ,0302 clinical medicine ,PREMOTOR ,Parietal Lobe ,NETWORK ,Control (linguistics) ,0303 health sciences ,Brain Mapping ,Radiology, Nuclear Medicine & Medical Imaging ,05 social sciences ,SPEECH ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,Social Perception ,Neurology ,Climbing ,POSTERIOR PARIETAL ,Visual Perception ,Identity (object-oriented programming) ,Female ,MOTION CUES ,Psychology ,Life Sciences & Biomedicine ,INTEGRATION ,RC321-571 ,Adult ,Cognitive Neuroscience ,Posterior parietal cortex ,Neuroimaging ,Neurosciences. Biological psychiatry. Neuropsychiatry ,Interpersonal communication ,Motor Activity ,Article ,050105 experimental psychology ,Premotor cortex ,Young Adult ,03 medical and health sciences ,medicine ,Humans ,0501 psychology and cognitive sciences ,Nonverbal Communication ,030304 developmental biology ,Science & Technology ,Neurosciences ,Displacement (psychology) ,EVOLUTION ,REPRESENTATIONS ,Action (philosophy) ,Neurosciences & Neurology ,Neuroscience ,030217 neurology & neurosurgery ,Coding (social sciences) - Abstract
Action observation is supported by a network of regions in occipito-temporal, parietal, and premotor cortex in primates. Recent research suggests that the parietal node has regions dedicated to different action classes including manipulation, interpersonal interactions, skin displacement, locomotion, and climbing. The goals of the current study are: 1) to extend this work with new classes of actions that are communicative and specific to humans, and 2) to investigate how parietal cortex differs from occipito-temporal and premotor cortex in representing action classes. Human subjects underwent fMRI scanning while observing three action classes: indirect communication, direct communication, and manipulation, plus two types of control stimuli: static controls, which were static frames from the video clips, and dynamic controls, consisting of temporally scrambled optic flow information. Using univariate analysis, MVPA, and representational similarity analysis, our study presents several novel findings. First, we provide further evidence for the anatomical segregation in parietal cortex of different action classes: we have found a new site that is specific for representing human-specific indirect communicative actions in cytoarchitectonic parietal area PFt. Second, we found that the discriminability between action classes was higher in parietal cortex than at the other two levels, suggesting the coding of action identity information at this level. Finally, our results advocate the use of the control stimuli not just for univariate analysis of complex action videos but also when using multivariate techniques.
- Published
- 2021
- Full Text
- View/download PDF
48. Looming sounds as warning signals: The function of motion cues
- Author
-
Bach, Dominik R., Neuhoff, John G., Perrig, Walter, and Seifritz, Erich
- Subjects
- *
SOUND -- Psychological aspects , *SIGNALS & signaling , *EMOTIONS , *GALVANIC skin response , *WARNINGS , *AUDITORY scene analysis , *AUDITORY perception - Abstract
Sounds with increasing intensity can act as intrinsic warning cues by signalling that the sound source is approaching. However, intensity change is not always the dominant motion cue to a moving sound, and the effects of simple rising-intensity sounds versus sounds with full three-dimensional motion cues have not yet been directly compared. Here, we examined skin conductance responses, phasic alertness, and perceptual and explicit emotional ratings in response to approaching and receding sounds characterised either by full motion cues or by intensity change only. We found a stronger approach/recede effect in sounds with full motion cues for skin conductance response amplitude, suggesting sustained mobilisation of resources due to their greater saliency. Otherwise, the approach/recede effect was comparable in sounds with and without full motion cues. Overall, approaching sounds elicited greater skin conductance responses and phasic alertness, and loudness change was estimated as higher. They were also rated as more unpleasant, potent, arousing, and intense, and the probability of such sounds signalling a salient event or threat was rated higher. Several of these effects were modulated by sex. In summary, this study supports the suggestion that intensity change is the dominant motion cue mediating the effects of approaching sound sources, thus clarifying the interpretation of previous studies using such stimuli. Explicit emotional appraisal of such sounds shows a strong directional asymmetry and thus may reflect their implicit warning properties.
- Published
- 2009
- Full Text
- View/download PDF
49. The use of predictive information is impaired in the actions of children and young adults with Developmental Coordination Disorder.
- Author
-
Wilmut, Kate and Wann, John
- Subjects
- *
MOVEMENT disorders , *BODY movement , *CHILDREN , *YOUNG adults , *MOVEMENT disorders in children - Abstract
The need for a movement response may often be preceded by some advance information regarding direction or extent. We examined the ability of individuals with Developmental Coordination Disorder (DCD) to organise a movement in response to advance information. Pre-cues were presented and varied in the extent to which they indicated the response target. Both eye movement latencies and hand movements were measured. In the absence of pre-cues, individuals with DCD were as fast in initial hand movements as the typically developing (TD) participants, but were less efficient at correcting initial directional errors. A major difference was seen in the degree to which each group could use advance pre-cue information. TD participants were able to use pre-cue information to refine their actions. For the individuals with DCD this was only effective if there was no ambiguity in the advance cue and they had particular difficulty in using predictive motion cues. There were no differences in the speed of gaze responses which excluded an explanation relating to the dynamic allocation of attention. Individuals with DCD continued to rely on the slower strategy of fixating the target prior to initiating a hand movement, rather than using advance information to set initial movement parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
50. Acquisition of joint attention through natural interaction utilizing motion cues.
- Author
-
Sumioka, Hidenobu, Hosoda, Koh, Yoshikawa, Yuichiro, and Asada, Minoru
- Subjects
- *
JOINT attention , *MACHINE learning , *ASYNCHRONOUS child development , *ROBOTS , *HUMAN-machine relationship - Abstract
Joint attention is one of the most important cognitive functions for the emergence of communication not only between humans, but also between humans and robots. In previous work, we have demonstrated how a robot can acquire primary joint attention behavior (gaze following) without external evaluation. However, this method needs the human to tell the robot when to shift its gaze. This paper presents a method that does not need such a constraint by introducing an attention selector based on a measure consisting of saliencies of object features and motion cues. In order to realize natural interaction, a self-organizing map for real-time face pattern separation and contingency learning for gaze following without external evaluation are utilized. The attention selector controls the robot gaze to switch often from the human face to an object and vice versa, and pairs of a face pattern and a gaze motor command are input to the contingency learning. The motion cues are expected to reduce the number of incorrect training data pairs due to the asynchronous interaction that affects the convergence of the contingency learning. The experimental result shows that gaze shift utilizing motion cues enables a robot to synchronize its own motion with human motion and to learn joint attention efficiently in about 20 min. [ABSTRACT FROM AUTHOR]
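The attention selector described above, which combines saliencies of object features and motion cues to decide where the robot looks next, can be sketched as a minimal weighted combination. The weighting scheme, function name, and region representation are illustrative assumptions for this sketch, not the paper's implementation (which couples selection to a self-organizing map and contingency learning):

```python
import numpy as np

def select_attention_target(feature_saliency, motion_saliency, w_motion=0.6):
    """Pick the region index to attend next from a weighted sum of
    static feature saliency and motion-cue saliency.

    Weighting motion cues more heavily (w_motion > 0.5 here, an
    illustrative choice) biases the selector toward regions where the
    human is currently moving, which helps keep robot gaze shifts
    synchronized with human motion and reduces asynchronous, mislabeled
    training pairs for the contingency learner.
    """
    combined = (1 - w_motion) * np.asarray(feature_saliency, dtype=float) \
               + w_motion * np.asarray(motion_saliency, dtype=float)
    return int(np.argmax(combined))
```

For example, a region with modest feature saliency but strong motion saliency will win over a visually striking but static region, mirroring the role the abstract assigns to motion cues in filtering out asynchronous interaction data.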
- Published
- 2007
- Full Text
- View/download PDF