33 results for "Maye, Alexander"
Search Results
2. Where's the action? The pragmatic turn in cognitive science
- Author: Engel, Andreas K., Maye, Alexander, Kurthen, Martin, and König, Peter
- Published: 2013
3. Extending Sensorimotor Contingencies to Cognition
- Author: Maye, Alexander and Engel, Andreas K.
- Published: 2016
4. Action-Oriented Models of Cognitive Processing: A Little Less Cogitation, A Little More Action Please
- Author: Kilner, James, Hommel, Bernhard, Bar, Moshe, Barsalou, Lawrence W., Friston, Karl J., Jost, Jürgen, Maye, Alexander, Metzinger, Thomas, Pulvermüller, Friedemann, Sánchez-Fibla, Marti, Tsotsos, John K., and Vigliocco, Gabriella
- Published: 2016
5. Temporal dynamics of access to consciousness in the attentional blink
- Author: Kranczioch, Cornelia, Debener, Stefan, Maye, Alexander, and Engel, Andreas K.
- Published: 2007
6. Neuronal Assembly Models of Compositionality
- Author: Maye, Alexander and Engel, Andreas K.; edited by Hinzen, Wolfram, Machery, Edouard, and Werning, Markus
- Published: 2012
7. Three-Dimensional Average-Shape Atlas of the Honeybee Brain and Its Applications
- Author: Brandt, Robert, Rohlfing, Torsten, Rybak, Jürgen, Krofczik, Sabine, Maye, Alexander, Westerhoff, Malte, Hege, Hans-Christian, and Menzel, Randolf
- Published: 2005
8. Temporal binding of non-uniform objects
- Author: Maye, Alexander and Werning, Markus
- Published: 2004
9. Socializing Sensorimotor Contingencies.
- Author: Lübbert, Annika, Göschl, Florian, Krause, Hanna, Schneider, Till R., Maye, Alexander, and Engel, Andreas K.
- Subjects: SOCIAL perception; HUMAN-robot interaction; EMPATHY; SOCIAL interaction; MENTAL representation
- Abstract:
The aim of this review is to highlight the idea of grounding social cognition in sensorimotor interactions shared across agents. We discuss an action-oriented account that emerges from a broader interpretation of the concept of sensorimotor contingencies. We suggest that dynamic informational and sensorimotor coupling across agents can mediate the deployment of action-effect contingencies in social contexts. We propose this concept of socializing sensorimotor contingencies (socSMCs) as a shared framework of analysis for processes within and across brains and bodies, and their physical and social environments. In doing so, we integrate insights from different fields, including neuroscience, psychology, and research on human–robot interaction. We review studies on dynamic embodied interaction and highlight empirical findings that suggest an important role of sensorimotor and informational entrainment in social contexts. Furthermore, we discuss links to closely related concepts, such as enactivism, models of coordination dynamics and others, and clarify differences to approaches that focus on mentalizing and high-level cognitive representations. Moreover, we consider conceptual implications of rethinking cognition as social sensorimotor coupling. The insight that social cognitive phenomena like joint attention, mutual trust or empathy rely heavily on the informational and sensorimotor coupling between agents may provide novel remedies for people with disturbed social cognition and for situations of disturbed social interaction. Furthermore, our proposal has potential applications in the field of human–robot interaction where socSMCs principles might lead to more natural and intuitive interfaces for human users. [ABSTRACT FROM AUTHOR]
- Published: 2021
10. Correlated neuronal activity can represent multiple binding solutions
- Author: Maye, Alexander
- Published: 2003
11. A Spatially-Coded Visual Brain-Computer Interface for Flexible Visual Spatial Information Decoding.
- Author: Chen, Jingjing, Wang, Yijun, Maye, Alexander, Hong, Bo, Gao, Xiaorong, Engel, Andreas K., and Zhang, Dan
- Subjects: BRAIN-computer interfaces; KNOWLEDGE transfer; USER experience; LINEAR network coding; EYE tracking; GAZE; TIME-frequency analysis
- Abstract:
Conventional visual BCIs, in which control channels are tagged with stimulation patterns to elicit distinguishable brain patterns, have made impressive progress in terms of the information transfer rates (ITRs). However, less development has been seen with respect to user experience and complexity of the technical setup. The requirement to tag each target with a unique stimulus substantially limits the flexibility of conventional visual BCI systems. The present study therefore proposes a method for flexibly decoding targets in the environment. A BCI speller with thirteen symbols drawn on paper was developed. The symbols were interspersed with four flickers with distinct frequencies, but the user did not have to gaze at the flickers. Rather, subjects could spell a sequence by looking at the symbols on the paper. In a cue-guided spelling task, the average offline and online accuracies reached 89.3 ± 7.3% and 90.3 ± 6.9% for 13 subjects, corresponding to ITRs of 43.0 ± 7.4 bits/min and 43.8 ± 6.8 bits/min. In an additional free-spelling task for seven of the thirteen subjects, an accuracy of 92.3 ± 3.1% and an ITR of 45.6 ± 3.3 bits/min were achieved. Analysis of a simulated online system showed the possibility of reaching an average ITR of 105.8 bits/min by reducing the epoch duration from 4 seconds to 1 second. Reliable BCI control is possible by gazing at targets in the environment instead of at dedicated stimuli which encode control channels. The proposed method can drastically reduce the technical effort for visual BCIs and thereby advance their applications outside the laboratory. [ABSTRACT FROM AUTHOR]
- Published: 2021
12. Subjective Evaluation of Performance in a Collaborative Task Is Better Predicted From Autonomic Response Than From True Achievements.
- Author: Maye, Alexander, Lorenz, Jürgen, Stoica, Mircea, and Engel, Andreas K.
- Subjects: HEART beat; AUTONOMIC nervous system; SOCIAL perception; TASK performance; NONVERBAL communication
- Abstract:
Whereas the fundamental role of the body in social cognition seems to be generally accepted, elucidating the bodily mechanisms associated with non-verbal communication and cooperation between two or more persons is still a challenging endeavor. In this article we propose a fresh approach for investigating the function of the autonomic nervous system that is reflected in parameters of heart rate variability, respiration, and electrodermal activity in a social setting. We analyzed autonomic parameters of dyads solving a target-tracking task together with the partner or individually. A machine classifier was trained to predict the subjects' rating of performance and collaboration either from tracking error data or from the set of autonomic parameters. When subjects collaborated, this classifier could predict the subjective performance ratings better from the autonomic response than from the objective performance of the subjects. However, when they solved the task individually, predictability from autonomic parameters dropped to the level of objective performance, indicating that subjects were more rational in rating their performance in this condition. Moreover, the model captured general knowledge about the population that allows it to predict the performance ratings of an unseen subject significantly better than chance. Our results suggest that, in particular in situations that require collaboration with others, evaluation of performance is shaped by the bodily processes that are quantified by autonomic parameters. Therefore, subjective performance assessments appear to be modulated not only by the output of a rational or discriminative system that tracks the objective performance but to a significant extent also by interoceptive processes. [ABSTRACT FROM AUTHOR]
- Published: 2020
13. An Oscillator Ensemble Model of Sequence Learning.
- Author: Maye, Alexander, Wang, Peng, Daume, Jonathan, Hu, Xiaolin, and Engel, Andreas K.
- Subjects: HUMAN behavior models; FREQUENCY tuning; PHASE-locked loops; HUMAN experimentation
- Abstract:
Learning and memorizing sequences of events is an important function of the human brain and the basis for forming expectations and making predictions. Learning is facilitated by repeating a sequence several times, causing the rhythmic appearance of the individual sequence elements. This observation invites us to consider the resulting multitude of rhythms as a spectral "fingerprint" which characterizes the respective sequence. Here we explore the implications of this perspective by developing a neurobiologically plausible computational model which captures this "fingerprint" by attuning an ensemble of neural oscillators. In our model, this attuning process is based on a number of oscillatory phenomena that have been observed in electrophysiological recordings of brain activity, such as synchronization, phase locking, and reset, as well as cross-frequency coupling. We compare the learning properties of the model with behavioral results from a study in human participants and observe good agreement of the errors for different levels of complexity of the sequence to be memorized. Finally, we suggest an extension of the model for processing sequences that extend over several sensory modalities. [ABSTRACT FROM AUTHOR]
- Published: 2019
14. A Single-Stimulus, Multitarget BCI Based on Retinotopic Mapping of Motion-Onset VEPs.
- Author: Chen, Jingjing, Li, Zhuoran, Hong, Bo, Maye, Alexander, Engel, Andreas K., and Zhang, Dan
- Subjects: BRAIN-computer interfaces; VISUAL evoked potentials; MOTION; RETINA; VISUAL perception; ATTENTION
- Abstract:
Objective: We present a new type of brain-computer interface (BCI) that utilizes the retinotopic mapping of motion-onset visual evoked potentials (mVEP) to accomplish four control channels using a single motion stimulus. Methods: Participants selected a BCI command by fixating one of four target locations around a centrally presented visual motion stimulus. A template-matching method was employed to recognize the users' intention by decoding the position of the motion stimulus in the peripheral visual field, and classification performances were evaluated in an offline manner. The eccentricity between the targets and the visual motion stimulus varied among 5.1°, 6.7°, 9.8°, and 13.0°. Results: Distinct N200 spatial patterns were elicited when participants directed attention overtly to the target locations. A four-class classification accuracy of 72.2 ± 5.05% was achieved with a distance of 5.1° visual angle between the targets and the visual motion stimulus. The classification accuracies decreased with increasing eccentricity but remained well above chance level at 13.0° (47.3 ± 8.54%). Conclusion: Our results support the feasibility of a single-stimulus, multitarget mVEP BCI. Significance: The proposed system can simplify the visual stimulation of mVEP BCIs, improve user experience, and pave the way for simple yet efficient BCI communication. [ABSTRACT FROM AUTHOR]
- Published: 2019
15. Effect- and Performance-Based Auditory Feedback on Interpersonal Coordination.
- Author: Hwang, Tong-Hun, Schmitz, Gerd, Klemmt, Kevin, Brinkop, Lukas, Ghai, Shashank, Stoica, Mircea, Maye, Alexander, Blume, Holger, and Effenberg, Alfred O.
- Subjects: PSYCHOLOGICAL feedback; COLLECTIVE action; ORAL communication; JOB performance; APPLICATION software; MOTOR ability
- Abstract:
When two individuals interact in a collaborative task, such as carrying a sofa or a table, usually spatiotemporal coordination of individual motor behavior will emerge. In many cases, interpersonal coordination can arise independently of verbal communication, based on the observation of the partners' movements and/or the object's movements. In this study, we investigate how social coupling between two individuals can emerge in a collaborative task under different modes of perceptual information. A visual reference condition was compared with three different conditions with new types of additional auditory feedback provided in real time: effect-based auditory feedback, performance-based auditory feedback, and combined effect/performance-based auditory feedback. We have developed a new paradigm in which the actions of both participants continuously result in a seamlessly merged effect on an object simulated by a tablet computer application. Here, participants should temporally synchronize their movements with a 90° phase difference and precisely adjust the finger dynamics in order to keep the object (a ball) accurately rotating on a given circular trajectory on the tablet. Results demonstrate that interpersonal coordination in a joint task can be altered by different kinds of additional auditory information in various ways. [ABSTRACT FROM AUTHOR]
- Published: 2018
16. From Animals to Animats 12: Using Sensorimotor Contingencies for Terrain Discrimination and Adaptive Walking Behavior in the Quadruped Robot Puppy
- Author: Hoffmann, Matej, Schmidt, Nico M., Pfeifer, Rolf, Engel, Andreas K., and Maye, Alexander; edited by Ziemke, Tom, Balkenius, Christian, and Hallam, John
- Subjects: Department of Informatics; General Computer Science; Computer science, knowledge & systems; Theoretical Computer Science
- Published: 2012
17. Utilizing Retinotopic Mapping for a Multi-Target SSVEP BCI With a Single Flicker Frequency.
- Author: Maye, Alexander, Zhang, Dan, and Engel, Andreas K.
- Subjects: KNOWLEDGE transfer; BOUNDARY spanning activity
- Abstract:
In brain–computer interfaces (BCIs) that use the steady-state visual evoked response (SSVEP), the user selects a control command by directing attention overtly or covertly to one out of several flicker stimuli. The different control channels are encoded in the frequency, phase, or time domain of the flicker signals. Here, we present a new type of SSVEP BCI, which uses only a single flicker stimulus and yet affords controlling multiple channels. The approach rests on the observation that different relative positions between the stimulus and the focus of overt attention result in distinct topographies of the SSVEP response on the scalp. By classifying these topographies, the computer can determine at which position the user is gazing. Offline data analysis in a study on 12 healthy volunteers revealed that 9 targets can be recognized with about 95 ± 3% accuracy, corresponding to an information transfer rate (ITR) of 40.8 ± 3.3 bits/min on average. We explored how the classification accuracy is affected by the number of control channels, the trial length, and the number of EEG channels. Our findings suggest that EEG data from five channels over parieto-occipital brain areas are sufficient for reliably classifying the topographies and that there is a large potential to improve the ITR by optimizing the trial length. The robust performance and the simple stimulation setup suggest that this approach is a prime candidate for applications on desktop and tablet computers. [ABSTRACT FROM AUTHOR]
- Published: 2017
18. Application of a single-flicker online SSVEP BCI for spatial navigation.
- Author: Chen, Jingjing, Zhang, Dan, Engel, Andreas K., Gong, Qin, and Maye, Alexander
- Subjects: BRAIN-computer interfaces; VISUAL evoked potentials; DIAGNOSIS of epilepsy; STIMULUS & response (Psychology); KNOWLEDGE transfer; TASK performance
- Abstract:
A promising approach for brain-computer interfaces (BCIs) employs the steady-state visual evoked potential (SSVEP) for extracting control information. Main advantages of these SSVEP BCIs are a simple and low-cost setup, little effort to adjust the system parameters to the user, and comparatively high information transfer rates (ITR). However, traditional frequency-coded SSVEP BCIs require the user to gaze directly at the selected flicker stimulus, which is liable to cause fatigue or even photic epileptic seizures. The spatially coded SSVEP BCI we present in this article addresses this issue. It uses a single flicker stimulus that always appears in the extrafoveal field of view, yet it allows the user to control four control channels. We demonstrate the embedding of this novel SSVEP stimulation paradigm in the user interface of an online BCI for navigating a 2-dimensional computer game. Offline analysis of the training data reveals an average classification accuracy of 96.9 ± 1.64%, corresponding to an information transfer rate of 30.1 ± 1.8 bits/min. In online mode, the average classification accuracy reached 87.9 ± 11.4%, which resulted in an ITR of 23.8 ± 6.75 bits/min. We did not observe a strong relation between a subject's offline and online performance. Analysis of the online performance over time shows that users can reliably control the new BCI paradigm with stable performance over at least 30 minutes of continuous operation. [ABSTRACT FROM AUTHOR]
- Published: 2017
19. Maximizing Information Transfer in SSVEP-Based Brain–Computer Interfaces.
- Author: Sengelmann, Malte, Engel, Andreas K., and Maye, Alexander
- Subjects: KNOWLEDGE transfer; INFORMATION measurement; SIGNALS & signaling; BRAIN -- Electromechanical analogies; BRAIN magnetic fields
- Abstract:
Compared to the different brain signals used in brain–computer interface (BCI) paradigms, the steady-state visually evoked potential (SSVEP) features a high signal-to-noise ratio, enabling reliable and fast classification of neural activity patterns without extensive training requirements. In this paper, we present methods to further increase the information transfer rates (ITRs) of SSVEP-based BCIs. Starting with stimulus parameter optimization methods, we develop an improved approach for the use of canonical correlation analysis and analyze properties of the SSVEP when the user fixates a target and during transitions between targets. These transitions show a negative effect on the system's ITR, which we trace back to delays and dead times of the SSVEP. Using two classifier types adapted to continuous and transient SSVEPs and two control modes (fast feedback and fast input), we present a simulated online BCI implementation which addresses the challenges introduced by transient SSVEPs. The resulting system reaches an average ITR of 181 bits/min and peak ITR values of up to 295 bits/min for individual users. [ABSTRACT FROM AUTHOR]
- Published: 2017
20. Using Music to Tap into a Universal Neural Grammar
- Author: Maye, Alexander and Werning, Markus
- Subjects: Social and Behavioral Sciences
- Published: 2005
21. The Sensorimotor Account of Sensory Consciousness.
- Author: Maye, Alexander and Engel, Andreas K.
- Subjects: SENSORIMOTOR integration; CONSCIOUSNESS; SENSORY perception; COGNITIVE ability; ROBOTS
- Abstract:
When people speak about consciousness, they distinguish various types and different levels, and they argue for different concepts of cognition. This complicates the discussion about artificial or machine consciousness. Here we take a bottom-up approach to this question by presenting a family of robot experiments that invite us to think about consciousness in the context of artificial agents. The experiments are based on a computational model of sensorimotor contingencies. It has been suggested that these regularities in the sensorimotor flow of an agent can explain raw feels and perceptual consciousness in biological agents. We discuss the validity of the model with respect to sensorimotor contingency theory and consider whether a robot that is controlled by knowledge of its sensorimotor contingencies could have any form of consciousness. We propose that consciousness does not require higher-order thought or higher-order representations. Rather, we argue that consciousness starts when (i) an agent actively (endogenously triggered) uses its knowledge of sensorimotor contingencies to issue predictions and (ii) when it deploys this capability to structure subsequent action. [ABSTRACT FROM AUTHOR]
- Published: 2016
22. Context-dependent dynamic weighting of information from multiple sensory modalities.
- Author: Maye, Alexander and Engel, Andreas K.
- Abstract:
A major problem for the application of sensorimotor approaches to robot control is the classification of states. The typically immense size of sensorimotor state spaces renders it very unlikely that exactly the same states are visited by the robot several times. In order to learn about the consequences of alternative behaviors in these states, a classification of similar or related states is necessary. This requires a metric to measure similarity between states. Under the premise that the robot should maximize its fitness, we studied the correlations between sensory data in different modalities and fitness values. We found that this correlation structure can serve as a context-dependent weighting of the importance of individual sensory channels that allows such a metric to be defined. In a collision-avoidance scenario we demonstrate that this results in (i) faster learning of successful actions, (ii) an acquired differentiation between sensory modalities, (iii) the possibility to use the full sensor resolution without quantization or compression, and (iv) a means to enhance resilience to sensor failure. [ABSTRACT FROM PUBLISHER]
- Published: 2013
23. How fast can f-VEP BCIs ever be?
- Author: Sengelmann, Malte, Engel, Andreas K., and Maye, Alexander
- Abstract:
VEP-based BCIs offer a number of advantages that make them a promising candidate for applications in everyday environments: They do not require user training, the high signal-to-noise ratio of VEPs allows reliable classification, and high information transfer rates (ITRs) of up to ∼100 bits/min have been achieved during recent years. In this article we estimate an upper bound of the ITR for VEP BCIs that use frequency and phase coding for classification (f-VEP BCIs). The estimate is based on an idealized classification process that operates on real EEG data of the steady-state (SSVEP) from naïve subjects. Our study yields subject-specific upper bounds in the range of approx. 200 to 500 bits/min. We identify causes for the significantly lower ITR of existing f-VEP BCIs and suggest solutions that can narrow the gap to the upper bound. [ABSTRACT FROM PUBLISHER]
- Published: 2013
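Several of the BCI entries in this list quote information transfer rates (ITRs) in bits/min, including the upper-bound estimates in the entry above. As an editorial aid only — this code is not taken from any of the listed papers, and the function name and example numbers are illustrative — ITR figures of this kind are conventionally computed with the Wolpaw formula:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw ITR in bits/min for an N-class BCI with classification
    accuracy P and one selection made every trial_seconds."""
    n, p = n_targets, accuracy
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if 0 < p < 1:
        # penalty for misclassifications, errors spread uniformly
        # over the remaining n-1 targets
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# Illustrative figures: 9 targets, 95% accuracy, one selection per 4 s.
print(round(wolpaw_itr(9, 0.95, 4.0), 1))  # about 41 bits/min
```

With these inputs the formula yields roughly 41 bits/min, the same order of magnitude as the 40.8 ± 3.3 bits/min reported for the 9-target SSVEP system in entry 17 above.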
24. Using Sensorimotor Contingencies for Prediction and Action Planning.
- Author: Maye, Alexander and Engel, Andreas K.
- Published: 2012
25. Using Sensorimotor Contingencies for Terrain Discrimination and Adaptive Walking Behavior in the Quadruped Robot Puppy.
- Author: Hoffmann, Matej, Schmidt, Nico M., Pfeifer, Rolf, Engel, Andreas K., and Maye, Alexander
- Published: 2012
26. Time Scales of Sensorimotor Contingencies.
- Author: Maye, Alexander and Engel, Andreas K.
- Published: 2012
27. Extending sensorimotor contingency theory: prediction, planning, and action generation.
- Author: Maye, Alexander and Engel, Andreas K.
- Subjects: SENSORIMOTOR cortex; CONTINGENCY theory (Management); FORECASTING; ARTIFICIAL intelligence; REASONING; AVERSIVE stimuli
- Abstract:
One of the main assertions of sensorimotor contingency theory is that sensory experience is not generated by activating an internal representation of the outside world through sensory signals, but corresponds to a mode of exploration and hence is an active process. Perception and sensory awareness emerge from using the structure of changes in the sensory input resulting from these exploratory actions, called sensorimotor contingencies (SMCs), for planning, reasoning, and goal achievement. Using a previously developed computational model of SMCs we show how an artificial agent can plan ahead with SMCs and use them for action guidance. Our main assumption is that SMCs are associated with a utility for the agent, and that the agent selects actions that maximize this utility. We analyze the properties of the resulting actions in a robot that is endowed with several sensory modalities and controlled by our model in a simple environment. The results demonstrate that its actions avoid aversive events, and that it can achieve a low-level form of spatial awareness that is resilient to the complete loss of a sensory modality. [ABSTRACT FROM AUTHOR]
- Published: 2013
28. Multimodal Brain-Computer Interfaces.
- Author: Maye, Alexander, Zhang, Dan, Wang, Yijun, Gao, Shangkai, and Engel, Andreas K.
- Subjects: BRAIN-computer interfaces; SUPPORT vector machines; EVOKED potentials (Electrophysiology); COMPUTATIONAL neuroscience; BRAIN stimulation; COMPUTER users
- Abstract:
A critical parameter of brain-computer interfaces (BCIs) is the number of dimensions a user can control independently. One way to increment this number without increasing the mental effort required to operate the system is to stimulate several sensory modalities simultaneously, and to distinguish brain activity patterns when the user focuses attention on different elements of this multisensory input. In this article we show how shifting attention between simultaneously presented tactile and visual stimuli affects the electrical brain activity of human subjects, and that this signal can be used to augment the control information from the two uni-modal BCI subsystems. [Copyright Elsevier]
- Published: 2011
29. Simultaneous Decoding of Eccentricity and Direction Information for a Single-Flicker SSVEP BCI.
- Author: Chen, Jingjing, Maye, Alexander, Engel, Andreas K., Wang, Yijun, Gao, Xiaorong, and Zhang, Dan
- Subjects: VISUAL evoked potentials; BRAIN-computer interfaces; CANONICAL correlation (Statistics); SIGNAL-to-noise ratio; CLASSIFICATION algorithms
- Abstract:
The feasibility of a steady-state visual evoked potential (SSVEP) brain–computer interface (BCI) with a single-flicker stimulus for multiple-target decoding has been demonstrated in a number of recent studies. The single-flicker BCIs have mainly employed the direction information for encoding the targets, i.e., different targets are placed at different spatial directions relative to the flicker stimulus. The present study explored whether visual eccentricity information can also be used to encode targets for the purpose of increasing the number of targets in the single-flicker BCIs. A total number of 16 targets were encoded, placed at eight spatial directions, and two eccentricities (2.5° and 5°) relative to a 12 Hz flicker stimulus. Whereas distinct SSVEP topographies were elicited when participants gazed at targets of different directions, targets of different eccentricities were mainly represented by different signal-to-noise ratios (SNRs). Using a canonical correlation analysis-based classification algorithm, simultaneous decoding of both direction and eccentricity information was achieved, with an offline 16-class accuracy of 66.8 ± 16.4% averaged over 12 participants and a best individual accuracy of 90.0%. Our results demonstrate a single-flicker BCI with a substantially increased target number towards practical applications. [ABSTRACT FROM AUTHOR]
- Published: 2019
30. An independent brain–computer interface using covert non-spatial visual selective attention.
- Author: Zhang, Dan, Maye, Alexander, Gao, Xiaorong, Hong, Bo, Engel, Andreas K., and Gao, Shangkai
- Published: 2010
31. A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling.
- Author: Chen H, Lin K, Maye A, Li J, and Hu X
- Abstract:
Given the features of a video, recurrent neural networks can be used to automatically generate a caption for the video. Existing methods for video captioning have at least three limitations. First, semantic information has been widely applied to boost the performance of video captioning models, but existing networks often fail to provide meaningful semantic features. Second, the Teacher Forcing algorithm is often utilized to optimize video captioning models, but during training and inference, different strategies are applied to guide word generation, leading to poor performance. Third, current video captioning models are prone to generate relatively short captions that express video contents inappropriately. Toward resolving these three problems, we suggest three corresponding improvements. First of all, we propose a metric to compare the quality of semantic features, and utilize appropriate features as input for a semantic detection network (SDN) with adequate complexity in order to generate meaningful semantic features for videos. Then, we apply a scheduled sampling strategy that gradually transfers the training phase from a teacher-guided manner toward a more self-teaching manner. Finally, the ordinary logarithm probability loss function is leveraged by sentence length so that the inclination of generating short sentences is alleviated. Our model achieves better results than previous models on the YouTube2Text dataset and is competitive with the previous best model on the MSR-VTT dataset. (Copyright © 2020 Chen, Lin, Maye, Li and Hu.)
- Published: 2020
32. An independent brain-computer interface based on covert shifts of non-spatial visual attention.
- Author: Zhang D, Gao X, Gao S, Engel AK, and Maye A
- Subjects: Adult; Female; Humans; Male; Attention physiology; Brain Mapping methods; Evoked Potentials, Visual physiology; Motion Perception physiology; Perceptual Masking physiology; User-Computer Interface; Visual Cortex physiology
- Abstract:
Modulation of the steady-state visual evoked potential (SSVEP) by directing gaze to targets flickering at different frequencies has been utilized in many brain-computer interface (BCI) studies. However, this paradigm may not work with patients suffering from complete locked-in syndrome or other severe motor disabilities that do not allow conscious control of gaze direction. In this paper, we present a novel, independent BCI paradigm based on covert shifts of non-spatial visual selective attention. Subjects viewed a display consisting of two spatially overlapping sets of randomly positioned dots. The two dot sets differed in color, motion, and flickering frequency. Two types of motion, rotation and linear motion, were investigated. Both the SSVEP amplitude and the phase response were modulated by selectively attending to one of the two dot sets. Offline analysis revealed a predicted online classification accuracy of 69.3 ± 10.2% for the rotating dots, and 80.7 ± 10.4% for the linearly moving dots.
- Published: 2009
33. Order in spontaneous behavior.
- Author: Maye A, Hsieh CH, Sugihara G, and Brembs B
- Subjects: Animals; Fractals; Probability; Behavior, Animal; Drosophila physiology
- Abstract:
Brains are usually described as input/output systems: they transform sensory input into motor output. However, the motor output of brains (behavior) is notoriously variable, even under identical sensory conditions. The question of whether this behavioral variability merely reflects residual deviations due to extrinsic random noise in such otherwise deterministic systems or an intrinsic, adaptive indeterminacy trait is central for the basic understanding of brain function. Instead of random noise, we find a fractal order (resembling Lévy flights) in the temporal structure of spontaneous flight maneuvers in tethered Drosophila fruit flies. Lévy-like probabilistic behavior patterns are evolutionarily conserved, suggesting a general neural mechanism underlying spontaneous behavior. Drosophila can produce these patterns endogenously, without any external cues. The fly's behavior is controlled by brain circuits which operate as a nonlinear system with unstable dynamics far from equilibrium. These findings suggest that both general models of brain function and autonomous agents ought to include biologically relevant nonlinear, endogenous behavior-initiating mechanisms if they strive to realistically simulate biological brains or out-compete other agents.
- Published: 2007
Discovery Service for Jio Institute Digital Library