34 results for "Zachary V. Freudenburg"
Search Results
2. Detailed somatotopy of tongue movement in the human sensorimotor cortex: A case study
- Author
Anouck Schippers, Mariska J. Vansteensel, Zachary V. Freudenburg, Frans S.S. Leijten, and Nick F. Ramsey
- Subjects
Neurosciences. Biological psychiatry. Neuropsychiatry (RC321-571)
- Published
- 2021
3. Sensorimotor ECoG Signal Features for BCI Control: A Comparison Between People With Locked-In Syndrome and Able-Bodied Controls
- Author
Zachary V. Freudenburg, Mariana P. Branco, Sacha Leinders, Benny H. van der Vijgh, Elmar G. M. Pels, Timothy Denison, Leonard H. van den Berg, Kai J. Miller, Erik J. Aarnoutse, Nick F. Ramsey, and Mariska J. Vansteensel
- Subjects
brain-computer interface, implant, sensorimotor cortex, amyotrophic lateral sclerosis, brain stem stroke, electrocorticography, Neurosciences. Biological psychiatry. Neuropsychiatry (RC321-571)
- Abstract
The sensorimotor cortex is a frequently targeted brain area for the development of Brain-Computer Interfaces (BCIs) for communication in people with severe paralysis and communication problems (locked-in syndrome; LIS). It is widely acknowledged that this area displays an increase in high-frequency band (HFB) power and a decrease in the power of the low frequency band (LFB) during movement of, for example, the hand. Upon termination of hand movement, activity in the LFB typically shows a short increase (rebound). The ability to modulate the neural signal in the sensorimotor cortex by imagining or attempting to move is crucial for the implementation of sensorimotor BCI in people who are unable to execute movements. This may not always be self-evident, since the most common causes of LIS, amyotrophic lateral sclerosis (ALS) and brain stem stroke, are associated with significant damage to the brain, potentially affecting the generation of baseline neural activity in the sensorimotor cortex and the modulation thereof by imagined or attempted hand movement. In the Utrecht NeuroProsthesis (UNP) study, a participant with LIS caused by ALS and a participant with LIS due to brain stem stroke were implanted with a fully implantable BCI, including subdural electrocorticography (ECoG) electrodes over the sensorimotor area, with the purpose of achieving ECoG-BCI-based communication. We noted differences between these participants in the spectral power changes generated by attempted movement of the hand. To better understand the nature and origin of these differences, we compared the baseline spectral features and task-induced modulation of the neural signal of the LIS participants with those of a group of able-bodied people with epilepsy who received a subchronic implant with ECoG electrodes for diagnostic purposes. Our data show that baseline LFB oscillatory components and changes generated in the LFB power of the sensorimotor cortex by (attempted) hand movement differ between participants, despite consistent HFB responses in this area. We conclude that the etiology of LIS may have significant effects on the LFB spectral components in the sensorimotor cortex, which is relevant for the development of communication-BCIs for this population.
- Published
- 2019
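The HFB/LFB contrast described in result 3 can be illustrated with a minimal band-power computation. The sketch below is a generic example rather than the study's pipeline: the sampling rate, band edges and the synthetic rest/attempted-movement epochs are assumptions chosen for illustration.

```python
# Minimal sketch of an HFB vs. LFB power comparison between rest and
# (attempted) movement epochs for one ECoG channel. All parameters and the
# random data are illustrative assumptions, not the study's settings.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 512  # assumed ECoG sampling rate (Hz)

def band_power(epochs, low, high, fs=FS):
    """Mean analytic-amplitude power per epoch for one frequency band.

    epochs: array (n_epochs, n_samples) for a single channel.
    """
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))
    return (envelope ** 2).mean(axis=-1)

# Synthetic 2 s rest and attempted-movement epochs for illustration only.
rng = np.random.default_rng(0)
rest = rng.standard_normal((40, 2 * FS))
move = rng.standard_normal((40, 2 * FS))

hfb_change = band_power(move, 65, 125).mean() / band_power(rest, 65, 125).mean()
lfb_change = band_power(move, 10, 30).mean() / band_power(rest, 10, 30).mean()
print(f"HFB move/rest power ratio: {hfb_change:.2f}")
print(f"LFB move/rest power ratio: {lfb_change:.2f}")
```

In real sensorimotor ECoG the HFB ratio would typically exceed 1 during (attempted) movement while the LFB ratio falls below 1; on the random data above both ratios hover around 1.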
4. The dorsolateral pre-frontal cortex bi-polar error-related potential in a locked-in patient implanted with a daily use brain–computer interface
- Author
Mariska J. Vansteensel, Khaterah Kohneshin, Nick F. Ramsey, Elmar Pels, Zachary V. Freudenburg, Max van den Boom, Sacha Leinders, Erik J. Aarnoutse, and Mariana P. Branco
- Subjects
Brain–computer interface, neural correlates of consciousness, motor system, speech recognition, performance improvement, computer science
- Abstract
While brain-computer interfaces (BCIs) offer the potential of allowing those suffering from loss of muscle control to once again fully engage with their environment by bypassing the affected motor system and decoding user intentions directly from brain activity, they are prone to errors. One possible avenue for BCI performance improvement is to detect when the BCI user perceives the BCI to have made an unintended action and thus take corrective actions. Error-related potentials (ErrPs) are neural correlates of error awareness and as such can provide an indication of when a BCI system is not performing according to the user's intentions. Here, we investigate whether error-related signals are present in the brain signals of an implanted BCI user with locked-in syndrome (LIS) due to late-stage ALS, which prevents her from speaking or moving but not from using her BCI at home on a daily basis to communicate. We first establish the presence of an ErrP originating from the dorsolateral pre-frontal cortex (dLPFC) in response to errors made during a discrete feedback task that mimics the click-based spelling software she uses to communicate. Then, we show that this ErrP can also be elicited by cursor movement errors in a continuous BCI cursor control task. This work represents a first step toward detecting ErrPs during the daily home use of a communication BCI.
- Published
- 2021
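The ErrP analysis in result 4 rests on epoching the signal around feedback events and contrasting error with correct trials. Below is a hedged, self-contained sketch of that step; the sampling rate, window and synthetic event times are assumptions, not the study's settings.

```python
# Sketch of error-vs-correct epoch averaging underlying ErrP detection.
# Event indices would come from the task log in a real analysis; here they
# are random placeholders.
import numpy as np

FS = 512                      # assumed sampling rate (Hz)
WIN = (-0.2, 0.8)             # epoch window around feedback onset (s)

def epoch(signal, event_samples, fs=FS, win=WIN):
    pre, post = int(win[0] * fs), int(win[1] * fs)
    return np.stack([signal[s + pre:s + post] for s in event_samples])

rng = np.random.default_rng(1)
signal = rng.standard_normal(5 * 60 * FS)        # one bipolar dLPFC channel
correct_events = rng.integers(FS, len(signal) - FS, 80)
error_events = rng.integers(FS, len(signal) - FS, 20)

erp_error = epoch(signal, error_events).mean(axis=0)
erp_correct = epoch(signal, correct_events).mean(axis=0)
difference_wave = erp_error - erp_correct        # candidate ErrP component
print(difference_wave.shape)
```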
5. Mapping Acoustics to Articulatory Gestures in Dutch: Relating Speech Gestures, Acoustics and Neural Data
- Author
Paolo Favero, Julia Berezutskaya, Nick F. Ramsey, Aleksei Nazarov, and Zachary V. Freudenburg
- Subjects
Language in Interaction, gestures, speech, acoustics, language, paralysis, quality of life, cognitive artificial intelligence
- Abstract
Completely locked-in patients suffer from paralysis affecting every muscle in their body, reducing their communication means to brain-computer interfaces (BCIs). State-of-the-art BCIs have a slow spelling rate, which inevitably places a burden on patients' quality of life. Novel techniques address this problem by following a bio-mimetic approach, which consists of decoding sensorimotor cortex (SMC) activity that underlies the movements of the vocal tract's articulators. As recording articulatory data in combination with neural recordings is often unfeasible, the goal of this study was to develop an acoustic-to-articulatory inversion (AAI) model, i.e. an algorithm that generates articulatory data (speech gestures) from acoustics. A fully convolutional neural network was trained to solve the AAI mapping and was tested on an unseen acoustic set, recorded simultaneously with neural data. Representational similarity analysis was then used to assess the relationship between predicted gestures and neural responses. The network's predictions and targets were significantly correlated. Moreover, SMC neural activity was correlated with the vocal tract gestural dynamics. The present AAI model has the potential to further our understanding of the relationship between neural, gestural and acoustic signals and to lay the foundations for the development of a bio-mimetic speech BCI. Clinical relevance: This study investigates the relationship between articulatory gestures during speech and the underlying neural activity. The topic is central for the development of brain-computer interfaces for severely paralysed individuals. Presented at the 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, 11-15 July 2022.
- Published
- 2022
6. Direct Speech Reconstruction from Sensorimotor Brain Activity with Optimized Deep Learning Models
- Author
Julia Berezutskaya, Zachary V. Freudenburg, Mariska J. Vansteensel, Erik J. Aarnoutse, Nick F. Ramsey, and Marcel A.J. van Gerven
- Abstract
Development of brain-computer interface (BCI) technology is key for enabling communication in individuals who have lost the faculty of speech due to severe motor paralysis. A BCI control strategy that is gaining attention employs speech decoding from neural data. Recent studies have shown that a combination of direct neural recordings and advanced computational models can provide promising results. Understanding which decoding strategies deliver the best and most directly applicable results is crucial for advancing the field. In this paper, we optimized and validated a decoding approach based on speech reconstruction directly from high-density electrocorticography recordings from sensorimotor cortex during a speech production task. We show that 1) dedicated machine learning optimization of reconstruction models is key for achieving the best reconstruction performance; 2) individual word decoding in reconstructed speech achieves 92-100% accuracy (chance level is 8%); 3) direct reconstruction from sensorimotor brain activity produces intelligible speech. These results underline the need for model optimization in achieving the best speech decoding results and highlight the potential that reconstruction-based speech decoding from sensorimotor cortex can offer for the development of next-generation BCI technology for communication.
- Published
- 2022
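Result 6 reports 92-100% word decoding against an 8% chance level, i.e. roughly one in twelve word classes. The sketch below shows how such an accuracy-versus-chance comparison is commonly set up with cross-validation; the features are random placeholders rather than reconstructed-speech features.

```python
# Illustrative cross-validated word classification compared against an
# empirical chance level (~8% for 12 classes). Features are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(2)
n_words, n_repeats, n_features = 12, 10, 40
X = rng.standard_normal((n_words * n_repeats, n_features))
y = np.repeat(np.arange(n_words), n_repeats)

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
chance = cross_val_score(DummyClassifier(strategy="uniform", random_state=0),
                         X, y, cv=5).mean()
print(f"decoder accuracy: {acc:.2f}, empirical chance: {chance:.2f} (~1/12)")
```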
7. Classification of Facial Expressions for Intended Display of Emotions Using Brain–Computer Interfaces
- Author
E. Salari, Mariska J. Vansteensel, Zachary V. Freudenburg, and Nick F. Ramsey
- Subjects
Facial expression, emotions, neural activity, electrocorticography, sensorimotor cortex, refractory epilepsy, brain–computer interface, linguistic communication, neurology (clinical)
- Abstract
Facial expressions are important for intentional display of emotions in social interaction. For people with severe paralysis, the ability to display emotions intentionally can be impaired. Current brain-computer interfaces (BCIs) allow for linguistic communication but are cumbersome for expressing emotions. Here, we investigated the feasibility of a BCI to display emotions by decoding facial expressions. We used electrocorticographic recordings from the sensorimotor cortex of people with refractory epilepsy and classified five facial expressions, based on neural activity. The mean classification accuracy was 72%. This approach could be a promising avenue for development of BCI-based solutions for fast communication of emotions. ANN NEUROL 2020;88:631-636.
- Published
- 2020
8. Decoding four hand gestures with a single bipolar pair of electrocorticography electrodes
- Author
Mariana P. Branco, Mariska J. Vansteensel, Zachary V. Freudenburg, Frans S. S. Leijten, Erik J. Aarnoutse, Maxime Verwoert, and Nick F. Ramsey
- Subjects
Electrocorticography, implanted electrodes, gestures, hand, intractable epilepsy, neural activity, decoding methods, brain–computer interface, biomedical engineering
- Abstract
OBJECTIVE: Electrocorticography (ECoG) based Brain-Computer Interfaces (BCIs) can be used to restore communication in individuals with locked-in syndrome. In motor-based BCIs, the number of degrees-of-freedom, and thus the speed of the BCI, directly depends on the number of classes that can be discriminated from the neural activity in the sensorimotor cortex. When considering minimally invasive BCI implants, the size of the subdural ECoG implant must be minimized without compromising the number of degrees-of-freedom. APPROACH: Here we investigated if four hand gestures could be decoded using a single ECoG strip of four consecutive electrodes spaced 1 cm apart and compared the performance between a unipolar and a bipolar montage. For that we collected data of seven individuals with intractable epilepsy implanted with ECoG grids, covering the hand region of the sensorimotor cortex. Based on the implanted grids, we generated virtual ECoG strips and compared the decoding accuracy between 1) a single unipolar electrode (Unipolar Electrode), 2) a combination of 4 unipolar electrodes (Unipolar Strip), 3) a single bipolar pair (Bipolar Pair) and 4) a combination of 6 bipolar pairs (Bipolar Strip). MAIN RESULTS: We show that four hand gestures can be equally well decoded using ‘Unipolar Strips’ (mean 67.4 ± 11.7%), ‘Bipolar Strips’ (mean 66.6 ± 12.1%) and ‘Bipolar Pairs’ (mean 67.6 ± 9.4%), while ‘Unipolar Electrodes’ (61.6 ± 5.9%) performed significantly worse compared to ‘Unipolar Strips’ and ‘Bipolar Pairs’. SIGNIFICANCE: We conclude that a single bipolar pair is a potential candidate for minimally invasive motor-based BCIs and encourage the use of ECoG as a robust and reliable BCI platform for multi-class movement decoding.
- Published
- 2021
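The four montages compared in result 8 differ only in how channels are combined. A small sketch of how they can be derived from a four-electrode virtual strip (with placeholder data) follows; note that all pairwise differences of four electrodes yield the six bipolar pairs mentioned in the abstract.

```python
# Building the four montage variants from a 4-electrode virtual ECoG strip.
# Channel data are random placeholders used only to show the bookkeeping.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
strip = rng.standard_normal((4, 1000))          # 4 unipolar electrodes x samples

unipolar_electrode = strip[0]                   # single unipolar electrode
unipolar_strip = strip                          # all 4 unipolar electrodes
bipolar_strip = np.stack([strip[i] - strip[j]   # all C(4,2) = 6 bipolar pairs
                          for i, j in combinations(range(4), 2)])
bipolar_pair = strip[0] - strip[1]              # one adjacent bipolar pair

print(unipolar_strip.shape, bipolar_strip.shape)  # (4, 1000) (6, 1000)
```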
9. Towards predicting ECoG-BCI performance: assessing the potential of scalp-EEG
- Author
Mansoureh Fahimi Hnazaee, Maxime Verwoert, Zachary V Freudenburg, Sandra M A van der Salm, Erik J Aarnoutse, Sacha Leinders, Marc M Van Hulle, Nick F Ramsey, and Mariska J Vansteensel
- Subjects
Amyotrophic lateral sclerosis, locked-in syndrome, brain-computer interface, electroencephalography, electrocorticography, sensorimotor, scalp, movement, prediction
- Abstract
Objective. Implanted brain-computer interfaces (BCIs) employ neural signals to control a computer and may offer an alternative communication channel for people with locked-in syndrome (LIS). Promising results have been obtained using signals from the sensorimotor (SM) area. However, in earlier work on home-use of an electrocorticography (ECoG)-based BCI by people with LIS, we detected differences in ECoG-BCI performance, which were related to differences in the modulation of low frequency band (LFB) power in the SM area. For future clinical implementation of ECoG-BCIs, it will be crucial to determine whether reliable performance can be predicted before electrode implantation. To assess if non-invasive scalp-electroencephalography (EEG) could serve such prediction, we here investigated if EEG can detect the characteristics observed in the LFB modulation of ECoG signals. Approach. We included three participants with LIS of the earlier study, and a control group of 20 healthy participants. All participants performed a Rest task, and a Movement task involving actual (healthy) or attempted (LIS) hand movements, while their EEG signals were recorded. Main results. Data of the Rest task was used to determine signal-to-noise ratio, which showed a similar range for LIS and healthy participants. Using data of the Movement task, we selected seven EEG electrodes that showed a consistent movement-related decrease in beta power (13–30 Hz) across healthy participants. Within the EEG recordings of this subset of electrodes of two LIS participants, we recognized the phenomena reported earlier for the LFB in their ECoG recordings. Specifically, strong movement-related beta band suppression was observed in one, but not the other, LIS participant, and movement-related alpha band (8–12 Hz) suppression was practically absent in both. Results of the third LIS participant were inconclusive due to technical issues with the EEG recordings. Significance. Together, these findings support a potential role for scalp EEG in the presurgical assessment of ECoG-BCI candidates.
- Published
- 2022
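The movement-related beta suppression examined in result 9 is usually quantified as the relative power change between movement and rest in the 13-30 Hz band. A minimal sketch with synthetic EEG epochs and assumed parameters:

```python
# Beta-band (13-30 Hz) event-related power change for one EEG electrode,
# computed from synthetic rest and (attempted) movement epochs.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate (Hz)

def beta_power(epochs, fs=FS, band=(13, 30)):
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

rng = np.random.default_rng(4)
rest = rng.standard_normal((30, 3 * FS))   # 3 s rest epochs
move = rng.standard_normal((30, 3 * FS))   # 3 s movement epochs

erd = 100 * (beta_power(move).mean() - beta_power(rest).mean()) / beta_power(rest).mean()
print(f"beta power change during movement: {erd:.1f}% (negative = suppression)")
```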
10. High-density intracranial recordings reveal a distinct site in anterior dorsal precentral cortex that tracks perceived speech
- Author
Zachary V. Freudenburg, Julia Berezutskaya, Clarissa Baratin, and N.F. Ramsey
- Subjects
Speech perception, speech processing, functional specialization, motor cortex, pitch contour, rhythm, ECoG, electrocorticography, drug resistant epilepsy, Language in Interaction
- Abstract
Various brain regions are implicated in speech processing, and the specific function of some of them is better understood than others. In particular, involvement of the dorsal precentral cortex (dPCC) in speech perception remains debated, and attribution of the function of this region is more or less restricted to motor processing. In this study, we investigated high-density intracranial responses to speech fragments of a feature film, aiming to determine whether dPCC is engaged in perception of continuous speech. Our findings show that dPCC exhibited a preference for speech over other tested sounds. Moreover, the identified area was involved in tracking of speech auditory properties including the speech spectral envelope, its rhythmic phrasal pattern and pitch contour. The dPCC also showed the ability to filter out noise from the perceived speech. Comparing these results to data from motor experiments showed that the identified region had a distinct location in dPCC, anterior to the hand motor area and superior to the mouth articulator region. The present findings uncovered with high-density intracranial recordings help elucidate the functional specialization of PCC and demonstrate the unique role of its anterior dorsal region in continuous speech perception. Berezutskaya et al. show that a distinct region within anterior dorsal precentral cortex tracks multiple auditory properties of perceived continuous speech. The region is distinct from the adjacent hand and mouth motor areas and has a unique role in speech processing.
- Published
- 2020
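The speech-tracking analysis in result 10 can be thought of as a lagged correlation between an electrode's high-frequency-band envelope and the speech spectral envelope. The sketch below uses synthetic signals and an assumed common sampling rate; it illustrates the general idea, not the paper's actual method.

```python
# Lagged correlation between a neural HFB envelope and a speech envelope.
# Both signals are synthetic; the neural envelope is built to lag speech
# by ~120 ms so the peak of the correlation function is interpretable.
import numpy as np

FS = 100  # both envelopes assumed downsampled to 100 Hz

def lagged_correlation(neural_env, speech_env, max_lag_s=0.5, fs=FS):
    lags = np.arange(-int(max_lag_s * fs), int(max_lag_s * fs) + 1)
    r = np.array([np.corrcoef(np.roll(speech_env, lag), neural_env)[0, 1]
                  for lag in lags])
    return lags / fs, r

rng = np.random.default_rng(5)
speech_env = np.abs(rng.standard_normal(60 * FS))
neural_env = np.roll(speech_env, 12) + rng.standard_normal(60 * FS)

lags_s, r = lagged_correlation(neural_env, speech_env)
print(f"peak correlation r={r.max():.2f} at lag {lags_s[np.argmax(r)]*1000:.0f} ms")
```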
11. Brain-optimized extraction of complex sound features that drive continuous auditory perception
- Author
Zachary V. Freudenburg, Marcel A. J. van Gerven, Umut Güçlü, Julia Berezutskaya, and Nick F. Ramsey
- Subjects
Auditory perception, speech perception, auditory cortex, phonetics, bioacoustics, electrocorticography, functional magnetic resonance imaging, artificial neural networks, speech signal processing, computational neuroscience, Language in Interaction
- Abstract
Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, where new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties, with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and involvement of cortical sites outside of this region during audiovisual speech perception. Author summary: A lot remains unknown regarding how the human brain processes sound in a naturalistic setting, for example when talking to a friend or watching a movie. Many theoretical frameworks have been developed in an attempt to explain this process, yet we still lack a comprehensive understanding of the brain mechanisms that support continuous auditory processing. Here we present a new type of framework in which we seek to explain the brain responses to sound by relying on few theoretical assumptions and instead learn about the brain mechanisms of auditory processing with a 'data-driven' approach. Our approach is based on applying a deep artificial neural network directly to predicting the brain responses evoked by a soundtrack of a movie. We show that our framework provides good prediction accuracy of the observed neural activity and performs well on novel brain and audio data. In addition, we show that our model learns interpretable auditory features that link well to the observed neural dynamics, particularly during speech perception. This framework can easily be applied to external audio and brain data and is therefore unique in its potential to address various questions about auditory perception in a completely data-driven way.
- Published
- 2020
12. Cortical network responses map onto data-driven features that capture visual semantics of movie fragments
- Author
Nick F. Ramsey, Zachary V. Freudenburg, Julia Berezutskaya, Luca Ambrogioni, Umut Güçlü, and Marcel A. J. van Gerven
- Subjects
Semantics, perception, cognition, cerebral cortex, brain mapping, artificial neural network, natural language processing, neural encoding, information processing, Language in Interaction
- Abstract
Research on how the human brain extracts meaning from sensory input relies in principle on methodological reductionism. In the present study, we adopt a more holistic approach by modeling the cortical responses to semantic information that was extracted from the visual stream of a feature film, employing artificial neural network models. Advances in both computer vision and natural language processing were utilized to extract the semantic representations from the film by combining perceptual and linguistic information. We tested whether these representations were useful in studying the human brain data. To this end, we collected electrocorticography responses to a short movie from 37 subjects and fitted their cortical patterns across multiple regions using the semantic components extracted from film frames. We found that individual semantic components reflected fundamental semantic distinctions in the visual input, such as presence or absence of people, human movement, landscape scenes, human faces, etc. Moreover, each semantic component mapped onto a distinct functional cortical network involving high-level cognitive regions in occipitotemporal, frontal and parietal cortices. The present work demonstrates the potential of the data-driven methods from information processing fields to explain patterns of cortical responses, and contributes to the overall discussion about the encoding of high-level perceptual information in the human brain.
- Published
- 2020
13. High-frequency band temporal dynamics in response to a grasp force task
- Author
Zachary V. Freudenburg, Erik J. Aarnoutse, Mariana P. Branco, Nick F. Ramsey, Mariska J. Vansteensel, and Simon H. Geukes
- Subjects
Electrocorticography, frequency band, hand strength, grasp, reaction time, epilepsy, neuroprosthetics, decoding methods, brain–computer interface
- Abstract
OBJECTIVE: Brain-computer interfaces (BCIs) are being developed to restore reach and grasping movements of paralyzed individuals. Recent studies have shown that the kinetics of grasping movement, such as grasp force, can be successfully decoded from electrocorticography (ECoG) signals, and that the high-frequency band (HFB) power changes provide discriminative information that contribute to an accurate decoding of grasp force profiles. However, as the models used in these studies contained simultaneous information from multiple spectral features over multiple areas in the brain, it remains unclear what parameters of movement and force are encoded by the HFB signals and how these are represented temporally and spatially in the SMC. APPROACH: To investigate this, and to gain insight in the temporal dynamics of the HFB during grasping, we continuously modelled the ECoG HFB response recorded from nine individuals with epilepsy temporarily implanted with ECoG grids, who performed three different grasp force tasks. MAIN RESULTS: We show that a model based on the force onset and offset consistently provides a better fit to the HFB power responses when compared with a model based on the force magnitude, irrespective of electrode location. SIGNIFICANCE: Our results suggest that HFB power, although potentially useful for continuous decoding, is more closely related to the changes in movement. This finding may potentially contribute to the more natural decoding of grasping movement in neural prosthetics.
- Published
- 2019
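The central comparison in result 13 — does HFB power follow force magnitude or force onsets/offsets? — can be framed as fitting two competing regressors to the HFB time course and comparing explained variance. The synthetic example below is deliberately built so that the transient model wins, mirroring the paper's conclusion; all signals and rates are assumptions.

```python
# Compare a force-magnitude regressor with an onset/offset (transient)
# regressor as explanations of an HFB power time course. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

FS = 25                                   # assumed HFB power sampling rate (Hz)
t = np.arange(0, 60, 1 / FS)

force = (np.sin(2 * np.pi * 0.2 * t) > 0).astype(float)   # on/off force blocks
onset_offset = np.abs(np.diff(force, prepend=force[0]))   # spikes at force edges
rng = np.random.default_rng(6)
hfb = onset_offset + 0.1 * rng.standard_normal(t.size)    # HFB tracks transients

r2_magnitude = LinearRegression().fit(force[:, None], hfb).score(force[:, None], hfb)
r2_transient = LinearRegression().fit(onset_offset[:, None], hfb).score(onset_offset[:, None], hfb)
print(f"R2 force magnitude: {r2_magnitude:.2f}, R2 onset/offset: {r2_transient:.2f}")
```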
14. Optimization of sampling rate and smoothing improves classification of high frequency power in electrocorticographic brain signals
- Author
Zachary V. Freudenburg, Erik J. Aarnoutse, Mariana P. Branco, Mariska J. Vansteensel, and Nick F. Ramsey
- Subjects
Sampling (signal processing), smoothing, wavelet, noise (signal processing), pattern recognition, brain–computer interface
- Abstract
Objective: High-frequency band (HFB) activity, measured using implanted sensors over the cortex, is increasingly considered as a feature for the study of brain function and the design of neural implants, such as Brain-Computer Interfaces (BCIs). One common way of extracting these power signals is using a wavelet dictionary, which involves the selection of different temporal sampling and temporal smoothing parameters, such that the resulting HFB signal best represents the temporal features of the neuronal event of interest. Typically, the use of neuro-electrical signals for closed-loop BCI control requires a certain level of signal downsampling and smoothing in order to remove uncorrelated noise, optimize performance and provide fast feedback. However, a fixed setting of the sampling and smoothing parameters may lead to a suboptimal representation of the underlying neural responses and poor BCI control. This problem can be resolved with a systematic assessment of parameter settings. Approach: With classification of HFB power responses as the performance measure, different combinations of temporal sampling and temporal smoothing values were applied to data from sensory and motor tasks recorded with high-density and standard clinical electrocorticography (ECoG) grids in 12 epilepsy patients. Main results: The results suggest that classification of HFB ECoG responses is best performed with high temporal sampling and subsequent smoothing. For the paradigms used in this study, optimal temporal sampling ranged from 29 Hz to 50 Hz. Regarding optimal smoothing, values were similar between tasks (0.1-0.9 s), except for executed complex hand gestures, for which two possible optimal smoothing windows were found (0.4-0.6 s and 0.9-2.7 s). Significance: The range of optimal values indicates that parameter optimization depends on the functional paradigm and may be subject-specific. Our results advocate a methodical assessment of parameter settings for optimal decodability of ECoG signals.
- Published
- 2019
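Result 14 describes a systematic sweep over temporal sampling and smoothing settings scored by classification accuracy. A generic version of such a sweep, with placeholder trials and assumed candidate values, might look as follows.

```python
# Grid search over downsampling rate and smoothing window for an HFB power
# feature, scored by cross-validated classification accuracy. All data and
# candidate values are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
FS = 200                                    # assumed HFB power sampling rate (Hz)
trials = rng.standard_normal((60, 2 * FS))  # 60 trials x 2 s of HFB power
labels = np.repeat([0, 1], 30)
trials[labels == 1] += 0.3                  # toy class difference

best = None
for new_fs in (10, 25, 50, 100):
    step = FS // new_fs
    for smooth_s in (0.1, 0.3, 0.5, 0.9):
        width = max(1, int(smooth_s * FS))
        feat = uniform_filter1d(trials, size=width, axis=-1)[:, ::step]
        acc = cross_val_score(LogisticRegression(max_iter=1000), feat, labels, cv=5).mean()
        if best is None or acc > best[0]:
            best = (acc, new_fs, smooth_s)

print(f"best accuracy {best[0]:.2f} at {best[1]} Hz sampling, {best[2]} s smoothing")
```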
16. Repeated Vowel Production Affects Features of Neural Activity in Sensorimotor Cortex
- Author
E. Salari, Zachary V. Freudenburg, Nick F. Ramsey, and Mariska J. Vansteensel
- Subjects
Speech, vowel, repetition, utterance, movement, ECoG, electrocorticography, sensorimotor cortex, brain mapping, brain–computer interface
- Abstract
The sensorimotor cortex is responsible for the generation of movements, and interest in using this area for decoding speech with brain–computer interfaces has increased recently. Speech decoding is challenging, however, since the relationship between neural activity and motor actions is not completely understood. Non-linearity between neural activity and movement has been found, for instance, for simple finger movements. Despite equal motor output, neural activity amplitudes are affected by preceding movements and the time between movements. It is unknown if neural activity is also affected by preceding motor actions during speech. We addressed this issue using electrocorticographic high frequency band (HFB; 75–135 Hz) power changes in the sensorimotor cortex during discrete vowel generation. Three subjects with temporarily implanted electrode grids produced the /i/ vowel at repetition rates of 1, 1.33 and 1.66 Hz. For every repetition, the HFB power amplitude was determined. During the first utterance, most electrodes showed a large HFB power peak, which decreased for subsequent utterances. This result could not be explained by differences in performance. With increasing duration between utterances, more electrodes showed an equal response to all repetitions, suggesting that the duration between vowel productions influences the effect of previous productions on sensorimotor cortex activity. Our findings correspond with previous studies for finger movements and bear relevance for the development of brain-computer interfaces that employ speech decoding based on brain signals, in that past utterances will need to be taken into account for these systems to work accurately.
- Published
- 2019
17. GridLoc: An automatic and unsupervised localization method for high-density ECoG grids
- Author
Zachary V. Freudenburg, Mariska J. Vansteensel, Nick F. Ramsey, Michael Leibbrand, and Mariana P. Branco
- Subjects
Electrode localization, electrocorticography, high-density, high-frequency band, resting-state, functional anatomy, awake surgery, signal processing, brain mapping
- Abstract
Precise localization of electrodes is essential in the field of high-density (HD) electrocorticography (ECoG) brain signal analysis in order to accurately interpret the recorded activity in relation to functional anatomy. Current localization methods for subchronically implanted HD electrode grids involve post-operative imaging. However, for situations where post-operative imaging is not available, such as during acute measurements in awake surgery, electrode localization is complicated. Intra-operative photographs may be informative, but not for electrode grids positioned partially or fully under the skull. Here we present an automatic and unsupervised method to localize HD electrode grids that does not require post-operative imaging. The localization method, named GridLoc, is based on the hypothesis that the anatomical and vascular brain structures under the ECoG electrodes have an effect on the amplitude of the recorded ECoG signal. More specifically, we hypothesize that the spatial match between resting-state high-frequency band power (45–120 Hz) patterns over the grid and the anatomical features of the brain under the electrodes, such as the presence of sulci and larger blood vessels, can be used for adequate HD grid localization. We validate this hypothesis and compare the GridLoc results with electrode locations determined with post-operative imaging and/or photographs in 8 patients implanted with HD-ECoG grids. Locations agreed with an average difference of 1.94 ± 0.11 mm, which is comparable to differences reported earlier between post-operative imaging and photograph methods. The results suggest that resting-state high-frequency band activity can be used for accurate localization of HD grid electrodes on a pre-operative MRI scan and that GridLoc provides a convenient alternative to methods that rely on post-operative imaging or intra-operative photographs.
- Published
- 2018
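The core matching step behind GridLoc (result 17) is a spatial correlation between the measured resting-state HFB power pattern of the grid and an anatomy-derived predicted pattern at each candidate grid position. The sketch below illustrates only that matching step with synthetic patterns; deriving the predicted patterns from the pre-operative MRI is the part the paper actually contributes and is not shown here.

```python
# Pick the candidate grid position whose anatomy-derived predicted pattern
# best matches the measured resting-state HFB power pattern. All patterns
# here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(8)
n_electrodes = 64
measured = rng.standard_normal(n_electrodes)    # resting-state HFB power per electrode

# Predicted patterns for 200 candidate grid positions on the cortical surface.
candidates = rng.standard_normal((200, n_electrodes))
candidates[137] = measured + 0.2 * rng.standard_normal(n_electrodes)  # "true" position

spatial_r = np.array([np.corrcoef(measured, c)[0, 1] for c in candidates])
best_position = int(np.argmax(spatial_r))
print(f"best candidate position: {best_position} (r={spatial_r[best_position]:.2f})")
```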
18. The influence of prior pronunciations on sensorimotor cortex activity patterns during vowel production
- Author
E. Salari, Nick F. Ramsey, Zachary V. Freudenburg, and Mariska J. Vansteensel
- Subjects
Speech, vowel, pronunciation, movement, sensorimotor cortex, motor cortex, electrocorticography, epilepsy, brain–computer interface
- Abstract
Objective In recent years, brain-computer interface (BCI) systems have been investigated for their potential as a communication device to assist people with severe paralysis. Decoding speech sensorimotor cortex activity is a promising avenue for the generation of BCI control signals, but is complicated by variability in neural patterns, leading to suboptimal decoding. We investigated whether neural pattern variability associated with sound pronunciation can be explained by prior pronunciations and determined to what extent prior speech affects BCI decoding accuracy. Approach Neural patterns in speech motor areas were evaluated with electrocorticography in five epilepsy patients, who performed a simple speech task that involved pronunciation of the /i/ sound, preceded by either silence, the /a/ sound or the /u/ sound. Main results The neural pattern related to the /i/ sound depends on previous sounds and is therefore associated with multiple distinct sensorimotor patterns, which is likely to reflect differences in the movements towards this sound. We also show that these patterns still contain a commonality that is distinct from the other vowel sounds (/a/ and /u/). Classification accuracies for the decoding of different sounds do increase, however, when the multiple patterns for the /i/ sound are taken into account. Simply including multiple forms of the /i/ vowel in the training set for the creation of a single /i/ model performs as well as training individual models for each /i/ variation. Significance Our results are of interest for the development of BCIs that aim to decode speech sounds from the sensorimotor cortex, since they argue that a multitude of cortical activity patterns associated with speech movements can be reduced to a basis set of models which reflect meaningful language units (vowels), yet it is important to account for the variety of neural patterns associated with a single sound in the training process.
- Published
- 2018
19. Spatial-Temporal Dynamics of the Sensorimotor Cortex: Sustained and Transient Activity
- Author
Zachary V. Freudenburg, E. Salari, Nick F. Ramsey, and Mariska J. Vansteensel
- Subjects
Sensorimotor cortex, movement, speech, neurons, electrocorticography, electroencephalography, brain mapping
- Abstract
How the sensorimotor cortex is organized with respect to controlling different features of movement is unclear. One unresolved question concerns the relation between the duration of an action and the duration of the associated neuronal activity change in the sensorimotor cortex. Using subdural electrocorticography electrodes, we investigated in five subjects whether high-frequency band (HFB; 75–135 Hz) power changes have a transient or sustained relation to speech duration during pronunciation of the Dutch /i/ vowel with different durations. We showed that the neuronal activity patterns recorded from the sensorimotor cortex can be directly related to action duration in some locations, whereas in other locations, during the same action, neuronal activity is transient, with a peak in HFB activity at movement onset and/or offset. These data shed light on the neural underpinnings of motor actions, and we discuss the possible mechanisms underlying these different response types.
- Published
- 2018
20. Decoding spoken phonemes from sensorimotor cortex with High-Density ECoG grids
- Author
Mariska J. Vansteensel, E. Salari, Zachary V. Freudenburg, Nick F. Ramsey, Erik J. Aarnoutse, and Martin G. Bleichner
- Subjects
Speech, phonetics, spoken language, voice-onset time, sensorimotor cortex, central sulcus, matched filter, support vector machine, electrocorticography, decoding methods, brain–computer interface
- Abstract
For people who cannot communicate due to severe paralysis or involuntary movements, technology that decodes intended speech from the brain may offer an alternative means of communication. If decoding proves to be feasible, intracranial Brain-Computer Interface systems can be developed that translate decoded speech into computer-generated speech or into instructions for controlling assistive devices. Recent advances suggest that such decoding may be feasible from sensorimotor cortex, but it is not clear how this challenge can be approached best. One approach is to identify and discriminate elements of spoken language, such as phonemes. We investigated the feasibility of decoding four spoken phonemes from the sensorimotor face area, using electrocorticographic signals obtained with high-density electrode grids. Several decoding algorithms, including spatiotemporal matched filters, spatial matched filters and support vector machines, were compared. Phonemes could be classified correctly at a level of over 75% with spatiotemporal matched filters. Support vector machine analysis reached a similar level, but spatial matched filters yielded significantly lower scores. The most informative electrodes were clustered along the central sulcus. The highest scores were achieved from time windows centered around voice onset time, but a 500 ms window before onset time could also be classified significantly. The results suggest that phoneme production involves a sequence of robust and reproducible activity patterns on the cortical surface. Importantly, decoding requires inclusion of temporal information to capture the rapid shifts of robust patterns associated with articulator muscle group contraction during production of a phoneme. The high classification scores are likely to be enabled by the use of high-density grids, and by the use of discrete phonemes. Implications for use in Brain-Computer Interfaces are discussed.
- Published
- 2017
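A spatiotemporal matched filter of the kind compared in result 20 amounts to correlating each trial's full channels-by-time pattern with a per-class template and picking the best match. A self-contained sketch on synthetic data (four classes, as in the study, but otherwise arbitrary dimensions):

```python
# Spatiotemporal matched-filter classification: one template per class
# (the mean training trial, channels x time); a test trial is assigned to
# the template it correlates with most. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(9)
n_classes, n_trials, n_chan, n_samp = 4, 20, 32, 100
X = rng.standard_normal((n_classes * n_trials, n_chan, n_samp))
y = np.repeat(np.arange(n_classes), n_trials)
for c in range(n_classes):                       # inject a class-specific pattern
    X[y == c] += 0.3 * rng.standard_normal((n_chan, n_samp))

train = np.arange(len(y)) % 2 == 0               # simple even/odd split
templates = np.stack([X[train & (y == c)].mean(axis=0) for c in range(n_classes)])

def classify(trial, templates):
    r = [np.corrcoef(trial.ravel(), t.ravel())[0, 1] for t in templates]
    return int(np.argmax(r))

pred = np.array([classify(trial, templates) for trial in X[~train]])
print(f"matched-filter accuracy: {(pred == y[~train]).mean():.2f} (chance 0.25)")
```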
21. Neural tuning to low-level features of speech throughout the perisylvian cortex
- Author
Nick F. Ramsey, Umut Güçlü, Julia Berezutskaya, Zachary V. Freudenburg, and Marcel A. J. van Gerven
- Subjects
Speech perception, speech comprehension, auditory cortex, superior temporal gyrus, inferior frontal gyrus, temporal cortex, language processing in the brain, neural encoding, phonetics, electrocorticography, magnetic resonance imaging, Language in Interaction
- Abstract
Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of “coarse” speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension.
- Published
- 2017
22. Fully implanted brain–computer interface in a locked-in patient with ALS
- Author
Nick F. Ramsey, Mariska J. Vansteensel, Mariana P. Branco, Max van den Boom, Thomas H. Ottens, Elmar Pels, Sacha Leinders, Martin G. Bleichner, Peter H. Gosselaar, Erik J. Aarnoutse, Peter C. van Rijen, Timothy J. Denison, and Zachary V. Freudenburg
- Subjects
Amyotrophic lateral sclerosis, quadriplegia, aphonia, paralysis, communication aids for disabled, subdural electrodes, motor cortex, electrode placement, neurological rehabilitation, brain–computer interface
- Abstract
Options for people with severe paralysis who have lost the ability to communicate orally are limited. We describe a method for communication in a patient with late-stage amyotrophic lateral sclerosis (ALS), involving a fully implanted brain-computer interface that consists of subdural electrodes placed over the motor cortex and a transmitter placed subcutaneously in the left side of the thorax. By attempting to move the hand on the side opposite the implanted electrodes, the patient accurately and independently controlled a computer typing program 28 weeks after electrode placement, at the equivalent of two letters per minute. The brain-computer interface offered autonomous communication that supplemented and at times supplanted the patient's eye-tracking device. (Funded by the Government of the Netherlands and the European Union; ClinicalTrials.gov number, NCT02224469.)
- Published
- 2016
23. Decoding hand gestures from primary somatosensory cortex using high-density ECoG
- Author
Mariana P. Branco, Mariska J. Vansteensel, Zachary V. Freudenburg, Erik J. Aarnoutse, Nick F. Ramsey, and Martin G. Bleichner
- Subjects
Decoding, gestures, hand, sign language, primary somatosensory cortex, primary motor cortex, somatosensory system, gamma rhythm, wavelet analysis, electrocorticography, epilepsy, brain–computer interface
- Abstract
Electrocorticography (ECoG) based Brain-Computer Interfaces (BCIs) have been proposed as a way to restore and replace motor function or communication in severely paralyzed people. To date, most motor-based BCIs have either focused on the sensorimotor cortex as a whole or on the primary motor cortex (M1) as a source of signals for this purpose. Still, target areas for BCI are not confined to M1, and more brain regions may provide suitable BCI control signals. A logical candidate is the primary somatosensory cortex (S1), which not only shares similar somatotopic organization to M1, but also has been suggested to have a role beyond sensory feedback during movement execution. Here, we investigated whether four complex hand gestures, taken from the American sign language alphabet, can be decoded exclusively from S1 using both spatial and temporal information. For decoding, we used the signal recorded from a small patch of cortex with subdural high-density (HD) grids in five patients with intractable epilepsy. Notably, we introduce a new method of trial alignment based on the increase of the electrophysiological response, which virtually eliminates the confounding effects of systematic and non-systematic temporal differences within and between gestures execution. Results show that S1 classification scores are high (76%), similar to those obtained from M1 (74%) and sensorimotor cortex as a whole (85%), and significantly above chance level (25%). We conclude that S1 offers characteristic spatiotemporal neuronal activation patterns that are discriminative between gestures, and that it is possible to decode gestures with high accuracy from a very small patch of cortex using subdurally implanted HD grids. The feasibility of decoding hand gestures using HD-ECoG grids encourages further investigation of implantable BCI systems for direct interaction between the brain and external devices with multiple degrees of freedom.
- Published
- 2016
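Result 23 introduces trial alignment based on the rise of the electrophysiological response rather than on the cue. A hedged sketch of that idea: re-align each trial to the first threshold crossing of its high-frequency-band envelope. Threshold, window and data below are illustrative assumptions.

```python
# Re-align cue-locked trials to the onset of the neural response, defined
# here as the first sample where the HFB envelope exceeds a threshold.
# Synthetic trials with variable response latencies are used for illustration.
import numpy as np

FS = 100                                       # assumed envelope sampling rate (Hz)
rng = np.random.default_rng(10)
n_trials, n_samp = 30, 3 * FS                  # 3 s cue-locked trials
trials = 0.1 * rng.standard_normal((n_trials, n_samp))
onsets_true = rng.integers(int(0.3 * FS), int(1.2 * FS), n_trials)
for i, o in enumerate(onsets_true):            # variable-latency 1 s response
    trials[i, o:o + FS] += 1.0

def realign(trials, fs=FS, threshold=0.5, win_s=1.0):
    aligned = []
    for tr in trials:
        above = np.flatnonzero(tr > threshold)
        onset = above[0] if above.size else 0
        seg = tr[onset:onset + int(win_s * fs)]
        if seg.size == int(win_s * fs):
            aligned.append(seg)
    return np.stack(aligned)

aligned = realign(trials)
print(f"{aligned.shape[0]} trials re-aligned to response onset")
```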
24. Decoding Motor Signals From the Pediatric Cortex: Implications for Brain-Computer Interfaces in Children
- Author
Mohit Sharma, David T. Bundy, Nicholas Anderson, Charles M. Gaona, Matthew D. Smyth, Jonathan D. Breshears, David D. Limbrick, William D. Smart, Eric C. Leuthardt, Zachary V. Freudenburg, John M. Zempel, and Jarod L. Roland
- Subjects
Electrocorticography, brain–computer interface, neuroprosthetics, intractable epilepsy, cerebral palsy, motor cortex, user-computer interface, feasibility studies
- Abstract
OBJECTIVE: To demonstrate the decodable nature of pediatric brain signals for the purpose of neuroprosthetic control. We hypothesized that children would achieve levels of brain-derived computer control comparable to performance previously reported for adults. PATIENTS AND METHODS: Six pediatric patients with intractable epilepsy who were invasively monitored underwent screening for electrocortical control signals associated with specific motor or phoneme articulation tasks. Subsequently, patients received visual feedback as they used these associated electrocortical signals to direct one dimensional cursor movement to a target on a screen. RESULTS: All patients achieved accuracies between 70% and 99% within 9 minutes of training using the same screened motor and articulation tasks. Two subjects went on to achieve maximum accuracies of 73% and 100% using imagined actions alone. Average mean and maximum performance for the 6 pediatric patients was comparable to that of 5 adults. The mean accuracy of the pediatric group was 81% (95% confidence interval [CI]: 71.5–90.5) over a mean training time of 11.6 minutes, whereas the adult group had a mean accuracy of 72% (95% CI: 61.2–84.3) over a mean training time of 12.5 minutes. Maximum performance was also similar between the pediatric and adult groups (89.6% [95% CI: 83–96.3] and 88.5% [95% CI: 77.1–99.8], respectively). CONCLUSIONS: Similarly to adult brain signals, pediatric brain signals can be decoded and used for BCI operation. Therefore, BCI systems developed for adults likely hold similar promise for children with motor disabilities.
- Published
- 2011
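The group comparison reported above rests on mean accuracies and 95% confidence intervals computed across subjects. As a hedged illustration of that arithmetic, one could proceed as follows; the per-subject accuracies in the example are hypothetical, not values from the study.

```python
# Illustrative only: hypothetical per-subject accuracies, not the study's data.
import numpy as np
from scipy import stats

def mean_and_ci(accuracies, confidence=0.95):
    """Mean accuracy and a t-distribution-based confidence interval across subjects."""
    acc = np.asarray(accuracies, dtype=float)
    mean = acc.mean()
    half_width = stats.sem(acc) * stats.t.ppf((1 + confidence) / 2, df=acc.size - 1)
    return mean, (mean - half_width, mean + half_width)

pediatric = [0.74, 0.78, 0.82, 0.85, 0.79, 0.88]   # hypothetical accuracies for 6 children
print(mean_and_ci(pediatric))
```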
25. Nonuniform High-Gamma (60–500 Hz) Power Changes Dissociate Cognitive Task and Anatomy in Human Cortex
- Author
-
Zachary V. Freudenburg, Gerwin Schalk, Eric C. Leuthardt, Dennis L. Barbour, Mohit Sharma, Jonathan D. Breshears, Jarod L. Roland, Charles M. Gaona, and David T. Bundy
- Subjects
Adult ,Male ,Time Factors ,Adolescent ,Population ,Neuropsychological Tests ,Electroencephalography ,Vocabulary ,Brain mapping ,Superior temporal gyrus ,Reaction Time ,medicine ,Humans ,education ,Evoked Potentials ,Cerebral Cortex ,Analysis of Variance ,Brain Mapping ,education.field_of_study ,Epilepsy ,medicine.diagnostic_test ,Spectrum Analysis ,General Neuroscience ,Cognition ,Articles ,Middle Aged ,Brain Waves ,Electrophysiology ,Amplitude ,medicine.anatomical_structure ,Acoustic Stimulation ,Nonlinear Dynamics ,Cerebral cortex ,Female ,sense organs ,Cognition Disorders ,Psychology ,Neuroscience ,Photic Stimulation - Abstract
High-gamma-band (>60 Hz) power changes in cortical electrophysiology are a reliable indicator of focal, event-related cortical activity. Despite discoveries of oscillatory subthreshold and synchronous suprathreshold activity at the cellular level, there is an increasingly popular view that high-gamma-band amplitude changes recorded from cellular ensembles are the result of asynchronous firing activity that yields wideband and uniform power increases. Others have demonstrated independence of power changes in the low- and high-gamma bands, but to date, no studies have shown evidence of any such independence above 60 Hz. Based on nonuniformities in time-frequency analyses of electrocorticographic (ECoG) signals, we hypothesized that induced high-gamma-band (60–500 Hz) power changes are more heterogeneous than currently understood. Using single-word repetition tasks in six human subjects, we showed that functional responsiveness of different ECoG high-gamma sub-bands can discriminate cognitive task (e.g., hearing, reading, speaking) and cortical locations. Power changes in these sub-bands of the high-gamma range are consistently present within single trials and have statistically different time courses within the trial structure. Moreover, when consolidated across all subjects within three task-relevant anatomic regions (sensorimotor, Broca's area, and superior temporal gyrus), these behavior- and location-dependent power changes evidenced nonuniform trends across the population. Together, the independence and nonuniformity of power changes across a broad range of frequencies suggest that a new approach to evaluating high-gamma-band cortical activity is necessary. These findings show that in addition to time and location, frequency is another fundamental dimension of high-gamma dynamics.
- Published
- 2011
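A core step behind the findings above is splitting the broad high-gamma range into sub-bands and comparing their power time courses. The sketch below shows one conventional way to extract such envelopes (Butterworth band-pass plus Hilbert transform); the sampling rate and band edges are assumptions, not the study's exact parameters.

```python
# Hedged sketch: high-gamma sub-band power envelopes for a single ECoG channel.
# The sampling rate and sub-band edges below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 2000                                                     # assumed sampling rate (Hz)
SUB_BANDS = [(60, 100), (100, 200), (200, 300), (300, 500)]   # illustrative splits

def band_power_envelope(ecog, low, high, fs=FS):
    """Band-pass one channel and return its instantaneous power envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, ecog)
    return np.abs(hilbert(filtered)) ** 2                     # analytic-signal power

def sub_band_time_courses(ecog, fs=FS):
    """Return a (bands x samples) array of power envelopes for one channel."""
    return np.vstack([band_power_envelope(ecog, lo, hi, fs) for lo, hi in SUB_BANDS])
```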
26. Real-time Naive Learning of Neural Correlates in ECoG Electrophysiology
- Author
-
Zachary V. Freudenburg, N.F. Ramsey, Mark Wronkiewicz, Eric C. Leuthardt, Robert Pless, and William D. Smart
- Subjects
Neural correlates of consciousness ,Information Systems and Management ,Artificial Intelligence ,Computer science ,Neuroscience ,Computer Science Applications - Published
- 2011
27. Classification of mouth movements using 7 T fMRI
- Author
-
J.M. Jansma, Zachary V. Freudenburg, Mathijs Raemaekers, E. Salari, Martin G. Bleichner, and Nick F. Ramsey
- Subjects
Male ,medicine.medical_specialty ,Adolescent ,Movement ,Interface (computing) ,Biomedical Engineering ,computer.software_genre ,Brain mapping ,Young Adult ,Cellular and Molecular Neuroscience ,Physical medicine and rehabilitation ,High resolution fMRI ,Voxel ,motor cortex ,Cortex (anatomy) ,medicine ,Humans ,Mouth movements ,Brain–computer interface ,Brain Mapping ,Mouth ,Movement (music) ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,Brain-Computer Interfaces ,Female ,Sensorimotor Cortex ,Psychology ,Neuroscience ,computer ,Brain computer interface ,Motor cortex - Abstract
Objective. A brain-computer interface (BCI) is an interface that uses signals from the brain to control a computer. BCIs will likely become important tools for severely paralyzed patients to restore interaction with the environment. The sensorimotor cortex is a promising target brain region for a BCI due to the detailed topography and minimal functional interference with other important brain processes. Previous studies have shown that attempted movements in paralyzed people generate neural activity that strongly resembles actual movements. Hence decodability for BCI applications can be studied in able-bodied volunteers with actual movements. Approach. In this study we tested whether mouth movements provide adequate signals in the sensorimotor cortex for a BCI. The study was executed using fMRI at 7 T to ensure relevance for BCI with cortical electrodes, as 7 T measurements have been shown to correlate well with electrocortical measurements. Twelve healthy volunteers executed four mouth movements (lip protrusion, tongue movement, teeth clenching, and the production of a larynx activating sound) while in the scanner. Subjects performed a training and a test run. Single trials were classified based on the Pearson correlation values between the activation patterns per trial type in the training run and single trials in the test run in a 'winner-takes-all' design. Main results. Single trial mouth movements could be classified with 90% accuracy. The classification was based on an area with a volume of about 0.5 cc, located on the sensorimotor cortex. If voxels were limited to the surface, which is accessible for electrode grids, classification accuracy was still very high (82%). Voxels located on the precentral cortex performed better (87%) than the postcentral cortex (72%). Significance. The high reliability of decoding mouth movements suggests that attempted mouth movements are a promising candidate for BCI in paralyzed people.
- Published
- 2015
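The 'winner-takes-all' classification described above assigns each test trial to the movement whose training-run activation pattern it correlates with most strongly. A minimal sketch of that rule is given below; the data shapes and example values are assumptions for illustration.

```python
# Hedged sketch of a winner-takes-all Pearson-correlation classifier.
# Pattern sizes and example data are illustrative, not the study's dimensions.
import numpy as np

def winner_takes_all(train_patterns, test_trial):
    """train_patterns: dict mapping movement name -> 1-D voxel pattern (training run).
    test_trial: 1-D voxel pattern of a single test-run trial (same voxel order).
    Returns the movement whose pattern correlates most strongly with the trial."""
    scores = {name: np.corrcoef(pattern, test_trial)[0, 1]
              for name, pattern in train_patterns.items()}
    return max(scores, key=scores.get)

# Example with synthetic data: four movement templates over 500 voxels.
rng = np.random.default_rng(0)
templates = {m: rng.standard_normal(500) for m in ["lip", "tongue", "teeth", "larynx"]}
trial = templates["tongue"] + 0.5 * rng.standard_normal(500)  # noisy 'tongue' trial
print(winner_takes_all(templates, trial))                      # expected: 'tongue'
```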
28. Fully implanted brain signal recording device for communication in severe paralysis reveals feasibility of chronic home use of neuronal activity
- Author
-
N.F. Ramsey, Mariana P. Branco, Timothy J. Denison, Elmar Pels, Zachary V. Freudenburg, M.A. Van Den Boom, Sacha Leinders, Erik J. Aarnoutse, and Mariska J. Vansteensel
- Subjects
Pharmacology ,medicine.medical_specialty ,business.industry ,030204 cardiovascular system & hematology ,Home use ,Signal ,03 medical and health sciences ,Psychiatry and Mental health ,0302 clinical medicine ,Physical medicine and rehabilitation ,Neurology ,Paralysis ,medicine ,Premovement neuronal activity ,Pharmacology (medical) ,Neurology (clinical) ,medicine.symptom ,business ,030217 neurology & neurosurgery ,Biological Psychiatry - Published
- 2017
29. Characterization of the effects of the human dura on macro- and micro-electrocorticographic recordings
- Author
-
Amy L. Daitch, David T. Bundy, Zachary V. Freudenburg, Eric C. Leuthardt, Mohit Sharma, Charles M. Gaona, Nicholas Szrama, Erik R. Zellmer, Carl D. Hacker, and Daniel W. Moran
- Subjects
Epidural Space ,Models, Anatomic ,Materials science ,Dura mater ,Biomedical Engineering ,Subdural Space ,Electroencephalography ,Prosthesis Design ,Signal ,Article ,Cellular and Molecular Neuroscience ,medicine ,Humans ,Subdural space ,Electrocorticography ,Evoked Potentials ,Brain–computer interface ,Cerebral Cortex ,Epilepsy ,medicine.diagnostic_test ,Noise floor ,Epidural space ,Electrodes, Implanted ,medicine.anatomical_structure ,Brain-Computer Interfaces ,Data Interpretation, Statistical ,Dura Mater ,Head ,Microelectrodes ,Algorithms ,Biomedical engineering - Abstract
Objective. Electrocorticography (ECoG) electrodes implanted on the surface of the brain have recently emerged as a potential signal platform for brain-computer interface (BCI) systems. While clinical ECoG electrodes are currently implanted beneath the dura, epidural electrodes could reduce the invasiveness and the potential impact of a surgical site infection. Subdural electrodes, on the other hand, while slightly more invasive, may have better signals for BCI application. Because of this trade-off between risk and benefit for the two electrode positions, the effect of the dura on signal quality must be determined in order to define the optimal implementation for an ECoG BCI system. Approach. This study utilized simultaneously acquired baseline recordings from epidural and subdural ECoG electrodes while patients rested. Both macro-scale (2 mm diameter electrodes with 1 cm inter-electrode distance, one patient) and micro-scale (75 µm diameter electrodes with 1 mm inter-electrode distance, four patients) ECoG electrodes were tested. Signal characteristics were evaluated to determine differences in the spectral amplitude and noise floor. Furthermore, the experimental results were compared to theoretical effects produced by placing epidural and subdural ECoG contacts of different sizes within a finite element model. Main results. The analysis demonstrated that for micro-scale electrodes, subdural contacts have significantly higher spectral amplitudes and reach the noise floor at a higher frequency than epidural contacts. For macro-scale electrodes, while there are statistical differences, these differences are small in amplitude and likely do not represent differences relevant to the ability of the signals to be used in a BCI system. Conclusions. Our findings demonstrate an important trade-off that should be considered in developing a chronic BCI system. While implanting electrodes under the dura is more invasive, it is associated with increased signal quality when recording from micro-scale electrodes with very small sizes and spacing. If recording from larger electrodes, such as those traditionally used clinically, the signal quality of epidural recordings is similar to that of subdural recordings.
- Published
- 2014
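Two of the signal characteristics compared above, spectral amplitude and the frequency at which a recording reaches the noise floor, can be estimated from resting-state data. The sketch below uses a Welch power spectral density and a simple flattening criterion; that criterion is an illustrative stand-in, not the paper's exact definition of the noise floor.

```python
# Hedged sketch: Welch PSD plus a rough noise-floor estimate for one channel.
# The flattening rule (tolerance, top-quarter reference) is an illustrative assumption.
import numpy as np
from scipy.signal import welch

def psd(signal, fs):
    """Welch power spectral density with 1-second windows."""
    freqs, power = welch(signal, fs=fs, nperseg=int(fs))
    return freqs, power

def noise_floor_frequency(freqs, power, tol=1.5):
    """Return the lowest frequency above which the PSD stays within `tol` times
    the median power of the top quarter of the spectrum (a rough flattening test)."""
    floor = np.median(power[freqs > freqs.max() * 0.75])
    flat = power < tol * floor
    for i in range(len(flat)):
        if flat[i:].all():          # spectrum remains flat from this frequency onward
            return freqs[i]
    return freqs[-1]
```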
30. Fast-scale network dynamics in human cortex have specific spectral covariance patterns
- Author
-
Zachary V. Freudenburg, Eric C. Leuthardt, Jonathan D. Breshears, Charles M. Gaona, Mohit Sharma, Robert Pless, and David T. Bundy
- Subjects
Cerebral Cortex ,Multidisciplinary ,Spectral signature ,medicine.diagnostic_test ,Electroencephalography ,Covariance ,Biological Sciences ,Network dynamics ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,Cerebral cortex ,Cortex (anatomy) ,medicine ,Humans ,Covariant transformation ,Psychology ,Neuroscience ,Electrocorticography - Abstract
Whether measured by MRI or direct cortical physiology, infraslow rhythms have defined state invariant cortical networks. The time scales of this functional architecture, however, are unlikely to be able to accommodate the more rapid cortical dynamics necessary for an active cognitive task. Using invasively monitored epileptic patients as a research model, we tested the hypothesis that faster frequencies would spectrally bind regions of cortex as a transient mechanism to enable fast network interactions during the performance of a simple hear-and-repeat speech task. We term these short-lived spectrally covariant networks functional spectral networks (FSNs). We evaluated whether spectrally covariant regions of cortex, which were unique in their spectral signatures, provided a higher degree of task-related information than any single site showing more classic physiologic responses (i.e., single-site amplitude modulation). Taken together, our results showing that FSNs are a more sensitive measure of task-related brain activation and are better able to discern phonemic content strongly support the concept of spectrally encoded interactions in cortex. Moreover, these findings that specific linguistic information is represented in FSNs that have broad anatomic topographies support a more distributed model of cortical processing.
- Published
- 2014
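The study above centers on short-lived networks defined by covariation of spectral content across cortical sites. A simple proxy for such spectral covariance is sketched below: a spectrogram is computed per channel and the channels' log-power spectrograms are correlated pairwise. This conveys the general idea only; it is not the study's procedure for estimating functional spectral networks.

```python
# Hedged sketch: pairwise correlation of time-varying power spectra across channels.
# Window length and frequency cutoff are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

def spectral_covariance(ecog, fs, fmax=200):
    """ecog: (channels x samples) array. Returns a (channels x channels) matrix of
    correlations between the channels' time-varying power spectra below fmax."""
    features = []
    for ch in ecog:
        freqs, times, sxx = spectrogram(ch, fs=fs, nperseg=int(fs / 4))
        sxx = sxx[freqs <= fmax]                       # keep the band of interest
        features.append(np.log(sxx + 1e-12).ravel())   # flattened log-power spectrogram
    return np.corrcoef(np.vstack(features))
```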
31. Stable and dynamic cortical electrophysiology of induction and emergence with propofol anesthesia
- Author
-
Michael S. Avidan, Charles M. Gaona, Mohit Sharma, Rene Tempelhoff, Eric C. Leuthardt, Zachary V. Freudenburg, Jonathan D. Breshears, and Jarod L. Roland
- Subjects
Cerebral Cortex ,Multidisciplinary ,medicine.diagnostic_test ,Consciousness ,Thalamus ,Context (language use) ,Electroencephalography ,Biology ,Biological Sciences ,Electrophysiological Phenomena ,Functional imaging ,Electrophysiology ,Burst suppression ,medicine.anatomical_structure ,Cerebral cortex ,Evoked Potentials, Somatosensory ,medicine ,Humans ,Anesthesia ,Neuroscience ,Electrocorticography ,Propofol - Abstract
The mechanism(s) by which anesthetics reversibly suppress consciousness are incompletely understood. Previous functional imaging studies demonstrated dynamic changes in thalamic and cortical metabolic activity, as well as the maintained presence of metabolically defined functional networks despite the loss of consciousness. However, the invasive electrophysiology associated with these observations has yet to be studied. By recording electrical activity directly from the cortical surface, electrocorticography (ECoG) provides a powerful method to integrate spatial, temporal, and spectral features of cortical electrophysiology not possible with noninvasive approaches. In this study, we report a unique comprehensive recording of invasive human cortical physiology during both induction and emergence from propofol anesthesia. Propofol-induced transitions in and out of consciousness (defined here as responsiveness) were characterized by maintained large-scale functional networks defined by correlated fluctuations of the slow cortical potential (<0.5 Hz).
- Published
- 2010
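The maintained networks described above are defined by correlated fluctuations of the slow cortical potential. Assuming the conventional < 0.5 Hz definition of that band, a minimal sketch of the corresponding correlation networks is given below; the filter order and cutoff are assumptions for illustration.

```python
# Hedged sketch: slow-cortical-potential correlation matrix across ECoG channels.
# The 0.5 Hz cutoff and filter order are assumptions, not the study's parameters.
import numpy as np
from scipy.signal import butter, filtfilt

def scp_correlation_matrix(ecog, fs, cutoff=0.5):
    """ecog: (channels x samples). Returns channel-by-channel correlations of the
    low-pass-filtered (slow cortical potential) signals."""
    b, a = butter(2, cutoff / (fs / 2), btype="lowpass")
    scp = filtfilt(b, a, ecog, axis=1)
    return np.corrcoef(scp)
```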
32. High-frequency band temporal dynamics in response to a grasp force task.
- Author
-
Mariana P Branco, Simon H Geukes, Erik J Aarnoutse, Mariska J Vansteensel, Zachary V Freudenburg, and Nick F Ramsey
- Published
- 2019
- Full Text
- View/download PDF
33. Optimization of sampling rate and smoothing improves classification of high frequency power in electrocorticographic brain signals.
- Author
-
Mariana P Branco, Zachary V Freudenburg, Erik J Aarnoutse, Mariska J Vansteensel, and Nick F Ramsey
- Published
- 2018
- Full Text
- View/download PDF
34. Brain-optimized extraction of complex sound features that drive continuous auditory perception.
- Author
-
Julia Berezutskaya, Zachary V Freudenburg, Umut Güçlü, Marcel A J van Gerven, and Nick F Ramsey
- Subjects
Biology (General) ,QH301-705.5 - Abstract
Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, where new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and involvement of cortical sites outside of this region during audiovisual speech perception.
- Published
- 2020
- Full Text
- View/download PDF
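The study above describes a data-driven neural network that maps a movie soundtrack to measured neural responses. The PyTorch sketch below conveys the general shape of such a sound-to-brain encoding model; the feature dimensionality, layer sizes, and convolutional architecture are assumptions for illustration and do not reproduce the authors' model.

```python
# Hedged sketch: a small 1-D convolutional encoding model from audio features to
# multi-electrode responses. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SoundToBrain(nn.Module):
    def __init__(self, n_audio_features=80, n_electrodes=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_audio_features, 128, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        self.readout = nn.Conv1d(128, n_electrodes, kernel_size=1)

    def forward(self, audio):                       # audio: (batch, features, time)
        return self.readout(self.encoder(audio))    # (batch, electrodes, time)

model = SoundToBrain()
dummy_audio = torch.randn(1, 80, 1000)              # e.g. 80 spectral bands x 1000 frames
predicted = model(dummy_audio)                      # (1, 64, 1000) predicted responses
```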