365 results for "Gallant, Jack L."
Search Results
2. The cortical representation of language timescales is shared between reading and listening
- Author
-
Chen, Catherine, Dupré la Tour, Tom, Gallant, Jack L., Klein, Daniel, and Deniz, Fatma
- Published
- 2024
- Full Text
- View/download PDF
3. Phonemic segmentation of narrative speech in human cerebral cortex
- Author
-
Gong, Xue L, Huth, Alexander G, Deniz, Fatma, Johnson, Keith, Gallant, Jack L, and Theunissen, Frédéric E
- Subjects
Biological Psychology ,Cognitive and Computational Psychology ,Psychology ,Clinical Research ,Behavioral and Social Science ,Neurosciences ,Mental Health ,Brain Disorders ,Humans ,Speech ,Temporal Lobe ,Brain ,Speech Perception ,Brain Mapping ,Magnetic Resonance Imaging ,Cerebral Cortex - Abstract
Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole brain mapping obtained in fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short combinations made of diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we also have identified candidate regions within the speech cortical network where this joint processing occurs.
- Published
- 2023
4. Feature-space selection with banded ridge regression
- Author
-
la Tour, Tom Dupré, Eickenberg, Michael, Nunez-Elizalde, Anwar O, and Gallant, Jack L
- Subjects
Biomedical and Clinical Sciences ,Health Sciences ,Humans ,Regression Analysis ,Linear Models ,Neuroimaging ,Encoding models ,Regularized regression ,Variance decomposition ,Group sparsity ,Hyperparameter optimization ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery ,Biomedical and clinical sciences ,Health sciences - Abstract
Encoding models provide a powerful framework to identify the information represented in brain recordings. In this framework, a stimulus representation is expressed within a feature space and is used in a regularized linear regression to predict brain activity. To account for a potential complementarity of different feature spaces, a joint model is fit on multiple feature spaces simultaneously. To adapt regularization strength to each feature space, ridge regression is extended to banded ridge regression, which optimizes a different regularization hyperparameter per feature space. The present paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model. It also describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces. This feature-space selection leads to better prediction accuracy and to better interpretability. Banded ridge regression is then mathematically linked to a number of other regression methods with similar feature-space selection mechanisms. Finally, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. All implementations are released in an open-source Python package called Himalaya.
- Published
- 2022
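Result 4 above frames banded ridge regression as ordinary ridge with one regularization hyperparameter per feature space; the authors' implementation is the open-source Himalaya package. The sketch below is not the Himalaya API but a minimal NumPy illustration of the idea, with hypothetical feature spaces X1 and X2 and a naive grid search standing in for Himalaya's efficient hyperparameter optimization.

```python
import numpy as np

def fit_banded_ridge(X_bands, y, alphas):
    """Closed-form ridge with one penalty per feature band (list of arrays, one alpha each)."""
    X = np.hstack(X_bands)
    penalty = np.concatenate([np.full(Xb.shape[1], a) for Xb, a in zip(X_bands, alphas)])
    return np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((300, 30)), rng.standard_normal((300, 50))   # two feature spaces
y = X1 @ rng.standard_normal(30) + 0.1 * rng.standard_normal(300)         # only band 1 is predictive
train, test = slice(0, 200), slice(200, 300)

# Naive grid search over per-band penalties, scored on held-out data
# (Himalaya replaces this with efficient gradient-based hyperparameter optimization).
grid = [0.1, 1.0, 10.0, 100.0, 1000.0]
def heldout_mse(alphas):
    w = fit_banded_ridge([X1[train], X2[train]], y[train], alphas)
    return np.mean((np.hstack([X1[test], X2[test]]) @ w - y[test]) ** 2)

best = min([(a1, a2) for a1 in grid for a2 in grid], key=heldout_mse)
print("selected per-band alphas:", best)
```

Because only band 1 is predictive in this toy example, the selected penalties should be small for band 1 and large for band 2, which is exactly the feature-space selection behavior the abstract describes.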
5. Task-Dependent Warping of Semantic Representations During Search for Visual Action Categories.
- Author
-
Shahdloo, Mo, Çelik, Emin, Ürgen, Burcu A, Gallant, Jack L, and Çukur, Tolga
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Eye Disease and Disorders of Vision ,Basic Behavioral and Social Science ,Clinical Research ,Behavioral and Social Science ,Mental Health ,1.2 Psychological and socioeconomic processes ,1.1 Normal biological development and functioning ,Underpinning research ,Mental health ,Neurological ,attention ,fMRI ,natural movies ,visual actions ,voxelwise modeling ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery - Abstract
Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (1 female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain. SIGNIFICANCE STATEMENT The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans, which relies on efficient allocation of limited brain resources to prioritise the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices. This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.
- Published
- 2022
6. Design of Complex Experiments Using Mixed Integer Linear Programming
- Author
-
Slivkoff, Storm and Gallant, Jack L.
- Subjects
Quantitative Biology - Neurons and Cognition - Abstract
Over the past few decades, neuroscience experiments have become increasingly complex and naturalistic. Experimental design has in turn become more challenging, as experiments must conform to an ever-increasing diversity of design constraints. In this article we demonstrate how this design process can be greatly assisted using an optimization tool known as Mixed Integer Linear Programming (MILP). MILP provides a rich framework for incorporating many types of real-world design constraints into a neuroimaging experiment. We introduce the mathematical foundations of MILP, compare MILP to other experimental design techniques, and provide four case studies of how MILP can be used to solve complex experimental design challenges.
- Published
- 2020
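Result 6 above advocates mixed integer linear programming for building experimental designs that satisfy many constraints simultaneously. The toy feasibility problem below is not taken from the paper; it uses the open-source PuLP solver (my choice, since the paper does not prescribe one) to assign hypothetical stimuli to runs under simple balance constraints.

```python
import pulp

# Hypothetical design problem: assign 12 stimuli (3 categories x 4 exemplars) to 3 runs
# so that every run contains exactly 4 stimuli and at least one of each category.
stimuli = [(i, cat) for cat in ("face", "place", "object") for i in range(4)]
runs = range(3)

x = {(s, r): pulp.LpVariable(f"x_{s[1]}{s[0]}_run{r}", cat="Binary")
     for s in stimuli for r in runs}

prob = pulp.LpProblem("experiment_design", pulp.LpMinimize)
prob += 0                                                   # pure feasibility problem, constant objective
for s in stimuli:                                           # each stimulus appears in exactly one run
    prob += pulp.lpSum(x[s, r] for r in runs) == 1
for r in runs:
    prob += pulp.lpSum(x[s, r] for s in stimuli) == 4       # equal run lengths
    for cat in ("face", "place", "object"):                 # every category in every run
        prob += pulp.lpSum(x[s, r] for s in stimuli if s[1] == cat) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for r in runs:
    print(f"run {r}:", [s for s in stimuli if x[s, r].value() == 1])
```

Richer real-world constraints (counterbalancing of transitions, timing limits, repetition rules) enter the same way, as additional linear constraints over the binary assignment variables.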
7. Author Correction: Phonemic segmentation of narrative speech in human cerebral cortex
- Author
-
Gong, Xue L., Huth, Alexander G., Deniz, Fatma, Johnson, Keith, Gallant, Jack L., and Theunissen, Frédéric E.
- Published
- 2023
- Full Text
- View/download PDF
8. Cortical networks of dynamic scene category representation in the human brain
- Author
-
Çelik, Emin, Keles, Umit, Kiremitçi, İbrahim, Gallant, Jack L, and Çukur, Tolga
- Subjects
Biological Psychology ,Psychology ,Neurosciences ,Eye Disease and Disorders of Vision ,Brain Disorders ,Underpinning research ,1.2 Psychological and socioeconomic processes ,1.1 Normal biological development and functioning ,Neurological ,Brain ,Brain Mapping ,Cerebral Cortex ,Cluster Analysis ,Humans ,Magnetic Resonance Imaging ,Pattern Recognition ,Visual ,Photic Stimulation ,Visual Perception ,fMRI ,Dynamic scene category ,representation ,Voxelwise encoding model ,Cluster analysis ,Dynamic scene category representation ,Cognitive Sciences ,Experimental Psychology ,Biological psychology ,Cognitive and computational psychology - Abstract
Humans have an impressive ability to rapidly process global information in natural scenes to infer their category. Yet, it remains unclear whether and how scene categories observed dynamically in the natural world are represented in cerebral cortex beyond a few canonical scene-selective areas. To address this question, here we examined the representation of dynamic visual scenes by recording whole-brain blood oxygenation level-dependent (BOLD) responses while subjects viewed natural movies. We fit voxelwise encoding models to estimate tuning for scene categories that reflect statistical ensembles of objects and actions in the natural world. We find that this scene-category model explains a significant portion of the response variance broadly across cerebral cortex. Cluster analysis of scene-category tuning profiles across cortex reveals nine spatially-segregated networks of brain regions consistently across subjects. These networks show heterogeneous tuning for a diverse set of dynamic scene categories related to navigation, human activity, social interaction, civilization, natural environment, non-human animals, motion-energy, and texture, suggesting that the organization of scene category representation is quite complex.
- Published
- 2021
9. Voxel-Based State Space Modeling Recovers Task-Related Cognitive States in Naturalistic fMRI Experiments
- Author
-
Zhang, Tianjiao, Gao, James S, Çukur, Tolga, and Gallant, Jack L
- Subjects
Behavioral and Social Science ,Neurosciences ,Basic Behavioral and Social Science ,Clinical Research ,1.1 Normal biological development and functioning ,Underpinning research ,Mental health ,Neurological ,functional magnetic resonance imaging ,state space ,dimensionality reduction ,naturalistic stimuli ,complex task environments ,Psychology ,Cognitive Sciences - Abstract
Complex natural tasks likely recruit many different functional brain networks, but it is difficult to predict how such tasks will be represented across cortical areas and networks. Previous electrophysiology studies suggest that task variables are represented in a low-dimensional subspace within the activity space of neural populations. Here we develop a voxel-based state space modeling method for recovering task-related state spaces from human fMRI data. We apply this method to data acquired in a controlled visual attention task and a video game task. We find that each task induces distinct brain states that can be embedded in a low-dimensional state space that reflects task parameters, and that attention increases state separation in the task-related subspace. Our results demonstrate that the state space framework offers a powerful approach for modeling human brain activity elicited by complex natural tasks.
- Published
- 2021
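Result 9 above recovers task-related brain states as a low-dimensional subspace of voxel activity. The paper's estimator is a dedicated voxel-based state space model; the sketch below is only a toy PCA-based illustration of embedding two hypothetical task conditions in a shared low-dimensional space and quantifying their separation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: timepoints x voxels response matrices from two task conditions.
rng = np.random.default_rng(0)
cond_a = rng.standard_normal((120, 500)) + 0.5     # e.g., "attend" timepoints
cond_b = rng.standard_normal((120, 500)) - 0.5     # e.g., "ignore" timepoints

# Embed all timepoints in a shared low-dimensional state space.
pca = PCA(n_components=3)
states = pca.fit_transform(np.vstack([cond_a, cond_b]))
a_states, b_states = states[:120], states[120:]

# Quantify state separation as the distance between condition centroids,
# relative to the average within-condition spread.
centroid_gap = np.linalg.norm(a_states.mean(0) - b_states.mean(0))
spread = 0.5 * (a_states.std(0).mean() + b_states.std(0).mean())
print(f"separation index: {centroid_gap / spread:.2f}")
```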
10. Experience, circuit dynamics, and forebrain recruitment in larval zebrafish prey capture.
- Author
-
Oldfield, Claire S, Grossrubatscher, Irene, Chávez, Mario, Hoagland, Adam, Huth, Alex R, Carroll, Elizabeth C, Prendergast, Andrew, Qu, Tony, Gallant, Jack L, Wyart, Claire, and Isacoff, Ehud Y
- Subjects
Prosencephalon ,Visual Pathways ,Animals ,Zebrafish ,Predatory Behavior ,Learning ,brain ,experience ,forebrain ,neural circuit ,neuroscience ,prey capture ,zebrafish ,Biochemistry and Cell Biology - Abstract
Experience influences behavior, but little is known about how experience is encoded in the brain, and how changes in neural activity are implemented at a network level to improve performance. Here we investigate how differences in experience impact brain circuitry and behavior in larval zebrafish prey capture. We find that experience of live prey compared to inert food increases capture success by boosting capture initiation. In response to live prey, animals with and without prior experience of live prey show activity in visual areas (pretectum and optic tectum) and motor areas (cerebellum and hindbrain), with similar visual area retinotopic maps of prey position. However, prey-experienced animals more readily initiate capture in response to visual area activity and have greater visually-evoked activity in two forebrain areas: the telencephalon and habenula. Consequently, disruption of habenular neurons reduces capture performance in prey-experienced fish. Together, our results suggest that experience of prey strengthens prey-associated visual drive to the forebrain, and that this lowers the threshold for prey-associated visual activity to trigger activity in motor areas, thereby improving capture performance.
- Published
- 2020
11. The unified maximum a posteriori (MAP) framework for neuronal system identification
- Author
-
Wu, Michael C. -K., Deniz, Fatma, Prenger, Ryan J., and Gallant, Jack L.
- Subjects
Quantitative Biology - Neurons and Cognition - Abstract
The functional relationship between an input and a sensory neuron's response can be described by the neuron's stimulus-response mapping function. A general approach for characterizing the stimulus-response mapping function is called system identification. Many different names have been used for the stimulus-response mapping function: kernel or transfer function, transducer, spatiotemporal receptive field. Many algorithms have been developed to estimate a neuron's mapping function from an ensemble of stimulus-response pairs. These include the spike-triggered average, normalized reverse correlation, linearized reverse correlation, ridge regression, local spectral reverse correlation, spike-triggered covariance, artificial neural networks, maximally informative dimensions, kernel regression, boosting, and models based on leaky integrate-and-fire neurons. Because many of these system identification algorithms were developed in other disciplines, they seem very different superficially and bear little relationship with each other. Each algorithm makes different assumptions about the neuron and how the data is generated. Without a unified framework it is difficult to select the most suitable algorithm for estimating the neuron's mapping function. In this review, we present a unified framework for describing these algorithms called maximum a posteriori estimation (MAP). In the MAP framework, the implicit assumptions built into any system identification algorithm are made explicit in three MAP constituents: model class, noise distributions, and priors. Understanding the interplay between these three MAP constituents will simplify the task of selecting the most appropriate algorithms for a given data set. The MAP framework can also facilitate the development of novel system identification algorithms by incorporating biophysically plausible assumptions and mechanisms into the MAP constituents.
- Published
- 2018
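Result 11 above organizes system identification algorithms by three MAP constituents: model class, noise distribution, and prior. As a standard worked example of that framing (not quoted from the paper), choosing a linear model class, i.i.d. Gaussian noise, and an isotropic Gaussian prior on the weights makes the MAP estimate coincide with ridge regression:

```latex
\hat{\mathbf{w}}_{\mathrm{MAP}}
  = \arg\max_{\mathbf{w}} \; p(\mathbf{y} \mid X, \mathbf{w})\, p(\mathbf{w})
  = \arg\min_{\mathbf{w}} \; \lVert \mathbf{y} - X\mathbf{w} \rVert_2^2
      + \frac{\sigma^2}{\tau^2}\, \lVert \mathbf{w} \rVert_2^2 ,
```

where \(\sigma^2\) is the noise variance and \(\tau^2\) the prior variance on the weights. Other choices of the three constituents correspond to the other estimators listed in the abstract.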
12. The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality
- Author
-
Deniz, Fatma, Nunez-Elizalde, Anwar O, Huth, Alexander G, and Gallant, Jack L
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Clinical Research ,Behavioral and Social Science ,Basic Behavioral and Social Science ,1.1 Normal biological development and functioning ,1.2 Psychological and socioeconomic processes ,Neurological ,Acoustic Stimulation ,Adult ,Auditory Perception ,Cerebral Cortex ,Comprehension ,Female ,Humans ,Magnetic Resonance Imaging ,Male ,Models ,Neurological ,Photic Stimulation ,Reading ,Semantics ,Visual Perception ,BOLD ,cross-modal representations ,fMRI ,listening ,reading ,semantics ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery - Abstract
An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions in the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading are highly correlated in most semantically selective regions of cortex, and models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received. SIGNIFICANCE STATEMENT Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken or written text. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.
- Published
- 2019
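Result 12 above tests whether voxelwise encoding models estimated in one modality predict responses measured in the other. A minimal sketch of that transfer test on entirely synthetic stimulus features and responses (the study's actual pipeline fits voxelwise ridge models per participant):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical inputs: the same semantic stimulus features paired with responses
# recorded while listening vs. reading the same stories (timepoints x features / voxels).
rng = np.random.default_rng(0)
features = rng.standard_normal((400, 100))
true_w = rng.standard_normal((100, 1))
listen_y = features @ true_w + 0.5 * rng.standard_normal((400, 1))
read_y = features @ true_w + 0.5 * rng.standard_normal((400, 1))

# Fit an encoding model on the listening session only ...
model = Ridge(alpha=10.0).fit(features[:300], listen_y[:300])

# ... and test whether it transfers to held-out reading data.
pred = model.predict(features[300:])
r = np.corrcoef(pred.ravel(), read_y[300:].ravel())[0, 1]
print(f"cross-modality prediction correlation: {r:.2f}")
```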
13. Feature-space selection with banded ridge regression
- Author
-
Dupré la Tour, Tom, Eickenberg, Michael, Nunez-Elizalde, Anwar O., and Gallant, Jack L.
- Published
- 2022
- Full Text
- View/download PDF
14. Design of complex neuroscience experiments using mixed-integer linear programming
- Author
-
Slivkoff, Storm and Gallant, Jack L.
- Published
- 2021
- Full Text
- View/download PDF
15. The Hierarchical Cortical Organization of Human Speech Processing
- Author
-
de Heer, Wendy A, Huth, Alexander G, Griffiths, Thomas L, Gallant, Jack L, and Theunissen, Frédéric E
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Brain Disorders ,Clinical Research ,Behavioral and Social Science ,1.1 Normal biological development and functioning ,Adult ,Cerebral Cortex ,Computer Simulation ,Female ,Humans ,Male ,Models ,Neurological ,Nerve Net ,Neural Pathways ,Speech Perception ,fMRI ,natural stimuli ,regression ,speech ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery - Abstract
Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and associated semantic processing to higher levels of cortex than reported here.
- Published
- 2017
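Result 15 above uses variance partitioning to assign explained variance to spectral, articulatory, and semantic feature spaces. For two feature spaces the bookkeeping reduces to differences of held-out R² values; the sketch below illustrates that arithmetic on synthetic data, with two correlated feature spaces named spectral and semantic purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

def held_out_r2(X_train, y_train, X_test, y_test, alpha=10.0):
    pred = Ridge(alpha=alpha).fit(X_train, y_train).predict(X_test)
    return 1.0 - np.sum((y_test - pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)

rng = np.random.default_rng(0)
spectral = rng.standard_normal((500, 20))
semantic = 0.7 * spectral + 0.7 * rng.standard_normal((500, 20))   # partly correlated feature spaces
y = (spectral @ rng.standard_normal(20)
     + semantic @ rng.standard_normal(20)
     + rng.standard_normal(500))
tr, te = slice(0, 350), slice(350, 500)

r2_spec = held_out_r2(spectral[tr], y[tr], spectral[te], y[te])
r2_sem = held_out_r2(semantic[tr], y[tr], semantic[te], y[te])
r2_joint = held_out_r2(np.hstack([spectral, semantic])[tr], y[tr],
                       np.hstack([spectral, semantic])[te], y[te])

# Standard two-predictor partitioning: unique = joint minus the other model alone.
print(f"unique to spectral: {r2_joint - r2_sem:.2f}")
print(f"unique to semantic: {r2_joint - r2_spec:.2f}")
print(f"shared:             {r2_spec + r2_sem - r2_joint:.2f}")
```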
16. PrAGMATiC: a Probabilistic and Generative Model of Areas Tiling the Cortex
- Author
-
Huth, Alexander G., Griffiths, Thomas L., Theunissen, Frederic E., and Gallant, Jack L.
- Subjects
Quantitative Biology - Quantitative Methods ,Quantitative Biology - Neurons and Cognition - Abstract
Much of the human cortex seems to be organized into topographic cortical maps. Yet few quantitative methods exist for characterizing these maps. To address this issue we developed a modeling framework that can reveal group-level cortical maps based on neuroimaging data. PrAGMATiC, a probabilistic and generative model of areas tiling the cortex, is a hierarchical Bayesian generative model of cortical maps. This model assumes that the cortical map in each individual subject is a sample from a single underlying probability distribution. Learning the parameters of this distribution reveals the properties of a cortical map that are common across a group of subjects while avoiding the potentially lossy step of co-registering each subject into a group anatomical space. In this report we give a mathematical description of PrAGMATiC, describe approximations that make it practical to use, show preliminary results from its application to a real dataset, and describe a number of possible future extensions.
- Published
- 2015
17. Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging
- Author
-
Bilenko, Natalia Y. and Gallant, Jack L.
- Subjects
Quantitative Biology - Quantitative Methods ,Computer Science - Computer Vision and Pattern Recognition ,Statistics - Machine Learning - Abstract
Canonical correlation analysis (CCA) is a valuable method for interpreting cross-covariance across related datasets of different dimensionality. There are many potential applications of CCA to neuroimaging data analysis. For instance, CCA can be used for finding functional similarities across fMRI datasets collected from multiple subjects without resampling individual datasets to a template anatomy. In this paper, we introduce Pyrcca, an open-source Python module for executing CCA between two or more datasets. Pyrcca can be used to implement CCA with or without regularization, and with or without linear or a Gaussian kernelization of the datasets. We demonstrate an application of CCA implemented with Pyrcca to neuroimaging data analysis. We use CCA to find a data-driven set of functional response patterns that are similar across individual subjects in a natural movie experiment. We then demonstrate how this set of response patterns discovered by CCA can be used to accurately predict subject responses to novel natural movie stimuli.
- Published
- 2015
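Result 17 above introduces Pyrcca for regularized kernel CCA between datasets, for example responses from different subjects watching the same movie. Rather than guess at Pyrcca's exact interface, the sketch below uses scikit-learn's plain (unregularized) CCA to illustrate the same cross-subject use case on synthetic data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical two-subject example: responses to the same movie, different voxel counts.
rng = np.random.default_rng(0)
shared = rng.standard_normal((300, 3))                      # stimulus-driven signal
subj1 = shared @ rng.standard_normal((3, 80)) + rng.standard_normal((300, 80))
subj2 = shared @ rng.standard_normal((3, 120)) + rng.standard_normal((300, 120))

# Find response patterns that covary across the two subjects.
cca = CCA(n_components=3).fit(subj1[:200], subj2[:200])
c1, c2 = cca.transform(subj1[200:], subj2[200:])

# Canonical correlations evaluated on held-out timepoints.
for k in range(3):
    r = np.corrcoef(c1[:, k], c2[:, k])[0, 1]
    print(f"component {k}: held-out correlation {r:.2f}")
```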
18. Eye movement-invariant representations in the human visual system
- Author
-
Nishimoto, Shinji, Huth, Alexander G, Bilenko, Natalia Y, and Gallant, Jack L
- Subjects
Behavioral and Social Science ,Clinical Research ,Basic Behavioral and Social Science ,Eye Disease and Disorders of Vision ,Neurosciences ,Eye ,Neurological ,Adult ,Brain ,Eye Movements ,Female ,Fixation ,Ocular ,Humans ,Magnetic Resonance Imaging ,Male ,Visual Perception ,eye movements ,visual perception ,natural ,movies ,fMRI ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Experimental Psychology - Abstract
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
- Published
- 2017
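Result 18 above compares response similarity across repeated movie presentations within the fixation condition against similarity between the fixation and free-viewing conditions, and uses their ratio to flag eye-movement-invariant areas. A toy per-voxel version of that ratio on synthetic time courses:

```python
import numpy as np

# Hypothetical per-voxel time courses for the same movie seen three times:
# two fixation repeats and one free-viewing repeat.
rng = np.random.default_rng(0)
stimulus_signal = rng.standard_normal(600)
fix_rep1 = stimulus_signal + 0.5 * rng.standard_normal(600)
fix_rep2 = stimulus_signal + 0.5 * rng.standard_normal(600)
free_view = stimulus_signal + 0.9 * rng.standard_normal(600)   # extra eye-movement-driven variability

within_fixation = np.corrcoef(fix_rep1, fix_rep2)[0, 1]
across_condition = np.corrcoef(fix_rep1, free_view)[0, 1]

# Values near 1 suggest an eye-movement-invariant voxel;
# values well below 1 suggest sensitivity to eye movements.
print(f"invariance index: {across_condition / within_fixation:.2f}")
```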
19. Functional Subdomains within Scene-Selective Cortex: Parahippocampal Place Area, Retrosplenial Complex, and Occipital Place Area
- Author
-
Çukur, Tolga, Huth, Alexander G, Nishimoto, Shinji, and Gallant, Jack L
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Eye Disease and Disorders of Vision ,1.1 Normal biological development and functioning ,1.2 Psychological and socioeconomic processes ,Underpinning research ,Neurological ,Adult ,Brain Mapping ,Cerebral Cortex ,Communication ,Female ,Functional Laterality ,Humans ,Image Processing ,Computer-Assisted ,Magnetic Resonance Imaging ,Male ,Models ,Neurological ,Movement ,Occipital Lobe ,Oxygen ,Parahippocampal Gyrus ,Photic Stimulation ,Young Adult ,category representation ,fMRI ,OPA ,PPA ,RSC ,scene ,subdomain ,tuning profile ,voxelwise model ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery - Abstract
Functional MRI studies suggest that at least three brain regions in human visual cortex, the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA; often called the transverse occipital sulcus), represent large-scale information in natural scenes. Tuning of voxels within each region is often assumed to be functionally homogeneous. To test this assumption, we recorded blood oxygenation level-dependent responses during passive viewing of complex natural movies. We then used a voxelwise modeling framework to estimate voxelwise category tuning profiles within each scene-selective region. In all three regions, cluster analysis of the voxelwise tuning profiles reveals two functional subdomains that differ primarily in their responses to animals, man-made objects, social communication, and movement. Thus, the conventional functional definitions of the PPA, RSC, and OPA appear to be too coarse. One attractive hypothesis is that this consistent functional subdivision of scene-selective regions is a reflection of an underlying anatomical organization into two separate processing streams, one selectively biased toward static stimuli and one biased toward dynamic stimuli. SIGNIFICANCE STATEMENT Visual scene perception is a critical ability to survive in the real world. It is therefore reasonable to assume that the human brain contains neural circuitry selective for visual scenes. Here we show that responses in three scene-selective areas, identified in previous studies, carry information about many object and action categories encountered in daily life. We identify two subregions in each area: one that is selective for categories of man-made objects, and another that is selective for vehicles and locomotion-related action categories that appear in dynamic scenes. This consistent functional subdivision may reflect an anatomical organization into two processing streams, one biased toward static stimuli and one biased toward dynamic stimuli.
- Published
- 2016
20. Natural speech reveals the semantic maps that tile human cerebral cortex
- Author
-
Huth, Alexander G, de Heer, Wendy A, Griffiths, Thomas L, Theunissen, Frédéric E, and Gallant, Jack L
- Subjects
Biological Psychology ,Cognitive and Computational Psychology ,Psychology ,Clinical Research ,Brain Disorders ,Neurosciences ,Mental Health ,1.1 Normal biological development and functioning ,Adult ,Auditory Perception ,Brain Mapping ,Cerebral Cortex ,Female ,Humans ,Magnetic Resonance Imaging ,Male ,Narration ,Principal Component Analysis ,Reproducibility of Results ,Semantics ,Speech ,General Science & Technology - Abstract
The meaning of language is represented in regions of the cerebral cortex collectively known as the 'semantic system'. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods--commonplace in studies of human neuroanatomy and functional connectivity--provide a powerful and efficient means for mapping functional representations in the brain.
- Published
- 2016
21. Pixels to Voxels: Modeling Visual Representation in the Human Brain
- Author
-
Agrawal, Pulkit, Stansbury, Dustin, Malik, Jitendra, and Gallant, Jack L.
- Subjects
Quantitative Biology - Neurons and Cognition ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Neural and Evolutionary Computing - Abstract
The human brain is adept at solving difficult high-level visual processing problems such as image interpretation and object recognition in natural scenes. Over the past few years neuroscientists have made remarkable progress in understanding how the human brain represents categories of objects and actions in natural scenes. However, all current models of high-level human vision operate on hand annotated images in which the objects and actions have been assigned semantic tags by a human operator. No current models can account for high-level visual function directly in terms of low-level visual input (i.e., pixels). To overcome this fundamental limitation we sought to develop a new class of models that can predict human brain activity directly from low-level visual input (i.e., pixels). We explored two classes of models drawn from computer vision and machine learning. The first class of models was based on Fisher Vectors (FV) and the second was based on Convolutional Neural Networks (ConvNets). We find that both classes of models accurately predict brain activity in high-level visual areas, directly from pixels and without the need for any semantic tags or hand annotation of images. This is the first time that such a mapping has been obtained. The fit models provide a new platform for exploring the functional principles of human vision, and they show that modern methods of computer vision and machine learning provide important tools for characterizing brain function.
- Published
- 2014
22. Pyrcca: Regularized Kernel Canonical Correlation Analysis in Python and Its Applications to Neuroimaging
- Author
-
Bilenko, Natalia Y and Gallant, Jack L
- Subjects
Information and Computing Sciences ,Applied Computing ,Machine Learning ,Biomedical Imaging ,Neurosciences ,canonical correlation analysis ,covariance analysis ,Python ,fMRI ,cross-subject alignment ,partial least squares regression ,Cognitive Sciences ,Applied computing ,Machine learning - Abstract
In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.
- Published
- 2016
23. Decoding the Semantic Content of Natural Movies from Human Brain Activity
- Author
-
Huth, Alexander G, Lee, Tyler, Nishimoto, Shinji, Bilenko, Natalia Y, Vu, An T, and Gallant, Jack L
- Subjects
Biochemistry and Cell Biology ,Biomedical and Clinical Sciences ,Biological Sciences ,Zoology ,Neurosciences ,Clinical Research ,1.1 Normal biological development and functioning ,Underpinning research ,Neurological ,fMRI ,decoding ,natural images ,WordNet ,structured output ,Physiology ,Medical Physiology ,Biochemistry and cell biology - Abstract
One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI.
- Published
- 2016
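Result 23 above decodes WordNet object and action categories from BOLD responses with a hierarchical logistic regression and reports ROC AUCs above 0.9. The sketch below is a much simpler, flat one-category decoder on synthetic data, shown only to illustrate the basic fit/score loop; it does not implement the paper's hierarchical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical setup: BOLD response patterns (timepoints x voxels) and a binary label
# indicating whether a given WordNet category (say, "animal") is present in each movie clip.
rng = np.random.default_rng(0)
n_time, n_vox = 600, 300
labels = rng.integers(0, 2, n_time)
responses = rng.standard_normal((n_time, n_vox)) + 0.3 * labels[:, None] * rng.standard_normal(n_vox)

clf = LogisticRegression(max_iter=1000).fit(responses[:400], labels[:400])
scores = clf.predict_proba(responses[400:])[:, 1]
print(f"held-out AUC for this category: {roc_auc_score(labels[400:], scores):.2f}")
```

The paper's hierarchical model additionally constrains predictions so that, for example, a clip decoded as containing a "dog" is also decoded as containing an "animal".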
24. PrAGMATiC: a Probabilistic and Generative Model of Areas Tiling the Cortex
- Author
-
Huth, Alexander G, Griffiths, Thomas L, Theunissen, Frederic E, and Gallant, Jack L
- Subjects
q-bio.QM ,q-bio.NC - Abstract
Much of the human cortex seems to be organized into topographic cortical maps. Yet few quantitative methods exist for characterizing these maps. To address this issue we developed a modeling framework that can reveal group-level cortical maps based on neuroimaging data. PrAGMATiC, a probabilistic and generative model of areas tiling the cortex, is a hierarchical Bayesian generative model of cortical maps. This model assumes that the cortical map in each individual subject is a sample from a single underlying probability distribution. Learning the parameters of this distribution reveals the properties of a cortical map that are common across a group of subjects while avoiding the potentially lossy step of co-registering each subject into a group anatomical space. In this report we give a mathematical description of PrAGMATiC, describe approximations that make it practical to use, show preliminary results from its application to a real dataset, and describe a number of possible future extensions.
- Published
- 2015
25. Voxelwise encoding models with non-spherical multivariate normal priors
- Author
-
Nunez-Elizalde, Anwar O., Huth, Alexander G., and Gallant, Jack L.
- Published
- 2019
- Full Text
- View/download PDF
26. Encoding and decoding V1 fMRI responses to natural images with sparse nonparametric models
- Author
-
Vu, Vincent Q., Ravikumar, Pradeep, Naselaris, Thomas, Kay, Kendrick N., Gallant, Jack L., and Yu, Bin
- Subjects
Statistics - Applications - Abstract
Functional MRI (fMRI) has become the most common method for investigating the human brain. However, fMRI data present some complications for statistical analysis and modeling. One recently developed approach to these data focuses on estimation of computational encoding models that describe how stimuli are transformed into brain activity measured in individual voxels. Here we aim at building encoding models for fMRI signals recorded in the primary visual cortex of the human brain. We use residual analyses to reveal systematic nonlinearity across voxels not taken into account by previous models. We then show how a sparse nonparametric method [J. Roy. Statist. Soc. Ser. B 71 (2009b) 1009-1030] can be used together with correlation screening to estimate nonlinear encoding models effectively. Our approach produces encoding models that predict about 25% more accurately than models estimated using other methods [Nature 452 (2008a) 352--355]. The estimated nonlinearity impacts the inferred properties of individual voxels, and it has a plausible biological interpretation. One benefit of quantitative encoding models is that estimated models can be used to decode brain activity, in order to identify which specific image was seen by an observer. Encoding models estimated by our approach also improve such image identification by about 12% when the correct image is one of 11,500 possible images. Comment: Published at http://dx.doi.org/10.1214/11-AOAS476 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
- Published
- 2011
- Full Text
- View/download PDF
27. Using a novel source-localized phase regressor technique for evaluation of the vascular contribution to semantic category area localization in BOLD fMRI
- Author
-
Vu, An T and Gallant, Jack L
- Subjects
Biological Psychology ,Psychology ,Rare Diseases ,vein suppression ,Fusiform Face Area ,Parahippocampal Place Area ,phase ,complex valued ,BOLD fMRI ,Neurosciences ,Cognitive Sciences ,Biological psychology - Abstract
Numerous studies have shown that gradient-echo blood oxygen level dependent (BOLD) fMRI is biased toward large draining veins. However, the impact of this large vein bias on the localization and characterization of semantic category areas has not been examined. Here we address this issue by comparing standard magnitude measures of BOLD activity in the Fusiform Face Area (FFA) and Parahippocampal Place Area (PPA) to those obtained using a novel method that suppresses the contribution of large draining veins: source-localized phase regressor (sPR). Unlike previous suppression methods that utilize the phase component of the BOLD signal, sPR yields robust and unbiased suppression of large draining veins even in voxels with no task-related phase changes. This is confirmed in ideal simulated data as well as in FFA/PPA localization data from four subjects. It was found that approximately 38% of right PPA, 14% of left PPA, 16% of right FFA, and 6% of left FFA voxels predominantly reflect signal from large draining veins. Surprisingly, with the contributions from large veins suppressed, semantic category representation in PPA actually tends to be lateralized to the left rather than the right hemisphere. Furthermore, semantic category areas larger in volume and higher in fSNR were found to have more contributions from large veins. These results suggest that previous studies using gradient-echo BOLD fMRI were biased toward semantic category areas that receive relatively greater contributions from large veins.
- Published
- 2015
28. Pycortex: an interactive surface visualizer for fMRI
- Author
-
Gao, James S, Huth, Alexander G, Lescroart, Mark D, and Gallant, Jack L
- Subjects
Information and Computing Sciences ,Graphics ,Augmented Reality and Games ,1.1 Normal biological development and functioning ,fMRI ,visualization ,python ,WebGL ,data sharing ,Neurosciences ,Cognitive Sciences ,Applied computing ,Machine learning - Abstract
Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate and most common visualization techniques rely on unnecessary interpolation which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using tools available currently. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software.
- Published
- 2015
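Result 28 above describes pycortex, which maps volumetric data onto the cortical surface and serves interactive WebGL views. The snippet below is a guess at a minimal usage pattern based on the package's documented demo subject "S1" and transform "fullhead"; treat the exact names and calls as assumptions and check the pycortex documentation.

```python
import numpy as np
import cortex

# Wrap a fake statistical volume with subject and transform information
# (subject "S1" and transform "fullhead" are pycortex's documented demo data).
data = np.random.randn(31, 100, 100)
volume = cortex.Volume(data, "S1", "fullhead")

# Static flatmap figure ...
cortex.quickshow(volume)
# ... or an interactive WebGL viewer in the browser.
cortex.webshow(volume)
```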
29. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas
- Author
-
Lescroart, Mark D, Stansbury, Dustin E, and Gallant, Jack L
- Subjects
Biomedical and Clinical Sciences ,Neurosciences ,Clinical Sciences ,Eye Disease and Disorders of Vision ,Clinical Research ,scene perception ,fMRI ,voxel-wise modeling ,encoding models ,neuroscience ,vision ,Clinical sciences - Abstract
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.
- Published
- 2015
30. A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes
- Author
-
Naselaris, Thomas, Olman, Cheryl A, Stansbury, Dustin E, Ugurbil, Kamil, and Gallant, Jack L
- Subjects
Clinical Research ,Eye Disease and Disorders of Vision ,Neurosciences ,Mental health ,Neurological ,Good Health and Well Being ,Adult ,Brain Mapping ,Humans ,Imagination ,Magnetic Resonance Imaging ,Pattern Recognition ,Visual ,Visual Cortex ,Mental imagery ,Voxel-wise encoding models ,Decoding ,fMRI ,Vision ,Perception ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery - Abstract
Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery.
- Published
- 2015
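Result 30 above identifies an imagined image by asking which candidate image's model-predicted response pattern best matches the measured activity. A self-contained sketch of that identification step, with a random weight matrix standing in for a fitted voxelwise encoding model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_feat, n_vox = 1000, 50, 200
W = rng.standard_normal((n_feat, n_vox))                 # stands in for a fitted encoding model
candidate_features = rng.standard_normal((n_candidates, n_feat))
true_index = 123
measured = candidate_features[true_index] @ W + 2.0 * rng.standard_normal(n_vox)

# Predict a voxel pattern for every candidate image's features ...
predicted = candidate_features @ W

# ... then correlate measured activity with each prediction and pick the best match.
z = lambda a: (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)
corrs = (z(predicted) * z(measured)).mean(axis=-1)
print("identified image:", int(np.argmax(corrs)), "(true index:", true_index, ")")
```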
31. Human Scene-Selective Areas Represent 3D Configurations of Surfaces
- Author
-
Lescroart, Mark D. and Gallant, Jack L.
- Published
- 2019
- Full Text
- View/download PDF
32. Semantic Representation in the Human Brain under Rich, Naturalistic Conditions
- Author
-
Gallant, Jack L. and Popham, Sara F.
- Published
- 2020
- Full Text
- View/download PDF
33. Functional Subdomains within Human FFA
- Author
-
Çukur, Tolga, Huth, Alexander G, Nishimoto, Shinji, and Gallant, Jack L
- Subjects
Eye Disease and Disorders of Vision ,Biomedical Imaging ,1.1 Normal biological development and functioning ,Underpinning research ,Adult ,Brain Mapping ,Female ,Functional Neuroimaging ,Humans ,Image Processing ,Computer-Assisted ,Magnetic Resonance Imaging ,Male ,Occipital Lobe ,Pattern Recognition ,Visual ,Photic Stimulation ,Temporal Lobe ,Visual Perception ,Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery - Abstract
The fusiform face area (FFA) is a well-studied human brain region that shows strong activation for faces. In functional MRI studies, FFA is often assumed to be a homogeneous collection of voxels with similar visual tuning. To test this assumption, we used natural movies and a quantitative voxelwise modeling and decoding framework to estimate category tuning profiles for individual voxels within FFA. We find that the responses in most FFA voxels are strongly enhanced by faces, as reported in previous studies. However, we also find that responses of individual voxels are selectively enhanced or suppressed by a wide variety of other categories and that these broader tuning profiles differ across FFA voxels. Cluster analysis of category tuning profiles across voxels reveals three spatially segregated functional subdomains within FFA. These subdomains differ primarily in their responses for nonface categories, such as animals, vehicles, and communication verbs. Furthermore, this segregation does not depend on the statistical threshold used to define FFA from responses to functional localizers. These results suggest that voxels within FFA represent more diverse information about object and action categories than generally assumed.
- Published
- 2013
34. Natural Scene Statistics Account for the Representation of Scene Categories in Human Visual Cortex
- Author
-
Stansbury, Dustin E, Naselaris, Thomas, and Gallant, Jack L
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,Eye Disease and Disorders of Vision ,Clinical Research ,Neurological ,Functional Neuroimaging ,Humans ,Learning ,Magnetic Resonance Imaging ,Pattern Recognition ,Visual ,Photic Stimulation ,Visual Cortex ,Visual Perception ,Cognitive Sciences ,Neurology & Neurosurgery ,Biological psychology - Abstract
During natural vision, humans categorize the scenes they encounter: an office, the beach, and so on. These categories are informed by knowledge of the way that objects co-occur in natural scenes. How does the human brain aggregate information about objects to represent scene categories? To explore this issue, we used statistical learning methods to learn categories that objectively capture the co-occurrence statistics of objects in a large collection of natural scenes. Using the learned categories, we modeled fMRI brain signals evoked in human subjects when viewing images of scenes. We find that evoked activity across much of anterior visual cortex is explained by the learned categories. Furthermore, a decoder based on these scene categories accurately predicts the categories and objects comprising novel scenes from brain activity evoked by those scenes. These results suggest that the human brain represents scene categories that capture the co-occurrence statistics of objects in the world.
- Published
- 2013
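Result 34 above learns scene categories from the co-occurrence statistics of objects in natural scenes. The abstract does not name the specific statistical learning method, so the sketch below uses latent Dirichlet allocation from scikit-learn as one common choice for this kind of co-occurrence modeling, applied to hypothetical object-count data.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical data: rows are scenes, columns are counts of labeled objects in each scene.
rng = np.random.default_rng(0)
objects = ["desk", "chair", "monitor", "sand", "wave", "umbrella"]
office = rng.poisson([3, 4, 2, 0, 0, 0], size=(50, 6))
beach = rng.poisson([0, 1, 0, 4, 3, 2], size=(50, 6))
counts = np.vstack([office, beach])

# Learn "scene categories" as latent groups of co-occurring objects.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
for k, topic in enumerate(lda.components_):
    top = [objects[i] for i in np.argsort(topic)[::-1][:3]]
    print(f"category {k}: {top}")
```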
35. Attention during natural vision warps semantic representation across the human brain
- Author
-
Çukur, Tolga, Nishimoto, Shinji, Huth, Alexander G, and Gallant, Jack L
- Subjects
Neurosciences ,Adult ,Attention ,Brain Mapping ,Cerebral Cortex ,Concept Formation ,Humans ,Magnetic Resonance Imaging ,Male ,Motion Pictures ,Neuropsychological Tests ,Semantics ,Visual Perception ,Psychology ,Cognitive Sciences ,Neurology & Neurosurgery - Abstract
Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
- Published
- 2013
36. Working Memory and Decision Processes in Visual Area V4
- Author
-
Hayden, Benjamin Y and Gallant, Jack L
- Subjects
BRII recipient: Gallant - Abstract
Recognizing and responding to a remembered stimulus requires the coordination of perception, working memory, and decision-making. To investigate the role of visual cortex in these processes, we recorded responses of single V4 neurons during performance of a delayed match-to-sample task that incorporates rapid serial visual presentation of natural images. We found that neuronal activity during the delay period after the cue but before the images depends on the identity of the remembered image and that this change persists while distractors appear. This persistent response modulation has been identified as a diagnostic criterion for putative working memory signals; our data thus suggest that working memory may involve reactivation of sensory neurons. When the remembered image reappears in the neuron’s receptive field, visually evoked responses are enhanced; this match enhancement is a diagnostic criterion for decision. One model that predicts these data is the matched filter hypothesis, which holds that during search V4 neurons change their tuning so as to match the remembered cue, and thus become detectors for that image. More generally, these results suggest that V4 neurons participate in the perceptual, working memory, and decision processes that are needed to perform memory-guided decision-making.
- Published
- 2013
37. A variational autoencoder provides novel, data-driven features that explain functional brain representations in a naturalistic navigation task
- Author
-
Cho, Cheol Jun, Zhang, Tianjiao, and Gallant, Jack L.
- Published
- 2023
- Full Text
- View/download PDF
38. Spatial Frequency and Orientation Tuning Dynamics in Area V1
- Author
-
Mazer, James A., Vinje, William E., McDermott, Josh, Schiller, Peter H., and Gallant, Jack L.
- Published
- 2002
39. Sparse Coding and Decorrelation in Primary Visual Cortex during Natural Vision
- Author
-
Vinje, William E. and Gallant, Jack L.
- Published
- 2000
40. Model connectivity: leveraging the power of encoding models to overcome the limitations of functional connectivity
- Author
-
Meschke, Emily X, Visconti di Oleggio Castello, Matteo, Dupre la Tour, Tom, and Gallant, Jack L
- Published
- 2023
- Full Text
- View/download PDF
41. Semantic Representations during Language Comprehension Are Affected by Context
- Author
-
Deniz, Fatma, Tseng, Christine, Wehbe, Leila, Dupré la Tour, Tom, and Gallant, Jack L.
- Published
- 2023
- Full Text
- View/download PDF
42. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain
- Author
-
Huth, Alexander G., Nishimoto, Shinji, Vu, An T., and Gallant, Jack L.
- Published
- 2012
- Full Text
- View/download PDF
43. Cortical representation of animate and inanimate objects in complex natural scenes
- Author
-
Naselaris, Thomas, Stansbury, Dustin E., and Gallant, Jack L.
- Published
- 2012
- Full Text
- View/download PDF
44. Toward a Unified Theory of Visual Area V4
- Author
-
Roe, Anna W., Chelazzi, Leonardo, Connor, Charles E., Conway, Bevil R., Fujita, Ichiro, Gallant, Jack L., Lu, Haidong, and Vanduffel, Wim
- Published
- 2012
- Full Text
- View/download PDF
45. Feature-space selection with banded ridge regression
- Author
-
la Tour, Tom Dupré, Eickenberg, Michael, Nunez-Elizalde, Anwar O., and Gallant, Jack L.
- Published
- 2022
- Full Text
- View/download PDF
46. Encoding and decoding in fMRI
- Author
-
Naselaris, Thomas, Kay, Kendrick N., Nishimoto, Shinji, and Gallant, Jack L.
- Published
- 2011
- Full Text
- View/download PDF
47. Semantic representations during language comprehension are affected by context
- Author
-
Deniz, Fatma, Tseng, Christine, Wehbe, Leila, and Gallant, Jack L
- Published
- 2021
- Full Text
- View/download PDF
48. Bayesian Reconstruction of Natural Images from Human Brain Activity
- Author
-
Naselaris, Thomas, Prenger, Ryan J., Kay, Kendrick N., Oliver, Michael, and Gallant, Jack L.
- Published
- 2009
- Full Text
- View/download PDF
49. Combined effects of spatial and feature-based attention on responses of V4 neurons
- Author
-
Hayden, Benjamin Y. and Gallant, Jack L.
- Published
- 2009
- Full Text
- View/download PDF
50. Selectivity for Polar, Hyperbolic, and Cartesian Gratings in Macaque Visual Cortex
- Author
-
Gallant, Jack L., Braun, Jochen, and Van Essen, David C.
- Published
- 1993