17 results for "Parr, Thomas"
Search Results
2. A Bayesian Account of Psychopathy: A Model of Lacks Remorse and Self-Aggrandizing
- Author
-
Prosser, Aaron, Friston, Karl J., Bakker, Nathan, and Parr, Thomas
- Subjects
lcsh:RC435-571, lcsh:Consciousness. Cognition, lcsh:Computer applications to medicine. Medical informatics, lcsh:BF309-499, psychopathy, psychopathic personality disorder, free-energy, antisocial personality disorder, active inference, lcsh:Psychiatry, Bayesian brain, lcsh:R858-859.7, personality disorders, predictive coding, Research Articles - Abstract
This article proposes a formal model that integrates cognitive and psychodynamic psychotherapeutic models of psychopathy to show how two major psychopathic traits, "lacks remorse" and "self-aggrandizing", can be understood as a form of abnormal Bayesian inference about the self. This model draws on the predictive coding (i.e., active inference) framework, a neurobiologically plausible explanatory framework for message passing in the brain that is formalized in terms of hierarchical Bayesian inference. In summary, the model proposes that these two cardinal psychopathic traits reflect entrenched maladaptive Bayesian inferences about the self, which defend against the experience of deep-seated, self-related negative emotions, specifically shame and worthlessness. Support for the model is provided by extant research on the neurobiology of psychopathy and by quantitative simulations. Finally, we offer a preliminary overview of a novel treatment for psychopathy that rests on our Bayesian formulation.
- Published
- 2018
3. Reclaiming saliency: Rhythmic precision-modulated action and perception.
- Author
-
Meera, Ajith Anil, Novicky, Filip, Parr, Thomas, Friston, Karl, Lanillos, Pablo, and Sajid, Noor
- Subjects
HEISENBERG uncertainty principle, ATTENTION control, ARTIFICIAL intelligence, COGNITIVE robotics, CONCEPT mapping - Abstract
Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimization and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimization that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase its advantages for state and noise estimation, system identification and action selection for informative path planning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. The evolution of brain architectures for predictive coding and active inference.
- Author
-
Pezzulo, Giovanni, Parr, Thomas, and Friston, Karl
- Subjects
ANIMAL habitations, ANIMAL species, PROBLEM solving, NATURAL selection - Abstract
This article considers the evolution of brain architectures for predictive processing. We argue that brain mechanisms for predictive perception and action are not late evolutionary additions of advanced creatures like us. Rather, they emerged gradually from simpler predictive loops (e.g. autonomic and motor reflexes) that were a legacy from our earlier evolutionary ancestors, and were key to solving their fundamental problems of adaptive regulation. We characterize simpler-to-more-complex brains formally, in terms of generative models that include predictive loops of increasing hierarchical breadth and depth. These may start from a simple homeostatic motif and be elaborated during evolution in four main ways: the multimodal expansion of predictive control into an allostatic loop; its duplication to form multiple sensorimotor loops that expand an animal's behavioural repertoire; and the gradual endowment of generative models with hierarchical depth (to deal with aspects of the world that unfold at different spatial scales) and temporal depth (to select plans in a future-oriented manner). In turn, these elaborations underwrite the solution to biological regulation problems faced by increasingly sophisticated animals. Our proposal aligns neuroscientific theorising about predictive processing with evolutionary and comparative data on brain architectures in different animal species. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. Perceptual awareness and active inference
- Author
-
Parr, Thomas, Corcoran, Andrew W, Friston, Karl J, and Hohwy, Jakob
- Subjects
active inference, Troxler fading, awareness, binocular rivalry, Bayesian, Research Article - Abstract
Perceptual awareness depends upon the way in which we engage with our sensorium. This notion is central to active inference, a theoretical framework that treats perception and action as inferential processes. This variational perspective on cognition formalizes the notion of perception as hypothesis testing and treats actions as experiments that are designed (in part) to gather evidence for or against alternative hypotheses. The common treatment of perception and action affords a useful interpretation of certain perceptual phenomena whose active component is often not acknowledged. In this article, we start by considering Troxler fading – the dissipation of a peripheral percept during maintenance of fixation, and its recovery during free (saccadic) exploration. This offers an important example of the failure to maintain a percept without actively interrogating a visual scene. We argue that this may be understood in terms of the accumulation of uncertainty about a hypothesized stimulus when free exploration is disrupted by experimental instructions or pathology. Once we take this view, we can generalize the idea of using bodily (oculomotor) action to resolve uncertainty to include the use of mental (attentional) actions for the same purpose. This affords a useful way to think about binocular rivalry paradigms, in which perceptual changes need not be associated with an overt movement.
- Published
- 2019
6. Understanding, Explanation, and Active Inference.
- Author
-
Parr, Thomas and Pezzulo, Giovanni
- Subjects
PROBLEM solving, ARTIFICIAL intelligence, DECISION making, MACHINE learning - Abstract
While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machine is able to explain its behaviour (i.e., explainable AI). In this article, we address the issue of machine understanding from the perspective of active inference. This paradigm enables decision making based upon a model of how data are generated. The generative model contains those variables required to explain sensory data, and its inversion may be seen as an attempt to explain the causes of these data. Here we are interested in explanations of one's own actions. This implies a deep generative model that includes a model of the world, used to infer policies, and a higher-level model that attempts to predict which policies will be selected based upon a space of hypothetical (i.e., counterfactual) explanations—and which can subsequently be used to provide (retrospective) explanations about the policies pursued. We illustrate the construct validity of this notion of understanding in relation to human understanding by highlighting the similarities in computational architecture and the consequences of its dysfunction. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. The Anatomy of Inference: Generative Models and Brain Structure
- Author
-
Parr, Thomas and Friston, Karl J.
- Subjects
Cellular and Molecular Neuroscience, active inference, neuroanatomy, message passing, Neuroscience (miscellaneous), predictive processing, generative model, Bayesian, lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry, lcsh:RC321-571 - Abstract
To infer the causes of its sensations, the brain must call on a generative (predictive) model. This necessitates passing local messages between populations of neurons to update beliefs about hidden variables in the world beyond its sensory samples. It also entails inferences about how we will act. Active inference is a principled framework that frames perception and action as approximate Bayesian inference. This has been successful in accounting for a wide range of physiological and behavioral phenomena. Recently, a process theory has emerged that attempts to relate inferences to their neurobiological substrates. In this paper, we review and develop the anatomical aspects of this process theory. We argue that the form of the generative models required for inference constrains the way in which brain regions connect to one another. Specifically, neuronal populations representing beliefs about a variable must receive input from populations representing the Markov blanket of that variable. We illustrate this idea in four different domains: perception, planning, attention, and movement. In doing so, we attempt to show how appealing to generative models enables us to account for anatomical brain architectures. Ultimately, committing to an anatomical theory of inference ensures we can form empirical hypotheses that can be tested using neuroimaging, neuropsychological, and electrophysiological experiments.
- Published
- 2018
8. Free-energy minimization in joint agent-environment systems: A niche construction perspective
- Author
-
Bruineberg, Jelle, Rietveld, Erik, Parr, Thomas, van Maanen, Leendert, and Friston, Karl J
- Subjects
Entropy, Models, Neurological, Brain, Adaptive environments, Article, Markov decision processes, Agent-environment complementarity, Free energy principle, Active inference, Humans, Niche construction, Desire paths - Abstract
Highlights • Free-energy is developed as a measure for the 'fit' between an agent and its niche. • Simulations show how the behavior of an agent is shaped by the structure of its niche. • Using computational methods, we show how niche construction can improve the 'fit' between an agent and its environment.
The free-energy principle is an attempt to explain the structure of the agent and its brain, starting from the fact that an agent exists (Friston and Stephan, 2007; Friston et al., 2010). More specifically, it can be regarded as a systematic attempt to understand the 'fit' between an embodied agent and its niche, where the quantity of free-energy is a measure for the 'misfit' or disattunement (Bruineberg and Rietveld, 2014) between agent and environment. This paper offers a proof-of-principle simulation of niche construction under the free-energy principle. Agent-centered treatments have so far failed to address situations where environments change alongside agents, often due to the action of agents themselves. The key point of this paper is that the minimum of free-energy is not a point at which the agent is maximally adapted to the statistics of a static environment, but is better conceptualized as an attracting manifold within the joint agent-environment state-space as a whole, which the system tends toward through mutual interaction. We provide a general introduction to active inference and the free-energy principle. Using Markov decision processes (MDPs), we then describe a canonical generative model and the ensuing update equations that minimize free-energy. We then apply these equations to simulations of foraging in an environment in which an agent learns the most efficient path to a pre-specified location. In some of those simulations, unbeknownst to the agent, 'desire paths' emerge as a function of the agent's own activity (i.e. niche construction occurs). We show how, depending on the relative inertia of the environment and agent, the joint agent-environment system moves to different attracting sets of jointly minimized free-energy.
- Published
- 2018
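The canonical update equations mentioned in the abstract above are not reproduced in this listing, but the basic move, updating categorical beliefs so as to minimize variational free energy, can be sketched in a few lines. The two-state world, likelihood matrix `A`, and prior `D` below are invented for illustration and are not taken from the paper's simulations:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy two-state world (hypothetical numbers): A[o, s] = p(o | s), D[s] = prior p(s)
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
D = np.array([0.5, 0.5])

def free_energy(q, o):
    """F = E_q[ln q(s) - ln p(o, s)]; lower F means a better fit
    between beliefs q and the sampled observation o."""
    log_joint = np.log(A[o]) + np.log(D)
    return float(q @ (np.log(q) - log_joint))

def update_beliefs(o):
    """Posterior beliefs over hidden states: the minimum of F
    for this categorical model."""
    return softmax(np.log(A[o]) + np.log(D))

q = update_beliefs(0)  # after observing o = 0, beliefs favour state 0
```

At the minimum, F equals the negative log evidence, -ln p(o), which is why the posterior scores lower (better) than the prior on any single observation.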
9. The computational neurology of movement under active inference.
- Author
-
Parr, Thomas, Limanowski, Jakub, Rawji, Vishal, and Friston, Karl
- Subjects
PYRAMIDAL tract, NEUROLOGY, NERVOUS system, TENDONS, BRAIN physiology, COMPUTER simulation, BIOLOGICAL models, RESEARCH, RESEARCH methodology, MEDICAL cooperation, EVALUATION research, COMPARATIVE studies, BODY movement, RESEARCH funding - Abstract
We propose a computational neurology of movement based on the convergence of theoretical neurobiology and clinical neurology. A significant development in the former is the idea that we can frame brain function as a process of (active) inference, in which the nervous system makes predictions about its sensory data. These predictions depend upon an implicit predictive (generative) model used by the brain. This means neural dynamics can be framed as generating actions to ensure sensations are consistent with these predictions-and adjusting predictions when they are not. We illustrate the significance of this formulation for clinical neurology by simulating a clinical examination of the motor system using an upper limb coordination task. Specifically, we show how tendon reflexes emerge naturally under the right kind of generative model. Through simulated perturbations, pertaining to prior probabilities of this model's variables, we illustrate the emergence of hyperreflexia and pendular reflexes, reminiscent of neurological lesions in the corticospinal tract and cerebellum. We then turn to the computational lesions causing hypokinesia and deficits of coordination. This in silico lesion-deficit analysis provides an opportunity to revisit classic neurological dichotomies (e.g. pyramidal versus extrapyramidal systems) from the perspective of modern approaches to theoretical neurobiology-and our understanding of the neurocomputational architecture of movement control based on first principles. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case.
- Author
-
Smith, Ryan, Schwartenbeck, Philipp, Parr, Thomas, and Friston, Karl J.
- Subjects
CONCEPT learning, COMPUTATIONAL neuroscience, COGNITIVE learning, COGNITIVE science, STIMULUS generalization - Abstract
Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning—and specifically state-space expansion and reduction—within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) "slots" that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning—associated with these slots—can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model's ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of "one-shot" generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. 
They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
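The "slots plus Bayesian model reduction" scheme described above can be illustrated with the standard Dirichlet result, in which a reduced model (a slot switched off by the prior) and the full model are scored on the same counts. The prior values and counts below are invented for illustration, not taken from the paper:

```python
from math import lgamma

def log_beta(a):
    # Log of the multivariate beta function B(a), the Dirichlet normaliser
    return sum(lgamma(ai) for ai in a) - lgamma(sum(a))

def delta_F(a_full, a_red, n):
    """Bayesian model reduction for a Dirichlet-categorical model:
    log-evidence difference (reduced minus full) for the same counts n,
    using ln p(n | a) = ln B(a + n) - ln B(a)."""
    post_full = [a + c for a, c in zip(a_full, n)]
    post_red = [a + c for a, c in zip(a_red, n)]
    return (log_beta(post_red) - log_beta(a_red)) \
         - (log_beta(post_full) - log_beta(a_full))

a_full = [1.0, 1.0, 1.0]   # flat prior: a third hidden-state 'slot' is available
a_red = [1.0, 1.0, 1e-3]   # reduced prior: the third slot is effectively off

unused = delta_F(a_full, a_red, [10, 8, 0])  # slot never engaged: reduce
used = delta_F(a_full, a_red, [10, 8, 7])    # slot explains novel data: keep
```

A positive difference favours resetting the slot in favour of the simpler model; a negative difference retains the expanded state space.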
11. An Investigation of the Free Energy Principle for Emotion Recognition.
- Author
-
Demekas, Daphne, Parr, Thomas, and Friston, Karl J.
- Abstract
This paper offers a prospectus of what might be achievable in the development of emotion recognition devices. It provides a conceptual overview of the free energy principle, including Markov blankets, active inference, and, in particular, a discussion of selfhood and theory of mind, followed by a brief explanation of how these concepts can explain both neural and cultural models of emotional inference. The underlying hypothesis is that emotion recognition and inference devices will evolve from state-of-the-art deep learning models into active inference schemes that go beyond marketing applications and become an adjunct to psychiatric practice. Specifically, this paper proposes that a second wave of emotion recognition devices will be equipped with an emotional lexicon (or the ability to epistemically search for one), allowing the device to resolve uncertainty about emotional states by actively eliciting responses from the user and learning from these responses. Following this, a third wave of emotional devices will converge upon the user's generative model, resulting in the machine and human engaging in a reciprocal, prosocial emotional interaction, i.e., sharing a generative model of emotional states. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
12. Simulating Emotions: An Active Inference Model of Emotional State Inference and Emotion Concept Learning.
- Author
-
Smith, Ryan, Parr, Thomas, and Friston, Karl J.
- Subjects
CONCEPT learning, EMOTIONS, INFERENCE (Logic), SELF-consciousness (Awareness) - Abstract
The ability to conceptualize and understand one's own affective states and responses – or "Emotional awareness" (EA) – is reduced in multiple psychiatric populations; it is also positively correlated with a range of adaptive cognitive and emotional traits. While a growing body of work has investigated the neurocognitive basis of EA, the neurocomputational processes underlying this ability have received limited attention. Here, we present a formal Active Inference (AI) model of emotion conceptualization that can simulate the neurocomputational (Bayesian) processes associated with learning about emotion concepts and inferring the emotions one is feeling in a given moment. We validate the model and inherent constructs by showing (i) it can successfully acquire a repertoire of emotion concepts in its "childhood", as well as (ii) acquire new emotion concepts in synthetic "adulthood," and (iii) that these learning processes depend on early experiences, environmental stability, and habitual patterns of selective attention. These results offer a proof of principle that cognitive-emotional processes can be modeled formally, and highlight the potential for both theoretical and empirical extensions of this line of research on emotion and emotional disorders. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Precision and False Perceptual Inference.
- Author
-
Parr, Thomas, Benrimoh, David A., Vincent, Peter, and Friston, Karl J.
- Subjects
NEURODEGENERATION, LEWY body dementia, SENSORY perception, CHOLINERGIC mechanisms, FREE energy (Thermodynamics) - Abstract
Accurate perceptual inference fundamentally depends upon accurate beliefs about the reliability of sensory data. In this paper, we describe a Bayes optimal and biologically plausible scheme that refines these beliefs through a gradient descent on variational free energy. To illustrate this, we simulate belief updating during visual foraging and show that changes in estimated sensory precision (i.e., confidence in visual data) are highly sensitive to prior beliefs about the contents of a visual scene. In brief, confident prior beliefs induce an increase in estimated precision when consistent with sensory evidence, but a decrease when they conflict. Prior beliefs held with low confidence are rapidly updated to posterior beliefs, determined by sensory data. These induce much smaller changes in beliefs about sensory precision. We argue that pathologies of scene construction may be due to abnormal priors, and show that these can induce a reduction in estimated sensory precision. Having previously associated this precision with cholinergic signaling, we note that several neurodegenerative conditions are associated with visual disturbances and cholinergic deficits; notably, the synucleinopathies. On relating the message passing in our model to the functional anatomy of the ventral visual stream, we find that simulated neuronal loss in temporal lobe regions induces confident, inaccurate, empirical prior beliefs at lower levels in the visual hierarchy. This provides a plausible, if speculative, computational mechanism for the loss of cholinergic signaling and the visual disturbances associated with temporal lobe Lewy body pathology. This may be seen as an illustration of the sorts of hypotheses that may be expressed within this computational framework. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
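The central mechanism above, refining beliefs about sensory precision by gradient descent on variational free energy, can be caricatured with a single Gaussian prediction error. For an F containing 0.5 * (pi * err**2 - ln pi), descending dF/dpi drives the precision estimate toward 1/err**2, so estimated precision falls when confident priors conflict with the data. All numbers below are illustrative and not from the paper's simulations:

```python
def estimate_precision(o, mu, pi0=1.0, steps=5000, dt=0.05):
    """Gradient descent on F = 0.5 * (pi * (o - mu)**2 - ln pi)
    (plus terms not involving pi), so dF/dpi = 0.5 * ((o - mu)**2 - 1/pi).
    The fixed point is pi = 1 / (o - mu)**2."""
    pi = pi0
    err2 = (o - mu) ** 2
    for _ in range(steps):
        pi -= dt * 0.5 * (err2 - 1.0 / pi)
        pi = max(pi, 1e-6)  # precision must stay positive
    return pi

# Prior expectation mu = 1.0: a nearby datum supports high confidence in
# the senses; a conflicting datum drives estimated sensory precision down.
pi_consistent = estimate_precision(o=1.5, mu=1.0)  # err = 0.5 -> pi near 4
pi_conflict = estimate_precision(o=3.0, mu=1.0)    # err = 2.0 -> pi near 0.25
```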
14. Computational Neuropsychology and Bayesian Inference.
- Author
-
Parr, Thomas, Rees, Geraint, and Friston, Karl J.
- Subjects
NEUROPSYCHOLOGY, BAYESIAN analysis, COMPUTATIONAL neuroscience, PSYCHIATRIC research, BRAIN diseases - Abstract
Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
15. Neural Dynamics under Active Inference: Plausibility and Efficiency of Information Processing.
- Author
-
Da Costa, Lancelot, Parr, Thomas, Sengupta, Biswa, and Friston, Karl
- Subjects
INFORMATION processing, ACTION potentials, MEMBRANE potential, TEST validity - Abstract
Active inference is a normative framework for explaining behaviour under the free energy principle—a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state-estimation in terms of a descent on (variational) free energy—a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error—plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
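The mapping sketched in the abstract above, membrane potentials as log expectations whose descent on free energy yields firing rates via a softmax, can be caricatured for a single categorical state factor. The two-state model below is invented for illustration; at the fixed point, the simulated "firing rate" equals the exact posterior:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Toy likelihood A[o, s] and prior D[s] (illustrative numbers)
A = np.array([[0.85, 0.30],
              [0.15, 0.70]])
D = np.array([0.5, 0.5])

def simulate(o, steps=64, dt=0.25):
    """v plays the role of average membrane potential (a log expectation);
    s = softmax(v) plays the role of population firing rate. The update
    descends the free energy gradient, whose fixed point is the log joint
    probability of (o, s)."""
    v = np.log(D)                      # initialise beliefs at the prior
    target = np.log(A[o]) + np.log(D)  # log joint, fixed for one observation
    rates = []
    for _ in range(steps):
        v = v + dt * (target - v)      # prediction-error-driven dynamics
        rates.append(softmax(v))
    return np.array(rates)

rates = simulate(o=0)                              # synthetic response
posterior = A[0] * D / (A[0] * D).sum()            # exact posterior
```

Plotting `rates` over steps yields the kind of smooth evidence-accumulation trajectory the paper compares with electrophysiological responses.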
16. Inferring What to Do (And What Not to).
- Author
-
Parr, Thomas
- Subjects
STATISTICAL decision making, MESSAGE passing (Computer science), NEUROANATOMY, NERVOUS system, INFORMATION theory - Abstract
In recent years, the "planning as inference" paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about "how I am going to act". This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine the influence of perturbations of the nervous system—which include pathological insults or optogenetic manipulations—to see their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
17. The Markov blankets of life: autonomy, active inference and the free energy principle
- Author
-
Kirchhoff, Michael, Parr, Thomas, Palacios, Ensor, Friston, Karl, and Kiverstein, Julian
- Subjects
Markov blanket, Entropy, Review Article, Models, Biological, Markov Chains, active inference, ensemble Markov blanket, free energy principle, autonomy, Review Articles - Abstract
This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme, a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket, thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets, all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library