31 results for "Parr, Thomas"
Search Results
2. Generating meaning: active inference and the scope and limits of passive AI.
- Author
- Pezzulo, Giovanni, Parr, Thomas, Cisek, Paul, Clark, Andy, and Friston, Karl
- Subjects
- GENERATIVE artificial intelligence, LANGUAGE models, ARTIFICIAL intelligence
- Abstract
Generative artificial intelligence (AI) systems, such as large language models (LLMs), have achieved remarkable performance in various tasks such as text and image generation. We discuss the foundations of generative AI systems by comparing them with our current understanding of living organisms, when seen as active inference systems. Both generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways. Living organisms and active inference agents learn their generative models by engaging in purposive interactions with the environment and by predicting these interactions. This provides them with a core understanding and a sense of mattering, upon which their subsequent knowledge is grounded. Future generative AI systems might follow the same (biomimetic) approach – and learn the affordances implicit in embodied engagement with the world before – or instead of – being trained passively. Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Generalised free energy and active inference
- Author
- Parr, Thomas and Friston, Karl J.
- Published
- 2019
- Full Text
- View/download PDF
4. The computational pharmacology of oculomotion
- Author
- Parr, Thomas and Friston, Karl J.
- Published
- 2019
- Full Text
- View/download PDF
5. Immunoceptive inference: why are psychiatric disorders and immune responses intertwined?
- Author
- Bhat, Anjali, Parr, Thomas, Ramstead, Maxwell, and Friston, Karl
- Published
- 2021
- Full Text
- View/download PDF
6. Reclaiming saliency: Rhythmic precision-modulated action and perception.
- Author
-
Meera, Ajith Anil, Novicky, Filip, Parr, Thomas, Friston, Karl, Lanillos, Pablo, and Sajid, Noor
- Subjects
HEISENBERG uncertainty principle ,ATTENTION control ,ARTIFICIAL intelligence ,COGNITIVE robotics ,CONCEPT mapping - Abstract
Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimization and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimization that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase its advantages for state and noise estimation, system identification and action selection for informative path planning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
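The distinction this abstract draws between attention (precision control) and salience (uncertainty minimisation that selects future sensory data) can be illustrated with a toy calculation. The sketch below scores a candidate sensory sample by its expected information gain about hidden states; the function name `info_gain`, the two-state example, and the likelihood matrices are illustrative assumptions, not the authors' code.

```python
import numpy as np

def info_gain(A, q):
    """Salience of sampling a channel with likelihood A (outcomes x states),
    given posterior beliefs q over hidden states:
    I(o; s) = H[E_q p(o|s)] - E_q H[p(o|s)]  (expected information gain)."""
    H = lambda p: -np.sum(p * np.log(p + 1e-16))   # Shannon entropy (nats)
    predicted = A @ q                              # predictive distribution over outcomes
    ambiguity = sum(q[s] * H(A[:, s]) for s in range(len(q)))
    return H(predicted) - ambiguity

A_sharp = np.array([[0.9, 0.1],                    # reliable mapping: observations
                    [0.1, 0.9]])                   # resolve uncertainty about states
A_flat = np.full((2, 2), 0.5)                      # uninformative mapping
q = np.array([0.5, 0.5])                           # maximally uncertain beliefs
```

Under this score, sampling through the sharp likelihood is worth about 0.37 nats while the flat likelihood is worth nothing, so an agent choosing where to look next by maximising this quantity would avoid uninformative locations.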
7. The evolution of brain architectures for predictive coding and active inference.
- Author
- Pezzulo, Giovanni, Parr, Thomas, and Friston, Karl
- Subjects
- ANIMAL habitations, ANIMAL species, PROBLEM solving, NATURAL selection
- Abstract
This article considers the evolution of brain architectures for predictive processing. We argue that brain mechanisms for predictive perception and action are not late evolutionary additions of advanced creatures like us. Rather, they emerged gradually from simpler predictive loops (e.g. autonomic and motor reflexes) that were a legacy from our earlier evolutionary ancestors—and were key to solving their fundamental problems of adaptive regulation. We characterize simpler-to-more-complex brains formally, in terms of generative models that include predictive loops of increasing hierarchical breadth and depth. These may start from a simple homeostatic motif and be elaborated during evolution in four main ways: these include the multimodal expansion of predictive control into an allostatic loop; its duplication to form multiple sensorimotor loops that expand an animal's behavioural repertoire; and the gradual endowment of generative models with hierarchical depth (to deal with aspects of the world that unfold at different spatial scales) and temporal depth (to select plans in a future-oriented manner). In turn, these elaborations underwrite the solution to biological regulation problems faced by increasingly sophisticated animals. Our proposal aligns neuroscientific theorising—about predictive processing—with evolutionary and comparative data on brain architectures in different animal species. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. Active inference as a theory of sentient behavior.
- Author
- Pezzulo, Giovanni, Parr, Thomas, and Friston, Karl
- Subjects
- ARTIFICIAL intelligence, DIRECT action, SUBLIMINAL perception, PATHOLOGICAL psychology, ROOT development, INFERENCE (Logic), MACHINE learning
- Abstract
This review paper offers an overview of the history and future of active inference—a unifying perspective on action and perception. Active inference is based upon the idea that sentient behavior depends upon our brains' implicit use of internal models to predict, infer, and direct action. Our focus is upon the conceptual roots and development of this theory of (basic) sentience and does not follow a rigid chronological narrative. We trace the evolution from Helmholtzian ideas on unconscious inference, through to a contemporary understanding of action and perception. In doing so, we touch upon related perspectives, the neural underpinnings of active inference, and the opportunities for future development. Key steps in this development include the formulation of predictive coding models and related theories of neuronal message passing, the use of sequential models for planning and policy optimization, and the importance of hierarchical (temporally) deep internal (i.e., generative or world) models. Active inference has been used to account for aspects of anatomy and neurophysiology, to offer theories of psychopathology in terms of aberrant precision control, and to unify extant psychological theories. We anticipate further development in all these areas and note the exciting early work applying active inference beyond neuroscience. This suggests a future not just in biology, but in robotics, machine learning, and artificial intelligence. • We review the history and future of active inference: a unifying perspective on action and perception. • We discuss the conceptual roots of active inference, its current status and promising future directions. • We highlight the importance of the brain's generative models to predict, infer, and direct action. • We critically discuss how active inference can unify extant psychological theories. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Active inference, selective attention, and the cocktail party problem.
- Author
- Holmes, Emma, Parr, Thomas, Griffiths, Timothy D., and Friston, Karl J.
- Subjects
- SELECTIVITY (Psychology), COCKTAIL parties, EVOKED potentials (Electrophysiology), ATTENTION
- Abstract
• New generative model for selective attention during cocktail party listening. • Computational 'lesions' in the model dissociate different errors during word report. • We model different temporal hypotheses for preparatory attention. • Temporal changes in precision are necessary to explain ERPs but not reaction times. • CNV-like responses can be explained by subjective precision rather than action. In this paper, we introduce a new generative model for an active inference account of preparatory and selective attention, in the context of a classic 'cocktail party' paradigm. In this setup, pairs of words are presented simultaneously to the left and right ears and an instructive spatial cue directs attention to the left or right. We use this generative model to test competing hypotheses about the way that human listeners direct preparatory and selective attention. We show that assigning low precision to words at attended—relative to unattended—locations can explain why a listener reports words from a competing sentence. Under this model, temporal changes in sensory precision were not needed to account for faster reaction times with longer cue-target intervals, but were necessary to explain ramping effects on event-related potentials (ERPs)—resembling the contingent negative variation (CNV)—during the preparatory interval. These simulations reveal that different processes are likely to underlie the improvement in reaction times and the ramping of ERPs that are associated with spatial cueing. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Understanding, Explanation, and Active Inference.
- Author
- Parr, Thomas and Pezzulo, Giovanni
- Subjects
- PROBLEM solving, ARTIFICIAL intelligence, DECISION making, MACHINE learning
- Abstract
While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machine is able to explain its behaviour (i.e., explainable AI). In this article, we address the issue of machine understanding from the perspective of active inference. This paradigm enables decision making based upon a model of how data are generated. The generative model contains those variables required to explain sensory data, and its inversion may be seen as an attempt to explain the causes of these data. Here we are interested in explanations of one's own actions. This implies a deep generative model that includes a model of the world, used to infer policies, and a higher-level model that attempts to predict which policies will be selected based upon a space of hypothetical (i.e., counterfactual) explanations—and which can subsequently be used to provide (retrospective) explanations about the policies pursued. We illustrate the construct validity of this notion of understanding in relation to human understanding by highlighting the similarities in computational architecture and the consequences of its dysfunction. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. The computational neurology of movement under active inference.
- Author
- Parr, Thomas, Limanowski, Jakub, Rawji, Vishal, and Friston, Karl
- Subjects
- PYRAMIDAL tract, NEUROLOGY, NERVOUS system, TENDONS, BRAIN physiology, COMPUTER simulation, BIOLOGICAL models, RESEARCH, RESEARCH methodology, MEDICAL cooperation, EVALUATION research, COMPARATIVE studies, BODY movement, RESEARCH funding
- Abstract
We propose a computational neurology of movement based on the convergence of theoretical neurobiology and clinical neurology. A significant development in the former is the idea that we can frame brain function as a process of (active) inference, in which the nervous system makes predictions about its sensory data. These predictions depend upon an implicit predictive (generative) model used by the brain. This means neural dynamics can be framed as generating actions to ensure sensations are consistent with these predictions-and adjusting predictions when they are not. We illustrate the significance of this formulation for clinical neurology by simulating a clinical examination of the motor system using an upper limb coordination task. Specifically, we show how tendon reflexes emerge naturally under the right kind of generative model. Through simulated perturbations, pertaining to prior probabilities of this model's variables, we illustrate the emergence of hyperreflexia and pendular reflexes, reminiscent of neurological lesions in the corticospinal tract and cerebellum. We then turn to the computational lesions causing hypokinesia and deficits of coordination. This in silico lesion-deficit analysis provides an opportunity to revisit classic neurological dichotomies (e.g. pyramidal versus extrapyramidal systems) from the perspective of modern approaches to theoretical neurobiology-and our understanding of the neurocomputational architecture of movement control based on first principles. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Federated inference and belief sharing.
- Author
- Friston, Karl J., Parr, Thomas, Heins, Conor, Constant, Axel, Friedman, Daniel, Isomura, Takuya, Fields, Chris, Verbelen, Tim, Ramstead, Maxwell, Clippinger, John, and Frith, Christopher D.
- Subjects
- FEDERATED learning, ACTIVE learning, LANGUAGE acquisition, SPEECH
- Abstract
This paper concerns the distributed intelligence or federated inference that emerges under belief-sharing among agents who share a common world—and world model. Imagine, for example, several animals keeping a lookout for predators. Their collective surveillance rests upon being able to communicate their beliefs—about what they see—among themselves. But, how is this possible? Here, we show how all the necessary components arise from minimising free energy. We use numerical studies to simulate the generation, acquisition and emergence of language in synthetic agents. Specifically, we consider inference, learning and selection as minimising the variational free energy of posterior (i.e., Bayesian) beliefs about the states, parameters and structure of generative models, respectively. The common theme—that attends these optimisation processes—is the selection of actions that minimise expected free energy, leading to active inference, learning and model selection (a.k.a., structure learning). We first illustrate the role of communication in resolving uncertainty about the latent states of a partially observed world, on which agents have complementary perspectives. We then consider the acquisition of the requisite language—entailed by a likelihood mapping from an agent's beliefs to their overt expression (e.g., speech)—showing that language can be transmitted across generations by active learning. Finally, we show that language is an emergent property of free energy minimisation, when agents operate within the same econiche. We conclude with a discussion of various perspectives on these phenomena; ranging from cultural niche construction, through federated learning, to the emergence of complexity in ensembles of self-organising systems. • Communication—and language in particular—is an emergent property of agents that seek evidence for generative models of their shared world. 
• Nested free energy minimising—evidence maximising—processes explain the emergence of language and its transmission over generations. • Reading these processes as inference integrates perspectives on communication; from generalised synchrony to cultural niche construction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case.
- Author
- Smith, Ryan, Schwartenbeck, Philipp, Parr, Thomas, and Friston, Karl J.
- Subjects
- CONCEPT learning, COMPUTATIONAL neuroscience, COGNITIVE learning, COGNITIVE science, STIMULUS generalization
- Abstract
Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning—and specifically state-space expansion and reduction—within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) "slots" that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning—associated with these slots—can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model's ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of "one-shot" generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. 
They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
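The Bayesian model reduction step this abstract describes (resetting an unused concept "slot" in favour of a simpler model with higher evidence) can be sketched for a single Dirichlet-distributed outcome distribution. Everything below (function names, priors, counts) is an illustrative assumption, not the paper's implementation; the only substantive ingredient is the standard identity that Dirichlet evidence ratios reduce to multivariate beta functions.

```python
from math import lgamma

def ln_beta(a):
    """Log of the multivariate beta function for a Dirichlet parameter vector."""
    return sum(lgamma(x) for x in a) - lgamma(sum(a))

def delta_evidence(counts, prior_full, prior_reduced):
    """Change in log model evidence if the full Dirichlet prior is swapped for a
    reduced prior, computed post hoc from the same data (Bayesian model reduction).
    Positive values favour the reduced (simpler) model."""
    post_full = [p + c for p, c in zip(prior_full, counts)]
    post_reduced = [p + c for p, c in zip(prior_reduced, counts)]
    return (ln_beta(post_reduced) - ln_beta(prior_reduced)
            - ln_beta(post_full) + ln_beta(prior_full))

# A spare third "slot" that the data never engage: pruning it (near-zero prior
# concentration) increases the evidence, so the reduced model is retained.
unused = delta_evidence([10, 9, 0], [1, 1, 1], [1, 1, 0.01])
# The same slot, actually used by observations: pruning now costs accuracy.
used = delta_evidence([10, 9, 5], [1, 1, 1], [1, 1, 0.01])
```

The sign of the returned quantity is the whole decision rule: `unused` comes out positive (prune the slot), `used` negative (keep it).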
14. An Investigation of the Free Energy Principle for Emotion Recognition.
- Author
- Demekas, Daphne, Parr, Thomas, and Friston, Karl J.
- Abstract
This paper offers a prospectus of what might be achievable in the development of emotional recognition devices. It provides a conceptual overview of the free energy principle; including Markov blankets, active inference, and—in particular—a discussion of selfhood and theory of mind, followed by a brief explanation of how these concepts can explain both neural and cultural models of emotional inference. The underlying hypothesis is that emotion recognition and inference devices will evolve from state-of-the-art deep learning models into active inference schemes that go beyond marketing applications and become adjunct to psychiatric practice. Specifically, this paper proposes that a second wave of emotion recognition devices will be equipped with an emotional lexicon (or the ability to epistemically search for one), allowing the device to resolve uncertainty about emotional states by actively eliciting responses from the user and learning from these responses. Following this, a third wave of emotional devices will converge upon the user's generative model, resulting in the machine and human engaging in a reciprocal, prosocial emotional interaction, i.e., sharing a generative model of emotional states. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. Simulating Emotions: An Active Inference Model of Emotional State Inference and Emotion Concept Learning.
- Author
- Smith, Ryan, Parr, Thomas, and Friston, Karl J.
- Subjects
- CONCEPT learning, EMOTIONS, INFERENCE (Logic), SELF-consciousness (Awareness)
- Abstract
The ability to conceptualize and understand one's own affective states and responses – or "Emotional awareness" (EA) – is reduced in multiple psychiatric populations; it is also positively correlated with a range of adaptive cognitive and emotional traits. While a growing body of work has investigated the neurocognitive basis of EA, the neurocomputational processes underlying this ability have received limited attention. Here, we present a formal Active Inference (AI) model of emotion conceptualization that can simulate the neurocomputational (Bayesian) processes associated with learning about emotion concepts and inferring the emotions one is feeling in a given moment. We validate the model and inherent constructs by showing (i) it can successfully acquire a repertoire of emotion concepts in its "childhood", as well as (ii) acquire new emotion concepts in synthetic "adulthood," and (iii) that these learning processes depend on early experiences, environmental stability, and habitual patterns of selective attention. These results offer a proof of principle that cognitive-emotional processes can be modeled formally, and highlight the potential for both theoretical and empirical extensions of this line of research on emotion and emotional disorders. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
16. The Anatomy of Inference: Generative Models and Brain Structure.
- Author
- Parr, Thomas and Friston, Karl J.
- Abstract
To infer the causes of its sensations, the brain must call on a generative (predictive) model. This necessitates passing local messages between populations of neurons to update beliefs about hidden variables in the world beyond its sensory samples. It also entails inferences about how we will act. Active inference is a principled framework that frames perception and action as approximate Bayesian inference. This has been successful in accounting for a wide range of physiological and behavioral phenomena. Recently, a process theory has emerged that attempts to relate inferences to their neurobiological substrates. In this paper, we review and develop the anatomical aspects of this process theory. We argue that the form of the generative models required for inference constrains the way in which brain regions connect to one another. Specifically, neuronal populations representing beliefs about a variable must receive input from populations representing the Markov blanket of that variable. We illustrate this idea in four different domains: perception, planning, attention, and movement. In doing so, we attempt to show how appealing to generative models enables us to account for anatomical brain architectures. Ultimately, committing to an anatomical theory of inference ensures we can form empirical hypotheses that can be tested using neuroimaging, neuropsychological, and electrophysiological experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. Precision and False Perceptual Inference.
- Author
- Parr, Thomas, Benrimoh, David A., Vincent, Peter, and Friston, Karl J.
- Subjects
- NEURODEGENERATION, LEWY body dementia, SENSORY perception, CHOLINERGIC mechanisms, FREE energy (Thermodynamics)
- Abstract
Accurate perceptual inference fundamentally depends upon accurate beliefs about the reliability of sensory data. In this paper, we describe a Bayes optimal and biologically plausible scheme that refines these beliefs through a gradient descent on variational free energy. To illustrate this, we simulate belief updating during visual foraging and show that changes in estimated sensory precision (i.e., confidence in visual data) are highly sensitive to prior beliefs about the contents of a visual scene. In brief, confident prior beliefs induce an increase in estimated precision when consistent with sensory evidence, but a decrease when they conflict. Prior beliefs held with low confidence are rapidly updated to posterior beliefs, determined by sensory data. These induce much smaller changes in beliefs about sensory precision. We argue that pathologies of scene construction may be due to abnormal priors, and show that these can induce a reduction in estimated sensory precision. Having previously associated this precision with cholinergic signaling, we note that several neurodegenerative conditions are associated with visual disturbances and cholinergic deficits; notably, the synucleinopathies. On relating the message passing in our model to the functional anatomy of the ventral visual stream, we find that simulated neuronal loss in temporal lobe regions induces confident, inaccurate, empirical prior beliefs at lower levels in the visual hierarchy. This provides a plausible, if speculative, computational mechanism for the loss of cholinergic signaling and the visual disturbances associated with temporal lobe Lewy body pathology. This may be seen as an illustration of the sorts of hypotheses that may be expressed within this computational framework. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
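The abstract's central simulation result (confident prior beliefs that conflict with sensory evidence drive down estimated sensory precision, while consistent ones raise it) can be reproduced in miniature as a gradient descent on variational free energy for one Gaussian observation. The model, parameter values, and function name below are assumptions for illustration, not the paper's scheme.

```python
def simulate(y, eta, pi_prior, gamma0=1.0, lr=0.05, steps=2000):
    """Descend the variational free energy
        F = 0.5 * (gamma*(y - mu)**2 - log(gamma) + pi_prior*(mu - eta)**2)
    in mu (belief about the hidden cause; prior mean eta, prior precision pi_prior)
    and gamma (estimated sensory precision) for a single observation y."""
    mu, gamma = eta, gamma0
    for _ in range(steps):
        dF_dmu = -gamma * (y - mu) + pi_prior * (mu - eta)
        dF_dgamma = 0.5 * ((y - mu) ** 2 - 1.0 / gamma)
        mu -= lr * dF_dmu
        gamma = max(gamma - lr * dF_dgamma, 1e-6)  # precision stays positive
    return mu, gamma

# Confident prior consistent with the data: estimated precision climbs.
mu_c, gamma_c = simulate(y=1.0, eta=1.0, pi_prior=8.0)
# Equally confident prior conflicting with the data: a large residual error
# persists, so the scheme infers the senses are unreliable and gamma collapses.
mu_x, gamma_x = simulate(y=1.0, eta=-1.0, pi_prior=8.0)
```

This is the qualitative pattern the abstract reports: abnormal, confidently held priors induce a reduction in estimated sensory precision.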
18. Cognitive effort and active inference.
- Author
- Parr, Thomas, Holmes, Emma, Friston, Karl J., and Pezzulo, Giovanni
- Subjects
- EXECUTIVE function, CONTROL (Psychology), STROOP effect, COGNITIVE neuroscience, COGNITIVE ability, NEUROPSYCHOLOGY
- Abstract
This paper aims to integrate some key constructs in the cognitive neuroscience of cognitive control and executive function by formalising the notion of cognitive (or mental) effort in terms of active inference. To do so, we call upon a task used in neuropsychology to assess impulse inhibition—a Stroop task. In this task, participants must suppress the impulse to read a colour word and instead report the colour of the text of the word. The Stroop task is characteristically effortful, and we unpack a theory of mental effort in which, to perform this task accurately, participants must overcome prior beliefs about how they would normally act. However, our interest here is not in overt action, but in covert (mental) action. Mental actions change our beliefs but have no (direct) effect on the outside world—much like deploying covert attention. This account of effort as mental action lets us generate multimodal (choice, reaction time, and electrophysiological) data of the sort we might expect from a human participant engaging in this task. We analyse how parameters determining cognitive effort influence simulated responses and demonstrate that—when provided only with performance data—these parameters can be recovered, provided they are within a certain range. • This paper offers a formalisation of 'cognitive effort' under the active inference framework. • Cognitive effort is formulated as a deviation from prior beliefs about mental (covert) action—i.e., effort is exerted to overcome a mental habit. • A computational model of the Stroop task—a characteristically effortful task—is developed to illustrate this notion of effort. • We demonstrate that it is possible to recover combinations of effort-related model parameters from simulated data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Active inference and the anatomy of oculomotion.
- Author
- Parr, Thomas and Friston, Karl J.
- Subjects
- FREE energy (Thermodynamics), SACCADIC eye movements, OCULOMOTOR nerve, BRAIN stem, INFERENCE (Logic)
- Abstract
Given that eye movement control can be framed as an inferential process, how are the requisite forces generated to produce anticipated or desired fixation? Starting from a generative model based on simple Newtonian equations of motion, we derive a variational solution to this problem and illustrate the plausibility of its implementation in the oculomotor brainstem. We show, through simulation, that the Bayesian filtering equations that implement ‘planning as inference’ can generate both saccadic and smooth pursuit eye movements. Crucially, the associated message passing maps well onto the known connectivity and neuroanatomy of the brainstem – and the changes in these messages over time are strikingly similar to single unit recordings of neurons in the corresponding nuclei. Furthermore, we show that simulated lesions to axonal pathways reproduce eye movement patterns of neurological patients with damage to these tracts. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. Computational Neuropsychology and Bayesian Inference.
- Author
- Parr, Thomas, Rees, Geraint, and Friston, Karl J.
- Subjects
- NEUROPSYCHOLOGY, BAYESIAN analysis, COMPUTATIONAL neuroscience, PSYCHIATRIC research, BRAIN diseases
- Abstract
Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. Neural Dynamics under Active Inference: Plausibility and Efficiency of Information Processing.
- Author
- Da Costa, Lancelot, Parr, Thomas, Sengupta, Biswa, and Friston, Karl
- Subjects
- INFORMATION processing, ACTION potentials, MEMBRANE potential, TEST validity
- Abstract
Active inference is a normative framework for explaining behaviour under the free energy principle—a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state-estimation in terms of a descent on (variational) free energy—a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error—plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
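As a rough sketch of the scheme described above (our own toy example with invented numbers, not the paper's simulations): beliefs descend a variational free energy whose gradient is a prediction error, with log-beliefs playing the role of membrane potentials and softmaxed beliefs the role of firing rates.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Unnormalised log joint over 3 hidden states: ln p(o | s) + ln p(s)
log_p = np.log([0.7, 0.2, 0.1]) + np.log([0.5, 0.3, 0.2])

def free_energy(q):
    # Divergence between the belief q and the (unnormalised) joint
    return float(np.sum(q * (np.log(q) - log_p)))

v = np.zeros(3)                    # log-beliefs (cf. membrane potentials)
for _ in range(1000):
    q = softmax(v)                 # beliefs (cf. firing rates)
    eps = np.log(q) - log_p        # prediction error
    v -= 0.5 * q * (eps - free_energy(q))  # descent on the free energy gradient
print(softmax(v))                  # converges to the exact posterior p(s | o)
```

The fixed point is reached when the prediction error is constant across states, i.e. when the belief equals the normalised posterior; the natural-gradient refinement discussed in the paper amounts to preconditioning this descent with the Fisher information metric.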
22. Inferring What to Do (And What Not to).
- Author
- Parr, Thomas
- Subjects
- STATISTICAL decision making, MESSAGE passing (Computer science), NEUROANATOMY, NERVOUS system, INFORMATION theory
- Abstract
In recent years, the "planning as inference" paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about "how I am going to act". This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine the influence of perturbations of the nervous system—which include pathological insults or optogenetic manipulations—to see their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. Perceptual awareness and active inference.
- Author
- Parr, Thomas, Corcoran, Andrew W, Friston, Karl J, and Hohwy, Jakob
- Subjects
- BINOCULAR rivalry, TROXLER fading, BAYESIAN analysis
- Abstract
Perceptual awareness depends upon the way in which we engage with our sensorium. This notion is central to active inference, a theoretical framework that treats perception and action as inferential processes. This variational perspective on cognition formalizes the notion of perception as hypothesis testing and treats actions as experiments that are designed (in part) to gather evidence for or against alternative hypotheses. The common treatment of perception and action affords a useful interpretation of certain perceptual phenomena whose active component is often not acknowledged. In this article, we start by considering Troxler fading – the dissipation of a peripheral percept during maintenance of fixation, and its recovery during free (saccadic) exploration. This offers an important example of the failure to maintain a percept without actively interrogating a visual scene. We argue that this may be understood in terms of the accumulation of uncertainty about a hypothesized stimulus when free exploration is disrupted by experimental instructions or pathology. Once we take this view, we can generalize the idea of using bodily (oculomotor) action to resolve uncertainty to include the use of mental (attentional) actions for the same purpose. This affords a useful way to think about binocular rivalry paradigms, in which perceptual changes need not be associated with an overt movement. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Searching for an Anchor in an Unpredictable World: A Computational Model of Obsessive Compulsive Disorder.
- Author
- Fradkin, Isaac, Adams, Rick A., Parr, Thomas, Roiser, Jonathan P., and Huppert, Jonathan D.
- Subjects
- OBSESSIVE-compulsive disorder, ACTION theory (Psychology), FORECASTING, ANCHORS, DISABILITIES
- Abstract
In this article, we develop a computational model of obsessive–compulsive disorder (OCD). We propose that OCD is characterized by a difficulty in relying on past events to predict the consequences of patients' own actions and the unfolding of possible events. Clinically, this corresponds both to patients' difficulty in trusting their own actions (and therefore repeating them), and to their common preoccupation with unlikely chains of events. Critically, we develop this idea on the basis of the well-developed framework of the Bayesian brain, where this impairment is formalized as excessive uncertainty regarding state transitions. We illustrate the validity of this idea using quantitative simulations and use these to form specific empirical predictions. These predictions are evaluated in relation to existing evidence, and are used to delineate directions for future research. We show how seemingly unrelated findings and phenomena in OCD can be explained by the model, including a persistent experience that actions were not adequately performed and a tendency to repeat actions; excessive information gathering (i.e., checking); indecisiveness and pathological doubt; overreliance on habits at the expense of goal-directed behavior; and overresponsiveness to sensory stimuli, thoughts, and feedback. We discuss the relationship and interaction between our model and other prominent models of OCD, including models focusing on harm-avoidance, not-just-right experiences, or impairments in goal-directed behavior. Finally, we outline potential clinical implications and suggest lines for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
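The central formal idea, excessive uncertainty about state transitions, can be sketched in a few lines (a toy illustration with hypothetical numbers, not the authors' full MDP model): when the transition model is imprecise, confidence in the outcome of one's own action decays toward chance, inviting re-checking.

```python
import numpy as np

def propagate(belief, precision, steps):
    """Propagate a belief through a transition model of given precision:
    with probability `precision`, the state is believed to persist."""
    T = np.array([[precision, 1 - precision],
                  [1 - precision, precision]])
    for _ in range(steps):
        belief = T @ belief
    return belief

# Belief that a just-checked action succeeded (e.g., "the door is locked")
certain_after_check = np.array([0.99, 0.01])
print(propagate(certain_after_check, 0.99, 10))  # precise model: confidence retained
print(propagate(certain_after_check, 0.70, 10))  # imprecise model: belief decays toward 0.5
```

Under the imprecise model, the only way to restore confidence is to gather fresh evidence, i.e. to check again, which is the link to compulsive checking drawn in the article.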
25. Neurocomputational mechanisms underlying emotional awareness: Insights afforded by deep active inference and their potential clinical relevance.
- Author
- Smith, Ryan, Lane, Richard D., Parr, Thomas, and Friston, Karl J.
- Subjects
- AWARENESS, INDIVIDUALIZED medicine, PATIENT selection, MENTAL illness
- Abstract
• Low emotional awareness (EA) is associated with multiple clinical conditions.
• The neurocomputational processes underlying EA are poorly understood.
• We present a deep active inference model of EA that can simulate these processes.
• This model illustrates 7 distinct mechanisms whereby aberrant processing produces low EA.
• This may offer distinct targets that could inform individualized treatment selection.
Emotional awareness (EA) is recognized as clinically relevant to the vulnerability to, and maintenance of, psychiatric disorders. However, the neurocomputational processes that underwrite individual variations remain unclear. In this paper, we describe a deep (active) inference model that reproduces the cognitive-emotional processes and self-report behaviors associated with EA. We then present simulations to illustrate (seven) distinct mechanisms that (either alone or in combination) can produce phenomena – such as somatic misattribution, coarse-grained emotion conceptualization, and constrained reflective capacity – characteristic of low EA. Our simulations suggest that the clinical phenotype of impoverished EA can be reproduced by dissociable computational processes. The possibility that different processes are at work in different individuals suggests that they may benefit from distinct clinical interventions. As active inference makes particular predictions about the underlying neurobiology of such aberrant inference, we also discuss how this type of modelling could be used to design neuroimaging tasks to test predictions and identify which processes operate in different individuals – and provide a principled basis for personalized precision medicine. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Free-energy minimization in joint agent-environment systems: A niche construction perspective.
- Author
- Bruineberg, Jelle, Rietveld, Erik, Parr, Thomas, van Maanen, Leendert, and Friston, Karl J
- Subjects
- ECOLOGICAL niche, FREE energy (Thermodynamics), MUTUALISM, SIMULATION methods & models, MARKOV processes
- Abstract
The free-energy principle is an attempt to explain the structure of the agent and its brain, starting from the fact that an agent exists (Friston and Stephan, 2007; Friston et al., 2010). More specifically, it can be regarded as a systematic attempt to understand the ‘fit’ between an embodied agent and its niche, where the quantity of free-energy is a measure for the ‘misfit’ or disattunement (Bruineberg and Rietveld, 2014) between agent and environment. This paper offers a proof-of-principle simulation of niche construction under the free-energy principle. Agent-centered treatments have so far failed to address situations where environments change alongside agents, often due to the action of agents themselves. The key point of this paper is that the minimum of free-energy is not a point at which the agent is maximally adapted to the statistics of a static environment, but is better conceptualized as an attracting manifold within the joint agent-environment state-space as a whole, which the system tends toward through mutual interaction. We provide a general introduction to active inference and the free-energy principle. Using Markov Decision Processes (MDPs), we then describe a canonical generative model and the ensuing update equations that minimize free-energy. We then apply these equations to simulations of foraging in an environment, in which an agent learns the most efficient path to a pre-specified location. In some of those simulations, unbeknownst to the agent, ‘desire paths’ emerge as a function of the activity of the agent (i.e., niche construction occurs). We show how, depending on the relative inertia of the environment and agent, the joint agent-environment system moves to different attracting sets of jointly minimized free-energy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
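A minimal positive-feedback caricature of a 'desire path' (our own sketch, not the paper's MDP scheme; the choice temperature and the inertia parameter `eta` are invented for illustration): the agent prefers cheaper routes, and routes get cheaper with use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate routes; traversal cost falls with use ('the path gets worn'),
# with the learning rate eta playing the role of environmental inertia.
cost = np.array([1.0, 1.0])
eta = 0.05
counts = np.zeros(2)

for t in range(500):
    p = np.exp(-8 * cost)    # agent prefers the cheaper route...
    p /= p.sum()
    a = rng.choice(2, p=p)   # ...but chooses stochastically
    counts[a] += 1
    # Niche construction: the chosen route is worn toward a lower cost
    cost[a] = (1 - eta) * cost[a] + eta * 0.2

print(counts, cost)  # one route comes to dominate and is 'worn in'
```

The agent and environment settle jointly: neither the agent's preferences nor the environment's costs alone determine the outcome, which loosely mirrors the attracting sets in the joint agent-environment state-space described above.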
27. Deep temporal models and active inference.
- Author
- Friston, Karl J., Rosch, Richard, Parr, Thomas, Price, Cathy, and Bowman, Howard
- Subjects
- INFERENCE (Logic), TEMPORAL integration, FREE energy (Thermodynamics), PHYSIOLOGICAL aspects of reading, NEUROSCIENCES
- Abstract
How do we navigate a deeply structured world? Why are you reading this sentence first – and did you actually look at the fifth word? This review offers some answers by appealing to active inference based on deep temporal models. It builds on previous formulations of active inference to simulate behavioural and electrophysiological responses under hierarchical generative models of state transitions. Inverting these models corresponds to sequential inference, such that the state at any hierarchical level entails a sequence of transitions in the level below. The deep temporal aspect of these models means that evidence is accumulated over nested time scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour with Bayesian belief updating – and neuronal process theories – to simulate the epistemic foraging seen in reading. These simulations reproduce perisaccadic delay period activity and local field potentials seen empirically. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations; reproducing mismatch negativity and P300 responses respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
28. Deep temporal models and active inference.
- Author
- Friston, Karl J., Rosch, Richard, Parr, Thomas, Price, Cathy, and Bowman, Howard
- Subjects
- ELECTROPHYSIOLOGY, BRAIN imaging, NEUROSCIENCES, SEMANTICS, INFERENCE (Logic), SIMULATION methods & models
- Abstract
How do we navigate a deeply structured world? Why are you reading this sentence first – and did you actually look at the fifth word? This review offers some answers by appealing to active inference based on deep temporal models. It builds on previous formulations of active inference to simulate behavioural and electrophysiological responses under hierarchical generative models of state transitions. Inverting these models corresponds to sequential inference, such that the state at any hierarchical level entails a sequence of transitions in the level below. The deep temporal aspect of these models means that evidence is accumulated over nested time scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour with Bayesian belief updating – and neuronal process theories – to simulate the epistemic foraging seen in reading. These simulations reproduce perisaccadic delay period activity and local field potentials seen empirically. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations; reproducing mismatch negativity and P300 responses respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
29. Everything is connected: Inference and attractors in delusions.
- Author
- Adams, Rick A., Vincent, Peter, Benrimoh, David, Friston, Karl J., and Parr, Thomas
- Subjects
- DELUSIONS, INFERENCE (Logic), MARKOV processes, BAYESIAN field theory
- Abstract
Delusions are, by popular definition, false beliefs that are held with certainty and resistant to contradictory evidence. They seem at odds with the notion that the brain at least approximates Bayesian inference. This is especially the case in schizophrenia, a disorder thought to relate to decreased - rather than increased - certainty in the brain's model of the world. We use an active inference Markov decision process model (a Bayes-optimal decision-making agent) to perform a simple task involving social and non-social inferences. We show that even moderate changes in some model parameters - decreasing confidence in sensory input and increasing confidence in states implied by its own (especially habitual) actions - can lead to delusions as defined above. Incorporating affect in the model increases delusions, specifically in the social domain. The model also reproduces some classic psychological effects, including choice-induced preference change, and an optimism bias in inferences about oneself. A key observation is that no change in a single parameter is both necessary and sufficient for delusions; rather, delusions arise due to conditional dependencies that create 'basins of attraction' which trap Bayesian beliefs. Simulating the effects of antidopaminergic antipsychotics - by reducing the model's confidence in its actions - demonstrates that the model can escape from these attractors, through this synthetic pharmacotherapy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
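The claim that no change in a single parameter is both necessary and sufficient can be caricatured with a toy logistic belief update (our own sketch with arbitrary precision values, not the paper's MDP model): only the combination of low sensory precision and high confidence in one's own actions traps the belief.

```python
import numpy as np

def final_belief(like_precision, action_precision, prior=0.9, steps=100):
    """Each trial: evidence against the belief (weighted by sensory precision),
    then a self-confirming boost from trusting one's own habitual actions."""
    b = prior
    for _ in range(steps):
        logit = np.log(b / (1 - b))
        logit += action_precision - like_precision
        b = 1 / (1 + np.exp(-logit))
        b = np.clip(b, 1e-12, 1 - 1e-12)  # numerical guard
    return b

print(final_belief(1.0, 0.2))  # healthy: belief abandoned
print(final_belief(0.3, 0.2))  # low sensory precision alone: still abandoned
print(final_belief(1.0, 0.8))  # high action confidence alone: still abandoned
print(final_belief(0.3, 0.8))  # both together: belief trapped near certainty
```

Only the joint setting creates a self-reinforcing fixed point that contradictory evidence cannot escape, a simple analogue of the 'basins of attraction' described above.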
30. On Markov blankets and hierarchical self-organisation.
- Author
- Palacios, Ensor Rafael, Razi, Adeel, Parr, Thomas, Kirchhoff, Michael, and Friston, Karl
- Subjects
- BLANKETS, BIOLOGICAL systems, ORGANELLES, VARIATIONAL principles, EFFECT of human beings on climate change
- Abstract
• Computational treatment of biological self-organisation.
• Biological self-organisation requires emergence of boundaries, namely Markov blankets.
• Hierarchical self-organisation entails emergence of Markov blankets at multiple scales.
Biological self-organisation can be regarded as a process of spontaneous pattern formation; namely, the emergence of structures that distinguish themselves from their environment. This process can occur at nested spatial scales: from the microscopic (e.g., the emergence of cells) to the macroscopic (e.g., the emergence of organisms). In this paper, we pursue the idea that Markov blankets – that separate the internal states of a structure from external states – can self-assemble at successively higher levels of organisation. Using simulations, based on the principle of variational free energy minimisation, we show that hierarchical self-organisation emerges when the microscopic elements of an ensemble have prior (e.g., genetic) beliefs that they participate in a macroscopic Markov blanket: i.e., they can only influence – or be influenced by – a subset of other elements. Furthermore, the emergent structures look very much like those found in nature (e.g., cells or organelles), when influences are mediated by short range signalling. These simulations are offered as a proof of concept that hierarchical self-organisation of Markov blankets (into Markov blankets) can explain the self-evidencing, autopoietic behaviour of biological systems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
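The statistical sense in which a blanket separates internal from external states can be illustrated with a toy linear-Gaussian example (ours, not the paper's simulations): internal and external states are correlated, yet conditionally independent given the blanket.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
external = rng.normal(size=n)
blanket  = external + 0.5 * rng.normal(size=n)  # senses the external states
internal = blanket + 0.5 * rng.normal(size=n)   # sees only the blanket

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(np.corrcoef(internal, external)[0, 1])      # strongly coupled
print(partial_corr(internal, external, blanket))  # near 0: the blanket screens off
```

This screening-off property is what licenses talk of an 'inside' and an 'outside' at each scale of the hierarchy described above.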
31. Active listening.
- Author
- Friston, Karl J., Sajid, Noor, Quiroga-Martinez, David Ricardo, Parr, Thomas, Price, Cathy J., and Holmes, Emma
- Subjects
- ACTIVE listening, PROSODIC analysis (Linguistics), SPEECH perception, PREDICTIVE validity, TEST validity
- Abstract
This paper introduces active listening, as a unified framework for synthesising and recognising speech. The notion of active listening inherits from active inference, which considers perception and action under one universal imperative: to maximise the evidence for our (generative) models of the world. First, we describe a generative model of spoken words that simulates (i) how discrete lexical, prosodic, and speaker attributes give rise to continuous acoustic signals; and conversely (ii) how continuous acoustic signals are recognised as words. The 'active' aspect involves (covertly) segmenting spoken sentences and borrows ideas from active vision. It casts speech segmentation as the selection of internal actions, corresponding to the placement of word boundaries. Practically, word boundaries are selected that maximise the evidence for an internal model of how individual words are generated. We establish face validity by simulating speech recognition and showing how the inferred content of a sentence depends on prior beliefs and background noise. Finally, we consider predictive validity by associating neuronal or physiological responses, such as the mismatch negativity and P300, with belief updating under active listening, which is greatest in the absence of accurate prior beliefs about what will be heard next.
• Describes a generative model for synthesising and recognising speech.
• Considers speech segmentation (placing word boundaries) as an active process.
• Treats speech segmentation and lexical inferences as complementary.
• Associates neural mismatch responses (e.g., MMN) with belief updating. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
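The 'internal action' of placing word boundaries to maximise model evidence can be caricatured with a toy dynamic-programming segmenter; the lexicon and its log-probabilities below are invented for illustration.

```python
# Toy lexicon: log-probability of each 'word' under a hypothetical model
lexicon = {"act": -1.0, "active": -1.2, "i": -2.5, "v": -5.0, "e": -4.0,
           "listening": -1.1, "listen": -1.5, "ing": -3.0}

def segment(s):
    """Search over word-boundary placements, keeping the segmentation of each
    prefix that maximises total model evidence (summed log-probability)."""
    best = {0: (0.0, [])}
    for j in range(1, len(s) + 1):
        cands = []
        for i in best:
            if i < j and s[i:j] in lexicon:
                score, words = best[i]
                cands.append((score + lexicon[s[i:j]], words + [s[i:j]]))
        if cands:
            best[j] = max(cands)
    return best.get(len(s))

print(segment("activelistening"))  # maximum-evidence parse: ['active', 'listening']
```

Each candidate boundary placement is scored by the evidence it affords, and the best-scoring 'internal action' is selected, which is the flavour of covert segmentation the paper describes.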