10 results for "Parr, Thomas"
Search Results
2. Generating meaning: active inference and the scope and limits of passive AI.
- Author: Pezzulo, Giovanni; Parr, Thomas; Cisek, Paul; Clark, Andy; Friston, Karl
- Subjects: GENERATIVE artificial intelligence; LANGUAGE models; ARTIFICIAL intelligence
- Abstract
Generative artificial intelligence (AI) systems, such as large language models (LLMs), have achieved remarkable performance in various tasks such as text and image generation. We discuss the foundations of generative AI systems by comparing them with our current understanding of living organisms, when seen as active inference systems. Both generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways. Living organisms and active inference agents learn their generative models by engaging in purposive interactions with the environment and by predicting these interactions. This provides them with a core understanding and a sense of mattering, upon which their subsequent knowledge is grounded. Future generative AI systems might follow the same (biomimetic) approach – and learn the affordances implicit in embodied engagement with the world before – or instead of – being trained passively. Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI. [ABSTRACT FROM AUTHOR]
- Published: 2024
3. Reclaiming saliency: Rhythmic precision-modulated action and perception.
- Author: Meera, Ajith Anil; Novicky, Filip; Parr, Thomas; Friston, Karl; Lanillos, Pablo; Sajid, Noor
- Subjects: HEISENBERG uncertainty principle; ATTENTION control; ARTIFICIAL intelligence; COGNITIVE robotics; CONCEPT mapping
- Abstract
Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimization and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimization that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase its advantages for state and noise estimation, system identification and action selection for informative path planning. [ABSTRACT FROM AUTHOR]
- Published: 2022
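The distinction this abstract draws between attention (precision control) and salience (selecting where to sample next so as to minimise uncertainty) can be illustrated with a toy Bayesian calculation. The sketch below is illustrative only: the two-state world and the likelihood matrices are invented for the example, not taken from the paper's model.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a categorical distribution (in nats)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, likelihood):
    """Expected entropy of the posterior over a hidden state after
    sampling a location. `likelihood` maps states to observations
    (rows: observations, columns: states)."""
    p_obs = likelihood @ prior                        # predictive distribution
    h = 0.0
    for o, p_o in enumerate(p_obs):
        if p_o > 0:
            posterior = likelihood[o] * prior / p_o   # Bayes rule
            h += p_o * entropy(posterior)
    return h

# Two candidate fixation targets that differ in sensory precision.
prior = np.array([0.5, 0.5])
precise_location = np.array([[0.9, 0.1], [0.1, 0.9]])
imprecise_location = np.array([[0.55, 0.45], [0.45, 0.55]])
# The salient (precise) location promises the larger reduction in
# posterior uncertainty, so it would be selected for the next sample.
```

Under this reading, salience scores a candidate action by how much it is expected to shrink posterior uncertainty, while attention sets the precision of the likelihood mapping itself.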
4. The evolution of brain architectures for predictive coding and active inference.
- Author: Pezzulo, Giovanni; Parr, Thomas; Friston, Karl
- Subjects: ANIMAL habitations; ANIMAL species; PROBLEM solving; NATURAL selection
- Abstract
This article considers the evolution of brain architectures for predictive processing. We argue that brain mechanisms for predictive perception and action are not late evolutionary additions of advanced creatures like us. Rather, they emerged gradually from simpler predictive loops (e.g. autonomic and motor reflexes) that were a legacy from our earlier evolutionary ancestors--and were key to solving their fundamental problems of adaptive regulation. We characterize simpler-to-more-complex brains formally, in terms of generative models that include predictive loops of increasing hierarchical breadth and depth. These may start from a simple homeostatic motif and be elaborated during evolution in four main ways: these include the multimodal expansion of predictive control into an allostatic loop; its duplication to form multiple sensorimotor loops that expand an animal's behavioural repertoire; and the gradual endowment of generative models with hierarchical depth (to deal with aspects of the world that unfold at different spatial scales) and temporal depth (to select plans in a future-oriented manner). In turn, these elaborations underwrite the solution to biological regulation problems faced by increasingly sophisticated animals. Our proposal aligns neuroscientific theorising--about predictive processing--with evolutionary and comparative data on brain architectures in different animal species. [ABSTRACT FROM AUTHOR]
- Published: 2022
5. Active inference as a theory of sentient behavior.
- Author: Pezzulo, Giovanni; Parr, Thomas; Friston, Karl
- Subjects: ARTIFICIAL intelligence; DIRECT action; SUBLIMINAL perception; PATHOLOGICAL psychology; ROOT development; INFERENCE (Logic); MACHINE learning
- Abstract
This review paper offers an overview of the history and future of active inference—a unifying perspective on action and perception. Active inference is based upon the idea that sentient behavior depends upon our brains' implicit use of internal models to predict, infer, and direct action. Our focus is upon the conceptual roots and development of this theory of (basic) sentience and does not follow a rigid chronological narrative. We trace the evolution from Helmholtzian ideas on unconscious inference, through to a contemporary understanding of action and perception. In doing so, we touch upon related perspectives, the neural underpinnings of active inference, and the opportunities for future development. Key steps in this development include the formulation of predictive coding models and related theories of neuronal message passing, the use of sequential models for planning and policy optimization, and the importance of hierarchical (temporally) deep internal (i.e., generative or world) models. Active inference has been used to account for aspects of anatomy and neurophysiology, to offer theories of psychopathology in terms of aberrant precision control, and to unify extant psychological theories. We anticipate further development in all these areas and note the exciting early work applying active inference beyond neuroscience. This suggests a future not just in biology, but in robotics, machine learning, and artificial intelligence.
• We review the history and future of active inference: a unifying perspective on action and perception.
• We discuss the conceptual roots of active inference, its current status and promising future directions.
• We highlight the importance of the brain's generative models to predict, infer, and direct action.
• We critically discuss how active inference can unify extant psychological theories. [ABSTRACT FROM AUTHOR]
- Published: 2024
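The central quantity this review builds on, variational free energy, has a standard form in the active inference literature. The complexity/accuracy decomposition below is the textbook version, stated here for orientation rather than quoted from this particular review:

```latex
% Variational free energy of an approximate posterior q(s), given observations o
% and a generative model p(o, s):
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\middle\|\,p(s)\right]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}
% Equivalently, F = D_KL[q(s) || p(s|o)] - ln p(o) >= -ln p(o), so minimising F
% drives q(s) toward the true posterior while maximising a lower bound on
% model evidence; planning extends this with an expected free energy over policies.
```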
6. Active inference, selective attention, and the cocktail party problem.
- Author: Holmes, Emma; Parr, Thomas; Griffiths, Timothy D.; Friston, Karl J.
- Subjects: SELECTIVITY (Psychology); COCKTAIL parties; EVOKED potentials (Electrophysiology); ATTENTION
- Abstract
• New generative model for selective attention during cocktail party listening.
• Computational 'lesions' in the model dissociate different errors during word report.
• We model different temporal hypotheses for preparatory attention.
• Temporal changes in precision are necessary to explain ERPs but not reaction times.
• CNV-like responses can be explained by subjective precision rather than action.
In this paper, we introduce a new generative model for an active inference account of preparatory and selective attention, in the context of a classic 'cocktail party' paradigm. In this setup, pairs of words are presented simultaneously to the left and right ears and an instructive spatial cue directs attention to the left or right. We use this generative model to test competing hypotheses about the way that human listeners direct preparatory and selective attention. We show that assigning low precision to words at attended—relative to unattended—locations can explain why a listener reports words from a competing sentence. Under this model, temporal changes in sensory precision were not needed to account for faster reaction times with longer cue-target intervals, but were necessary to explain ramping effects on event-related potentials (ERPs)—resembling the contingent negative variation (CNV)—during the preparatory interval. These simulations reveal that different processes are likely to underlie the improvement in reaction times and the ramping of ERPs that are associated with spatial cueing. [ABSTRACT FROM AUTHOR]
- Published: 2021
7. Understanding, Explanation, and Active Inference.
- Author: Parr, Thomas; Pezzulo, Giovanni
- Subjects: PROBLEM solving; ARTIFICIAL intelligence; DECISION making; MACHINE learning
- Abstract
While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machine is able to explain its behaviour (i.e., explainable AI). In this article, we address the issue of machine understanding from the perspective of active inference. This paradigm enables decision making based upon a model of how data are generated. The generative model contains those variables required to explain sensory data, and its inversion may be seen as an attempt to explain the causes of these data. Here we are interested in explanations of one's own actions. This implies a deep generative model that includes a model of the world, used to infer policies, and a higher-level model that attempts to predict which policies will be selected based upon a space of hypothetical (i.e., counterfactual) explanations—and which can subsequently be used to provide (retrospective) explanations about the policies pursued. We illustrate the construct validity of this notion of understanding in relation to human understanding by highlighting the similarities in computational architecture and the consequences of its dysfunction. [ABSTRACT FROM AUTHOR]
- Published: 2021
8. Federated inference and belief sharing.
- Author: Friston, Karl J.; Parr, Thomas; Heins, Conor; Constant, Axel; Friedman, Daniel; Isomura, Takuya; Fields, Chris; Verbelen, Tim; Ramstead, Maxwell; Clippinger, John; Frith, Christopher D.
- Subjects: FEDERATED learning; ACTIVE learning; LANGUAGE acquisition; SPEECH
- Abstract
This paper concerns the distributed intelligence or federated inference that emerges under belief-sharing among agents who share a common world—and world model. Imagine, for example, several animals keeping a lookout for predators. Their collective surveillance rests upon being able to communicate their beliefs—about what they see—among themselves. But, how is this possible? Here, we show how all the necessary components arise from minimising free energy. We use numerical studies to simulate the generation, acquisition and emergence of language in synthetic agents. Specifically, we consider inference, learning and selection as minimising the variational free energy of posterior (i.e., Bayesian) beliefs about the states, parameters and structure of generative models, respectively. The common theme—that attends these optimisation processes—is the selection of actions that minimise expected free energy, leading to active inference, learning and model selection (a.k.a., structure learning). We first illustrate the role of communication in resolving uncertainty about the latent states of a partially observed world, on which agents have complementary perspectives. We then consider the acquisition of the requisite language—entailed by a likelihood mapping from an agent's beliefs to their overt expression (e.g., speech)—showing that language can be transmitted across generations by active learning. Finally, we show that language is an emergent property of free energy minimisation, when agents operate within the same econiche. We conclude with a discussion of various perspectives on these phenomena; ranging from cultural niche construction, through federated learning, to the emergence of complexity in ensembles of self-organising systems.
• Communication—and language in particular—is an emergent property of agents that seek evidence for generative models of their shared world.
• Nested free energy minimising—evidence maximising—processes explain the emergence of language and its transmission over generations.
• Reading these processes as inference integrates perspectives on communication; from generalised synchrony to cultural niche construction. [ABSTRACT FROM AUTHOR]
- Published: 2024
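A minimal numerical intuition for why belief sharing pays off: if two agents hold independent noisy posteriors over the same hidden state, Bayesian fusion (a normalised product) yields a sharper shared belief. The agents, state space, and pooling rule below are a toy sketch under that assumption, not the paper's simulations:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a categorical distribution (in nats)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def share_beliefs(posteriors):
    """Fuse categorical posteriors by a normalised product: Bayesian
    pooling under independent observations and a flat prior."""
    pooled = np.prod(posteriors, axis=0)
    return pooled / pooled.sum()

# Two lookouts with noisy, complementary views of one hidden state
# (e.g. predator present vs absent).
agent_a = np.array([0.6, 0.4])
agent_b = np.array([0.7, 0.3])
pooled = share_beliefs([agent_a, agent_b])
# The shared posterior is sharper (lower entropy) than either
# individual belief: communicating resolves residual uncertainty.
```

The same pooling logic extends to more agents and more states; what the paper adds is learning the likelihood mapping (the "language") through which those beliefs get expressed and heard.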
9. Cognitive effort and active inference.
- Author: Parr, Thomas; Holmes, Emma; Friston, Karl J.; Pezzulo, Giovanni
- Subjects: EXECUTIVE function; CONTROL (Psychology); STROOP effect; COGNITIVE neuroscience; COGNITIVE ability; NEUROPSYCHOLOGY
- Abstract
This paper aims to integrate some key constructs in the cognitive neuroscience of cognitive control and executive function by formalising the notion of cognitive (or mental) effort in terms of active inference. To do so, we call upon a task used in neuropsychology to assess impulse inhibition—a Stroop task. In this task, participants must suppress the impulse to read a colour word and instead report the colour of the text of the word. The Stroop task is characteristically effortful, and we unpack a theory of mental effort in which, to perform this task accurately, participants must overcome prior beliefs about how they would normally act. However, our interest here is not in overt action, but in covert (mental) action. Mental actions change our beliefs but have no (direct) effect on the outside world—much like deploying covert attention. This account of effort as mental action lets us generate multimodal (choice, reaction time, and electrophysiological) data of the sort we might expect from a human participant engaging in this task. We analyse how parameters determining cognitive effort influence simulated responses and demonstrate that—when provided only with performance data—these parameters can be recovered, provided they are within a certain range.
• This paper offers a formalisation of 'cognitive effort' under the active inference framework.
• Cognitive effort is formulated as a deviation from prior beliefs about mental (covert) action—i.e., effort is exerted to overcome a mental habit.
• A computational model of the Stroop task—a characteristically effortful task—is developed to illustrate this notion of effort.
• We demonstrate that it is possible to recover combinations of effort-related model parameters from simulated data. [ABSTRACT FROM AUTHOR]
- Published: 2023
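The abstract's notion of effort as overcoming prior beliefs about mental action suggests one simple summary quantity: the divergence between the belief the task requires and a habitual prior. The numbers and the KL-based reading below are an illustrative assumption of this listing, not the paper's exact formulation:

```python
import numpy as np

def kl(q, p):
    """KL divergence between two categorical distributions (in nats)."""
    return float(np.sum(q * np.log(q / p)))

# Hypothetical beliefs over two mental actions in a Stroop-like setting:
# index 0 = 'read the word', index 1 = 'name the ink colour'.
habit = np.array([0.8, 0.2])      # prior: reading dominates by default
required = np.array([0.1, 0.9])   # belief the task demands
effort = kl(required, habit)      # divergence from the mental habit
# A congruent trial (required close to habit) would score near-zero
# effort; an incongruent trial scores high, matching the classic
# Stroop asymmetry in difficulty.
```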
10. Everything is connected: Inference and attractors in delusions.
- Author: Adams, Rick A.; Vincent, Peter; Benrimoh, David; Friston, Karl J.; Parr, Thomas
- Subjects: DELUSIONS; INFERENCE (Logic); MARKOV processes; BAYESIAN field theory
- Abstract
Delusions are, by popular definition, false beliefs that are held with certainty and resistant to contradictory evidence. They seem at odds with the notion that the brain at least approximates Bayesian inference. This is especially the case in schizophrenia, a disorder thought to relate to decreased - rather than increased - certainty in the brain's model of the world. We use an active inference Markov decision process model (a Bayes-optimal decision-making agent) to perform a simple task involving social and non-social inferences. We show that even moderate changes in some model parameters - decreasing confidence in sensory input and increasing confidence in states implied by its own (especially habitual) actions - can lead to delusions as defined above. Incorporating affect in the model increases delusions, specifically in the social domain. The model also reproduces some classic psychological effects, including choice-induced preference change, and an optimism bias in inferences about oneself. A key observation is that no change in a single parameter is both necessary and sufficient for delusions; rather, delusions arise due to conditional dependencies that create 'basins of attraction' which trap Bayesian beliefs. Simulating the effects of antidopaminergic antipsychotics - by reducing the model's confidence in its actions - demonstrates that the model can escape from these attractors, through this synthetic pharmacotherapy. [ABSTRACT FROM AUTHOR]
- Published: 2022
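One of the parameter changes this abstract mentions, decreased confidence in sensory input, can be caricatured in a two-line Bayes update: raising the likelihood to a precision exponent below one attenuates evidence, so contradictory observations barely move an entrenched belief. This is a generic sketch of precision-weighting, not the paper's Markov decision process model:

```python
import numpy as np

def update(prior, likelihood, sensory_precision):
    """Bayes-like update with the likelihood raised to a precision
    exponent: precision < 1 discounts (attenuates) the evidence."""
    post = prior * likelihood ** sensory_precision
    return post / post.sum()

prior = np.array([0.9, 0.1])      # entrenched belief about the world
evidence = np.array([0.2, 0.8])   # likelihood of a contradictory observation
normal = update(prior, evidence, sensory_precision=1.0)
attenuated = update(prior, evidence, sensory_precision=0.2)
# With full confidence in the senses the belief shifts substantially;
# with low sensory precision it stays trapped near the prior, the
# single-parameter caricature of a belief resistant to evidence.
```

In the paper itself no single parameter is necessary and sufficient; delusion-like attractors arise from conditional dependencies among several such parameters.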
Discovery Service for Jio Institute Digital Library