33 results for '"Sajid, Noor"'
Search Results
2. On efficient computation in active inference
- Author
-
Paul, Aswin, Sajid, Noor, Da Costa, Lancelot, and Razi, Adeel
- Published
- 2024
- Full Text
- View/download PDF
3. Degeneracy in the neurological model of auditory speech repetition
- Author
-
Sajid, Noor, Gajardo-Vidal, Andrea, Ekert, Justyna O., Lorca-Puls, Diego L., Hope, Thomas M. H., Green, David W., Friston, Karl J., and Price, Cathy J.
- Published
- 2023
- Full Text
- View/download PDF
4. The free energy principle made simpler but not too simple
- Author
-
Friston, Karl, Da Costa, Lancelot, Sajid, Noor, Heins, Conor, Ueltzhöffer, Kai, Pavliotis, Grigorios A., and Parr, Thomas
- Published
- 2023
- Full Text
- View/download PDF
5. Active inference on discrete state-spaces: A synthesis
- Author
-
Da Costa, Lancelot, Parr, Thomas, Sajid, Noor, Veselic, Sebastijan, Neacsu, Victorita, and Friston, Karl
- Published
- 2020
- Full Text
- View/download PDF
6. Generative models, linguistic communication and active inference
- Author
-
Friston, Karl J., Parr, Thomas, Yufik, Yan, Sajid, Noor, Price, Catherine J., and Holmes, Emma
- Published
- 2020
- Full Text
- View/download PDF
7. Simulating lesion-dependent functional recovery mechanisms
- Author
-
Sajid, Noor, Holmes, Emma, Hope, Thomas M., Fountas, Zafeirios, Price, Cathy J., and Friston, Karl J.
- Published
- 2021
- Full Text
- View/download PDF
8. Bistable perception, precision and neuromodulation.
- Author
-
Novicky, Filip, Parr, Thomas, Friston, Karl, Mirza, Muammer Berk, and Sajid, Noor
- Published
- 2024
- Full Text
- View/download PDF
9. On efficient computation in active inference
- Author
-
Paul, Aswin, Sajid, Noor, Da Costa, Lancelot, and Razi, Adeel
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, FOS: Biological sciences, Quantitative Biology - Neurons and Cognition, Neurons and Cognition (q-bio.NC), Machine Learning (cs.LG)
- Abstract
Despite being recognized as neurobiologically plausible, active inference faces difficulties when employed to simulate intelligent behaviour in complex environments due to its computational cost and the difficulty of specifying an appropriate target distribution for the agent. This paper introduces two solutions that work in concert to address these limitations. First, we present a novel planning algorithm for finite temporal horizons with drastically lower computational complexity. Second, inspired by Z-learning from control theory literature, we simplify the process of setting an appropriate target distribution for new and existing active inference planning schemes. Our first approach leverages the dynamic programming algorithm, known for its computational efficiency, to minimize the cost function used in planning through the Bellman-optimality principle. Accordingly, our algorithm recursively assesses the expected free energy of actions in the reverse temporal order. This improves computational efficiency by orders of magnitude and allows precise model learning and planning, even under uncertain conditions. Our method simplifies the planning process and shows meaningful behaviour even when specifying only the agent's final goal state. The proposed solutions make defining a target distribution from a goal state straightforward compared to the more complicated task of defining a temporally informed target distribution. The effectiveness of these methods is tested and demonstrated through simulations in standard grid-world tasks. These advances create new opportunities for various applications. Comment: 23 pages, 7 figures. Project repo: https://github.com/aswinpaul/dpefe_2023
- Published
- 2023
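The backward recursion this abstract describes can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the line-world transition model, the surprise-style cost, and all variable names are assumptions; only the idea of recursively evaluating expected cost in reverse temporal order, with a target defined solely by a final goal state, comes from the abstract.

```python
import numpy as np

# Hypothetical toy MDP: states 0..3 on a line; actions 0 = left, 1 = right.
n_states, n_actions, horizon = 4, 2, 5

def step(s, a):
    """Deterministic toy transition: move left or right along the line."""
    return max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)

# Target distribution specified only by the final goal state, as in the
# paper's simplified goal specification; cost is a surprise proxy.
goal = 3
cost = lambda s: 0.0 if s == goal else 1.0

# Backward (Bellman-style) recursion: G[t][s] is the expected cumulative
# cost-to-go from state s at time t, assessed in reverse temporal order.
G = np.zeros((horizon + 1, n_states))
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    for s in range(n_states):
        q = [cost(step(s, a)) + G[t + 1][step(s, a)] for a in range(n_actions)]
        policy[t, s] = int(np.argmin(q))
        G[t, s] = min(q)

# From state 0, the recursion selects 'right' until the goal is reached.
```

Each state-action pair is visited once per time step, so the cost of planning grows linearly in the horizon rather than exponentially in the number of action sequences, which is the efficiency gain the abstract refers to.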
10. Bistable perception, precision and neuromodulation
- Author
-
Novicky, Filip, Parr, Thomas, Friston, Karl, Mirza, M. Berk, and Sajid, Noor
- Subjects
FOS: Biological sciences, Quantitative Biology - Neurons and Cognition, Neurons and Cognition (q-bio.NC)
- Abstract
Bistable perception follows from observing a static, ambiguous (visual) stimulus with two possible interpretations. Here, we present an active (Bayesian) inference account of bistable perception and posit that perceptual transitions between different interpretations (i.e., inferences) of the same stimulus ensue from specific eye movements that shift the focus to a different visual feature. Formally, these inferences are a consequence of precision control that determines how confident beliefs are and changes the frequency with which one can perceive - and alternate between - two distinct percepts. We hypothesised that there are multiple, but distinct, ways in which precision modulation can interact to give rise to a similar frequency of bistable perception. We validated this using numerical simulations of the Necker cube paradigm and demonstrate the multiple routes that underwrite the frequency of perceptual alternation. Our results provide an (enactive) computational account of the intricate precision balance underwriting bistable perception. Importantly, these precision parameters can be considered the computational homologues of particular neurotransmitters - i.e., acetylcholine, noradrenaline, dopamine - that have been previously implicated in controlling bistable perception, providing a computational link between the neurochemistry and perception.
- Published
- 2022
11. Proceedings of the 2nd Applied Active Inference Symposium on 'Robotics', at the Active Inference Institute
- Author
-
Friston, Karl, Schneider, Tim, Verbelen, Tim, White, Ben, Sajid, Noor, Chen, Wen-Hua, Lara, Bruno, Brown, Matt, Safron, Adam, Cloutier, JF, Smekal, Jakub, Knight, Bleu, and Friedman, Daniel
- Subjects
Free Energy Principle, Active Inference, Robotics
- Abstract
This is a transcript of the proceedings of the 2nd Applied Active Inference Symposium. There are two Symposium sessions of four hours each, starting at 4 & 16 UTC on July 31st. Watch link for 1st Symposium interval, July 31st at 4 UTC: https://youtu.be/zm2d9o5n0PU Watch link for 2nd Symposium interval, July 31st at 16 UTC: https://youtu.be/dTVHHenms_Y The Program is available at: https://coda.io/@active-inference-institute/2nd-applied-active-inference-symposium
- Published
- 2022
- Full Text
- View/download PDF
12. Modelling non-reinforced preferences using selective attention
- Author
-
Sajid, Noor, Tigas, Panagiotis, Fountas, Zafeirios, Guo, Qinghai, Zakharov, Alexey, and Da Costa, Lancelot
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Machine Learning (cs.LG)
- Abstract
How can artificial agents learn non-reinforced preferences to continuously adapt their behaviour to a changing environment? We decompose this question into two challenges: (i) encoding diverse memories and (ii) selectively attending to these for preference formation. Our proposed non-reinforced preference learning mechanism using selective attention, Nore, addresses both by leveraging the agent's world model to collect a diverse set of experiences which are interleaved with imagined roll-outs to encode memories. These memories are selectively attended to, using attention and gating blocks, to update the agent's preferences. We validate Nore in a modified OpenAI Gym FrozenLake environment (without any external signal), with and without volatility, under a fixed model of the environment, and compare its behaviour to Pepper, a Hebbian preference learning mechanism. We demonstrate that Nore provides a straightforward framework to induce exploratory preferences in the absence of external signals. Comment: 4 pages, 3 figures. Workshop Track: 1st Conference on Lifelong Learning Agents, 2022
- Published
- 2022
13. Reward Maximization Through Discrete Active Inference.
- Author
-
Da Costa, Lancelot, Sajid, Noor, Parr, Thomas, Friston, Karl, and Smith, Ryan
- Subjects
MARKOV processes, BIOLOGICAL models, ARTIFICIAL membranes
- Abstract
Active inference is a probabilistic framework for modeling the behavior of biological and artificial agents, which derives from the principle of minimizing free energy. In recent years, this framework has been applied successfully to a variety of situations where the goal was to maximize reward, often offering comparable and sometimes superior performance to alternative approaches. In this article, we clarify the connection between reward maximization and active inference by demonstrating how and when active inference agents execute actions that are optimal for maximizing reward. Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control. On partially observed Markov decision processes, the standard active inference scheme can produce Bellman optimal actions for planning horizons of 1 but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce Bellman optimal actions on any finite temporal horizon. We append the analysis with a discussion of the broader relationship between active inference and reinforcement learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
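The abstract's horizon-1 versus finite-horizon distinction can be illustrated with a deliberately minimal, fully observed toy (the abstract concerns partially observed processes; the reward table, transition table, and function names below are invented for illustration): one-step lookahead grabs a small immediate reward, while a recursive, sophisticated-inference-style backup recovers the Bellman-optimal action.

```python
# Hypothetical deterministic MDP. From state 0, action 'L' yields an
# immediate reward of 1 and dead-ends; 'R' yields nothing now but leads
# to a reward of 10 one step later.
R = {(0, 'L'): 1.0, (0, 'R'): 0.0, (1, 'L'): 0.0, (1, 'R'): 0.0,
     (2, 'L'): 0.0, (2, 'R'): 10.0}
T = {(0, 'L'): 1, (0, 'R'): 2, (1, 'L'): 1, (1, 'R'): 1,
     (2, 'L'): 2, (2, 'R'): 3}  # state 3 is absorbing (no actions listed)

def plan(s, h):
    """Recursive evaluation: back up the best achievable future value,
    returning (best value, best action) for state s with h steps left."""
    if h == 0 or (s, 'L') not in T:
        return 0.0, None
    vals = {a: R[(s, a)] + plan(T[(s, a)], h - 1)[0] for a in ('L', 'R')}
    best = max(vals, key=vals.get)
    return vals[best], best

# Horizon 1 is greedy: it picks 'L' for the immediate reward.
assert plan(0, 1)[1] == 'L'
# The horizon-2 recursion satisfies the Bellman backup and picks 'R'.
assert plan(0, 2)[1] == 'R'
```

The point mirrors the abstract's claim: a scheme that only evaluates one step ahead can be Bellman optimal at horizon 1, but recovering optimality on longer horizons requires recursing over the best future action, as sophisticated inference does.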
14. Bayesian brains and the Rényi divergence
- Author
-
Sajid, Noor, Faccio, Francesco, Da Costa, Lancelot, Parr, Thomas, Schmidhuber, Jürgen, and Friston, Karl
- Subjects
Computer Science - Artificial Intelligence, Quantitative Biology - Neurons and Cognition
- Abstract
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioural preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioural variability using Rényi divergences and their associated variational bounds. Rényi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioural differences through an α parameter, given fixed priors. This rests on changes in α that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behaviour. Thus, it looks as if individuals have different priors, and have reached different conclusions. More specifically, α → 0+ optimisation leads to mass-covering variational estimates and increased variability in choice behaviour. Furthermore, α → +∞ optimisation leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multi-armed bandit task. We note that these α parameterisations may be especially relevant, i.e., shape preferences, when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios. The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in behavioural preferences of biological (or artificial) agents under the assumption that the brain performs variational Bayesian inference. Comment: 23 pages, 5 figures
- Published
- 2021
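The α-dependent behaviour this abstract describes can be checked numerically. A minimal sketch, under assumed inputs: the bimodal "true" posterior p and the two candidate approximations q are invented for illustration, and only the discrete form of the Rényi divergence and the mass-covering/mass-seeking limits come from the abstract.

```python
import numpy as np

def renyi(q, p, alpha):
    """Rényi divergence D_alpha(q || p) for discrete distributions:
    log(sum_i q_i^alpha * p_i^(1 - alpha)) / (alpha - 1)."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    return np.log(np.sum(q ** alpha * p ** (1 - alpha))) / (alpha - 1)

p = np.array([0.49, 0.02, 0.49])       # bimodal "true" posterior (assumed)
q_cover = np.array([1/3, 1/3, 1/3])    # spreads mass over both modes
q_seek = np.array([0.98, 0.01, 0.01])  # concentrates on a single mode

# Small alpha (toward 0+) favours the mass-covering approximation ...
assert renyi(q_cover, p, 0.1) < renyi(q_seek, p, 0.1)
# ... while large alpha favours the mass-seeking, "greedy" one.
assert renyi(q_seek, p, 10.0) < renyi(q_cover, p, 10.0)
```

With fixed p and fixed candidate posteriors, changing α alone flips which approximation scores better, which is the paper's point: behavioural variability can arise from the bound rather than from different priors.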
15. Is Social Media to Blame for the Sharp Rise in STDs?
- Author
-
Carl Enomoto, Sajid Noor, and Benjamin Widner
- Subjects
STDs, social media, Match.com, OKCupid, Down Dating, social networking, Social Sciences
- Abstract
Rhode Island, New Zealand, and southern California recently reported sharp increases in sexually transmitted diseases (STDs). Health department officials stated that these increases appeared to be due to the more widespread use of social media like Tinder, Grindr, and Facebook, which allow users to readily connect with and meet others. The purpose of this study was to see if U.S. states that have more users of social networking sites, dating sites, and dating apps like Match.com, Ashley Madison, Our Time, Down Dating, Bumble, Zoosk, Hinge, Score, At First Sight, Plenty of Fish, Eharmony, Adult Friend Finder, Tinder, Grindr, and Facebook have more cases of STDs after controlling for population, race, age, income, education, and population density. It was found that states with more users of Match.com, OKCupid, and Down Dating had a larger number of cases of STDs, while states with more users of Our Time, Ashley Madison, Facebook, How About We, Hinge, Adult Friend Finder, Grindr, Bumble, Score, Tinder, and At First Sight had fewer cases of STDs. While social networking sites make it easier for individuals to be exposed to an STD since in-network individuals may share an STD, many sites either attract individuals who are not interested in a short-term sexual relationship or who take precautions to avoid contracting an STD.
- Published
- 2017
- Full Text
- View/download PDF
16. Reclaiming saliency: Rhythmic precision-modulated action and perception.
- Author
-
Meera, Ajith Anil, Novicky, Filip, Parr, Thomas, Friston, Karl, Lanillos, Pablo, and Sajid, Noor
- Subjects
HEISENBERG uncertainty principle, ATTENTION control, ARTIFICIAL intelligence, COGNITIVE robotics, CONCEPT mapping
- Abstract
Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimization and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimization that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase its advantages for state and noise estimation, system identification and action selection for informative path planning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Bayesian Brains and the Rényi Divergence.
- Author
-
Sajid, Noor, Faccio, Francesco, Da Costa, Lancelot, Parr, Thomas, Schmidhuber, Jürgen, and Friston, Karl
- Subjects
CONSTRAINED optimization, BAYESIAN field theory, PARAMETERIZATION
- Abstract
Under the Bayesian brain hypothesis, behavioral variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioral preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioral variability using Rényi divergences and their associated variational bounds. Rényi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioral differences through an α parameter, given fixed priors. This rests on changes in α that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behavior. Thus, it looks as if individuals have different priors and have reached different conclusions. More specifically, α → 0+ optimization constrains the variational posterior to be positive whenever the true posterior is positive. This leads to mass-covering variational estimates and increased variability in choice behavior. Furthermore, α → +∞ optimization constrains the variational posterior to be zero whenever the true posterior is zero. This leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multiarmed bandit task. We note that these α parameterizations may be especially relevant (i.e., shape preferences) when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios.
The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in behavioral preferences of biological (or artificial) agents under the assumption that the brain performs variational Bayesian inference. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Reward Maximisation through Discrete Active Inference
- Author
-
Da Costa, Lancelot, Sajid, Noor, Parr, Thomas, Friston, Karl, and Smith, Ryan
- Subjects
FOS: Computer and information sciences, Computer Science::Machine Learning, Artificial Intelligence (cs.AI), Optimization and Control (math.OC), Computer Science - Artificial Intelligence, FOS: Biological sciences, Quantitative Biology - Neurons and Cognition, FOS: Mathematics, Neurons and Cognition (q-bio.NC), ComputingMethodologies_ARTIFICIALINTELLIGENCE, Mathematics - Optimization and Control
- Abstract
Active inference is a probabilistic framework for modelling the behaviour of biological and artificial agents, which derives from the principle of minimising free energy. In recent years, this framework has successfully been applied to a variety of situations where the goal was to maximise reward, offering comparable and sometimes superior performance to alternative approaches. In this paper, we clarify the connection between reward maximisation and active inference by demonstrating how and when active inference agents perform actions that are optimal for maximising reward. Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control. On partially observed Markov decision processes, the standard active inference scheme can produce Bellman optimal actions for planning horizons of 1, but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce Bellman optimal actions on any finite temporal horizon. We append the analysis with a discussion of the broader relationship between active inference and reinforcement learning. Comment: 18 pages, 3 figures (main text); 37 pages including references and appendix
- Published
- 2020
19. Intramolecular epistasis and the evolution of a new enzymatic function.
- Author
-
Sajid Noor, Matthew C Taylor, Robyn J Russell, Lars S Jermiin, Colin J Jackson, John G Oakeshott, and Colin Scott
- Subjects
Medicine, Science
- Abstract
Atrazine chlorohydrolase (AtzA) and its close relative melamine deaminase (TriA) differ by just nine amino acid substitutions but have distinct catalytic activities. Together, they offer an informative model system to study the molecular processes that underpin the emergence of new enzymatic function. Here we have constructed the potential evolutionary trajectories between AtzA and TriA, and characterized the catalytic activities and biophysical properties of the intermediates along those trajectories. The order in which the nine amino acid substitutions that separate the enzymes could be introduced to either enzyme, while maintaining significant catalytic activity, was dictated by epistatic interactions, principally between three amino acids within the active site: namely, S331C, N328D and F84L. The mechanistic basis for the epistatic relationships is consistent with a model for the catalytic mechanisms in which protonation is required for hydrolysis of melamine, but not atrazine.
- Published
- 2012
- Full Text
- View/download PDF
20. How Active Inference Could Help Revolutionise Robotics.
- Author
-
Da Costa, Lancelot, Lanillos, Pablo, Sajid, Noor, Friston, Karl, and Khan, Shujhat
- Subjects
ROBOTICS, MATHEMATICAL functions, BAYESIAN field theory
- Abstract
Recent advances in neuroscience have characterised brain function using mathematical formalisms and first principles that may be usefully applied elsewhere. In this paper, we explain how active inference—a well-known description of sentient behaviour from neuroscience—can be exploited in robotics. In short, active inference leverages the processes thought to underwrite human behaviour to build effective autonomous systems. These systems show state-of-the-art performance in several robotics settings; we highlight these and explain how this framework may be used to advance robotics. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Predicting Language Recovery after Stroke with Convolutional Networks on Stitched MRI
- Author
-
Roohani, Yusuf H., Sajid, Noor, Madhyastha, Pranava, Price, Cathy J., and Hope, Thomas M. H.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
One third of stroke survivors have language difficulties. Emerging evidence suggests that their likelihood of recovery depends mainly on the damage to language centers. Thus, previous research for predicting language recovery post-stroke has focused on identifying damaged regions of the brain. In this paper, we introduce a novel method where we only make use of stitched 2-dimensional cross-sections of raw MRI scans in a deep convolutional neural network setup to predict language recovery post-stroke. Our results show: a) the proposed model that only uses MRI scans has comparable performance to models that are dependent on lesion specific information; b) the features learned by our model are complementary to the lesion specific information and the combination of both appears to outperform previously reported results in similar settings. We further analyse the CNN model for understanding regions in the brain that are responsible for arriving at these predictions using gradient based saliency maps. Our findings are in line with previous lesion studies. Comment: Machine Learning for Health (ML4H) Workshop at NeurIPS 2018. arXiv:1811.07216
- Published
- 2018
22. Generative Models for Active Vision.
- Author
-
Parr, Thomas, Sajid, Noor, Da Costa, Lancelot, Mirza, M. Berk, and Friston, Karl J.
- Subjects
EYE movements, VISION, PROPRIOCEPTION, SENSES
- Abstract
The active visual system comprises the visual cortices, cerebral attention networks, and oculomotor system. While fascinating in its own right, it is also an important model for sensorimotor networks in general. A prominent approach to studying this system is active inference—which assumes the brain makes use of an internal (generative) model to predict proprioceptive and visual input. This approach treats action as ensuring sensations conform to predictions (i.e., by moving the eyes) and posits that visual percepts are the consequence of updating predictions to conform to sensations. Under active inference, the challenge is to identify the form of the generative model that makes these predictions—and thus directs behavior. In this paper, we provide an overview of the generative models that the brain must employ to engage in active vision. This means specifying the processes that explain retinal cell activity and proprioceptive information from oculomotor muscle fibers. In addition to the mechanics of the eyes and retina, these processes include our choices about where to move our eyes. These decisions rest upon beliefs about salient locations, or the potential for information gain and belief-updating. A key theme of this paper is the relationship between "looking" and "seeing" under the brain's implicit generative model of the visual world. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
23. Active Inference: Demystified and Compared.
- Author
-
Sajid, Noor, Ball, Philip J., Parr, Thomas, and Friston, Karl J.
- Abstract
Active inference is a first principle account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also considered in reinforcement learning, but limited work exists on comparing the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI gym baseline. We begin by providing a condensed overview of the active inference literature, in particular viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that by operating in a pure belief-based setting, active inference agents can carry out epistemic exploration—and account for uncertainty about their environment—in a Bayes-optimal fashion. Furthermore, we show that the reliance on an explicit reward signal in reinforcement learning is removed in active inference, where reward can simply be treated as another observation we have a preference over; even in the total absence of rewards, agent behaviors are learned through preference learning. We make these properties explicit by showing two scenarios in which active inference agents can infer behaviors in reward-free environments compared to both Q-learning and Bayesian model-based reinforcement learning agents and by placing zero prior preferences over rewards and learning the prior preferences over the observations corresponding to reward. We conclude by noting that this formalism can be applied to more complex settings (e.g., robotic arm movement, Atari games) if appropriate generative models can be formulated. 
In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and time formulation and demonstrate these behaviors in an OpenAI gym environment, alongside reinforcement learning agents. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. Degeneracy and Redundancy in Active Inference.
- Author
-
Sajid, Noor, Parr, Thomas, Hope, Thomas M, Price, Cathy J, and Friston, Karl J
- Published
- 2020
- Full Text
- View/download PDF
25. Intramolecular epistasis and the evolution of a new enzymatic function
- Author
-
Lars S. Jermiin, John G. Oakeshott, Sajid Noor, Colin Scott, Robyn J. Russell, Matthew C. Taylor, and Colin J. Jackson
- Subjects
Models, Molecular, Evolutionary Processes, Heredity, Hydrolases, lcsh:Medicine, Protonation, Forms of Evolution, Biochemistry, Evolution, Molecular, Aminohydrolases, Molecular evolution, Catalytic Domain, Pseudomonas, Enzyme Stability, Genetics, Atrazine chlorohydrolase, Transition Temperature, Microevolution, Amino Acids, Adaptation, lcsh:Science, Biology, Enzyme Kinetics, chemistry.chemical_classification, Evolutionary Biology, Evolutionary Theory, Multidisciplinary, biology, lcsh:R, Active site, Epistasis, Genetic, Enzymes, Amino acid, Kinetics, Enzyme, Amino Acid Substitution, chemistry, Enzyme Structure, Epistasis, biology.protein, lcsh:Q, Function (biology), Research Article
- Abstract
Atrazine chlorohydrolase (AtzA) and its close relative melamine deaminase (TriA) differ by just nine amino acid substitutions but have distinct catalytic activities. Together, they offer an informative model system to study the molecular processes that underpin the emergence of new enzymatic function. Here we have constructed the potential evolutionary trajectories between AtzA and TriA, and characterized the catalytic activities and biophysical properties of the intermediates along those trajectories. The order in which the nine amino acid substitutions that separate the enzymes could be introduced to either enzyme, while maintaining significant catalytic activity, was dictated by epistatic interactions, principally between three amino acids within the active site: namely, S331C, N328D and F84L. The mechanistic basis for the epistatic relationships is consistent with a model for the catalytic mechanisms in which protonation is required for hydrolysis of melamine, but not atrazine.
- Published
- 2012
26. Cancer Niches and Their Kikuchi Free Energy.
- Author
-
Sajid, Noor, Convertino, Laura, Friston, Karl, and Verdoolaege, Geert
- Subjects
CANCER cells, CELL differentiation, FREE energy (Thermodynamics), CLUSTER variation method, METASTASIS, TUMOR growth, MORPHOLOGY
- Abstract
Biological forms depend on a progressive specialization of pluripotent stem cells. The differentiation of these cells in their spatial and functional environment defines the organism itself; however, cellular mutations may disrupt the mutual balance between a cell and its niche, where cell proliferation and specialization are released from their autopoietic homeostasis. This induces the construction of cancer niches and maintains their survival. In this paper, we characterise cancer niche construction as a direct consequence of interactions between clusters of cancer and healthy cells. Explicitly, we evaluate these higher-order interactions between niches of cancer and healthy cells using Kikuchi approximations to the free energy. Kikuchi's free energy is measured in terms of changes to the sum of energies of baseline clusters of cells (or nodes) minus the energies of overcounted cluster intersections (and interactions of interactions, etc.). We posit that these changes in energy node clusters correspond to a long-term reduction in the complexity of the system conducive to cancer niche survival. We validate this formulation through numerical simulations of apoptosis, local cancer growth, and metastasis, and highlight its implications for a computational understanding of the etiopathology of cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
27. Active listening.
- Author
-
Friston, Karl J., Sajid, Noor, Quiroga-Martinez, David Ricardo, Parr, Thomas, Price, Cathy J., and Holmes, Emma
- Subjects
ACTIVE listening, PROSODIC analysis (Linguistics), SPEECH perception, PREDICTIVE validity, TEST validity
- Abstract
This paper introduces active listening, as a unified framework for synthesising and recognising speech. The notion of active listening inherits from active inference, which considers perception and action under one universal imperative: to maximise the evidence for our (generative) models of the world. First, we describe a generative model of spoken words that simulates (i) how discrete lexical, prosodic, and speaker attributes give rise to continuous acoustic signals; and conversely (ii) how continuous acoustic signals are recognised as words. The 'active' aspect involves (covertly) segmenting spoken sentences and borrows ideas from active vision. It casts speech segmentation as the selection of internal actions, corresponding to the placement of word boundaries. Practically, word boundaries are selected that maximise the evidence for an internal model of how individual words are generated. We establish face validity by simulating speech recognition and showing how the inferred content of a sentence depends on prior beliefs and background noise. Finally, we consider predictive validity by associating neuronal or physiological responses, such as the mismatch negativity and P300, with belief updating under active listening, which is greatest in the absence of accurate prior beliefs about what will be heard next. • Describes a generative model for synthesising and recognising speech. • Considers speech segmentation (placing word boundaries) as an active process. • Treats speech segmentation and lexical inferences as complementary. • Associates neural mismatch responses (e.g., MMN) with belief updating. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
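The boundary-selection idea in the abstract above — choosing word boundaries that maximise the evidence for a model of how individual words are generated — can be caricatured as a dynamic programme over candidate segmentations. A toy sketch, in which the discrete phoneme strings and per-word log-evidence scores are illustrative assumptions standing in for the paper's continuous acoustic model:

```python
import math

def best_segmentation(phonemes, lexicon):
    """Toy 'active listening' segmentation: select the placement of word
    boundaries that maximises summed log evidence under a lexicon of word
    models. `lexicon` maps a word (tuple of phonemes) to its log evidence."""
    n = len(phonemes)
    best = [(-math.inf, None)] * (n + 1)  # (score, backpointer) per boundary
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(end):
            word = tuple(phonemes[start:end])
            if word in lexicon and best[start][0] > -math.inf:
                score = best[start][0] + lexicon[word]
                if score > best[end][0]:
                    best[end] = (score, start)
    # Trace back the selected boundaries into a word sequence.
    words, end = [], n
    while end > 0:
        start = best[end][1]
        if start is None:
            return None  # no segmentation explains the input
        words.append(tuple(phonemes[start:end]))
        end = start
    return list(reversed(words))
```

The inner loop plays the role of the covert "internal actions": each candidate boundary placement is scored by the evidence it affords, and the best-scoring segmentation wins.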
28. Neuromodulatory Control and Language Recovery in Bilingual Aphasia: An Active Inference Approach.
- Author
-
Sajid, Noor, Friston, Karl J., Ekert, Justyna O., Price, Cathy J., and W. Green, David
- Subjects
- *
APHASIA , *INFERENCE (Logic) , *BILINGUALISM , *BILINGUAL education , *LANGUAGE & languages , *ETIOLOGY of diseases - Abstract
Understanding the aetiology of the diverse recovery patterns in bilingual aphasia is a theoretical challenge with implications for treatment. Loss of control over intact language networks provides a parsimonious starting point that can be tested using in-silico lesions. We simulated a complex recovery pattern (alternate antagonism and paradoxical translation) to test the hypothesis—from an established hierarchical control model—that loss of control was mediated by constraints on neuromodulatory resources. We used active (Bayesian) inference to simulate a selective loss of sensory precision; i.e., confidence in the causes of sensations. This in-silico lesion altered the precision of beliefs about task relevant states, including appropriate actions, and reproduced exactly the recovery pattern of interest. As sensory precision has been linked to acetylcholine release, these simulations endorse the conjecture that loss of neuromodulatory control can explain this atypical recovery pattern. We discuss the relevance of this finding for other recovery patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Modules or Mean-Fields?
- Author
-
Parr, Thomas, Sajid, Noor, and Friston, Karl J.
- Subjects
- *
MEAN field theory , *STOCHASTIC systems , *MESSAGE passing (Computer science) , *BRAIN anatomy , *DYNAMICAL systems , *DENSITY of states - Abstract
The segregation of neural processing into distinct streams has been interpreted by some as evidence in favour of a modular view of brain function. This implies a set of specialised 'modules', each of which performs a specific kind of computation in isolation from other brain systems, before sharing the result of this operation with other modules. In light of a modern understanding of stochastic non-equilibrium systems, like the brain, a simpler and more parsimonious explanation presents itself. Formulating the evolution of a non-equilibrium steady state system in terms of its density dynamics reveals that such systems appear on average to perform a gradient ascent on their steady state density. If this steady state implies a sufficiently sparse conditional independency structure, this endorses a mean-field dynamical formulation. This decomposes the density over all states in a system into the product of marginal probabilities for those states. This factorisation lends the system a modular appearance, in the sense that we can interpret the dynamics of each factor independently. However, the argument here is that it is factorisation, as opposed to modularisation, that gives rise to the functional anatomy of the brain or, indeed, any sentient system. In the following, we briefly overview mean-field theory and its applications to stochastic dynamical systems. We then unpack the consequences of this factorisation through simple numerical simulations and highlight the implications for neuronal message passing and the computational architecture of sentience. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
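The factorisation argued for in the abstract above — a density over all states decomposed into a product of marginal probabilities — is easy to make concrete. A minimal numerical sketch, where the 2-D discrete joint and the KL divergence as a measure of factorisation error are illustrative assumptions:

```python
import numpy as np

def mean_field_factorise(joint):
    """Mean-field sketch: replace a joint density p(x, y) with the product
    of its marginals q(x)q(y) -- the factorisation that lends a system its
    modular appearance. Returns the factorised density and the divergence
    D[p || q], which vanishes iff p already factorises."""
    px = joint.sum(axis=1, keepdims=True)   # marginal over x, as a column
    py = joint.sum(axis=0, keepdims=True)   # marginal over y, as a row
    q = px * py                             # broadcasting gives q(x)q(y)
    mask = joint > 0
    kl = float(np.sum(joint[mask] * np.log(joint[mask] / q[mask])))
    return q, kl
```

For an already-independent joint the divergence is zero; for a perfectly correlated one it is maximal, quantifying how much structure the mean-field "modules" ignore.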
30. Disentangling shape and pose for object-centric deep active inference models
- Author
-
Ferraro, Stefano, Van de Maele, Toon, Mazzaglia, Pietro, Verbelen, Tim, Dhoedt, Bart, Buckley, Christopher L., Cialfi, Daniela, Lanillos, Pablo, Ramstead, Maxwell, Sajid, Noor, and Shimazaki, Hideaki
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Technology and Engineering ,Computer Science - Artificial Intelligence ,Computer Vision and Pattern Recognition (cs.CV) ,Disentanglement ,Computer Science - Computer Vision and Pattern Recognition ,Deep learning ,Object perception ,Machine Learning (cs.LG) ,Computer Science - Robotics ,Artificial Intelligence (cs.AI) ,Active inference ,Robotics (cs.RO) - Abstract
Active inference is a first-principles approach for understanding the brain in particular, and sentient agents in general, with the single imperative of minimizing free energy. As such, it provides a computational account for modelling artificial intelligent agents, by defining the agent's generative model and inferring the model parameters, actions and hidden state beliefs. However, the exact specification of the generative model and the hidden state space structure is left to the experimenter, whose design choices influence the resulting behaviour of the agent. Recently, deep learning methods have been proposed to learn a hidden state space structure purely from data, relieving the experimenter of this tedious design task, but resulting in an entangled, non-interpretable state space. In this paper, we hypothesize that such a learnt, entangled state space does not necessarily yield the best model in terms of free energy, and that enforcing different factors in the state space can yield a lower model complexity. In particular, we consider the problem of 3D object representation, and focus on different instances of the ShapeNet dataset. We propose a model that factorizes object shape, pose and category, while still learning a representation for each factor using a deep neural network. We show that the models with the best disentanglement properties perform best when adopted by an active agent in reaching preferred observations.
- Published
- 2023
31. Learning generative models for active inference using tensor networks
- Author
-
Wauthier, Samuel T., Vanhecke, Bram, Verbelen, Tim, Dhoedt, Bart, Buckley, Christopher L., Cialfi, Daniela, Lanillos, Pablo, Ramstead, Maxwell, Sajid, Noor, and Shimazaki, Hideaki
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Artificial Intelligence (cs.AI) ,Technology and Engineering ,Computer Science - Artificial Intelligence ,Generative modeling ,Active inference ,Tensor networks ,Machine Learning (cs.LG) - Abstract
Active inference provides a general framework for behavior and learning in autonomous agents. It states that an agent will attempt to minimize its variational free energy, defined in terms of beliefs over observations, internal states and policies. Traditionally, every aspect of a discrete active inference model must be specified by hand, i.e. by manually defining the hidden state space structure, as well as the required distributions, such as likelihood and transition probabilities. Recently, efforts have been made to learn state space representations automatically from observations using deep neural networks. In this paper, we present a novel approach to learning state spaces using quantum physics-inspired tensor networks. The ability of tensor networks to represent the probabilistic nature of quantum states, as well as to reduce large state spaces, makes them a natural candidate for active inference. We show how tensor networks can be used as a generative model for sequential data. Furthermore, we show how one can obtain beliefs from such a generative model and how an active inference agent can use these to compute the expected free energy. Finally, we demonstrate our method on the classic T-maze environment. (Accepted as a workshop paper at IWAI2022 @ ECML/PKDD2022.)
- Published
- 2023
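The use of a tensor network as a generative model for sequential data, described in the abstract above, can be gestured at with a tiny matrix-product example: each observation selects a matrix, the product of matrices gives an amplitude, and its square gives a Born-rule probability. A sketch under simplifying assumptions — the bond dimensions, normalisation and the `mps_sequence_prob` helper are illustrative, not the paper's construction:

```python
import numpy as np

def mps_sequence_prob(tensors, sequence):
    """Toy matrix-product generative model: tensors[t] has shape
    (n_symbols, bond_in, bond_out); the observed symbol at step t selects
    the matrix tensors[t][s]. The squared product of selected matrices
    yields an (unnormalised) Born-rule probability for the sequence."""
    v = np.ones(tensors[0].shape[1])      # boundary vector on the left bond
    for t, s in enumerate(sequence):
        v = v @ tensors[t][s]             # contract one site per observation
    return float(v.sum()) ** 2            # amplitude squared
```

With bond dimension one and per-site amplitudes 0.6 and 0.8 (squares 0.36 and 0.64, summing to one), the model reduces to an i.i.d. distribution over symbols, which makes the contraction easy to check by hand.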
32. Reclaiming saliency: Rhythmic precision-modulated action and perception.
- Author
-
Anil Meera A, Novicky F, Parr T, Friston K, Lanillos P, and Sajid N
- Abstract
Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimization and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimization that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase its advantages for state and noise estimation, system identification and action selection for informative path planning. (Copyright © 2022 Anil Meera, Novicky, Parr, Friston, Lanillos and Sajid.)
- Published
- 2022
- Full Text
- View/download PDF
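The association of attention with precision control in the abstract above can be illustrated with a precision-weighted Bayesian update: raising the precision assigned to sensory data pulls the posterior belief toward that data. A minimal sketch, where the scalar Gaussian setting is an illustrative assumption:

```python
def precision_weighted_update(mu, pi_prior, y, pi_sensory):
    """Precision-weighted belief update for a scalar Gaussian belief:
    posterior precision adds, and the posterior mean is a precision-weighted
    average of the prior mean `mu` and the datum `y`. 'Attending' to the
    data corresponds to raising `pi_sensory`."""
    pi_post = pi_prior + pi_sensory
    mu_post = (pi_prior * mu + pi_sensory * y) / pi_post
    return mu_post, pi_post
```

Doubling the sensory precision moves the posterior mean further toward the observation, which is the sense in which attention (precision control) gates how much sampled data update beliefs.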
33. Paradoxical lesions, plasticity and active inference.
- Author
-
Sajid N, Parr T, Gajardo-Vidal A, Price CJ, and Friston KJ
- Abstract
Paradoxical lesions are secondary brain lesions that ameliorate functional deficits caused by the initial insult. This effect has been explained in several ways; particularly by the reduction of functional inhibition, or by increases in the excitatory-to-inhibitory synaptic balance within perilesional tissue. In this article, we simulate how and when a modification of the excitatory-inhibitory balance triggers the reversal of a functional deficit caused by a primary lesion. For this, we introduce in-silico lesions to an active inference model of auditory word repetition. The first in-silico lesion simulated damage to the extrinsic (between regions) connectivity causing a functional deficit that did not fully resolve over 100 trials of a word repetition task. The second lesion was implemented in the intrinsic (within region) connectivity, compromising the model's ability to rebalance excitatory-inhibitory connections during learning. We found that when the second lesion was mild, there was an increase in experience-dependent plasticity that enhanced performance relative to a single lesion. This paradoxical lesion effect disappeared when the second lesion was more severe because plasticity-related changes were disproportionately amplified in the intrinsic connectivity, relative to lesioned extrinsic connections. Finally, this framework was used to predict the physiological correlates of paradoxical lesions. This formal approach provides new insights into the computational and neurophysiological mechanisms that allow some patients to recover after large or multiple lesions. (© The Author(s) (2020). Published by Oxford University Press on behalf of the Guarantors of Brain.)
- Published
- 2020
- Full Text
- View/download PDF