
Understanding, Explanation, and Active Inference.

Authors :
Parr, Thomas
Pezzulo, Giovanni
Source :
Frontiers in Systems Neuroscience; 11/5/2021, Vol. 15, p1-13, 13p
Publication Year :
2021

Abstract

While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machine is able to explain its behaviour (i.e., explainable AI). In this article, we address the issue of machine understanding from the perspective of active inference. This paradigm enables decision making based upon a model of how data are generated. The generative model contains those variables required to explain sensory data, and its inversion may be seen as an attempt to explain the causes of these data. Here we are interested in explanations of one's own actions. This implies a deep generative model that includes a model of the world, used to infer policies, and a higher-level model that attempts to predict which policies will be selected based upon a space of hypothetical (i.e., counterfactual) explanations—and which can subsequently be used to provide (retrospective) explanations about the policies pursued. We illustrate the construct validity of this notion of understanding in relation to human understanding by highlighting the similarities in computational architecture and the consequences of its dysfunction. [ABSTRACT FROM AUTHOR]
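To make the architecture described above concrete, the following is a minimal, hypothetical sketch (not the authors' simulations) of a two-level setup in Python/NumPy: a lower-level discrete generative model whose policies are scored by expected free energy, and a higher-level model over a small space of counterfactual "explanations", each of which predicts which policy would be selected. Inverting the higher-level model after a policy has been pursued yields a retrospective explanation. All matrices, preferences, and explanation labels here are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# --- Lower level: toy discrete generative model (hypothetical values) ---
# Two hidden states, two observations, two one-step policies.
A = np.array([[0.9, 0.1],          # likelihood P(o | s)
              [0.1, 0.9]])
B = [np.eye(2),                    # policy 0: stay in the current state
     np.array([[0.0, 1.0],
               [1.0, 0.0]])]       # policy 1: switch state
C = np.array([0.2, 0.8])           # prior preference over observations
q_s = np.array([0.8, 0.2])         # current beliefs about hidden states

def expected_free_energy(policy):
    """Expected free energy (risk + ambiguity) of a one-step policy."""
    q_s_next = B[policy] @ q_s                      # predicted states
    q_o = A @ q_s_next                              # predicted observations
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - np.log(C + 1e-16)))
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)    # per-state likelihood entropy
    ambiguity = H_A @ q_s_next
    return risk + ambiguity

G = np.array([expected_free_energy(p) for p in range(2)])
q_pi = softmax(-G)                 # posterior over policies (lower level)

# --- Higher level: counterfactual explanations for policy selection ---
# Each explanation predicts which policy the agent would select if it held.
explanations = ["seeking preferred outcome", "resolving uncertainty"]
P_pi_given_expl = np.array([[0.2, 0.8],    # P(policy | explanation 0)
                            [0.6, 0.4]])   # P(policy | explanation 1)
prior_expl = np.array([0.5, 0.5])

# Retrospective explanation: invert the higher-level model given the
# policy that was actually pursued.
chosen_policy = int(np.argmax(q_pi))
posterior_expl = prior_expl * P_pi_given_expl[:, chosen_policy]
posterior_expl /= posterior_expl.sum()

print("policy posterior:", q_pi)
print("chosen policy:", chosen_policy)
for label, p in zip(explanations, posterior_expl):
    print(f"P({label!r} | chosen policy) = {p:.2f}")
```

With these illustrative numbers, the agent favours the policy that brings its predicted observations closest to its preferences, and the higher-level inversion attributes that choice mostly to "seeking preferred outcome"; the same machinery, run before action, would predict which policy will be selected under each hypothetical explanation.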

Details

Language :
English
ISSN :
1662-5137
Volume :
15
Database :
Complementary Index
Journal :
Frontiers in Systems Neuroscience
Publication Type :
Academic Journal
Accession Number :
153433759
Full Text :
https://doi.org/10.3389/fnsys.2021.772641