
Determinantal point process attention over grid cell code supports out of distribution generalization.

Authors:
Mondal, Shanka Subhra
Frankland, Steven
Webb, Taylor W.
Cohen, Jonathan D.
Source:
eLife. 8/1/2024, p1-27. 27p.
Publication Year:
2024

Abstract

Deep neural networks have made tremendous gains in emulating human-like intelligence, and have been used increasingly as ways of understanding how the brain may solve the complex computational problems on which this relies. However, these networks still fall short of, and therefore fail to provide insight into, how the brain supports the strong forms of generalization of which humans are capable. One such case is out-of-distribution (OOD) generalization: successful performance on test examples that lie outside the distribution of the training set. Here, we identify properties of processing in the brain that may contribute to this ability. We describe a two-part algorithm that draws on specific features of neural computation to achieve OOD generalization, and provide a proof of concept by evaluating performance on two challenging cognitive tasks. First, we draw on the fact that the mammalian brain represents metric spaces using a grid cell code (e.g., in the entorhinal cortex): abstract representations of relational structure, organized in recurring motifs that cover the representational space. Second, we propose an attentional mechanism that operates over the grid cell code using a determinantal point process (DPP), which we call DPP attention (DPP-A): a transformation that ensures maximum sparseness in the coverage of that space. We show that a loss function that combines standard task-optimized error with DPP-A can exploit the recurring motifs in the grid cell code, and can be integrated with common architectures to achieve strong OOD generalization performance on analogy and arithmetic tasks. This provides both an interpretation of how the grid cell code in the mammalian brain may contribute to generalization performance, and, at the same time, a potential means for improving such capabilities in artificial neural networks. [ABSTRACT FROM AUTHOR]
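To make the combined objective described in the abstract concrete, below is a minimal PyTorch sketch of a DPP-A-style loss: a standard task error plus a determinantal (log-det) diversity term over attention-weighted grid cell embeddings. The function name, the softmax attention parameterization, and the L-ensemble kernel construction are illustrative assumptions for exposition, not the authors' exact implementation.

```python
import torch

def dpp_attention_loss(grid_code, attn_logits, task_loss, lam=1.0, eps=1e-6):
    """Sketch of a DPP-A-style combined objective (illustrative, not the authors' code).

    grid_code   -- (N, D) tensor: N grid cell embeddings tiling the metric space
    attn_logits -- (N,) learnable attention scores over those embeddings
    task_loss   -- scalar task-optimized error (e.g., cross-entropy on an analogy task)
    """
    # Soft attention weights over the grid cell embeddings
    w = torch.softmax(attn_logits, dim=0)                      # (N,)
    # Attention-weighted L-ensemble kernel: pairwise similarity of attended embeddings
    X = grid_code * w.sqrt().unsqueeze(1)                      # reweight each row
    K = X @ X.T                                                # (N, N), positive semidefinite
    # log det(K) is largest when the attended embeddings are mutually
    # dissimilar, i.e., when attention covers the space maximally sparsely
    logdet = torch.logdet(K + eps * torch.eye(K.shape[0], device=K.device))
    # Combined objective: minimize task error while maximizing DPP diversity
    return task_loss - lam * logdet
```

The log-determinant is the standard relaxed objective for DPP MAP inference: a set of mutually similar items makes the kernel near-singular and is penalized, so maximizing it drives attention toward a diverse, space-covering subset of grid cell motifs.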

Details

Language:
English
ISSN:
2050-084X
Database:
Academic Search Index
Journal:
eLife
Publication Type:
Academic Journal
Accession Number:
178872886
Full Text:
https://doi.org/10.7554/eLife.89911