
Learning to be attractive: probabilistic computation with dynamic attractor networks

Authors :
Mathieu Lefort
Alexander Gepperth

Affiliations :
Flowing Epigenetic Robots and Systems (Flowers), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)
Robotique et Vision (RV), Unité d'Informatique et d'Ingénierie des Systèmes (U2IS), École Nationale Supérieure de Techniques Avancées (ENSTA ParisTech), Univ. Paris-Saclay
Source :
International Conference on Development and Learning and Epigenetic Robotics (ICDL-EPIROB), 2016, Cergy-Pontoise, France
Publication Year :
2016
Publisher :
HAL CCSD, 2016.

Abstract

In the context of sensory or higher-level cognitive processing, we present a recurrent neural network model, similar to the popular dynamic neural field (DNF) model, for performing approximate probabilistic computations. The model is biologically plausible, avoids impractical schemes such as log-encoding and noise assumptions, and is well suited for working in stacked hierarchies. Using Lyapunov analysis, we argue that the model computes the maximum a posteriori (MAP) estimate given an input that may be corrupted by noise. Key points of the model are its capability to learn the required posterior distributions and represent them in its lateral weights, the interpretation of stable neural activities as MAP estimates, and the interpretation of latency as the probability associated with those estimates. We demonstrate in simple experiments that learning of posterior distributions is feasible and results in correct MAP estimates. Furthermore, a pre-activation of field sites can modify attractor states when the data model is ambiguous, effectively providing an approximate implementation of Bayesian inference.
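To make the abstract's idea concrete, here is a minimal NumPy sketch of a DNF-style attractor relaxation: lateral weights store two attractor "bumps" (standing in for learned posteriors), a noisy input cue selects one of them, the settled activity peak plays the role of the MAP estimate, and the number of steps to convergence stands in for the latency. The dynamics du/dt = (-u + W f(u) + I)/tau, the rectified-linear rate function, and all names (relax_dnf, bump, tau) are illustrative assumptions based on standard Amari-style field equations, not the paper's exact model.

```python
import numpy as np

def relax_dnf(u0, W, inp, tau=10.0, dt=1.0, steps=500, tol=1e-6):
    # Iterate du/dt = (-u + W @ f(u) + inp) / tau until a fixed point.
    u = u0.copy()
    for t in range(steps):
        r = np.maximum(u, 0.0)            # rectified-linear firing rates
        du = (-u + W @ r + inp) / tau
        u = u + dt * du
        if np.linalg.norm(du) < tol:      # settled: attractor reached
            return u, t                   # t plays the role of "latency"
    return u, steps

n = 50
x = np.arange(n)

def bump(c, s=3.0):
    # Gaussian activity profile centred at site c
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Lateral weights encoding two attractors (stand-in for learned posteriors);
# normalised so the recurrent gain stays subcritical and dynamics converge.
W = 0.9 * (np.outer(bump(15), bump(15)) + np.outer(bump(35), bump(35))) / bump(n // 2).sum()

inp = 0.2 * bump(16)                      # noisy cue near the first attractor
u, latency = relax_dnf(np.zeros(n), W, inp)
print("MAP estimate (peak site):", u.argmax(), "settling latency:", latency)

# Ambiguous cue plus a weak pre-activation (prior) at the second attractor:
# the bias shifts the settled peak, a toy analogue of the Bayesian effect
# described in the abstract.
prior = 0.05 * bump(35)
u2, _ = relax_dnf(np.zeros(n), W, 0.1 * (bump(15) + bump(35)) + prior)
print("with pre-activation at 35, peak site:", u2.argmax())
```

In this sketch the pre-activation is modelled as a small additive input bias; whether that matches the paper's exact mechanism for pre-activating field sites is an assumption.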

Details

Language :
English
Database :
OpenAIRE
Journal :
International Conference on Development and Learning and Epigenetic Robotics (ICDL-EPIROB), 2016, Cergy-Pontoise, France
Accession number :
edsair.doi.dedup.....57eb954ed1a7df92e0742b84c359bed2