1. Learning to be attractive: probabilistic computation with dynamic attractor networks
- Author
Mathieu Lefort, Alexander Gepperth (Flowing Epigenetic Robots and Systems (Flowers), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria); Robotique et Vision (RV), Unité d'Informatique et d'Ingénierie des Systèmes (U2IS), École Nationale Supérieure de Techniques Avancées (ENSTA ParisTech), Université Paris-Saclay)
- Subjects
Lyapunov function, Computer science, Computation, Probabilistic logic, Bayesian inference, Data modeling, Attractor, Maximum a posteriori estimation, Artificial intelligence, Latency, Algorithm, [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG]
- Abstract
In the context of sensory or higher-level cognitive processing, we present a recurrent neural network model, similar to the popular dynamic neural field (DNF) model, for performing approximate probabilistic computations. The model is biologically plausible, avoids impractical schemes such as log-encoding and noise assumptions, and is well suited to operating in stacked hierarchies. Using Lyapunov analysis, we provide strong evidence that the model computes the maximum a posteriori (MAP) estimate for a given input, which may be corrupted by noise. Key points of the model are its ability to learn the required posterior distributions and represent them in its lateral weights, the interpretation of stable neural activities as MAP estimates, and the interpretation of latency as the probability associated with those estimates. We demonstrate in simple experiments that learning the posterior distributions is feasible and yields correct MAP estimates. Furthermore, a pre-activation of field sites can modify attractor states when the data model is ambiguous, effectively providing an approximate implementation of Bayesian inference.
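To make the dynamics concrete, here is a minimal sketch of a rate-based neural field of the kind the abstract describes: symmetric lateral weights give Hopfield-style dynamics that relax to an attractor, the attractor's peak can be read out as a MAP-like estimate, and the time to reach high activation serves as a latency-based confidence signal. The hand-crafted Mexican-hat kernel, the `dnf_step` helper, and all parameter values are illustrative assumptions, not the authors' learned weights or implementation (in the paper, the lateral weights are learned from data).

```python
import numpy as np

n = 100
x = np.arange(n)

# Illustrative "Mexican hat" lateral kernel: local excitation, broad inhibition.
# The paper learns these lateral weights; here they are fixed by hand.
d2 = (x[:, None] - x[None, :]) ** 2
W = np.exp(-d2 / (2 * 3.0 ** 2)) - 0.6 * np.exp(-d2 / (2 * 12.0 ** 2))

def rate(u):
    return np.tanh(np.maximum(u, 0.0))  # bounded, rectified firing rate

def dnf_step(u, inp, W, tau=10.0, h=-0.5, dt=1.0):
    """One Euler step of tau * du/dt = -u + h + inp + W @ f(u).
    With symmetric W and a monotone rate function, such dynamics
    typically admit a Lyapunov function and settle into a stable
    fixed point (an attractor state)."""
    return u + (dt / tau) * (-u + h + inp + W @ rate(u))

# Noisy input with a bump at site 30: the field should relax to an
# attractor whose peak location is the MAP-like estimate.
rng = np.random.default_rng(0)
inp = np.exp(-(x - 30) ** 2 / (2 * 4.0 ** 2)) + 0.2 * rng.standard_normal(n)

u = np.zeros(n)
latency = None
for t in range(300):
    u = dnf_step(u, inp, W)
    if latency is None and rate(u).max() > 0.8:
        latency = t  # earlier crossing ~ higher confidence in the estimate

print("estimated site:", int(np.argmax(u)), "| latency:", latency)
```

A pre-activation in the sense of the abstract would amount to adding a bias to `u` before the relaxation loop, which can shift the resulting attractor when the input alone is ambiguous.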
- Published
- 2016