
A deep learning framework for neuroscience

Authors :
Panayiota Poirazi
Greg Wayne
Christopher C. Pack
Surya Ganguli
Joel Zylberberg
Pieter R. Roelfsema
Grace W. Lindsay
Blake A. Richards
Walter Senn
Colleen J. Gillon
Denis Therien
Philippe Beaudoin
Anna C. Schapiro
Kenneth D. Miller
Archy O. de Berker
Yoshua Bengio
Claudia Clopath
Peter E. Latham
Amelia J. Christensen
João Sacramento
Nikolaus Kriegeskorte
Timothy P. Lillicrap
Rui Ponte Costa
Danijar Hafner
Daniel L. K. Yamins
Benjamin Scellier
Rafal Bogacz
Adam Kepecs
Richard Naud
Friedemann Zenke
Konrad P. Kording
Andrew M. Saxe
Affiliations and Funders :
Netherlands Institute for Neuroscience (NIN)
University of Zurich
Wellcome Trust
Biotechnology and Biological Sciences Research Council (BBSRC)
Simons Foundation
National Institutes of Health
Source :
Richards, B A, Lillicrap, T P, Beaudoin, P, Bengio, Y, Bogacz, R, Christensen, A, Clopath, C, Costa, R P, de Berker, A, Ganguli, S, Gillon, C J, Hafner, D, Kepecs, A, Kriegeskorte, N, Latham, P, Lindsay, G W, Miller, K D, Naud, R, Pack, C C, Poirazi, P, Roelfsema, P, Sacramento, J, Saxe, A, Scellier, B, Schapiro, A C, Senn, W, Wayne, G, Yamins, D, Zenke, F, Zylberberg, J, Therien, D & Kording, K P 2019, 'A deep learning framework for neuroscience', Nature Neuroscience, vol. 22, no. 11, pp. 1761-1770. https://doi.org/10.1038/s41593-019-0520-2
Publication Year :
2019

Abstract

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In the case of artificial neural networks, the three components specified by design are the objective functions, the learning rules, and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
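
For concreteness, the tripartite framing can be made explicit in code. The following is a minimal sketch, not taken from the paper: a toy NumPy regression example in which the architecture, the objective function and the learning rule are each written out as separate, swappable pieces. The task, network size and all variable names are illustrative assumptions.

# Illustrative sketch only (not from the paper): a toy two-layer network
# in which the three designed components are written out separately.
import numpy as np

rng = np.random.default_rng(0)

# 1) Architecture: one hidden layer of 32 tanh units with a linear readout.
n_in, n_hidden, n_out = 1, 32, 1
W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1.T + b1)   # hidden-layer activations
    y = h @ W2.T + b2            # network output
    return h, y

# 2) Objective function: mean squared error on a toy regression task.
def mse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

# Toy data: fit y = sin(x) on [-pi, pi] (an assumed task for illustration).
x = rng.uniform(-np.pi, np.pi, (256, 1))
y = np.sin(x)

# 3) Learning rule: full-batch gradient descent via backpropagation.
#    (Other, more biologically plausible rules could in principle
#    approximate the same optimization.)
lr = 0.05
for step in range(2000):
    h, y_pred = forward(x)
    err = 2.0 * (y_pred - y) / y.size      # dL/dy_pred
    dW2 = err.T @ h
    db2 = err.sum(axis=0)
    dh = (err @ W2) * (1.0 - h ** 2)       # backprop through tanh
    dW1 = dh.T @ x
    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", mse(forward(x)[1], y))

In this sketch each component can be varied independently: replacing the backpropagation update with a different weight update changes the learning rule while leaving the architecture and objective untouched, which is the sense in which the three components are separately designed.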

Details

Language :
English
ISSN :
1097-6256
Database :
OpenAIRE
Journal :
Nature Neuroscience
Accession number :
edsair.doi.dedup.....3b42f784020df4a8b09aed7fa2d889d1
Full Text :
https://doi.org/10.1038/s41593-019-0520-2