
Self-training superconducting neuromorphic circuits using reinforcement learning rules

Authors:
Schneider, M. L.
Jué, E. M.
Pufall, M. R.
Segall, K.
Anderson, C. W.
Publication Year:
2024

Abstract

Reinforcement learning algorithms are used in a wide range of applications, from gaming and robotics to autonomous vehicles. In this paper we describe a set of reinforcement learning-based local weight update rules and their implementation in superconducting hardware. Using SPICE circuit simulations, we implement a small-scale neural network with a learning time of order one nanosecond. This network can be trained to learn new functions simply by changing the target output for a given set of inputs, without the need for any external adjustments to the network. In this implementation the weights are adjusted based on the current state of the overall network response and locally stored information about the previous action. This removes the need to program explicit weight values in these networks, which is one of the primary challenges that analog hardware implementations of neural networks face. The adjustment of weights is based on a global reinforcement signal that obviates the need for circuitry to back-propagate errors.

Comment: 15 pages, 6 figures
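For illustration, the kind of rule the abstract describes (a broadcast scalar reinforcement signal combined with locally stored information about the previous action, with no back-propagation circuitry) can be sketched in software as a reward-modulated, node-perturbation-style update. The network size, learning rate, perturbation scheme, and reward baseline below are assumptions for this sketch only; they are not the paper's superconducting circuit implementation.

    # Minimal sketch of a local, reward-modulated weight update (assumed parameters).
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 4, 2
    W = rng.normal(scale=0.1, size=(n_out, n_in))   # synaptic weights
    eta = 0.05                                      # learning rate (assumed)

    def forward(x, noise_scale=0.1):
        # Noisy forward pass; each synapse can store its own perturbation
        # locally as a record of the "previous action".
        noise = rng.normal(scale=noise_scale, size=n_out)
        return np.tanh(W @ x) + noise, noise

    def reward(y, target):
        # Global scalar reinforcement signal broadcast to all synapses:
        # larger (less negative) when the network output is closer to target.
        return -np.sum((y - target) ** 2)

    x = np.array([1.0, 0.0, 1.0, 0.0])
    target = np.array([0.5, -0.5])

    baseline = 0.0  # running average of reward, used to center the signal
    for step in range(500):
        y, noise = forward(x)
        r = reward(y, target)
        # Local update: each weight uses only its own input, its locally
        # stored perturbation, and the broadcast reward. No error is
        # back-propagated through the network.
        W += eta * (r - baseline) * np.outer(noise, x)
        baseline += 0.1 * (r - baseline)

    print("final output:", np.tanh(W @ x))
    print("target:      ", target)

Retraining to a new function in this sketch amounts to changing `target` and continuing the same update loop, mirroring the abstract's point that the network learns new functions simply by changing the target output for a given set of inputs.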

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.18774
Document Type:
Working Paper