FPGA Implementation of the Locally Recurrent Probabilistic Neural Network
- Authors
Todor Ganchev, Nikolay Dukov, and Dimitar Kovachev
- Subjects
Artificial neural network, Time delay neural network, Computer science, Quantization (signal processing), Machine learning, Probabilistic neural network, Software, Computer engineering, Artificial intelligence, Field-programmable gate array, Energy (signal processing)
- Abstract
The Locally Recurrent Probabilistic Neural Network (LRPNN) consists of an input layer, three hidden layers, and an output layer. The first two hidden layers are derived from the original PNN, while the third, referred to as the recurrent layer, is capable of modeling correlations within temporal sequences of observations. In the present study, we investigate the feasibility of an FPGA-based implementation of the locally recurrent layer of the LRPNN. An important consideration, owing to the specifics of this architecture, is the use of very high-precision modules in the hardware design. Although expensive in terms of the resources available on the FPGA chip, this is necessary in order to compensate for the quantization error added by the multiple feedback paths between neurons in the network. The weights of the recurrent layer are computed automatically from the available training data and translated into the hardware design. The experimental evaluation was carried out on the DEAP database, where two classes of emotional states were considered. The design takes as input the short-term energy computed from a 32-channel electroencephalographic (EEG) signal. The results of an extensive experimental validation show a difference of approximately one percent between the accuracy achieved with the CPU-based software implementation and the FPGA-based hardware implementation of the LRPNN.
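The abstract highlights two ideas: a recurrent layer whose neurons feed their previous outputs back into the layer, and quantization error that re-enters the loop through those feedbacks. The paper itself provides no source code, so the following is only a minimal NumPy sketch of both ideas; the function names (`recurrent_layer_step`, `quantize`, `short_term_energy`), the weight shapes, and the normalization step are illustrative assumptions, not the authors' design.

```python
import numpy as np

def short_term_energy(frames):
    """Short-term energy of framed EEG: mean squared amplitude per frame.
    `frames` has shape (n_frames, frame_len). Illustrative definition only;
    the paper does not spell out the exact windowing used."""
    return np.mean(frames ** 2, axis=1)

def recurrent_layer_step(p_current, y_prev, W_in, W_fb):
    """One time step of a locally recurrent layer (hypothetical sketch).

    p_current : outputs of the PNN summation layer at time t
    y_prev    : recurrent-layer outputs at time t-1 (the feedback path)
    W_in, W_fb: input and feedback weight matrices (assumed shape K x K)
    """
    z = W_in @ p_current + W_fb @ y_prev
    return z / (np.sum(np.abs(z)) + 1e-12)  # keep outputs bounded / probability-like

def quantize(x, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits,
    mimicking the precision loss of a hardware datapath."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = 2                                  # two emotional-state classes, as in the DEAP setup
    W_in = rng.uniform(size=(K, K))
    W_fb = 0.1 * rng.uniform(size=(K, K))
    y_full = np.zeros(K)                   # full-precision recurrent state
    y_quant = np.zeros(K)                  # state computed with a quantized datapath
    for _ in range(200):
        p = rng.uniform(size=K)            # stand-in for summation-layer outputs
        y_full = recurrent_layer_step(p, y_full, W_in, W_fb)
        y_quant = quantize(recurrent_layer_step(p, y_quant, W_in, W_fb), frac_bits=8)
    # The gap below grows with more feedback steps and fewer fractional bits.
    print("accumulated feedback error:", np.max(np.abs(y_full - y_quant)))
```

Because `y_prev` is fed back at every step, any rounding introduced by `quantize` is re-injected into the next step and can accumulate over time, which is consistent with the paper's decision to spend FPGA resources on unusually wide, high-precision modules for the recurrent path.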
- Published
2017