A highly efficient implementation of a backpropagation learning algorithm using matrix ISA
- Source :
- Journal of Parallel & Distributed Computing. Jul 2008, Vol. 68, Issue 7, p949-961. 13p.
- Publication Year :
- 2008
Abstract
- BackPropagation (BP) is the best-known learning algorithm for Artificial Neural Networks (ANNs). BP has received intensive research effort aimed at exploiting its parallelism to reduce training time on complex problems. A modified version of BP based on matrix-matrix multiplication was proposed for parallel processing. In this paper, we present the implementation of Matrix BackPropagation (MBP) using scalar, vector, and matrix Instruction Set Architectures (ISAs). We show that MBP performance improves when switching from a scalar ISA to a vector ISA, and improves further when switching from a vector ISA to a matrix ISA. On a practical application, speech recognition, the speedup of training a neural network with an unrolled scalar ISA over a plain scalar ISA is 1.83. On eight parallel lanes, the speedups of the vector, unrolled vector, and matrix ISAs are 10.33, 11.88, and 15.36, respectively, against a theoretical maximum of 16. These results show that the matrix ISA achieves near-optimal performance by reusing loaded data, decreasing loop overhead, and overlapping memory operations with arithmetic operations. [Copyright © Elsevier]
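The core idea the abstract refers to, recasting batch backpropagation so that both the forward and backward passes become matrix-matrix products, can be sketched as follows. This is a minimal illustrative example, not the paper's MBP algorithm: the network shape (4-8-3), sigmoid hidden layer, linear output, learning rate, and random data are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch problem: 32 samples, 4 inputs -> 8 hidden -> 3 outputs.
# (Illustrative sizes, not the speech-recognition network from the paper.)
X = rng.standard_normal((32, 4))          # input batch
T = rng.standard_normal((32, 3))          # target batch
W1 = 0.1 * rng.standard_normal((4, 8))    # input-to-hidden weights
W2 = 0.1 * rng.standard_normal((8, 3))    # hidden-to-output weights
lr = 0.05


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def mse(W1, W2):
    """Mean squared error of the network over the whole batch."""
    return float(np.mean((sigmoid(X @ W1) @ W2 - T) ** 2))


loss_before = mse(W1, W2)

for _ in range(200):
    # Forward pass: each layer is one matrix-matrix multiply.
    H = sigmoid(X @ W1)                   # hidden activations (32, 8)
    Y = H @ W2                            # linear outputs     (32, 3)

    # Backward pass: the errors also propagate via matrix products.
    E = Y - T                             # output error       (32, 3)
    D1 = (E @ W2.T) * H * (1.0 - H)       # hidden delta       (32, 8)

    # Weight updates are again matrix-matrix multiplies, which is why
    # the whole training step maps well onto a matrix ISA (or BLAS-3
    # GEMM kernels): loaded operands are reused across many products.
    W2 -= lr * (H.T @ E) / len(X)
    W1 -= lr * (X.T @ D1) / len(X)

loss_after = mse(W1, W2)
print(loss_before, loss_after)
```

Because every step is a dense matrix product rather than a loop over individual weights, the arithmetic intensity is high enough to reuse loaded data and overlap memory traffic with computation, which is the effect the abstract credits for the near-optimal matrix-ISA speedup.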
- Subjects :
- *ALGORITHMS
*ARTIFICIAL neural networks
*BACK propagation
*ARITHMETIC
*MATHEMATICS
Details
- Language :
- English
- ISSN :
- 07437315
- Volume :
- 68
- Issue :
- 7
- Database :
- Academic Search Index
- Journal :
- Journal of Parallel & Distributed Computing
- Publication Type :
- Academic Journal
- Accession number :
- 32501016
- Full Text :
- https://doi.org/10.1016/j.jpdc.2007.12.004