Stacked Autoencoders Using Low-Power Accelerated Architectures for Object Recognition in Autonomous Systems.
- Source :
- Neural Processing Letters; Apr 2016, Vol. 43, Issue 2, p445-458, 14p
- Publication Year :
- 2016
Abstract
- This paper investigates low-energy-consumption and low-power hardware models and processor architectures for performing real-time recognition of objects in power-constrained autonomous systems and robots. Recent developments show that convolutional deep neural networks are currently the state of the art in terms of classification accuracy. In this article we propose the use of a different type of deep neural network, stacked autoencoders, and show that with a limited number of layers and nodes, chosen to accommodate low-power accelerators such as mobile GPUs and FPGAs, we are still able to achieve classification accuracy not far from the state of the art together with a high number of processed frames per second. We present experiments using the color CIFAR-10 dataset. This enables the adaptation of the architecture to a live camera feed. Another novelty, also proposed for the first time in this work, is that the training phase can be performed on these low-power devices, instead of the usual approach of training on a desktop CPU or GPU and only running the trained network on the FPGA afterwards. This allows new functionalities to be incorporated, for example a robot performing runtime learning. [ABSTRACT FROM AUTHOR]
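The abstract describes a stacked autoencoder kept small (few layers and nodes) so it can fit low-power accelerators, trained on CIFAR-10. As a rough illustration of that family of models only (not the authors' implementation), the sketch below builds a two-layer stacked autoencoder with greedy layer-wise pretraining and a linear classifier head in PyTorch; the layer sizes, activation, optimizer, and epoch count are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of a stacked autoencoder for CIFAR-10-sized inputs (32x32x3 = 3072).
# Layer sizes 3072 -> 1000 -> 300 are arbitrary placeholders, not the paper's settings.
import torch
import torch.nn as nn


class StackedAutoencoder(nn.Module):
    """Greedy layer-wise pretrained encoder stack with a linear classifier head."""

    def __init__(self, sizes=(3072, 1000, 300), n_classes=10):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1))
        self.decoders = nn.ModuleList(
            nn.Linear(sizes[i + 1], sizes[i]) for i in range(len(sizes) - 1))
        self.classifier = nn.Linear(sizes[-1], n_classes)
        self.act = nn.Sigmoid()

    def pretrain_layer(self, i, data, epochs=5, lr=1e-3):
        """Train encoder/decoder pair i to reconstruct its own input."""
        params = list(self.encoders[i].parameters()) + list(self.decoders[i].parameters())
        opt = torch.optim.Adam(params, lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for x in data:  # data: iterable of flattened batches, shape (B, sizes[0])
                with torch.no_grad():  # propagate through already-trained lower layers
                    for j in range(i):
                        x = self.act(self.encoders[j](x))
                recon = self.decoders[i](self.act(self.encoders[i](x)))
                loss = loss_fn(recon, x)
                opt.zero_grad()
                loss.backward()
                opt.step()

    def forward(self, x):
        for enc in self.encoders:
            x = self.act(enc(x))
        return self.classifier(x)
```

In the usual stacked-autoencoder recipe, each layer is pretrained on reconstruction in turn and the full stack plus classifier is then fine-tuned end-to-end with a cross-entropy loss; for deployment on an FPGA or mobile GPU, as targeted in the paper, the same structure would be mapped to fixed-point or vendor-specific kernels rather than run through a framework like the one above.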
Details
- Language :
- English
- ISSN :
- 1370-4621
- Volume :
- 43
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Neural Processing Letters
- Publication Type :
- Academic Journal
- Accession number :
- 113529276
- Full Text :
- https://doi.org/10.1007/s11063-015-9430-9