DFTerNet: Towards 2-bit Dynamic Fusion Networks for Accurate Human Activity Recognition
- Source :
- IEEE Access, Vol. 6, pp. 56750-56764 (2018)
- Publication Year :
- 2018
- Publisher :
- IEEE
Abstract
- Deep convolutional neural networks (DCNNs) are currently popular in human activity recognition (HAR) applications. However, for modern AI-driven, sensor-based games, many research results cannot be practically deployed on portable devices (e.g., smartphones, VR/AR headsets). DCNNs are typically resource-intensive and too large for such devices, which limits the practical application of complex activity detection. In addition, since portable devices lack high-performance graphics processing units, the Action Game (ACT) experience sees hardly any improvement. Moreover, to handle multi-sensor collaboration, previous HAR models have typically treated the representations from different sensor signal sources equally, even though distinct types of activities call for different fusion strategies. In this paper, a novel scheme is proposed for training 2-bit CNNs with weights and activations constrained to {-0.5, 0, 0.5}, while taking into account the correlation between sensor signal sources and activity types. The resulting model, which we refer to as DFTerNet, aims to produce more reliable inference and better trade-offs for practical applications. Quantizing weights and activations substantially reduces memory size and allows efficient bitwise operations to replace floating-point matrix operations, yielding much faster computation and lower power consumption. Our basic idea is to apply quantization directly to pre-trained filter banks and to adopt dynamic fusion strategies for different activity types. Experiments demonstrate that the dynamic fusion strategy can exceed the baseline model performance by up to ~5% on activity recognition data sets such as OPPORTUNITY and PAMAP2. Using the proposed quantization method, we achieved performance close to that of the full-precision counterpart; these results were also verified on the UniMiB-SHAR data set. In addition, the proposed method achieves ~9x acceleration on CPUs and ~11x memory savings.
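- To make the 2-bit constraint concrete, the sketch below shows one common way to map a pre-trained float filter bank onto the ternary set {-0.5, 0, 0.5}. The thresholding rule (0.7 times the mean absolute weight) is borrowed from ternary weight network heuristics and is an assumption for illustration; the abstract does not specify DFTerNet's exact quantizer, and the function and variable names here are hypothetical.

```python
import numpy as np

def ternarize(w, scale=0.5):
    """Quantize a float tensor to the ternary set {-scale, 0, scale}.

    NOTE: the threshold delta = 0.7 * mean(|w|) is a common TWN-style
    heuristic, assumed here for illustration; it is not necessarily
    the rule used in the DFTerNet paper.
    """
    delta = 0.7 * np.mean(np.abs(w))
    q = np.zeros_like(w)
    q[w > delta] = scale     # large positive weights -> +0.5
    q[w < -delta] = -scale   # large negative weights -> -0.5
    return q                 # everything near zero stays 0

# Example: quantize a (hypothetical) pre-trained convolution filter bank.
rng = np.random.default_rng(0)
filters = rng.normal(0.0, 0.1, size=(64, 1, 5, 3))  # (out_ch, in_ch, kH, kW)
q_filters = ternarize(filters)
print(np.unique(q_filters))  # -> [-0.5  0.   0.5]
```

- Because every quantized value is one of three levels, each weight can be stored in 2 bits and convolutions can be reduced to sign selection and accumulation, which is the source of the memory-saving and CPU-speedup figures reported in the abstract.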
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 6
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.9ef3e86584fa452782bb411e431cfd08
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2018.2873315