
Improving Fault Tolerance for Reliable DNN Using Boundary-Aware Activation.

Authors :
Zhan, Jinyu
Sun, Ruoxu
Jiang, Wei
Jiang, Yucheng
Yin, Xunzhao
Zhuo, Cheng
Source :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems. Oct2022, Vol. 41 Issue 10, p3414-3425. 12p.
Publication Year :
2022

Abstract

In this article, we present an approach to constructing reliable deep neural networks (DNNs) for safety-critical artificial intelligence applications. We propose to modify the rectified linear unit (ReLU), a commonly used activation function in DNNs, to tolerate faults incurred by bit-flip perturbations on weights. Through theoretical analysis of fault propagation in layers with ReLU activation, we observe that bounding the output of the ReLU activation helps tolerate weight faults. We then propose a novel ReLU design called boundary-aware ReLU (BReLU) to improve the reliability of DNNs, in which an upper bound of ReLU is determined such that the deviation between the bounded and original outputs cannot affect the final result. We propose a gradient-ascent-based algorithm to find the boundaries for the BReLU activations of all DNN layers. Because it requires no retraining of the network, our approach is cost-effective and practical when deployed in safety-critical artificial intelligence systems. Detailed experiments and real-life application benchmarking demonstrate that our approach can improve the accuracy of the DNN VGG16 from 16.7% to 82.6% on average under practical weight faults, with only 13% memory overhead and 2.78% time overhead. [ABSTRACT FROM AUTHOR]
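The core mechanism described in the abstract, capping each ReLU output at a per-layer upper bound so that a fault-inflated activation cannot propagate unchecked, can be sketched as a clipped activation. The following is a minimal NumPy sketch, not the authors' implementation; the function name `brelu` and the parameter `bound` are illustrative, and the paper's gradient-ascent boundary search is not reproduced here:

```python
import numpy as np

def brelu(x, bound):
    """Sketch of a boundary-aware ReLU: standard ReLU output clipped
    to a per-layer upper bound, so that a large deviation caused by a
    bit-flipped weight cannot produce an unbounded activation."""
    return np.minimum(np.maximum(x, 0.0), bound)

# A pre-activation vector where the last entry is corrupted by a
# hypothetical bit flip in a weight's exponent bits.
x = np.array([-1.0, 0.5, 2.0, 1e6])
print(brelu(x, bound=4.0))  # the faulty 1e6 is capped at 4.0
```

In this sketch, the bound plays the role the abstract assigns to the BReLU boundary: normal activations below it pass through unchanged, while fault-inflated values are clamped so the deviation stays small enough not to flip the final classification.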

Details

Language :
English
ISSN :
02780070
Volume :
41
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems
Publication Type :
Academic Journal
Accession number :
160651756
Full Text :
https://doi.org/10.1109/TCAD.2021.3129114