Alarm-based explanations of process monitoring results from deep neural networks.
- Source :
- Computers & Chemical Engineering, Nov 2023, Vol. 179.
- Publication Year :
- 2023
Abstract
- Highlights:
- • Data-driven methods offer excellent performance for process monitoring.
- • Their black-box nature, lacking explanations, has limited their industrial adoption.
- • We propose a method for explaining the predictions of deep neural networks using alarm limits.
- • Two case studies demonstrate its utility to the plant operator as well as to the model developer.
- Deep Learning (DL) models are becoming the preferred approach for process monitoring due to their higher prediction accuracy; however, they are still viewed as black boxes. Explainable Artificial Intelligence (XAI) methods seek to address this shortcoming by providing explanations that are either global (explaining the entire DL model) or local (explaining the result for each individual sample). Due to the nonlinearities and other complexities inherent in chemical processes, a local explanation is more suitable. This paper proposes a local XAI method that explains process monitoring results from a deep neural network (DNN) on the basis of process alarms. The effectiveness of the proposed method is demonstrated using a CSTR case study and the Tennessee-Eastman benchmark process. Our results show that the explanations proffered by the proposed method assist operators in understanding the DNN's predictions during online process monitoring. Additionally, during model development, it can offer insights that enable improvement of the DNN's architecture.
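The abstract describes explaining a DNN's fault prediction in terms of process alarm limits. The paper's actual algorithm is not reproduced in this record, but the core idea can be sketched as follows: when the DNN flags a sample as faulty, report which process variables violate their configured alarm limits, giving the operator an alarm-based rationale. All variable names and limits below are hypothetical, chosen only for illustration.

```python
# Minimal sketch (not the authors' implementation): given a sample flagged
# as faulty by a DNN, list the process variables outside their alarm limits.

def explain_with_alarms(sample, alarm_limits):
    """Return alarm-based explanations for variables violating their limits.

    sample       : dict mapping variable name -> measured value
    alarm_limits : dict mapping variable name -> (low_limit, high_limit)
    """
    explanation = []
    for var, value in sample.items():
        low, high = alarm_limits[var]
        if value < low:
            explanation.append(f"{var}: low alarm ({value} < {low})")
        elif value > high:
            explanation.append(f"{var}: high alarm ({value} > {high})")
    return explanation

# Hypothetical CSTR-style measurements and alarm limits.
sample = {"reactor_temp": 395.0, "coolant_flow": 8.2, "pressure": 2.1}
alarm_limits = {
    "reactor_temp": (350.0, 390.0),
    "coolant_flow": (9.0, 15.0),
    "pressure": (1.0, 3.0),
}
print(explain_with_alarms(sample, alarm_limits))
```

Here only the variables violating limits appear in the output, so the explanation stays compact enough for an operator to act on during online monitoring.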
Details
- Language :
- English
- ISSN :
- 0098-1354
- Volume :
- 179
- Database :
- Academic Search Index
- Journal :
- Computers & Chemical Engineering
- Publication Type :
- Academic Journal
- Accession number :
- 173371490
- Full Text :
- https://doi.org/10.1016/j.compchemeng.2023.108442