TY - GEN
T1 - Interpretable Fault Detection Approach With Deep Neural Networks to Industrial Applications
AU - Kakavandi, Fatemeh
AU - Han, Peihua
AU - de Reus, Roger
AU - Larsen, Peter Gorm
AU - Zhang, Houxiang
PY - 2023/6
Y1 - 2023/6
N2 - Various explainable techniques have been introduced to overcome challenges in complex machine learning models, such as uncertainty and lack of interpretability in sensitive processes. This paper presents an interpretable deep-learning-based fault detection approach for two separate but relatively sensitive use cases. The first use case involves a vessel engine that aims to replicate a real-life ferry crossing. The second use case is an industrial medical-device assembly line that mounts and engages different product components. In this approach, we first investigate two deep-learning models that classify samples as normal or abnormal. Then, different explainable algorithms are studied to explain the prediction outcomes of both models. Quantitative and qualitative evaluations of these methods are also carried out. Ultimately, the deep-learning model with the best-performing explainable algorithm is chosen as the final interpretable fault detector. However, the appropriate classifier and explainable technique depend on the use case. For example, for fault detection on the medical-device assembly line, the DeepLiftShap algorithm is most aligned with expert knowledge and therefore yields better qualitative results, whereas the Occlusion algorithm has lower sensitivity and therefore yields better quantitative results. Consequently, choosing the final explainable algorithm involves a compromise between the qualitative and quantitative performance of the method.
KW - Deep neural network
KW - Explainable artificial intelligence
KW - Fault detection
KW - Infidelity
KW - Qualitative and quantitative evaluation
KW - Sensitivity
U2 - 10.1109/ICCAD57653.2023.10152435
DO - 10.1109/ICCAD57653.2023.10152435
M3 - Article in proceedings
T3 - Proceedings, International Conference on Control, Automation and Diagnosis
BT - 2023 International Conference on Control, Automation and Diagnosis (ICCAD 2023)
PB - IEEE
ER -