Interpretable Fault Detection Approach With Deep Neural Networks to Industrial Applications

Fatemeh Kakavandi, Peihua Han, Roger de Reus, Peter Gorm Larsen, Houxiang Zhang

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

2 Citations (Scopus)

Abstract

Different explainable techniques have been introduced to overcome the challenges of complex machine learning models, such as uncertainty and lack of interpretability in sensitive processes. This paper presents an interpretable deep-learning-based fault detection approach for two separate but relatively sensitive use cases. The first use case involves a vessel engine that aims to replicate a real-life ferry crossing. The second use case is an industrial, medical device assembly line that mounts and engages different product components. In this approach, we first investigate two deep-learning models that can classify samples as normal or abnormal. Then, different explainable algorithms are studied to explain the prediction outcomes of both models. Quantitative and qualitative evaluations of these methods are also carried out. Ultimately, the deep-learning model with the best-performing explainable algorithm is chosen as the final interpretable fault detector. However, depending on the use case, different classifiers and explainable techniques should be selected. For example, for fault detection on the medical device assembly line, the DeepLiftShap algorithm is most aligned with expert knowledge and therefore yields higher qualitative results. On the other hand, the Occlusion algorithm has lower sensitivity and therefore higher quantitative results. Consequently, choosing the final explainable algorithm involves a trade-off between the qualitative and quantitative performance of the method.
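The Occlusion algorithm mentioned in the abstract can be sketched in a few lines: occlude each input feature with a baseline value and record how much the model's score changes. The toy linear scorer and feature values below are illustrative placeholders, not the paper's actual models or data.

```python
import numpy as np

# Hypothetical stand-in classifier: the paper does not publish its models,
# so a simple linear scorer over a 4-feature sensor sample is assumed here.
def score(x):
    w = np.array([0.5, -0.2, 0.8, 0.0])
    return float(w @ x)

def occlusion_attribution(f, x, baseline=0.0):
    """Occlusion attribution: replace each feature with a baseline value
    and measure the drop in the model's score; a large drop marks an
    important feature for this prediction."""
    ref = f(x)
    attr = np.empty_like(x, dtype=float)
    for i in range(x.size):
        occluded = x.copy()
        occluded[i] = baseline  # mask one feature at a time
        attr[i] = ref - f(occluded)
    return attr

x = np.array([1.0, 1.0, 1.0, 1.0])
print(occlusion_attribution(score, x))  # recovers each weight for this linear model
```

For a linear model the attribution of each feature equals its weight times its value, which makes occlusion easy to sanity-check; deep networks additionally require choosing a sensible baseline and, for images or time series, a sliding occlusion window rather than single features.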
Original language: English
Title of host publication: 2023 International Conference on Control, Automation and Diagnosis (ICCAD 2023)
Publisher: IEEE
Publication date: Jun 2023
ISBN (Electronic): 979-8-3503-4707-4, 979-8-3503-4708-1
DOIs
Publication status: Published - Jun 2023
Series: Proceedings, International Conference on Control, Automation and Diagnosis
ISSN: 2767-9896

Keywords

  • Deep neural network
  • Explainable artificial intelligence
  • Fault detection
  • Infidelity
  • Qualitative and quantitative evaluation
  • Sensitivity
