
An explainable and interpretable model for attention deficit hyperactivity disorder in children using EEG signals

Research output: Contribution to journal › Journal article › Research › peer-review

  • Smith K. Khare
  • U. Rajendra Acharya, University of Southern Queensland, Singapore University of Social Sciences, Asia University Taiwan, Kumamoto University, University of Malaya

Background: Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that affects a person's sleep, mood, anxiety, and learning. Early diagnosis and timely medication can help individuals with ADHD perform daily tasks without difficulty. Electroencephalogram (EEG) signals can help neurologists detect ADHD by examining the changes occurring in them. However, EEG signals are complex, non-linear, and non-stationary, and the subtle differences between ADHD and healthy control EEG signals are difficult to identify visually. Moreover, decisions made by existing machine learning (ML) models do not guarantee consistent performance, making them unreliable.

Method: The paper explores a combination of variational mode decomposition (VMD) and the Hilbert transform (HT), called VMD-HT, to extract hidden information from EEG signals. Forty-one statistical parameters extracted from the absolute value of the analytical mode functions (AMF) have been classified using the explainable boosting machine (EBM) model. The interpretability of the model is tested using statistical analysis and performance measurement. The importance of the features, channels, and brain regions has been identified using glass-box and black-box approaches. The model's local and global explainability has been visualized using Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Partial Dependence Plots (PDP), and Morris sensitivity analysis. To the best of our knowledge, this is the first work that explores the explainability of model predictions in ADHD detection, particularly for children.
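The HT stage of the VMD-HT pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function name `amf_features` and the toy signal are invented for this example, only seven of the paper's forty-one statistical parameters are shown, and the preceding VMD step (available in third-party packages such as `vmdpy`) is omitted, so the raw signal stands in for a single decomposed mode.

```python
import numpy as np
from scipy.signal import hilbert

def amf_features(mode):
    """Statistical parameters from the absolute value of the
    analytic mode function (Hilbert envelope) of one mode.
    Illustrative subset: the paper uses 41 such parameters."""
    amf = np.abs(hilbert(mode))          # |analytic signal| = envelope
    mu, sigma = amf.mean(), amf.std()
    return np.array([
        mu, sigma, np.median(amf),
        amf.max(), amf.min(),
        ((amf - mu) ** 3).mean() / sigma ** 3,   # skewness
        ((amf - mu) ** 4).mean() / sigma ** 4,   # kurtosis
    ])

# Toy EEG-like signal: two oscillations plus noise. In the paper,
# the signal would first be split into modes by VMD; here the raw
# signal is used directly, just to show the HT + feature step.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 25 * t)
     + 0.1 * rng.standard_normal(t.size))

feats = amf_features(x)
print(feats.shape)   # one feature vector per mode per channel
```

In the full pipeline, such feature vectors (one per mode per channel) are concatenated and fed to the EBM classifier, whose additive shape functions make per-feature contributions directly inspectable.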
Results: Our results show that the explainable model has provided an accuracy of 99.81%, a sensitivity of 99.78%, a specificity of 99.84%, an F-1 measure of 99.83%, a precision of 99.87%, a false detection rate of 0.13%, and a Matthews correlation coefficient, negative predictive value, and critical success index of 99.61%, 99.73%, and 99.66%, respectively, in detecting ADHD automatically with ten-fold cross-validation. The model has provided an area under the curve of 100%, while detection rates of 99.87% and 99.73% have been obtained for ADHD and HC, respectively.

Conclusions: The model shows that the interpretability and explainability of the frontal region are highest compared to the pre-frontal, central, parietal, occipital, and temporal regions. Our findings have provided important insights into the developed model, which is highly reliable, robust, interpretable, and explainable for clinicians detecting ADHD in children. Early and rapid ADHD diagnosis using robust explainable technologies may reduce the cost of treatment and lessen the number of patients undergoing lengthy diagnosis procedures.

Original language: English
Article number: 106676
Journal: Computers in Biology and Medicine
Publication status: Published - Mar 2023

Bibliographical note

Publisher Copyright:
© 2023 The Authors

Research areas

  • Attention deficit hyperactivity disorder, Electroencephalography, Explainable machine learning, Interpretable machine learning, Variational mode decomposition


ID: 312003341