TY - JOUR
T1 - Toward Interpretable Sleep Stage Classification Using Cross-Modal Transformers
AU - Pradeepkumar, Jathurshan
AU - Anandakumar, Mithunjha
AU - Kugathasan, Vinith
AU - Suntharalingham, Dhinesh
AU - Kappel, Simon L.
AU - De Silva, Anjula C.
AU - Edussooriya, Chamira U.S.
N1 - Publisher Copyright:
© 2001-2011 IEEE.
PY - 2024
Y1 - 2024
N2 - Accurate sleep stage classification is significant for sleep health assessment. In recent years, several machine-learning-based sleep staging algorithms have been developed, and in particular, deep-learning-based algorithms have achieved performance on par with human annotation. Despite improved performance, a limitation of most deep-learning-based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose a cross-modal transformer, which is a transformer-based method for sleep stage classification. The proposed cross-modal transformer consists of a cross-modal transformer encoder architecture along with a multi-scale one-dimensional convolutional neural network for automatic representation learning. The performance of our method is on par with state-of-the-art methods, and it eliminates the black-box behavior of deep-learning models by utilizing the interpretability of the attention modules. Furthermore, our method provides considerable reductions in the number of parameters and training time compared to state-of-the-art methods. Our code is available at https://github.com/Jathurshan0330/Cross-Modal-Transformer. A demo of our work can be found at https://bit.ly/Cross_modal_transformer_demo.
AB - Accurate sleep stage classification is significant for sleep health assessment. In recent years, several machine-learning-based sleep staging algorithms have been developed, and in particular, deep-learning-based algorithms have achieved performance on par with human annotation. Despite improved performance, a limitation of most deep-learning-based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose a cross-modal transformer, which is a transformer-based method for sleep stage classification. The proposed cross-modal transformer consists of a cross-modal transformer encoder architecture along with a multi-scale one-dimensional convolutional neural network for automatic representation learning. The performance of our method is on par with state-of-the-art methods, and it eliminates the black-box behavior of deep-learning models by utilizing the interpretability of the attention modules. Furthermore, our method provides considerable reductions in the number of parameters and training time compared to state-of-the-art methods. Our code is available at https://github.com/Jathurshan0330/Cross-Modal-Transformer. A demo of our work can be found at https://bit.ly/Cross_modal_transformer_demo.
KW - Automatic sleep stage classification
KW - deep neural networks
KW - interpretable deep learning
KW - transformers
UR - http://www.scopus.com/inward/record.url?scp=85200824552&partnerID=8YFLogxK
U2 - 10.1109/TNSRE.2024.3438610
DO - 10.1109/TNSRE.2024.3438610
M3 - Journal article
C2 - 39102323
AN - SCOPUS:85200824552
SN - 1534-4320
VL - 32
SP - 2893
EP - 2904
JO - IEEE Transactions on Neural Systems and Rehabilitation Engineering
JF - IEEE Transactions on Neural Systems and Rehabilitation Engineering
ER -