Pain detection using batch normalized discriminant restricted Boltzmann machine layers

Reza Kharghanian, Ali Peiravi*, Farshad Moradi, Alexandros Iosifidis

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


A system for automatic pain detection is proposed in this study, whereby pain-related features are extracted from facial images using a four-layer Convolutional Deep Belief Network (CDBN). The CDBN is trained by a greedy layer-wise procedure in which each added layer is trained as a Convolutional Restricted Boltzmann Machine (CRBM) using contrastive divergence. Since a conventional CRBM is trained in a purely unsupervised manner, there is no guarantee that the learned features are appropriate for the supervised task at hand. A discriminative objective based on between-class and within-class distances is proposed to adapt the CRBM to learn task-related features. When the discriminative and generative objectives are appropriately combined, competitive classification performance can be achieved. Moreover, we introduce batch normalization (BN) units into the structure of the CRBM model to smooth the optimization landscape and speed up the learning process; the BN units are placed immediately before the sigmoid units. The extracted features are then used to train a linear SVM to classify each frame into pain or no-pain classes. Extensive experiments on the UNBC-McMaster Shoulder Pain database demonstrate the effectiveness of the proposed method for automatic pain detection.
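The two ideas highlighted in the abstract — a BN step placed right before the sigmoid in the hidden-unit activation, and a between-class/within-class distance criterion on the learned features — can be sketched in a few lines. This is an illustrative sketch only: the function names, the simplified BN (no learned scale/shift), and the exact ratio form of the discriminative objective are assumptions, not the paper's precise formulation.

```python
import numpy as np

def batchnorm(z, eps=1e-5):
    # Normalize pre-activations over the batch dimension. Simplified BN:
    # learned scale/shift parameters are omitted for clarity.
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

def hidden_probs(v, W, b):
    # RBM hidden-unit activation with the BN step inserted immediately
    # before the sigmoid, as the abstract describes.
    z = v @ W + b
    return 1.0 / (1.0 + np.exp(-batchnorm(z)))

def discriminative_objective(h, labels):
    # Toy between-class vs. within-class distance criterion on hidden
    # features (a Fisher-style ratio; the paper's exact objective and
    # how it is combined with the generative term may differ).
    classes = np.unique(labels)
    mu = h.mean(axis=0)
    between = sum(((h[labels == c].mean(axis=0) - mu) ** 2).sum()
                  for c in classes)
    within = sum(((h[labels == c] - h[labels == c].mean(axis=0)) ** 2).sum()
                 for c in classes)
    return between / (within + 1e-8)

# Demo on random data standing in for flattened facial-image patches.
rng = np.random.default_rng(0)
v = rng.random((8, 5))                     # batch of 8 visible vectors
W = rng.standard_normal((5, 3))            # visible-to-hidden weights
b = np.zeros(3)                            # hidden biases
h = hidden_probs(v, W, b)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # pain / no-pain frames
obj = discriminative_objective(h, labels)
```

Maximizing such a ratio pushes class means apart while keeping each class compact, so the hidden features feed more separable inputs to the downstream linear SVM.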

Original language: English
Article number: 103062
Journal: Journal of Visual Communication and Image Representation
Number of pages: 8
Publication status: Published - Apr 2021


  • Batch normalization
  • Convolutional deep belief network
  • Discriminant feature learning
  • Pain detection
  • Representation learning


