A Generalizable Speech Emotion Recognition Model Reveals Depression and Remission

Research output: Contribution to journal › Journal article › Research › peer-review


  • Lasse Hansen
  • Yan-Ping Zhang, Roche Pharmaceutical Research & Early Development Informatics, Roche Pharma Research and Early Development (pRED), Roche Innovation Center Basel, Grenzacherstrasse 124, 4070 Basel, Switzerland; F. Hoffmann-La Roche Ltd, Roche Holding
  • Detlef Wolf, Roche Pharma Research and Early Development (pRED), Roche Innovation Center Basel, Grenzacherstrasse 124, 4070 Basel, Switzerland
  • Konstantinos Sechidis, Advanced Methodology and Data Science
  • Nicolai Ladegaard
  • Riccardo Fusaroli

OBJECTIVE: Affective disorders are associated with atypical voice patterns; however, automated voice analyses suffer from small sample sizes and untested generalizability on external data. We investigated a generalizable approach to aid clinical evaluation of depression and remission from voice using transfer learning: we trained machine learning models on easily accessible non-clinical datasets and tested them on novel clinical data in a different language.

METHODS: A Mixture-of-Experts machine learning model was trained to infer happy/sad emotional state using three publicly available emotional speech corpora in German and US English. We examined the model's ability to classify the presence of depression in Danish-speaking healthy controls (N = 42), patients with first-episode major depressive disorder (MDD) (N = 40), and the subset of those patients who entered remission (N = 25), based on recorded clinical interviews. The model was evaluated on raw, de-noised, and speaker-diarized data.
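The abstract does not give implementation details; as a rough illustration only, a minimal mixture-of-experts binary classifier over pre-computed utterance-level acoustic features might look like the sketch below. The feature dimension (88, as in eGeMAPS-style functionals), expert count, and hidden size are assumptions for the example, not the authors' configuration.

```python
# Minimal mixture-of-experts sketch for binary (happy/sad) speech emotion
# classification over pre-computed acoustic feature vectors.
# Feature dimension, expert count, and hidden size are illustrative
# assumptions; the paper's abstract does not specify its architecture.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, n_features: int = 88, n_experts: int = 4, hidden: int = 32):
        super().__init__()
        # Each expert is a small feed-forward net emitting one logit.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_experts)
        )
        # The gate produces a softmax weighting over experts for each input.
        self.gate = nn.Linear(n_features, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)             # (batch, n_experts)
        logits = torch.cat([e(x) for e in self.experts], dim=-1)  # (batch, n_experts)
        return (weights * logits).sum(dim=-1)                     # one weighted logit per utterance

model = MixtureOfExperts()
features = torch.randn(8, 88)              # e.g., eGeMAPS-style functionals per utterance
print(torch.sigmoid(model(features)))      # P(happy) per utterance (untrained weights)
```

Because the gate weights each expert's logit per utterance, individual experts can specialize in different regions of the acoustic feature space, which is the usual motivation for this family of models.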

RESULTS: The model showed separation between healthy controls and depressed patients at the first visit, obtaining an AUC of 0.71. Further, speech from patients in remission was indistinguishable from that of the control group. Model predictions were stable throughout the interview, suggesting that 20-30 seconds of speech might be enough to accurately screen a patient. Background noise (but not speaker diarization) heavily impacted predictions.
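The reported control-versus-patient separation (AUC of 0.71) is an area under the ROC curve over per-speaker scores. For illustration only, the snippet below shows how such an AUC is computed; the scores and labels are synthetic stand-ins, not the study's data.

```python
# Illustration only: computing an AUC of the kind reported for
# controls vs. first-visit patients. Scores and labels are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([0] * 42 + [1] * 40)              # 42 controls, 40 MDD patients
scores = np.concatenate([rng.normal(0.4, 0.2, 42),  # mean "sad" probability per speaker
                         rng.normal(0.6, 0.2, 40)]) # (synthetic values)
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```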

CONCLUSION: A generalizable speech emotion recognition model can effectively reveal changes in speakers' depressive states before and after remission in patients with MDD. Data collection settings and data cleaning are crucial when considering automated voice analysis for clinical purposes.

Original language: English
Journal: Acta Psychiatrica Scandinavica
Volume: 145
Issue: 2
Pages (from-to): 186-199
Number of pages: 14
ISSN: 0001-690X
Publication status: Published - Feb 2022

Research areas

  • depression, machine learning, speech acoustics, speech emotion recognition, transfer learning

