Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input

Baosheng James Hou, Joshua Newn, Ludwig Sidenmark, Anam Ahmad Khan, Per Bækgaard, Hans Gellersen

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

Abstract

Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost as a novel classifier, achieving high accuracy in classifying gaze-driven versus gestural head movement (F1-Score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gesture for refinement. The classification of Head-Gaze and Head Gesture allows for seamless head-based interaction while avoiding false activation.
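The paper's keywords name XGBoost as the underlying learner. As an illustration only, the following minimal sketch shows how a binary Head-Gaze vs. Head Gesture classifier of this kind could be trained and scored with an F1 metric; the feature set, window statistics, and synthetic data below are placeholder assumptions, not the authors' published HeadBoost pipeline.

```python
# Illustrative sketch (not the authors' code): train a gradient-boosted
# binary classifier to separate gaze-driven from gestural head movement.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Placeholder feature windows: e.g. head angular velocity, head-gaze angular
# offset, and their statistics over a short time window (assumed features).
n_samples, n_features = 2000, 12
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)  # 0 = Head-Gaze, 1 = Head Gesture

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = XGBClassifier(
    n_estimators=200,      # hyperparameters here are illustrative defaults
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
)
clf.fit(X_train, y_train)

# The paper reports F1 = 0.89 on real eye/head-tracking data; random
# placeholder data like this will of course score near chance.
print("F1:", f1_score(y_test, clf.predict(X_test)))
```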

Original language: English
Title of host publication: CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Article number: 253
ISBN (Electronic): 9781450394215
Publication status: Accepted/In press - 1 Mar 2023

Keywords

  • Computational Interaction
  • Eye Tracking
  • Eye-head Coordination
  • Head Gestures
  • Machine Learning
  • Virtual Reality
  • XGBoost
