
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review



  • Riccardo Bovo, Imperial College
  • Daniele Giunchi, University College London
  • Ludwig Sidenmark, Lancaster University
  • Joshua Newn, Lancaster University
  • Hans Gellersen
  • Enrico Costanza, Imperial College
  • Thomas Heinis, Imperial College

Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
Original language: English
Title of host publication: CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Number of pages: 18
Article number: 162
ISBN (Electronic): 9781450394215
Publication status: Accepted/In press - 1 Mar 2023

    Research areas

  • Field of View, VR collaborative analytics, eye-tracking, multi-modal visual attention cues

