TY - JOUR
T1 - Gaze, Wall, and Racket
T2 - Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
AU - Wagner, Uta
AU - Albrecht, Matthias
AU - Jacobsen, Andreas Asferg
AU - Wang, Haopeng
AU - Gellersen, Hans
AU - Pfeuffer, Ken
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024/10/20
Y1 - 2024/10/20
N2 - Ray pointing, the status-quo pointing technique for virtual reality, becomes challenging when many objects are occluded or overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection of the user's gaze with a hand-controlled plane specifies a 3D position. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, which features a hand-held, rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best-performing techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research is relevant to spatial interaction, specifically advanced techniques for complex 3D tasks.
AB - Ray pointing, the status-quo pointing technique for virtual reality, becomes challenging when many objects are occluded or overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection of the user's gaze with a hand-controlled plane specifies a 3D position. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, which features a hand-held, rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compared the best-performing techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research is relevant to spatial interaction, specifically advanced techniques for complex 3D tasks.
KW - complex 3D tasks
KW - disambiguation
KW - eye-tracking
KW - gaze interaction
KW - object selection
KW - occlusion
UR - http://www.scopus.com/inward/record.url?scp=85207866159&partnerID=8YFLogxK
U2 - 10.1145/3698134
DO - 10.1145/3698134
M3 - Journal article
AN - SCOPUS:85207866159
SN - 2573-0142
VL - 8
SP - 189
EP - 213
JO - Proceedings of the ACM on Human-Computer Interaction
JF - Proceedings of the ACM on Human-Computer Interaction
IS - ISS
ER -