Publication: Contribution to book/anthology/report/proceedings › Article in proceedings › Research › peer-reviewed
GazeConduits: Calibration-Free Cross-Device Collaboration through Gaze and Touch. / Voelker, Simon; Hueber, Sebastian; Holz, Christian; Remy, Christian; Marquardt, Nicolai.
CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2020. 3376578.
TY - GEN
T1 - GazeConduits
T2 - 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020
AU - Voelker, Simon
AU - Hueber, Sebastian
AU - Holz, Christian
AU - Remy, Christian
AU - Marquardt, Nicolai
PY - 2020
Y1 - 2020
N2 - We present GazeConduits, a calibration-free ad-hoc mobile interaction concept that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced smartphone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks users present as well as their gaze direction to determine the tablets they look at. We present a series of techniques using GazeConduits for collaborative interaction across mobile devices for content selection and manipulation. Our evaluation with 20 simultaneous tablets on a table shows that GazeConduits can reliably identify which tablet or collaborator a user is looking at.
KW - cross-device interaction
KW - gaze input
KW - touch input
UR - http://www.scopus.com/inward/record.url?scp=85091292112&partnerID=8YFLogxK
U2 - 10.1145/3313831.3376578
DO - 10.1145/3313831.3376578
M3 - Article in proceedings
AN - SCOPUS:85091292112
BT - CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
CY - New York
Y2 - 25 April 2020 through 30 April 2020
ER -