Conference Paper

Multimodal Detection of External and Internal Attention in Virtual Reality using EEG and Eye Tracking Features

Document Type

Text/Conference Paper

Additional Information

Date

2024

Publisher

Association for Computing Machinery

Abstract

Future VR environments will sense users’ context, enabling a wide range of intelligent interactions and diverse applications, and improving usability through attention-aware VR systems. However, attention-aware VR systems based on EEG data suffer from long training periods, which hinder generalizability and widespread adoption. At the same time, there remains a gap in research regarding which physiological features (EEG and eye tracking) are most effective for decoding attention direction in the VR paradigm. We addressed this issue by evaluating several classification models using EEG and eye tracking data. We recorded training data simultaneously during tasks that required internal attention allocation in an N-Back task or external attention allocation in a Visual Monitoring task. We used linear and deep learning models to compare classification performance under several uni- and multimodal feature sets and different window sizes. Our results indicate that multimodal features improve prediction for both classical and modern classification models. We discuss approaches to assess the importance of physiological features and to achieve automatic, robust, and individualized feature selection.
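
To make the uni- versus multimodal comparison concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation). It assumes windowed EEG and eye-tracking feature matrices have already been extracted, and compares each modality alone against simple feature-level fusion (concatenation) with a linear classifier; the feature dimensions, window count, and model choice are illustrative assumptions only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_windows = 400                                 # placeholder: number of analysis windows
X_eeg = rng.normal(size=(n_windows, 32))        # placeholder EEG features (e.g., band power per channel)
X_eye = rng.normal(size=(n_windows, 8))         # placeholder eye-tracking features (e.g., fixation/pupil statistics)
y = rng.integers(0, 2, size=n_windows)          # 0 = external attention, 1 = internal attention

feature_sets = {
    "EEG only": X_eeg,
    "Eye tracking only": X_eye,
    "Multimodal (concatenated)": np.hstack([X_eeg, X_eye]),  # simple feature-level fusion
}

for name, X in feature_sets.items():
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")

In this sketch, swapping the logistic regression for a deep model or varying the window size used to cut X_eeg and X_eye would reproduce the kind of comparison described in the abstract.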

Description

Long, Xingyu; Mayer, Sven; Chiossi, Francesco (2024): Multimodal Detection of External and Internal Attention in Virtual Reality using EEG and Eye Tracking Features. Proceedings of Mensch und Computer 2024. DOI: 10.1145/3670653.3670657. Association for Computing Machinery. pp. 29–43. Karlsruhe, Germany
