
Multimodal Detection of External and Internal Attention in Virtual Reality using EEG and Eye Tracking Features

dc.contributor.author: Long, Xingyu
dc.contributor.author: Mayer, Sven
dc.contributor.author: Chiossi, Francesco
dc.date.accessioned: 2024-10-08T15:13:00Z
dc.date.available: 2024-10-08T15:13:00Z
dc.date.issued: 2024
dc.description.abstract: Future VR environments will sense users’ context, enabling a wide range of intelligent interactions and diverse applications while improving usability through attention-aware VR systems. However, attention-aware VR systems based on EEG data suffer from long training periods, hindering generalizability and widespread adoption. At the same time, there remains a gap in research regarding which physiological features (EEG and eye tracking) are most effective for decoding attention direction in the VR paradigm. We addressed this issue by evaluating several classification models using EEG and eye tracking data. We recorded training data while participants performed tasks requiring internal attention (an N-Back task) or external attention allocation (Visual Monitoring). We used linear and deep learning models to compare classification performance under several uni- and multimodal feature sets alongside different window sizes. Our results indicate that multimodal features improve prediction for classical and modern classification models. We discuss approaches to assess the importance of physiological features and achieve automatic, robust, and individualized feature selection.
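A minimal sketch of the comparison the abstract describes — classifying internal vs. external attention from unimodal and multimodal feature sets with a linear model. This is not the authors' implementation: the synthetic data, feature dimensionalities, window count, and the choice of logistic regression are placeholder assumptions for illustration only.

```python
# Hypothetical sketch: unimodal vs. multimodal attention classification.
# Real features would come from windowed EEG band power and eye-tracking
# measures (e.g., fixations, pupil diameter); here they are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_windows = 400                          # assumed number of feature windows

eeg = rng.normal(size=(n_windows, 32))   # assumed EEG feature dimensionality
eye = rng.normal(size=(n_windows, 8))    # assumed eye-tracking feature dimensionality
y = rng.integers(0, 2, size=n_windows)   # 0 = external (monitoring), 1 = internal (N-Back)

feature_sets = {
    "EEG only": eeg,
    "Eye tracking only": eye,
    "Multimodal (EEG + eye)": np.hstack([eeg, eye]),  # simple feature-level fusion
}

# Compare cross-validated accuracy across the three feature sets.
for name, X in feature_sets.items():
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```

On real data, the multimodal set would be expected to outperform either unimodal set, mirroring the paper's finding that multimodal features improve prediction for both classical and modern models.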
dc.identifier.doi: 10.1145/3670653.3670657
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/44855
dc.language.iso: en
dc.pubPlace: New York, NY, USA
dc.publisher: Association for Computing Machinery
dc.relation.ispartof: Proceedings of Mensch und Computer 2024
dc.subject: Attention
dc.subject: EEG
dc.subject: Eye Tracking
dc.subject: Machine Learning
dc.subject: Physiological Computing
dc.subject: Virtual Reality
dc.title: Multimodal Detection of External and Internal Attention in Virtual Reality using EEG and Eye Tracking Features
dc.type: Text/Conference Paper
gi.citation.startPage: 29–43
gi.conference.location: Karlsruhe, Germany
