Authors: Long, Xingyu; Mayer, Sven; Chiossi, Francesco
Date available: 2024-10-08
Year: 2024
URI: https://dl.gi.de/handle/20.500.12116/44855
Abstract: Future VR environments will sense users' context, enabling a wide range of intelligent interactions and improving usability through attention-aware VR systems. However, attention-aware VR systems based on EEG data suffer from long training periods, which hinder generalizability and widespread adoption. At the same time, there remains a gap in research regarding which physiological features (EEG and eye tracking) are most effective for decoding attention direction in VR. We addressed this issue by evaluating several classification models using EEG and eye tracking data. We recorded training data while participants performed tasks requiring internal attention (an N-Back task) or external attention allocation (Visual Monitoring). We used linear and deep learning models to compare classification performance across several uni- and multimodal feature sets and different window sizes. Our results indicate that multimodal features improve prediction for both classical and modern classification models. We discuss approaches to assess the importance of physiological features and to achieve automatic, robust, and individualized feature selection.
Language: en
Keywords: Attention; EEG; Eye Tracking; Machine Learning; Physiological Computing; Virtual Reality
Title: Multimodal Detection of External and Internal Attention in Virtual Reality using EEG and Eye Tracking Features
Type: Text/Conference Paper
DOI: 10.1145/3670653.3670657
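
The following is a minimal sketch, not the paper's actual pipeline, of how uni- and multimodal feature sets could be compared with a linear classifier as the abstract describes. The feature dimensions, window counts, and labels are placeholders; real features would be windowed EEG measures (e.g., band power per channel) and eye-tracking statistics.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder feature matrices: one row per analysis window.
# (Purely synthetic stand-ins for illustration; no real recordings.)
n_windows = 400
X_eeg = rng.normal(size=(n_windows, 32))   # hypothetical EEG features
X_eye = rng.normal(size=(n_windows, 12))   # hypothetical eye-tracking features
y = rng.integers(0, 2, size=n_windows)     # 0 = external, 1 = internal attention

feature_sets = {
    "EEG only": X_eeg,
    "Eye tracking only": X_eye,
    "Multimodal (EEG + eye)": np.hstack([X_eeg, X_eye]),
}

# Compare feature sets with a simple linear model and cross-validation.
for name, X in feature_sets.items():
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")

The same comparison could be repeated per participant and per window size, or with a deep model in place of the logistic regression, to mirror the evaluation structure outlined in the abstract.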