Listing by keyword "BCI"
1 - 2 of 2
- Conference paper: Classification of Music Preferences Using EEG Data in Machine Learning Models (Mensch und Computer 2024 - Workshopband, 2024). Vedder, Helen; Stano, Fabio; Knierim, Michael.
  In this paper, we investigate how EEG data can be used to predict individual music preferences. Our study uses machine learning models designed specifically for EEG data, such as EEGNet, to analyze participants' brain activity while they listen to music. Participants listened to and rated music excerpts while their EEG was recorded. We extracted relevant features from the EEG data and used convolutional neural networks (CNNs) to classify music preferences. Our results show that our models predict music preferences with an accuracy of up to 69%. This confirms the potential of EEG for personalized music recommendation and demonstrates the feasibility of integrating EEG into wearable devices to improve the user experience.
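  The abstract describes an epoched-EEG classification pipeline built around CNNs such as EEGNet. Below is a minimal sketch of an EEGNet-style model in PyTorch; the channel count, window length, layer sizes, and the binary like/dislike labelling are illustrative assumptions, not the architecture or data reported in the paper.

  ```python
  import torch
  import torch.nn as nn

  class EEGNetSketch(nn.Module):
      """Compact EEGNet-style CNN for binary music-preference labels.

      Hypothetical shapes: n_channels EEG electrodes, n_samples time
      points per excerpt; all layer sizes are illustrative only.
      """
      def __init__(self, n_channels=14, n_samples=256, n_classes=2):
          super().__init__()
          self.features = nn.Sequential(
              # Temporal convolution: acts as a learned band-pass filter bank.
              nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
              nn.BatchNorm2d(8),
              # Depthwise spatial convolution: per-filter patterns across electrodes.
              nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
              nn.BatchNorm2d(16),
              nn.ELU(),
              nn.AvgPool2d((1, 4)),
              nn.Dropout(0.5),
          )
          # Infer the flattened feature size with a dummy forward pass.
          with torch.no_grad():
              flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
          self.classify = nn.Linear(flat, n_classes)

      def forward(self, x):  # x: (batch, 1, channels, samples)
          return self.classify(self.features(x).flatten(1))

  # Usage: one batch of 4 fake epochs, 14 channels x 256 samples.
  model = EEGNetSketch()
  logits = model(torch.randn(4, 1, 14, 256))
  print(logits.shape)  # torch.Size([4, 2])
  ```

  The temporal convolution followed by a depthwise spatial convolution is the core idea behind EEGNet-style architectures: frequency filtering is learned first, then electrode-space patterns per learned filter.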
- Conference paper: MediaBrain: Annotating Videos based on Brain-Computer Interaction (Mensch & Computer 2012: interaktiv informiert – allgegenwärtig und allumfassend!?, 2012). Sahami Shirazi, Alireza; Funk, Markus; Pfleiderer, Florian; Glück, Hendrik; Schmidt, Albrecht.
  Adding notes to time segments on a video timeline makes it easier to search, find, and play back important segments of the video. Various approaches have been explored to annotate videos (semi-)automatically and to summarize them. In this research we investigate the feasibility of implicitly annotating videos based on brain signals retrieved from a Brain-Computer Interface (BCI) headset. The signals provided by the BCI can reveal different kinds of information, such as brain activity, facial expressions, or the user's level of excitement. This information correlates with the scenes the user watches in a video and can therefore be used to annotate the video and automatically generate a summary. To this end, an annotation tool called MediaBrain was developed and a user study conducted. The results reveal that it is possible to annotate a video and select a set of highlights based on the excitement information.
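  The abstract does not specify how the excitement signal is mapped to timeline annotations, so the following is only a plausible sketch: threshold a one-dimensional excitement trace (the kind of score consumer BCI headsets expose) and keep sufficiently long above-threshold runs as highlight segments. The function name and all parameters are hypothetical.

  ```python
  import numpy as np

  def extract_highlights(excitement, rate_hz=1.0, threshold=0.7, min_len=3.0):
      """Mark video segments as highlights where a BCI excitement signal
      stays above a threshold.

      excitement : 1-D array, one sample per 1/rate_hz seconds, in [0, 1].
      Returns a list of (start_sec, end_sec) annotations.
      """
      above = excitement >= threshold
      # Padding with zeros makes rising/falling edges come in pairs.
      edges = np.flatnonzero(np.diff(above.astype(int), prepend=0, append=0))
      segments = []
      for start, end in zip(edges[::2], edges[1::2]):
          if (end - start) / rate_hz >= min_len:  # drop very short blips
              segments.append((start / rate_hz, end / rate_hz))
      return segments

  # Usage: a fake 60 s excitement trace sampled at 1 Hz.
  rng = np.random.default_rng(0)
  trace = np.clip(rng.normal(0.5, 0.2, 60), 0.0, 1.0)
  print(extract_highlights(trace))
  ```

  A minimum-duration filter of this kind is one simple way to avoid annotating momentary spikes in the signal as highlights; the paper's actual selection logic may differ.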