Conference Paper
Understanding Visual-Haptic Integration of Avatar Hands using a Fitts' Law Task in Virtual Reality
Document Type
Text/Conference Paper
Additional Information
Date
2019
Publisher
ACM
Abstract
Virtual reality (VR) is becoming increasingly ubiquitous for interacting with digital content and often requires rendering avatars, as they enable improved spatial localization and high levels of presence. Previous work shows that visual-haptic integration of virtual avatars depends on body ownership and spatial localization in VR. However, there are differing conclusions about how and which stimuli of one's own appearance are integrated into the body scheme. In this work, we investigate whether systematic changes to the model and texture of a user's avatar affect input performance, measured in a two-dimensional Fitts' law target selection task. Interestingly, we found that throughput remained constant across our conditions and that neither the model nor the texture of the avatar significantly affected the average task completion time, even when participants reported different levels of presence and body ownership. In line with previous work, we found that the illusion of virtual limb ownership does not necessarily correlate with the degree to which vision and haptics are integrated into the body scheme. Our work supports findings indicating that body ownership and spatial localization are potentially independent mechanisms in visual-haptic integration.
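The throughput measure mentioned in the abstract is conventionally derived from the Shannon formulation of Fitts' law. The following sketch illustrates that standard computation; it is a generic illustration under common assumptions (index of difficulty ID = log2(D/W + 1), throughput = ID / movement time), not the authors' actual analysis code, and the function names are hypothetical.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits.

    distance: amplitude of the movement to the target
    width: target width along the movement axis (same unit as distance)
    """
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty divided by movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a target 7 units away with width 1 gives ID = log2(8) = 3 bits;
# selecting it in 1.5 s yields a throughput of 2 bits/s.
print(throughput(7, 1, 1.5))
```

In practice, studies like the one summarized above average throughput over many distance/width combinations per condition, which is why a constant throughput across avatar conditions is a meaningful null result.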