Authors: Schwind, Valentin; Leusmann, Jan; Henze, Niels; Alt, Florian; Bulling, Andreas; Döring, Tanja
Date: 2019-08-22 (record); 2019 (issued)
URI: https://dl.gi.de/handle/20.500.12116/24629

Abstract: Virtual reality (VR) is becoming increasingly ubiquitous for interacting with digital content and often requires renderings of avatars, as they enable improved spatial localization and high levels of presence. Previous work shows that visual-haptic integration of virtual avatars depends on body ownership and spatial localization in VR. However, there are different conclusions about how and which stimuli of one's own appearance are integrated into the own body scheme. In this work, we investigate whether systematic changes to the model and texture of a user's avatar affect input performance, measured in a two-dimensional Fitts' law target selection task. Interestingly, we found that throughput remained constant across our conditions and that neither the model nor the texture of the avatar significantly affected the average task completion time, even when participants felt different levels of presence and body ownership. In line with previous work, we found that the illusion of virtual limb ownership does not necessarily correlate with the degree to which vision and haptics are integrated into the own body scheme. Our work supports findings indicating that body ownership and spatial localization are potentially independent mechanisms in visual-haptic integration.

Language: en
Keywords: Virtual reality; Fitts' law; avatars; visual-haptic integration; depth cues
Title: Understanding Visual-Haptic Integration of Avatar Hands using a Fitts' Law Task in Virtual Reality
Type: Text/Conference Paper
DOI: 10.1145/3340764.3340769
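
The abstract reports throughput as the performance measure of the Fitts' law task. As a minimal sketch of how such a measure is commonly computed — using the standard Shannon formulation of the index of difficulty, which is an assumption here; the paper's exact procedure (e.g. effective-width correction) may differ:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D/W + 1), for target distance D and target width W."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits/s: index of difficulty divided by mean
    movement (selection) time. Illustrative only; the paper's exact
    computation is not given in this record."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical example: target 300 px away, 20 px wide, selected in 0.8 s
id_bits = index_of_difficulty(300, 20)  # log2(16) = 4.0 bits
tp = throughput(300, 20, 0.8)           # 4.0 / 0.8 = 5.0 bits/s
```

A constant throughput across conditions, as the abstract reports, would mean that changes in movement time track changes in task difficulty regardless of the avatar's model or texture.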