Title: More human-likeness, more trust? The effect of anthropomorphism on self-reported and behavioral trust in continued and interdependent human-agent cooperation

Authors: Kulms, Philipp; Kopp, Stefan

Editors: Alt, Florian; Bulling, Andreas; Döring, Tanja

Date: 2019-08-22

Year: 2019

URI: https://dl.gi.de/handle/20.500.12116/24604

DOI: 10.1145/3340764.3340793

Type: Text/Conference Paper

Language: en

Keywords: Trust; anthropomorphism; human-agent cooperation; virtual agents

Abstract: Computer agents are increasingly endowed with anthropomorphic characteristics and autonomous behavior to improve their capabilities for problem-solving and to make interactions with humans more natural. This poses new challenges for human users, who need to make trust-based decisions in dynamic and complex environments. It remains unclear whether people trust agents as they trust other humans and thus apply the same social rules to human-computer interaction (HCI), or whether interactions with computers are instead characterized by idiosyncratic attributions and responses. To this ongoing and crucial debate we contribute an experiment on the impact of anthropomorphic cues on trust and trust-related attributions in a cooperative human-agent setting, permitting the investigation of interdependent, continued, and coordinated decision-making toward a joint goal. Our results reveal an incongruence between self-reported and behavioral trust measures. First, the varying degree of agent anthropomorphism (computer vs. virtual vs. human agent) did not affect people's decision to behaviorally trust the agent by adopting task-specific advice; behavioral trust was affected by advice quality only. Second, subjective ratings indicate that anthropomorphism did increase self-reported trust.