Conference paper
More human-likeness, more trust? The effect of anthropomorphism on self-reported and behavioral trust in continued and interdependent human-agent cooperation
Full text URI
Document type
Text/Conference Paper
Additional information
Date
2019
Authors
Journal title
Journal ISSN
Volume title
Publisher
ACM
Abstract
Computer agents are increasingly endowed with anthropomorphic characteristics and autonomous behavior to improve their problem-solving capabilities and to make interactions with humans more natural. This poses new challenges for human users, who need to make trust-based decisions in dynamic and complex environments. It remains unclear whether people trust agents as they trust other humans, and thus apply the same social rules to human-computer interaction (HCI), or whether interactions with computers are instead characterized by idiosyncratic attributions and responses. We contribute to this ongoing and crucial debate with an experiment on the impact of anthropomorphic cues on trust and trust-related attributions in a cooperative human-agent setting that permits the investigation of interdependent, continued, and coordinated decision-making toward a joint goal. Our results reveal an incongruence between self-reported and behavioral trust measures. First, the varying degree of agent anthropomorphism (computer vs. virtual vs. human agent) did not affect people's decision to behaviorally trust the agent by adopting task-specific advice; behavioral trust was affected by advice quality only. Second, subjective ratings indicate that anthropomorphism did increase self-reported trust.