Listing by keyword "Empathy"
1 - 3 of 3
- Journal article: Empathy-Based Emotional Alignment for a Virtual Human: A Three-Step Approach (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011) Boukricha, Hana; Wachsmuth, Ipke. Allowing virtual humans to align with others' perceived emotions is believed to enhance their cooperative and communicative social skills. In our work, emotional alignment is realized by endowing a virtual human with the ability to empathize. Recent research shows that humans empathize with each other to different degrees depending on several factors including, among others, their mood, their personality, and their social relationships. Although virtual humans have been provided with features like affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such features as factors modulating their empathic behavior. Supported by psychological models of empathy, we propose an approach to model empathy for the virtual human EMMA (an Empathic MultiModal Agent) consisting of three processing steps: first, the Empathy Mechanism, by which an empathic emotion is produced; second, the Empathy Modulation, by which the empathic emotion is modulated; third, the Expression of Empathy, by which EMMA's multiple modalities are triggered through the modulated empathic emotion. The proposed model of empathy is illustrated in a conversational agent scenario involving the virtual humans MAX and EMMA. (An illustrative sketch of such a three-step pipeline follows the listing below.)
- Conference paper: Perceived Authenticity, Empathy, and Pro-social Intentions evoked through Avatar-mediated Self-disclosures (Mensch und Computer 2019 - Tagungsband, 2019) Roth, Daniel; Bloch, Carola; Schmitt, Josephine; Frischlich, Lena; Latoschik, Marc Erich; Bente, Gary. Avatars are our digital embodied alter egos. Virtual embodiment by avatars allows social interaction with others using the full spectrum of verbal and non-verbal behaviour. Still, one's avatar appearance is elective. Hence, avatars make it possible for users to discuss and exchange sensitive or even problematic personal topics while potentially hiding their real identity, thereby preserving anonymity and privacy. While previous work has identified similarities in how participants perceive avatars compared to human stimuli, it remains an open question whether avatar-mediated self-disclosure is perceived as authentic and results in similar social responses. In the present study, we created a comparable stimulus set to investigate this issue and conducted an online study (N=172) for comparison. Our results indicate that avatars can be perceived as authentic and that empathy is attributed to them at a level similar to a human stimulus. In an exploratory model of the overall results, we found that authenticity fostered emotional empathy, which in turn fostered pro-social intentions. We argue that avatars may serve as a valuable supporting medium for HCI applications related to mental well-being, self-disclosure, and support. (A sketch of how such a mediation path can be estimated follows the listing below.)
- Journal article: When Self-Humanization Leads to Algorithm Aversion (Business & Information Systems Engineering: Vol. 64, No. 3, 2022) Heßler, Pascal Oliver; Pfeiffer, Jella; Hafenbrädl, Sebastian. Decision support systems are increasingly being adopted by various digital platforms. However, prior research has shown that certain contexts can induce algorithm aversion, leading people to reject their decision support. This paper investigates how and why the context in which users are making decisions (for-profit versus prosocial microlending decisions) affects their degree of algorithm aversion and ultimately their preference for more human-like (versus computer-like) decision support systems. The study proposes that contexts vary in their affordances for self-humanization. Specifically, people perceive prosocial decision contexts as more relevant to self-humanization than for-profit contexts, and, in consequence, they ascribe more importance to empathy and autonomy while making decisions in prosocial contexts. This increased importance of empathy and autonomy leads to a higher degree of algorithm aversion. At the same time, it also leads to a stronger preference for human-like decision support, which could therefore serve as a remedy for an algorithm aversion induced by the need for self-humanization. The results from an online experiment support the theorizing. The paper discusses both theoretical and design implications, especially for the potential of anthropomorphized conversational agents on platforms for prosocial decision-making.
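
For the three-step empathy model described in the first entry above, the following is a minimal, illustrative Python sketch, not the authors' EMMA implementation: the class names, the scalar emotion representation, and the modulation weighting are assumptions chosen purely for illustration (EMMA itself builds on a richer emotion and personality model).

```python
from dataclasses import dataclass

# Hypothetical, simplified emotion representation (not EMMA's actual model).
@dataclass
class Emotion:
    valence: float    # -1 (negative) .. +1 (positive)
    intensity: float  # 0 .. 1

def empathy_mechanism(perceived: Emotion) -> Emotion:
    """Step 1: produce an empathic emotion from the other's perceived emotion."""
    return Emotion(perceived.valence, perceived.intensity)

def empathy_modulation(empathic: Emotion, mood: float, liking: float) -> Emotion:
    """Step 2: modulate the empathic emotion by factors such as the agent's own
    mood and the social relationship (reduced here to a single 'liking' weight)."""
    weight = max(0.0, min(1.0, 0.5 * (mood + 1.0) * liking))
    return Emotion(empathic.valence, empathic.intensity * weight)

def express_empathy(modulated: Emotion) -> dict:
    """Step 3: map the modulated empathic emotion onto multimodal expression channels."""
    return {
        "facial_expression": "smile" if modulated.valence > 0 else "concerned",
        "gesture_amplitude": modulated.intensity,
        "verbal_acknowledgement": modulated.intensity > 0.3,
    }

# Example: a strongly negative emotion perceived in a well-liked interlocutor.
perceived = Emotion(valence=-0.8, intensity=0.9)
expression = express_empathy(
    empathy_modulation(empathy_mechanism(perceived), mood=0.2, liking=0.9)
)
print(expression)
```

The point of the sketch is only the pipeline structure: a produced empathic emotion is weakened or strengthened by modulation factors before it ever reaches the expressive modalities.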
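The exploratory model in the second entry (authenticity fostering emotional empathy, which in turn fosters pro-social intentions) has the structure of a simple mediation model. The sketch below shows one common way to estimate such an indirect effect with two OLS regressions; the simulated data, variable names, and effect sizes are hypothetical and do not reproduce the study's results.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical, simulated data; not the study's dataset.
rng = np.random.default_rng(0)
n = 172
authenticity = rng.normal(size=n)
empathy = 0.5 * authenticity + rng.normal(scale=0.8, size=n)                     # a-path
prosocial = 0.6 * empathy + 0.1 * authenticity + rng.normal(scale=0.8, size=n)   # b- and c'-paths

# a-path: authenticity -> emotional empathy
a_model = sm.OLS(empathy, sm.add_constant(authenticity)).fit()

# b-path and direct effect: empathy + authenticity -> pro-social intentions
X = sm.add_constant(np.column_stack([empathy, authenticity]))
b_model = sm.OLS(prosocial, X).fit()

indirect = a_model.params[1] * b_model.params[1]  # a * b: effect mediated via empathy
direct = b_model.params[2]                        # c': remaining direct effect
print(f"indirect (via empathy): {indirect:.2f}, direct: {direct:.2f}")
```

In practice the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates, but the two-regression structure above is the core of the mediation logic.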