Authors: Faulhaber, Anja K.; Ni, Ina; Schmidt, Ludger
Editors: Schneegass, Stefan; Pfleging, Bastian; Kern, Dagmar
Date available: 2021-09-03
Date issued: 2021
URI: https://dl.gi.de/handle/20.500.12116/37262
Title: The Effect of Explanations on Trust in an Assistance System for Public Transport Users and the Role of the Propensity to Trust
Type: Text/Conference Paper
DOI: 10.1145/3473856.3473886
Language: en
Keywords: explainability; trust in automation; human-computer interaction; virtual reality

Abstract: The present study aimed to investigate whether explanations increase trust in an assistance system. Moreover, we wanted to take into account the role of the individual propensity to trust in technology. We conducted an empirical study in a virtual reality environment in which 40 participants interacted with a specific assistance system for public transport users. The study used a 2×2 mixed design with the within-subject factor assistance system feature (trip planner vs. connection request) and the between-subject factor explanation (with vs. without). We measured explicit trust via a questionnaire and implicit trust via an operationalization of the participants’ behavior. The results showed that trust propensity predicted explicit trust and that explanations significantly increased explicit trust. This was not the case for implicit trust, however, suggesting that explicit and implicit trust do not necessarily coincide. In conclusion, our results complement the literature on explainable artificial intelligence and trust in automation and suggest topics for future research on the effect of explanations on trust in assistance systems and other technologies.