Authors: Schlicker, Nadine Frauke; Langer, Markus
Editors: Schneegass, Stefan; Pfleging, Bastian; Kern, Dagmar
Date available: 2021-09-03
Date issued: 2021
URI: https://dl.gi.de/handle/20.500.12116/37301

Abstract: The public discussion about trustworthy AI is fueling research on new methods to make AI explainable and fair. However, users may incorrectly assess system trustworthiness and could consequently overtrust untrustworthy systems or undertrust trustworthy systems. To understand what determines accurate assessments of system trustworthiness, we apply Brunswik's Lens Model and the Realistic Accuracy Model. The assumption is that the actual trustworthiness of a system cannot be accessed directly and is therefore inferred via cues, which form a user's perceived trustworthiness. The accuracy of the trustworthiness assessment then depends on cue relevance, availability, detection, and utilization. We describe how the model can be used to systematically investigate determinants that increase the match between a system's actual trustworthiness and a user's perceived trustworthiness, in order to achieve warranted trust.

Language: en
Keywords: Trustworthiness; human-centered AI
Title: Towards Warranted Trust: A Model on the Relation Between Actual and Perceived System Trustworthiness
Type: Text/Conference Paper
DOI: 10.1145/3473856.3474018
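
Note: The record itself gives no formalization, but the cue-mediated accuracy that the abstract describes via Brunswik's Lens Model is classically quantified by the lens model equation (Tucker, 1964). The following sketch uses the conventional notation, not anything taken from the paper; the mapping of the four determinants onto the terms is an illustrative assumption.

% Classic lens model equation (Tucker, 1964); conventional notation,
% not taken from the paper itself.
%   r_a : achievement -- correlation between the criterion
%         (actual trustworthiness) and the judgment
%         (perceived trustworthiness)
%   R_e : environmental predictability -- how well the cues linearly
%         predict the criterion (roughly: cue relevance, availability)
%   R_s : response consistency -- how well the cues linearly predict
%         the judgment (roughly: cue detection, utilization)
%   G   : matching index -- correlation between the two linear cue models
%   C   : correlation between the unmodeled residual components
\[
  r_a = G \, R_e \, R_s + C \sqrt{1 - R_e^{2}} \, \sqrt{1 - R_s^{2}}
\]
Read this way, accurate trustworthiness assessment (high r_a) requires both an environment with relevant, available cues (high R_e) and a user who detects and utilizes them consistently (high R_s, high G), which mirrors the four determinants named in the abstract.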