Authors: Graichen, Lisa; Graichen, Matthias
Editors: Stolze, Markus; Loch, Frieder; Baldauf, Matthias; Alt, Florian; Schneegass, Christina; Kosch, Thomas; Hirzle, Teresa; Sadeghian, Shadan; Draxler, Fiona; Bektas, Kenan; Lohan, Katrin; Knierim, Pascal
Date accessioned: 2023-08-24
Date available: 2023-08-24
Date issued: 2023
URI: https://dl.gi.de/handle/20.500.12116/42010
Abstract: Building an appropriate mental model of the functional principles and limitations of technical systems or AI-based applications is crucial, particularly when these systems are applied in domains involving high risk to user safety, such as driving. This paper describes an upcoming study that applies methods from Explainable AI to facilitate the building of mental models and to investigate their effects on user trust. For the interaction with an AI-based system, we use an algorithm designed to support drivers at intersections by predicting turning maneuvers, enabling it to warn the driver of cyclists when turning right. Participants will experience the system in a simulated driving environment. We will investigate the effect of comprehensive training about the system's functionality and limitations on mental models, trust, and acceptance.
Language: en
Keywords: HCXAI; XAI; Trust; Acceptance
Title: Knowing the Limits – Human-Centered Explanations of Functionality and Limits of AI
Type: Text/Conference Paper
DOI: 10.1145/3603555.3608576