Author: Weitz, Katharina
Date: 2022-11-22
Year: 2022
URI: https://dl.gi.de/handle/20.500.12116/39746
Abstract: Human-Centered AI is a widely requested goal for AI applications. To reach this goal, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.
Language: en
Keywords: human-centered AI; explainable AI; deep neural networks
Title: Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research
Type: Text/Journal Article
DOI: 10.1515/itit-2021-0047
ISSN: 2196-7032