Authors: Schmid, Ute; Wrede, Britta
Date available: 2023-01-18
Date issued: 2022
DOI: http://dx.doi.org/10.1007/s13218-022-00786-2
URI: https://dl.gi.de/handle/20.500.12116/40056

Abstract: With the perspective on applications of AI technology, especially data-intensive deep learning approaches, the need for methods to control and understand such models has been recognized and has given rise to a new research domain labeled explainable artificial intelligence (XAI). In this overview paper we give an interim appraisal of what has been achieved so far and where gaps in the research remain. We take an interdisciplinary perspective to identify challenges for XAI research. On the one hand, we point to open questions concerning the quality of explanations, in particular their faithfulness and consistency. On the other hand, we see a need regarding the interaction between XAI systems and users: explanations should be adaptable to specific information needs, support explanatory dialogue for informed decision making, and allow models and explanations to be corrected through interaction. This endeavor requires an integrated interdisciplinary perspective and rigorous approaches to empirical evaluation based on psychological, linguistic and even sociological theories.

MSC: 68T01 (general); 68T05 (learning and adaptive systems)
Keywords: Explainable AI; Human-centred AI; Hybrid AI
Title: What is Missing in XAI So Far?
Type: Text/Journal Article
DOI: 10.1007/s13218-022-00786-2
ISSN: 1610-1987