
What is Missing in XAI So Far?

dc.contributor.author: Schmid, Ute
dc.contributor.author: Wrede, Britta
dc.date.accessioned: 2023-01-18T13:08:25Z
dc.date.available: 2023-01-18T13:08:25Z
dc.date.issued: 2022
dc.description.abstract: With the prospect of applying AI technology, especially data-intensive deep learning approaches, the need for methods to control and understand such models has been recognized and has given rise to a new research domain labeled explainable artificial intelligence (XAI). In this overview paper we give an interim appraisal of what has been achieved so far and where gaps in the research remain. We take an interdisciplinary perspective to identify challenges in XAI research and point to open questions concerning the quality of explanations, in particular their faithfulness and consistency. On the other hand, we see a need regarding the interaction between XAI systems and users: explanations should be adaptable to specific information needs, support explanatory dialog for informed decision making, and allow models and explanations to be corrected through interaction. This endeavor requires an integrated interdisciplinary perspective and rigorous approaches to empirical evaluation based on psychological, linguistic and even sociological theories.
dc.identifier.doi: 10.1007/s13218-022-00786-2
dc.identifier.pissn: 1610-1987
dc.identifier.uri: http://dx.doi.org/10.1007/s13218-022-00786-2
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/40056
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 36, No. 0
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: 68T01: general
dc.subject: 68T05: learning and adaptive systems
dc.subject: Explainable AI
dc.subject: Human-centred AI
dc.subject: Hybrid AI
dc.title: What is Missing in XAI So Far?
dc.type: Text/Journal Article
gi.citation.endPage: 315
gi.citation.startPage: 303