Listing of Künstliche Intelligenz 36(3-4) - December 2022 by publication date
1 - 10 of 15
- Journal article: Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Finzel, Bettina; Saranti, Anna; Angerschmid, Alessa; Tafler, David; Pfeifer, Bastian; Holzinger, Andreas. Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain’s and user’s perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of symbolic representations of symmetric and non-symmetric figures taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.
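  As an illustration of the kind of pipeline this abstract describes (not the authors' implementation), the following Python sketch shows how edge relevance scores from a hypothetical GNN explainer could be reduced to a relevance-ranked sub-graph and serialized as ground facts for an ILP system. The explainer interface, predicate names, and figure data are all illustrative assumptions.

  ```python
  # Hypothetical sketch: turn edge relevance scores from a GNN explainer into
  # Prolog-style ground facts that an ILP system could consume. Predicate
  # names and the toy Kandinsky-style figure are assumptions for illustration.

  def top_k_subgraph(edge_relevance, k):
      """Keep the k most relevant edges (edge -> relevance score)."""
      ranked = sorted(edge_relevance.items(), key=lambda kv: kv[1], reverse=True)
      return [edge for edge, _ in ranked[:k]]

  def to_prolog_facts(figure_id, node_labels, relevant_edges):
      """Emit ground facts describing the relevance-ranked sub-graph."""
      facts = [f"figure({figure_id})."]
      nodes = {n for edge in relevant_edges for n in edge}
      for n in sorted(nodes):
          facts.append(f"contains({figure_id}, o{n}).")
          facts.append(f"shape(o{n}, {node_labels[n]}).")
      for u, v in relevant_edges:
          facts.append(f"related(o{u}, o{v}).")
      return facts

  # Toy figure: three objects with per-edge relevance from some explainer.
  node_labels = {0: "square", 1: "square", 2: "circle"}
  edge_relevance = {(0, 1): 0.92, (1, 2): 0.15, (0, 2): 0.40}

  for fact in to_prolog_facts("f1", node_labels, top_k_subgraph(edge_relevance, 2)):
      print(fact)
  ```

  Running the snippet prints facts for the two most relevant edges of the toy figure; a set of such fact files could then serve as input for inducing symbolic concepts with an ILP system.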
- Journal article: What’s on Your Mind, NICO? (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Kerzel, Matthias; Ambsdorf, Jakob; Becker, Dennis; Lu, Wenhao; Strahl, Erik; Spisak, Josua; Gäde, Connor; Weber, Tom; Wermter, Stefan. Explainable AI has become an important field of research on neural machine learning models. However, most existing methods are designed as tools that provide expert users with additional insights into their models. In contrast, in human-robot interaction scenarios, non-expert users are frequently confronted with complex, embodied AI systems whose inner workings are unknown. Therefore, eXplainable Human-Robot Interaction (XHRI) should leverage the user’s intuitive ability to collaborate and to use efficient communication. Using NICO, the Neuro-Inspired COmpanion, as a use-case study, we propose an XHRI framework and show how different types of explanations enhance the interaction experience. These explanations range from (a) non-verbal cues for simple and intuitive feedback of inner states via (b) comprehensive verbal explanations of the robot’s intentions, knowledge and reasoning to (c) multimodal explanations using visualizations, speech and text. We revisit past HRI-related studies conducted with NICO and analyze them with the proposed framework. Furthermore, we present two novel XHRI approaches to extract suitable verbal and multimodal explanations from neural network modules in an HRI scenario.
- Journal article: A Framework for Learning Event Sequences and Explaining Detected Anomalies in a Smart Home Environment (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Baudisch, Justin; Richter, Birte; Jungeblut, Thorsten. This paper presents a framework for learning event sequences for anomaly detection in a smart home environment. It addresses environment conditions, device grouping, system performance and explainability of anomalies. Our method models user behavior as sequences of events triggered by the interaction of home residents with Internet of Things (IoT) devices. Based on a given set of recorded event sequences, the system can learn the habitual behavior of the residents. An anomaly is described as a deviation from that normal behavior previously learned by the system. One key feature of our framework is the explainability of detected anomalies, which is implemented through a simple rule analysis.
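  For intuition only, here is a minimal Python sketch of the general idea described in this abstract, not the framework itself: habitual behaviour is learned as the set of event transitions observed in recorded sequences, an unseen transition is flagged as an anomaly, and a simple rule check produces the explanation. The event names and the min_support parameter are hypothetical.

  ```python
  # Illustrative sketch (assumptions, not the paper's implementation):
  # learn observed event transitions, then explain deviations from them.
  from collections import Counter

  def learn_transitions(sequences):
      """Count consecutive (event, next_event) pairs over all training sequences."""
      counts = Counter()
      for seq in sequences:
          counts.update(zip(seq, seq[1:]))
      return counts

  def detect_and_explain(sequence, transitions, min_support=1):
      """Yield a rule-style explanation for every transition below min_support."""
      for prev, curr in zip(sequence, sequence[1:]):
          if transitions[(prev, curr)] < min_support:
              yield (f"Anomaly: '{curr}' followed '{prev}', a transition observed "
                     f"{transitions[(prev, curr)]} times in the recorded behaviour.")

  # Toy recorded behaviour of the residents (hypothetical event names).
  normal = [
      ["door_open", "light_on", "coffee_machine_on"],
      ["door_open", "light_on", "radio_on"],
  ]
  model = learn_transitions(normal)
  for explanation in detect_and_explain(["door_open", "coffee_machine_on"], model):
      print(explanation)
  ```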
- Journal article: Constructing Explainability – Interdisciplinary Framework to Actively Shape Explanations in XAI (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute.
- Journal article: News (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022).
- Journal article: Habilitation Abstract: Towards Explainable Fact Checking (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Augenstein, Isabelle. With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact checking models.
- Journal article: Exploring Monological and Dialogical Phases in Naturally Occurring Explanations (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Fisher, Josephine B.; Lohmer, Vivien; Kern, Friederike; Barthlen, Winfried; Gaus, Sebastian; Rohlfing, Katharina J. Recent approaches to Explainable AI (XAI) promise to satisfy diverse user expectations by allowing them to steer the interaction in order to elicit content relevant to them. However, little is known about how and to what extent the explainee takes part actively in the process of explaining. To tackle this empirical gap, we exploratively examined naturally occurring everyday explanations in doctor–patient interactions (N = 11). Following the social design of XAI, we view explanations as emerging in interactions: first, we identified the verbal behavior of both the explainer and the explainee in the sequential context, which we could assign to phases that were either monological or dialogical; second, we investigated in particular who was responsible for the initiation of the different phases. Finally, we took a closer look at the global conversational structure of explanations by applying a context-sensitive model of organizational jobs, thus adding a third layer of analysis. Results show that in our small sample of conversational explanations, both monological and dialogical phases varied in their length, timing of occurrence (at the early or later stages of the interaction) and their initiation (by the explainer or the explainee). They alternated several times in the course of the interaction. However, we also found some patterns suggesting that all interactions started with a monological phase initiated by the explainer. Both conversational partners contributed to the core organizational job that constitutes an explanation. We interpret the results as an indication that naturally occurring everyday explanations in doctor–patient interactions are co-constructed on three levels of linguistic description: (1) by switching back and forth between monological and dialogical phases that (2) can be initiated by both partners and (3) by the mutual accomplishment of, and thus shared responsibility for, an explanation’s core job, which is crucial for the success of the explanation. Because of the explorative nature of our study, these results need to be investigated (a) with a larger sample and (b) in other contexts. However, our results suggest that future designs of artificial explainable systems should design the explanatory dialogue in such a way that it includes monological and dialogical phases that can be initiated not only by the explainer but also by the explainee, as both contribute to the core job of explicating procedural, clausal, or conceptual relations in explanations.
- Journal article: Agnostic Explanation of Model Change based on Feature Importance (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Muschalik, Maximilian; Fumagalli, Fabian; Hammer, Barbara; Hüllermeier, Eyke. Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e. explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control the sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method’s efficacy on illustrative synthetic data streams with three popular model classes.
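  As a rough illustration of the underlying idea (not the authors' method), the following Python sketch compares plain permutation feature importance before and after a model update and reports features whose importance shifted notably. The fixed change threshold only loosely stands in for the two hyperparameters mentioned in the abstract, and the model interface is an assumption: any classifier object with a predict method would do for old_model and new_model.

  ```python
  # Minimal sketch under stated assumptions: explain model change as the
  # difference in permutation feature importance (PFI) before vs. after update.
  import numpy as np

  def pfi(model, X, y, n_repeats=5, rng=None):
      """Mean drop in accuracy when a single feature column is permuted."""
      rng = rng or np.random.default_rng(0)
      base = np.mean(model.predict(X) == y)
      importances = np.zeros(X.shape[1])
      for j in range(X.shape[1]):
          drops = []
          for _ in range(n_repeats):
              Xp = X.copy()
              Xp[:, j] = X[rng.permutation(X.shape[0]), j]  # permute one feature
              drops.append(base - np.mean(model.predict(Xp) == y))
          importances[j] = np.mean(drops)
      return importances

  def explain_model_change(old_model, new_model, X, y, threshold=0.05):
      """Return (feature index, importance delta) for features whose PFI shifted."""
      delta = pfi(new_model, X, y) - pfi(old_model, X, y)
      return [(j, float(d)) for j, d in enumerate(delta) if abs(d) >= threshold]
  ```

  In a streaming setting the two importance profiles would be estimated incrementally on a sliding window rather than on a fixed reference set as in this sketch.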
- Journal article: What is Missing in XAI So Far? (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute; Wrede, Britta. With the prospect of applying AI technology, especially data-intensive deep learning approaches, the need for methods to control and understand such models has been recognized, giving rise to a new research domain labeled explainable artificial intelligence (XAI). In this overview paper we give an interim appraisal of what has been achieved so far and where gaps in the research remain. We take an interdisciplinary perspective to identify challenges in XAI research and point to open questions with respect to the quality of explanations, in particular their faithfulness and consistency. We also see a need regarding the interaction between XAI systems and users: explanations should be adaptable to specific information needs, support explanatory dialog for informed decision making, and allow models and explanations to be corrected through interaction. This endeavor requires an integrated interdisciplinary perspective and rigorous approaches to empirical evaluation based on psychological, linguistic and even sociological theories.
- Journal article: Explainable AI (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute; Wrede, Britta.