Listing of Künstliche Intelligenz 36(3-4) - December 2022, by title
1 - 10 of 15
- Journal article: A Framework for Learning Event Sequences and Explaining Detected Anomalies in a Smart Home Environment (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Baudisch, Justin; Richter, Birte; Jungeblut, Thorsten. This paper presents a framework for learning event sequences for anomaly detection in a smart home environment. It addresses environment conditions, device grouping, system performance, and the explainability of anomalies. Our method models user behavior as sequences of events triggered by interactions of the home residents with Internet of Things (IoT) devices. Based on a given set of recorded event sequences, the system can learn the habitual behavior of the residents. An anomaly is described as a deviation from that normal behavior, previously learned by the system. One key feature of our framework is the explainability of detected anomalies, which is implemented through a simple rule analysis. (A minimal illustrative sketch of this idea appears after this list.)
- Journal article: A Neural-Symbolic Approach for Explanation Generation Based on Sub-concept Detection: An Application of Metric Learning for Low-Time-Budget Labeling (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Rabold, Johannes. Deep learning methods, although effective in their assigned tasks, are mostly black boxes with respect to their inner workings. For image classification with CNNs, there exists a variety of visual explanation methods that highlight the parts of input images that were relevant for the classification result. But in many domains, visual highlighting may not be expressive enough when the classification relies on complex relations within visual concepts. This paper presents an approach to enrich visual explanations with verbal local explanations that emphasize important relational information. The proposed SymMetric algorithm combines metric learning and inductive logic programming (ILP). Labels given by a human for a small subset of important image parts are first generalized to a neighborhood of similar images using a learned distance metric. The information about the labels and their spatial relations is then used to build background knowledge for ILP and ultimately to learn a first-order theory that locally explains the black box with respect to the given image. The approach is evaluated on the Dogs vs. Cats data set, demonstrating the generalization ability of metric learning, and on Picasso Faces, illustrating the recognition of spatially meaningful constellations of sub-concepts and the creation of expressive explanations. (A sketch of metric-based label propagation appears after this list.)
- Journal article: Agnostic Explanation of Model Change based on Feature Importance (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Muschalik, Maximilian; Fumagalli, Fabian; Hammer, Barbara; Hüllermeier, Eyke. Explainable Artificial Intelligence (XAI) has so far mainly focused on static learning tasks. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e., explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control the sensitivity and directly influence the explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method's efficacy on illustrative synthetic data streams with three popular model classes. (A minimal sketch of PFI-based change detection appears after this list.)
- Journal article: Constructing Explainability – Interdisciplinary Framework to Actively Shape Explanations in XAI (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute.
- Journal article: Explainable AI (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute; Wrede, Britta.
- Journal article: Exploring Monological and Dialogical Phases in Naturally Occurring Explanations (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Fisher, Josephine B.; Lohmer, Vivien; Kern, Friederike; Barthlen, Winfried; Gaus, Sebastian; Rohlfing, Katharina J. Recent approaches to Explainable AI (XAI) promise to satisfy diverse user expectations by allowing users to steer the interaction in order to elicit content relevant to them. However, little is known about how, and to what extent, the explainee takes an active part in the process of explaining. To address this empirical gap, we exploratively examined naturally occurring everyday explanations in doctor–patient interactions (N = 11). Following the social design of XAI, we view explanations as emerging in interaction: first, we identified the verbal behavior of both the explainer and the explainee in the sequential context, which we could assign to phases that were either monological or dialogical; second, we investigated in particular who initiated the different phases. Finally, we took a closer look at the global conversational structure of explanations by applying a context-sensitive model of organizational jobs, thus adding a third layer of analysis. Results show that in our small sample of conversational explanations, both monological and dialogical phases varied in their length, their timing (at the early or later stages of the interaction), and their initiation (by the explainer or the explainee). They alternated several times in the course of the interaction. However, we also found patterns suggesting that all interactions started with a monological phase initiated by the explainer. Both conversational partners contributed to the core organizational job that constitutes an explanation. We interpret the results as an indication that naturally occurring everyday explanations in doctor–patient interactions are co-constructed on three levels of linguistic description: (1) by switching back and forth between monological and dialogical phases that (2) can be initiated by both partners, and (3) by the mutual accomplishment of, and thus shared responsibility for, an explanation's core job, which is crucial for the success of the explanation. Because of the explorative nature of our study, these results need to be investigated (a) with a larger sample and (b) in other contexts. However, our results suggest that future designs of artificial explainable systems should structure the explanatory dialogue so that it includes monological and dialogical phases that can be initiated not only by the explainer but also by the explainee, as both contribute to the core job of explicating procedural, clausal, or conceptual relations in explanations.
- Journal article: Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Finzel, Bettina; Saranti, Anna; Angerschmid, Alessa; Tafler, David; Pfeifer, Bastian; Holzinger, Andreas. Graph Neural Networks (GNNs) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step towards powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of symbolic representations of symmetric and non-symmetric figures taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and the human-expected relevance for concepts learned by GNNs. Our experiments on this benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs. (A sketch of turning relevance-ranked edges into symbolic facts appears after this list.)
- Journal article: Habilitation Abstract: Towards Explainable Fact Checking (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Augenstein, Isabelle. With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article summarizes a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen and successfully defended in December 2021 (Augenstein, Towards Explainable Fact Checking, Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods that require only small amounts of manually labelled data; and (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact-checking models.
- Journal article: Human-Robot Body Experience: An Artificial Intelligence Perspective (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Beckerle, Philipp. Human body experience is remarkably flexible, which enables us to integrate passive tools as well as intelligent robotic devices into our body representation. Accordingly, it can serve as a role model for making (assistive) robots interact seamlessly with their users or for providing (humanoid) robots with human-like self-perception and behavior generation. This article discusses the potential of understanding human body experience and applying it to robotics. Particular focus is placed on how to use artificial intelligence techniques and create intelligent artificial agents from insights about human body experience. The discussion is based on a summary of the author's habilitation thesis and combines theoretical and experimental perspectives from psychology, cognitive science, and neuroscience as well as computer science, engineering, and artificial intelligence. From this, it derives directions for future developments towards artificial body intelligence with human-like capabilities.
- Journal article: Identification of Explainable Structures in Data with a Human-in-the-Loop (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Thrun, Michael C. Explainable AI (XAI) methods often do not provide relevant or understandable explanations for a domain-specific human-in-the-loop (HIL). In addition, the internally used metrics have biases that might not match existing structures in the data. The habilitation thesis presents an alternative solution approach that derives explanations from high-dimensional structures in the data rather than from predetermined classifications. Typically, the detection of such density- or distance-based structures in data has so far entailed the challenge of choosing appropriate algorithms and their parameters, which adds a considerable number of complex decision-making options for the HIL. Central steps of the solution approach are a parameter-free methodology for the estimation and visualization of probability density functions (PDFs), followed by a hypothesis for selecting an appropriate distance metric independent of the data context, in combination with projection-based clustering (PBC). PBC allows for the subsequent interactive identification of separable structures in the data. Hence, the HIL does not need deep knowledge of the underlying algorithms to identify structures in data. The complete data-driven XAI approach involving the HIL is based on a decision tree guided by distance-based structures in data (DSD). This data-driven XAI shows initial success in applications to multivariate time series and non-sequential high-dimensional data. It generates meaningful and relevant explanations that are evaluated using Grice's maxims. (A generic sketch of a cluster-then-explain pipeline appears after this list.)
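
For the first entry (smart-home anomaly detection), the following is a minimal sketch of the general idea only, not the authors' framework: habitual behavior is approximated by observed event-to-event transitions, and a transition with too little support is reported with a simple rule-style message. The event names, function names, and the `min_support` parameter are illustrative assumptions.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def learn_transitions(sequences):
    """Count event-to-event transitions observed in recorded sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(pairwise(seq))
    return counts

def detect_anomalies(sequence, counts, min_support=1):
    """Flag transitions in a new sequence whose support is below the learned minimum."""
    findings = []
    for prev, nxt in pairwise(sequence):
        support = counts.get((prev, nxt), 0)
        if support < min_support:
            # Rule-style explanation: this transition was never (or too rarely)
            # observed during the learning phase.
            findings.append(f"unexpected transition: '{prev}' -> '{nxt}' "
                            f"(support {support} < {min_support})")
    return findings

# Hypothetical smart-home event logs (device interactions).
history = [
    ["door_open", "light_on", "coffee_on", "light_off"],
    ["door_open", "light_on", "tv_on", "light_off"],
]
model = learn_transitions(history)
print(detect_anomalies(["door_open", "coffee_on", "window_open"], model))
```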
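
For the SymMetric entry, the sketch below illustrates only the label-propagation step, under the simplifying assumption that the learned metric can be replaced by Euclidean distance on precomputed embeddings; the ILP step and the actual SymMetric algorithm are not reproduced, and all names and values are hypothetical.

```python
import numpy as np

def propagate_labels(embeddings, labeled_idx, labels, radius=0.5):
    """Assign each unlabeled item the label of its nearest labeled neighbor,
    provided the distance under the (stand-in) metric stays within a radius."""
    propagated = {}
    for i, emb in enumerate(embeddings):
        if i in labeled_idx:
            continue
        dists = [np.linalg.norm(emb - embeddings[j]) for j in labeled_idx]
        nearest = labeled_idx[int(np.argmin(dists))]
        if min(dists) <= radius:
            propagated[i] = labels[nearest]
    return propagated

# Hypothetical embeddings of image parts; only two parts are labeled by a human.
parts = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
known_idx, known_labels = [0, 2], {0: "eye", 2: "nose"}
print(propagate_labels(parts, known_idx, known_labels))
```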
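
For the model-change entry, here is a minimal model-agnostic sketch of comparing Permutation Feature Importance (PFI) before and after adaptation. It assumes batch access to labeled data and any classifier exposing `predict`; the single `threshold` is a loose stand-in for the paper's two sensitivity hyperparameters and does not reproduce the authors' incremental procedure.

```python
import numpy as np

def permutation_feature_importance(model, X, y, rng):
    """Mean drop in accuracy when each feature column is permuted."""
    base = np.mean(model.predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        importances.append(base - np.mean(model.predict(Xp) == y))
    return np.array(importances)

def explain_model_change(model_before, model_after, X, y, threshold=0.05, seed=0):
    """Report features whose PFI changed by more than a sensitivity threshold."""
    rng = np.random.default_rng(seed)
    delta = (permutation_feature_importance(model_after, X, y, rng)
             - permutation_feature_importance(model_before, X, y, rng))
    return {j: float(d) for j, d in enumerate(delta) if abs(d) > threshold}
```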
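
For the GNN-validation entry, the sketch below only illustrates the generic step of turning relevance-ranked sub-graph edges into Prolog-style facts that could serve as ILP background knowledge; the predicates, relevance scores, and top-k selection are invented for illustration and do not reproduce the paper's framework.

```python
def relevant_subgraph_to_facts(edges, relevance, top_k=3):
    """Turn the top-k relevance-ranked edges of a (hypothetical) GNN explanation
    into Prolog-style facts usable as ILP background knowledge."""
    ranked = sorted(zip(edges, relevance), key=lambda er: er[1], reverse=True)
    facts = []
    for (src, rel, dst), score in ranked[:top_k]:
        facts.append(f"{rel}({src}, {dst}).  % relevance {score:.2f}")
    return facts

# Hypothetical explainer output for a Kandinsky-style figure:
# (object, relation, object) triples with relevance scores.
edges = [("circle1", "left_of", "square1"),
         ("circle1", "same_color", "circle2"),
         ("square1", "above", "triangle1")]
relevance = [0.91, 0.85, 0.12]
print("\n".join(relevant_subgraph_to_facts(edges, relevance, top_k=2)))
```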
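
For the human-in-the-loop entry, the pipeline below is only a generic cluster-then-explain stand-in: PCA and DBSCAN replace the thesis' parameter-free PDF estimation and projection-based clustering, and a shallow scikit-learn decision tree replaces the DSD-guided tree. It conveys the overall shape of the approach, not the thesis' method.

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy high-dimensional data with a hidden cluster structure.
X, _ = make_blobs(n_samples=300, n_features=8, centers=3, random_state=0)

# Project to 2D and find density-based clusters in the projection
# (generic stand-ins for projection-based clustering).
proj = PCA(n_components=2).fit_transform(X)
clusters = DBSCAN(eps=1.5).fit_predict(proj)

# A shallow decision tree on the original features approximates a
# rule-based explanation of the structures found in the data.
tree = DecisionTreeClassifier(max_depth=3).fit(X, clusters)
print(export_text(tree))
```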