Listing of Künstliche Intelligenz 36(3-4) - December 2022, sorted by publication date
- Journal article: Explainable AI (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute; Wrede, Britta
- Journal article: Agnostic Explanation of Model Change based on Feature Importance (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Muschalik, Maximilian; Fumagalli, Fabian; Hammer, Barbara; Hüllermeier, Eyke. Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e., explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control the sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method’s efficacy on illustrative synthetic data streams with three popular model classes. (A minimal, illustrative code sketch of this idea appears after this listing.)
- Journal article: What is Missing in XAI So Far? (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Schmid, Ute; Wrede, Britta. With the perspective on applications of AI technology, especially data-intensive deep learning approaches, the need for methods to control and understand such models has been recognized and has given rise to a new research domain labeled explainable artificial intelligence (XAI). In this overview paper we give an interim appraisal of what has been achieved so far and where there are still gaps in the research. We take an interdisciplinary perspective to identify challenges in XAI research and point to open questions with respect to the quality of explanations, in particular their faithfulness and consistency. On the other hand, we see a need regarding the interaction between the XAI method and the user, to allow for adaptability to specific information needs, for explanatory dialog in support of informed decision making, and for the possibility to correct models and explanations through interaction. This endeavor requires an integrated, interdisciplinary perspective and rigorous approaches to empirical evaluation based on psychological, linguistic, and even sociological theories.
- Journal article: Habilitation Abstract: Towards Explainable Fact Checking (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Augenstein, Isabelle. With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods that only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision making of black-box fact-checking models.
- Journal article: Exploring Monological and Dialogical Phases in Naturally Occurring Explanations (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Fisher, Josephine B.; Lohmer, Vivien; Kern, Friederike; Barthlen, Winfried; Gaus, Sebastian; Rohlfing, Katharina J. Recent approaches to Explainable AI (XAI) promise to satisfy diverse user expectations by allowing users to steer the interaction in order to elicit content relevant to them. However, little is known about how and to what extent the explainee takes an active part in the process of explaining. To tackle this empirical gap, we exploratively examined naturally occurring everyday explanations in doctor–patient interactions (N = 11). Following the social design of XAI, we view explanations as emerging in interactions: first, we identified the verbal behavior of both the explainer and the explainee in the sequential context, which we could assign to phases that were either monological or dialogical; second, we investigated in particular who was responsible for initiating the different phases. Finally, we took a closer look at the global conversational structure of explanations by applying a context-sensitive model of organizational jobs, thus adding a third layer of analysis. Results show that in our small sample of conversational explanations, both monological and dialogical phases varied in their length, their timing of occurrence (at earlier or later stages of the interaction), and their initiation (by the explainer or the explainee). They alternated several times in the course of the interaction. However, we also found some patterns suggesting that all interactions started with a monological phase initiated by the explainer. Both conversational partners contributed to the core organizational job that constitutes an explanation. We interpret the results as an indication that naturally occurring everyday explanations in doctor–patient interactions are co-constructed on three levels of linguistic description: (1) by switching back and forth between monological and dialogical phases that (2) can be initiated by both partners, and (3) by the mutual accomplishment of, and thus shared responsibility for, an explanation’s core job, which is crucial for the success of the explanation. Because of the explorative nature of our study, these results need to be investigated (a) with a larger sample and (b) in other contexts. However, our results suggest that future designs of artificial explainable systems should structure the explanatory dialogue in such a way that it includes monological and dialogical phases that can be initiated not only by the explainer but also by the explainee, as both contribute to the core job of explicating procedural, clausal, or conceptual relations in explanations.
- Journal article: XAINES: Explaining AI with Narratives (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Hartmann, Mareike; Du, Han; Feldhus, Nils; Kruijff-Korbayová, Ivana; Sonntag, Daniel. Artificial Intelligence (AI) systems are increasingly pervasive (Internet of Things, in-car intelligent devices, robots, and virtual assistants), and their large-scale adoption makes it necessary to explain their behaviour, for example to users who are impacted by their decisions, or to developers who need to ensure their functionality. This requires, on the one hand, obtaining an accurate representation of the chain of events that caused the system to behave in a certain way (e.g., to make a specific decision). On the other hand, this causal chain needs to be communicated to users in accordance with their needs and expectations. In this phase of explanation delivery, allowing interaction between user and model has the potential to improve both model quality and user experience. The XAINES project investigates the explanation of AI systems through narratives targeted to the needs of a specific audience, focusing on two aspects that are crucial for enabling successful explanation: generating and selecting appropriate explanation content, i.e. the information to be contained in the explanation, and delivering this information to the user in an appropriate way. In this article, we present the project’s roadmap towards enabling the explanation of AI with narratives.
- Journal article: Human-Robot Body Experience: An Artificial Intelligence Perspective (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Beckerle, Philipp. Human body experience is remarkably flexible, which enables us to integrate passive tools as well as intelligent robotic devices into our body representation. Accordingly, it can serve as a role model for making (assistive) robots interact seamlessly with their users or for providing (humanoid) robots with human-like self-perception and behavior generation. This article discusses the potential of understanding human body experience and applying it to robotics. Particular focus is set on how to use artificial intelligence techniques and create intelligent artificial agents from insights about human body experience. The discussion is based on a summary of the author’s habilitation thesis and combines theoretical and experimental perspectives from psychology, cognitive science, and neuroscience as well as computer science, engineering, and artificial intelligence. From this, it derives directions for future developments towards creating artificial body intelligence with human-like capabilities.
- Journal article: A Neural-Symbolic Approach for Explanation Generation Based on Sub-concept Detection: An Application of Metric Learning for Low-Time-Budget Labeling (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Rabold, Johannes. Deep learning methods, although effective in their assigned tasks, are mostly black boxes with respect to their inner workings. For image classification with CNNs, there exists a variety of visual explanation methods that highlight the parts of input images that were relevant for the classification result. But in many domains visual highlighting may not be expressive enough when the classification relies on complex relations within visual concepts. This paper presents an approach to enrich visual explanations with verbal local explanations, emphasizing important relational information. The proposed SymMetric algorithm combines metric learning and inductive logic programming (ILP). Labels given by a human for a small subset of important image parts are first generalized to a neighborhood of similar images using a learned distance metric. The information about labels and their spatial relations is then used to build background knowledge for ILP and ultimately to learn a first-order theory that locally explains the black box with respect to the given image. The approach is evaluated on the Dogs vs. Cats data set, demonstrating the generalization ability of metric learning, and on Picasso Faces, illustrating the recognition of spatially meaningful constellations of sub-concepts and the creation of expressive explanations. (A minimal, illustrative code sketch of the label-propagation and background-knowledge steps appears after this listing.)
- Journal article: Identification of Explainable Structures in Data with a Human-in-the-Loop (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022). Thrun, Michael C. Explainable AIs (XAIs) often do not provide relevant or understandable explanations for a domain-specific human-in-the-loop (HIL). In addition, internally used metrics have biases that might not match existing structures in the data. The habilitation thesis presents an alternative solution approach that derives explanations from high-dimensional structures in the data rather than from predetermined classifications. Typically, the detection of such density- or distance-based structures in data has so far entailed the challenges of choosing appropriate algorithms and their parameters, which confronts the HIL with a considerable number of complex decision-making options. Central steps of the solution approach are a parameter-free methodology for the estimation and visualization of probability density functions (PDFs), followed by a hypothesis for selecting an appropriate distance metric independent of the data context, in combination with projection-based clustering (PBC). PBC allows for the subsequent interactive identification of separable structures in the data. Hence, the HIL does not need deep knowledge of the underlying algorithms to identify structures in data. The complete data-driven XAI approach involving the HIL is based on a decision tree guided by distance-based structures in data (DSD). This data-driven XAI shows initial success in applications to multivariate time series and non-sequential high-dimensional data. It generates meaningful and relevant explanations that are evaluated by Grice’s maxims. (A minimal, illustrative code sketch of the overall pipeline appears after this listing.)
- Journal article: News (KI - Künstliche Intelligenz: Vol. 36, No. 3-4, 2022)
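The incremental explanation of model change described by Muschalik et al. can be pictured, very roughly, as a sliding-window comparison of permutation feature importance (PFI) before and after model adaptation. The following is a minimal, illustrative sketch only, not the authors' implementation: the scikit-learn estimator, the binary-label assumption, and the `window_size` and `change_threshold` parameters are hypothetical stand-ins for the two sensitivity hyperparameters mentioned in the abstract.

```python
# Minimal sketch: track permutation feature importance (PFI) over a data stream
# and flag a "model change" when some feature's importance shifts by more than
# a threshold. window_size and change_threshold are hypothetical stand-ins for
# the two sensitivity hyperparameters described in the abstract.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.inspection import permutation_importance


def pfi(model, X, y):
    """PFI of a fitted model on a window of recent samples."""
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    return result.importances_mean


def explain_model_change(stream, window_size=200, change_threshold=0.05):
    """Consume (x, y) pairs from a stream, adapt the model incrementally,
    and report which feature drives a significant change in importance."""
    model = SGDClassifier(random_state=0)
    X_win, y_win, prev_imp = [], [], None
    for t, (x, y) in enumerate(stream):
        X_win.append(x)
        y_win.append(y)
        if t < window_size - 1:
            continue
        if prev_imp is None:
            # Initial fit and reference importance on the first full window.
            # classes=[0, 1] assumes a binary-labelled stream.
            model.partial_fit(np.asarray(X_win), np.asarray(y_win),
                              classes=np.array([0, 1]))
            prev_imp = pfi(model, np.asarray(X_win), np.asarray(y_win))
            continue
        # Incremental adaptation on the newest sample; slide the window.
        model.partial_fit([x], [y])
        X_win.pop(0)
        y_win.pop(0)
        if t % window_size == 0:
            imp = pfi(model, np.asarray(X_win), np.asarray(y_win))
            delta = imp - prev_imp
            if np.abs(delta).max() > change_threshold:
                f = int(np.abs(delta).argmax())
                print(f"t={t}: model change, mainly feature {f} "
                      f"(importance shift {delta[f]:+.3f})")
            prev_imp = imp
```

Raising `change_threshold` or `window_size` makes the detector less sensitive and produces fewer explanations, which mirrors the role of the sensitivity hyperparameters described in the abstract.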
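The two core steps of the SymMetric description, propagating a handful of human-given part labels through a learned metric space and turning the labelled parts plus their spatial relations into ILP background knowledge, could look roughly like the sketch below. This is illustrative only, not the published algorithm; the embedding function, the distance threshold, and the predicate names (has_part, is_a, above) are hypothetical placeholders.

```python
# Illustrative sketch of two steps suggested by the SymMetric abstract (not the
# published implementation): (1) nearest-neighbour label propagation in a
# learned embedding space, (2) emitting Prolog-style background-knowledge facts
# about labelled parts and a simple spatial relation for an ILP learner.
import numpy as np


def propagate_labels(embed, labeled, unlabeled, max_dist=0.5):
    """Give each unlabeled image part the label of its nearest labeled part,
    provided it is close enough under the learned metric.

    labeled:   list of (part_image, label)
    unlabeled: list of part_image
    """
    propagated = []
    for part in unlabeled:
        z = embed(part)
        dist, label = min(
            (np.linalg.norm(z - embed(p)), lab) for p, lab in labeled
        )
        if dist <= max_dist:
            propagated.append((part, label))
    return propagated


def to_background_knowledge(image_id, parts):
    """Turn labelled parts with bounding boxes into ILP background facts.

    parts: list of ((x, y, w, h), label) in image coordinates.
    """
    facts = []
    for i, (_, label) in enumerate(parts):
        facts.append(f"has_part(img{image_id}, p{i}).")
        facts.append(f"is_a(p{i}, {label}).")
    for i, ((_, yi, _, hi), _) in enumerate(parts):
        for j, ((_, yj, _, _), _) in enumerate(parts):
            if i != j and yi + hi <= yj:  # part i ends above part j
                facts.append(f"above(p{i}, p{j}).")
    return facts
```

An ILP system would then search these facts for a first-order theory that separates the given image's class from counterexamples, yielding the verbal, relational explanation the abstract describes.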
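Finally, the general pipeline outlined in Thrun's habilitation abstract, identifying structures via a projection of the high-dimensional data and then describing them with an interpretable model, can be approximated as below. This sketch substitutes off-the-shelf components (t-SNE, DBSCAN, a shallow decision tree) for the thesis' parameter-free density estimation and projection-based clustering, so it illustrates the idea rather than the actual method.

```python
# Rough stand-in for the data-driven XAI pipeline sketched in the abstract:
# project high-dimensional data, find cluster structures in the projection,
# then fit a shallow decision tree on the original features so its split
# rules can serve as human-readable explanations of the structures.
# t-SNE and DBSCAN are substitutes for the thesis' projection-based clustering.
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from sklearn.tree import DecisionTreeClassifier, export_text


def explain_structures(X, feature_names, max_depth=3):
    # 1) Two-dimensional projection of the data.
    projection = TSNE(n_components=2, random_state=0).fit_transform(X)
    # 2) Density-based structures in the projection (label -1 marks noise).
    labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(projection)
    # 3) Shallow tree on the original features as a rule-based description.
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(X, labels)
    return export_text(tree, feature_names=list(feature_names))
```

Keeping the tree shallow is what makes the resulting rules short enough to be judged against criteria such as Grice's maxims, as the abstract suggests for the evaluation of explanations.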