Künstliche Intelligenz 36(3-4) - December 2022



Recent Publications

1 - 10 of 15
  • Journal Article
    What’s on Your Mind, NICO?
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Kerzel, Matthias; Ambsdorf, Jakob; Becker, Dennis; Lu, Wenhao; Strahl, Erik; Spisak, Josua; Gäde, Connor; Weber, Tom; Wermter, Stefan
    Explainable AI has become an important field of research on neural machine learning models. However, most existing methods are designed as tools that provide expert users with additional insights into their models. In contrast, in human-robot interaction scenarios, non-expert users are frequently confronted with complex, embodied AI systems whose inner workings are unknown. Therefore, eXplainable Human-Robot Interaction (XHRI) should leverage the user’s intuitive ability to collaborate and to use efficient communication. Using NICO, the Neuro-Inspired COmpanion, as a use-case study, we propose an XHRI framework and show how different types of explanations enhance the interaction experience. These explanations range from (a) non-verbal cues for simple and intuitive feedback of inner states, via (b) comprehensive verbal explanations of the robot’s intentions, knowledge and reasoning, to (c) multimodal explanations using visualizations, speech and text. We revisit past HRI-related studies conducted with NICO and analyze them with the proposed framework. Furthermore, we present two novel XHRI approaches to extract suitable verbal and multimodal explanations from neural network modules in an HRI scenario. A hedged code sketch of such an explanation-type selection appears after this listing.
  • Journal Article
    Exploring Monological and Dialogical Phases in Naturally Occurring Explanations
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Fisher, Josephine B.; Lohmer, Vivien; Kern, Friederike; Barthlen, Winfried; Gaus, Sebastian; Rohlfing, Katharina J.
    Recent approaches to Explainable AI (XAI) promise to satisfy diverse user expectations by allowing them to steer the interaction in order to elicit content relevant to them. However, little is known about how and to what extent the explainee takes part actively in the process of explaining. To tackle this empirical gap, we exploratively examined naturally occurring everyday explanations in doctor–patient interactions (N = 11). Following the social design of XAI, we view explanations as emerging in interactions: first, we identified the verbal behavior of both the explainer and the explainee in the sequential context, which we could assign to phases that were either monological or dialogical; second, we investigated in particular who was responsible for the initiation of the different phases. Finally, we took a closer look at the global conversational structure of explanations by applying a context-sensitive model of organizational jobs, thus adding a third layer of analysis. Results show that in our small sample of conversational explanations, both monological and dialogical phases varied in their length, timing of occurrence (at the early or later stages of the interaction) and their initiation (by the explainer or the explainee). They alternated several times in the course of the interaction. However, we also found some patterns suggesting that all interactions started with a monological phase initiated by the explainer. Both conversational partners contributed to the core organizational job that constitutes an explanation. We interpret the results as an indication that naturally occurring everyday explanations in doctor–patient interactions are co-constructed on three levels of linguistic description: (1) by switching back and forth between monological and dialogical phases that (2) can be initiated by both partners, and (3) by the mutual accomplishment of, and thus shared responsibility for, an explanation’s core job, which is crucial for the success of the explanation. Because of the explorative nature of our study, these results need to be investigated (a) with a larger sample and (b) in other contexts. However, our results suggest that future designs of artificial explainable systems should structure the explanatory dialogue so that it includes monological and dialogical phases that can be initiated not only by the explainer but also by the explainee, as both contribute to the core job of explicating procedural, clausal, or conceptual relations in explanations.
  • Journal Article
    Agnostic Explanation of Model Change based on Feature Importance
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Muschalik, Maximilian; Fumagalli, Fabian; Hammer, Barbara; Hüllermeier, Eyke
    Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e. explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control the sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method’s efficacy on illustrative synthetic data streams with three popular model classes. A hedged code sketch of this monitoring idea appears after this listing.
  • Journal Article
    Constructing Explainability – Interdisciplinary Framework to Actively Shape Explanations in XAI
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Schmid, Ute
  • Journal Article
    Explainable AI
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Schmid, Ute; Wrede, Britta
  • Journal Article
    Human-Robot Body Experience: An Artificial Intelligence Perspective
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Beckerle, Philipp
    Human body experience is remarkably flexible, which enables us to integrate passive tools as well as intelligent robotic devices into our body representation. Accordingly, it can serve as a role model to make (assistive) robots interact seamlessly with their users or to provide (humanoid) robots with a human-like self-perception and behavior generation. This article discusses the potential of understanding human body experience and applying it to robotics. Particular focus is set on how to use artificial intelligence techniques and create intelligent artificial agents from insights about human body experience. The discussion is based on a summary of the author’s habilitation thesis and combines theoretical and experimental perspectives from psychology, cognitive science and neuroscience as well as computer science, engineering, and artificial intelligence. From this, it derives directions for future developments towards creating artificial body intelligence with human-like capabilities.
  • Journal Article
    Inflation?
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Visser, Ubbo
  • Journal Article
    News
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022)
  • Journal Article
    XAINES: Explaining AI with Narratives
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Hartmann, Mareike; Du, Han; Feldhus, Nils; Kruijff-Korbayová, Ivana; Sonntag, Daniel
    Artificial Intelligence (AI) systems are increasingly pervasive, from the Internet of Things and in-car intelligent devices to robots and virtual assistants, and their large-scale adoption makes it necessary to explain their behaviour, for example to users who are impacted by their decisions or to developers who need to ensure their functionality. This requires, on the one hand, obtaining an accurate representation of the chain of events that caused the system to behave in a certain way (e.g., to make a specific decision). On the other hand, this causal chain needs to be communicated to users depending on their needs and expectations. In this phase of explanation delivery, allowing interaction between user and model has the potential to improve both model quality and user experience. The XAINES project investigates the explanation of AI systems through narratives targeted to the needs of a specific audience, focusing on two aspects that are crucial for successful explanation: generating and selecting appropriate explanation content, i.e. the information to be contained in the explanation, and delivering this information to the user in an appropriate way. In this article, we present the project’s roadmap towards enabling the explanation of AI with narratives. A hedged code sketch of the narrative framing appears after this listing.
  • Journal Article
    A Neural-Symbolic Approach for Explanation Generation Based on Sub-concept Detection: An Application of Metric Learning for Low-Time-Budget Labeling
    (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Rabold, Johannes
    Deep learning methods, although effective in their assigned tasks, are mostly black boxes with respect to their inner workings. For image classification with CNNs, there exists a variety of visual explanation methods that highlight the parts of input images that were relevant for the classification result. But in many domains visual highlighting may not be expressive enough when the classification relies on complex relations within visual concepts. This paper presents an approach to enrich visual explanations with verbal local explanations, emphasizing important relational information. The proposed SymMetric algorithm combines metric learning and inductive logic programming (ILP). Labels given by a human for a small subset of important image parts are first generalized to a neighborhood of similar images using a learned distance metric. The information about labels and their spatial relations is then used to build background knowledge for ILP and ultimately to learn a first-order theory that locally explains the black box with respect to the given image. The approach is evaluated with the Dogs vs. Cats data set, demonstrating the generalization ability of metric learning, and with the Picasso Faces data set, illustrating the recognition of spatially meaningful constellations of sub-concepts and the creation of an expressive explanation. A hedged code sketch of this pipeline appears after this listing.
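
The XHRI framework in “What’s on Your Mind, NICO?” distinguishes (a) non-verbal, (b) verbal and (c) multimodal explanations. The following minimal Python sketch only illustrates how a robot might select between these explanation types; it is not the authors’ implementation, and all names (ExplanationType, RobotState, choose_explanation) are hypothetical.

```python
# Hedged sketch, not the authors' implementation: selecting one of the three
# explanation types named in the NICO abstract. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class ExplanationType(Enum):
    NON_VERBAL = auto()   # (a) simple cues, e.g. facial expression or LED colour
    VERBAL = auto()       # (b) spoken explanation of intentions, knowledge, reasoning
    MULTIMODAL = auto()   # (c) visualization combined with speech and text


@dataclass
class RobotState:
    user_asked_why: bool      # did the user explicitly request an explanation?
    display_available: bool   # can the robot currently show a visualization?


def choose_explanation(state: RobotState) -> ExplanationType:
    """Pick the least intrusive explanation type that still answers the user."""
    if state.user_asked_why:
        return (ExplanationType.MULTIMODAL if state.display_available
                else ExplanationType.VERBAL)
    # Without an explicit question, inner states are signalled non-verbally.
    return ExplanationType.NON_VERBAL


print(choose_explanation(RobotState(user_asked_why=True, display_available=False)))
```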
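
The abstract of “Agnostic Explanation of Model Change based on Feature Importance” describes an adaptation of Permutation Feature Importance (PFI) that detects and explains significant model changes in data streams, governed by two hyperparameters. The sketch below shows the general idea only; it is not the authors’ algorithm. The parameter names sensitivity and check_every merely stand in for those hyperparameters, and the incremental model is assumed to follow scikit-learn’s partial_fit/predict convention (e.g. SGDClassifier).

```python
# Hedged sketch of the general idea, not the authors' algorithm: recompute PFI
# on a sliding window of a data stream and report which features changed.
from collections import deque
import numpy as np


def permutation_importance(model, X, y, rng):
    """Accuracy drop when each feature is permuted (plain batch PFI)."""
    base = np.mean(model.predict(X) == y)
    importance = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        importance[j] = base - np.mean(model.predict(Xp) == y)
    return importance


def monitor_model_change(model, stream, sensitivity=0.05, check_every=200):
    """Yield (step, delta) whenever the feature importances shift noticeably.

    `model`: incremental classifier with partial_fit/predict (binary labels
    0/1 assumed for this toy example); `stream`: iterable of (x, y) pairs.
    """
    rng = np.random.default_rng(0)
    window = deque(maxlen=500)    # recent labelled samples used to estimate PFI
    previous = None
    for step, (x, y) in enumerate(stream):
        model.partial_fit(x.reshape(1, -1), [y], classes=[0, 1])
        window.append((x, y))
        if step % check_every == 0 and len(window) == window.maxlen:
            X = np.array([w[0] for w in window])
            Y = np.array([w[1] for w in window])
            current = permutation_importance(model, X, Y, rng)
            if previous is not None and np.max(np.abs(current - previous)) > sensitivity:
                yield step, current - previous   # which features gained or lost importance
            previous = current
```

Recomputing batch PFI on a window, as above, is the simplest variant; a streaming method would instead update the importance estimates incrementally.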
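
XAINES frames explanations as narratives about the causal chain behind a system’s behaviour, tailored to a specific audience. The toy sketch below only illustrates that framing; the Event record and narrate function are hypothetical and not part of the project.

```python
# Hedged illustration only, not the XAINES system: a causal chain of events
# rendered as a short narrative whose wording depends on the audience.
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    actor: str      # which component acted
    action: str     # what it did
    because: str    # the cause linking it to the previous event


def narrate(events: List[Event], audience: str = "user") -> str:
    """Turn a causal chain into one explanatory narrative string."""
    sentences = []
    for e in events:
        if audience == "developer":
            sentences.append(f"{e.actor} {e.action} because {e.because}.")
        else:   # end users get the story without internal component names
            sentences.append(f"The system {e.action} because {e.because}.")
    return " ".join(sentences)


chain = [
    Event("the perception module", "reported an obstacle", "the sensor reading exceeded the safety threshold"),
    Event("the planner", "chose the left lane", "the right lane was blocked"),
]
print(narrate(chain, audience="user"))
```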
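
The SymMetric approach described in the last abstract combines metric learning (to generalize a few human-given part labels to similar images) with inductive logic programming (ILP) over spatial relations. The sketch below roughly illustrates two pre-processing steps under assumed helper names (propagate_labels, spatial_facts); the actual metric learning and the ILP step that induces the first-order theory are not shown and would be handled by dedicated tools.

```python
# Hedged sketch, not the published SymMetric code: propagate sparse human labels
# via nearest neighbours in a learned embedding space, then write the labelled
# parts and their spatial relations as Prolog-style background knowledge for ILP.
import numpy as np


def propagate_labels(labeled_emb, labels, unlabeled_emb, max_dist=0.5):
    """Give each unlabeled part the label of its nearest labelled neighbour,
    provided that neighbour is close enough under the learned metric."""
    propagated = []
    for e in unlabeled_emb:
        dists = np.linalg.norm(labeled_emb - e, axis=1)
        j = int(np.argmin(dists))
        propagated.append(labels[j] if dists[j] <= max_dist else None)
    return propagated


def spatial_facts(example_id, parts):
    """Emit facts about which parts occur and their pairwise spatial relations."""
    facts = [f"contains({example_id}, {name})." for name in parts]
    for a, (xa, ya) in parts.items():
        for b, (xb, yb) in parts.items():
            if a == b:
                continue
            if xa < xb:
                facts.append(f"left_of({example_id}, {a}, {b}).")
            if ya < yb:   # image coordinates: smaller y means higher up
                facts.append(f"above({example_id}, {a}, {b}).")
    return facts


# Facts like these would be handed to an ILP learner to induce a first-order
# theory that locally explains the classifier's decision for this image.
print("\n".join(spatial_facts("img1", {"eye": (10, 5), "mouth": (12, 20)})))
```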