Listing by keyword "Explainable Artificial Intelligence"
1 - 3 of 3
- Journal article: Agnostic Explanation of Model Change based on Feature Importance (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Muschalik, Maximilian; Fumagalli, Fabian; Hammer, Barbara; Hüllermeier, Eyke
  Explainable Artificial Intelligence (XAI) has so far focused mainly on static learning tasks. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e. explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control its sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method's efficacy on illustrative synthetic data streams with three popular model classes.
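  The measure this paper builds on, Permutation Feature Importance, can be sketched as follows. This is the standard static PFI, not the authors' online model-change adaptation; the `model_score` interface and the toy data are illustrative assumptions, not taken from the paper.

  ```python
  # Sketch of standard (static) Permutation Feature Importance:
  # a feature's importance is the drop in model performance when
  # that feature's column is randomly shuffled, breaking its link
  # to the target while keeping its marginal distribution.
  import numpy as np

  def permutation_feature_importance(model_score, X, y, n_repeats=5, seed=None):
      """model_score(X, y) -> scalar performance (higher is better)."""
      rng = np.random.default_rng(seed)
      baseline = model_score(X, y)
      importances = np.zeros(X.shape[1])
      for j in range(X.shape[1]):
          drops = []
          for _ in range(n_repeats):
              Xp = X.copy()
              Xp[:, j] = rng.permutation(Xp[:, j])  # shuffle one feature
              drops.append(baseline - model_score(Xp, y))
          importances[j] = np.mean(drops)
      return importances

  # Toy check: y depends only on feature 0, so only its shuffling hurts.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(500, 3))
  y = 2.0 * X[:, 0]
  # A hypothetical "model" that has learned the true rule y = 2*x0,
  # scored by negative mean squared error.
  score = lambda X_, y_: -np.mean((2.0 * X_[:, 0] - y_) ** 2)
  imp = permutation_feature_importance(score, X, y, seed=0)
  ```

  Here `imp[0]` comes out positive while `imp[1]` and `imp[2]` are zero, since the score never reads those columns. The paper's contribution is to track such importance values incrementally over a stream and to explain the *change* in them as the model adapts.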
- Text document: Design Principles for (X)AI-based Patient Education Systems (3. Wissenschaftsforum: Digitale Transformation (WiFo21), 2021) Pfeuffer, Nicolas
  Recently, the management of chronic diseases has advanced to a prime topic for Information Systems (IS) research and practice. With the increasing capability of Information Technology, patients are empowered to engage in self-management of chronic diseases, connected to promises of health benefits for the individual as well as an unburdening of clinics and economic advantages for health care systems. Nevertheless, patients must be adequately educated about risks, screening, and examination options to make patient self-management effective, sustainable, and profitable. In this regard, Explainable Artificial Intelligence ((X)AI)-based Patient Education Systems (PES) may be an opportunity to provide patient education in an interactive, intelligible, and intelligent manner. By establishing Design Principles (DP) for the engineering of effective (X)AI-based PES, instantiating them in a system prototype, and evaluating the DP with the help of general practitioners, this paper contributes to the body of knowledge in designing health IS.
- Workshop contribution: An Explainability Case-Study for Conversational User Interfaces in Walk-Up-And-Use Contexts (Mensch und Computer 2021 - Workshopband, 2021) Schrills, Tim; Schmid, Leon; Jetter, Hans-Christian; Franke, Thomas
  Conversational user interfaces (CUI) miss requirements for good usability, e.g. sufficient feedback regarding system status. Within a user-centred design process we created different design approaches to explain the CUI's state. A prototypical explainable conversational user interface (XCUI) was developed, which explains its state by means of representations of (1) confidence, (2) intent alternatives, (3) entities, and (4) a context timeline. The XCUI was then tested in a user study (N = 49) and compared with a conventional CUI in terms of user satisfaction and task completion time. Results indicated that improvements in completion time and satisfaction depended on specific task characteristics. The effects of the implemented XCUI features potentially resulted from task-specific needs for explanation. This could be based on the tasks' differing complexity, indicating a potential need for adaptive presentation of explainability features.