Listing by keyword "Explainability"
1 - 10 of 13
- Konferenzbeitrag: Application of Graph Neural Networks to fraud classification problems in the insurance and financial domain (INFORMATIK 2024, 2024). Becher, Jona; Schäfer, Andreas.
  Intentional loan defaults and fraudulent insurance claims result in billions of euros in losses each year in Germany. Detecting suspicious networks of individuals involved in credit, loans, and insurance claims is crucial for fraud prevention in the financial and insurance domain. These networks can be modeled as undirected graphs with properties assigned to nodes and edges. In this paper, we apply Graph Neural Networks (GNNs) to set up graph- and node-level classifications. Taking this novel approach enables us to share insights into the advantages and practical challenges of using GNNs. The classification results reveal that node-level classification with network background is superior to conventional classification without network background. Graph-level classification shows promising performance in selecting subnetworks for further investigation at node level. GNN explainability is used for analyzing, interpreting, and visualizing classification results for a better understanding of fraudulent networks.
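To make the node-level setting concrete, here is a minimal, illustrative sketch of node classification with a two-layer graph convolutional network in PyTorch Geometric. The toy graph, feature dimensions, and training loop are placeholders, not the authors' model or data.

```python
# Illustrative sketch only: node-level classification on a tiny undirected graph.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 4 nodes, 16-dim node features, binary "fraud" label per node.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])   # both directions = undirected
y = torch.tensor([0, 0, 1, 1])
data = Data(x=x, edge_index=edge_index, y=y)

class NodeClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = NodeClassifier(16, 32, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)       # one logit vector per node
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
```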
- Textdokument: Cluster Flow - an Advanced Concept for Ensemble-Enabling, Interactive Clustering (BTW 2021, 2021). Obermeier, Sandra; Beer, Anna; Wahl, Florian; Seidl, Thomas.
  Even though most clustering algorithms serve knowledge discovery in fields other than computer science, most of them still require users to be familiar with programming or data mining to some extent. As that often prevents efficient research, we developed an easy-to-use, highly explainable clustering method accompanied by an interactive tool for clustering. It is based on intuitively understandable kNN graphs and the subsequent application of adaptable filters, which can be combined ensemble-like and iteratively, and which prune unnecessary or misleading edges. For a first overview of the data, fully automatic predefined filter cascades deliver robust results. A selection of simple filters and combination methods that can be chosen interactively yields very good results on benchmark datasets compared to various algorithms.
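The core idea of a kNN graph with edge-pruning filters can be sketched in a few lines. The following is only a generic illustration with scikit-learn and SciPy, not the Cluster Flow tool or its interactive filter cascades; the distance-percentile filter stands in for the paper's adaptable filters.

```python
# Illustrative sketch: kNN graph, prune long edges, read clusters off components.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

X = np.random.rand(200, 2)                       # toy data set
g = kneighbors_graph(X, n_neighbors=5, mode="distance")

# Filter: drop "misleading" long edges (here: above the 80th distance percentile).
threshold = np.percentile(g.data, 80)
g.data[g.data > threshold] = 0
g.eliminate_zeros()

n_clusters, labels = connected_components(g, directed=False)
print(n_clusters, labels[:10])
```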
- Konferenzbeitrag: Erste Überlegungen zur Erklärbarkeit von Deep-Learning-Modellen für die Analyse von Quellcode (Softwaretechnik-Trends Band 40, Heft 2, 2020). Sonnekalb, Tim; Heinze, Thomas S.; Mäder, Patrick.
  Most deep learning methods have a decisive drawback: they are black-box methods. As a result, evaluating their results with respect to the question of why is often impossible or only partially possible. Especially when analyzing software, however, a developer wants results with additional justification in order to filter them quickly. Explainable and interpretable methods are intended to provide traceability of analysis results as well as explanations.
- Zeitschriftenartikel: Evaluating Explainability Methods Intended for Multiple Stakeholders (KI - Künstliche Intelligenz: Vol. 35, No. 0, 2021). Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias.
  Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using an individual system. In this paper we present an explainability framework formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level and high-level explanations for increasing levels of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations.
- Konferenzbeitrag: Explainable Online Reinforcement Learning for Adaptive Systems (Software Engineering 2023, 2023). Feit, Felix; Metzger, Andreas; Pohl, Klaus.
  This talk presents our work on explainable online reinforcement learning for self-adaptive systems, published at the 3rd IEEE Intl. Conf. on Autonomic Computing and Self-Organizing Systems.
- Zeitschriftenartikel: Explainable software systems (it - Information Technology: Vol. 61, No. 4, 2019). Vogelsang, Andreas.
  Software and software-controlled technical systems play an increasing role in our daily lives. In cyber-physical systems, which connect the physical and the digital world, software not only influences how we perceive and interact with our environment but also makes decisions that influence our behavior. Therefore, the ability of software systems to explain their behavior and decisions will become an important property that is crucial for their acceptance in our society. We call software systems with this ability explainable software systems. In the past, we have worked on methods and tools to design explainable software systems. In this article, we highlight some of our work on how to design explainable software systems. More specifically, we describe an architectural framework for designing self-explainable software systems, which is based on the MAPE-loop for self-adaptive systems. Afterward, we show that explainability is also important for tools that are used by engineers during the development of software systems. We show examples from the area of requirements engineering where we use techniques from natural language processing and neural networks to help engineers comprehend the complex information structures embedded in system requirements.
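As a rough illustration of what attaching explanations to a MAPE-style loop could look like, here is a small Python sketch. The adaptation logic, thresholds, and explanation fields are hypothetical and do not reproduce the architectural framework described in the article.

```python
# Hypothetical sketch: a MAPE-style loop that records a rationale per phase.
from dataclasses import dataclass, field

@dataclass
class Explanation:
    observation: str   # what was monitored
    diagnosis: str     # what the analysis concluded
    decision: str      # which adaptation was planned
    action: str        # what was executed

@dataclass
class SelfExplainableLoop:
    history: list = field(default_factory=list)

    def run_once(self, load: float) -> Explanation:
        # Monitor -> Analyze -> Plan -> Execute, keeping the reasoning trace.
        diagnosis = "overload" if load > 0.8 else "normal"
        decision = "add replica" if diagnosis == "overload" else "keep configuration"
        expl = Explanation(
            observation=f"measured load {load:.2f}",
            diagnosis=diagnosis,
            decision=decision,
            action=f"executed: {decision}",
        )
        self.history.append(expl)
        return expl

print(SelfExplainableLoop().run_once(0.93))
```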
- Zeitschriftenartikel: Explaining Artificial Intelligence with Care (KI - Künstliche Intelligenz: Vol. 36, No. 2, 2022). Szepannek, Gero; Lübke, Karsten.
  In the recent past, several popular failures of black-box AI systems and regulatory requirements have increased the research interest in explainable and interpretable machine learning. Among the different available approaches to model explanation, partial dependence plots (PDPs) represent one of the most popular methods for model-agnostic assessment of a feature's effect on the model response. Although PDPs are commonly used and easy to apply, they only provide a simplified view of the model and thus risk being misleading. Relying on a model interpretation given by a PDP can have dramatic consequences in an application area such as forensics, where decisions may directly affect people's lives. For this reason, in this paper the degree of model explainability is investigated on a popular real-world data set from the field of forensics: the glass identification database. By means of this example, the paper aims to illustrate two important aspects of machine learning model development from a practical point of view in the context of forensics: (1) the importance of a proper process for model selection, hyperparameter tuning, and validation, as well as (2) the careful use of explainable artificial intelligence. For this purpose, the concept of explainability is extended to multiclass classification problems, as given e.g. by the glass data.
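A one-dimensional partial dependence plot can be computed by hand: fix the feature of interest to each grid value across all rows and average the model's predictions. The sketch below uses the Iris data set and a random forest purely as stand-ins; it is not the glass identification study from the paper.

```python
# Illustrative sketch of a 1D partial dependence computation.
import numpy as np
from sklearn.datasets import load_iris            # stand-in data, not the glass database
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

feature = 2                                        # feature whose effect we inspect
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)

pdp = []
for value in grid:
    X_mod = X.copy()
    X_mod[:, feature] = value                      # force the feature to a fixed value
    # Average predicted probability of class 0 over the whole data set.
    pdp.append(model.predict_proba(X_mod)[:, 0].mean())

for v, p in zip(grid, pdp):
    print(f"{v:6.2f} -> {p:.3f}")
```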
- Konferenzbeitrag: Explaining ECG Biometrics: Is It All In The QRS? (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020). Pinto, João Ribeiro; Cardoso, Jaime S.
  The literature seems to indicate that the QRS complex is the most important component of the electrocardiogram (ECG) for biometrics. To verify this claim, we use interpretability tools to explain how a convolutional neural network uses ECG signals to identify people, using on-the-person (PTB) and off-the-person (UofTDB) signals. While the QRS complex does indeed appear to be a key feature in ECG biometrics, especially with cleaner signals, the results indicate that, for larger populations in off-the-person settings, the QRS shares relevance with other heartbeat components, which it is essential to locate. These insights indicate that avoiding excessive focus on the QRS complex, by using decision explanations during training, could be useful for model regularisation.
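One common interpretability tool of the kind referred to here is input-gradient saliency. The following PyTorch sketch applies it to a toy 1D CNN on a synthetic heartbeat; the network, signal, and subject count are placeholders, not the model or explanation method actually used in the paper.

```python
# Illustrative sketch: input-gradient saliency for a toy 1D CNN identity classifier.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, n_subjects=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.classifier = nn.Linear(8 * 16, n_subjects)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyECGNet().eval()
heartbeat = torch.randn(1, 1, 256, requires_grad=True)   # one segmented beat

score = model(heartbeat)[0].max()          # score of the predicted identity
score.backward()
saliency = heartbeat.grad.abs().squeeze()  # per-sample relevance along the beat
print("most relevant sample index:", int(saliency.argmax()))
```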
- Konferenzbeitrag: Explanation Needs in Automated Driving: Insights from German Driving Education and Vehicle Acquisition (Proceedings of Mensch und Computer 2024, 2024). Manger, Carina; Albrecht, Kathrin; Riener, Andreas.
  As driving assistance systems become increasingly advanced, a correct understanding of their functionality is crucial for safe use. In this work we explored drivers' explanation needs and current explanation methods from an important but often overlooked perspective: driver training and vehicle acquisition. In a two-step approach, we conducted expert interviews with n = 7 driving instructors and vehicle salespeople in Germany and validated these results with an online survey of n = 105 participants. Our results show that Driver Assistance Systems (DASs) and Advanced Driver Assistance Systems (ADASs) are currently covered in both driver training and vehicle acquisition, but to a varying extent and in a very application-oriented manner. We also found a tendency among drivers to prefer comparative explanations that build upon knowledge of similar systems. Based on the combined results, we emphasize the need for mandatory and standardized explanation methods to ensure a safe transition to automated driving.
- Zeitschriftenartikel: Kurz erklärt: Measuring Data Changes in Data Engineering and their Impact on Explainability and Algorithm Fairness (Datenbank-Spektrum: Vol. 21, No. 3, 2021). Klettke, Meike; Lutsch, Adrian; Störl, Uta.
  Data engineering is an integral part of any data science and ML process. It consists of several subtasks that are performed to improve data quality and to transform data into a target format suitable for analysis. The quality and correctness of the data engineering steps are therefore important to ensure the quality of the overall process. In machine learning processes, requirements such as fairness and explainability are essential, and they must also be addressed by the data engineering subtasks. In this article, we show how this can be achieved by logging, monitoring, and controlling data changes in order to evaluate their correctness. Since data preprocessing algorithms are part of any machine learning pipeline, they must also guarantee that they do not introduce data biases. We briefly introduce three classes of methods for measuring data changes in data engineering and present which research questions still remain unanswered in this area.
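Logging and monitoring data changes can start from very simple before/after measures around each preprocessing step. The pandas sketch below records row counts and mean shifts for a single dropna step; the measures and the example DataFrame are illustrative assumptions, not the three classes of methods from the article.

```python
# Illustrative sketch: log simple data-change measures around one preprocessing step.
import pandas as pd

def log_change(before: pd.DataFrame, after: pd.DataFrame, step: str) -> dict:
    """Record how a preprocessing step changed the data set."""
    return {
        "step": step,
        "rows_before": len(before),
        "rows_after": len(after),
        "dropped_rows": len(before) - len(after),
        "mean_shift": (after.mean(numeric_only=True)
                       - before.mean(numeric_only=True)).to_dict(),
    }

raw = pd.DataFrame({"age": [25, 31, None, 47, 52], "income": [30, 42, 38, None, 55]})
cleaned = raw.dropna()                      # the preprocessing step under scrutiny
print(log_change(raw, cleaned, "dropna"))
```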