Listing by keyword "Explainability"
1 - 10 of 10
- Text document: Cluster Flow - an Advanced Concept for Ensemble-Enabling, Interactive Clustering (BTW 2021, 2021) Obermeier, Sandra; Beer, Anna; Wahl, Florian; Seidl, Thomas. Even though most clustering algorithms serve knowledge discovery in fields other than computer science, most of them still require users to be familiar with programming or data mining to some extent. As this often prevents efficient research, we developed an easy-to-use, highly explainable clustering method accompanied by an interactive clustering tool. It is based on intuitively understandable kNN graphs and the subsequent application of adaptable filters, which can be combined iteratively in an ensemble-like fashion and prune unnecessary or misleading edges. For a first overview of the data, fully automatic predefined filter cascades deliver robust results. A selection of simple filters and combination methods that can be chosen interactively yields very good results on benchmark datasets compared to various algorithms.
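The kNN-graph-plus-filter idea behind this approach can be illustrated with a minimal sketch: build a kNN graph, prune edges with a simple filter, and read clusters off the connected components. The filter rule and parameters below are illustrative assumptions, not the tool's actual implementation.

```python
# Minimal sketch of kNN-graph clustering with one distance-based edge
# filter; the quantile threshold is a hypothetical stand-in for the
# adaptable filters described in the abstract.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

def knn_filter_clustering(X, k=5, quantile=0.9):
    # Build a symmetric kNN graph whose edge weights are distances.
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    graph = graph.maximum(graph.T)  # symmetrize
    # Filter step: prune the longest edges (above the chosen quantile),
    # which tend to be the misleading inter-cluster connections.
    threshold = np.quantile(graph.data, quantile)
    graph.data[graph.data > threshold] = 0
    graph.eliminate_zeros()
    # The remaining connected components are the clusters.
    _, labels = connected_components(graph, directed=False)
    return labels

# Two well-separated blobs should land in different clusters.
np.random.seed(0)
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 10])
labels = knn_filter_clustering(X)
```

In the paper's terms, several such filters could be applied in sequence (a filter cascade), each removing a different kind of suspicious edge before the components are extracted.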
- Journal article: Evaluating Explainability Methods Intended for Multiple Stakeholders (KI - Künstliche Intelligenz: Vol. 35, No. 0, 2021) Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias. Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. The explainability methods are split into low-level and high-level explanations, offering increasing levels of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations.
- Conference paper: Explainable Online Reinforcement Learning for Adaptive Systems (Software Engineering 2023, 2023) Feit, Felix; Metzger, Andreas; Pohl, Klaus. This talk presents our work on explainable online reinforcement learning for self-adaptive systems, published at the 3rd IEEE Intl. Conf. on Autonomic Computing and Self-Organizing Systems.
- Journal article: Explainable software systems (it - Information Technology: Vol. 61, No. 4, 2019) Vogelsang, Andreas. Software and software-controlled technical systems play an increasing role in our daily lives. In cyber-physical systems, which connect the physical and the digital world, software not only influences how we perceive and interact with our environment but also makes decisions that influence our behavior. Therefore, the ability of software systems to explain their behavior and decisions will become an important property that is crucial for their acceptance in our society. We call software systems with this ability explainable software systems. In this article, we highlight some of our past work on methods and tools for designing explainable software systems. More specifically, we describe an architectural framework for designing self-explainable software systems, which is based on the MAPE loop for self-adaptive systems. Afterward, we show that explainability is also important for tools used by engineers during the development of software systems. We show examples from the area of requirements engineering where we use techniques from natural language processing and neural networks to help engineers comprehend the complex information structures embedded in system requirements.
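As a rough illustration of a MAPE-style loop extended with an explanation capability, the following sketch records each (symptom, action) pair so the system can explain its most recent adaptation. All class and method names here are hypothetical; the article's actual framework is more elaborate.

```python
# Highly simplified sketch: a Monitor-Analyze-Plan-Execute loop that
# logs its reasoning, so each adaptation can later be explained.
class ExplainableMapeLoop:
    def __init__(self, system):
        self.system = system
        self.log = []  # one (symptom, action) pair per adaptation

    def monitor(self):
        return self.system.read_sensors()

    def analyze(self, data):
        # Toy analysis rule: flag an overload above 80% load.
        return "overload" if data["load"] > 0.8 else None

    def plan(self, symptom):
        return {"overload": "scale_out"}.get(symptom)

    def execute(self, action):
        self.system.apply(action)

    def step(self):
        symptom = self.analyze(self.monitor())
        if symptom:
            action = self.plan(symptom)
            self.log.append((symptom, action))  # basis for explanations
            self.execute(action)

    def explain_last(self):
        if not self.log:
            return "no adaptation performed"
        symptom, action = self.log[-1]
        return f"executed '{action}' because the analyzer detected '{symptom}'"

class FakeSystem:
    """Hypothetical managed system with a fixed high load."""
    def read_sensors(self):
        return {"load": 0.9}
    def apply(self, action):
        self.last_action = action

loop = ExplainableMapeLoop(FakeSystem())
loop.step()
explanation = loop.explain_last()
```

The point of the sketch is only that explanation data is collected as a side effect of the regular adaptation cycle, rather than reconstructed after the fact.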
- Journal article: Explaining Artificial Intelligence with Care (KI - Künstliche Intelligenz: Vol. 36, No. 2, 2022) Szepannek, Gero; Lübke, Karsten. In the recent past, several popular failures of black-box AI systems and regulatory requirements have increased research interest in explainable and interpretable machine learning. Among the available approaches to model explanation, partial dependence plots (PDP) represent one of the best-known methods for model-agnostic assessment of a feature's effect on the model response. Although PDPs are commonly used and easy to apply, they only provide a simplified view of the model and thus risk being misleading. Relying on a model interpretation given by a PDP can have dramatic consequences in an application area such as forensics, where decisions may directly affect people's lives. For this reason, this paper investigates the degree of model explainability on a popular real-world dataset from the field of forensics: the glass identification database. By means of this example, the paper aims to illustrate two important aspects of machine learning model development from a practical point of view in the context of forensics: (1) the importance of a proper process for model selection, hyperparameter tuning and validation, as well as (2) the careful use of explainable artificial intelligence. For this purpose, the concept of explainability is extended to multiclass classification problems, such as the one posed by the glass data.
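The core computation behind a partial dependence plot can be written in a few lines: for each grid value of the feature of interest, that feature is fixed to the value for all rows and the model's predictions are averaged, marginalising over the remaining features. The toy model below is an illustrative assumption, chosen only so that the expected PDP shape is easy to verify.

```python
# Minimal hand-rolled partial dependence computation for one feature,
# to make the averaging step explicit (libraries such as scikit-learn
# provide this out of the box).
import numpy as np

def partial_dependence(model, X, feature, grid):
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value      # fix the feature for ALL rows
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

class LinearToy:
    # Toy model: prediction = 2*x0 + x1, so the PDP of feature 0
    # should be a line with slope 2.
    def predict(self, X):
        return 2 * X[:, 0] + X[:, 1]

np.random.seed(0)
X = np.random.randn(100, 2)
grid = np.array([0.0, 1.0, 2.0])
pdp = partial_dependence(LinearToy(), X, feature=0, grid=grid)
```

The averaging is exactly why PDPs give a simplified view: interactions between the plotted feature and the others are averaged away, which is the risk the article warns about.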
- Conference paper: Explaining ECG Biometrics: Is It All In The QRS? (BIOSIG 2020 - Proceedings of the 19th International Conference of the Biometrics Special Interest Group, 2020) Pinto, João Ribeiro; Cardoso, Jaime S. The literature seems to indicate that the QRS complex is the most important component of the electrocardiogram (ECG) for biometrics. To verify this claim, we use interpretability tools to explain how a convolutional neural network uses ECG signals to identify people, using on-the-person (PTB) and off-the-person (UofTDB) signals. While the QRS complex does appear to be a key feature in ECG biometrics, especially with cleaner signals, the results indicate that, for larger populations in off-the-person settings, the QRS shares relevance with other heartbeat components, which must then be located as well. These insights indicate that avoiding excessive focus on the QRS complex, by using decision explanations during training, could be useful for model regularisation.
- Journal article: Kurz erklärt: Measuring Data Changes in Data Engineering and their Impact on Explainability and Algorithm Fairness (Datenbank-Spektrum: Vol. 21, No. 3, 2021) Klettke, Meike; Lutsch, Adrian; Störl, Uta. Data engineering is an integral part of any data science and ML process. It consists of several subtasks that are performed to improve data quality and to transform data into a target format suitable for analysis. The quality and correctness of the data engineering steps are therefore important for ensuring the quality of the overall process. In machine learning processes, requirements such as fairness and explainability are essential, and they must also be met by the data engineering subtasks. In this article, we show how this can be achieved by logging, monitoring and controlling the data changes in order to evaluate their correctness. Since data preprocessing algorithms are part of any machine learning pipeline, they must also guarantee that they do not introduce data biases. We briefly introduce three classes of methods for measuring data changes in data engineering and present the research questions that remain unanswered in this area.
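One simple way to log and monitor such data changes is to compare per-group statistics before and after a preprocessing step; a shift in group shares signals that the step may introduce bias. The record schema and the dropna-style cleaning step below are illustrative assumptions, not the article's concrete methods.

```python
# Illustrative sketch: measure how a cleaning step shifts the group
# distribution of a dataset, as a crude data-change / fairness check.
from collections import Counter

def group_shares(rows, group_key):
    counts = Counter(r[group_key] for r in rows)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def log_change(before, after, group_key):
    # Per-group change in share; a nonzero shift means the step did
    # not treat the groups uniformly.
    b, a = group_shares(before, group_key), group_shares(after, group_key)
    return {g: round(a.get(g, 0.0) - b.get(g, 0.0), 3) for g in b}

# A dropna-style step that removes rows with a missing income value.
data = [
    {"group": "A", "income": 50}, {"group": "A", "income": None},
    {"group": "B", "income": 40}, {"group": "B", "income": 45},
]
cleaned = [r for r in data if r["income"] is not None]
shift = log_change(data, cleaned, "group")
# Group A loses share after cleaning: the preprocessing step itself
# has skewed the data towards group B.
```

Comparing such summary statistics across every pipeline step gives exactly the kind of change log the article argues is needed to keep preprocessing accountable.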
- Journal article: Non-Discrimination-by-Design: Handlungsempfehlungen für die Entwicklung von vertrauenswürdigen KI-Services (HMD Praxis der Wirtschaftsinformatik: Vol. 59, No. 2, 2022) Rebstadt, Jonas; Kortum, Henrik; Gravemeier, Laura Sophie; Eberhardt, Birgid; Thomas, Oliver. In addition to human-induced discrimination against groups or individuals, more and more AI systems have shown discriminatory behavior in the recent past. Examples include AI systems in recruiting that discriminate against female candidates, chatbots with racist tendencies, and the object recognition used in autonomous vehicles that recognizes black people less reliably than white people. This behavior of AI systems arises from the intentional or unintentional reproduction of pre-existing biases in the training data or in the development teams. As AI systems increasingly establish themselves as an integral part of both private and economic spheres of life, science and practice must address the ethical framework for their use. This work therefore aims to make an economically and scientifically relevant contribution to this discourse, using the example of the Smart Living ecosystem, which touches the very private sphere of a diverse population. Requirements for AI systems in the Smart Living ecosystem with respect to non-discrimination were collected both from the literature and through expert interviews in order to derive recommendations for action for the development of AI services. These recommendations are primarily intended to support practitioners in adding ethical factors to their development processes for AI systems, thus advancing the development of non-discriminatory AI services.
- Conference paper: Which Rules Entail this Fact? - An Efficient Approach Using RDBMSs (BTW 2023, 2023) Gutberlet, Tim; Sauerbier, Janik. In this paper, we focus on the problem of identifying all rules that entail a certain target fact, given a knowledge graph and a set of previously learned rules. This problem is relevant in the context of link prediction and explainability. We propose an efficient approach using relational database technology, including indexing, filtering and pre-computing methods. Our experiments demonstrate the efficiency of our approach and the effect of various optimizations on datasets such as YAGO3-10, WN18RR and FB15k-237, using rules learned by the bottom-up rule learner AnyBURL.
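The basic idea of checking rule entailment with an RDBMS can be sketched as a join over a triples table: a rule entails the target fact if its body atoms join successfully with the head variables bound to the target's arguments. The schema, example rule, and index below are illustrative assumptions, not the paper's actual optimizations.

```python
# Hypothetical sketch: test whether one two-atom rule entails a target
# fact by translating the rule body into a SQL join over a triples table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("anna", "livesIn", "berlin"),
    ("berlin", "locatedIn", "germany"),
])
# An index on (predicate, subject, object) speeds up the body lookups.
conn.execute("CREATE INDEX idx_pso ON triples (p, s, o)")

# Rule: citizenOf(X, Y) <- livesIn(X, Z), locatedIn(Z, Y).
# It entails the target fact citizenOf(anna, germany) iff the body
# join has a match with X bound to 'anna' and Y bound to 'germany'.
target = ("anna", "germany")
row = conn.execute("""
    SELECT 1
    FROM triples t1 JOIN triples t2 ON t1.o = t2.s
    WHERE t1.p = 'livesIn' AND t2.p = 'locatedIn'
      AND t1.s = ? AND t2.o = ?
""", target).fetchone()
entailed = row is not None
```

Running one such (indexed) query per candidate rule is the naive baseline; the paper's contribution lies in filtering and pre-computation that avoid evaluating most rules at all.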
- Conference paper: Workshop “Modelle und KI” (Modellierung 2022 Satellite Events, 2022) Bork, Dominik; Fettke, Peter; Reimer, Ulrich. The workshop focuses on topics at the intersection of conceptual modeling and AI, exploring the value conceptual modeling brings to AI and, vice versa, the value AI can bring to conceptual modeling. This covers a wide range of issues, such as how to combine learned and manually engineered models, data-driven modeling support, automatic incremental model adaptation, and how to achieve explainability of learned models, e.g. by utilizing conceptual models as background knowledge.