Listing of Künstliche Intelligenz 34(2) - June 2020, by publication date
1 - 10 of 15
- Journal article: How and What Can Humans Learn from Being in the Loop? (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Abdel-Karim, Benjamin M.; Pfeuffer, Nicolas; Rohde, Gernot; Hinz, Oliver.
  This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop in a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions of human and machine predictions. We show in a small-scale user study with experts in the area of pneumology (1) that machine-learning based systems can classify X-rays with respect to diseases with a meaningful accuracy, (2) that humans partly use contradictions to reconsider their initial diagnosis, and (3) that this leads to a higher overlap between human and machine diagnoses at the end of the collaboration. We argue that disclosing information on diagnostic uncertainty can prompt the human expert to reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing in mind the differences in the reasoning and learning processes of humans and intelligent systems, we argue that interdisciplinary research teams have the best chances of tackling this undertaking and generating valuable insights.
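  The Contradiction Matrix is only characterised at this abstract level here. A minimal Python sketch of the underlying idea, with hypothetical class labels and data rather than anything from the study, reads it as a cross-tabulation of human versus machine diagnoses whose off-diagonal cells are the contradictions:

  ```python
  # Sketch only: tally agreement and contradiction between human and machine
  # diagnoses. The classes and predictions below are illustrative, not the paper's.
  from collections import Counter
  from itertools import product

  def contradiction_matrix(human_labels, machine_labels, classes):
      """Count how often the human chose the row class while the machine chose the column class."""
      counts = Counter(zip(human_labels, machine_labels))
      return {(h, m): counts.get((h, m), 0) for h, m in product(classes, classes)}

  classes = ["normal", "pneumonia"]            # hypothetical X-ray classes
  human   = ["normal", "pneumonia", "normal"]  # expert's initial diagnoses
  machine = ["normal", "normal", "pneumonia"]  # model predictions
  matrix  = contradiction_matrix(human, machine, classes)

  agreements     = sum(v for (h, m), v in matrix.items() if h == m)
  contradictions = sum(v for (h, m), v in matrix.items() if h != m)
  print(matrix, agreements, contradictions)
  ```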
- Journal article: Measuring the Quality of Explanations: The System Causability Scale (SCS) (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Holzinger, Andreas; Carrington, André; Müller, Heimo.
  Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML models, and a huge variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the input to, and the representations in, a neural network that caused a result can be highlighted. This is a first important step towards ensuring that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds the component of human expertise to AI/ML processes by enabling domain experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human–AI interfaces for explainable AI. In order to build effective and efficient interactive human–AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
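  The paper defines the scale itself; purely as an illustration, and assuming it follows the ten-item, five-point Likert format of the usability scale it adapts, scoring could look like the following sketch (the item wording and exact scoring rule are in the paper, not here):

  ```python
  # Hedged sketch: score a ten-item, five-point Likert questionnaire in the
  # style of the System Causability Scale. The actual items and scoring rule
  # are defined by Holzinger et al.; this only shows the general mechanics.
  def scs_score(ratings):
      assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
      return sum(ratings) / 50.0   # normalised so that the maximum is 1.0

  print(scs_score([4, 5, 4, 3, 5, 4, 4, 5, 3, 4]))  # -> 0.82
  ```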
- Journal article: On the Development of AI in Germany (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Bibel, Wolfgang.
  The article gives a brief account of the historical evolution of Artificial Intelligence in Germany, covering key steps from antiquity to the present state of the discipline. Its focus is on AI as a science and on organisational aspects rather than on technological ones or on specific AI subjects.
- Journal article: ITP: Inverse Trajectory Planning for Human Pose Prediction (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Peña, Pedro A.; Visser, Ubbo.
  Tracking and predicting humans in three-dimensional space, in order to know a person's location and heading in the environment, is a difficult task. If solved, however, it allows a robotic agent to know where it can safely be and to navigate the environment without posing any danger to the human it is interacting with. We propose a novel probabilistic framework for robotic systems in which multiple models can be fused into a circular probability map to forecast human poses. We developed and implemented the framework and tested it on Toyota's HSR robot and the Waymo Open Dataset. Our experiments show promising results.
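  The circular probability map is not specified in detail in this abstract; the sketch below is an assumed, simplified reading in which each model emits a distribution over discretised headings around the robot and the distributions are fused into one map:

  ```python
  # Illustrative only (not the authors' implementation): fuse several models'
  # heading forecasts into a single discretised circular probability map.
  import numpy as np

  N_BINS = 36                                   # 10-degree heading bins
  def fuse(circular_maps, weights=None):
      maps = np.asarray(circular_maps, dtype=float)        # shape (n_models, N_BINS)
      weights = np.ones(len(maps)) if weights is None else np.asarray(weights)
      fused = np.average(maps, axis=0, weights=weights)    # weighted mixture of the maps
      return fused / fused.sum()                           # renormalise to a distribution

  bins = np.arange(N_BINS)
  model_a = np.exp(-0.5 * ((bins - 9) / 2.0) ** 2)         # peaked around 90 degrees
  model_b = np.roll(model_a, 2)                            # second model, slightly shifted
  fused = fuse([model_a / model_a.sum(), model_b / model_b.sum()])
  print("most likely heading:", np.argmax(fused) * 360 / N_BINS, "degrees")
  ```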
- Journal article: Challenges in Interactive Machine Learning (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Teso, Stefano; Hinz, Oliver.
- Journal article: One Explanation Does Not Fit All (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Sokol, Kacper; Flach, Peter.
  The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose cases are being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promise of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and how to extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats. Furthermore, we discuss the challenges of mirroring the explainee's mental model, which is the main building block of intelligible human–machine interactions. We also deliberate on the risk of allowing the explainee to freely manipulate the explanations and thereby extract information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend "Wizard of Oz" studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
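  As a toy illustration of what interactively adjusting the conditional statements of a counterfactual can mean, the sketch below lets the explainee freeze some features and search only over the rest; the model, features and values are invented, not taken from the paper:

  ```python
  # Toy sketch of a personalised counterfactual: the explainee freezes some
  # features and asks "what if?" over the remaining ones. Everything here is
  # hypothetical; it is not the authors' system.
  from itertools import product

  def model(applicant):                       # stand-in black-box classifier
      return "approved" if applicant["income"] - 2 * applicant["debt"] >= 30 else "rejected"

  def counterfactual(x, frozen, search_space):
      """Smallest change over the non-frozen features that flips the prediction."""
      free = [f for f in search_space if f not in frozen]
      best = None
      for values in product(*(search_space[f] for f in free)):
          candidate = {**x, **dict(zip(free, values))}
          if model(candidate) != model(x):
              cost = sum(abs(candidate[f] - x[f]) for f in free)
              if best is None or cost < best[0]:
                  best = (cost, candidate)
      return best

  x = {"income": 40, "debt": 10}                                  # initially rejected
  space = {"income": range(30, 81, 5), "debt": range(0, 21, 5)}
  print(counterfactual(x, frozen={"debt"}, search_space=space))   # only income may change
  print(counterfactual(x, frozen={"income"}, search_space=space)) # follow-up "what if?" on debt
  ```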
- Journal article: Interactive Transfer Learning in Relational Domains (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Kumaraswamy, Raksha; Ramanan, Nandini; Odom, Phillip; Natarajan, Sriraam.
  We consider the problem of interactive transfer learning, in which a human expert provides guidance to a transfer learning algorithm that aims to transfer knowledge from a source task to a target task. One of the salient features of our approach is that we consider cross-domain transfer, i.e., transfer of knowledge across unrelated domains. We present an intuitive interface that allows an expert to refine the knowledge in the target task based on his or her expertise. Our results show that such guided transfer can effectively reduce the search space, thus improving the efficiency and effectiveness of the transfer process.
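  How the expert guidance enters the transfer process is only hinted at in this abstract; one hedged way to picture the search-space reduction is the expert accepting or rejecting candidate source-to-target mappings, as in this made-up sketch:

  ```python
  # Assumed, simplified picture (not the authors' system): an expert filters
  # candidate predicate mappings from the source to the target domain, which
  # shrinks the search space the transfer algorithm must explore.
  candidate_mappings = [
      ("advisedBy(S, P)",   "reportsTo(E, M)"),   # hypothetical source -> target predicates
      ("advisedBy(S, P)",   "locatedIn(E, C)"),
      ("publication(P, X)", "project(M, X)"),
  ]

  def expert_filter(mappings, approve):
      """Keep only the mappings the human expert approves."""
      return [m for m in mappings if approve(m)]

  # Simulated expert feedback; in an interactive setting this would be a dialog.
  approved = expert_filter(candidate_mappings,
                           approve=lambda m: "locatedIn" not in m[1])
  print(approved)
  ```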
- Journal article: eXplainable Cooperative Machine Learning with NOVA (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Baur, Tobias; Heimerl, Alexander; Lingenfelser, Florian; Wagner, Johannes; Valstar, Michel F.; Schuller, Björn; André, Elisabeth.
  In the following article, we introduce a novel workflow, which we subsume under the term "explainable cooperative machine learning", and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the 'human in the loop' when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators can join their workforce. A main aspect is the possibility of applying semi-supervised active learning techniques already during the annotation process by automatically pre-labelling data, which drastically accelerates the annotation process. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow is able to speed up the annotation process, and we further argue that by providing additional visual explanations, annotators come to understand the decision-making process as well as the trustworthiness of their trained machine learning models.
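  The pre-labelling step generalises to any confidence-thresholded classifier. The following sketch shows the generic pattern with scikit-learn and synthetic data; it is not the NOVA code, and the threshold is an arbitrary illustrative choice:

  ```python
  # Generic sketch of confidence-based pre-labelling: a model trained on the
  # already-annotated data proposes labels for the rest, and only low-confidence
  # segments are routed back to the human annotators.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  rng = np.random.default_rng(0)
  X_labelled   = rng.normal(size=(200, 8))
  y_labelled   = (X_labelled[:, 0] > 0).astype(int)     # synthetic annotations
  X_unlabelled = rng.normal(size=(1000, 8))

  clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_labelled, y_labelled)
  confidence = clf.predict_proba(X_unlabelled).max(axis=1)

  THRESHOLD = 0.8                                       # illustrative cut-off
  pre_labelled = np.where(confidence >= THRESHOLD)[0]   # accepted automatically
  needs_review = np.where(confidence <  THRESHOLD)[0]   # shown to the annotator
  print(f"pre-labelled: {len(pre_labelled)}, sent back for review: {len(needs_review)}")
  ```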
- Journal article: AI in Medicine, Covid-19 and Springer Nature's Open Access Agreement (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Sonntag, Daniel.
- Journal article: Mutual Explanations for Cooperative Decision Making in Medicine (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Schmid, Ute; Finzel, Bettina.
  Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, so as to combine the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations; they can correct classification decisions and, in addition, can also correct the explanations. Thereby, expert knowledge can be taken into account in the form of constraints for model adaptation.
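  How a corrected explanation becomes a constraint is spelled out in the paper; as a loose Python illustration only, an expert's objection to part of a learned rule can be treated as a forbidden predicate that filters the hypothesis space (the rules and predicate names below are invented):

  ```python
  # Simplified illustration, not the Aleph extension itself: an expert's
  # correction is turned into a constraint that removes offending clauses
  # from the set of learned rules.
  learned_rules = [
      "tumour(X) :- irregular_margin(X), high_density(X).",
      "tumour(X) :- image_artifact(X).",          # expert flags this explanation as wrong
  ]

  constraints = ["image_artifact"]                # forbidden body predicate from the correction

  def satisfies_constraints(rule, constraints):
      body = rule.split(":-", 1)[1]
      return not any(pred in body for pred in constraints)

  revised = [r for r in learned_rules if satisfies_constraints(r, constraints)]
  print(revised)
  ```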