
Künstliche Intelligenz 34(2) - June 2020



Latest Publications

1 - 10 of 15
  • Journal Article
    Interactive Transfer Learning in Relational Domains
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Kumaraswamy, Raksha; Ramanan, Nandini; Odom, Phillip; Natarajan, Sriraam
    We consider the problem of interactive transfer learning, where a human expert provides guidance to a transfer learning algorithm that aims to transfer knowledge from a source task to a target task. One of the salient features of our approach is that we consider cross-domain transfer, i.e., transfer of knowledge across unrelated domains. We present an intuitive interface that allows an expert to refine the knowledge in the target task based on his/her expertise. Our results show that such guided transfer can effectively reduce the search space, thus improving the efficiency and effectiveness of the transfer process.
  • Journal Article
    Dealing with Mislabeling via Interactive Machine Learning
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Zhang, Wanyi; Passerini, Andrea; Giunchiglia, Fausto
    We propose an interactive machine learning framework where the machine questions the user feedback when it realizes it is inconsistent with the knowledge previously accumulated. The key idea is that the machine uses its available knowledge to check the correctness of both its own and the user's labeling. The proposed architecture and algorithms run through a series of modes with progressively higher confidence and feature a conflict resolution component. The proposed solution is tested in a project on university student life where the goal is to recognize tasks like user location and transportation mode from sensor data. The results highlight the unexpected extreme pervasiveness of annotation mistakes and the advantages provided by skeptical learning.
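    The skeptical-learning idea described above, in which the machine challenges user labels that conflict with its own confident predictions, can be sketched as follows. The function name, label values, and threshold are illustrative assumptions, not the paper's API:

    ```python
    # Hypothetical sketch of skeptical learning: accept the user's label unless
    # the model confidently disagrees, in which case the user is re-queried.
    def skeptical_check(model_label, model_confidence, user_label, threshold=0.9):
        """Return (label_to_keep, should_requery_user)."""
        if model_label == user_label:
            return user_label, False       # agreement: accept silently
        if model_confidence >= threshold:
            return model_label, True       # likely annotation mistake: ask again
        return user_label, False           # low confidence: trust the user

    # A confident model contradicting the annotator triggers a re-query.
    label, requery = skeptical_check("walking", 0.95, "driving")
    ```

    In a full system the threshold would itself rise as the machine accumulates knowledge, matching the progressively more skeptical modes mentioned in the abstract.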
  • Journal Article
    On the Development of AI in Germany
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Bibel, Wolfgang
    The article gives a brief account of the historical evolution of Artificial Intelligence in Germany, covering key steps from antiquity to the present state of the discipline. Its focus is on AI as a science and on organisational aspects rather than on technological ones or on specific AI subjects.
  • Journal Article
    Just-In-Time Constraint-Based Inference for Qualitative Spatial and Temporal Reasoning
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Sioutis, Michael
    We discuss a research roadmap for going beyond the state of the art in qualitative spatial and temporal reasoning (QSTR). Simply put, QSTR is a major field of study in Artificial Intelligence that abstracts from numerical quantities of space and time by using qualitative descriptions instead (e.g., precedes, contains, is left of); thus, it provides a concise framework that allows for rather inexpensive reasoning about entities located in space or time. Applications of QSTR can be found in a plethora of areas and domains such as smart environments, intelligent vehicles, and unmanned aircraft systems. Our discussion involves researching novel local consistencies in the aforementioned discipline, defining dynamic algorithms pertaining to these consistencies that can allow for efficient reasoning over changing spatio-temporal information, and leveraging the structures of the locally consistent related problems with regard to novel decomposability and theoretical tractability properties. Ultimately, we argue for pushing the envelope in QSTR via defining tools for tackling dynamic variants of the fundamental reasoning problems in this discipline, i.e., problems stated in terms of changing input data. Indeed, time is a continuous flow and spatial objects can change (e.g., in shape, size, or structure) as time passes; therefore, it is pertinent to be able to efficiently reason about dynamic spatio-temporal data. Finally, these tools are to be integrated into the larger context of highly active areas such as neuro-symbolic learning and reasoning, planning, data mining, and robotic applications. Our final goal is to inspire further discussion in the community about constraint-based QSTR in general, and the possible lines of future research that we outline here in particular.
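    The constraint-based reasoning at the heart of QSTR can be illustrated with a toy calculus. The two-relation composition table below is a deliberately minimal assumption; real calculi such as Allen's interval algebra have far richer tables, but the consistency-checking principle is the same:

    ```python
    # Toy qualitative calculus over time points with relations "<" (precedes)
    # and ">" (follows). Composing two relations yields the set of relations
    # possible between the endpoints; an empty gain of information is the
    # full universal set.
    COMPOSE = {
        ("<", "<"): {"<"},
        (">", ">"): {">"},
        ("<", ">"): {"<", "=", ">"},   # no information
        (">", "<"): {"<", "=", ">"},   # no information
    }

    def consistent(r_ab, r_bc, r_ac):
        """Is the asserted relation A-C compatible with composing A-B and B-C?"""
        return r_ac in COMPOSE[(r_ab, r_bc)]

    consistent("<", "<", "<")   # A < B < C entails A < C
    consistent("<", "<", ">")   # contradiction
    ```

    Local-consistency algorithms for QSTR repeatedly apply such composition checks to prune impossible relations from a constraint network.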
  • Journal Article
    One Explanation Does Not Fit All
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Sokol, Kacper; Flach, Peter
    The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations—a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats. Furthermore, we discuss the challenges of mirroring the explainee's mental model, which is the main building block of intelligible human–machine interactions. We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extract information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend "Wizard of Oz" studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
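    The follow-up "What if?" interaction described in the abstract can be sketched as a single round against a black-box predictor. The loan-approval predictor and feature names here are hypothetical stand-ins, not the authors' system:

    ```python
    # Sketch of one interactive counterfactual query: perturb one feature of
    # an instance and re-query the black-box model.
    def what_if(predict, instance, feature, new_value):
        """Return the counterfactual variant and its prediction."""
        variant = dict(instance, **{feature: new_value})   # copy + override
        return variant, predict(variant)

    # Toy black box: approve a loan iff income >= 50.
    predict = lambda x: "approved" if x["income"] >= 50 else "rejected"
    original = {"income": 40, "age": 30}
    variant, outcome = what_if(predict, original, "income", 60)
    ```

    An interactive explainer would let the explainee choose which conditional statement (feature and value) to adjust, rather than searching for counterfactuals fully automatically; the abstract's warning about model extraction applies precisely because each such query leaks a little information about the decision boundary.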
  • Journal Article
    eXplainable Cooperative Machine Learning with NOVA
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Baur, Tobias; Heimerl, Alexander; Lingenfelser, Florian; Wagner, Johannes; Valstar, Michel F.; Schuller, Björn; André, Elisabeth
    In the following article, we introduce a novel workflow, which we subsume under the term "explainable cooperative machine learning", and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the 'human in the loop' when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators join their workforce. A main aspect is the possibility of applying semi-supervised active learning techniques already during the annotation process, by making it possible to pre-label data automatically, resulting in a drastic acceleration of the annotation process. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow is able to speed up the annotation process, and further argue that by providing additional visual explanations annotators get to understand the decision-making process as well as the trustworthiness of their trained machine learning models.
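    The confidence-gated pre-labeling step described above can be sketched as follows. The function names and the threshold value are illustrative assumptions rather than NOVA's actual interface:

    ```python
    # Sketch of confidence-based pre-labeling: a model proposes labels, and
    # only predictions above a confidence threshold are pre-filled for the
    # annotator; the rest stay unlabeled for manual annotation.
    def prelabel(samples, predict_proba, threshold=0.8):
        """Return (index, label, confidence) for samples the model is sure about."""
        proposals = []
        for i, sample in enumerate(samples):
            label, confidence = predict_proba(sample)
            if confidence >= threshold:
                proposals.append((i, label, confidence))
        return proposals
    ```

    In an active-learning loop the model would be retrained on the growing pool of confirmed labels, so the fraction of automatically pre-labeled data rises as annotation proceeds.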
  • Journal Article
    Measuring the Quality of Explanations: The System Causability Scale (SCS)
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Holzinger, Andreas; Carrington, André; Müller, Heimo
    Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and there is already a huge variety of methods. For example, layer-wise relevance propagation can highlight the relevant parts of the inputs to, and representations in, a neural network that caused a result. This is a first important step to ensure that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and is of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human–AI interfaces for explainable AI. In order to build effective and efficient interactive human–AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely-accepted usability scale.
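    Since the scale adapts concepts from a widely-accepted usability scale, its scoring can be sketched in the style of such Likert-based instruments. The item count, rating range, and normalisation below are assumptions for illustration; the actual SCS items and scoring are defined in the paper:

    ```python
    # Hypothetical Likert-scale scoring sketch: average ten 1-5 ratings and
    # normalise to [0, 1], so 1.0 is the best possible explanation quality.
    def scale_score(ratings):
        """Normalised mean of 1-5 Likert ratings."""
        if not ratings or not all(1 <= r <= 5 for r in ratings):
            raise ValueError("ratings must be on a 1-5 Likert scale")
        return sum(ratings) / (5 * len(ratings))

    scale_score([5] * 10)   # all items rated 5: score 1.0
    scale_score([3] * 10)   # neutral throughout: score 0.6
    ```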
  • Journal Article
    ITP: Inverse Trajectory Planning for Human Pose Prediction
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Peña, Pedro A.; Visser, Ubbo
    Tracking and predicting humans in three-dimensional space, in order to know a person's location and heading in the environment, is a difficult task. If solved, however, it would allow a robotic agent to know where it can safely be and to navigate the environment without posing any danger to the human it is interacting with. We propose a novel probabilistic framework for robotic systems in which multiple models can be fused into a circular probability map to forecast human poses. We developed and implemented the framework and tested it on Toyota's HSR robot and the Waymo Open Dataset. Our experiments show promising results.
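    The idea of fusing several models into a circular probability map can be sketched with a discretized circle of heading bins. The bin count and the product-based fusion rule are assumptions for illustration, not the paper's actual formulation:

    ```python
    # Sketch: fuse per-model heading distributions (each a list of `bins`
    # probabilities over the circle) by element-wise product, then
    # renormalise so the map sums to 1.
    def circular_map(predictions):
        bins = len(predictions[0])
        fused = [1.0] * bins
        for p in predictions:
            fused = [f * q for f, q in zip(fused, p)]
        total = sum(fused)
        return [f / total for f in fused]

    def peak_heading(prob_map):
        """Most likely heading angle in degrees (bin centre at bin start)."""
        bins = len(prob_map)
        return (360 / bins) * max(range(bins), key=prob_map.__getitem__)
    ```

    With 36 bins, each bin covers 10 degrees; two models that both favour bin 9 produce a fused map peaking at a 90-degree heading. Product fusion sharpens agreement between models and suppresses headings any single model rules out.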
  • Journal Article
    How and What Can Humans Learn from Being in the Loop?
    (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Abdel-Karim, Benjamin M.; Pfeuffer, Nicolas; Rohde, Gernot; Hinz, Oliver
    This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop in a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions of human and machine predictions. We show in a small-scale user study with experts in the area of pneumology (1) that machine-learning based systems can classify X-rays with respect to diseases with a meaningful accuracy, (2) that humans partly use contradictions to reconsider their initial diagnosis, and (3) that this leads to a higher overlap between human and machine diagnoses at the end of the collaboration situation. We argue that disclosure of information on diagnosis uncertainty can be beneficial to make the human expert reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In the light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing the differences in reasoning and learning processes of humans and intelligent systems in mind, we argue that interdisciplinary research teams have the best chances at tackling this undertaking and generating valuable insights.
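    A contradiction matrix in the spirit described above can be sketched as a cross-tabulation of human and machine labels, with agreements on the diagonal and contradictions off it. The disease labels are illustrative, not drawn from the study:

    ```python
    from collections import Counter

    # Cross-tabulate paired human and machine predictions: the count of each
    # (human_label, machine_label) pair forms the contradiction matrix.
    def contradiction_matrix(human, machine):
        return Counter(zip(human, machine))

    def disagreement_rate(matrix):
        """Fraction of cases where human and machine labels differ."""
        total = sum(matrix.values())
        contradictions = sum(c for (h, m), c in matrix.items() if h != m)
        return contradictions / total

    m = contradiction_matrix(["covid", "healthy", "covid"],
                             ["covid", "covid", "covid"])
    ```

    In the study's setting, a falling disagreement rate over the course of the collaboration would correspond to the growing overlap between human and machine diagnoses reported in finding (3).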