Listing Künstliche Intelligenz 34(2) - June 2020 by Title
1 - 10 of 15
- Journal article: Active and Incremental Learning with Weak Supervision (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Brust, Clemens-Alexander; Käding, Christoph; Denzler, Joachim. Large amounts of labeled training data are one of the main contributors to the great success that deep models have achieved in the past. Label acquisition for tasks other than benchmarks can pose a challenge due to requirements of both funding and expertise. By selecting unlabeled examples that are promising in terms of model improvement and only asking for the respective labels, active learning can increase the efficiency of the labeling process in terms of time and cost. In this work, we describe combinations of an incremental learning scheme and methods of active learning. These allow for continuous exploration of newly observed unlabeled data. We describe selection criteria based on model uncertainty as well as expected model output change (EMOC). An object detection task is evaluated in a continuous exploration context on the PASCAL VOC dataset. We also validate a weakly supervised system based on active and incremental learning in a real-world biodiversity application where images from camera traps are analyzed. Labeling only 32 images by accepting or rejecting proposals generated by our method yields an increase in accuracy from 25.4% to 42.6%. (A minimal code sketch of uncertainty-based selection follows this list.)
- Journal article: AI in Medicine, Covid-19 and Springer Nature's Open Access Agreement (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Sonntag, Daniel.
- Journal article: Challenges in Interactive Machine Learning (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Teso, Stefano; Hinz, Oliver.
- Journal article: Dealing with Mislabeling via Interactive Machine Learning (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Zhang, Wanyi; Passerini, Andrea; Giunchiglia, Fausto. We propose an interactive machine learning framework in which the machine questions the user's feedback when it realizes that this feedback is inconsistent with the knowledge previously accumulated. The key idea is that the machine uses its available knowledge to check the correctness of both its own and the user's labeling. The proposed architecture and algorithms run through a series of modes with progressively higher confidence and feature a conflict resolution component. The proposed solution is tested in a project on university student life where the goal is to recognize tasks such as user location and transportation mode from sensor data. The results highlight the unexpectedly extreme pervasiveness of annotation mistakes and the advantages provided by skeptical learning. (See the skeptical-check sketch after this list.)
- Journal article: eXplainable Cooperative Machine Learning with NOVA (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Baur, Tobias; Heimerl, Alexander; Lingenfelser, Florian; Wagner, Johannes; Valstar, Michel F.; Schuller, Björn; André, Elisabeth. In the following article, we introduce a novel workflow, which we subsume under the term “explainable cooperative machine learning”, and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the ‘human in the loop’ when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators join their workforce. A main aspect is the possibility of applying semi-supervised active learning techniques already during the annotation process by making it possible to pre-label data automatically, resulting in a drastic acceleration of the annotation process. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow is able to speed up the annotation process, and further argue that the additional visual explanations help annotators understand the decision-making process as well as the trustworthiness of their trained machine learning models. (A confidence-threshold pre-labeling sketch follows this list.)
- Journal article: How and What Can Humans Learn from Being in the Loop? (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Abdel-Karim, Benjamin M.; Pfeuffer, Nicolas; Rohde, Gernot; Hinz, Oliver. This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop in a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions of human and machine predictions. We show in a small-scale user study with experts in the area of pneumology (1) that machine-learning-based systems can classify X-rays with respect to diseases with meaningful accuracy, (2) that humans partly use contradictions to reconsider their initial diagnosis, and (3) that this leads to a higher overlap between human and machine diagnoses at the end of the collaboration. We argue that disclosing information on diagnosis uncertainty can be beneficial in making the human expert reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In the light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing in mind the differences in the reasoning and learning processes of humans and intelligent systems, we argue that interdisciplinary research teams have the best chances of tackling this undertaking and generating valuable insights. (See the contradiction-matrix sketch after this list.)
- Journal article: Interactive Transfer Learning in Relational Domains (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Kumaraswamy, Raksha; Ramanan, Nandini; Odom, Phillip; Natarajan, Sriraam. We consider the problem of interactive transfer learning, where a human expert provides guidance to the transfer learning algorithm that aims to transfer knowledge from a source task to a target task. One of the salient features of our approach is that we consider cross-domain transfer, i.e., transfer of knowledge across unrelated domains. We present an intuitive interface that allows an expert to refine the knowledge in the target task based on his or her expertise. Our results show that such guided transfer can effectively reduce the search space, thus improving the efficiency and effectiveness of the transfer process.
- Journal article: ITP: Inverse Trajectory Planning for Human Pose Prediction (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Peña, Pedro A.; Visser, Ubbo. Tracking and predicting humans in three-dimensional space in order to know the location and heading of the human in the environment is a difficult task. If solved, however, it would allow a robotic agent to know where it can safely be and to navigate the environment without posing any danger to the human it is interacting with. We propose a novel probabilistic framework for robotic systems in which multiple models can be fused into a circular probability map to forecast human poses. We developed and implemented the framework and tested it on Toyota’s HSR robot and the Waymo Open Dataset. Our experiments show promising results. (A circular-map fusion sketch follows this list.)
- Journal article: Just-In-Time Constraint-Based Inference for Qualitative Spatial and Temporal Reasoning (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Sioutis, Michael. We discuss a research roadmap for going beyond the state of the art in qualitative spatial and temporal reasoning (QSTR). Simply put, QSTR is a major field of study in Artificial Intelligence that abstracts from numerical quantities of space and time by using qualitative descriptions instead (e.g., precedes, contains, is left of); thus, it provides a concise framework that allows for rather inexpensive reasoning about entities located in space or time. Applications of QSTR can be found in a plethora of areas and domains such as smart environments, intelligent vehicles, and unmanned aircraft systems. Our discussion involves researching novel local consistencies in the aforementioned discipline, defining dynamic algorithms pertaining to these consistencies that allow for efficient reasoning over changing spatio-temporal information, and leveraging the structures of the locally consistent related problems with regard to novel decomposability and theoretical tractability properties. Ultimately, we argue for pushing the envelope in QSTR by defining tools for tackling dynamic variants of the fundamental reasoning problems in this discipline, i.e., problems stated in terms of changing input data. Indeed, time is a continuous flow and spatial objects can change (e.g., in shape, size, or structure) as time passes; therefore, it is pertinent to be able to efficiently reason about dynamic spatio-temporal data. Finally, these tools are to be integrated into the larger context of highly active areas such as neuro-symbolic learning and reasoning, planning, data mining, and robotic applications. Our final goal is to inspire further discussion in the community about constraint-based QSTR in general, and the possible lines of future research that we outline here in particular. (See the toy constraint-composition sketch after this list.)
- Journal article: Measuring the Quality of Explanations: The System Causability Scale (SCS) (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Holzinger, Andreas; Carrington, André; Müller, Heimo. Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and a huge variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the inputs to, and representations in, a neural network that caused a result can be highlighted. This is a first important step to ensure that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling users to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human–AI interfaces for explainable AI. In order to build effective and efficient interactive human–AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale. (A usability-style scoring sketch follows this list.)
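The selection step described in "Active and Incremental Learning with Weak Supervision" can be illustrated with a minimal uncertainty-based ranking. This is a generic Python sketch, not the authors' code: the EMOC criterion is omitted, the 1-vs-2 margin shown here is just one common uncertainty measure, and the function names and labeling budget are made up for illustration.

```python
import numpy as np

def margin_uncertainty(probs: np.ndarray) -> np.ndarray:
    """1-vs-2 margin: a small gap between the two most probable
    classes means the model is uncertain about the example."""
    part = np.sort(probs, axis=1)
    return 1.0 - (part[:, -1] - part[:, -2])

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain unlabeled examples."""
    scores = margin_uncertainty(probs)
    return np.argsort(-scores)[:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(100, 5))            # hypothetical model outputs
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(select_for_labeling(probs, budget=8))   # ask an annotator for these
```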
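The skeptical check in "Dealing with Mislabeling via Interactive Machine Learning" can be pictured as comparing the user's label against the model's own prediction and querying back only when a confident model disagrees. A minimal sketch under that reading; the threshold and function name are hypothetical, not taken from the paper.

```python
def skeptical_check(user_label: str,
                    predicted_label: str,
                    confidence: float,
                    threshold: float = 0.9) -> str:
    """Decide what to do with a user-provided annotation.

    Returns 'accept' when the labels agree or the model is unsure,
    and 'question_user' when a confident model contradicts the user.
    The 0.9 threshold is an assumption for illustration only.
    """
    if user_label == predicted_label or confidence < threshold:
        return "accept"
    return "question_user"

if __name__ == "__main__":
    print(skeptical_check("walking", "driving", confidence=0.97))  # question_user
    print(skeptical_check("walking", "driving", confidence=0.55))  # accept
```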
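NOVA's pre-labeling idea, as summarized in the abstract above, amounts to letting a trained model propose annotations together with a confidence value so that annotators only confirm or correct them. The sketch below illustrates such a loop under assumed names and an assumed 0.8 review threshold; it is not NOVA's API.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Proposal:
    segment_id: int
    label: str
    confidence: float   # shown to the annotator alongside the prediction

def pre_label(segments: List[int],
              predict: Callable[[int], Tuple[str, float]],
              review_below: float = 0.8) -> Tuple[List[Proposal], List[Proposal]]:
    """Split model proposals into auto-accepted and to-be-reviewed ones.

    `predict` stands in for a trained classifier; the 0.8 review
    threshold is a hypothetical choice, not a value from the paper.
    """
    accepted, to_review = [], []
    for seg in segments:
        label, conf = predict(seg)
        proposal = Proposal(seg, label, conf)
        (accepted if conf >= review_below else to_review).append(proposal)
    return accepted, to_review

if __name__ == "__main__":
    fake_model = lambda seg: ("smile" if seg % 2 else "neutral", 0.6 + 0.05 * (seg % 9))
    done, queue = pre_label(list(range(12)), fake_model)
    print(len(done), "auto-labeled,", len(queue), "sent to annotators")
```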
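The Contradiction Matrix from "How and What Can Humans Learn from Being in the Loop?" cross-tabulates human and machine predictions, so agreements land on the diagonal and contradictions off it. A minimal sketch of how such a table could be built; the label set and data below are invented.

```python
from collections import Counter
from typing import List, Tuple

def contradiction_matrix(pairs: List[Tuple[str, str]], labels: List[str]):
    """Rows: human diagnosis, columns: machine diagnosis.
    Off-diagonal cells count contradictions between the two."""
    counts = Counter(pairs)
    return [[counts[(h, m)] for m in labels] for h in labels]

if __name__ == "__main__":
    labels = ["pneumonia", "no finding"]            # hypothetical label set
    pairs = [("pneumonia", "pneumonia"),
             ("pneumonia", "no finding"),           # a contradiction
             ("no finding", "no finding"),
             ("no finding", "pneumonia")]           # another contradiction
    for row_label, row in zip(labels, contradiction_matrix(pairs, labels)):
        print(row_label, row)
```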
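The circular probability map in the ITP abstract can be read as a discretized distribution over headings around the tracked person into which several predictors are fused. The sketch below shows one plausible fusion scheme (a weighted mixture over angular bins); the bin count, weights, and the mixture itself are assumptions rather than details from the paper.

```python
import numpy as np

def fuse_circular_maps(maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Fuse per-model distributions over angular bins into one map.

    `maps` has shape (n_models, n_bins); each row sums to 1.
    A simple weighted mixture is used here; the actual ITP framework
    may combine models differently.
    """
    fused = weights @ maps              # weighted sum over models
    return fused / fused.sum()          # renormalize to a distribution

if __name__ == "__main__":
    n_bins = 36                                           # 10-degree bins (assumed)
    bins = np.arange(n_bins)
    model_a = np.exp(-0.5 * ((bins - 9) / 2.0) ** 2)      # peaked near 90 degrees
    model_b = np.exp(-0.5 * ((bins - 12) / 4.0) ** 2)     # broader, near 120 degrees
    maps = np.stack([model_a / model_a.sum(), model_b / model_b.sum()])
    fused = fuse_circular_maps(maps, weights=np.array([0.7, 0.3]))
    print("most likely heading bin:", int(fused.argmax()))
```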
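Constraint-based QSTR, as outlined in the roadmap above, reasons with symbolic relations such as precedes or contains by composing known relations and intersecting the result with existing constraints, which is the core of path-consistency-style local consistencies. A toy sketch over the three point relations <, =, > between time points; the composition table is the standard one for this small calculus, not material quoted from the paper.

```python
# Toy point-algebra reasoning: a relation between two time points is a
# subset of {'<', '=', '>'}; composing A r1 B and B r2 C constrains A vs C.
UNIVERSAL = frozenset({"<", "=", ">"})
COMPOSE = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): set(UNIVERSAL),
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): set(UNIVERSAL), (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1: set, r2: set) -> set:
    """Composition of two (possibly disjunctive) point relations."""
    out = set()
    for a in r1:
        for b in r2:
            out |= COMPOSE[(a, b)]
    return out

def refine(r_ac: set, r_ab: set, r_bc: set) -> set:
    """One path-consistency step: intersect A-C with the composition of A-B and B-C."""
    return r_ac & compose(r_ab, r_bc)

if __name__ == "__main__":
    # A precedes B and B precedes C; an unconstrained A-C edge is refined to '<'.
    print(refine(set(UNIVERSAL), {"<"}, {"<"}))   # {'<'}
    # Inconsistent input yields the empty relation.
    print(refine({">"}, {"<"}, {"<"}))            # set()
```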
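The System Causability Scale follows the pattern of usability questionnaires such as SUS: a fixed set of Likert items whose ratings are aggregated into a single score. The sketch below assumes ten items rated 1 to 5 and normalizes their sum by the maximum attainable value; the actual item wording and scoring rule are defined in the paper and may differ.

```python
from typing import Sequence

def scs_like_score(ratings: Sequence[int], max_rating: int = 5) -> float:
    """Aggregate Likert ratings into a 0-1 score by normalizing their sum.

    This mirrors the structure of usability-style scales; the exact SCS
    items and scoring are given by Holzinger et al., not by this sketch.
    """
    if not all(1 <= r <= max_rating for r in ratings):
        raise ValueError("each rating must lie between 1 and max_rating")
    return sum(ratings) / (max_rating * len(ratings))

if __name__ == "__main__":
    example = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]   # made-up responses to ten items
    print(f"score: {scs_like_score(example):.2f}")
```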