Listing by keyword "explainable AI"
1 - 4 of 4
- Journal article: "Explainable AI and Multi-Modal Causability in Medicine" (i-com: Vol. 19, No. 3, 2021). Holzinger, Andreas.
  Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex "black boxes", which make it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g. to highlight which input parameters are relevant for a result; however, in the medical domain there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability to causability and to allow a domain expert to ask questions to understand why an AI came up with a result, and also to ask "what-if" questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result. (A minimal counterfactual probe is sketched after this list.)
- Conference paper: "Explainable AI: Leaf-based medicinal plant classification using knowledge distillation" (44. GIL-Jahrestagung, Biodiversität fördern durch digitale Landwirtschaft, 2024). Mengisti Berihu Girmay; Samuel Obeng.
  Medicinal plants are used in many parts of the world by the pharmaceutical industry to obtain medicines. They are traditionally used especially in developing countries, where they provide cost-effective treatments. However, accurate identification of medicinal plants can be challenging. This study uses a deep neural network and a knowledge distillation approach based on a dataset of 4,026 images of 8 species of leaf-based Ethiopian medicinal plants. Knowledge from a ResNet50 teacher model was transferred to a lightweight 2-layer student model. The student model, optimized for efficiency, achieved 96.91% accuracy, coming close to the teacher model's 98.98% accuracy on unseen test data. Training relied on optimization strategies including oversampling, data augmentation, and learning-rate adjustment. To understand the model's decisions, the post-hoc explanation techniques LIME (Local Interpretable Model-agnostic Explanations) and Grad-CAM (Gradient-weighted Class Activation Mapping) were used to highlight influential image regions that contributed to classification. (A minimal distillation-loss sketch follows after this list.)
- Conference paper: "JumpXClass: Explainable AI for Jump Classification in Trampoline Sports" (BTW 2023, 2023). Woltmann, Lucas; Ferger, Katja; Hartmann, Claudio; Lehner, Wolfgang.
  Movement patterns in trampoline gymnastics have become faster and more complex with the increase in athletes' capabilities. This makes human assessment of jump type, pose, and quality during training or competitions very difficult or even impossible. To counteract this development, data-driven solutions are expected to improve training. In recent work, sensor measurements and machine learning are used to automatically predict jumps and give feedback to athletes and trainers. However, machine learning models, and especially neural networks, are black boxes most of the time. Therefore, athletes and trainers cannot gain any insight into the jump from the machine learning-based jump classification. To better understand jump execution during training, we propose JumpXClass: a tool for automatic machine learning-based jump classification with explainable artificial intelligence. Using elements of explainable artificial intelligence can improve the training experience for athletes and trainers. This work demonstrates a live system capable of classifying and explaining jumps by trampoline athletes. (A generic attribution sketch for sensor data follows after this list.)
- Journal article: "Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research" (it - Information Technology: Vol. 64, No. 1-2, 2022). Weitz, Katharina.
  Human-Centered AI is a widely requested goal for AI applications. To reach this goal, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive these techniques. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.
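
The "what-if" questioning Holzinger describes can be made concrete with a tiny counterfactual probe: perturb one input of a trained model and observe whether the prediction changes. A minimal sketch, assuming a toy tabular model; the data and feature semantics below are invented for illustration and are not from the paper.

```python
# Minimal "what-if" (counterfactual) probe on a toy tabular model.
# All data and feature meanings are hypothetical, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # two synthetic input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground-truth rule
model = LogisticRegression().fit(X, y)

case = np.array([[0.2, -0.1]])
print("original prediction:", model.predict(case)[0])

# "What if feature 0 had been lower?" -- sweep it and watch the prediction.
for delta in (-0.5, -1.0, -1.5):
    counterfactual = case.copy()
    counterfactual[0, 0] += delta
    print(f"feature 0 {delta:+.1f}:", model.predict(counterfactual)[0])
```

The point of such a probe is exactly the one the abstract makes: it answers "why" questions in terms of which factors would have had to differ for the outcome to change.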
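The teacher-student setup in the plant-classification abstract follows the standard knowledge-distillation recipe: the student is trained against temperature-softened teacher outputs in addition to the usual hard-label loss. A minimal sketch, assuming the common Hinton-style formulation; the student layout, input size, temperature, and loss weighting below are assumptions, not values from the paper.

```python
# Sketch of a knowledge-distillation objective: a small student mimics
# temperature-softened teacher outputs. Hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, T, ALPHA = 8, 4.0, 0.7   # 8 plant species per the abstract; T, ALPHA assumed

student = nn.Sequential(               # lightweight 2-layer student (layout assumed)
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, NUM_CLASSES),
)

def distillation_loss(student_logits, teacher_logits, labels):
    # Soft-target term: KL between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    hard = F.cross_entropy(student_logits, labels)  # ground-truth labels
    return ALPHA * soft + (1 - ALPHA) * hard

# Stand-in batch; in practice teacher_logits come from a frozen ResNet50.
x = torch.randn(4, 3, 64, 64)
teacher_logits = torch.randn(4, NUM_CLASSES)
labels = torch.randint(0, NUM_CLASSES, (4,))
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
```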
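For sensor-based jump classification, one common XAI building block is gradient saliency: attributing the predicted class score back to individual timesteps of the input signal. The JumpXClass abstract does not specify its explanation method, so the sketch below is a generic illustration with an assumed toy model, channel count, and data shape.

```python
# Generic gradient-based saliency for a time-series classifier, as one way to
# surface which sensor readings drive a jump prediction. Model and shapes
# are assumptions; the paper's actual model and XAI method may differ.
import torch
import torch.nn as nn

model = nn.Sequential(                 # toy 1D CNN over (channels, time)
    nn.Conv1d(6, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 10),                 # e.g. 10 jump types (assumed)
)

x = torch.randn(1, 6, 200, requires_grad=True)  # 6 IMU channels, 200 samples
logits = model(x)
logits[0, logits.argmax()].backward()           # gradient of top class w.r.t. input

saliency = x.grad.abs().sum(dim=1)              # per-timestep relevance
top = saliency[0].topk(5).indices.sort().values
print("most influential timesteps:", top.tolist())
```

Highlighting the most influential timesteps of the jump signal is one way such a system could give athletes and trainers the insight the abstract calls for.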