Browsing by Subject "explainable AI"
1 - 8 of 8
- Journal article: The European commitment to human-centered technology: the integral role of HCI in the EU AI Act’s success (i-com: Vol. 23, No. 2, 2024). Valdez, André Calero; Heine, Moreen; Franke, Thomas; Jochems, Nicole; Jetter, Hans-Christian; Schrills, Tim. The evolution of AI is set to profoundly reshape the future. The European Union, recognizing this impending prominence, has enacted the AI Act, regulating market access for AI-based systems. A salient feature of the Act is its aim to safeguard democratic and humanistic values by focusing regulation on transparency, explainability, and the human ability to understand and control AI systems. In doing so, the EU AI Act does not merely specify technological requirements for AI systems; it also issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development. Without robust methods to assess AI systems and their effects on individuals and society, the EU AI Act risks repeating the mistakes of the EU’s General Data Protection Regulation and leading to rushed, chaotic, ad hoc, and ambiguous implementation that causes more confusion than it lends guidance. Determined research in human-AI interaction will therefore be pivotal both for regulatory compliance and for advancing AI in a manner that is ethical and effective. Such an approach will ensure that AI development aligns with human values and needs, fostering a technology landscape that is innovative, responsible, and an integral part of our society.
- Journal article: Explainable AI and Multi-Modal Causability in Medicine (i-com: Vol. 19, No. 3, 2021). Holzinger, Andreas. Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex “black boxes”, which makes it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g., to highlight which input parameters are relevant for a result; in the medical domain, however, there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability to causability and to allow a domain expert to ask questions to understand why an AI arrived at a result, and also to ask “what-if” questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result. (A toy sketch of such a “what-if” query follows this listing.)
- Conference paper: Explainable AI: Leaf-based medicinal plant classification using knowledge distillation (44. GIL - Jahrestagung, Biodiversität fördern durch digitale Landwirtschaft, 2024). Mengisti Berihu Girmay; Samuel Obeng. Medicinal plants are used in a variety of ways in the pharmaceutical industry in many parts of the world to obtain medicines. They are traditionally used especially in developing countries, where they provide cost-effective treatments. However, accurately identifying medicinal plants can be challenging. This study uses a deep neural network and a knowledge distillation approach based on a dataset of 4,026 images of 8 species of leaf-based Ethiopian medicinal plants. Knowledge from a ResNet50 teacher model was transferred to a lightweight 2-layer student model. The student model, optimized for efficiency, achieved 96.91% accuracy on unseen test data, close to the teacher model's 98.98%. Training relied on optimization strategies including oversampling, data augmentation, and learning rate adjustment. To understand the model's decisions, the post-hoc explanation techniques LIME (Local Interpretable Model-agnostic Explanations) and Grad-CAM (Gradient-weighted Class Activation Mapping) were used to highlight the image regions that contributed most to each classification. (A minimal distillation sketch follows this listing.)
- Journal article: From explanations to human-AI co-evolution: charting trajectories towards future user-centric AI (i-com: Vol. 23, No. 2, 2024). Ziegler, Jürgen; Donkers, Tim. This paper explores the evolving landscape of User-Centric Artificial Intelligence (UCAI), particularly in light of the challenges posed by systems that are powerful but not fully transparent or comprehensible to their users. Despite advances in AI, significant gaps remain in aligning system actions with user understanding, prompting a reevaluation of what “user-centric” really means. We argue that current XAI efforts are often focused too much on system developers rather than end users and fail to address the comprehensibility of the explanations provided. Instead, we propose a broader, more dynamic conceptualization of human-AI interaction that emphasizes the need for AI not only to explain, but also to co-create and cognitively resonate with users. We examine the evolution of a communication-centric paradigm of human-AI interaction, underscoring the need for AI systems to enhance rather than mimic human interactions. We argue for a shift toward more meaningful and adaptive exchanges in which AI’s role is understood as facilitative rather than autonomous. Finally, we outline how future UCAI may leverage AI’s growing capabilities to foster a genuine co-evolution of human and machine intelligence, while ensuring that such interactions remain grounded in ethical and user-centered principles.
- Conference paper: JumpXClass: Explainable AI for Jump Classification in Trampoline Sports (BTW 2023, 2023). Woltmann, Lucas; Ferger, Katja; Hartmann, Claudio; Lehner, Wolfgang. Movement patterns in trampoline gymnastics have become faster and more complex as athletes’ capabilities have increased. This makes it very difficult, or even impossible, for humans to assess jump type, pose, and quality during training or competitions. To counteract this development, data-driven approaches are seen as a way to improve training. Recent work uses sensor measurements and machine learning to automatically predict jumps and give feedback to athletes and trainers. However, machine learning models, and especially neural networks, are usually black boxes, so athletes and trainers cannot gain any insight into a jump from the machine learning-based jump classification. To better understand jump execution during training, we propose JumpXClass: a tool for automatic machine learning-based jump classification with explainable artificial intelligence. Using elements of explainable artificial intelligence can improve the training experience for athletes and trainers. This work demonstrates a live system capable of classifying and explaining jumps by trampoline athletes.
- Conference demo: Sicherer Einsatz von xAI in der Bildung: Erkennung von LLM-Halluzinationen bei der Generierung von Lehr- und Lernmaterialien (Proceedings of DELFI 2024, 2024). Ledel, Benjamin; Schwarz, Tabea. This demonstration offers a practical look at a newly developed generative xAI (explainable AI) that can be used in educational settings because it guarantees strict compliance with the GDPR. It is a Large Language Model (LLM) optimized for the education sector, operated and hosted in Germany and not tied to OpenAI or similar providers. The demonstration focuses on illustrating how this xAI works; beyond text generation, it also covers video, image, and audio generation. It shows how, in contrast to conventional AI systems, the structure of the xAI makes it possible to detect hallucinations. It further demonstrates how the interface between this AI and H5P enables teachers to create interactive teaching material, and how the xAI can answer questions based on that material and thus act as a virtual tutor.
- Journal article: Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research (it - Information Technology: Vol. 64, No. 1-2, 2022). Weitz, Katharina. Human-Centered AI is a widely requested goal for AI applications. To reach it, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end users. First results from applications in education, healthcare, and industry suggest that one XAI does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.
- Conference paper: You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones (Proceedings of Mensch und Computer 2024, 2024). Zhang, Zelun Tony; Buchner, Felicitas; Liu, Yuanting; Butz, Andreas. Explaining the mechanisms behind model predictions is a common strategy in AI-assisted decision-making to help users rely appropriately on AI. However, recent research shows that the effectiveness of explanations depends on numerous factors, leading to mixed results: many studies find no effect or even an increase in overreliance, while others find that explanations do improve appropriate reliance. We consider the factor of decision difficulty to better understand when feature-based explanations can mitigate overreliance. To this end, we conducted an online experiment (N = 200) with carefully selected task instances covering a wide range of difficulties. We found that explanations reduce overreliance for easy decisions, but that this effect vanishes with increasing decision difficulty. For the most difficult decisions, explanations might even increase overreliance. Our results imply that explanations of the model’s inner workings are only helpful for a limited set of decision tasks where users easily know the answer themselves.
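
The “what-if” (counterfactual) questions mentioned in the Holzinger abstract can be illustrated with a toy example: given a trained classifier and one input, search for a small change to the input that flips the prediction. The sketch below is a deliberately simple, hypothetical illustration assuming NumPy and scikit-learn; the synthetic data, logistic regression model, and greedy single-feature search are illustrative choices, not the method proposed in the paper.

```python
# Toy "what-if" (counterfactual) query: nudge the most influential feature
# until the predicted class flips. Data, model, and search are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                      # two explanatory factors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic binary outcome
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_iter=1000):
    """Greedily move along the strongest feature until the prediction flips."""
    x_cf = x.copy()
    original = model.predict(x.reshape(1, -1))[0]
    j = np.argmax(np.abs(model.coef_[0]))          # most influential feature
    # Step in the direction that lowers the score of the original class.
    direction = -np.sign(model.coef_[0][j]) if original == 1 else np.sign(model.coef_[0][j])
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            return x_cf
        x_cf[j] += direction * step
    return x_cf

x = X[0]
x_cf = counterfactual(x, model)
print("original:      ", x, "-> class", model.predict(x.reshape(1, -1))[0])
print("counterfactual:", x_cf, "-> class", model.predict(x_cf.reshape(1, -1))[0])
```

The difference between the original and counterfactual input tells the user which explanatory factor, changed by how much, would have led to a different outcome, which is the kind of insight the abstract attributes to counterfactual questioning.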
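
The ResNet50-to-student knowledge distillation described in the GIL paper abstract follows a standard recipe: train a small student to match the teacher's softened output distribution in addition to the hard labels. The PyTorch sketch below shows that recipe under stated assumptions; the student architecture, temperature, loss weighting, optimizer, and data handling are illustrative choices, not details taken from the paper.

```python
# Minimal knowledge-distillation sketch (PyTorch assumed; hyperparameters,
# student architecture, and data pipeline are illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 8  # eight medicinal plant species, as in the abstract

# Teacher: a ResNet50 assumed to be fine-tuned on the leaf images
# (fine-tuning not shown); it is frozen during distillation.
teacher = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
teacher.fc = nn.Linear(teacher.fc.in_features, NUM_CLASSES)
teacher.eval()

# Student: a deliberately small two-layer CNN (architecture is an assumption).
class StudentCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

student = StudentCNN(NUM_CLASSES)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def train_step(images, labels):
    """One distillation step on a batch of leaf images and labels."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A real training loop would iterate `train_step` over the oversampled and augmented leaf-image batches described in the abstract, and post-hoc tools such as LIME or Grad-CAM would then be applied to the trained student to highlight the leaf regions that drive each classification.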