Listing by author "Spitzer, Philipp"
1 - 2 of 2
- Conference paper: Looking Through the Deep Glasses: How Large Language Models Enhance Explainability of Deep Learning Models (Proceedings of Mensch und Computer 2024, 2024). Spitzer, Philipp; Celis, Sebastian; Martin, Dominik; Kühl, Niklas; Satzger, Gerhard.
  As AI becomes more powerful, it also becomes more complex. Traditionally, eXplainable AI (XAI) is used to make these models more transparent and interpretable to decision-makers. However, research shows that decision-makers can lack the ability to properly interpret XAI techniques. Large language models (LLMs) offer a solution to this challenge by combining XAI techniques with natural language text to produce more understandable explanations. However, previous work has only explored this approach for inherently interpretable models; an understanding of how LLMs can assist decision-makers when using deep learning models is still lacking. To fill this gap, we investigate how different LLM augmentation strategies assist decision-makers in interacting with deep learning models. We evaluate the satisfaction and preferences of decision-makers through a user study. Overall, our results provide first insights into how LLMs support decision-makers in interacting with deep learning models and open future avenues to continue this endeavor.
- Conference paper: (X)AI as a Teacher: Learning with Explainable Artificial Intelligence (Proceedings of Mensch und Computer 2024, 2024). Spitzer, Philipp; Goutier, Marc; Kühl, Niklas; Satzger, Gerhard.
  Due to changing demographics, limited availability of experts, and frequent job transitions, retaining and sharing knowledge within organizations is crucial. While many learning systems already address this issue, they typically lack automation and scalability in teaching novices and thus hinder learning processes within organizations. Recent research emphasizes the capability of explainable artificial intelligence (XAI) to make black-box artificial intelligence systems interpretable for decision-makers. This work explores the potential of (X)AI-based learning systems for providing learning examples and explanations to novices. In an exploratory study, we evaluate novices' learning performance in a learning setting, taking their cognitive abilities into account. Our results show that novices increase their learning performance over the course of the study. These results shed light on how XAI can facilitate learning, taking first steps towards understanding the potential of XAI in learning systems.