Title: Looking Through the Deep Glasses: How Large Language Models Enhance Explainability of Deep Learning Models
Authors: Spitzer, Philipp; Celis, Sebastian; Martin, Dominik; Kühl, Niklas; Satzger, Gerhard
Date issued: 2024 (record available 2024-10-08)
URI: https://dl.gi.de/handle/20.500.12116/44878
DOI: 10.1145/3670653.3677488
Type: Text/Conference Paper
Language: en
Keywords: Artificial Intelligence; Explainable AI; Human-Computer Interaction; Large Language Models

Abstract: As AI becomes more powerful, it also becomes more complex. Traditionally, eXplainable AI (XAI) is used to make these models more transparent and interpretable to decision-makers. However, research shows that decision-makers can lack the ability to properly interpret XAI techniques. Large language models (LLMs) offer a solution to this challenge by pairing XAI techniques with natural language text, making explanations more understandable. However, previous work has only explored this approach for inherently interpretable models; an understanding of how LLMs can assist decision-makers when using deep learning models is lacking. To fill this gap, we investigate how different LLM augmentation strategies assist decision-makers in interacting with deep learning models. We evaluate the satisfaction and preferences of decision-makers through a user study. Overall, our results provide first insights into how LLMs support decision-makers in interacting with deep learning models and open future avenues to continue this endeavor.