Conference Paper

Looking Through the Deep Glasses: How Large Language Models Enhance Explainability of Deep Learning Models

Document Type

Text/Conference Paper

Additional Information

Date

2024

Publisher

Association for Computing Machinery

Abstract

As AI becomes more powerful, it also becomes more complex. Traditionally, eXplainable AI (XAI) is used to make these models more transparent and interpretable to decision-makers. However, research shows that decision-makers can lack the ability to properly interpret XAI techniques. Large language models (LLMs) offer a solution to this challenge by pairing XAI techniques with natural-language text that makes explanations more understandable. However, previous work has only explored this approach for inherently interpretable models; an understanding of how LLMs can assist decision-makers when using deep learning models is lacking. To fill this gap, we investigate how different LLM augmentation strategies assist decision-makers in interacting with deep learning models. We evaluate the satisfaction and preferences of decision-makers through a user study. Overall, our results provide initial insights into how LLMs support decision-makers in interacting with deep learning models and open future avenues to continue this endeavor.

Description

Spitzer, Philipp; Celis, Sebastian; Martin, Dominik; Kühl, Niklas; Satzger, Gerhard (2024): Looking Through the Deep Glasses: How Large Language Models Enhance Explainability of Deep Learning Models. In: Proceedings of Mensch und Computer 2024, Karlsruhe, Germany. Association for Computing Machinery, pp. 566–570. DOI: 10.1145/3670653.3677488
