eXplainable AI: Take one Step Back, Move two Steps forward

Workshop Paper
In 1991, researchers at the Center for the Learning Sciences at Carnegie Mellon University were confronted with the confusing question "where is AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness of AI-enabled systems and of a mutual understanding between designers and users, informal theories about how a system works ("folk theories") become inevitable, but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of how users currently perceive AI is still missing. In this study, we introduce the term "Perceived AI" (PAI), defined as "AI from the perspective of its users". We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users' folk theories.


Alizadeh, Fatemeh; Esau, Margarita; Stevens, Gunnar; Cassens, Lena (2020): eXplainable AI: Take one Step Back, Move two Steps forward. Mensch und Computer 2020 - Workshopband. DOI: 10.18420/muc2020-ws111-369. Bonn: Gesellschaft für Informatik e.V. MCI-WS02: UCAI 2020: Workshop on User-Centered Artificial Intelligence. Magdeburg, 6-9 September 2020.