Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research
dc.contributor.author | Weitz, Katharina | |
dc.date.accessioned | 2022-11-22T09:48:31Z | |
dc.date.available | 2022-11-22T09:48:31Z | |
dc.date.issued | 2022 | |
dc.description.abstract | Human-Centered AI is a widely requested goal for AI applications. To reach it, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system. | en |
dc.identifier.doi | 10.1515/itit-2021-0047 | |
dc.identifier.pissn | 2196-7032 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/39746 | |
dc.language.iso | en | |
dc.publisher | De Gruyter | |
dc.relation.ispartof | it - Information Technology: Vol. 64, No. 1-2 | |
dc.subject | human-centered AI | |
dc.subject | explainable AI | |
dc.subject | deep neural networks | |
dc.title | Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research | en |
dc.type | Text/Journal Article | |
gi.citation.endPage | 75 | |
gi.citation.publisherPlace | Berlin | |
gi.citation.startPage | 71 | |
gi.conference.sessiontitle | Self-Portrayals of GI Junior Fellows |