dc.contributor.author: Norkute, Milda
dc.contributor.editor: Wienrich, Carolin
dc.contributor.editor: Wintersberger, Philipp
dc.contributor.editor: Weyers, Benjamin
dc.date.accessioned: 2021-09-05T18:56:32Z
dc.date.available: 2021-09-05T18:56:32Z
dc.date.issued: 2021
dc.identifier.uri: http://dl.gi.de/handle/20.500.12116/37353
dc.description.abstract: This paper discusses research that has explored different roles for explanations of AI systems. Much of this research focuses on the role of explanations in mediating users' trust in the AI system and helping them form correct mental models of it. This paper argues that more research should be dedicated to investigating the alternative roles that explanations could play in supporting users' interactions with AI systems, such as helping them enrich or correct the AI suggestions they are presented with and helping them complete tasks more efficiently and effectively.
dc.language.iso: en
dc.publisher: Gesellschaft für Informatik e.V.
dc.relation.ispartof: Mensch und Computer 2021 - Workshopband
dc.relation.ispartofseries: Mensch und Computer
dc.subject: Explainable artificial intelligence
dc.subject: interpretable machine learning
dc.title: The Role of Explanations of AI Systems: Beyond Trust and Helping to Form Mental Models
dc.type: Text/Conference Poster
dc.pubPlace: Bonn
mci.document.quality: digidoc
mci.conference.sessiontitle: MCI-WS02: UCAI 2021: Workshop on User-Centered Artificial Intelligence
mci.conference.location: Ingolstadt
mci.conference.date: 5.-8. September 2021
dc.identifier.doi: 10.18420/muc2021-mci-ws02-387