Show simple item record

dc.contributor.author: Panda, Swaroop
dc.contributor.author: Roy, Shatarupa Thakurta
dc.contributor.editor: Hansen, Christian
dc.contributor.editor: Nürnberger, Andreas
dc.contributor.editor: Preim, Bernhard
dc.date.accessioned: 2020-08-18T15:19:49Z
dc.date.available: 2020-08-18T15:19:49Z
dc.date.issued: 2020
dc.identifier.uri: http://dl.gi.de/handle/20.500.12116/33512
dc.description.abstract: One aspect of user-friendly AI involves explanation and better transparency of AI. Explainable AI (XAI) is an emerging area of research dedicated to explaining and elucidating AI systems. To accomplish such an explanation, XAI uses a variety of tools, devices and frameworks. However, some of these tools may prove complex or ambiguous in themselves, requiring explanation. Visualization is one such tool used extensively in XAI. In this paper, we examine how such tools can be complex and ambiguous in themselves and thus distort the originally intended AI explanation. We further propose three broad ways to mitigate the risks associated with the tools, devices and frameworks used in XAI systems.
dc.language.iso: en
dc.publisher: Gesellschaft für Informatik e.V.
dc.relation.ispartof: Mensch und Computer 2020 - Workshopband
dc.relation.ispartofseries: Mensch und Computer
dc.subject: Explainable AI
dc.subject: Visualization
dc.title: Explaining Explainable AI
dc.type: Text/Conference Poster
dc.pubPlace: Bonn
mci.document.quality: digidoc
mci.conference.sessiontitle: MCI-WS02: UCAI 2020: Workshop on User-Centered Artificial Intelligence
mci.conference.location: Magdeburg
mci.conference.date: 6.-9. September 2020
dc.identifier.doi: 10.18420/muc2020-ws111-347

