Authors: Panda, Swaroop; Roy, Shatarupa Thakurta
Editors: Hansen, Christian; Nürnberger, Andreas; Preim, Bernhard
Date: 2020-08-18
Year: 2020
URI: https://dl.gi.de/handle/20.500.12116/33512
Abstract: One aspect of user-friendly AI is explanation and improved transparency of AI systems. Explainable AI (XAI) is an emerging area of research dedicated to explaining and elucidating AI systems. To accomplish such explanation, XAI employs a variety of tools, devices and frameworks. However, some of these tools may themselves prove complex or ambiguous, and thus require explanation in turn. Visualization is one such tool used extensively in XAI. In this paper, we examine how such tools can be complex and ambiguous in themselves and thereby distort the originally intended AI explanation. We further propose three broad ways to mitigate the risks associated with tools, devices and frameworks used in XAI systems.
Language: en
Keywords: Explainable AI; Visualization
Title: Explaining Explainable AI
Type: Text/Workshop Paper
DOI: 10.18420/muc2020-ws111-347