Design Decision Framework for AI Explanations

Authors: Anuyah, Oghenemaro; Fine, William; Metoyer, Ronald
Editors: Wienrich, Carolin; Wintersberger, Philipp; Weyers, Benjamin
Published: 2021
URL: https://dl.gi.de/handle/20.500.12116/37372
DOI: 10.18420/muc2021-mci-ws02-237
Type: Text/Workshop Paper
Language: English
Keywords: Explainable AI; interaction design; design activity; artificial intelligence; machine learning; user research; design decision

Abstract: Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decisions, facilitate their trust in AI, and assist them in making informed decisions. Because of these benefits to how users interact and collaborate with AI, the AI/ML community has increasingly turned toward developing understandable or interpretable models, while design researchers continue to study ways to present these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around the design of such explanation systems. In this paper, we contribute a framework to support the design and validation of explainable AI systems, one that requires carefully thinking through design decisions at several important decision points. The framework captures key aspects of explanations, ranging from the target users to the data and the AI models in use. We also discuss how we applied the framework to design an explanation interface for trace link prediction of software artifacts.