Evaluating Explainability Methods Intended for Multiple Stakeholders

dc.contributor.author: Martin, Kyle
dc.contributor.author: Liret, Anne
dc.contributor.author: Wiratunga, Nirmalie
dc.contributor.author: Owusu, Gilbert
dc.contributor.author: Kern, Mathias
dc.date.accessioned: 2021-12-16T13:23:00Z
dc.date.available: 2021-12-16T13:23:00Z
dc.date.issued: 2021
dc.description.abstract: Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using an individual system. In this paper we present an explainability framework formed of a catalogue of explanation methods, designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level and high-level explanations, offering increasing levels of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations. [de]
dc.identifier.doi: 10.1007/s13218-020-00702-6
dc.identifier.pissn: 1610-1987
dc.identifier.uri: http://dx.doi.org/10.1007/s13218-020-00702-6
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/37810
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 35, No. 0
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Explainability
dc.subject: Information retrieval
dc.subject: Machine learning
dc.subject: Similarity modeling
dc.title: Evaluating Explainability Methods Intended for Multiple Stakeholders [de]
dc.type: Text/Journal Article
gi.citation.endPage: 411
gi.citation.startPage: 397

Files