Evaluating Explainability Methods Intended for Multiple Stakeholders
dc.contributor.author | Martin, Kyle | |
dc.contributor.author | Liret, Anne | |
dc.contributor.author | Wiratunga, Nirmalie | |
dc.contributor.author | Owusu, Gilbert | |
dc.contributor.author | Kern, Mathias | |
dc.date.accessioned | 2021-12-16T13:23:00Z | |
dc.date.available | 2021-12-16T13:23:00Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. Explanation methods are split into low-level and high-level explanations, offering increasing degrees of contextual support. We motivate this framework using a specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations. | de |
dc.identifier.doi | 10.1007/s13218-020-00702-6 | |
dc.identifier.pissn | 1610-1987 | |
dc.identifier.uri | http://dx.doi.org/10.1007/s13218-020-00702-6 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/37810 | |
dc.publisher | Springer | |
dc.relation.ispartof | KI - Künstliche Intelligenz: Vol. 35 | |
dc.relation.ispartofseries | KI - Künstliche Intelligenz | |
dc.subject | Explainability | |
dc.subject | Information retrieval | |
dc.subject | Machine learning | |
dc.subject | Similarity modeling | |
dc.title | Evaluating Explainability Methods Intended for Multiple Stakeholders | de |
dc.type | Text/Journal Article | |
gi.citation.endPage | 411 | |
gi.citation.startPage | 397 |