Listing by keyword "Explanations"
1 - 5 of 5
- Conference paper: Counterfactual Explanations for Models of Code (Software Engineering 2024 (SE 2024), 2024). Cito, Jürgen; Dillig, Isil; Murali, Vijayaraghavan; Chandra, Satish
- Journal article: Leveraging Arguments in User Reviews for Generating and Explaining Recommendations (Datenbank-Spektrum: Vol. 20, No. 2, 2020). Donkers, Tim; Ziegler, Jürgen
  Review texts constitute a valuable source for making system-generated recommendations both more accurate and more transparent. Reviews typically contain statements providing argumentative support for a given item rating that can be exploited to explain the recommended items in a personalized manner. We propose a novel method called Aspect-based Transparent Memories (ATM) to model user preferences with respect to relevant aspects and compare them to item properties to predict ratings and, by the same mechanism, explain why an item is recommended. The ATM architecture consists of two neural memories that can be viewed as arrays of slots for storing information about users and items. The first memory component encodes representations of sentences composed by the target user, while the second holds an equivalent representation for the target item based on statements of other users. An offline evaluation was performed with three datasets, showing advantages over two baselines: the well-established Matrix Factorization technique and a recent competitive representative of neural attentional recommender techniques.
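The dual-memory idea behind ATM can be illustrated with a minimal sketch. This is not the paper's implementation: the embeddings are random stand-ins for learned sentence encodings, and the scoring rule (attend from each user-preference slot over the item slots, then average the agreement scores) is an assumed simplification of the attention-based rating mechanism the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # illustrative embedding size

# Hypothetical stand-ins for learned sentence encodings; in ATM these
# would be derived from review text.
user_memory = rng.normal(size=(5, DIM))  # slots: sentences by the target user
item_memory = rng.normal(size=(6, DIM))  # slots: others' sentences about the item

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_rating(user_mem, item_mem):
    """Attend from each user-preference slot over the item slots and
    aggregate the preference/item agreement into a scalar rating score."""
    scores, weights = [], []
    for u in user_mem:
        att = softmax(item_mem @ u)   # which item aspects match this preference?
        matched = att @ item_mem      # attention-weighted item representation
        scores.append(u @ matched)    # agreement between preference and item
        weights.append(att)
    return float(np.mean(scores)), np.array(weights)

rating, attention = predict_rating(user_memory, item_memory)
# The attention matrix doubles as the explanation: the highest-weighted item
# slot per user slot points to the review statements supporting the rating.
best_match = attention.argmax(axis=1)
print(rating, best_match)
```

The same attention weights that produce the rating are read back out as the explanation, which is the "by the same mechanism" property the abstract highlights.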
- Journal article: One Explanation Does Not Fit All (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Sokol, Kacper; Flach, Peter
  The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge by discussing the promise of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and how to extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e. the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats.
  Furthermore, we discuss the challenges of mirroring the explainee's mental model, which is the main building block of intelligible human-machine interactions. We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extract information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend "Wizard of Oz" studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
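An interactive "What if?" counterfactual can be sketched in a few lines. This is a toy illustration, not the authors' system: the rule-based loan classifier, its features, and its thresholds are all hypothetical, and the explainee's interaction is reduced to choosing which single feature to vary.

```python
# Toy black-box decision: hypothetical features and thresholds, for
# illustration only.
def approve(applicant):
    return applicant["income"] >= 40_000 and applicant["debt"] <= 10_000

def counterfactual(applicant, feature, step, limit=100):
    """Answer a 'What if?' question: vary one user-chosen feature until the
    decision flips, mirroring the interactive adjustment of a counterfactual
    explanation's conditional statement."""
    candidate = dict(applicant)
    for _ in range(limit):
        if approve(candidate):
            return candidate  # minimal change (along this feature) that flips the decision
        candidate[feature] += step
    return None  # no counterfactual found within the search budget

rejected = {"income": 35_000, "debt": 8_000}
# Explainee asks: "What if my income were higher?"
cf = counterfactual(rejected, "income", step=1_000)
print(cf)
```

Letting the explainee pick `feature` is the interactive element: each choice yields a different counterfactual, which is why a single fixed explanation does not fit all stakeholders.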
- Journal article: Towards a Theory of Explanations for Human-Robot Collaboration (KI - Künstliche Intelligenz: Vol. 33, No. 4, 2019). Sridharan, Mohan; Meadows, Ben
  This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture's capabilities in the context of a simulated robot (a) moving target objects to desired locations or people, or (b) following recipes to bake biscuits.
- Conference paper: What Did I Say Again? Relating User Needs to Search Outcomes in Conversational Commerce (Proceedings of Mensch und Computer 2024, 2024). Schott, Kevin; Papenmeier, Andrea; Hienert, Daniel; Kern, Dagmar
  Recent advances in natural language processing and deep learning have accelerated the development of digital assistants. In conversational commerce, these assistants help customers find suitable products in online shops through natural language conversations. During the dialogue, the assistant identifies the customer's needs and preferences and subsequently suggests potentially relevant products. Traditional online shops often allow users to filter search results based on their preferences using facets; the selected facets also serve as a reminder of how the product base was filtered. In conversational commerce, however, the absence of facets and the use of advanced natural language processing techniques can leave customers uncertain about how their input was processed by the system. This can hinder transparency and trust, which are critical factors influencing customers' purchase intentions. To address this issue, we propose a novel text-based digital assistant that, in the product assessment step, explains how specific product aspects relate to the user's previous utterances in order to enhance transparency and facilitate informed decision-making. We conducted a user study (N=135) and found a significant increase in user-perceived transparency when natural language explanations and highlighted text passages were provided, demonstrating their potential to extend system transparency to the product assessment step in conversational commerce.