Listing by keyword "Explainable AI"
1 - 10 of 26
- Journal article: A Framework for Learning Event Sequences and Explaining Detected Anomalies in a Smart Home Environment (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Baudisch, Justin; Richter, Birte; Jungeblut, Thorsten. This paper presents a framework for learning event sequences for anomaly detection in a smart home environment. It addresses environment conditions, device grouping, system performance and explainability of anomalies. Our method models user behavior as sequences of events triggered by the residents' interactions with Internet of Things (IoT) devices. Based on a given set of recorded event sequences, the system can learn the habitual behavior of the residents. An anomaly is defined as a deviation from this normal behavior previously learned by the system. One key feature of our framework is the explainability of detected anomalies, which is implemented through a simple rule analysis.
- Journal article: AI-Enhanced Hybrid Decision Management (Business & Information Systems Engineering: Vol. 65, No. 2, 2023) Bork, Dominik; Ali, Syed Juned; Dinev, Georgi Milenov. The Decision Model and Notation (DMN) modeling language allows the precise specification of business decisions and business rules. DMN is readily understandable by business users involved in decision management. However, as the models grow complex, the limits of human cognitive abilities threaten manual maintainability and comprehensibility. Proper design of the decision logic thus requires comprehensive automated analysis of, e.g., all possible cases the decision shall cover, correlations between inputs and outputs, and the importance of inputs for deriving the output. In the paper, the authors explore the mutual benefits of combining human-driven DMN decision modeling with the computational power of Artificial Intelligence for DMN model analysis and improved comprehension. The authors propose a model-driven approach that uses DMN models to generate Machine Learning (ML) training data and show how the trained ML models can inform human decision modelers by superimposing the feature importance within the original DMN models. An evaluation with multiple real DMN models from an insurance company demonstrates the feasibility and utility of the approach.
- Conference paper: Augmentation through Generative AI: Exploring the Effects of Human-AI Interaction and Explainable AI on Service Performance (Mensch und Computer 2024 - Workshopband, 2024) Reinhard, Philipp. Generative artificial intelligence (GenAI), particularly large language models (LLMs), offers new capabilities for natural language understanding and generation, potentially reducing employee stress and high turnover rates in customer service delivery. However, these systems also present risks, such as generating convincing but erroneous responses, known as hallucinations and confabulations. This study therefore investigates the impact of GenAI on service performance in customer support settings, emphasizing augmentation over automation, and addresses three key inquiries: identifying patterns of GenAI infusion that alter service routines, assessing the effects of human-AI interaction on cognitive load and task performance, and evaluating the role of explainable AI (XAI) in detecting erroneous responses such as hallucinations. Employing a design science research approach, the study combines literature reviews, expert interviews, and experimental designs to derive implications for designing GenAI-driven augmentation. Preliminary findings reveal three key insights: (1) service employees play a critical role in retaining organizational knowledge and delegating decisions to GenAI agents; (2) utilizing GenAI co-pilots significantly reduces the cognitive load during stressful customer interactions; and (3) novice employees struggle to discern accurate AI-generated advice from inaccurate suggestions without additional explanatory context.
- Journal article: Current topics and challenges in geoAI (KI - Künstliche Intelligenz: Vol. 37, No. 1, 2023) Richter, Kai-Florian; Scheider, Simon. Taken literally, geoAI is the use of Artificial Intelligence methods and techniques to solve geo-spatial problems. Similar to AI more generally, geoAI has seen an influx of new (big) data sources and advanced machine learning techniques, but also a shift in the kind of problems under investigation. In this article, we highlight some of these changes and identify current topics and challenges in geoAI.
- Workshop contribution: Design Decision Framework for AI Explanations (Mensch und Computer 2021 - Workshopband, 2021) Anuyah, Oghenemaro; Fine, William; Metoyer, Ronald. Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decision, facilitate their trust in AI, and assist them in making informed decisions. Because of these benefits for how users interact and collaborate with AI, the AI/ML community has increasingly turned towards developing understandable or interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around such explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems; one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations, ranging from target users, to the data, to the AI models in use. We also discuss how we applied our framework to design an explanation interface for trace link prediction of software artifacts.
- Workshop contribution: Designing for Technology Transparency – Transparency Cues and User Experience (Usability Professionals 23, 2023) Hein, Ilka; Diefenbach, Sarah; Ullrich, Daniel. As technologies become more complex, the question of how transparent they should be for users and how transparency cues should be designed comes to the fore. Transparency refers to the extent to which users learn, for example, how the technology works or arrives at certain results. The increased interest in this topic also stems from legal changes such as the debate about a European AI regulation, which demands transparent AI systems and thus necessitates solutions for an optimal design of transparency cues. The paper discusses examples and risks of lacking transparency, as well as approaches and the current state of knowledge on improving the user experience through technology-based transparency cues. Finally, we present an outlook on promising directions for design guidelines and next steps for research.
- Conference paper: Enhancing Explainability and Scrutability of Recommender Systems (BTW 2023, 2023) Ghazimatin, Azin. Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations as end users and the algorithm's behavior, and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been growing demand from information consumers for proper explanations of their personalized recommendations. To this end, we put forward proposals for explaining recommendations to end users. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Such explanations usually contain valuable clues as to how a system perceives user preferences and, more importantly, how its behavior can be modified. Therefore, as a natural next step, we develop a framework for leveraging user feedback on explanations to improve future recommendations. We evaluate all the proposed models and methods with real user studies and demonstrate their benefits in achieving explainability and scrutability in recommender systems.
- Conference paper: Ensuring trustworthy AI for sensitive infrastructure using Knowledge Representation (INFORMATIK 2024, 2024) Mejri, Oumayma; Waedt, Karl; Yatagha, Romarick; Edeh, Natasha; Sebastiao, Claudia Lemos. Artificial intelligence (AI) has become increasingly integrated into various aspects of society, from healthcare and finance to law enforcement and hiring processes. More recently, sensitive infrastructure such as nuclear plants has begun to employ AI in safety-related aspects. However, these systems are not immune to biases and ethical concerns. This paper explores the role of knowledge representation in addressing ethics and fairness in AI, examining how biased or incomplete representations can lead to unfair outcomes and unreliable decision-making. It proposes strategies to mitigate these risks.
- Workshop contribution: Evidenzbasierte Definition von Spiel-Design-Elementen durch automatisierte Regelextraktion aus Spielanleitungen (Evidence-based definition of game design elements through automated rule extraction from game instructions) (Mensch und Computer 2022 - Workshopband, 2022) Schneider, Alexander. The application of design patterns to solve recurring problems has proven itself in practice. In gamification, too, patterns in the form of game elements are used to make processes more motivating. The patterns used for this, however, first have to be found and defined. Although collections of game design elements already exist, this research cannot be considered complete, since new games are developed every year in which new elements may yet be discovered. The EMPAMOS project pursues an empirical approach, searching the instruction manuals of board games for game design elements. When candidates for patterns are found across several games, the relevant text passages are collected and discussed by domain experts. The process ends with the definition of a game design element. This work presents an approach that supports game design element experts in the search for new elements by generating, from the text passages found, a definition of the respective game design element that is as generally applicable as possible.
- Journal article: eXplainable Cooperative Machine Learning with NOVA (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Baur, Tobias; Heimerl, Alexander; Lingenfelser, Florian; Wagner, Johannes; Valstar, Michel F.; Schuller, Björn; André, Elisabeth. In the following article, we introduce a novel workflow, which we subsume under the term “explainable cooperative machine learning”, and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the ‘human in the loop’ when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators join their workforce. A main aspect is the possibility of applying semi-supervised active learning techniques already during the annotation process by pre-labeling data automatically, resulting in a drastic acceleration of the annotation process. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow is able to speed up the annotation process, and further argue that the additional visual explanations help annotators understand the decision-making process as well as the trustworthiness of their trained machine learning models.