Listing by keyword "Understandability"
1 - 5 of 5
- Journal article: Identification of Explainable Structures in Data with a Human-in-the-Loop (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Thrun, Michael C.
  Explainable AIs (XAIs) often do not provide relevant or understandable explanations for a domain-specific human-in-the-loop (HIL). In addition, internally used metrics have biases that might not match existing structures in the data. The habilitation thesis presents an alternative approach by deriving explanations from high-dimensional structures in the data rather than from predetermined classifications. Typically, detecting such density- or distance-based structures in data has entailed choosing appropriate algorithms and their parameters, which adds a considerable number of complex decisions for the HIL. Central steps of the approach are a parameter-free methodology for estimating and visualizing probability density functions (PDFs), followed by a hypothesis for selecting an appropriate distance metric independently of the data context, in combination with projection-based clustering (PBC). PBC allows for subsequent interactive identification of separable structures in the data. Hence, the HIL does not need deep knowledge of the underlying algorithms to identify structures in data. The complete data-driven XAI approach involving the HIL is based on a decision tree guided by distance-based structures in data (DSD). This data-driven XAI shows initial success when applied to multivariate time series and non-sequential high-dimensional data. It generates meaningful and relevant explanations that are evaluated using Grice's maxims. (A code sketch of this project-cluster-explain idea is shown after this list.)
- Conference paper: Improving Collaborative Modeling by an Operation-Based Versioning Approach (Software Engineering 2024 (SE 2024), 2024) Exelmans, Joeri; Pietron, Jakob; Raschke, Alexander; Vangheluwe, Hans; Tichy, Matthias
- Journal article: Mapping platforms into a new open science model for machine learning (it - Information Technology: Vol. 61, No. 4, 2019) Weißgerber, Thomas; Granitzer, Michael
  Data-centric disciplines like machine learning and data science have become major research areas within computer science and beyond. However, the development of research processes and tools has not kept pace with the rapid advancement of these disciplines, leaving challenges to the reproducibility, replicability, and comparability of results insufficiently addressed. In this discussion paper, we review existing tools, platforms, and standardization efforts for addressing these challenges. As a common ground for our analysis, we develop an open-science-centred process model for machine learning research, which combines openness and transparency with the core processes of machine learning and data science. Based on the features of over 40 tools, platforms, and standards, we identify what we consider the 11 most central platforms for the research process. We conclude that most platforms cover only parts of the requirements for overcoming the identified challenges.
- Workshop paper: Model Factors Influencing Petri Net Understandability: A Case Study on Simplicity (AWPN 2024 workshop proceedings, 2024) Kimmel, Marc; Schalk, Patrizia; Lorenz, Robert
- Journal article: The Influence of Using Collapsed Sub-processes and Groups on the Understandability of Business Process Models (Business & Information Systems Engineering: Vol. 62, No. 2, 2020) Turetken, Oktay; Dikici, Ahmet; Vanderfeesten, Irene; Rompen, Tessa; Demirors, Onur
  Many factors influence the creation of business process models that are understandable for a target audience. Understandability of process models becomes more critical as their size and complexity increase. Using vertical modularization to decompose such models hierarchically into modules is considered to improve their understandability. To investigate this assumption, two experiments were conducted. The experiments involved two large-scale, real-life business process models that were modeled using BPMN v2.0 (Business Process Model and Notation) in the form of collaboration diagrams. Each process was modeled in three modularity forms: fully flattened, flattened with activities clustered using BPMN groups, and modularized using separately viewed BPMN sub-processes. The objective was to investigate if and how different forms of modularity representation (used for vertical modularization) in BPMN collaboration diagrams influence the understandability of process models. In addition to the forms of modularity representation, the presentation medium (paper vs. computer) and the model reader's level of business process modeling competency were investigated as factors that potentially influence model comprehension. 60 business practitioners from a large organization and 140 graduate students participated in the experiments. The results indicate that, among these three modularity representations, a BPMN model is understood best when presented in a 'flattened' form (with or without the use of groups) and in the 'paper' format. The results also show that the model reader's business process modeling competency is an important factor in process model comprehension.
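The first entry in this list (Thrun, 2022) describes a pipeline that projects high-dimensional data, identifies cluster structures, and then derives explanations with a decision tree. The following Python sketch only illustrates that general project-cluster-explain idea under assumed substitutions: scikit-learn's MDS, k-means, and a CART tree stand in for the paper's parameter-free PDF estimation, distance-metric selection, projection-based clustering (PBC), and DSD-guided tree. It is not the author's implementation, and the example dataset is not from the paper.

```python
# Minimal illustrative sketch (not the method from Thrun, 2022): project the data,
# cluster the projection, then explain the clusters with a shallow decision tree.
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import MDS
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_wine()                                  # stand-in dataset, not from the paper
X = StandardScaler().fit_transform(data.data)

# Step 1: project the high-dimensional data to 2D (stand-in for PBC's projection step).
proj = MDS(n_components=2, random_state=0).fit_transform(X)

# Step 2: identify separable structures (clusters) in the projection.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(proj)

# Step 3: derive human-readable rules for the found structures from the original
# features (a post-hoc stand-in for the distance-structure-guided DSD tree).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=list(data.feature_names)))
```

The printed if-then rules approximate the kind of explanation the abstract refers to; in the actual approach the tree is guided by distance-based structures in the data rather than fit post hoc to cluster labels.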