Browsing by keyword "Interactive"
1 - 5 von 5
- Journal article: "Advanced User Assistance Systems" (Business & Information Systems Engineering: Vol. 58, No. 5, 2016). Maedche, Alexander; Morana, Stefan; Schacht, Silvia; Werth, Dirk; Krumeich, Julian
- Text document: "Cluster Flow - an Advanced Concept for Ensemble-Enabling, Interactive Clustering" (BTW 2021, 2021). Obermeier, Sandra; Beer, Anna; Wahl, Florian; Seidl, Thomas
  Even though most clustering algorithms serve knowledge discovery in fields other than computer science, most of them still require users to be familiar with programming or data mining to some extent. As that often prevents efficient research, we developed an easy-to-use, highly explainable clustering method accompanied by an interactive clustering tool. It is based on intuitively understandable kNN graphs and the subsequent application of adaptable filters, which can be combined iteratively in an ensemble-like fashion to prune unnecessary or misleading edges. For a first overview of the data, fully automatic predefined filter cascades deliver robust results. A selection of simple filters and combination methods that can be chosen interactively yields very good results on benchmark datasets compared to various algorithms.
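The kNN-graph-plus-filters idea summarised in the abstract can be sketched roughly as follows. This is a minimal illustration, not the Cluster Flow code: the length-based edge filter, the toy data, and all function names are assumptions; clusters are read off as connected components of the filtered graph.

```python
import numpy as np
from collections import defaultdict

def knn_graph(points, k):
    """Build a kNN edge list (i, j, dist): each point's k nearest neighbours."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = []
    for i in range(len(points)):
        for j in np.argsort(d[i])[1:k + 1]:  # skip index 0: the point itself
            edges.append((i, int(j), float(d[i, j])))
    return edges

def length_filter(edges, max_len):
    """One example filter: prune edges longer than max_len (hypothetical threshold)."""
    return [e for e in edges if e[2] <= max_len]

def components(n, edges):
    """Clusters = connected components of the filtered graph, treated as undirected."""
    adj = defaultdict(set)
    for i, j, _ in edges:
        adj[i].add(j)
        adj[j].add(i)
    labels, cid = {}, 0
    for s in range(n):
        if s in labels:
            continue
        stack = [s]          # flood-fill one component
        while stack:
            v = stack.pop()
            if v in labels:
                continue
            labels[v] = cid
            stack.extend(u for u in adj[v] if u not in labels)
        cid += 1
    return [labels[i] for i in range(n)]

# Two well-separated toy blobs
pts = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
edges = length_filter(knn_graph(pts, k=2), max_len=1.0)
print(components(len(pts), edges))  # → [0, 0, 0, 1, 1, 1]
```

Further filters (e.g. on node degree or mutual-neighbour status) could be chained the same way, which is the ensemble-like combination the abstract refers to.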
- Text document: "DICE: Density-based Interactive Clustering and Exploration" (BTW 2019, 2019). Kazempour, Daniyal; Kazakov, Maksim; Kröger, Peer; Seidl, Thomas
  Clustering algorithms mostly follow the pipeline of providing input data and hyperparameter values; the algorithms are then executed, and the output files are generated or visualized. In this work we provide an early prototype of an interactive density-based clustering tool named DICE, in which users can change the hyperparameter settings and immediately observe the resulting clusters. Users can also browse through each of the detected clusters and view statistics as well as a convex hull profile for each cluster. Furthermore, DICE keeps track of the chosen settings, enabling users to review which hyperparameter values have previously been chosen. DICE can be used not only in a scientific context for analyzing data, but also in didactic settings in which students can learn in an exploratory fashion how a density-based clustering algorithm such as DBSCAN behaves.
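The interaction loop the abstract describes, re-running a density-based clustering whenever a hyperparameter changes, can be sketched with a minimal DBSCAN. This is an illustrative reimplementation under assumed toy data, not the DICE tool itself:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label -1 marks noise (illustrative, not the DICE code)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    labels = np.full(len(points), -1)
    cid = 0
    for p in range(len(points)):
        # start a new cluster only from an unlabelled core point
        if labels[p] != -1 or (d[p] <= eps).sum() < min_pts:
            continue
        stack = [p]
        while stack:
            q = stack.pop()
            if labels[q] != -1:
                continue
            labels[q] = cid
            if (d[q] <= eps).sum() >= min_pts:      # core point: expand further
                stack.extend(np.flatnonzero(d[q] <= eps))
        cid += 1
    return labels

pts = np.array([[0.0, 0], [0, 0.2], [0.2, 0], [3, 3], [3, 3.2], [10, 10]])
for eps in (0.5, 5.0):  # the "interactive" step: change eps, observe new clusters
    print(eps, dbscan(pts, eps, min_pts=2))
# eps=0.5 → two clusters plus noise; eps=5.0 → the blobs merge into one cluster
```

An interactive tool in this spirit simply binds `eps` and `min_pts` to UI controls and re-runs the clustering on every change.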
- Journal article: "One Explanation Does Not Fit All" (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Sokol, Kacper; Flach, Peter
  The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats.
  Furthermore, we discuss the challenges of mirroring the explainee's mental model, which is the main building block of intelligible human–machine interactions. We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extract information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend "Wizard of Oz" studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
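The "What if?" interaction the abstract describes could be sketched like this: the user picks which feature of a toy black-box decision they are willing to change, and the system searches for the smallest change to that feature that flips the outcome. The model, feature names, thresholds and step sizes are all invented for illustration; this is not the authors' system.

```python
def model(x):
    """Hypothetical black-box loan decision: approve if income >= 50 and debt < 20."""
    return x["income"] >= 50 and x["debt"] < 20

def counterfactual(x, feature, step, limit=100):
    """Search the smallest change to one user-chosen feature that flips the decision."""
    cf = dict(x)
    for _ in range(limit):
        if model(cf) != model(x):
            return cf            # found a flipping counterfactual
        cf[feature] += step      # "what if this feature were a bit larger?"
    return None                  # this feature alone cannot flip the outcome

applicant = {"income": 40, "debt": 10}
print(counterfactual(applicant, "income", step=5))  # → {'income': 50, 'debt': 10}
print(counterfactual(applicant, "debt", step=5))    # → None: raising debt never helps
```

Letting the explainee choose `feature` and `step` is the interactive adjustment of the counterfactual's conditional statement; as the abstract warns, unrestricted queries of this kind also leak information about the model.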
- Workshop contribution: "What Users Expect from Players for Interactive (Non-linear) Videos" (Mensch & Computer 2012: interaktiv informiert – allgegenwärtig und allumfassend!?, 2012). Meixner, Britta; Kandlbinder, Klaus; Siegel, Beate; Lehner, Franz; Kosch, Harald; Kohl, Andreas
  Various players for interactive non-linear videos exist on the web nowadays. Each player provides commonly known buttons as well as buttons triggering additional functions of the player or the video presentation. These additional buttons show a large variety of different icons. This work examines which functions and GUI elements users expect from players for interactive non-linear videos. To this end, the layout of buttons in existing web players is tested for its intelligibility. Users' expectations are determined in a second step. The methods used are a labeling exercise/questionnaire and paper prototyping.