
i-com Volume 19 (2020) Issue 3

  • Journal article
    Examining Autocompletion as a Basic Concept for Interaction with Generative AI
    (i-com: Vol. 19, No. 3, 2021) Lehmann, Florian; Buschek, Daniel
    Autocompletion is an approach that extends and continues partial user input. We propose to interpret autocompletion as a basic interaction concept in human-AI interaction. We first describe the concept of autocompletion and dissect its user interface and interaction elements, using the well-established textual autocompletion in search engines as an example. We then highlight how these elements recur in other application domains, such as code completion, GUI sketching, and layout design. This comparison and transfer highlights an inherent role of such intelligent systems: extending and completing user input, which is particularly useful for designing interactions with and for generative AI. We reflect on and discuss our conceptual analysis of autocompletion to provide inspiration and a conceptual lens on current challenges in designing for human-AI interaction.
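The abstract describes autocompletion as extending and continuing partial user input, with textual search-engine completion as the canonical example. As a minimal sketch of that basic pattern (not the authors' system; the function name and vocabulary here are hypothetical illustrations):

```python
def autocomplete(prefix, vocabulary, max_suggestions=3):
    """Suggest completions that extend the partial input, in alphabetical order."""
    matches = [w for w in vocabulary if w.startswith(prefix) and w != prefix]
    return sorted(matches)[:max_suggestions]

# Hypothetical vocabulary, for illustration only.
vocab = ["generate", "generative", "general", "graph"]
print(autocomplete("gen", vocab))  # ['general', 'generate', 'generative']
```

The same extend-and-complete pattern recurs in the other domains the abstract names (code completion, GUI sketching, layout design), with richer generative models replacing the simple prefix match.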
  • Journal article
    Reflecting on Social Media Behavior by Structuring and Exploring Posts and Comments
    (i-com: Vol. 19, No. 3, 2021) Herder, Eelco; Roßner, Daniel; Atzenbeck, Claus
    Social networks use several user interaction techniques to enable and solicit user responses, such as posts, likes, and comments. Some of these triggers may lead to posts or comments that a user may regret at a later stage. In this article, we investigate how users may be supported in reflecting upon their past activities, making use of an exploratory spatial hypertext tool. We discuss how we transform raw Facebook data dumps into a graph-based structure and reflect upon design decisions. First results provide insights into users’ motivations for using such a tool and confirm that the approach helps them discover past activities that they perceive as outdated or even embarrassing.
  • Journal article
    Demystifying Deep Learning: Developing and Evaluating a User-Centered Learning App for Beginners to Gain Practical Experience
    (i-com: Vol. 19, No. 3, 2021) Schultze, Sven; Gruenefeld, Uwe; Boll, Susanne
    Deep Learning has revolutionized Machine Learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application that is easy to use, yet powerful enough to solve practical Deep Learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. Afterwards, we conducted an online user evaluation to gain insights into users’ experience with the app and to understand positive as well as negative aspects of our implemented concept. Our results show that participants liked using the app and found it useful, especially for beginners. Nonetheless, future iterations of the learning app should stepwise include more features to support advancing users.
  • Journal article
    Explainable AI and Multi-Modal Causability in Medicine
    (i-com: Vol. 19, No. 3, 2021) Holzinger, Andreas
    Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex “black boxes”, which makes it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g. to highlight which input parameters are relevant for a result; however, in the medical domain there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability to causability and to allow a domain expert to ask questions in order to understand why an AI came up with a result, and also to ask “what-if” questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result.
  • Journal article
    How to Handle Health-Related Small Imbalanced Data in Machine Learning?
    (i-com: Vol. 19, No. 3, 2021) Rauschenberger, Maria; Baeza-Yates, Ricardo
    When discussing interpretable machine learning results, researchers need to compare them and check their reliability, especially for health-related data. The reason is the negative impact of wrong results on a person, such as a wrong prediction of cancer, an incorrect assessment of the COVID-19 pandemic situation, or a missed early screening for dyslexia. Often, only small datasets exist for these complex interdisciplinary research projects. Hence, it is essential that this type of research understands different methodologies and mindsets, such as the Design Science Methodology, Human-Centered Design, or Data Science approaches, to ensure interpretable and reliable results. Therefore, we present various recommendations and design considerations for experiments that help to avoid over-fitting and biased interpretation of results when working with small imbalanced data related to health. We also present two very different use cases: early screening for dyslexia and event prediction in multiple sclerosis.
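One standard pitfall with small imbalanced data, which the abstract's recommendations guard against, is that naive cross-validation folds may contain no minority-class samples at all. A minimal sketch of stratified fold assignment (a generic technique for illustration, not the paper's specific recommendations; all names are hypothetical):

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=5, seed=0):
    """Assign sample indices to k folds while preserving class proportions,
    so every fold contains minority-class examples."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)  # avoid ordering artifacts in the data
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# 20 majority vs. 5 minority samples: each of the 5 folds gets exactly
# 4 majority and 1 minority index.
labels = [0] * 20 + [1] * 5
folds = stratified_folds(labels, k=5)
```

With a plain random split, a fold of 5 samples could easily miss the minority class entirely, making per-fold metrics for that class undefined.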
  • Journal article
    Intelligent Questionnaires Using Approximate Dynamic Programming
    (i-com: Vol. 19, No. 3, 2021) Logé, Frédéric; Pennec, Erwan Le; Amadou-Boubacar, Habiboulaye
    Inefficient interaction, such as long and/or repetitive questionnaires, can be detrimental to the user experience, which leads us to investigate the computation of an intelligent questionnaire for a prediction task. Given time and budget constraints (a maximum of q questions asked), this questionnaire adaptively selects the question sequence based on the answers already given. Several use cases with improved user and customer experience are given.

    The problem is framed as a Markov Decision Process and solved numerically with approximate dynamic programming, exploiting the hierarchical and episodic structure of the problem. The approach, evaluated on toy models and classic supervised learning datasets, outperforms two baselines: a decision tree with a budget constraint and a model in which the q best features are systematically asked. The online problem, quite critical for deployment, seems to pose no particular issue under the right exploration strategy.

    This setting is quite flexible and can easily incorporate initially available data and grouped questions.
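The adaptive selection described in the abstract can be illustrated with a much simpler stand-in: instead of the paper's approximate dynamic programming policy, this sketch greedily picks the remaining question with the highest information gain over historical respondents consistent with the answers collected so far. All names are hypothetical; this illustrates adaptivity only, not the authors' method:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def next_question(rows, labels, asked, answers):
    """Pick the unasked question whose answer most reduces label entropy,
    given the answers already collected (a greedy stand-in for a learned policy)."""
    # Keep only historical respondents consistent with the answers so far.
    consistent = [i for i, r in enumerate(rows)
                  if all(r[q] == a for q, a in answers.items())]
    base = entropy([labels[i] for i in consistent])
    best_q, best_gain = None, -1.0
    for q in range(len(rows[0])):
        if q in asked:
            continue
        gain = base
        for v in {rows[i][q] for i in consistent}:
            subset = [labels[i] for i in consistent if rows[i][q] == v]
            gain -= len(subset) / len(consistent) * entropy(subset)
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q

# Toy data: question 0 perfectly predicts the label, so it is asked first.
rows = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 1, 1]
print(next_question(rows, labels, asked=set(), answers={}))  # 0
```

Unlike this myopic heuristic, the MDP formulation in the article optimizes the whole remaining question sequence under the budget of q questions.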

  • Journal article
    Explaining Review-Based Recommendations: Effects of Profile Transparency, Presentation Style and User Characteristics
    (i-com: Vol. 19, No. 3, 2021) Hernandez-Bocanegra, Diana C.; Ziegler, Jürgen
    Providing explanations based on user reviews in recommender systems (RS) may increase users’ perception of transparency or effectiveness. However, little is known about how these explanations should be presented to users, or which types of user interface components should be included in explanations, in order to increase both their comprehensibility and acceptance. To investigate such matters, we conducted two experiments and evaluated the differences in users’ perception when providing information about their own profiles, in addition to a summarized view of the opinions of other customers about the recommended hotel. Additionally, we aimed to test the effect of different display styles (bar chart and table) on the perception of review-based explanations for recommended hotels, as well as how useful users find different explanatory interface components. Our results suggest that the perception of an RS and its explanations, given profile transparency and different presentation styles, may vary depending on individual differences in user characteristics, such as decision-making styles, social awareness, or visualization familiarity.
  • Journal article
    Editorial
    (i-com: Vol. 19, No. 3, 2021) Buschek, Daniel; Loepp, Benedikt; Ziegler, Jürgen